Peter Duniho said:
No. You write the comparer. Note what I wrote: "if you were sorting
class instances directly".
Ah, sorry, maybe I read what you wrote a bit quickly, or wrote what I wrote a
bit quickly; can't remember now, I was just off to bed ...
Yes, I know. What I meant was that it's an int, so you can't add an interface
to it, at least not to my knowledge, although I've been looking through .NET
3.5 and noticed you can add pseudo member functions (extension methods);
maybe they'll add this for ints too in 5.0 ....
The binary Add was what I had written before, which did exactly what you are
mentioning; I just had to add an extra parameter for the comparer.
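For example, something like this (IsEven is just a made-up method to show the
idea); it bolts a callable method onto int, but it still can't make int
implement a new interface:

static class IntExtensions
{
    // C# 3.0 extension method: callable as if it were a member of int.
    public static bool IsEven(this int value)
    {
        return value % 2 == 0;
    }
}

// usage: bool even = 42.IsEven();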
I'm not really sure what you thought I wrote, but whether you're sorting
an array of ints or sorting an array of references to instances of a
class, you have complete control over the comparison. Whatever you meant
when you wrote "I can't do that", I'm sure you're mistaken. You can do
the sort however you like (assuming, of course, your comparison follows
the standard rules of sortability).
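For example, whether it's ints or class instances, List<T>.Sort accepts a
comparison you supply; here's a trivial descending sort of ints, just to show
the mechanism:

using System;
using System.Collections.Generic;

class SortDemo
{
    static void Main()
    {
        List<int> values = new List<int> { 3, 1, 4, 1, 5, 9 };

        // You control the ordering completely; this one sorts descending.
        values.Sort((a, b) => b.CompareTo(a));

        foreach (int v in values)
            Console.WriteLine(v);
    }
}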
That's not "far fewer".
lol
Well, that was only for a very short run; it takes so long for the longer
runs when I turn on the debug printout. A 50% reduction in my code would be
significant.
However, given that it spends only 25% of the time in my code versus the
library, and only 25% of my code's time in this function, it's no great
difference.
Still, I have to start looking somewhere first to cut down on time, although
it may very well spend more time in the library because of this function.
Each algorithm is going to be sensitive to the specific nature of the
data, and even this sensitivity varies according to the specific
implementation (well, at least for quicksort...binary search is more of a
"one way to do it" sort of thing, and so is only really sensitive to the
input data, rather than the exact algorithm).
In any case, in the context of algorithm _order_, even if the difference
were a consistent factor of 2, that's not enough to warrant treating the
algorithms as meaningfully different (especially considering
the other costs associated with the binary search/add approach). The
concept of "order" as it applies to algorithm cost analysis is very
coarse-grained, and a constant factor differentiating two algorithms that
are otherwise the same cost is irrelevant in that analysis.
That said, if that 2x difference _is_ in fact consistent, you may find a
performance gain using a binary tree. The binary tree should have a
similar number of comparisons to the binary search, but without the
overhead of an array insertion.
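For comparison, the binary search/add approach on a List<T> looks roughly
like this (a sketch, assuming a list of ints kept in ascending order), and it
shows where the array-insertion overhead comes from:

using System.Collections.Generic;

static class SortedInsert
{
    // Keeps "sorted" in ascending order. BinarySearch returns the bitwise
    // complement of the insertion index when the value isn't found.
    public static void Insert(List<int> sorted, int value)
    {
        int index = sorted.BinarySearch(value);
        if (index < 0)
            index = ~index;

        // This is the array-insertion overhead: everything after "index"
        // has to shift, so the insert itself is O(n) even though the
        // search is O(log n).
        sorted.Insert(index, value);
    }
}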
In all cases, you need to be careful to not just look at the total number
of calls to your comparison function. There are lots of other things that
can affect performance, and so you'll want to measure actual time cost
when comparing different implementations. And if you choose one
implementation over another based on this measurement, you should make
sure that a) you have used representative data, and b) the implementation
you choose doesn't have a worst-case time cost that could happen with the
data you're using.
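A minimal sketch of that kind of measurement with Stopwatch;
SortWithListSort and SortWithBinaryInsert here are just placeholders for
whatever two implementations you're comparing:

using System;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        SortWithListSort();       // placeholder: implementation A
        sw.Stop();
        Console.WriteLine("List.Sort: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        SortWithBinaryInsert();   // placeholder: implementation B
        sw.Stop();
        Console.WriteLine("Binary insert: {0} ms", sw.ElapsedMilliseconds);
    }

    static void SortWithListSort() { /* run implementation A on representative data */ }
    static void SortWithBinaryInsert() { /* run implementation B on representative data */ }
}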
An implementation that is on average twice as fast as another
implementation, but which has a real-world worst-case possibility that
changes the actual order of the algorithm (from O(n log n) to O(n^2), for
example), is _not_ an implementation you want to use. (So, for example,
don't use a binary tree if there's a possibility that the data will be
added to the collection in its sorted order, or at least do something to
detect that scenario so that you avoid re-sorting data that's already
sorted.)
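Detecting that scenario can be as cheap as one linear pass over the input
before you commit to the tree approach; a sketch, assuming the keys are
doubles:

using System.Collections.Generic;

static class SortCheck
{
    // One linear pass: if the data arrives pre-sorted, skip the
    // tree/insert approach (the degenerate case) entirely.
    public static bool IsAlreadySorted(IList<double> values)
    {
        for (int i = 1; i < values.Count; i++)
        {
            if (values[i - 1] > values[i])
                return false;
        }
        return true;
    }
}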
Yes, it's coming back to me now; a binary tree fed data in sorted order is a
nightmare, lol.
I'm getting to be an old programmer now; I've forgotten so much, it seems.
It was nice to be able to forget about low-level stuff when I moved to C#;
I think maybe I forgot a bit too much, lol.
I just didn't know what was under the hood of List<T>.Sort(), so I assumed
it wasn't very good;
it seems now I was mistaken ...
I did read somewhere that SortedList<T> and SortedDictionary<T> were very
fast;
maybe it's all the same as List<T>.
Your replies are actually very clear and helpful,
and a few other people's too; I thank you a lot.
I always seem to hurry mine,
usually because I've spent ages trying to solve a problem and
need a break because my head hurts, but think I'll just ask on
the forum and maybe I'll have an answer when I get back ..
(Am I lazy or what!)
[...]
Thanks for the help; it seems like the add-then-sort I had to start with
might be best after all.
A class reference might speed things up a bit by losing a level of
indirection,
so I'll give that a spin.
I see from your other post it didn't make a difference in time cost using
a class versus a struct. So IMHO your choice should be according to code
maintainability, and I think that a class comes out ahead in that case.
Especially since you can then have the class implement IComparable, making
it simpler to do things like sorting.
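For example (guessing at your type, which I'll call Line here), the class
version needs nothing more than this for a plain List<T>.Sort() to work:

using System;
using System.Collections.Generic;

class Line : IComparable<Line>
{
    public double Length;

    // Default ordering: by length, ascending. List<Line>.Sort() picks
    // this up with no separate comparer.
    public int CompareTo(Line other)
    {
        return Length.CompareTo(other.Length);
    }
}

// usage:
//   List<Line> lines = ...;
//   lines.Sort();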
Yes, I agree.
I don't know if you saw my other post, but I intend to try sorting the data
very crudely by using the length as an index; the maximum length can be
determined as the data is being created.
Sorting the data into bins of 1% would make little difference to the outcome,
and of course each 1% bin could then be sorted.
I did have a look on Google, and the wiki lists all the sort algorithms
nicely, but doesn't list this type.
I guess it is just a classic case of dividing the data down into smaller
chunks.
Again, I'll give it a try and keep you posted.
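Something like this is what I have in mind (essentially a bucket sort, I
think); assuming a Line class with a Length field as above, and with
maxLength coming from the creation pass:

using System;
using System.Collections.Generic;

static class LengthBins
{
    public static List<Line> BucketSortByLength(List<Line> lines, double maxLength)
    {
        // 100 bins of 1% each, indexed by relative length.
        List<Line>[] bins = new List<Line>[100];
        for (int i = 0; i < bins.Length; i++)
            bins[i] = new List<Line>();

        foreach (Line line in lines)
        {
            // Map the length to a 1% bin; clamp the maximum into the last bin.
            int bin = Math.Min(99, (int)(line.Length / maxLength * 100.0));
            bins[bin].Add(line);
        }

        // Each small bin can then be sorted properly.
        List<Line> result = new List<Line>(lines.Count);
        foreach (List<Line> bin in bins)
        {
            bin.Sort(); // uses Line's IComparable<Line>
            result.AddRange(bin);
        }
        return result;
    }
}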
The whole process is to clean up the mess of lines created by intersecting
one polyhedron with many polyhedrons;
this algorithm cleans up one surface at a time.
This particular very big list is the result of a huge 3D maze:
the square floor is intersected by many walls, which adds about 2000 lines.
This algorithm simply adds lines so the result consists of a set of convex
polygons,
by adding lines at vertices where there is an angle >180° (reflex vertices),
which amounts to 700 points.
The algorithm works through the list in length order: if it finds a line
that fixes 2 points, then it uses this fix
in preference to a line which fixes just one point with 1/4 the line length.
Of course it also has to make sure the new line doesn't cross any other
lines, which accounts for another 25% of execution time.
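The crossing check itself is just the usual 2D orientation test, roughly like
this (a generic sketch that counts proper crossings only; touching endpoints
and collinear overlaps are ignored):

struct Point2
{
    public double X, Y;
}

static class SegmentTest
{
    // z-component of the cross product (a-o) x (b-o); the sign tells
    // which side of the line o->a the point b lies on.
    static double Cross(Point2 o, Point2 a, Point2 b)
    {
        return (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);
    }

    // True if segment p1-p2 properly crosses segment q1-q2, i.e. the
    // endpoints of each segment lie strictly on opposite sides of the
    // other segment.
    public static bool Crosses(Point2 p1, Point2 p2, Point2 q1, Point2 q2)
    {
        double d1 = Cross(q1, q2, p1);
        double d2 = Cross(q1, q2, p2);
        double d3 = Cross(p1, p2, q1);
        double d4 = Cross(p1, p2, q2);
        return ((d1 > 0 && d2 < 0) || (d1 < 0 && d2 > 0))
            && ((d3 > 0 && d4 < 0) || (d3 < 0 && d4 > 0));
    }
}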
It worked well indeed for small models; I was caught out by this large
number though.
There may well be a better way of doing this altogether, but for now I at
least want to get to the bottom
of sorted lists.
Many thanks.
Colin =^.^=