I have to top post because somehow you managed to turn quoting off.
Can an ordinary array be allocated on the managed heap? If it cannot be,
then wouldn't allocating unmanaged memory be considered poor practice in
terms of good .NET design?
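To answer my own question as far as I understand it: an ordinary C# array is
in fact allocated on the managed heap, and value-type elements are stored
directly with no boxing. A minimal sketch (the names are just illustrative):

```csharp
// An ordinary array is a managed-heap object, garbage-collected like any
// other reference type; no unmanaged allocation is involved.
int[] pixels = new int[640 * 480];   // single managed-heap allocation
pixels[0] = 42;                      // direct element access, no boxing
```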
There are two different facets to my problem: one requires the array to grow
dynamically, and the other, the most time-critical one, knows its size in
advance. Even the one that must grow dynamically has to do so relatively
quickly, so it cannot take the boxing/unboxing overhead. I wanted to see
whether I could adapt my system to .NET using my current compiler; the
answer is no. The next-level question is whether my system can be adapted to
.NET at all; the answer is yes, if I use generics.
Peter said:
Arne Vajhøj said:
Peter said:
So then the answer is clear: .NET without generics can be unacceptably slow
(58-fold slower than unmanaged array storage and 500% slower than unmanaged
array retrieval), but with generics it is very comparable to std::vector. I
would suppose that we could greatly speed up the std::vector storage by
using resize() and operator[]() instead of push_back(). With my time
critical processing, I know the size in advance.
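The same idea carries over to the .NET side of the comparison: when the size
is known in advance, presetting the capacity of a generic List<T> (or just
using a plain array) avoids the repeated reallocate-and-copy that growth
incurs, much as resize() does for std::vector. A hypothetical sketch:

```csharp
using System.Collections.Generic;

const int n = 1_000_000;

// Growing incrementally may reallocate and copy the backing
// store several times as the list expands.
var grown = new List<int>();
for (int i = 0; i < n; i++) grown.Add(i);

// Presetting the capacity performs one allocation up front,
// analogous to calling resize() on a std::vector.
var preset = new List<int>(n);
for (int i = 0; i < n; i++) preset.Add(i);
```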
If you know the size then why not just allocate a good
old-fashioned array?
Arne
I would estimate that might not be one of the .NET best practices.
Good grief.
Your original post stated that the solution had to work on "older
versions of .NET" and implied that you required a dynamic memory
structure. Now, after mountains of back-and-forth, it turns out that
you have no problem with using .NET 2.0 and that you know the size up
front.
If you'd told us those two things at the outset it would have saved a
lot of time and effort.
If you know the size up front, use an array. Period. Dynamic structures
cost: you get what you pay for, and you pay for what you get. This has
nothing to do with .NET as such: it's just a basic tenet of computing.
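The advice above is easy to make concrete. A sketch of the fixed-size case,
which works identically on any version of .NET (the names and sizes here are
just placeholders):

```csharp
// Size known up front: one allocation, direct indexed access,
// no boxing and no reallocation on any version of .NET.
double[] samples = new double[1024];
for (int i = 0; i < samples.Length; i++)
    samples[i] = i * 0.5;
```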
My purpose here on this forum is to evaluate the feasibility of using .NET
for my screen recognition system. It looks like your benchmarks yield a
passing score for .NET with generics, and a failing score for earlier
versions.
Only for dynamic memory structures. If you can use a fixed-size array
then any version of .NET will yield similar (and speedy) results. If
you require a dynamic structure then .NET 1.1 forces you to box and
unbox values (unless you roll your own, naturally). .NET 2.0 introduces
generics, which get around the boxing issue.
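The boxing difference described above can be sketched in a few lines
(ArrayList being the .NET 1.1-era container, List<T> the 2.0 generic one):

```csharp
using System.Collections;
using System.Collections.Generic;

// .NET 1.1 style: ArrayList stores object, so every int is boxed
// on insertion and must be unboxed with a cast on retrieval.
ArrayList boxed = new ArrayList();
boxed.Add(42);                 // boxing: int -> object
int a = (int)boxed[0];         // unboxing cast

// .NET 2.0 style: List<int> stores the ints directly; no boxing,
// no casts, and far less per-element overhead.
List<int> direct = new List<int>();
direct.Add(42);
int b = direct[0];
```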
But for heaven's sake, next time state the situation clearly, so that
it doesn't take 50 or so posts to arrive at a conclusion!