ArrayList without Boxing and UnBoxing

Peter Olcott

Bill Butler said:
Is your current App running so close to the edge that you cannot afford any
spare clock cycles?

It's real-time; every clock cycle counts.
Doing a quick proof of concept shouldn't be THAT time-consuming. Perhaps you
really can't afford the overhead, but you will never know if you don't do a
test.

It requires buying a $500.00 compiler and learning .NET and C#. This is far too
much commitment when this can simply be a question answered by someone that
already knows the answer. In theory there is no reason why .NET generics need
to be any slower at all than std::vector on read access. In theory there is no
reason why C# must be any slower than C++. Sometimes theory and practice do not
correspond.
 
Ben Newsam

In theory there is no reason why .NET generics need
to be any slower at all than std::vector on read access. In theory there is no
reason why C# must be any slower than C++. Sometimes theory and practice do not
correspond.

I am only learning the beginnings of C#, but even I recognize that it
is bound to be slower than C or C++. I am willing to allow that as
long as I get something back in return. I am still hopeful about that,
but the jury is still out...
 
Arne Vajhøj

Peter said:
It requires buying a $500.00 compiler and learning .NET and C#. This is far too
much commitment when this can simply be a question answered by someone that
already knows the answer. In theory there is no reason why .NET generics need
to be any slower at all than std::vector on read access. In theory there is no
reason why C# must be any slower than C++. Sometimes theory and practice do not
correspond.

The C# compiler is free.

Actually my tests showed the same overhead for read from C# List<int>
and C++ vector<int>. About 50% for both !

Arne
 
Peter Olcott

Ben Newsam said:
I am only learning the beginnings of C#, but even I recognize that it
is bound to be slower than C or C++. I am willing to allow that as
long as I get something back in return. I am still hopeful about that,
but the jury is still out...

.NET and C# do have an excellent design, and probably will be able to provide
the same performance as unmanaged native code eventually, if they do not already
do so. I need to know how well they do this now. The official word from
Microsoft is that managed C++ is faster than managed C#.
 
Peter Olcott

Arne Vajhøj said:
The C# compiler is free.

Actually my tests showed the same overhead for read from C# List<int>
and C++ vector<int>. About 50% for both !

Arne

integer array: 0,046875
object array: 1,515625
32-fold more time?

How does this compare when we are testing read access between an integer array
and the fastest way that .NET can provide integer array access, generics?
 
Arne Vajhøj

Peter said:
integer array: 0,046875
object array: 1,515625
32-fold more time?

How does this compare when we are testing read access between an integer array,
and the fastest way that .NET can provide integer array access, generics?

You are looking at the combined store and retrieve numbers.

Split up, the numbers were:

                  save   retrieve
integer array :   0,08   0,03
object array  :   3,05   0,08
array list    :   4,67   0,16
generic list  :   0,31   0,05

Arne
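
A minimal C# sketch (not Arne's actual benchmark, which was not reposted in
this thread) of how such store/retrieve timings could be reproduced, using
System.Diagnostics.Stopwatch and an assumed count of ten million elements:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;

class StoreRetrieveBench
{
    const int N = 10000000;   // assumed element count

    static void Main()
    {
        Stopwatch sw;
        long sum;

        // integer array: no boxing, direct element access
        sw = Stopwatch.StartNew();
        int[] arr = new int[N];
        for (int i = 0; i < N; i++) arr[i] = i;
        Console.WriteLine("int[]     store    {0:F2}", sw.Elapsed.TotalSeconds);
        sw = Stopwatch.StartNew();
        sum = 0;
        for (int i = 0; i < N; i++) sum += arr[i];
        Console.WriteLine("int[]     retrieve {0:F2}", sw.Elapsed.TotalSeconds);

        // ArrayList: every int is boxed on Add and unboxed on read
        sw = Stopwatch.StartNew();
        ArrayList al = new ArrayList();
        for (int i = 0; i < N; i++) al.Add(i);          // boxing
        Console.WriteLine("ArrayList store    {0:F2}", sw.Elapsed.TotalSeconds);
        sw = Stopwatch.StartNew();
        sum = 0;
        for (int i = 0; i < N; i++) sum += (int)al[i];  // unboxing
        Console.WriteLine("ArrayList retrieve {0:F2}", sw.Elapsed.TotalSeconds);

        // List<int>: generic list, stores ints directly, no boxing
        sw = Stopwatch.StartNew();
        List<int> gl = new List<int>();
        for (int i = 0; i < N; i++) gl.Add(i);
        Console.WriteLine("List<int> store    {0:F2}", sw.Elapsed.TotalSeconds);
        sw = Stopwatch.StartNew();
        sum = 0;
        for (int i = 0; i < N; i++) sum += gl[i];
        Console.WriteLine("List<int> retrieve {0:F2}", sw.Elapsed.TotalSeconds);

        Console.WriteLine(sum);  // keep the read loops from being optimized away
    }
}

The ordering of the results from a sketch like this (raw array fastest,
ArrayList slowest on store, List<int> close to the raw array) should roughly
match Arne's table, though the absolute numbers depend on machine and runtime.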
 
Arne Vajhøj

Ben said:
I am only learning the beginnings of C#, but even I recognize that it
is bound to be slower than C or C++. I am willing to allow that as
long as I get something back in return. I am still hopeful about that,
but the jury is still out...

Well - that is not obvious to me.

Why should optimization at compile time be better
than optimization the first time the program is run?

Arne
 
Peter Olcott

Arne Vajhøj said:
You are looking at the combined store and retrieve numbers.

Split up, the numbers were:

                  save   retrieve
integer array :   0,08   0,03
object array  :   3,05   0,08
array list    :   4,67   0,16
generic list  :   0,31   0,05

Arne

I did not understand these numbers:
(1) Is the comma intended to take the place of a decimal point, or are there
four different numbers per line?
(2) What are these numbers: clocks per operation? Seconds per fixed number of
operations?
(3) How do they compare to std::vector?
 
Ben Newsam

Well - that is not obvious to me.

Why should optimization at compile time be better
than optimization the first time the program is run?

Faster. I wouldn't know about better.
 
Peter Olcott

Ben Newsam said:
Faster. I wouldn't know about better.

There are two factors: when you have the original source code, more information
is provided, so that a greater degree of changes can be made and still derive
the same result. From a marketability point of view, slow compile times affect
fewer users than slower run times, so you have more time to do a better job at
compile time. Neither of these two things is a limitation for .NET.
 
Arne Vajhøj

Peter said:
I did not understand these numbers:
(1) Is the comma intended to take the place of a decimal point, or are there
four different numbers per line?

The comma is the decimal point in the locale I am using.

(2) What are these numbers: clocks per operation? Seconds per fixed number of
operations?

The latter. But it should not really matter; only that smaller is better.
And that should be obvious from the results.

(3) How do they compare to std::vector?

As posted 8-10 posts previously:

GCC 3.2 with -O3:

                 save   retrieve
integer array    0.06   0.03
vector           0.33   0.06

BCB 5.6:

                 save   retrieve
integer array    0.06   0.03
vector           0.33   0.05

VC++ 7.1 with /Ox:

                 save   retrieve
integer array    0.06   0.03
vector           0.44   0.05
Arne
 
Arne Vajhøj

Peter said:
There are two factors: when you have the original source code, more information
is provided, so that a greater degree of changes can be made and still derive
the same result. From a marketability point of view, slow compile times affect
fewer users than slower run times, so you have more time to do a better job at
compile time. Neither of these two things is a limitation for .NET.

I think most optimizing compilers do convert source to an
intermediate form before optimizing anyway, so the source info
is lost when the optimization is done.

Arne
 
Peter Olcott

Arne Vajhøj said:
I think most optimizing compilers do convert source to an
intermediate form before optimizing anyway, so the source info
is lost when the optimization is done.

Arne

The source code itself is lost, but not the richer semantic structure that is
provided by the source.
 
Peter Olcott

Arne Vajhøj said:
The comma is the decimal point in the locale I am using.

The latter. But it should not really matter; only that smaller is better.
And that should be obvious from the results.

As posted 8-10 posts previously:

GCC 3.2 with -O3:

                 save   retrieve
integer array    0.06   0.03
vector           0.33   0.06

BCB 5.6:

                 save   retrieve
integer array    0.06   0.03
vector           0.33   0.05

VC++ 7.1 with /Ox:

                 save   retrieve
integer array    0.06   0.03
vector           0.44   0.05

Arne

So then the answer is clear: .NET without generics can be unacceptably slow,
58-fold slower than unmanaged array storage and 500% slower than unmanaged array
retrieval, but with generics it is very comparable to std::vector. I would suppose
that we could greatly speed up the std::vector storage by using resize() and
operator[]() instead of push_back(). With my time-critical processing, I know
the size in advance.
 
Arne Vajhøj

Peter said:
So then the answer is clear, .NET without generics can be unacceptably slow,
58-fold slower than unmanaged array storage and 500% slower than unmanaged array
retrieval, but, with generics very comparable to std::vector. I would suppose
that we could greatly speed up the std::vector storage by using resize(), and
operator[]() instead of push_back(). With my time critical processing, I know
the size in advance.

If you know the size then why not just allocate a good
oldfashioned array ?

Arne
 
Peter Olcott

Arne Vajhøj said:
Peter said:
So then the answer is clear, .NET without generics can be unacceptably slow,
58-fold slower than unmanaged array storage and 500% slower than unmanaged
array retrieval, but, with generics very comparable to std::vector. I would
suppose that we could greatly speed up the std::vector storage by using
resize(), and operator[]() instead of push_back(). With my time critical
processing, I know the size in advance.

If you know the size then why not just allocate a good
oldfashioned array ?

Arne

I would estimate that might not be one of the .NET best practices. My purpose
here on this forum is to evaluate the feasibility of using .NET for my screen
recognition system. It looks like your benchmarks derive a passing score for
.NET that includes generics, and a failing score for earlier versions. Thanks
for all your help.
 
Peter Olcott

Arne Vajhøj said:
Peter said:
So then the answer is clear, .NET without generics can be unacceptably slow,
58-fold slower than unmanaged array storage and 500% slower than unmanaged
array retrieval, but, with generics very comparable to std::vector. I would
suppose that we could greatly speed up the std::vector storage by using
resize(), and operator[]() instead of push_back(). With my time critical
processing, I know the size in advance.

If you know the size then why not just allocate a good
oldfashioned array ?

Arne

Oh yeah, one more question: an ArrayList can be allocated in advance, can't it?
 
Bruce Wood

Peter said:
Arne Vajhøj said:
Peter said:
So then the answer is clear, .NET without generics can be unacceptably slow,
58-fold slower than unmanaged array storage and 500% slower than unmanaged
array retrieval, but, with generics very comparable to std::vector. I would
suppose that we could greatly speed up the std::vector storage by using
resize(), and operator[]() instead of push_back(). With my time critical
processing, I know the size in advance.

If you know the size then why not just allocate a good
oldfashioned array ?

Arne

I would estimate that might not be one of the .NET best practices.

Good grief.

Your original post stated that the solution had to work on "older
versions of .NET" and implied that you required a dynamic memory
structure. Now, after mountains of back-and-forth, it turns out that
you have no problem with using .NET 2.0 and that you know the size up
front.

If you'd told us those two things at the outset it would have saved a
lot of time and effort.

If you know the size up front, use an array. Period. Dynamic structures
cost: you get what you pay for, and you pay for what you get. This has
nothing to do with .NET as such: it's just a basic tenet of computing.
My purpose here on this forum is to evaluate the feasibility of using .NET for my screen
recognition system. It looks like your benchmarks derive a passing score for
.NET that includes generics, and a failing score for earlier versions.

Only for dynamic memory structures. If you can use a fixed-size array
then any version of .NET will yield similar (and speedy) results. If
you require a dynamic structure then .NET 1.1 forces you to box and
unbox values (unless you roll your own, naturally). .NET 2.0 introduces
generics which get around the boxing issue.

But for heaven's sake, next time state the situation clearly, so that
it doesn't take 50 or so posts to arrive at a conclusion!
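
A small sketch of the boxing difference Bruce describes: with ArrayList each
int is boxed into a heap object on Add and must be cast back (unboxed) to read,
while List<int> stores the values directly.

using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        // .NET 1.1 style: ArrayList holds object references,
        // so every value type added to it gets boxed.
        ArrayList boxed = new ArrayList();
        boxed.Add(42);                // the int is boxed into a new heap object
        int a = (int)boxed[0];        // the cast unboxes it again

        // .NET 2.0 style: List<int> is backed by an int[],
        // so no boxing occurs on either store or retrieve.
        List<int> unboxed = new List<int>();
        unboxed.Add(42);              // stored directly as an int
        int b = unboxed[0];           // read directly, no cast needed

        Console.WriteLine("{0} {1}", a, b);
    }
}

The extra allocation and cast on the ArrayList path is what the object array
and array list rows in Arne's table are paying for.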
 
Arne Vajhøj

Peter said:
Oh yeah, one more question: an ArrayList can be allocated in advance, can't it?

Both ArrayList and List<> have a constructor with an int argument
specifying initial capacity.

Arne
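
For example, a short sketch (the capacity of 100000 is just an illustrative
figure):

using System.Collections;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        // Reserve room up front so the backing array is not
        // repeatedly reallocated and copied while the list is filled.
        ArrayList al = new ArrayList(100000);
        List<int> gl = new List<int>(100000);

        for (int i = 0; i < 100000; i++)
        {
            al.Add(i);   // still boxes each int
            gl.Add(i);   // no boxing
        }
    }
}

Pre-sizing avoids the grow-and-copy cost as elements are added, but it does not
remove the boxing cost of ArrayList itself.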
 
Peter Olcott

I have to top post because somehow you managed to turn quoting off.
Can allocating an ordinary array be done through the managed heap? If it cannot
be done using the managed heap, then wouldn't allocating unmanaged memory be
considered a poor practice in terms of good .NET design?

There are two different facets to my problem: one requires the array to grow
dynamically, and the other, the most time-critical one, knows its size in
advance. Even the one that is required to grow dynamically must do so
relatively quickly, and thus cannot take the boxing/unboxing overhead. I wanted
to see if I could adapt my system to .NET using my current compiler; the answer
is no. The next-level question is whether my system can be adapted to .NET at
all; the answer is yes, if I use generics.
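
A plain C# array such as new int[n] is itself allocated on the managed
(garbage-collected) heap, so using one involves no unmanaged allocation. A
minimal sketch (the size 100000 is just an illustrative figure):

using System;

class ManagedArrayDemo
{
    static void Main()
    {
        // new int[n] allocates the array on the managed (GC) heap;
        // the elements are stored unboxed inside it.
        int[] values = new int[100000];
        for (int i = 0; i < values.Length; i++)
            values[i] = i;
        Console.WriteLine(values.Length);
    }
}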


Peter said:
Arne Vajhøj said:
Peter said:
So then the answer is clear, .NET without generics can be unacceptably
slow,
58-fold slower than unmanaged array storage and 500% slower than unmanaged
array retrieval, but, with generics very comparable to std::vector. I would
suppose that we could greatly speed up the std::vector storage by using
resize(), and operator[]() instead of push_back(). With my time critical
processing, I know the size in advance.

If you know the size then why not just allocate a good
oldfashioned array ?

Arne

I would estimate that might not be one of the .NET best practices.

Good grief.

Your original post stated that the solution had to work on "older
versions of .NET" and implied that you required a dynamic memory
structure. Now, after mountains of back-and-forth, it turns out that
you have no problem with using .NET 2.0 and that you know the size up
front.

If you'd told us those two things at the outset it would have saved a
lot of time and effort.

If you know the size up front, use an array. Period. Dynamic structures
cost: you get what you pay for, and you pay for what you get. This has
nothing to do with .NET as such: it's just a basic tenet of computing.
My purpose here on this forum is to evaluate the feasibility of using .NET for
my screen
recognition system. It looks like your benchmarks derive a passing score for
.NET that includes generics, and a failing score for earlier versions.

Only for dynamic memory structures. If you can use a fixed-size array
then any version of .NET will yield similar (and speedy) results. If
you require a dynamic structure then .NET 1.1 forces you to box and
unbox values (unless you roll your own, naturally). .NET 2.0 introduces
generics which get around the boxing issue.

But for heaven's sake, next time state the situation clearly, so that
it doesn't take 50 or so posts to arrive at a conclusion!
 
