I wasn't expecting that!

Marchel

For a long time I was a big fan of Borland C++ Builder with the VCL framework and never gave Microsoft products a second look after seeing MFC. Anyway, recently Borland decided out of the blue to abandon its C++ Builder, so I decided to take a look at the .NET C++ and .NET C# products. After briefly playing with them I put them to the test against each other on some floating point tasks. The outcome might surprise you. It certainly surprised me!

See for yourself:

http://mywebpages.comcast.net/marchel/en/benchmark/benchmark_01.htm
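
In short, the test is a Leibniz-style series for pi: four double additions and two double divisions per loop, printing the running estimate and elapsed time every ten million iterations. A minimal sketch of that kind of loop (illustrative only - the actual benchmark source is on the page, and the paired-term form and clock()-based timing here are assumptions):

#include <cstdio>
#include <ctime>

int main()
{
    // pi = 4 * sum over k of ( 1/(4k+1) - 1/(4k+3) ): per iteration this is
    // two divisions and four additions/subtractions, matching the description.
    double sum = 0.0, d1 = 1.0, d2 = 3.0;
    std::clock_t start = std::clock();
    for (long n = 1; n <= 90000000L; ++n)
    {
        sum += 1.0 / d1 - 1.0 / d2;
        d1 += 4.0;
        d2 += 4.0;
        if (n % 10000000L == 0)
            std::printf("%ld %.13f %.3f\n", n, 4.0 * sum,
                        double(std::clock() - start) / CLOCKS_PER_SEC);
    }
    return 0;
}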

Jack
 
Carl Daniel [VC++ MVP]

Marchel said:
For a long time I was a big fan of Borland C++ Builder with the VCL
framework and never gave Microsoft products a second look after
seeing MFC. Anyway, recently Borland decided out of the blue to
abandon its C++ Builder, so I decided to take a look at the .NET
C++ and .NET C# products. After briefly playing with them I put them
to the test against each other on some floating point tasks. The outcome
might surprise you. It certainly surprised me!

See for yourself:

http://mywebpages.comcast.net/marchel/en/benchmark/benchmark_01.htm

R:\>cl -O2 -EHs -Op fpbench1.cpp
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.3077 for 80x86
Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.

fpbench1.cpp
Microsoft (R) Incremental Linker Version 7.10.3077
Copyright (C) Microsoft Corporation. All rights reserved.

/out:fpbench1.exe
fpbench1.obj

R:\>cl -O2 -EHs -Op fpbench2.cpp
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.3077 for 80x86
Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.

fpbench2.cpp
Microsoft (R) Incremental Linker Version 7.10.3077
Copyright (C) Microsoft Corporation. All rights reserved.

/out:fpbench2.exe
fpbench2.obj

R:\>fpbench1
10000000 3.1415927035898 0.250
20000000 3.1415926785905 0.500
30000000 3.1415926702568 0.735
40000000 3.1415926660896 0.984
50000000 3.1415926635893 1.250
60000000 3.1415926619226 1.485
70000000 3.1415926607317 1.734
80000000 3.1415926598387 1.984
90000000 3.1415926591444 2.235

R:\>fpbench2
10000000 3.1415927035898 0.250
20000000 3.1415926785905 0.500
30000000 3.1415926702568 0.750
40000000 3.1415926660896 1.000
50000000 3.1415926635893 1.234
60000000 3.1415926619226 1.500
70000000 3.1415926607317 1.735
80000000 3.1415926598387 2.000
90000000 3.1415926591444 2.250

This code appears to be very sensitive to the -Op "Improve floating point
consistency" option. In fact, when I compiled your two samples without -Op,
I got times of 0.000 reported for every case!
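
I haven't dug into the cause, but a classic way a benchmark reports 0.000 is the compiler discarding work whose result it can prove is never used, or the elapsed-time arithmetic itself getting folded away. A defensive harness along these lines would rule that out - the volatile sink and integer tick handling are additions of mine, not part of the fpbench sources:

#include <cstdio>
#include <ctime>

int main()
{
    volatile double sink = 0.0;      // volatile write keeps the loop alive
    std::clock_t t0 = std::clock();  // keep tick counts integral until the end
    double sum = 0.0;
    for (long i = 0; i < 90000000L; ++i)
        sum += 1.0 / (2.0 * i + 1.0);
    sink = sum;                      // observe the result inside the timed region
    std::clock_t t1 = std::clock();
    // convert to seconds only once, after the timed region
    std::printf("%.3f s (check: %f)\n",
                double(t1 - t0) / CLOCKS_PER_SEC, (double)sink);
    return 0;
}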

-cd
 
Arnold the Aardvark

Marchel said:

Very interesting! But there are far more variables in choice of tools
than raw speed. Most Windows applications are not number-crunching but
spend all their time waiting for the user to do something.

Perhaps you should consider other features such as ease of use,
portability, etc. I'm not claiming that Borland would necessarily do
better - I just think your tests need a wider scope. And let's not
forget deterministic cleanup - vital for resources other than memory
IMV, and often memory, too.

It's a great pity that Borland are ditching the VCL. It has had years
to mature into an excellent library, whereas the .NET library is still,
IMHO, in its infancy. How is old Anders these days, anyway?


Arnold the Aardvark
 
Marchel

Arnold the Aardvark said:
Very interesting! But there are far more variables in choice of tools
than raw speed. Most Windows applications are not number-crunching but
spend all their time waiting for the user to do something.

Perhaps you should consider other features such as ease of use,
portability, etc. I'm not claiming that Borland would necessarily do
better - I just think your tests need a wider scope. And let's not
forget deterministic cleanup - vital for resources other than memory
IMV, and often memory, too.

It's a great pity that Borland are ditching the VCL. It has had years
to mature into an excellent library, whereas the .NET library is still,
IMHO, in its infancy. How is old Anders these days, anyway?


Arnold the Aardvark

I did not intend to test anything but speed. I agree with you that there are many other reasons to choose a particular language. In my personal experience, most engineers choose BASIC for small projects. Occasional programmers' choice of language is often driven by their own very limited personal contact with the programming world. I did the test mostly for myself but wanted to share my experience with others. In my case, you could say, I have a speed obsession :)

JM
 
Marchel

I normally don't run the compiler from the command line. Remember that this test is intended for the occasional programmer who uses the IDE and is not familiar with all the internal "secrets" of the compiler. Quite frankly, lots of people who program occasionally have no idea what "quad word" or "C calling convention" means, or how particular settings relate to the performance of the final product. The IDE has a clear option to choose either a "Debug" or "Release" version, and under the release version it has an "optimized for speed" setting. That is all one should need to know. If the Microsoft compiler has some other secrets beyond that, that is too bad for Microsoft.

The option -Op you mentioned is impossible to change in the Standard version of the package (at least in the IDE). It is grayed out. Most occasional users of the C++ package don't have the higher versions of the compiler simply because such versions are too expensive and the cost is not justified for home or light use.

In your setup without -Op there must be something wrong with the clock measurement in the program. It is simply impossible on a modern PC, even overclocked to 4 GHz, to execute four double-precision floating point additions and two double-precision floating point divisions 90,000,000 times in under 0.001 second. That would make your PC some kind of supercomputer.

JM
 
Carl Daniel [VC++ MVP]

Marchel said:
The option -Op you mentioned is impossible to change in the Standard
version of the package (at least in the IDE). It is grayed out.
Most occasional users of the C++ package don't have the higher
versions of the compiler simply because such versions are too
expensive and the cost is not justified for home or light use.

You're doing these tests with the Standard Version? That's hardly a
relevant test for doing any kind of performance benchmarking, since that
compiler has no optimizer whatsoever, and all optimizer options specified in
the IDE or command line are simply ignored.
In your setup without -Op there must be something wrong with the
clock measurement in the program. It is simply impossible on a modern
PC, even overclocked to 4 GHz, to execute four double-precision
floating point additions and two double-precision floating point
divisions 90,000,000 times in under 0.001 second. That would make
your PC some kind of supercomputer.

Actually, I think it means there's something wrong with the generated code
with -O2 but not -Op. I haven't had a chance to really look into it - the
benchmark clearly took about the same wall-clock time with and without -Op,
so I'm guessing that the calculation of elapsed time somehow got bunged up.

-cd
 
Tester

Comparing against the Standard version of the VC++ compiler is useless
because it has no meaningful optimizations.

Marchel said:
I normally don't run the compiler from the command line. Remember that this
test is intended for the occasional programmer who uses the IDE and is not
familiar with all the internal "secrets" of the compiler. Quite frankly,
lots of people who program occasionally have no idea what "quad word" or
"C calling convention" means, or how particular settings relate to the
performance of the final product. The IDE has a clear option to choose
either a "Debug" or "Release" version, and under the release version it has
an "optimized for speed" setting. That is all one should need to know. If
the Microsoft compiler has some other secrets beyond that, that is too bad
for Microsoft.

The option -Op you mentioned is impossible to change in the Standard
version of the package (at least in the IDE). It is grayed out. Most
occasional users of the C++ package don't have the higher versions of the
compiler simply because such versions are too expensive and the cost is not
justified for home or light use.

In your setup without -Op there must be something wrong with the clock
measurement in the program. It is simply impossible on a modern PC, even
overclocked to 4 GHz, to execute four double-precision floating point
additions and two double-precision floating point divisions 90,000,000
times in under 0.001 second. That would make your PC some kind of
supercomputer.
 
sashan

The option -Op you mentioned is impossible to change in the Standard version of the package (at least in the IDE). It is grayed out. Most occasional users of the C++ package don't have the higher versions of the compiler simply because such versions are too expensive and the cost is not justified for home or light use.

The Standard version has no optimizer, as Carl Daniel pointed out and as
Microsoft points out: http://msdn.microsoft.com/visualc/howtobuy/choosing.aspx

You should amend your benchmark results to say that you used the
Standard version.
 
Marchel

You're doing these tests with the Standard Version? That's hardly a
relevant test for doing any kind of performance benchmarking, since that
compiler has no optimizer whatsoever, and all optimizer options specified in
the IDE or command line are simply ignored.

As I said, it's too bad for Microsoft. I'm not a professional programmer and I cannot, like many engineers and scientists in a position similar to mine, justify the cost of more expensive versions of the software. I mentioned that clearly on the test page.

Consider this for example:

(www.borland.com)
Borland C++ Builder 6.0 (Personal) $69
Borland C++ Builder 6.0 (Professional) $999
Borland C++ Builder 6.0 (Enterprise) $2999

(www.microsoft.com)
Microsoft Visual C++ .NET (Standard) $109
Microsoft Visual C++ .NET (Professional) $1079

In fact, you can download the complete, full version of the Borland compiler (without the IDE) for free. You can also get the open source (gcc) C++ Windows compiler freely. This makes the price span even more ridiculous. I refuse to buy expensive versions of software for occasional programming sessions. My money-making job has nothing to do with computer programming.
Actually, I think it means there's something wrong with the generated code
with -O2 but not -Op. I haven't had a chance to really look into it - the
benchmark clearly took about the same wall-clock time with and without -Op,
so I'm guessing that the calculation of elapsed time somehow got bunged up.

And that makes sense. On my Athlon XP 1800+ (similar to your timing), 90,000,000 loops of four additions and two divisions took roughly 2 seconds. For the sake of simplicity, assuming that an addition and a division take the same time, that means six general math operations per loop, or about 540,000,000 math operations in two seconds - 270,000,000 operations per second. The Athlon XP 1800+ runs at 1533 MHz, so in fact it did each operation in about 5.7 clock cycles, which is pretty good. You can't massively improve on this. That is why the differences between Borland, C# and VC++ were not that big. All those compilers are in fact pretty good.


JM
 
Marchel

To answer your reservations:

I seriously doubt that optimization will help in this particular test. The floating point math is done in the FPU by hardware. It will probably help with the controversy of one style of coding running almost twice as fast as the other, which I still consider ridiculous. I don't own a professional version of any of the compilers that I tested.

I did mention that the versions of all three compilers used in the test are the most stripped-down, cheapest versions available on the market. This is stated in section "2. Operating system and languages choice" of my test.

JM
 
Brandon Bray [MSFT]

Marchel said:
As I said, it's too bad for Microsoft. I'm not a professional programmer
and I cannot, like many engineers and scientists in a position similar
to mine, justify the cost of more expensive versions of the software. I
mentioned that clearly on the test page.

While I completely agree that it is unfortunate an optimizing version of
Visual C++ is not freely available, the conclusions drawn in the writeup do
not reflect that problem at all.

For example, the discussion on "Microsoft Visual C++ problem" is solely
demonstrable in non-optimized debug builds. The debug compiler allocates
variables when it sees them, which does make a difference in this case. The
optimizer, on the other hand, performs rather standard lifetime
analyses, which result in exactly the same code generated for both.

In the end, the writeup makes the conclusion, "And the looser (sic) is ...
Microsoft Visual C++ 7.0 (Win32 MFC version)." FYI, every version of Visual
C++ includes MFC. Because the conclusion is based on a non-optimizing
compiler, some mention of that fact is necessary. In reality, if Visual C#
is rated higher, it should be suspicious that Visual C++ rated so low for
two reasons: (1) Visual C++ has a far more sophisticated optimizer and thus
memory allocation issues are the only class that would possibly make C#
faster than a native C++ program, and (2) Visual C++ can produce the same
MSIL that gets Just-in-Time compiled as Visual C#, thus they both would rank
at the same level.
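
For reference, producing MSIL from Visual C++ is a compiler switch: building the same source with /clr emits intermediate language rather than native code, along these lines (the exact options accepted by the 2002/2003 toolsets may vary):

R:\>cl /clr /O2 fpbench1.cpp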

The good news is that in the future, we are including the optimizing
compiler in all SKUs. This should address the concern of availability.

I hope this information helps!
 
Marchel

While I completely agree that it is unfortunate an optimizing version of
Visual C++ is not freely available, the conclusions drawn in the writeup do
not reflect that problem at all.

For example, the discussion on "Microsoft Visual C++ problem" is solely
demonstrable in non-optimized debug builds. The debug compiler allocates
variables when it sees them, which does make a difference in this case. The
optimizer, on the other hand, performs rather standard lifetime
analyses, which result in exactly the same code generated for both.

I agree that this is probably the case. It is still disappointing that competitors' versions of C++, and even Microsoft's own C# Standard version, do not behave so badly depending on the style in which code is written. For me it was a disappointment. I understand that you are trying to "excuse" Microsoft, but the fact remains that the Standard version is inferior compared to the competition. Since the large majority of users will buy the Standard version, this test will help them make up their minds and clearly understand the restrictions they are getting into.
In the end, the writeup makes the conclusion, "And the looser (sic) is ...
Microsoft Visual C++ 7.0 (Win32 MFC version)." FYI, every version of Visual
C++ includes MFC.

That was an unfortunate choice of words on my side. By MFC I meant the old "style" that produces fully executable code capable of running on every version of Windows. The .NET version of Visual C++ produces intermediate code that requires the JIT to be installed. I was under the impression that these are not the same versions of the code. You can see that in test #3, in which the Visual C++ .NET console version outran the Visual C++ Win32 console version of the same code by several percent. Both versions were produced using exactly the same issue of Visual Studio.
Because the conclusion is based on a non-optimizing
compiler, some mention of that fact is necessary. In reality, if Visual C#
is rated higher, it should be suspicious that Visual C++ rated so low for
two reasons: (1) Visual C++ has a far more sophisticated optimizer and thus
memory allocation issues are the only class that would possibly make C#
faster than a native C++ program,

I disagree. In test #2, in which Visual C++ simply stinks, all you use is library functions. It seems that the C# math library is much better written than the C++ one. None of the three tests uses any extensive memory allocation that would change the outcome significantly; I avoided allocating memory on purpose. The Visual C++ Win32 console version runs really slowly compared to C# (and presumably .NET C++, which should equal C#). I suspect that the math run-time libraries used with the Win32 console are not as good as the ones in the .NET Math class. Borland is even worse here. How much "optimization" can you do with an optimizing compiler if you are still going to use bad libraries?
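
To isolate the library question, one can time the math call by itself. A sketch of such a micro-test (my own construction for illustration, not one of the three benchmarks from the page):

#include <cmath>
#include <cstdio>
#include <ctime>

int main()
{
    double sum = 0.0;
    std::clock_t t0 = std::clock();
    for (long i = 1; i <= 10000000L; ++i)
        sum += std::sqrt((double)i);   // run time dominated by the sqrt call
    std::clock_t t1 = std::clock();
    std::printf("%f in %.3f s\n", sum,
                double(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}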
and (2) Visual C++ can produce the same
MSIL that gets Just-in-Time compiled as Visual C#, thus they both would rank
at the same level.

And they do, when you actually use a .NET console application in Visual C++. If you use a Win32 console application, Visual C++ loses to C#, at least in its Standard version. That is exactly what my test shows. I also mentioned on the web page that Microsoft Visual C++ .NET is one of the winners. Only the Win32 version in its Standard edition is considered by me to be a loser. For some people this distinction is important. The corporation I work for has PCs equipped with the Win2000 version of Windows, which does not have .NET installed. Therefore I cannot write programs that use the .NET versions of the languages, and I'm left to choose between Borland C++ Builder and Visual C++ Win32.
The good news is that in the future, we are including the optimizing
compiler in all SKUs. This should address the concern of availability.

I hope this information helps!

Excellent move!

JM
 
Carl Daniel [VC++ MVP]

Marchel said:
To answer your reservations:

I seriously doubt that optimization will help in this particular
test.

And yet the test results that I posted prove just that case: when compiled
with the optimizing compiler, your two flavors of code have nearly identical
performance. I didn't check myself, but I believe Brandon Bray indicated
that they in fact generate identical machine code. The conclusions you
present on the web site are misleading at best, and highly suspect in any
case. If nothing else, I see no justification for the leap from "benchmark 2
was faster than benchmark 1" to "I'll do all my coding in this style" based
on a single data point of un-optimized code.

-cd
 
Jack

Carl Daniel said:
And yet the test results that I posted prove just that case: when compiled
with the optimizing compiler, your two flavors of code have nearly identical
performance.

And I don't dispute that. I believe that the controversy of treating
two different styles of programming with almost double the time penalty
for one style over the other is an issue of non-optimized code. I
consider that ridiculous, but it is beyond the issue here. Keep in mind
that once I realized the problem, I used the coding style that generates
the faster code for the comparison tests. I still challenge the opinion
that optimizing my very simplistic code would change MS VC++ in any
significant way or help the outcome of the tests against C# or BCB in
any meaningful way. VC++ was especially bad in test #2, where it lost by
a large margin on the run-time library math functions. Could you explain
what makes VC++ take more than twice as long as C# to execute the square
root function? That function is already compiled in a DLL library, and
the optimizer would do squat here. It may be that optimization would put
VC++ in a better place in test #3, but in that test the difference
between the best (Borland) and the worst (VC++) was within 10%, which is
not a big deal.
I didn't check myself, but I believe Brandon Bray indicated
that they in fact generate identical machine code. The conclusions you
present on the web site are misleading at best, and highly suspect in any
case. If nothing else, I see no justification for the leap from "benchmark 2
was faster than benchmark 1" to "I'll do all my coding in this style" based
on a single data point of un-optimized code.

Absolutely not misleading. The majority of occasional programmers will
buy the Standard version of the software. The enigmatic "non-optimizing
compiler" does not make it very clear to those buyers what they are
getting into. Before I ran those tests I would never have suspected
that it would mean almost doubling the execution time if the programmer
used the old "C" coding style where all the declarations are put at the
top of the function, as sketched below. I consider my test to be a very
good warning for those people who, like me, were as yet unaware of
this. I would probably have skipped the purchase of the product if I
had known beforehand that it behaves like this. My experience with
Borland was that the Standard version (which Borland calls Personal)
was very decent but lacked many optimizing tools and components
available in the higher versions, and had a restrictive license. But
the Borland compiler never behaved so badly depending on coding style.
Neither does Microsoft C# Standard, for that matter. You guys are lucky
to have the optimizing version of the compiler, and that's good for
you. But there are only a handful of people ready to invest $1000 in
software used sparingly. For those folks my results are not misleading
at all. They are in fact a perfect warning to look into competitors'
products and competing languages, such as Microsoft's own C#.
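
For the record, the two styles at issue look roughly like this (a sketch of the pattern only, not the actual fpbench1/fpbench2 sources):

// Style 1: old "C" style, all declarations at the top of the function.
// Under the non-optimizing Standard compiler this ran almost twice as slow.
double bench1()
{
    double a, b, sum;
    long i;
    sum = 0.0;
    for (i = 0; i < 90000000L; ++i)
    {
        a = 1.0 / (4.0 * i + 1.0);
        b = 1.0 / (4.0 * i + 3.0);
        sum += a - b;
    }
    return 4.0 * sum;
}

// Style 2: declarations at the point of first use.
double bench2()
{
    double sum = 0.0;
    for (long i = 0; i < 90000000L; ++i)
    {
        double a = 1.0 / (4.0 * i + 1.0);
        double b = 1.0 / (4.0 * i + 3.0);
        sum += a - b;
    }
    return 4.0 * sum;
}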

JM
 
Guest

You're doing these tests with the Standard Version? That's hardly a
relevant test for doing any kind of performance benchmarking, since that
compiler has no optimizer whatsoever, and all optimizer options specified in
the IDE or command line are simply ignored.
Does someone have the Professional version to repeat this test?
I am not really surprised by this, since one day in the future .NET code
will surpass conventional exe code because of improved JIT compilers
optimized for the latest processors, while a conventional exe stays frozen
at whatever the compiler and its service pack produced at the moment of
compilation.

It surprises me that it might already be the case today.

I think you will find far more "Standard" versions of VC++ than
Professional versions in the scientific area, so I guess this test is
interesting for scientists who have limited resources. "Standard" C#
clearly outperforms "Standard" C++ in this test.

Comparing Standard C# with "Professional" C++ would be like comparing
the speed of a Porsche to a normally priced car.
 
Carl Daniel [VC++ MVP]

Jack said:
Absolutely not misleading. The majority of occasional programmers will
buy the Standard version of the software. The enigmatic "non-optimizing
compiler" does not make it very clear to those buyers what they are
getting into. Before I ran those tests I would never have suspected
that it would mean almost doubling the execution time if the programmer
used the old "C" coding style where all the declarations are put at the
top of the function. I consider my test to be a very good warning for
those people who, like me, were as yet unaware of this. I would
probably have skipped the purchase of the product if I had known
beforehand that it behaves like this.

Your conclusions:

<quote>

Microsoft Visual C++ in the old MFC version is simply a piece of junk. It
performed poorly in most of the tests.
....
Microsoft Visual C++ 7.0 (Win32 MFC version).
It was unexpected to see, how this software, considered by many to be
professional standard of Windows programming, failed to perform well in most
of the tests that I run.
</quote>

First, there is no such product. The average reader, unless they study your
site thoroughly, is going to read this as "Visual C++" (in general), not
"Visual C++ 7.0 Standard Edition targeting Win32 executables".

Second, the product you tested (not identified here, but only in a small
table on a distant page) is not considered to be the professional standard
of Windows programming by anyone at all. I understand that you're
disappointed in the performance of Visual C++ Standard Edition in your
tests, but this "conclusion" is nothing more than ignorant Microsoft
bashing.
My experience with Borland was that the Standard version (which
Borland calls Personal) was very decent but lacked many optimizing
tools and components available in the higher versions, and had a
restrictive license. But the Borland compiler never behaved so badly
depending on coding style. Neither does Microsoft C# Standard, for
that matter. You guys are lucky to have the optimizing version of the
compiler, and that's good for you. But there are only a handful of
people ready to invest $1000 in software used sparingly. For those
folks my results are not misleading at all. They are in fact a perfect
warning to look into competitors' products and competing languages,
such as Microsoft's own C#.

That may be. On the other hand, I wouldn't expect the casual programmer to
be terribly concerned about performance of their code. Most likely, they're
going to write such badly organized code that any improvement in compiler
optimization will be totally swamped by poor algorithm choices, poor data
structure choices, and so on.

There's a valid conclusion that your article can reach: If you're
interested in performance, don't use Visual C++ Standard Edition to generate
Win32 code. That's just a bit less inflammatory than saying it's "a piece of
junk", which it is not.

-cd
 
Guest

His test was intended for scientists and engineers!
Not for the typical database and Web programmer!
That may be. On the other hand, I wouldn't expect the casual programmer to
be terribly concerned about performance of their code. Most likely, they're
going to write such badly organized code that any improvement in compiler
optimization will be totally swamped by poor algorithm choices, poor data
structure choices, and so on.

I doubt that. Many science students will try to get the best performance
out of their code.
And they don't have the money to buy the Professional version.
There's a valid conclusion that your article can reach: If you're
interested in performance, don't use Visual C++ Standard Edition to generate
Win32 code. That's just a bit less inflammatory than saying it's "a piece of
junk", which it is not.

I agree, it should not be classified as junk, since it probably can do
other things that C# would lose at.

But he could also give the following conclusion:
"For Scientists and Engineers on a tight budget, C# is clearly the winner in
this test".

Then again, we now have VC++ 2003; that might outperform C# again. He
compared against the 2002 version.
 
Bo Persson

His test was intended for Scientists and Engineers!
Not the typical database and Web programmer!


I doubt that. Many science students will try to get the best performance
out of their code.
And they don't have the money to buy the Professional version.

And they don't have to!

If you really are a student, you can often get an Academic License, which
gives you the Professional version for about the same price as the Standard
Edition!

The Standard Edition is good for learning programming, as it runs in debug
mode all the time - which is just fine for that purpose.

You shouldn't benchmark it, though - I have programs that run 100 to 1000
times slower in debug mode...

I agree, it should not be classified as junk, since it probably can do
other things that C# would lose at.

But he could also give the following conclusion:
"For Scientists and Engineers on a tight budget, C# is clearly the winner in
this test".

Yes, "in this test". :)


Bo Persson
 
Marchel

Does someone have the Professional version to repeat this test?
I am not really surprised by this, since one day in the future .NET code
will surpass conventional exe code because of improved JIT compilers
optimized for the latest processors, while a conventional exe stays frozen
at whatever the compiler and its service pack produced at the moment of
compilation.

This is an excellent suggestion. I'm really interested to see whether the
optimizer can improve things as much as many here seem to believe. I listed
the code on the web page, so it is just a matter of copy and paste; it
should not take more than a few minutes. To make things "equal", I'm looking
for somebody who has at least VC++ Professional and C#, and who could run at
least those two and post the timing ratio between them.

JM
 
Marchel

Bo Persson said:
...
The Standard Edition is good for learning programming, as it runs in debug
mode all the time - which is just fine for that purpose.

I'm not sure what exactly you mean by this. The same code compiled in
the Standard version as "debug" runs about twice as slow as the same code
compiled with the "release" setting, so clearly there is a difference.
If your statement were true, changing this setting would have no effect
whatsoever, or the setting would not even be made available, would it?

Also, if you are right, why is it that fully optimized code compiled with
Borland or with the open source gcc runs in roughly the same time as VC++?
I have a hard time believing that VC++ can run, in debug mode, a complete
loop that includes four double floating point additions and two double
floating point divisions, on top of the loop overhead, in less than 35
clock cycles.
Yes, "in this test". :)

I'll tell you what: why don't you just repeat test #1 with "fully" optimized
code and post the results here, to let all of us know how many clock cycles
it took to run one loop, once you've made sure it runs in "release" mode
with "full optimization"?

Maybe you can prove something in a scientific, engineering way :)
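
For anyone taking this up, the cycles-per-loop figure can be estimated from clock() and the CPU frequency. A sketch (the 1533 MHz value is an assumption for an Athlon XP 1800+; substitute your own clock, and note the loop body is the illustrative paired-term form, not necessarily the exact fpbench source):

#include <cstdio>
#include <ctime>

int main()
{
    const double cpu_hz = 1533e6;    // assumed Athlon XP 1800+ clock
    const long loops = 90000000L;
    double sum = 0.0, d1 = 1.0, d2 = 3.0;
    std::clock_t t0 = std::clock();
    for (long i = 0; i < loops; ++i)
    {
        sum += 1.0 / d1 - 1.0 / d2;  // two divisions, four additions per loop
        d1 += 4.0;
        d2 += 4.0;
    }
    double secs = double(std::clock() - t0) / CLOCKS_PER_SEC;
    std::printf("pi ~ %.13f, %.1f cycles per loop\n",
                4.0 * sum, secs * cpu_hz / loops);
    return 0;
}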

JM
 
