C# vs C++


Willy Denoyette [MVP]

Richard Grimes said:
I did it with a Fast Fourier Transform, compiled as managed C++ and native
C++. I ran the managed C++ library once before running it through a timing
loop (this was to discount the effect of JIT compilation).

Somewhere I have the results, but they varied depending on the
optimization switches I used. The fastest time was for one of the managed
C++ builds, but only just.

Richard

Richard,

Note that the topic is C# vs. C++. If you run your FFT using C#, you will
notice a performance drop compared to managed C++ (I guess you are referring
to C++/CLI). This is the result of slightly better optimization done by
the managed C++ compiler.

Willy.
 

Richard Grimes

Willy said:
Note that the topic is C# vs. C++. If you run your FFT using C#, you
will notice a performance drop compared to managed C++ (I guess you
are referring to C++/CLI). This is the result of slightly better
optimization done by the managed C++ compiler.

OK, if you insist, however, I don't think it is necessary, because I
think the performance is determined by the runtime and not by the
Microsoft .NET compilers.

I have converted the managed C++ FFT routine over to C# and I have
repeated the tests. These results show that the unoptimised C# library
is just slower than the unoptimised C++ library, but the optimised C#
library is slightly faster than the optimised managed C++ library. The
difference between the optimised libraries is very small. This reinforces
the point in the article that the *runtime* does the optimizations, and that
the optimizations provided by the .NET compilers are marginal.

http://www.grimes.demon.co.uk/dotnet/man_unman.htm

Please don't suggest I repeat the test with VB.NET, I refuse to touch
that language. <g>

Richard
 

John Timney \( MVP \)

Can't remember where the figure came from, but I'm sure it was at a Redmond
presentation. Anyway, I'm glad you enlightened me with research, Richard.
The figures you quote will be useful in the future.

In my case, 4% for 2 billion computations does count very much; there are
extremely large numbers involved in intense mathematical computations. In
this instance I have no choice but to consider those minor percentages. I
do, however, entirely agree with your point: typically 4% really is
negligible, especially when you weigh up the benefits of turnaround time
for development.

--
Regards

John Timney
ASP.NET MVP
Microsoft Regional Director
 

Bruce Wood

I'm impressed by your benchmarks... really. However, keep in mind that
you're comparing managed C++ versus unmanaged C++, while Redmond has
always claimed that managed C++ is faster than C# due to better
optimization done by the managed C++ compiler.

That said, the last word I heard is that starting with .NET 2.0 they
would be concentrating their efforts on introducing better optimization
at the JIT stage, so any differences between the two managed languages
should diminish in the future.
 

Willy Denoyette [MVP]

Richard Grimes said:
OK, if you insist, however, I don't think it is necessary because I think
the performance is determined by the runtime and not by the Microsoft .NET
compilers.
Agreed, the differences between C# and managed C++ (I've compiled to managed
code using /clr:safe) are extremely small; here it's C# which is the fastest,
by ~5% compared with managed C++.

[my results using v2.0]
Average for native code: 9780 µseconds
Average for managed C++ code: 10820 µseconds
Average for managed C# code: 10272 µseconds

Other benchmarks I ran show the same pattern: sometimes C# is faster,
sometimes it's slower, but the deltas remain within the +/-5% range. It's
also great to see the difference between native and managed getting smaller
from v1.x to v2.
I have converted the managed C++ FFT routine over to C# and I have
repeated the tests. These results show that the unoptimised C# library
is just slower than the unoptimised C++ library, but the optimised C# is
slightly faster than the optimised managed C++ library. The difference
between the optimised libraries is very small. This reinforces the point in
the article that the *runtime* does the optimizations, and that the
optimizations provided by the .NET compilers are marginal.

Agreed.

http://www.grimes.demon.co.uk/dotnet/man_unman.htm

Please don't suggest I repeat the test with VB.NET, I refuse to touch that
language. <g>

LOL.


Willy.
 

riscy

Hi Richard

I'm working on an FFT project (from scratch) for education purposes. I just
created a nice time-domain waveform window and a spectrum window, and am
running a DFT at the moment. I would be interested to learn more about your
FFT experience in the C# world.

Have you used FFTW (the Fastest Fourier Transform in the West)? I was
wondering how you managed to set up the interop services for it, since it is
written in C.

Is there a way of keeping track of this thread? It does not seem to
forward an email when a new message is added.

Regards
 

Richard Grimes

I'm working on an FFT project (from scratch) for education purposes. I
just created a nice time-domain waveform window and a spectrum window,
and am running a DFT at the moment. I would be interested to learn more
about your FFT experience in the C# world.

Not very much <g> If you had asked me that 12 years ago I would have been
able to tell you *everything* about the routine and the theories current at
the time. But now, all I know is that it is a computationally intensive
algorithm.
Have you used FFTW (the Fastest Fourier Transform in the West)? I was
wondering how you managed to set up the interop services for it, since it
is written in C.

I haven't used it, but the procedure is not a problem. The project on my
site shows three approaches.

1) Identify the 'main' entry point in the library, add extern "C"
__declspec(dllexport) to it, and compile the library as a DLL. Then
use platform invoke to call it from C#.

2) Identify the 'main' entry point, make it a public static method of a
public type, and compile it as managed C++. Access the assembly as you
would access any other assembly from C#.

3) Convert the entire code to C#. This is perhaps the least attractive
option, but if the code does not use the CRT (maths routines excepted),
does not use pointers, and only uses array syntax, then it should not be
too difficult. The conversion of the project on my site took about 15
minutes. Pay particular attention to long: in C this is 32 bits, in C#
it is 64 bits.

Richard
 

Guest

Regarding performance, I would hope that floating point (i.e. float, double)
math operations in .NET will still use the math hardware in the CPU. I.e. the
runtime layer, even though not native, will still access hardware facilities
as needed?
 

Jon Skeet [C# MVP]

Greg said:
Regarding performance, I would hope that floating point (i.e. float, double)
math operations in .NET will still use the math hardware in the CPU. I.e. the
runtime layer, even though not native, will still access hardware facilities
as needed?

It will certainly still do the operations in hardware - but whether the
JIT has long enough to work out whether it can parallelise operations
using SSE or the like is a different matter. I wouldn't like to
speculate on whether it does or not, but that's an area a static
compiler *might* get an advantage in. (That's not to say that any such
advantage will stay around forever, of course.)
 

Guest

Thank you.
It will certainly still do the operations in hardware...

Isn't this another example of why the fuss about even native C++ performance
over C# is overrated? I.e. it still comes down to hardware for a lot of the
number crunching and graphics display.
 

Jon Skeet [C# MVP]

Greg said:
Isn't this another example of why the fuss about even native C++ performance
over C# is overrated? I.e. it still comes down to hardware for a lot of the
number crunching and graphics display.

Well, not entirely. If a native compiler can use parallelism in a way
that the JIT can't, the performance difference could be enormous. It's
unlikely to affect most applications, however.
 

Lloyd Dupont

Theoretically speaking, JITted software could even be faster, because:
1. it is, after all, fully compiled (by the JIT) as well
2. the JIT could optimize the code for the current hardware.

In practice this isn't the case, because some CPU/memory-hungry optimizations
are done by the native compiler but not by the JIT.
But, in fact, unoptimized native C++ could be much slower than C# code.
For reference, see this article by Richard Grimes:
http://www.grimes.demon.co.uk/dotnet/man_unman.htm

--
Regards,
Lloyd Dupont

NovaMind development team
NovaMind Software
Mind Mapping Software
<www.nova-mind.com>
 

Guest

I have converted the managed C++ FFT routine over to C# and I have

Bingo! This tells me right here that for .NET there is no good reason to use
C++ if one is more productive in C#. Thank you, Richard and Willy.
 

Guest

Can anyone answer my question?


In C#, one can use
new Object [,] {{"name","=","Yuje"}, {"ID","=","1"}}
to represent a "where" clause in a SQL select. But what about in C++?

Can we say

Object *obj[] = {{"name","=","Yuje"}, {"ID","=","1"}} ?
What is the corresponding C++ for this C# code?
 
