Managed vs Unmanaged Bare Bones Performance Test


Guest

At our company we are currently at a decision point: choosing between managed
and unmanaged code on the basis of performance. I have read about this on
various blogs and other websites, and then decided to run my own test, as I am
mainly concerned with basic performance at this point.

By basic I mean just the fundamental constructs inside the CLR, i.e. function
call cost, for loops, variable declarations, etc. Let us not consider GC,
memory allocation costs, and so on.

To my surprise, the managed code generated from my C# test lagged considerably
behind the code generated by the C++ compiler.

I was wondering if someone could take a quick look at this and tell me why
this is the case. I was under the assumption that once the JIT has run, the
CLR will give the same performance as a native C++ compiler does (as we are
talking basic stuff only: no objects, just pure language constructs and
primitive data types).

I created two sample console applications (one in C# and the other in C++).
They both call a function, passing an int by value, from inside a for loop.
Nothing happens inside the function. I used the QueryPerformance... APIs for
measurement. (Code is pasted at the bottom of this posting.)

Here are the results (release mode, run from the console, with default
settings in the IDE):

C# test for loop (50000 iterations): 0.000023931 s (23.9 microseconds)
C++ test for loop (50000 iterations): 0.000000350 s (0.35 microseconds)

So it seems the C++ compiler's output is about 20 times faster than the
managed CLR jitter's. And if I also subtract the time taken by the
QueryPerf... APIs, the difference is even greater.

Can anyone please elaborate?

Thanks
adhingra

===========================================
C# Code PROGRAM.CS
===========================================

using System;
using System.Runtime.InteropServices;

namespace ConsoleApp
{
    class Program
    {
        // P/Invoke declarations for the high-resolution performance counter.
        // Note: these APIs return a Win32 BOOL (marshaled as bool), not a short.
        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceCounter(ref long x);
        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceFrequency(ref long x);

        static long m_lStart = 0, m_lStop = 0, m_lFreq = 0;
        static long m_lOverhead = 0;
        static decimal m_mTotalTime = 0;

        static void Main(string[] args)
        {
            // get the counter frequency (ticks per second)
            QueryPerformanceFrequency(ref m_lFreq);

            // record the overhead of calling the performance counter API
            QueryPerformanceCounter(ref m_lStart);
            QueryPerformanceCounter(ref m_lStop);
            m_lOverhead = m_lStop - m_lStart;

            Console.WriteLine("Starting with a simple For Loop calling a simple function");

            QueryPerformanceCounter(ref m_lStart);
            for (int i = 0; i < 50000; i++)
            {
                Run(i);
            }
            QueryPerformanceCounter(ref m_lStop);

            long lDiff = m_lStop - m_lStart;
            Console.WriteLine(lDiff);
            // Uncomment to subtract the measured call overhead
            //if (lDiff > m_lOverhead)
            //{
            //    lDiff = lDiff - m_lOverhead;
            //}

            m_mTotalTime = ((decimal)lDiff) / ((decimal)m_lFreq);
            Console.WriteLine(m_mTotalTime);

            Console.WriteLine("Press Enter to Continue");
            Console.ReadLine();
        }

        static void Run(int i)
        {
            //Console.WriteLine(i);
        }
    }
}



===============================================
C++ Code ConsoleApp.cpp
===============================================

// ConsoleApp.cpp : Defines the entry point for the console application.
//

#include "stdafx.h"

void Run(int i)
{
    //printf("%d\n", i);
}

int _tmain(int argc, _TCHAR* argv[])
{
    LARGE_INTEGER m_start, m_stop, m_freq;
    ::QueryPerformanceFrequency(&m_freq);

    // record the overhead of calling the performance counter API
    ::QueryPerformanceCounter(&m_start);
    ::QueryPerformanceCounter(&m_stop);

    LONGLONG m_overhead = m_stop.QuadPart - m_start.QuadPart;
    m_start.QuadPart = 0;
    m_stop.QuadPart = 0;

    printf("%s\n", "Starting with a simple For Loop calling a simple function");

    ::QueryPerformanceCounter(&m_start);
    for (int i = 0; i < 50000; i++)
    {
        Run(i);
    }
    ::QueryPerformanceCounter(&m_stop);

    LONGLONG lDiff = m_stop.QuadPart - m_start.QuadPart;
    printf("%lld\n", lDiff);   // %lld, not %d, for a 64-bit value
    // Uncomment to subtract the measured call overhead
    //if (lDiff > m_overhead)
    //{
    //    lDiff = lDiff - m_overhead;
    //}

    double totalTime = ((double)lDiff) / ((double)m_freq.QuadPart);
    printf("%15.15f\n", totalTime);

    printf("%s", "Press Enter to Continue");

    int c = getchar();
    return 0;
}
 

Cowboy (Gregory A. Beamer)

You can probably speed up the C# code a bit, but it will most likely still
perform worse than the C++. So, if performance is REALLY the ONLY issue,
choose C++.

Cha-ching! Another problem solved!

Now, the real question is: is performance REALLY the ONLY issue? Most
likely, despite the protestations of the suits, the answer is no. In my
years as a consultant, I have found that buying an additional server is
almost always cheaper than hiring additional C++ programmers. Now, there are
certainly instances where performance is the ONLY issue, or where testing
reveals that a specific portion of code is too slow and needs help. In those
cases, boost the perf of that component and compile it as a library. If you
do that, you reduce the number of employees needed to maintain the code and
the number of rock stars you need in your organization.

Please note: there are instances where performance trumps all other aspects
of development. In general, however, maintenance (cost of dev team) is a lot
higher than cost to boost perf.

--
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA
http://gregorybeamer.spaces.live.com

*********************************************
Think outside the box!
*********************************************
adhingra said:
At our company we are currently at a decisive point to choose between
managed and unmanaged code on the basis of their performance.

<snip>
 

Jon Skeet [C# MVP]

adhingra said:
At our company we are currently at a decisive point to choose between managed
and unmanaged code on the basis of their performance. I have read stuff about
this on various blogs and other websites. Then I decided to take my own test
as I am more concerned with basic performance at this point.

<snip>

And yet it looks like you didn't really measure basic performance. You
measured the performance of doing *nothing*. It's quite possible that
the C++ compiler optimised out your whole loop, because it just called
a method which didn't do anything. That's not an optimisation which is
likely to have much impact on real life code.

Try making the code actually *do* something. For example, the Run
method could square the given integer and return it. Keep a total,
adding the results of each call to Run, and then print out the result
at the end. You'll still end up with all the method calls being inlined
(which again isn't particularly like real life) but it'll be more
realistic than your current test.

Basically, it's very hard to tell what microbenchmarks are *actually*
doing. I'd also suggest changing the number of iterations so that the
total time ends up in the range of seconds, not microseconds.
 

Chris Mullins [MVP]

[Snip All]

This has been done a huge number of times, by a wide variety of people. In
just about every case, it has turned out that performance is surprisingly
equal. There are occasionally some very subtle things going on, though, that
make what looks like an "apples to apples" test turn out not to be the case.

For example:
http://www.wintellect.com/cs/blogs/...versus-local-variable-access-performance.aspx

As someone who writes high-performance code for a living, I can also say
that "bad performing code" is generally a red herring. There are typically
very few spots in any given program that are truly performance bottlenecks,
and the only way to find them is to use a good profiler.
 

Peter Duniho

[...]
To my surprise the managed code I generated in my test through C# was
lagging behind to a considerable degree when compared with the code
generated by the C++ compiler.

I was wondering if someone can take a quick look at this and tell me why
is this the case.

In addition to what I think are already some pretty good comments, I'd
like to offer some other thoughts, particularly to expand on what Jon
wrote.

First, without seeing the generated code it's hard to know for sure what
you've measured. But it seems to me that at best, you've measured the
cost of a function call, when in fact what would be much more interesting
is the cost of doing actual work. At worst, you don't even have
apples-to-apples performance comparisons between the two pieces of code, due
to differences in what the compilers do with each version of the code.

Second, keep in mind that broadly there are two large classes of code:
code that mostly uses the operating system to do its work, and code that
is mostly computational in nature and so does most of its own work. The
latter is much more likely to be affected by differences in the
environment, and it is also much less common. One of the reasons
that things like Virtual PC (which runs Windows on pre-Intel Macintoshes)
and Rosetta (which runs pre-Intel Mac code on Intel Macintoshes) work so
well is that the programs being run spend very little time in the code
that needs to be translated.

Likewise, if you measure the performance difference between a task being
done in C# versus C++, but then your real-world application spends most of
its time executing code in the operating system, then the performance
differences between C# and C++ are meaningless. If your code only spends
1% or less of its time executing the code you actually wrote, and the rest
of its time either waiting on i/o (very common) or executing libraries in
the operating system (also fairly common), then even if you have a 20X
difference in performance (and frankly, I don't think a real-world
comparison would result in that great a difference), you're only really
looking at a 20% cost in the "slower" environment. There are a few
classes of applications where this sort of difference matters. For
others, it's entirely irrelevant.

I would advise making a decision on development environment based on what
seems most appropriate to your application from a design and
implementation perspective. There's no question that performance is also
important, but it's been my experience that you achieve performance in
practically any environment. Any gross performance problems are almost
always addressable simply through better coding (that is, using the
environment in the correct way). When you start trying to squeeze that
last 5-10% of performance out, then sometimes you find yourself needing to
move some of the more time-critical stuff into an environment with
less-overhead. But generally, it's almost always better to use the
environment that provides the tools you need for fastest, most efficient,
and most bug-free development.

If you are writing applications that can benefit greatly from the services
offered in the .NET Framework (and most Windows applications fall into that
class, mainly because of the vast difference between coding up a UI from
scratch under Windows and the simplicity of putting one together in
.NET), then C# and .NET are probably the right tools for the job. If .NET
doesn't provide the kinds of components that would actually be all that
useful for the kinds of operations your application needs to do, then don't
waste time with it.

I can guarantee you that the real-world performance difference between
.NET and plain-vanilla C++ Windows programming is *nowhere near* a
difference of 20X. The biggest thing I notice in my applications is
start-up time, as the .NET Framework imposes a relatively large burden
with respect to application initialization as compared to a straight
Windows application. Once an application is running, I get practically
identical performance to a similar one that might be written in plain
Windows. Of course, this is because most of the time these applications
aren't doing things that tax the .NET Framework.

Occasionally I run into issues where my code, due to inefficient design,
spends too much time in my own code rather than letting the Framework
handle things. For example, doing file i/o with buffers that are too
small. In those situations, fixes are relatively easy.

And of course, every now and then you will run across something that is
just plain costly in the .NET Framework, and which would run significantly
faster without managed code. In those relatively uncommon cases, I think
it's appropriate to shift the work to an unmanaged DLL that you call from
the managed code, allowing the heavy lifting to be done in a fast,
efficient way.

Pete
 

Egghead

Hi there,

I did my own test as well. :S
C# is a lot slower than native C++; only managed VC++ comes close to native
C++ performance. Nevertheless, that is not always true: sometimes managed
C++ will perform as badly as C#. My questions are:
(1) Is that speed really your own requirement, as others pointed out?
(2) Can you mix managed and unmanaged VC++?

--
cheers,
RL
adhingra said:
At our company we are currently at a decisive point to choose between
managed and unmanaged code on the basis of their performance.

<snip>
 

Barry Kelly

adhingra said:
At our company we are currently at a decisive point to choose between managed
and unmanaged code on the basis of their performance.

You need to find a more real-world example, where real-world consists of
one of the typical performance problems that you find in your existing
apps. That is, pick some algorithm you had difficulty optimizing, and
rewrite it in C#.
By basic I mean, just the basic stuff inside the CLR i.e. function calling
cost, for loop, variable declaration, etc. Let us not consider GC, memory
allocation costs, etc.

I'm 100% certain you're not measuring what you think you're measuring
(see below). Your code doesn't do anything, so it can be optimized away
entirely.

Make sure you use a debugger, such as WinDbg (+ SOS) to check the
generated machine code is roughly what you think it should be. Also,
it's a big mistake in a micro-benchmark situation to not actually
aggregate some result and output that result, otherwise you're at risk
of having the whole thing optimized away.

Re function calling cost, consider that simple functions are inlined in
both C# when marked to optimize ('csc /optimize+', be sure not to run
under the debugger) and in typical C++ -O2 implementations. So, for as
simple a function as you're presenting here, you're measuring something
else.

By the way, your C++ program runs in the same time whether it loops
50,000 times or 2,000,000 times, when compiled with VC++ 2005 with cl
-O2. Without further investigation, that tells me that the entire loop
was removed.

-- Barry
 

Lloyd Dupont

Also, you should execute the method once before measuring its performance in
C#. The first time a method is called it is compiled on the fly, which adds
a fixed amount of time...
 

Jon Skeet [C# MVP]

Egghead said:
Hi here,

I did my own test as well. :S
C# is a lot slower than native C++; only managed VC++ comes close to native
C++ performance.

As a general statement, that's pretty meaningless. What *exactly* did
you measure? Might your benchmark be as flawed as the one which started
this thread?
 

Egghead

I am not trying to start a war here. Go to the VC++ web site at Microsoft
and find out what others are saying. Or write a good unmanaged C++ test app
and a good C# test app; for example, open a large data file, do something
with the data, write it back, and see the result. Put it this way: if
managed code is so good, why did Microsoft write all of Vista in unmanaged
C++?
A lot of the point the community misses is that the comparison is C# vs.
Java, not C# vs. native C++.
As I said, if performance is VERY VERY important, you should use
managed/unmanaged VC++. If not, using C#/VB.NET is not a bad idea.
Hey, I am a C# developer. However, the truth is the truth.
 

Jon Skeet [C# MVP]

Egghead said:
I am not start a war here. Go to the VC++ web site at Microsoft and find out
what other saying.

Others with meaningless micro-benchmarks like the one originally posted
here?
Or, write a good unmanaged C++ test app and good C# test
app , for example open a large data file,do some thing with the data, and
write it back, see the result.

Doing that will almost certainly give very similar results between C++
and C#. Indeed, I've seen various benchmarks on various sites which
*do* show the performance being roughly equal, assuming that they've
been implemented appropriately in all cases. Why? Because IO is likely
to be the largest part of the performance bottleneck, and reading a
file from unmanaged code doesn't make the disk rotate any faster than
it does when reading a file from managed code.
Put it this way: if managed code is so good, why did Microsoft write all
of Vista in unmanaged C++?

Code that is appropriate for OS functionality isn't always the best
choice for applications.
A lot of the point the community misses is that the comparison is C# vs.
Java, not C# vs. native C++.

Not sure what you mean here.
As I said, if performance is VERY VERY important, you should use
managed/unmanaged VC++. If not, using C#/VB.NET is not a bad idea.
Hey, I am a C# developer. However, the truth is the truth.

Unless you specify what you're actually doing, it's impossible to say
whether the performance of C# will be any better or worse than
unmanaged code. I'm quite happy to accept that there *are* things where
unmanaged code will perform a lot better - particularly if you need
lots of calls to unmanaged APIs. But there are *also* plenty of
situations where the performance of managed code is just as good as
that of unmanaged code.
 

Peter Duniho

[...]
Put it this way: if managed code is so good, why did Microsoft write all
of Vista in unmanaged C++?

Code that is appropriate for OS functionality isn't always the best
choice for applications.

Though, IMHO that lends too much credence to the question. That is, while
AFAIK it's true that Vista didn't wind up using managed code, discussing
what's "appropriate" implies that the managed code environment would be
unsuitable for any OS components.

In fact, my understanding is that while it would have been perfectly
feasible to code up a large part of the OS as managed code (shell, IE,
etc.) the main reason that didn't happen is the main reason most of the
other innovations scheduled for Longhorn didn't wind up in Vista: poor
schedule management and the need to get *something* out the door.

Porting everything over to managed code would have been a monumental task,
considering everything that's actually in an operating system (Windows or
otherwise). While there would have certainly been a variety of benefits,
when you've already got a complete code base that implements most of the
functionality you want already, time in development isn't one of those
benefits. And so when time in development is the highest-priority
criterion, porting stuff becomes the last thing you want to do.

It's true that there's a fair chunk of what an OS does that should be
native and wouldn't be appropriate for managed code, for a variety of
reasons (many having nothing to do with performance). But the fact is
that there's actually lots of stuff the OS does that *can* be managed, and
would gain good benefits from it. I fully expect to see much of the
Windows components slowly transitioning over to managed code over time, as
it provides a number of clear benefits:

* security, through the reuse of code that's already been heavily
tested and scrutinized with security in mind
* consistency, by reusing the same platform that application
developers are using
* rapid implementation, for the same reasons application developers
benefit
* proof-of-concept, demonstrating once and for all that "serious
applications" can indeed be written using managed code

The fact that Vista today doesn't use any managed code doesn't in any way
detract from the usefulness of managed code, nor does it suggest there are
serious performance issues with managed code. It's much more about
development schedules and resources than it is about the quality of the
technology.

Pete
 

Egghead

I am not saying C# is second class; I mean to offend no one.
I do C# desktop apps, and enterprise software as well.
As I said, do the test yourself.
It is like the db stuff: do you use "SqlDataReader" or "SqlDataAdapter"? It
completely depends on what you need.
I am an MCSD in C# as well. I like C#. We just need to know the limits of C#
as well. In the end, C# is not a silver bullet.

--
cheers,
RL
Peter Duniho said:
[...]
Put it this way: if managed code is so good, why did Microsoft write all
of Vista in unmanaged C++?

Code that is appropriate for OS functionality isn't always the best
choice for applications.

<snip>
 

Jon Skeet [C# MVP]

Egghead said:
I am not saying C# is second class; I mean to offend no one.
I do C# desktop apps, and enterprise software as well.
As I said, do the test yourself.
It is like the db stuff: do you use "SqlDataReader" or "SqlDataAdapter"? It
completely depends on what you need.
I am an MCSD in C# as well. I like C#. We just need to know the limits of C#
as well. In the end, C# is not a silver bullet.

I never claimed that it was the silver bullet. I just think that
claiming that unmanaged code will be significantly faster than C# in
*every* situation (or claiming it to be generally true, without
specifying the situation) is misleading.
 

Jon Skeet [C# MVP]

Egghead said:
No one said native C++ is great in "every" situation.

But you *did* state it as a generality:

<quote>
C# is a lot slower than native C++; only managed VC++ comes close to
native C++ performance.
</quote>

Now C# is a lot slower than native C++ in *some* situations, but I
don't believe it's fair to say that in the general case. That's what I
was objecting to.
 

Egghead

Sorry,

I believe you cannot handle the truth here. It *is* the general case; you
can try it yourself. Only in some situations is the performance of C# as
good as native C++. As I said, go read the VC++.NET community at Microsoft,
unless you can say the performance of VC++.NET is not as good as C#'s.
Anyway, maybe in .NET 3.5 or 4.5 it will be better, just like assembly
language vs native C++: the compiler makes the C++ better.

cheers,
RL
 

Jon Skeet [C# MVP]

Egghead said:
Sorry,

I believe you cannot handle the truth here. It *is* the general case; you
can try it yourself.

I have, and I've seen plenty of *reasonable* benchmarks (unlike the one
presented in this thread) showing C# to be within 10-20% of native C++.
Most CPU-intensive benchmarks which are effectively just doing maths
fall within those bounds. Benchmarks which rely on lots of calls to
unmanaged code tend to fare significantly worse. For most applications
(which is what "the general case" means, to me) the performance
bottleneck is somewhere completely different in the first place - in
the database, in disk I/O, etc.

Almost every benchmark I've seen which *does* show C# being "a lot
slower" than unmanaged code is seriously flawed, like this one is. They
tend to present programs which take advantage of C++ compiler
optimisations which don't do anything in real applications (like the
loop hoisting one here) or which measure for small amounts of time and
include the app startup time.

In other words, not only are the majority of applications not
particularly performance critical, but in the majority of applications
the performance (aside from startup time) wouldn't be significantly
affected anyway.
Just in some situations, the performance of C# is as good as
native C++. As I said, go to read the VC++.net community at Microsoft.

Who would, of course, be completely unbiased?
Unless you can say the performance of VC++.net is not as good as C#.

I never claimed that.
Anyway, maybe in .NET 3.5 or 4.5 it will be better. Just like
assembly language vs native C++: the compiler makes the C++ better.

I don't expect it to get very much better, as I don't believe there's a
problem for most things anyway. They may well make things like
reflection significantly faster, but I don't believe there's *that*
much more performance to gain in the general case.
 

Lloyd Dupont

Speaking of performance tests:
I think I did mine properly.
Some time ago I found an implementation of something called "the Sieve of
Eratosthenes", with versions that looked good in both C# and C++.

The C# version ran as fast as (or even slightly faster than) the GCC-compiled
version.
However, the MS C++-compiled version ran twice as fast.

That was for purely integer arithmetic.

So I would say native C++ code is faster.

However, I agree that for many apps it's irrelevant, as most of the time is
spent doing IO or maybe in painting code....
 

Jon Skeet [C# MVP]

Lloyd Dupont said:
Speaking of performance tests:
I think I did mine properly.

For a single case :)

Do you have a link so we could have a look?
Some time ago I found an implementation of something called "the Sieve of
Eratosthenes", with versions that looked good in both C# and C++.
The C# version ran as fast as (or even slightly faster than) the GCC-compiled
version.
However, the MS C++-compiled version ran twice as fast.

That's very interesting.
That was for purely integer arithmetic.

Did you look at the compiled code to see where the differences were? It
would be interesting to see whether it's another specific optimisation
which is of little use in "real" code, or whether it's a genuinely
useful one.

I believe modern C++ compilers sometimes make better use of SSE etc in
some situations. With any luck the JIT will get better on that front.
That could be a major factor in some particular cases.
So I would say native C++ code is faster.

In some (even many) particular cases, yes. However, it certainly
*isn't* twice as fast (your result) in all integer arithmetic
situations, or in "the general case".
However, I agree that for many apps it's irrelevant, as most of the time
is spent doing IO or maybe in painting code....

Yes, that's arguably a separate - and much more important - point.
 
