Mixed Mode Slow?

Jos Vernon

I've been trying a mixed mode project but I'm having speed problems.

I was using a DLL and linking into it from C#. I thought I'd try and stick
the C# functionality and the raw unmanaged code into a mixed mode project.

It works but it's incredibly slow. All the optimization settings are the
same. All the code which was in the DLL is compiled with /clr off. Only
the bits which were in C# have been rewritten in Managed C++ (and I've
checked them - they're not the cause).

However it's now five times slower than it was.

Any ideas or should I just go back to the original design?

Jos
 
Tomas Restrepo (MVP)

Jos,
I've been trying a mixed mode project but I'm having speed problems.

I was using a DLL and linking into it from C#. I thought I'd try and stick
the C# functionality and the raw unmanaged code into a mixed mode project.

It works but it's incredibly slow. All the optimization settings are the
same. All the code which was in the DLL is compiled with the /clr off. Only
the bits which were in C# have been re-written in managed C++ (and I've
checked them - they're not the cause).

However it's now five times slower than it was.

Any ideas or should I just go back to the original design?

It might very well depend a lot on what you're doing. For example: is the
interface between your MC++ wrapper and the unmanaged C++ library very
chatty? If so, that will cause too many managed->unmanaged->managed
transitions, which can be costly. You can avoid this by making your
interface chunkier (less granular), and sometimes even by compiling some of
your unmanaged code with /clr.
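Tomas's point can be sketched in plain C++ (all names here - Engine, AddValues, and so on - are hypothetical; in a real mixed-mode build each call into the native object would be a managed->unmanaged transition):

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Stand-in for a native library object. In a mixed-mode build, every call
// into it from managed code is a managed->unmanaged transition.
struct Engine {
    long total = 0;
    // Chatty shape: one boundary crossing per value.
    void AddValue(int v) { total += v; }
    // Chunky shape: one crossing for the whole batch.
    void AddValues(const int* vals, std::size_t n) {
        total = std::accumulate(vals, vals + n, total);
    }
};

// N transitions for N values.
long ChattySum(const std::vector<int>& data) {
    Engine e;
    for (int v : data) e.AddValue(v);
    return e.total;
}

// One transition, same result.
long ChunkySum(const std::vector<int>& data) {
    Engine e;
    e.AddValues(data.data(), data.size());
    return e.total;
}
```

Both functions compute the same sum; the only difference is how many times the managed/unmanaged boundary would be crossed.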
 
Jos Vernon

Tomas,
It might very well depend a lot on what you're doing. For example: is the
interface between your MC++ wrapper and the unmanaged C++ library very
chatty? If so, that will cause too many managed->unmanaged->managed
transitions, which can be costly. You can avoid this by making your
interface chunkier (less granular), and sometimes even by compiling some of
your unmanaged code with /clr.

Thanks. However, I don't think this is the issue. It's a very chunky
interface and the number of transitions is very limited. I also did some
experiments along these lines, and they indicated that the problem was not
in the transitions.

The only thing I can think of is that either turning off /clr is sometimes
being ignored, or the optimizations I'm applying are being ignored. It
feels like I'm running a debug version. Very odd.

Out of interest I compiled the whole thing as managed. Very impressive that
it compiles without even a murmur. But... it was ten times slower - ouch!

I'm convinced this must be possible so I think my next step is to compile
the native code as a library and import that into my mixed mode project. If
that doesn't work then I just don't know.

All suggestions gratefully received.

Jos
 
Jos Vernon

No - exactly the same as before. Still five times slower.

I had to turn off some optimizations because they were incompatible with the
CLR flag. Can they really make that much difference?

This is a complex real-world app, not a demo, so I would have thought that
things like whole program optimization might make a difference of 10%, not
500%.

Anyone from MS care to comment?

Jos
 
Tomas Restrepo (MVP)

Hi Jos,
No - exactly the same as before. Still five times slower.

I had to turn off some optimizations because they were incompatible with the
CLR flag. Can they really make that much difference?

Can you post the set of flags you're compiling with?
Also, can you give us some idea of the kind of MC++ code you're writing
here? That should give us a clue as to what's going on....
 
Jos Vernon

Can you post the set of flags you're compiling with?
Also, can you give us some idea of the kind of MC++ code you're writing
here? That should give us a clue as to what's going on....

Well my latest incarnation is written as an x86 library which is then linked
with a MC++ shell.

The x86 lib is compiled with /O2 (Maximise Speed) and /Ob2 (Any Suitable
Inline Function Expansion), with no global optimization (because it's
incompatible with /clr). Otherwise it's a standard release build.

The .NET shell has the same settings but has global optimizations turned on
(I think this may be being ignored because it's incompatible with /clr). It
also has Favor Fast Code turned on. The linker optimization is standard.

The shell is a shell - not much there. Typically things like this
(STARTFUNCTION and ENDFUNCTION are a simple set of try/catch handlers for
SEH)...

[Category("Settings"), Description("Add a page at a specified location - return the page ID")]
int AddPage(int page)
{
    STARTFUNCTION()
    return mPDF->AddPage(page);
    ENDFUNCTION()
}
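Since the STARTFUNCTION/ENDFUNCTION macros themselves aren't shown, here is a hypothetical sketch of what they might expand to, written in standard C++ for illustration (the real macros presumably translate SEH/native failures into a .NET exception; NativePdf is a stand-in for the real mPDF object):

```cpp
#include <stdexcept>

// Hypothetical expansions: open a try block, and convert any native
// failure into a single exception type the caller can handle.
#define STARTFUNCTION() try {
#define ENDFUNCTION() \
    } catch (...) { throw std::runtime_error("native call failed"); }

// Stand-in for the native PDF object behind the wrapper.
struct NativePdf {
    int AddPage(int page) {
        if (page < 0) throw page;  // simulate a native failure
        return page + 1;           // pretend this is the new page ID
    }
};

int AddPage(NativePdf& pdf, int page)
{
    STARTFUNCTION()
    return pdf.AddPage(page);
    ENDFUNCTION()
}
```

A wrapper this thin shouldn't cost much by itself, which is consistent with Jos's observation that the transitions aren't the bottleneck.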

All suggestions gratefully received.

Does anyone know if MS use MC++ for any of the .NET Framework? Or do they
package all their x86 code in standard x86 DLLs and call it that way?

I can't believe it was used for .NET 1.0 because the AppDomain bug meant
that anything written using it wouldn't work under ASP.NET. Has anything
changed with 1.1?

Jos
 
Mark Mullin

Number Five needs more data...

We have a 3D engine for massive financial visualization and we use C#
to implement the interface. While we wouldn't recommend C# for vector
processing, it's pretty snappy at what it does. I suspect something's
at cross purposes.

First principles
< begin a set of opinionated statements>
C# is based on an intermediate virtual system, so it's going to run
slower than C++ native code, and I'm really getting sick of the 'but
no' from the MSFT folks on this. First because they keep arguing with
how the facts came to be, and second because they can't claim their
virtualization has zero overhead; even with zero overhead the cost is
still higher, because they're not doing straight stack pushes and pops
to get parameters like a native app can. C# probably costs an order
of magnitude or more in potential performance. In most cases this
difference is so small as to be pointless, but that doesn't mean it's
not there.

So, are you doing any computationally horribly complex or long
operations? We have operations where we need to normalize several
100K values. That kind of thing goes in C++. Anything that talks to
the user or the net is fine; C# is not shabby, I'm just saying it's
not perfect.

Data transport between DLLs by straight function call is cheap. Real
cheap. It's also unmanaged. So's the Windows kernel, but people from
MSFT.NET seem to keep forgetting that when casting elderberry wine and
hamsters at unmanaged code. Transition between the .NET managed
architecture and the unmanaged architecture is something I'm pretty
sure I don't want to know anything about. Probably _worse_ than
making sausage. In any case it's very expensive.

.NET makes it real easy to use COM-based communications. That's a
wonderful thing. If you haven't dealt with COM before: it's very, very
expensive compared to traditional function calls. Really outrageously
expensive if the component you're talking to is in a different
process. And if it's off the machine, hey, you're using the DAL du
jour.

< begin a set of highly opinionated statements>

The rules of thumb I use, and the answers to them might help here

< this goes in C++ >
for (i = 0; i < 100K or so; i++)
    gruesomely complex numerical operation or data diddling
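The rule of thumb above, made concrete with the normalization example from earlier in this post (a sketch only; Normalize is a hypothetical name): a tight pass over ~100K values is exactly the kind of loop to keep on the native side and trigger with a single call.

```cpp
#include <cmath>
#include <vector>

// Normalize a vector to unit length: two tight passes over the data,
// with no managed/unmanaged boundary crossings inside the loops.
void Normalize(std::vector<double>& values) {
    double sumSq = 0.0;
    for (double v : values) sumSq += v * v;
    if (sumSq == 0.0) return;  // nothing to scale
    const double inv = 1.0 / std::sqrt(sumSq);
    for (double& v : values) v *= inv;
}
```

The managed side would then make one call passing the whole array, rather than touching values one at a time across the boundary.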


Stay managed as long as possible on the C# side. Technically, we
never do _anything_ unmanaged on the C# side except when we actually
call our DLL functions.

COM is nice, but learning to use arrays is twice as nice. Never ever
make 10 COM calls when 1 call with an array will do.

Remember that C# garbage collects. If your C++ and C# are too tightly
bound, you'll see a regular 'anti-heartbeat'.

It's nice that C++ can generate IL code. Pointless, but nice. C# is a
better tool for managed code, C++ for unmanaged. For that matter,
it's possible to write C#, generate IL, and then decompile to Java, but
that's also pointless. Funny, but pointless. The bad news: managed
C++ code is not C++ with a wee few nits; it's slower. If it wasn't,
it wouldn't be secure. Don't write managed C++, write managed C#.

Unmanaged debugging is real damn slow compared to either managed C#
debugging or native C++ debugging. Forget about it: turn unmanaged
debugging on, start off in C#, and the system immediately starts
running at 1 kHz. I haven't worked this all out yet, but I do know
that using a mixture of DLL function and COM calls, when one object of
my affection was Excel, caused me to form this rule. It was
interesting for the first few minutes as it tried to deal with Excel,
but I guess futility isn't a concept the debugger understands.

Anyway, hope this helps some. Check out the profiler you can get off
the MSFT site, make sure you only use unmanaged debugging when you
absolutely have to (it is way cool, that is true), make your COM calls
efficient if you have to make them at all, and don't make the system
keep jumping between managed and unmanaged code.

regards
mmm
Jos Vernon said:
Can you post the set of flags you're compiling with?
Also, can you give us some idea of the kind of MC++ code you're writing
here? That should give us a clue as to what's going on....

Well my latest incarnation is written as an x86 library which is then linked
with a MC++ shell.

The x86 lib is compiled with /O2 (Maximise Speed) and /Ob2 (Any Suitable
Inline Function Expansion), with no global optimization (because it's
incompatible with /clr). Otherwise it's a standard release build.

The .NET shell has the same settings but has global optimizations turned on
(I think this may be being ignored because it's incompatible with /clr). It
also has Favor Fast Code turned on. The linker optimization is standard.

The shell is a shell - not much there. Typically things like this
(STARTFUNCTION and ENDFUNCTION are a simple set of try/catch handlers for
SEH)...

[Category("Settings"), Description("Add a page at a specified location - return the page ID")]
int AddPage(int page)
{
    STARTFUNCTION()
    return mPDF->AddPage(page);
    ENDFUNCTION()
}

All suggestions gratefully received.

Does anyone know if MS use MC++ for any of the .NET Framework? Or do they
package all their x86 code in standard x86 DLLs and call it that way?

I can't believe it was used for .NET 1.0 because the AppDomain bug meant
that anything written using it wouldn't work under ASP.NET. Has anything
changed with 1.1?

Jos
 
Carl Daniel [VC++ MVP]

Jos said:
Does anyone know if MS use MC++ for any of the .NET Framework? Or do
they package all their x86 code in standard x86 DLLs and call it that
way?

I believe that the native parts of the BCL predate MC++. If you look in the
Rotor (aka SSCLI) source code, you'll see that the native functions use a
special attribute ([internal]) in the C# code to cause the compiler to call
a native function. The native functions are implemented in normal C++,
using a lot of macros to create definitions for the functions. I don't
think this kind of binding is documented anywhere - it's meant to be
internal to the BCL implementation, after all.
I can't believe it was used for .NET 1.0 because the AppDomain bug
meant that anything written using it wouldn't work under ASP.NET. Has
anything changed with 1.1?

Not in the way the native portions of the BCL are implemented, no.

-cd
 
Tomas Restrepo (MVP)

Hi Jos,
Well my latest incarnation is written as an x86 library which is then linked
with a MC++ shell.

The x86 lib is compiled with /O2 (Maximise Speed) and /Ob2 (Any Suitable
Inline Function Expansion), with no global optimization (because it's
incompatible with /clr). Otherwise it's a standard release build.

I'm assuming you mean /GL and not /Og here, right?
The .NET shell has the same settings but has global optimizations turned on
(I think this may be being ignored because it's incompatible with .CLR).
Also has Favor Fast Code turned on. The linker optimization is standard.

Just out of curiosity, are you specifying /O2 or /O1 as well? (I hope you
have at least /Og in there besides /Ot, otherwise, you're not doing much,
really).
The shell is a shell - not much there. Typically things like this
(STARTFUNCTION and ENDFUNCTION are a simple set of try/catch handlers for
SEH)...

[Category("Settings"), Description("Add a page at a specified location -
return the page ID")]
int AddPage(int page)
{
STARTFUNCTION()
return mPDF->AddPage(page);
ENDFUNCTION()
}

OK, I can see that....
All suggestions gratefully received.

Does anyone know if MS use MC++ for any of the .NET Framework? Or do they
package all their x86 code in standard x86 DLLs and call it that way?

AFAIK, not much in the framework itself is written in MC++ (actually, the
only thing I can think of that *might* be is
System.EnterpriseServices.Thunk.dll)

Most of the rest of the framework libraries are straight C#.

I can't believe it was used for .NET 1.0 because the AppDomain bug meant
that anything written using it wouldn't work under ASP.NET. Has anything
changed with 1.1?

Some of the ASP.NET problems you mention have been corrected in
v1.1 (for assemblies compiled with VC++ 7.1)... see
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q309694

You should also be aware of
http://support.microsoft.com/default.aspx?scid=kb;en-us;814472
 
Tomas Restrepo (MVP)

Hi Jos,
I don't really think it matters. I've tried a set of variations on a
theme.

Hmm... well, it *should* matter. /GL (enable whole program optimization) is
incompatible with /clr. /Og (enable global optimizations) is *not*. :)
Ultimately these are the same optimizations I was using in a straight DLL
with C# front end and it ran 5x faster.

That does seem weird, and certainly shouldn't be that way unless something
weird is going on.... I'm out of ideas, though, without seeing the actual
code :(
 
