.NET SUCKS --- READ FOLLOWING. MICROSOFT IS A SUCKY CO


Jon Skeet [C# MVP]

Mike Hofer said:
You *know* so? How? What information sources do you have that we do
not? Can you please post them so we can be equally as informed? Thanks.

Richard is the author of many books about Microsoft products. I think
it's fair to say he's got better sources than most people. I dare say
most of what he knows is under NDA in this respect however.
I'm sorry, but I have to disagree with that point. The shared source
CLI (available at
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/faq111700.asp)
compiles and runs on Windows and FreeBSD.

That's not .NET. That's the CLI.
The Microsoft® Shared Source CLI Implementation is a file archive
containing working source code for the ECMA-334 (C#) and ECMA-335
(Common Language Infrastructure, or CLI) standards. These standards
together represent a substantial subset of what is available in the
Microsoft .NET Framework. In addition to the CLI implementation and the
C# compiler, the Shared Source CLI Implementation contains a cornucopia
of tools, utilities, additional Framework classes, and samples. It will
build and run on the Microsoft Windows® XP and the FreeBSD operating
systems.

Note the "substantial subset" bit there.
Oh, now that's just splitting hairs. :) .NET is a platform, not a set
of binaries. Further, it is an open standard, and you can obtain the
source from Microsoft itself.

Now brace yourselves for this: .NET itself is not Microsoft .NET.
Microsoft .NET is Microsoft's implementation of the .NET standard (the
CLI and the BCL).

Where exactly is this ".NET standard" specified? One would think that
the CLI/BCL standards would refer to .NET. Certainly the CLI standard
doesn't.

.NET is an implementation of the CLI and BCL standard (and more
besides, of course), but it is not in itself a standard.

So on the one hand, you're right. Microsoft .NET isn't platform
independent. But .NET itself is. It helps to keep them separate.

No, it helps to keep Microsoft .NET and the CLI/BCL separate - which is
why they're not both called ".NET".
 

Richard Grimes

Kevin said:
I'm afraid YOU haven't been following the conversation. That message,
and all of the replies to it, and most of this thread, are over a
week old. It's ancient history, and I don't want to revisit it.

Oh, right. So I haven't been able to get to the groups for a few days
and so *you* bar me from correcting some issues. Wow, how open minded
does that make you?

Richard
 

Richard Grimes

Mike said:
You *know* so? How? What information sources do you have that we do
not? Can you please post them so we can be equally as informed?
Thanks.

I was first given access to Longhorn builds in mid-2002 (i.e., a year
before the first public release). I had been asked to write a book about
WinFS (you may have read my article on WinFS for MSDN Magazine). Of
course that's all history now. My 'minder' at Microsoft told me that the
developers on the LH team were told that *all* development in LH (note,
I say Longhorn, not Whidbey) had to be managed code. They were told
that if they wanted to write *any* native code they had to make a good
case for it. I was told that unmanaged wrappers would be supplied for
'VB6 developers to use'.

I have just finished an analysis of Vista and I find that there is very
little .NET in the operating system. Clearly, when the big change
happened, when the code base was changed from XP to Win2003 they also
took the decision to do the majority of Longhorn development in native
code. That is a huge shift in just a few years. As a .NET enthusiast,
and someone who has spent 5 years persuading people to use .NET, that
comes as a complete disappointment to me.

I wouldn't tell anyone that I *knew* something without being able to
back it up with facts.

Is this enough for you?

http://www.grimes.demon.co.uk/dotnet/vistaAndDotnet.htm
I'm sorry, but I have to disagree with that point. The shared source
CLI (available at
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/faq111700.asp)
compiles and runs on Windows and FreeBSD.

Go on, give me a quote *anywhere* on microsoft.com, or from *any*
Microsoft person where they use the term 'cross platform'. They are
studiously careful *not* to say that, because that is not their
intention. .NET is a Windows technology and that is the way they want it
to remain. Yes, Rotor has a PAL so theoretically you can compile it for
other platforms, but it is a research tool, and it is not supported for
production code.
Despite its small share of the desktop computer market, Microsoft has
continued to dump lots of money into the development of software for
Apple computers (which are (1) now running Unix [the core of OSX] and
(2) soon coming to Intel chips). It would be far more economical for
Microsoft to be able to write one codebase for Office and deploy it to
both operating systems without having to recompile.

Oh yes. That's the way I thought 5 years ago. But Microsoft are
showing no intention of making products like Office based on .NET. I have
asked Microsoft many times (most recently I asked Eric Rudder about
this) and Microsoft have always replied that Office will remain a native
application, although parts of it may have .NET code. Clearly their
intention is to use .NET for RAD (in particular - RAD for *you*) and
keep their code base native.
Oh, now that's just splitting hairs. :) .NET is a platform, not a set
of binaries. Further, it is an open standard, and you can obtain the
source from Microsoft itself.

If only. Rotor is an altered version of .NET; yes, the library is very
similar, Reflector shows that, but the unmanaged code? As I said above,
Rotor is just a research tool. It is not a reference standard.
Microsoft's implementation of the .NET standard; Mono is an open-source
implementation of .NET for several different operating systems and CPUs.

Here's a question: will mono code run under Microsoft's runtime, and
vice versa?

Microsoft are putting huge efforts into Web Services (SOA); why do you
think that is? It is because the code stays in one place - running under
the platform that it was designed to run. Microsoft have *never* said
that their intention is for 'mobile' code, that is, code that will run
on multiple platforms. I know because I have spent 5 years trying to
find such a statement!
Trying not to laugh out loud at this one. :)

Let's be realistic. You can't POSSIBLY expect to design an application
for a desktop whose minimum display size these days is 800x600 and
expect it to run on PDAs. Further, look at all the equipment you can
connect to a desktop that can't be connected to a PDA. And no one in
their right mind would constrain the design of a desktop application to
the constraints of a PDA application. That's not even CLOSE to being
realistic.

You have amply killed your own argument: .NET is *not* cross platform.
You cannot take code written for one platform and get it to run in
another one.

The point is that the Compact Framework does not support the entire .NET
framework that the desktop does. Most of the overloads are missing. Many
of the classes are missing. For example, I wanted to use a SHA hash on
my iPaq but there aren't any crypto classes in the CF, so I had to write
my own. This means that any utility code written for the desktop that
uses SHA, but does not use *any* hardware features, will not work on a
CF machine.
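
To illustrate, here is a minimal sketch of the kind of desktop-framework
call in question (an assumed example, not code from the post); it compiles
against the desktop framework, but the Compact Framework of that era has no
System.Security.Cryptography classes, so it cannot be ported as-is:

// Minimal sketch: SHA-1 hashing via the desktop framework's crypto classes.
// The Compact Framework has no equivalents, so this exact code won't port.
using System;
using System.Security.Cryptography;
using System.Text;

class ShaDemo
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("hello, world");
        SHA1 sha = SHA1.Create();               // no CF counterpart in this era
        byte[] hash = sha.ComputeHash(data);
        Console.WriteLine(BitConverter.ToString(hash));
    }
}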
Can you back that up with facts? Links? Statistical data? Under which
particular circumstances is it processor hungry?

Been following the argument? When the GC occurs it suspends all threads.
It does a walk of all objects from the roots it determines. All objects
in the finalization queue are finalized. You're telling me that all of
that is trivial and uses infinitesimally small amounts of CPU cycles? I
have written .NET code that has frozen the entire machine; sure, I
didn't intend it to do that, and a bug created 10 times more objects
than I intended, but it certainly froze the machine when a GC occurred.
The GC is great for what it does, but don't ever imagine that it is some
piece of magic that will solve all of your software problems.
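
To put a number on it, here is a contrived sketch (an assumed example, not
the code described above) that burst-allocates and then times a forced full
collection:

// Contrived sketch: build a large live object graph, then time a full GC.
using System;
using System.Collections;

class GcPauseDemo
{
    static void Main()
    {
        ArrayList roots = new ArrayList();
        for (int i = 0; i < 1000000; i++)
            roots.Add(new byte[100]);           // ~100 MB of live objects

        int t0 = Environment.TickCount;
        GC.Collect();                           // suspends threads, walks from the roots
        GC.WaitForPendingFinalizers();          // drains the finalization queue
        Console.WriteLine("Full collection took {0} ms",
                          Environment.TickCount - t0);
    }
}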

Richard
 

Kevin Spencer

Oh, right. So I haven't been able to get to the groups for a few days and
so *you* bar me from correcting some issues. Wow, how open minded does
that make you?

You were replying to a comment I made some time ago, and which was already
discussed to my satisfaction some time ago. This thread is no longer of
interest to me. It was going on for a while when I jumped into it, and I have
more fish to fry. How does that reflect on my "open mindedness?"

I've been putting in 60-hour weeks now for a month, and debating is not
something I have the time or inclination for at this point.

Good heavens, dude, isn't "I'm no longer interested" a good enough reason
for me to move on to other things, or does it make me "bad" somehow?! And if
you think it does, how open minded does that make YOU?

--
HTH,

Kevin Spencer
Microsoft MVP
.Net Developer
Ambiguity has a certain quality to it.
 

Rob Perkins

Richard said:
Oh, right. So I haven't been able to get to the groups for a few days
and so *you* bar me from correcting some issues. Wow, how open minded
does that make you?

Conversation died out a couple of weeks ago, actually. No matter,
though, IMO, because your comments are welcome here, as far as I'm
concerned.

Rob
 

Mike Hofer

Jon said:
Richard is the author of many books about Microsoft products. I think
it's fair to say he's got better sources than most people. I dare say
most of what he knows is under NDA in this respect however.

I wasn't aware of that. Thanks for pointing it out. I'll have to look
up some of his books on Amazon. :)
That's not .NET. That's the CLI.

Semantics shoot me again. :) I was always under the impression that
".NET" was the CLI, not the CLI plus whatever Microsoft strapped on for
Windows support. If ".NET" is strictly Microsoft's implementation of
the CLI, then my whole post was complete crap. (Not that that's
anything unusual, but I *am* trying to do better.)

The source of some of my confusion may be as follows:

"When Microsoft released the C# programming language and the .NET
platform, it also crafted a set of formal documents that described the
syntax and semantics of the C# and CIL languages, the .NET assembly
format, core .NET namespaces, and the mechanics of a hypothetical .NET
runtime engine (known as the Virtual Execution System, or VES). Better
yet, these documents have been submitted to Ecma International as
official international standards (http://www.ecma-international.org)."
[Andrew Troelsen, Pro C# 2005 and the .NET 2.0 Platform]

See, the way I read that is that the CLI describes the .NET platform in
an OS and/or CPU neutral way. I may--as has been the case in the
past--be reading more into it than was actually written. Or, this text
(in the 3rd edition of the book) may be misleading. Either way, I'm
*very* interested in clearing this up.

(It would help if we could get a clear, definitive answer in one
document, published by Microsoft. It would be really nice [and
therefore completely improbable] if Microsoft could say, "Yes, we
intend to port .NET to other platforms.")
Note the "substantial subset" bit there.

Yep. Noted that. It doesn't include System.Windows.Forms, System.Web,
System.Data, etc.
Where exactly is this ".NET standard" specified? One would think that
the CLI/BCL standards would refer to .NET. Certainly the CLI standard
doesn't.

.NET is an implementation of the CLI and BCL standard (and more
besides, of course), but it is not in itself a standard.

Again, that will be cleared up when it gets through my thick skull:
when I finally understand the difference between .NET, the CLI, and
which is intended to be platform independent.
No, it helps to keep Microsoft .NET and the CLI/BCL separate - which is
why they're not both called ".NET".

Acknowledged. :)

I think, for me, this is like the chicken and the egg. Which came
first? The CLI or .NET?
 

Mike Hofer

Richard said:
I was first given access to Longhorn builds in mid-2002 (i.e., a year
before the first public release). I had been asked to write a book about
WinFS (you may have read my article on WinFS for MSDN Magazine). Of
course that's all history now. My 'minder' at Microsoft told me that the
developers on the LH team were told that *all* development in LH (note,
I say Longhorn, not Whidbey) had to be managed code. They were told
that if they wanted to write *any* native code they had to make a good
case for it. I was told that unmanaged wrappers would be supplied for
'VB6 developers to use'.

First and foremost, I apologize if it seemed like I was questioning
your credentials. I wasn't. I was asking for the information because I
have an interest in it. I am sorry if I offended.

Mr. Skeet pointed out to me that you're the author of several books. I
wasn't aware of that. I see that you've written "Professional ATL COM
Programming," and "Developing Applications with Visual Studio .NET".
I'm sure there are more, but Amazon isn't listing them. I have no
doubts about your credentials.

Sadly, my company is cheap. I'm still arm wrestling them to get an MSDN
subscription. If we had one, I'd have seen your article in the online
libraries.
I have just finished an analysis of Vista and I find that there is very
little .NET in the operating system. Clearly, when the big change
happened, when the code base was changed from XP to Win2003 they also
took the decision to do the majority of Longhorn development in native
code. That is a huge shift in just a few years. As a .NET enthusiast,
and someone who has spent 5 years persuading people to use .NET, that
comes as a complete disappointment to me.

Now that *is* disappointing. And completely justifies what you were
saying. I wonder what drove that decision on their part.


Quite. Thank you very much. I'm printing it now. :)

Go on, give me a quote *anywhere* on microsoft.com, or from *any*
Microsoft person where they use the term 'cross platform'. They are
studiously careful *not* to say that, because that is not their
intention. .NET is a Windows technology and that is the way they want it
to remain. Yes, Rotor has a PAL so theoretically you can compile it for
other platforms, but it is a research tool, and it is not supported for
production code.

Okay, you got me there. (Smacks of "plausible deniability," doesn't
it?) But it *seemed* to me that since Microsoft itself was publishing
the shared-source CLI on MSDN, they were opening it up for
cross-platform development. It *seemed* to be a logical inference.

I'm going to have to stop doing that.

(PAL?)

I have asked Microsoft many times (most recently I asked Eric Rudder about
this) and Microsoft have always replied that Office will remain a native
application, although parts of it may have .NET code. Clearly their
intention is to use .NET for RAD (in particular - RAD for *you*) and
keep their code base native.

Now that's just cheating on Microsoft's part. It means that their
applications will always perform better unless you precompile
everything to native code.

And it is really sad that they don't plan to port Office to .NET. It
would be nice if IT folks only had to worry about one product, running
on two different OSes. It would make it easier to move between Mac and
PC as well. But, again, that kind of ease makes it *highly* improbable
that Microsoft will do it.

Here's a question: will mono code run under Microsoft's runtime, and
vice versa?

That's a really good question. I feel an experiment coming on.
Microsoft have *never* said that their intention is for 'mobile' code,
that is, code that will run on multiple platforms. I know because I
have spent 5 years trying to find such a statement!

I'll defer to your experience on that one, then.

But I really wish folks--including Microsoft--would stop treating the
Web as the silver bullet of application development. Web applications
are a pain in the ass to develop, don't have anywhere near as rich a
UI, and are slow as hell. My users keep asking why web applications
can't do things that desktop apps can. Explaining why to them is like
pulling chicken teeth.

But I digress.

You have amply killed your own argument: .NET is *not* cross platform.
You cannot take code written for one platform and get it to run in
another one.

You'll find that I do that a lot. I'm working hard at becoming more
informed, and writing more accurate, knowledgeable posts, though.
The point is that the Compact Framework does not support the entire .NET
framework that the desktop does. Most of the overloads are missing. Many
of the classes are missing. For example, I wanted to use a SHA hash on
my iPaq but there aren't any crypto classes in the CF, so I had to write
my own. This means that any utility code written for the desktop that
uses SHA, but does not use *any* hardware features, will not work on a
CF machine.

Okay, I can see that.
Been following the argument? When the GC occurs it suspends all threads.
It does a walk of all objects from the roots it determines. All objects
in the finalization queue are finalized. You're telling me that all of
that is trivial and uses infinitesimally small amounts of CPU cycles? I
have written .NET code that has frozen the entire machine; sure, I
didn't intend it to do that, and a bug created 10 times more objects
than I intended, but it certainly froze the machine when a GC occurred.
The GC is great for what it does, but don't ever imagine that it is some
piece of magic that will solve all of your software problems.

I'd argue -- at the risk of shooting myself in the foot -- that that
was a defect that would have been rooted out in development or QA. I
think that the garbage collector can be expected to do odd things on a
development box, but not on a production box. Those kinds of bugs would
certainly have been rooted out before the product went live.

Right?

<ducking!>
 

Richard Grimes

Mike said:
Okay, you got me there. (Smacks of "plausible deniability," doesn't
it?)

Oh, I agree. It's a statement I have expected Microsoft to make ever since
they released the first beta. But they have *never* said that .NET is
cross platform. They have always been careful to avoid it. I think the
reason is that Java's "write once, run anywhere" adage has received a
lot of flak. In any case, they don't really want people developing for
Linux, and the Mac is such a small market compared to PCs that it's
irrelevant to them. Why do they need to make it cross platform?
I'm going to have to stop doing that.

(PAL?)

Platform Adaptation Layer. Essentially it is a C++ abstraction layer used
by the unmanaged code. The idea is that if you compile Rotor for a new
platform you only need to replace the API calls in the PAL - the rest of
the code uses the PAL functions (well, it's more complicated than that,
but you see the principle).
And it is really sad that they don't plan to port Office to .NET. It
would be nice if IT folks only had to worry about one product, running
on two different OSes. It would make it easier to move between Mac and
PC as well. But, again, that kind of ease makes it *highly* improbable
that Microsoft will do it.

Clearly they have a business reason for their decision. My opinion is
that the security aspect of .NET alone is reason enough for Microsoft
to make all new development managed. But someone in Microsoft disagrees.
But I really wish folks--including Microsoft--would stop treating the
Web as the silver bullet of application development. Web applications
are a pain in the ass to develop, don't have anywhere near as rich a
UI, and are slow as hell. My users keep asking why web applications
can't do things that desktop apps can. Explaining why to them is like
pulling chicken teeth.

Oh, I agree. Recently I installed a new firewall on my machine, and when I
ran ntbackup the firewall told me that ntbackup was trying to access the
internet. Why? This is plain daft, and the alarm bells started ringing,
suggesting to me that there was a Trojan. I suspect that it was the
initialization of the help system, but Microsoft really has to start
thinking about core functionality rather than bells and whistles.
I'd argue -- at the risk of shooting myself in the foot -- that that
was a defect that would have been rooted out in development or QA. I
think that the garbage collector can be expected to do odd things on a
development box, but not on a production box. Those kinds of bugs
would certainly have been rooted out before the product went live.

That's what I have thought for a long while, but I now keep hearing
conflicting opinions, and Microsoft's own action of practically removing
all .NET apps out of Longhorn/Vista makes me suspicious.

The point is that the managed heap makes allocating memory extremely cheap,
far cheaper than in C++. But garbage collection and finalization are
expensive because they happen all at once.

Richard
 

Guest

This argument is ridiculous.

I would use .NET for this purpose.

Any application that must guarantee a response 100% of the time within 1
second MUST HAVE a finite set of states and each of these states MUST BE
resolvable within 1 second 100% of the time.

You design such an application as a FINITE STATE MACHINE, within this you
PREALLOCATE all memory you need (keep it in scope), in ANY LANGUAGE
ENVIRONMENT, including .NET. The GC will not run since you will never reach a
point where you run out of your allocation of heap GC Level 0 memory. The
approach should be identical in both JAVA and C++!

If you are doing ANYTHING with your problem state that requires a
non-deterministic operation (something you cannot guarantee how long it will
take, e.g.: disk IO, writing to SQL Server, etc.) then this operation must be
QUEUEABLE in a guaranteed deterministic fashion, which excludes the use of
something like MSMQ. You build your own queue, normally with something like
shared memory and semaphores, then another thread services the queue in an
asynchronous fashion. In this case the shared memory is used to allow for
both threads to run in separate .NET application domains. Each domain has its
own GC and does not interfere with one another's threads. You design your
semaphores so that reads (the servicing thread) respect locks but never lock
the writes (the finite state thread will never be locked).


Your solution WILL ALWAYS be subject to the performance of the thread
servicing the queue and its ability to keep up, but since any disk IO or SQL
request, though it can take longer than expected, is on average faster than
required, you are safe.
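
As a rough single-process sketch of that pattern (names are illustrative,
and the cross-AppDomain shared-memory plumbing is omitted): everything is
preallocated before the time-critical loop starts, the producer never
allocates or blocks, and a separate thread drains the queue:

// Sketch: preallocated circular queue with a deterministic producer and a
// non-deterministic consumer, decoupled by a semaphore. No overflow
// handling - a real system would size the buffer for the worst case.
using System;
using System.Threading;

struct Tick                                     // value type: storing one allocates nothing
{
    public int Move1, Move2;
    public double Price;
}

class PreallocatedQueue
{
    const int Size = 4096;
    static Tick[] buffer = new Tick[Size];      // allocated once, up front
    static int head, tail;
    static Semaphore filled = new Semaphore(0, Size);

    static void Produce(Tick t)                 // deterministic side: no 'new' here
    {
        buffer[head] = t;
        head = (head + 1) % Size;
        filled.Release();                       // wake the servicing thread
    }

    static void Consume()                       // non-deterministic side
    {
        while (true)
        {
            filled.WaitOne();
            Tick t = buffer[tail];
            tail = (tail + 1) % Size;
            // ... write t to SQL Server here; a pause on this thread
            // never stalls the producer
            Console.WriteLine("price {0}", t.Price);
        }
    }

    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(Consume));
        worker.IsBackground = true;             // don't keep the process alive
        worker.Start();

        Tick t;
        t.Move1 = 45; t.Move2 = 41; t.Price = 125.67;
        Produce(t);
        Thread.Sleep(500);                      // give the consumer a moment
    }
}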

If your response time requirements are 1 second, I would gladly use .NET. I
would design a similar solution for C++, assembler, Java, Delphi, VB 6 etc.

One developer cannot be applied to all solutions equally. How much
experience do you have with RT systems in any case? I suggest you consult
with an architect that has RT experience including building RT solutions on
non-RT platforms.

Sincerely,
 

Jon Skeet [C# MVP]

(It would help if we could get a clear, definitive answer in one
document, published by Microsoft. It would be really nice [and
therefore completely improbable] if Microsoft could say, "Yes, we
intend to port .NET to other platforms.")

Yes, that would be lovely, wouldn't it? I can't see it happening. I
believe MS designed the whole thing so that it *could* be portable to
other architectures. That doesn't mean they want to port it to
different operating systems.
Yep. Noted that. It doesn't include System.Windows.Forms, System.Web,
System.Data, etc.

Indeed. MS have standardised just enough to make it a reasonable
standard, but not enough to really make it a useful platform for real
work on its own.

I think, for me, this is like the chicken and the egg. Which came
first? The CLI or .NET?

For me, they came together - .NET is just an implementation of the CLI
and more.
 

Mike Hofer

cmdematos wrote:
Any application that must guarantee a response 100% of the time within 1
second MUST HAVE a finite set of states and each of these states MUST BE
resolvable within 1 second 100% of the time.

In an ideal world, yes; but this isn't an ideal world. :)

In an ideal world, the people who make promises about a software product's
capabilities know something about software development, about what's
realistic and what isn't. But here in the real world, many companies
use marketing professionals to present ideas to customers and some
companies make it a habit to exclude development professionals from the
contract acquisition process altogether. Without an experienced
development professional in the process, the promises that are made
often set *very* unrealistic expectations, but they catch customer
interest and that's what makes the sale. Sadly, once the sale is over,
the project is dropped in the laps of the IT staff, and the developers
are saddled with the responsibility of making fantasy a reality. I'm
going through it now. I'm slowly teaching them to stop doing that, but
it takes time. And in the meanwhile, you have to deliver what they've
already promised.

It's possible his application's requirements were unrealistic from the
beginning. But from what he describes, I tend to think that's not the
case.
You design such an application as a FINITE STATE MACHINE, within this you
PREALLOCATE all memory you need (keep it in scope), in ANY LANGUAGE
ENVIRONMENT, including .NET. The GC will not run since you will never reach a
point where you run out of your allocation of heap GC Level 0 memory. The
approach should be identical in both JAVA and C++!

Some of this seems to make sense to me, and some of it doesn't. If I
understand garbage collection correctly, the GC in .NET runs whether I
like it or not. Nothing I do will prevent it from running. If it
decides to kick in, it kicks in. And what about other running processes
and their effect on the managed heap? When memory gets low, the GC will
run. At that point, all other threads are suspended so the GC can run.
My process--finite state or not--will be put on hold.

It seems generally understood that there's a certain amount of
uncertainty that's inherent in an operating system that uses preemptive
multitasking. I may design my application to be rock solid, and I may
know that when I designed it, coded it, and tested it, it operated
within all operational and functional constraints. But as soon as I
drop it onto a server where other processes are running--processes that
are out of my control--all bets are off. There are no guarantees about
response times because someone else can be badly behaved. The operating
system will knock me off my throne so that it can put somebody else on
it for a bit. And while someone else is on that throne, anything can
happen.

This kind of uncertainty is what makes Windows and .NET a
less-than-optimal choice for mission critical real-time systems with
high demands and small response times. If I have to use an OS like
Windows, I *need* to understand that there is no such thing as an
absolute guarantee, and that I need to make my acceptable response
times large enough to be reasonable for the OS I'm on. If Windows can't
give me the response time I'm after, I need to look at a different OS.
If you are doing ANYTHING with your problem state that requires a
non-deterministic operation (something you cannot guarantee how long it will
take, e.g.: disk IO, writing to SQL Server, etc.) then this operation must be
QUEUEABLE in a guaranteed deterministic fashion, which excludes the use of
something like MSMQ. You build your own queue, normally with something like
shared memory and semaphores, then another thread services the queue in an
asynchronous fashion. In this case the shared memory is used to allow for
both threads to run in separate .NET application domains. Each domain has its
own GC and does not interfere with one another's threads. You design your
semaphores so that reads (the servicing thread) respect locks but never lock
the writes (the finite state thread will never be locked).


Your solution WILL ALWAYS be subject to the performance of the thread
servicing the queue and its ability to keep up, but since any disk IO or SQL
request, though it can take longer than expected, is on average faster than
required, you are safe.

If your response time requirements are 1 second, I would gladly use .NET. I
would design a similar solution for C++, assembler, Java, Delphi, VB 6 etc.

I think that in most cases, this is very true. If his application can't
keep up, he should either review his client's expectations for
reasonable throughput or review the code for quality issues.
One developer cannot be applied to all solutions equally. How much
experience do you have with RT systems in any case? I suggest you consult
with an architect that has RT experience including building RT solutions on
non-RT platforms.

Hear, hear! :)
 

Guest

Hi Mike,

I enjoyed reading your response.

I did a lot of research on the GC recently to answer some specific
questions; this is what my research confirmed:

1. Each app domain has its own .NET runtime library, heap, stacks and
threads, including one GC thread and one finalizer thread.

2. The heap is divided into three generations only. Allocations are super
fast (much faster than C++).

3. Deallocations (when the GC runs) only occur when space in generation 0
needs to be extended. The GC is triggered by this; it does not run
arbitrarily. However, even if you don't have any faith in that (which I am
sure of) then have faith in this...

4. Should the GC run, it will first freeze all threads in its app domain;
other app domains are out of bounds the same way one application's memory
space is out of bounds to another in a non-.NET environment.

5. Should the GC run, it will only pack memory for generation 0. If the
resulting free memory does not free up enough then it packs generation 1; if
this still does not free up enough then it packs generation 2. All generations
except 2 are incremented. If there is still not enough memory then it
allocates more from the OS virtual memory pool.

The GC does not run unless it has been prompted to do so by the heap
running out of generation 0 memory.
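
You can actually watch the promotion happen (a small assumed demo, not from
the original post):

// Survivors of a collection are promoted one generation, topping out at 2.
using System;

class GenerationsDemo
{
    static void Main()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o)); // 0: fresh allocation
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // 1: survived one collection
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // 2: and it stays there
    }
}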

You ensure you have no need to allocate generation 0 memory (a new heap
memory request) by using only stack type (value type) local variables (Int32,
Double, etc.).

You never allocate new objects as part of your 1 second cycle. You use only
pre-created objects that are therefore already allocated.

Imagine your code was reading stock ticker data from a mainframe and
writing it into a SQL database. The mainframe implements a serial interface
with no protocol: 9600, 8,N,1, continuous feed.

Your code must read the data stream and find tokens. Let's say "!IBMI + 045
- 041 125.67!" is one record with four fields: IBMI is the name of the stock,
045 and 041 are some movement ranges, and 125.67 is the price.

Because there is no protocol you must guarantee continuous reading and
processing, otherwise you will lose data.

You write it like this...

1. Define deterministic code - this is the code that runs within the
restricted time
1.1 includes reading the next byte from the serial port
1.2 storing it into a predefined buffer (StringBuilder is OK here)
1.3 detecting when you have a complete record
1.4 parsing the string into a predefined structure (value type, not even on
the heap)
1.5 writing the structure into the next circular queue buffer (shared
memory)
1.6 incrementing the semaphore to trigger the non-deterministic code, which
is in a separate application and separate application domain

2. Define non-deterministic code
2.1 sits and waits on semaphore > 0
2.2 fires the moment the semaphore is > 0
2.3 reads memory
2.4 allocates SqlCommand (heap), builds parameters (heap)
2.5 fires SQL (totally non-deterministic)
2.6 GC can fire here at any time, but will only stop THIS process

Application 1 will never have the GC fire, and even if it does for some
strange reason, it won't have any work to do, since there are no objects being
created and destroyed on the heap.
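
A rough sketch of steps 1.3-1.4 (field offsets assumed from the example
record; illustrative only, not the poster's code) - the record lands in a
predefined value type, so the parse itself allocates nothing on the heap:

// Parse one "IBMI + 045 - 041 125.67" record (delimiters already stripped)
// into a value-type structure, with no heap allocation in the hot path.
using System;

struct StockRecord
{
    public char N0, N1, N2, N3;     // ticker kept as chars to avoid a string
    public int Move1, Move2;
    public double Price;
}

class RecordParser
{
    static StockRecord Parse(char[] buf)
    {
        // buf layout (fixed-width, assumed): "IBMI + 045 - 041 125.67"
        StockRecord r;
        r.N0 = buf[0]; r.N1 = buf[1]; r.N2 = buf[2]; r.N3 = buf[3];
        r.Move1 = (buf[7] - '0') * 100 + (buf[8] - '0') * 10 + (buf[9] - '0');
        r.Move2 = (buf[13] - '0') * 100 + (buf[14] - '0') * 10 + (buf[15] - '0');
        r.Price = (buf[17] - '0') * 100 + (buf[18] - '0') * 10 + (buf[19] - '0')
                + ((buf[21] - '0') * 10 + (buf[22] - '0')) / 100.0;
        return r;
    }

    static void Main()
    {
        // setup code may allocate; only the per-record cycle must not
        char[] buf = "IBMI + 045 - 041 125.67".ToCharArray();
        StockRecord r = Parse(buf);
        Console.WriteLine("{0}{1}{2}{3} {4} {5} {6}",
                          r.N0, r.N1, r.N2, r.N3, r.Move1, r.Move2, r.Price);
    }
}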

Now, for response times much faster than, say, USB 2.0 speeds, you perhaps
need an RT OS.

As for other applications running on the same machine,

The environment needs to be specified:
- Dark room PC (no human interface)
- No other applications running
- Enough memory to ensure zero paging
- No ancillary processes like Windows Update, etc.
- A large enough circular buffer to ensure survival if the network or SQL
server is not available for a predetermined period of time (say 15 minutes).

With this very real-world spec, I would build it in .NET.
 

Brian Gideon

Richard said:
Don't use .NET for services because services have privileged access to
the OS and so you don't want such code to suddenly suspend itself.

Richard,

Can you expand on this a little more? I'm assuming when you say
suspend you're speaking about the GC suspending application threads,
right? How is the behavior different when the application is a service
as opposed to GUI and why would we care?

Thanks,
Brian
 

Mike Hofer

Hi Carlos,

cmdematos wrote:
I did a lot of research on the GC recently to answer some specific
questions; this is what my research confirmed:

1. Each app domain has its own .NET runtime library, heap, stacks and
threads, including one GC thread and one finalizer thread.

2. The heap is divided into three generations only. Allocations are super
fast (much faster than C++).

3. Deallocations (when the GC runs) only occur when space in generation 0
needs to be extended. The GC is triggered by this; it does not run
arbitrarily. However, even if you don't have any faith in that (which I am
sure of) then have faith in this...

4. Should the GC run, it will first freeze all threads in its app domain;
other app domains are out of bounds the same way one application's memory
space is out of bounds to another in a non-.NET environment.

5. Should the GC run, it will only pack memory for generation 0. If the
resulting free memory does not free up enough then it packs generation 1; if
this still does not free up enough then it packs generation 2. All generations
except 2 are incremented. If there is still not enough memory then it
allocates more from the OS virtual memory pool.

The GC does not run unless it has been prompted to do so by the heap
running out of generation 0 memory.

You ensure you have no need to allocate generation 0 memory (a new heap
memory request) by using only stack type (value type) local variables (Int32,
Double, etc.).

You never allocate new objects as part of your 1 second cycle. You use only
pre-created objects that are therefore already allocated.

Wow! That was a great response, and cleared up a lot of confusion. I
especially enjoyed the section detailing how the garbage collector
works with app domains. I was unaware that each application domain got
its own GC thread. I don't recall seeing it mentioned, but that's
likely because I just haven't run across it in the documentation yet. I
was working under the assumption that all the different .NET
applications shared the same GC thread, and that some ubiquitous
service managed it. But giving each application domain its own heap,
and a GC thread to manage it makes far more sense and, as I see now, is
*far* more efficient because it limits the scope of the tree that must
be walked. Very clever, Microsoft. Very clever indeed.

I bet someone could write an entire book on .NET memory management, and
it would be a pretty decent and meaty read.

Now, for response times much faster than, say, USB 2.0 speeds, you perhaps
need an RT OS.

That was the kind of system *I* was thinking of (flight control
systems, medical devices, etc.). I don't think the original poster's
system has those kinds of throughput requirements.
As for other applications running on the same machine,

The environment needs to be specified:
- Dark room PC (no human interface)
- No other applications running
- Enough memory to ensure zero paging
- No ancillary processes like Windows Update, etc.
- A large enough circular buffer to ensure survival if the network or SQL
server is not available for a predetermined period of time (say 15 minutes).

And *THIS* is the rub! See, I figure that .NET runs as a layer on top
of the operating system. That means that an application written in .NET
is kind of separated from the operating system by a layer of
abstraction; the runtime itself is a separate process (or group of
processes). It's those unmanaged processes that can cause problems for
a system that needs to be able to do things at speeds worthy of Barry
Allen.

Take, for instance, the typical "Web server" running in many, many
companies, that is running IIS 6.0 with ASP.NET, SQL Server, and
Microsoft God Knows What Else (TM). I see a *lot* of these machines.

The ASP.NET applications have to share CPU cycles with the unmanaged
SQL Server instances running on the machine (along with the MSGKWE
software). If I understand this all correctly, a stalled instance of an
unmanaged product like SQL Server can hose the machine, thereby
preventing anything in the managed instances from executing.

(I actually see this a lot with a Crystal Report that hangs the SQL
Server instance on one of our own hybrid servers from time to time. It
brings the whole server to a halt: Web server, SMTP server, FTP server,
and SQL Server. It's just lovely. Crystal Reports = DOOOOOOOOOOM! But I
digress.)

Anyway, my point is that the preemptive multitasking in the OS could
hand control of the CPU to a stalled program, which would prevent your
application from getting the processor time it needs. That's the kind
of thing where, I think, no amount of good coding on your part will fix
the problem. *That* kind of thing would be the uncertainty principle I
was talking about.

I guess I'm not knowledgeable enough about the internals of "preemptive
multitasking." Now that I think about it, it would seem that the
"preemptive" portion of the thing would prevent any single thread from
stalling the whole machine. I wonder how the heck it does that.
 

Lloyd Dupont

If you want a modern language, with both garbage collection and control over
it, go here:
http://www.digitalmars.com/d

Granted, it's far from as complete as the .NET SDK, but you'll see great
potential in this language.

I mean, if you're the kind who prefers to sow rather than reap ;-)
 

Richard Grimes

Brian said:
Can you expand on this a little more? I'm assuming when you say
suspend you're speaking about the GC suspending application threads,
right? How is the behavior different when the application is a
service as opposed to GUI and why would we care?

By definition, GUI code spends the majority of its time waiting for user
input (think of the news client you are using). (Yes, there are GUI
applications that take realtime information - I have written some myself
for data acquisition - but they are a small proportion of all GUI
applications.) That means there is idle time that can be utilised for
background tasks. Services are designed to run all the time. There
should be no down time for a service. If there is any downtime, then the
service becomes a non-service <g>.

Richard
 

Brian Gideon

Thanks, Richard. I see where you're coming from now. I've written a
few services in .NET before and they worked out very well. Though, in my
case the services didn't have strict realtime requirements.

Brian
 

Rob Perkins

Richard said:
By definition, GUI code spends the majority of its time waiting for user
input (think of the news client you are using). (Yes, there are GUI
applications that take realtime information - I have written some myself
for data acquisition - but they are a small proportion of all GUI
applications.) That means there is idle time that can be utilised for
background tasks. Services are designed to run all the time. There
should be no down time for a service. If there is any downtime, then the
service becomes a non-service <g>.

Services blocked for I/O certainly don't "run all the time", that is,
consume processor time in some kind of spinlock, or even process data.
They behave like GUI apps, sort of, waiting for input. Every
network-based application (web server, FTP server, whatever) has this
characteristic, right?

Did you mean something else I didn't pick up on?

Rob
 

Brian Gideon

Rob,

Yeah, that was my first thought as well. But, I took his response as
being relevant to services that have a realtime data acquisition
element to them. Perhaps a service that is collecting data from a
hardware device of some kind.

I don't think he was suggesting that a network based application or
some other client-server application should not be a service.

Brian
 

Mike Hofer

Richard Grimes wrote:
Services are designed to run all the time. There
should be no down time for a service. If there is any downtime, then the
service becomes a non-service <g>.

Hi Richard.

Can you clarify that a bit for me? I'm about to write a Windows service
that basically sleeps most of the time, and wakes up when files are
dropped via FTP into a folder. It will then process those files
(importing their data into a database).
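
Something like this minimal sketch is what I have in mind (the folder path
and names are just placeholders I made up):

// Sketch of a service that sleeps until a file shows up in a drop folder.
// Caveat: Created can fire while FTP is still writing the file, so a real
// import would retry until the file can be opened exclusively.
using System;
using System.IO;
using System.ServiceProcess;

public class FtpDropService : ServiceBase
{
    private FileSystemWatcher watcher;

    protected override void OnStart(string[] args)
    {
        watcher = new FileSystemWatcher(@"C:\FtpDrop");   // placeholder path
        watcher.Created += new FileSystemEventHandler(OnFileDropped);
        watcher.EnableRaisingEvents = true;
    }

    private void OnFileDropped(object sender, FileSystemEventArgs e)
    {
        // import e.FullPath into the database here
    }

    protected override void OnStop()
    {
        watcher.EnableRaisingEvents = false;
        watcher.Dispose();
    }

    public static void Main()
    {
        ServiceBase.Run(new FtpDropService());
    }
}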

Based on the text above, it sounds like this isn't something a service
should do. But I was led to believe that this was *precisely* the type
of thing that a service should do.

Did you mean something specific by "downtime"?

Thanks!
 
