| > | Perhaps we're looking at different specs, or different places? I was
| > | looking at partition 3 of the ECMA spec, in the definition of newarr.
| > | It's rather odd.
| >
| > From ECMA-335 3rd Ed. / June 2005
| >
| > Partition III
| >
| > 4.20 newarr - .....
| > ...
| > The newarr instruction pushes a reference to a new zero-based,
| > one-dimensional array whose elements are of type etype, a metadata token
| > (a typeref, typedef or typespec; see Partition II). numElems (of type
| > native int or int32) specifies the number of elements in the array. Valid
| > array indexes are 0 <= index < numElems ...
|
| Ah, interesting - same bit, different version. I'm looking at the 2002
| version. That "or" is really confusing - I have no idea what it means.
|
Nor do I.
| > | But that situation may well be reasonably common in 10 years.
| > |
| > Well I don't believe so, but I could be wrong :-(.
| > I remember back in 1995 DEC said that at the end of the century 30% of
| > all the server/desktop machines would be equipped with one or more
| > 64-bit processors with at least 16GB of RAM, running a 64-bit OS. Six
| > years later the world looks more conservative, with less than 10% market
| > share (estimates) for 64-bit HW and an average of 8GB of RAM.
|
| Of course DEC was trying to sell Alphas at the time
|
Yep, not that we expected to take that 30% with Alpha (our estimate was 6%),
but their forecasts were backed by Gartner's.
| While it's true that we're not in a situation where more than 8GB is
| *common*, it's starting to happen every so often, and not only in
| massive organisations. Machines with 1 or 2GB are more common for
| consumers than they were - certainly for developers, and things do tend
| to gradually push upwards.
|
| Of course, there's the ever-tantalising prospect of fast, massive,
| cheap static memory - the "1TB on a credit card sized form factor for
| $50" promise. I'll believe it when I see it - but if it ever *does*
| happen, computing will change drastically...
|
True, however we must not forget that we are talking about single arrays of
2GB, so you can have several of these monsters in a single AD (AppDomain) and
multiple ADs per process, and that can become a real issue even on 64-bit if
you don't set a limit. One of the major problems we encounter now (on 64-bit)
is an overuse of XML and of self-expanding ArrayLists and generic List<T>s in
server applications, growing beyond available HW memory, just because "they
are so easy to use, sir". OOM exceptions aren't thrown any longer, but oh,
the performance drops dramatically and developers don't understand why. So
IMO it's good to have some limits; it makes people think. But I guess it's me
getting old ;-).
Willy.
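
[A minimal sketch of the overshoot Willy describes - mine, not his, and in
Python only because the doubling behaviour is language-independent: a dynamic
array like ArrayList or List<T> typically doubles its backing array when it
fills up, so the allocated capacity can be nearly twice the data actually
stored, and each reallocation briefly holds both the old and the new array
in memory at once. The `capacity_after` helper and the initial capacity of 4
are illustrative assumptions, not the actual CLR implementation.]

```python
def capacity_after(n_appends, initial=4):
    """Capacity of a growth-doubling dynamic array after n_appends appends.

    Assumes the common strategy: start with a small backing array and
    double it whenever it is full. (Illustrative, not the CLR's exact code.)
    """
    cap = initial
    while cap < n_appends:
        cap *= 2  # reallocate: old + new arrays coexist during the copy
    return cap

# Appending 600 million 4-byte ints ends up with a 2^30-slot backing array:
n = 600_000_000
cap = capacity_after(n)
print(cap)                      # 1073741824 slots
print(cap * 4 / 2**30)          # ~4 GiB allocated for ~2.2 GiB of data
```

So well before any single array hits the 2GB limit, the process can already
be paging heavily - which is why the symptom is a performance collapse
rather than an OutOfMemoryException.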