Option Strict On


Willy Denoyette [MVP]

| > | Hmm... that may be a limitation of .NET rather than the CLI spec
| > | though. The CLI spec states that the newarr instruction takes a number
| > | of elements which is of type native int - and doesn't specify any
| > | limit.
| >
| > Actually it says "native int or int32", which is rather confusing IMO,
| > and I noticed that it's an int32 even on the 64-bit CLR.
|
| Perhaps we're looking at different specs, or different places? I was
| looking at partition 3 of the ECMA spec, in the definition of newarr.
| It's rather odd.
|

From ECMA-335 3rd Ed. / June 2005

Partition III

4.20 newarr - .....
....
The newarr instruction pushes a reference to a new zero-based,
one-dimensional array whose elements are of type etype, a metadata token (a
typeref, typedef or typespec; see Partition II). numElems (of type native
int or int32) specifies the number of elements in the array. Valid array
indexes are 0 ≤ index < numElems ...
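
As a concrete illustration (my own sketch, not part of the original posts): C#
will happily accept a long as the dimension expression, but on the runtimes
discussed in this thread the element count still has to fit the 32-bit range
implied above, so the allocation fails at run time instead of producing an
array with 2^31 or more elements. Exactly which exception you get depends on
the platform and CLR version.

    using System;

    class NewarrLimitDemo
    {
        static void Main()
        {
            // C# accepts a 64-bit dimension expression; the compiler still
            // emits a single newarr for the one-dimensional array.
            long numElems = (long)int.MaxValue + 1;

            try
            {
                byte[] huge = new byte[numElems];
                Console.WriteLine(huge.LongLength);
            }
            catch (OverflowException)
            {
                // 32-bit CLR: the count doesn't fit a native int at all.
                Console.WriteLine("Element count out of range.");
            }
            catch (OutOfMemoryException)
            {
                // 64-bit CLR: the count converts, but the array is refused.
                Console.WriteLine("Array too large to allocate.");
            }
        }
    }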


| > | It could be that while current implementations have more restrictions,
| > | a future implementation may be able to create larger arrays.
| >
| > Sure, it's a .NET limitation; it's possible that the next version of the
| > CLR supports a larger value, but as far as I know nothing like this has
| > been announced publicly.
|
| Right. My guess is that in 10 years time the limitation might seem
| somewhat severe - although I would have thought that the spec could
| have been expanded at that time.
|
| > | Of course, it could equally be two different people on the spec writing
| > | team who didn't talk to each other quite enough...
| >
| > I don't think so. IMO the limitation is a conscious design decision.
| > Imagine what happens on a system when a single application allocates a
| > single 8GB array (contiguous memory) and starts to access it in a
| > sparse/random order; you'll end up in a world of pain unless you have a
| > ton of physical memory available.
|
| But that situation may well be reasonably common in 10 years.
|
Well I don't believe so, but I could be wrong :-(.
I remember that back in 1995 DEC said that by the end of the century 30% of
all servers and desktops would be equipped with one or more 64-bit processors
and at least 16GB of RAM, running a 64-bit OS. Six years later the world looks
more conservative, with less than 10% market share (estimates) for 64-bit
hardware and an average of 8GB of RAM.


| Put it this way - future expansion is the only reason I can see for
| array lengths being allowed to be longs in C#.
|

Agreed, but I would be happy if they first relaxed the 2GB restriction; that
way we would be able to create arrays of 2^31 * sizeof(long) bytes, or 16GB,
without a need to change the CLR data structures.
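
To make the distinction concrete (again my own sketch, assuming the
2GB-per-object restriction described above): an allocation can fail even when
the element count is comfortably below 2^31, simply because the object's total
size would pass 2GB. Relaxing only the byte-size restriction, as suggested
here, would let this long[] succeed on 64-bit while keeping the existing
32-bit length field.

    using System;

    class TwoGigObjectDemo
    {
        static void Main()
        {
            // 300 million elements is far below 2^31, but at 8 bytes each the
            // array would occupy roughly 2.4GB, beyond the per-object limit.
            const int count = 300000000;

            try
            {
                long[] big = new long[count];
                Console.WriteLine("Allocated {0} elements.", big.Length);
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Refused: single object would exceed ~2GB.");
            }
        }
    }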


Willy.
 

Jon Skeet [C# MVP]

Willy Denoyette said:
| Perhaps we're looking at different specs, or different places? I was
| looking at partition 3 of the ECMA spec, in the definition of newarr.
| It's rather odd.

From ECMA-335 3rd Ed. / June 2005

Partition III

4.20 newarr - .....
...
The newarr instruction pushes a reference to a new zero-based,
one-dimensional array whose elements are of type etype, a metadata token (a
typeref, typedef or typespec; see Partition II). numElems (of type native
int or int32) specifies the number of elements in the array. Valid array
indexes are 0 ≤ index < numElems ...

Ah, interesting - same bit, different version. I'm looking at the 2002
version. That "or" is really confusing - I have no idea what it means.
| But that situation may well be reasonably common in 10 years.
|
Well I don't believe so, but I could be wrong :-(.
I remember that back in 1995 DEC said that by the end of the century 30% of
all servers and desktops would be equipped with one or more 64-bit processors
and at least 16GB of RAM, running a 64-bit OS. Six years later the world looks
more conservative, with less than 10% market share (estimates) for 64-bit
hardware and an average of 8GB of RAM.

Of course DEC was trying to sell Alphas at the time :)

While it's true that we're not in a situation where more than 8GB is
*common*, it's starting to happen every so often, and not only in
massive organisations. Machines with 1 or 2GB are more common for
consumers than they were - certainly for developers, and things do tend
to gradually push upwards.

Of course, there's the ever-tantalising prospect of fast, massive,
cheap static memory - the "1TB on a credit card sized form factor for
$50" promise. I'll believe it when I see it - but if it ever *does*
happen, computing will change drastically...
| Put it this way - future expansion is the only reason I can see for
| array lengths being allowed to be longs in C#.

Agreed, but I would be happy if they first relaxed the 2GB restriction; that
way we would be able to create arrays of 2^31 * sizeof(long) bytes, or 16GB,
without a need to change the CLR data structures.

Right.
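
For reference, the "longs in C#" facility mentioned above is already visible
in the language today, even though no array can yet use the extra range; a
small sketch of my own:

    using System;

    class LongIndexDemo
    {
        static void Main()
        {
            int[] values = new int[1000];

            // C# accepts 64-bit indexes (the compiler narrows them with an
            // overflow check) and Array exposes a 64-bit LongLength.
            long index = 999;
            values[index] = 42;
            Console.WriteLine(values.LongLength); // 1000, as a long
            Console.WriteLine(values[index]);     // 42
        }
    }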
 

Willy Denoyette [MVP]

| > | Perhaps we're looking at different specs, or different places? I was
| > | looking at partition 3 of the ECMA spec, in the definition of newarr.
| > | It's rather odd.
| >
| > From ECMA-335 3rd Ed. / June 2005
| >
| > Partition III
| >
| > 4.20 newarr - .....
| > ...
| > The newarr instruction pushes a reference to a new zero-based,
| > one-dimensional array whose elements are of type etype, a metadata token
| > (a typeref, typedef or typespec; see Partition II). numElems (of type
| > native int or int32) specifies the number of elements in the array. Valid
| > array indexes are 0 ≤ index < numElems ...
|
| Ah, interesting - same bit, different version. I'm looking at the 2002
| version. That "or" is really confusing - I have no idea what it means.
|
Nor do I.

| > | But that situation may well be reasonably common in 10 years.
| > |
| > Well I don't believe so, but I could be wrong :-(.
| > I remember that back in 1995 DEC said that by the end of the century 30%
| > of all servers and desktops would be equipped with one or more 64-bit
| > processors and at least 16GB of RAM, running a 64-bit OS. Six years later
| > the world looks more conservative, with less than 10% market share
| > (estimates) for 64-bit hardware and an average of 8GB of RAM.
|
| Of course DEC was trying to sell Alphas at the time :)
|
Yep, not that we expected to take that 30% with Alpha (our estimate was 6%),
but their forecasts were backed by Gartner's.

| While it's true that we're not in a situation where more than 8GB is
| *common*, it's starting to happen every so often, and not only in
| massive organisations. Machines with 1 or 2GB are more common for
| consumers than they were - certainly for developers, and things do tend
| to gradually push upwards.
|
| Of course, there's the ever-tantalising prospect of fast, massive,
| cheap static memory - the "1TB on a credit card sized form factor for
| $50" promise. I'll believe it when I see it - but if it ever *does*
| happen, computing will change drastically...
|

True, however we must not forget that we are talking about single arrays of
2GB, so you can have several of these monsters in a single AppDomain and
multiple AppDomains per process, and that can become a real issue even on
64-bit if you don't set a limit. One of the major problems we encounter now
(on 64-bit) is the overuse of XML and of self-expanding ArrayLists and
generic Lists in server applications, growing beyond available HW memory,
just because "they are so easy to use, sir". OOM exceptions aren't thrown any
longer, but oh, the performance drops dramatically and developers don't
understand why. So IMO it's good to have some limits; it makes people think,
but I guess it's me getting old ;-).
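
As a small sketch of the growth pattern being complained about (my own
example, not from the post): List<T> replaces its backing array with a larger
one whenever it runs out of room, so an unbounded loop of Add calls quietly
turns into ever larger contiguous allocations; giving the list an explicit
capacity, or enforcing an application-level cap, keeps the allocation a
deliberate decision.

    using System;
    using System.Collections.Generic;

    class ListGrowthDemo
    {
        static void Main()
        {
            // Unbounded growth: the backing array is repeatedly reallocated
            // (roughly doubling) and the old contents copied across.
            List<long> unbounded = new List<long>();
            for (int i = 0; i < 1000000; i++)
            {
                unbounded.Add(i);
            }
            Console.WriteLine("Count={0}, Capacity={1}",
                              unbounded.Count, unbounded.Capacity);

            // Pre-sized: one allocation, no surprise copies, and the size is
            // a conscious choice rather than a side effect of "easy to use".
            List<long> bounded = new List<long>(1000000);
            Console.WriteLine("Capacity up front: {0}", bounded.Capacity);
        }
    }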

Willy.
 
