Willy Denoyette [MVP]
| > | Hmm... that may be a limitation of .NET rather than the CLI spec
| > | though. The CLI spec states that the newarr instruction takes a number
| > | of elements which is of type native int - and doesn't specify any
| > | limit.
| >
| > Actually it says "native int or int32", which is rather confusing IMO,
| > and I noticed that it's an int32 even on the 64-bit CLR.
|
| Perhaps we're looking at different specs, or different places? I was
| looking at partition 3 of the ECMA spec, in the definition of newarr.
| It's rather odd.
|
From ECMA-335 3rd Ed. / June 2005
Partition III
4.20 newarr - .....
....
The newarr instruction pushes a reference to a new zero-based,
one-dimensional array whose elements are of type etype, a metadata token (a
typeref, typedef or typespec; see Partition II). numElems (of type native
int or int32) specifies the number of elements in the array. Valid array
indexes are 0 ≤ index < numElems ...
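
As an aside, here's a minimal C# sketch (my own, not from the spec) of what
the current CLR actually does when you push numElems towards that boundary.
The exception type and the ~2GB threshold are just what today's
implementations happen to do, nothing the spec mandates:

using System;

class NewarrProbe
{
    static void Main()
    {
        try
        {
            // int.MaxValue byte elements is just under 2GB of data, plus
            // object overhead - enough to hit the current per-object limit.
            byte[] huge = new byte[int.MaxValue];
            Console.WriteLine("Allocated {0} elements", huge.LongLength);
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Refused: per-object size limit reached");
        }
    }
}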
| > | It could be that while current implementations have more restrictions,
| > | a future implementation may be able to create larger arrays.
| >
| > Sure, it's a .NET limitation; it's possible that the next version of
| > the CLR supports a larger value, but as far as I know nothing like this
| > has been announced publicly.
|
| Right. My guess is that in 10 years time the limitation might seem
| somewhat severe - although I would have thought that the spec could
| have been expanded at that time.
|
| > | Of course, it could equally be two different people on the spec
| > | writing team who didn't talk to each other quite enough...
| >
| > I don't think so. IMO the limitation is a conscious design decision.
| > Imagine what happens on a system when a single application allocates a
| > single 8GB array (contiguous memory) and starts to access it in a
| > sparse/random order: you'll end up in a world of pain unless you have a
| > ton of physical memory available.
|
| But that situation may well be reasonably common in 10 years.
|
Well I don't believe so, but I could be wrong :-(.
I remember back in 1995 DEC said that by the end of the century 30% of all
servers/desktops would be equipped with one or more 64-bit processors and at
least 16GB of RAM, running a 64-bit OS. Six years later the world looks more
conservative, with less than 10% market share (estimates) for 64-bit HW and
an average of 8GB of RAM.
| Put it this way - future expansion is the only reason I can see for
| array lengths being allowed to be longs in C#.
|
Agreed, but I would be happy if they first relaxed the 2GB restriction; that
way we would be able to create arrays of 2^31 longs (2^31 * sizeof(long) =
16GB) without any need to change the CLR data structures.
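
To make the arithmetic concrete, a hypothetical snippet of what relaxing only
the byte limit would allow. Today's CLR still refuses this allocation even on
64-bit, because the total size exceeds the per-object limit, although the
element count fits comfortably in an int:

using System;

class SixteenGigSketch
{
    static void Main()
    {
        // Hypothetical: int.MaxValue longs is just under 2^31 elements,
        // i.e. roughly 2^31 * sizeof(long) = 16GB of data, yet Length and
        // the array layout could stay exactly as they are today.
        long[] big = new long[int.MaxValue];
        Console.WriteLine(big.Length);
    }
}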
Willy.