simple math problem?

Guest

I think your math is a little funny.

1000 variables = 15 KB
1000 x 15 KB = 15 MB
100 x 15 MB = 1.5 GB

1000 x 1000 x 100 = 100,000,000 variables.

69,000 variables = 69 x 1000 variables, which is about 1 MB - not 1 GB. There's a fairly significant difference there.
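For reference, here is the raw arithmetic both posts are juggling, as a quick sketch (the 16-bytes-per-Decimal figure comes from the docs Param cites; the class and variable names are mine):

using System;

class MemoryMath
{
    static void Main()
    {
        const long bytesPerDecimal = 16;               // the figure from the docs quoted below
        const long varsPerObject = 1000;
        const long bytesPerObject = bytesPerDecimal * varsPerObject;
        Console.WriteLine(bytesPerObject);             // 16000 bytes, roughly 15.6 KB per object

        const long oneGB = 1024L * 1024 * 1024;
        Console.WriteLine(oneGB / bytesPerObject);     // 67108: objects of this size that fit in 1 GB

        Console.WriteLine(69 * bytesPerObject / 1024); // 1078 KB: 69,000 variables is about 1 MB
    }
}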

Param R. said:
Let's leave aside database and IO for a second; let's assume my apps don't use
them. If I have an object with about 1000 Decimal variables, then as per the docs
this will consume 16,000 bytes of memory, or roughly 15 KB. Now, if my server
had 1 GB RAM, that would give me a max possible of about 69,000 objects
living in memory. Of course, that is assuming all memory is reserved for my
objects, which is not the case - the OS, services etc. need RAM. I guess I could
add more RAM if the server could take it.

thanks!
 
Jon Skeet [C# MVP]

Emin Karayel said:
No, I think this is not true, because decimal represents values differently from
floating point values.

No, it represents values differently to IEEE 754 binary floating point
values. It's still floating point.
Floating point numbers are like: b * 2^e where e is the scaling exponent,
and b is a fraction.

No, floating point numbers are like x * b^n, where x is the mantissa, n
is the scaling exponent, and b is the (fixed) base. There's nothing
which says that b has to be 2 for the system to be a floating point
number system.
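To make this concrete, a small sketch (mine, not from the thread - Decimal.GetBits is the real framework API; the interpretation in the comments follows its documentation):

using System;

class DecimalBits
{
    static void Main()
    {
        // Decimal.GetBits returns four ints: the 96-bit integer mantissa in
        // the first three, and the sign plus the base-10 scaling exponent
        // (bits 16-23 of the fourth).
        int[] bits = decimal.GetBits(0.1m);
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine("mantissa = " + bits[0] + ", scale = " + scale);
        // Prints "mantissa = 1, scale = 1", i.e. 0.1 is stored exactly
        // as 1 * 10^-1: floating point, just with base 10 instead of 2.
    }
}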

Decimal is different: it uses a representation like b * 10^e (where e is
again a scaling factor). In this system it is trivial to represent 1/10 or
0.21, etc.

Sure - but not 1/3, for instance. That doesn't affect the fact that
Arnaud said floating point arithmetic shouldn't be used, and that
decimal is a floating point type.
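
A two-line sketch of the 1/3 point (my example, not from the thread):

using System;

class ThirdDemo
{
    static void Main()
    {
        decimal third = 1m / 3m;
        Console.WriteLine(third);      // 0.3333333333333333333333333333 - as close as decimal gets
        Console.WriteLine(third * 3m); // 0.9999999999999999999999999999, not 1
    }
}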
Anyway, I'd like to propose another solution (if memory is a problem): just
use Integer (and multiply all numbers by 100), if that is possible. If you
represent money, no problems can appear, because all you ever do is add, and
multiply by percentages. Just think of it as using another currency or unit
(like cents :-D).

If you don't mind everything being limited to a given precision, that's
fine. It still won't give you "full" precision though - imagine a
single penny multiplied by 0.25 (25%). Probably not a problem, but it
needs to be understood.
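A minimal sketch of the penny-times-25% case Jon mentions (assuming the scaled-integer scheme Emin proposes; the names are mine):

using System;

class PennyDemo
{
    static void Main()
    {
        long cents = 1;                                // one penny, stored as a scaled integer
        long quarter = (long)Math.Round(cents * 0.25); // 25% of one penny
        Console.WriteLine(quarter);                    // 0 - the fractional quarter-penny is gone
    }
}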
 
Guest

Mike Abraham said:
Perhaps I'm missing something, but if all your singles represent money, and
you don't care about fractional pennies, you could solve the precision,
performance, and space problems by simply storing your numbers as integer
numbers of pennies. The only complication is that you would have to divide
by 100 for output purposes.

Mike A.

This is actually the preferred way for most large systems (it is how Commerce
Server was built), for two reasons:

1. Money is NOT fractional; it is integral by nature. We may represent
money as a decimal (dollars and cents), but it is not a decimal. In reality,
US currency is only cents, not dollars and cents. (Yes, I know the gas station
charges $2.139/gallon - this is a RATE, not a charged amount! You will end
up paying $2.14 if you pump a perfect gallon.)

2. When tallying up a receipt, all of the line totals printed on the receipt
must add up to the grand total. Getting this right is a widespread problem when
using decimal/float data types to represent money. Even if you use doubles instead
of singles, you can still have this problem, since you will often have
calculations at the line level (e.g. save 35% when you buy 12). If you use
doubles, the line calculations will look good, but the grand total may be off
because of a large number of line items. All those fractions only have to
add up to 1 cent to ruin it.

So what to do?

We must make a separation between data and display.

Your money data type should be a long, representing values as cents (or the
equivalent for other countries). When displaying values, use a double to
divide by 100. [Note: to use this for multiple currencies, use a data type to
determine the number of decimal places - a good reason to be object oriented.]

When you are performing line-level calculations, switch everything to double
(e.g. discounts, tax, rates, APRs, etc.). Don't round off intermediate values
here - you want the line total to be as accurate as possible. BUT save the
line total as a long.

Now, when you manually add up all the line totals, the sum will always be the
same as the displayed grand total.
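
A minimal sketch of this scheme (the prices, discount, and names are made up for illustration):

using System;

class ReceiptDemo
{
    static void Main()
    {
        long unitPriceCents = 299;  // $2.99, stored as integral cents
        int quantity = 12;
        double discount = 0.35;     // "save 35% when you buy 12"

        // Line-level math in double; don't round the intermediate values...
        double raw = unitPriceCents * quantity * (1.0 - discount);

        // ...then round once and store the line total as a long.
        long lineTotalCents = (long)Math.Round(raw);

        // Summing longs guarantees the printed lines add up to the grand total.
        long grandTotalCents = lineTotalCents; // + the other line totals
        Console.WriteLine("{0:F2}", grandTotalCents / 100.0); // divide by 100 for display only
    }
}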

I realize this is complicated, and it may be too late for existing systems,
but it is the best way. At virtually every organization I've been at,
off-by-one-penny errors have been a nightmare. The correct solution will
always pay for itself in the long run.

Good luck to us all,
Mitch Marcus
 
Guest

When chipmakers design CPUs, they design them in such a way that the
registers are most comfortable with data of a certain size. For example,
most of the CPU chips that run the Windows operating system have registers
that handle 4-byte data very efficiently. As a result, Integer data is a good
choice in many situations, provided that the data falls within the range of
an integer variable. In fact, in some instances, Integer data is processed
faster than data that uses fewer bytes, simply because the smaller
data types don't fit the "natural" size of the CPU's registers.

Let's reconsider the decision about whether to use a Single or Double data
type for very large numbers. The Single data type has fewer bytes, and if we
don't need the extra range that a Double offers, it would seem that the
Single data type should be a good choice. After all, it's only 4 bytes, just
like an integer, and should fit naturally within the CPU registers, right?

Well, there's a little glitch in this thinking. The Single and Double data
types are designed to represent floating-point numbers, and floating-point
numbers are not processed in the same way that integer numbers are.

Processing floating-point numbers is inherently slower than processing
integer numbers. In fact, the processing is so much slower that chip
manufacturers have recently begun to build small floating-point processors
(FPPs) into CPU chips. (Prior to being part of the CPU itself, the FPP was a
separate chip, like the 8087 by Intel.) The registers inside the FPPs are
designed for 8-byte data values. Because of this, even though the Single data
type uses fewer bytes than the Double type, it actually runs more slowly in
most applications because it doesn't naturally fit the register size of the
FPP. (This is because code must be executed to add an extra 4 bytes of empty
data to the Single data type, and this operation takes time.) As a result,
the Double data type is often the best choice for programs that crunch a lot
of floating-point numbers.
 
Niki Estner

prog_dotnet said:
...
Processing floating-point numbers is inherently slower than processing
integer numbers. In fact, the processing is so much slower that chip
manufacturers have recently begun to build small floating-point processors
(FPPs) into CPU chips. (Prior to being part of the CPU itself, the FPP was a
separate chip, like the 8087 by Intel.)

Do you mean FPUs? Actually, these have been included in mainstream
processors for quite some time. For x86 processors, this started with the
80486, introduced in 1989.
The registers inside the FPPs are designed for 8-byte data values.

What processors are you talking about?
Intel processors have 80-bit FP registers (i.e. 10 bytes).
Because of this, even though the Single data type uses fewer bytes than the
Double type, it actually runs more slowly in most applications because it
doesn't naturally fit the register size of the FPP. (This is because code
must be executed to add an extra 4 bytes of empty data to the Single data
type, and this operation takes time.)

First of all, there's no "extra code": loading a Single value onto the FPU
stack is exactly one x86 instruction. This instruction internally loads 4
bytes, sets 6 other bytes to 0, and adjusts a few flags. The "load Double"
instruction loads 8 bytes and sets 2 others to 0.
How long this will take depends on many factors, among them paging, caching
and memory alignment. As a rule of thumb, more Singles will fit into the
CPU's cache, so it will generally be faster to load Singles. If neither is
in the cache, loading a Double (64 bits) through a bus that's 32 bits wide
will take longer than loading a Single value (32 bits).
As a result, the Double data type is often the best choice for programs that
crunch a lot of floating-point numbers.

Nope. If size and speed are really important, the only available choice is
to run benchmarks. On today's processors, with pipelining, branch prediction,
multi-level caches... predicting how long something will take is virtually
impossible.

This quick sample:

using System;

class FloatSpeed
{
    static void Main()
    {
        float[] test = new float[10000000];
        for (int t = 0; t < 100; t++)
        {
            long nStart = DateTime.Now.Ticks;
            float f = 0;
            for (int i = 0; i < test.Length; i++)
                f += test[i];                         // sum the array elements
            long nEnd = DateTime.Now.Ticks;
            Console.WriteLine((nEnd - nStart) / 1e4); // ticks are 100 ns, so this prints milliseconds
        }
    }
}

runs considerably faster with Singles than with Doubles (on my PC). There
might of course be other samples with other results.

Niki
 
