Problem - casting from double to unsigned int


Guest

Hi,

I am working on a project where VC6 code is being ported to VC8 (VC++ .NET 2005).

I have a problem when I cast a double value to unsigned int: I don't get the expected value after the cast (whether explicit or implicit).

The code looks like this:
const double d_a = 100e-9; // 100ns
const double d_b = 20e-9; // 20ns
const unsigned __int32 ui_c = (unsigned __int32) (d_a / d_b);

ui_c comes out as 4, but it should be 5.

It works perfectly in VC6. If I create a new VC8 console project containing just those three lines of code, it also works and I get the expected value.

But in the ported VC8 project, which is a DLL, I do NOT get the expected value.

I would like to know whether any project settings need to be applied to get the expected value.

Please reply to this query as soon as possible.

Thanks in advance,
Vinod
 

Bruno van Dooren

I am working on a project where VC6 code is being ported to VC8 (VC++ .NET 2005).

I have a problem when I cast a double value to unsigned int: I don't get the expected value after the cast (whether explicit or implicit).

The code looks like this:
const double d_a = 100e-9; // 100ns
const double d_b = 20e-9; // 20ns
const unsigned __int32 ui_c = (unsigned __int32) (d_a / d_b);

ui_c comes out as 4, but it should be 5.

It works perfectly in VC6. If I create a new VC8 console project containing just those three lines of code, it also works and I get the expected value.

But in the ported VC8 project, which is a DLL, I do NOT get the expected value.

I would like to know whether any project settings need to be applied to get the expected value.

The fact that it worked in VC6 means that you got lucky.
You really should not expect exact integer results from floating-point operations.

Search Google for
'what every programmer should know about floating point'
and you will find a number of explanations.

To work around this problem you have to work with tolerances / error margins.
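
For example, here is a minimal sketch of that tolerance approach applied to your numbers; the helper name nearlyEqual and the 1e-9 epsilon are arbitrary illustrative choices, not anything the compiler requires:

#include <cmath>
#include <cstdio>

// Compare two doubles within an absolute tolerance instead of relying on
// exact equality. The 1e-9 epsilon is an arbitrary example value.
bool nearlyEqual(double a, double b, double eps = 1e-9)
{
    return std::fabs(a - b) <= eps;
}

int main()
{
    const double d_a = 100e-9; // 100ns
    const double d_b = 20e-9;  // 20ns
    const double q = d_a / d_b;

    std::printf("%.17g\n", q); // enough digits to see that q need not be exactly 5

    if (nearlyEqual(q, 5.0))
        std::printf("q is 5 within tolerance\n");
    return 0;
}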

--

Kind regards,
Bruno van Dooren
 

Guest

You are correct, but it is working fine in the VC8 console project.

I don't know why it is not giving the expected value in the ported VC8 (DLL) project...

Do you have any idea about the optimization settings in the project properties?
I think those might be causing the problem, but I don't know exactly.

-Thanks,
Vinod
 

David Lowndes

Do you have any idea about the optimization settings in the project properties?

Try changing the floating point model switch /fp to whatever works for
you in the console test project.
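
For reference, a rough sketch of how the three /fp values look on the command line (the file name is just a placeholder); in the IDE the setting is under C/C++ -> Code Generation -> Floating Point Model:

cl /c /fp:precise source.cpp   (the VC8 default: predictable, source-ordered FP behaviour)
cl /c /fp:fast source.cpp      (allows more aggressive FP optimization)
cl /c /fp:strict source.cpp    (strictest semantics, e.g. for code that changes the FP environment)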

Dave
 

Guest

I tried the same settings I used in the console test project in the ported VC8 project as well, but I am still not getting the expected value. Is there anything else I need to do?

-Thanks,
Vinod
 

David Lowndes

I tried the same settings I used in the console test project in the ported VC8 project as well, but I am still not getting the expected value. Is there anything else I need to do?

How about using similar settings when building the calling program?

Other than that, I can only reiterate what Bruno said: you should never expect to get exact results from FP operations - they just don't work that way!

Dave
 

chl

Vinod said:
Hi,

I have a problem when I cast a double value to unsigned int: I don't get the expected value after the cast (whether explicit or implicit).

The code looks like this:
const double d_a = 100e-9; // 100ns
const double d_b = 20e-9; // 20ns
const unsigned __int32 ui_c = (unsigned __int32) (d_a / d_b);

ui_c comes out as 4, but it should be 5.

As others have stated, floating point is inherently inexact, so in some scenarios your division yields perhaps 4.9999999999, and the cast will then truncate it to 4.

You have to _round_ the division result before casting. An easy way to do this is to add 0.5 before truncating, i.e. (int) ((a / b) + 0.5).

You cannot solve this using settings, as your code is not correct.
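
For example, a sketch of that fix applied to the code from the original post; note that the +0.5 trick as written is only valid for non-negative quotients:

#include <cstdio>

int main()
{
    const double d_a = 100e-9; // 100ns
    const double d_b = 20e-9;  // 20ns

    // Round to nearest by adding 0.5 before the truncating cast.
    const unsigned __int32 ui_c = (unsigned __int32)((d_a / d_b) + 0.5);

    std::printf("ui_c = %u\n", ui_c); // prints 5
    return 0;
}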
 

Tom Widmer [VC++ MVP]

Vinod said:
You are correct, but it is working fine in the VC8 console project.

'Working fine' for me means it could be giving you either 4 or 5. Either answer is equally fine for that code. In theory, you could get 4 or 5 at random and it would still be working fine.

I don't know why it is not giving the expected value in the ported VC8 (DLL) project...

It is giving the proper value in both projects.

Do you have any idea about the optimization settings in the project properties?
I think those might be causing the problem, but I don't know exactly.

You might be able to make it return one of the two proper values with
different settings, but this is very unreliable, and might not work if
you switch to a different CPU.

Tom
 

Guest

You are absolutely correct. The division result (a double) might be something like 4.9999999, and casting it to unsigned __int32 gives a final value of 4.

What I am really asking is why it gives 4 instead of 5 in the ported VC8 (DLL) project alone, while I get the expected value 5 in the existing VC6 project and the test VC8 console project.

This causes problems in so many places in the ported VC8 (DLL) project that I can't go to each and every place and change the code as you suggested. That is not good practice, right? If there are any property settings that can be changed, that would be good. If there are no property settings or other solutions for this problem, then obviously I will need to go to every place and change the code...

So before I do that, I would like to confirm this with you. Please do answer...

-Thanks,
Vinod
 

Guest

"Working fine" means it is giving the proper or expected result i.e. 5 in the
Test VC8 (console) project.

What I am exactly asking is why it is giving the value as 4 instead of 5 in
the PORTED VC8 (DLL) project alone, while I got the expected value 5 in
existing VC6 project and Test VC8 (console) project.

Please do reply the solution of the problem or suggestion to avoid the
problem.

-Thanks,
Vinod
 

Tom Widmer [VC++ MVP]

Vinod said:
You are absolutely correct. The division result (a double) might be something like 4.9999999, and casting it to unsigned __int32 gives a final value of 4.

What I am really asking is why it gives 4 instead of 5 in the ported VC8 (DLL) project alone, while I get the expected value 5 in the existing VC6 project and the test VC8 console project.

It relates to how much of the division is done at compile time versus at run time, what mode the CPU is set to when the division is performed, and how the optimizer modifies the code. You don't really have explicit control over all of this, and you can change the results of calculations with apparently innocuous changes, e.g.

double d = 10.0;
double e = 5.0;
double f = d / e;

versus

double f = 10.0 / 5.0;
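
As a purely illustrative sketch of that difference, you could compare a forced run-time division against one the compiler may fold at compile time; whether the two prints actually differ depends on your settings and CPU, so treat it as a way to inspect your own build rather than a guaranteed demonstration:

#include <cstdio>

int main()
{
    // volatile keeps the optimizer from folding this division at compile
    // time, so it is performed at run time.
    volatile double d = 100e-9;
    volatile double e = 20e-9;
    const double runtime_q = d / e;

    // Here the compiler is free to evaluate the expression at compile time,
    // which may round differently than the run-time division above.
    const double compiletime_q = 100e-9 / 20e-9;

    std::printf("run time:     %.17g\n", runtime_q);
    std::printf("compile time: %.17g\n", compiletime_q);
    return 0;
}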

The /Op option may give you more consistent results, but certainly not
in all cases.

This causes problems in so many places in the ported VC8 (DLL) project that I can't go to each and every place and change the code as you suggested. That is not good practice, right?

It is certainly good practice to fix broken code.

If there are any property settings that can be changed, that would be good. If there are no property settings or other solutions for this problem, then obviously I will need to go to every place and change the code...

So before I do that, I would like to confirm this with you. Please do answer...

If you do find a setting that seems to work, are you confident that
every double calculation will produce the same result as before? Do you
have some kind of regression test to check this? If not, you are
probably better off changing the code. Regarding settings, you could try
disabling all optimizations and adding /Op to see whether that fixes it,
and then start adding them back in one at a time to see which one causes
the "problem".

Tom
 

Bruno van Dooren

This causes problems in so many places in the ported VC8 (DLL) project that I
It is certainly good practice to fix broken code.

The best solution is to create a function like this:
int myDiv(double a, double b);

that performs the operation you expect, with correct rounding.
Then go through your code and replace all division operations of that kind with your new div function.

Changing compiler settings is not a good solution, because the assumptions in the code are flawed.
There is NO way to guarantee otherwise that the code will produce the result you expect.
Even if it works now for the values you test it on, the problem could come back if your program runs on another CPU.
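
A rough sketch of what such a helper could look like, assuming round-to-nearest is the behaviour the old code relied on; only the name myDiv comes from the post above, the body is illustrative:

#include <cmath>

// Divide and round the quotient to the nearest integer instead of letting
// the cast truncate it. Assumes round-to-nearest is what the old code
// relied on; use floor or ceil instead if a different rule is needed.
int myDiv(double a, double b)
{
    return (int)std::floor(a / b + 0.5);
}

// Example use with the values from the original post:
// const unsigned __int32 ui_c = (unsigned __int32)myDiv(100e-9, 20e-9); // 5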

--

Kind regards,
Bruno van Dooren
 
