Why does 28.08 show up as 28.080000000000002 in DataGridView

Daniel Manes

I'm baffled. I have a column in a SQL Server Express database called
"Longitude," which is a float. When I view the table in a DataGridView,
some of the numbers, which only have two decimal places in the database,
show up with *15* decimal places and are ever so slightly off (in the
example in the subject line, by about 2E-15).

I'm not doing any operations on this column. It's just running a stored
procedure which performs a pretty basic SELECT on the table. If I run
the stored procedure in Management Studio Express, all numbers show up
fine (just two decimal places).

What's going on here?

Thanks,

-Dan
 
AlterEgo

Daniel,

Float is an approximate data type. Look up approximate numeric data in BOL.

-- Bill
 
Aaron Bertrand [SQL Server MVP]

FLOAT is an approximate data type. If you want precision, then use an
appropriate DECIMAL instead of FLOAT.
 
Jon Skeet [C# MVP]

Daniel Manes said:
I'm baffled. I have a column in a SQL Server Express database called
"Longitude," which is a float. When I view the table in a DataGridView,
some of the numbers, which only have two decimal places in the database,
show up with *15* decimal places and are ever so slightly off (in the
example in the subject line, by about 2E-15).

I'm not doing any operations on this column. It's just running a stored
procedure which performs a pretty basic SELECT on the table. If I run
the stored procedure in Management Studio Express, all numbers show up
fine (just two decimal places).

What's going on here?

See http://www.pobox.com/~skeet/csharp/floatingpoint.html

For reference, the closest .NET double to 28.08 is exactly
28.0799999999999982946974341757595539093017578125
 
Jon Skeet [C# MVP]

Aaron Bertrand said:
FLOAT is an approximate data type. If you want precision, then use an
appropriate DECIMAL instead of FLOAT.

Decimal is as "approximate" as float, in that neither can represent
every possible rational number exactly. They just have different bases
- decimal will represent numbers like 0.1234 exactly, but will be
inaccurate with 1/3 in the same way that float is.

Both are floating point types - float is a floating *binary* point
type, and decimal is a floating *decimal* point type.
 
Jon Skeet [C# MVP]

Jon Skeet said:
Decimal is as "approximate" as float, in that neither can represent
every possible rational number exactly. They just have different bases
- decimal will represent numbers like 0.1234 exactly, but will be
inaccurate with 1/3 in the same way that float is.

Both are floating point types - float is a floating *binary* point
type, and decimal is a floating *decimal* point type.

Apologies - some clarification is required here. The above is certainly
true for the C# float/decimal types. After a bit of digging (I don't
have SQL Server help available at the minute) I believe that for any
particular column with a defined precision and scale, a DECIMAL column
in SQL Server is effectively "fixed point" (i.e. the value itself
doesn't specify where the decimal point is; the column does).

That doesn't mean it's "precise" in a way that FLOAT isn't, it just
alters the storage (and therefore the range and precision available).

The above probably isn't terribly clear, but the bottom line is that a
floating point number is a very precise number - it has an exact value
- but not every number can be exactly represented as a floating point
number (given the base/storage size etc). That's true for fixed point
numbers as well, and it's not the "fixedness" that makes DECIMAL more
appropriate for business calculations, but the fact that it uses base
10 instead of base 2.
 
Aaron Bertrand [SQL Server MVP]

What I meant by "not approximate" is that if you put 28.08 in a
decimal(5,2), you are never going to get 28.08000000000000002
 
Jon Skeet [C# MVP]

Aaron Bertrand said:
What I meant by "not approximate" is that if you put 28.08 in a
decimal(5,2), you are never going to get 28.08000000000000002

True. If you ask SQL server to divide 1 by 3, however, you certainly
*won't* get an exact answer in a DECIMAL though (or FLOAT, admittedly).

The reason I'm bothering to make the distinction is that there's a
widespread myth that decimal types are "exact" in a way that floating
binary point types aren't - it's just down to people having a natural
bias to base 10. If you consider numbers in base 3 (or 7, or 11, etc)
instead, DECIMAL is just as bad as FLOAT.
 
Aaron Bertrand [SQL Server MVP]

Jon Skeet said:
True. If you ask SQL server to divide 1 by 3, however, you certainly
*won't* get an exact answer in a DECIMAL though (or FLOAT, admittedly).

The reason I'm bothering to make the distinction is that there's a
widespread myth that decimal types are "exact" in a way that floating
binary point types aren't -

I think that most of us recognize the difference between SET @foo = 0.33 and
SET @foo = 1.0/3 ... in the former, we're choosing to limit the "exact"
nature of the result, and if we choose to store it in a DECIMAL as opposed
to a FLOAT, we know we're going to get 0.33 every time...
 
Jon Skeet [C# MVP]

Aaron Bertrand said:
I think that most of us recognize the difference between SET @foo = 0.33 and
SET @foo = 1.0/3 ... in the former, we're choosing to limit the "exact"
nature of the result, and if we choose to store it in a DECIMAL as opposed
to a FLOAT, we know we're going to get 0.33 every time...

But if you were to express floating binary point numbers in a binary
format (eg 0.01101) then you'd get the exact same result back when you
reformatted it as binary. The nature of the literal formats available
doesn't alter whether or not a data type is essentially approximate.

The "approximate" nature of binary floating point isn't because it's
floating point, and isn't changed by using a decimal format. It's just
the inability to exactly represent all *decimal* numbers, which isn't
the same thing as all numbers.

I personally believe that's *not* well understood - hence people (e.g.
3 different people in this thread) referring to floating binary point
numbers as "approximate" when they're not at all - they're exact
numbers which may only be approximately equal to the number you
originally tried to set them to.

As an example of what I mean, consider integer types. Are they
"approximate"? No - they represent exact numbers. However, if you do
(in C#):

int x = (int) 10.3;

The value of x won't be 10.3, it will be 10. 10 is an exact number, but
the conversion from 10.3 was an approximation. The same is true if you
have a float and do

SET @my_float = 0.33 (SQL)
or
float myFloat = 0.33f; (C#)

my_float represents an exact number, but the conversion from the
literal "0.33" to floating binary point is an approximating one. The
differences between the floating binary point conversion and the
conversion to an integer earlier are:

1) The integer conversion is explicit, highlighting that data may be
lost
2) When reformatted as a decimal value, an integer is always "tidy"
whereas a float often isn't (as evidenced by the subject line of this
thread)

I don't see either of those as reasons to describe floating binary
point numbers as "approximate" when integers aren't described in that
way.

Given a floating point value, I can tell you *exactly* what that number
is as a decimal expansion. In what sense is the value (or the type)
"approximate"? It's important (IMO) not to be blinded by the bias of
the fact that humans tend to represent numbers as decimals.
 
Mike C#

Jon Skeet said:
True. If you ask SQL server to divide 1 by 3, however, you certainly
*won't* get an exact answer in a DECIMAL though (or FLOAT, admittedly).

That's a shortcoming of numbers in general, not of SQL Server's
representation of numeric values. T-SQL defines real and float as
"approximate" data types because using the IEEE standard binary
representation not all numbers in the range of valid values can be stored in
an exact representation. Or, as Aaron pointed out, if you put in "28.08"
you can get back "28.08000000000000002".

http://msdn2.microsoft.com/en-us/library/ms173773.aspx

Decimal and numeric are "exact" data types because they can represent all
the numbers in their range of valid values in an exact representation.
I.e., "28.08" is stored exactly as 28.08.
The reason I'm bothering to make the distinction is that there's a
widespread myth that decimal types are "exact" in a way that floating
binary point types aren't - it's just down to people having a natural
bias to base 10. If you consider numbers in base 3 (or 7, or 11, etc)
instead, DECIMAL is just as bad as FLOAT.

As pointed out, decimal types are "exact" in that they can "exactly"
represent every single number in their range of valid values. IEEE standard
floating point binary types are "approximate" because they cannot.
 
Jon Skeet [C# MVP]

Mike C# said:
That's a shortcoming of numbers in general, not of SQL Server's
representation of numeric values.

No, it's a shortcoming of storing numbers as sequences of digits,
whatever the base. You could represent all rationals as one integer
divided by another integer, and always have the exact value of the
rational (assuming arbitrary length integers) - until you started to
perform operations like square roots, of course.
T-SQL defines real and float as
"approximate" data types because using the IEEE standard binary
representation not all numbers in the range of valid values can be stored in
an exact representation.

The same is true for decimal, however: not all numbers in the range of
valid values can be stored in an exact representation. All *decimal*
values can be.
Or, as Aaron pointed out, if you put in "28.08"
you can get back "28.08000000000000002".

Just as if you had a way of representing base 3 literals and you put in
".1" for a decimal you wouldn't (necessarily, at least) get ".1" back
when you reformatted to base 3 - because information would have been
lost.

I disagree with MSDN as well. It's not the first time, including for
floating point types - I had to battle with people at MSDN for them to
correct the description of the .NET Decimal type, which was originally
documented as a fixed point type, despite it clearly not being so.
Fortunately that's now fixed...
Decimal and numeric are "exact" data types because they can represent all
the numbers in their range of valid values in an exact representation.
I.e., "28.08" is stored exactly as 28.08.

No, they can't, any more than binary floating point types can. They can
only store numbers to a certain *decimal* precision, in the same way
that floating binary point types can only store numbers to a certain
*binary* precision.

As an example, the real (and even rational) number obtained by dividing
1 by 3 is within the range of decimal (as expressed in the MSDN page
for decimal: "valid values are from - 10^38 +1 through 10^38 - 1"), but
*cannot* be stored exactly.
As pointed out, decimal types are "exact" in that they can "exactly"
represent every single number in their range of valid values. IEEE standard
floating point binary types are "approximate" because they cannot.

You're showing a bias to decimal numbers. There are perfectly valid
reasons to have that bias and to have that decimal type, but you should
acknowledge that the bias is there.

1/3 is just as "real" a number as 1/10 - and yet you ignore the fact
that it can't be exactly represented by a decimal, even though it's in
the range. Why should the failure to represent 1/10 accurately make a
data type be regarded as "approximate" when the failure to represent
1/3 doesn't? From a mathematical point of view, there's very little
difference.

From a *business* point of view, there's a large difference in terms of
how useful the types are and for what operations, but that doesn't
affect whether or not a data type should be regarded as fundamentally
"exact" or not, IMO.


Look at it from the raw data point of view: both types essentially
represent a long sequence of digits and a point somewhere within the
sequence. You could choose *any* base - it so happens that the common
ones used are 2 and 10.

Now, from that definition, and disregarding the fact that humans have
10 fingers, what's the difference between a sequence of bits with a
point in and a sequence of decimal digits with a point in that makes
you say that the first is approximate but the second is exact?
 
Mike C#

Jon Skeet said:
The "approximate" nature of binary floating point isn't because it's
floating point, and isn't changed by using a decimal format. It's just
the inability to exactly represent all *decimal* numbers, which isn't
the same thing as all numbers.

That seems like hair-splitting, for real.
I personally believe that's *not* well understood - hence people (e.g.
3 different people in this thread) referring to floating binary point
numbers as "approximate" when they're not at all - they're exact
numbers which may only be approximately equal to the number you
originally tried to set them to.

People in this newsgroup refer to binary floating point numbers as
"approximate" because that's what the ANSI SQL standard calls them.

"The data types NUMERIC, DECIMAL, INTEGER, and SMALLINT are collectively
referred to as exact numeric types. The data types FLOAT, REAL, and DOUBLE
PRECISION are collectively referred to as approximate numeric types. Exact
numeric types and approximate numeric types are collectively referred to as
numeric types. Values of numeric type are referred to as numbers." -- ANSI
SQL-92
 
Mike C#

Jon Skeet said:
No, it's a shortcoming of storing numbers as sequences of digits,
whatever the base. You could represent all rationals as one integer
divided by another integer, and always have the exact value of the
rational (assuming arbitrary length integers) - until you started to
perform operations like square roots, of course.

And what about irrational numbers?
The same is true for decimal, however: not all numbers in the range of
valid values can be stored in an exact representation. All *decimal*
values can be.

The SQL Decimal and Numeric data types are designed to store every single
last number in the range of valid values in an *EXACT* representation.
Just as if you had a way of representing base 3 literals and you put in
".1" for a decimal you wouldn't (necessarily, at least) get ".1" back
when you reformatted to base 3 - because information would have been
lost.

Are we talking about information loss from the simple act of converting from
one base to another, which is what the Decimal and Numeric types address; or
are you talking about information loss from performing operations on data
which goes back to your 1/3 example earlier?
I disagree with MSDN as well. It's not the first time, including for
floating point types - I had to battle with people at MSDN for them to
correct the description of the .NET Decimal type, which was originally
documented as a fixed point type, despite it clearly not being so.
Fortunately that's now fixed...

You need to present your arguments about this to ANSI, since these are the
definitions they put forth.
No, they can't, any more than binary floating point types can. They can
only store numbers to a certain *decimal* precision, in the same way
that floating binary point types can only store numbers to a certain
*binary* precision.

They can store *every single valid value in their defined set of values* in
an exact representation. With Numeric and Decimal types, the precision and
scale that you specify define the valid set of values.
As an example, the real (and even rational) number obtained by dividing
1 by 3 is within the range of decimal (as expressed in the MSDN page
for decimal: "valid values are from - 10^38 +1 through 10^38 - 1"), but
*cannot* be stored exactly.

You might want to re-read that, because you seem to have picked out just
enough of a single sentence to support your argument, but completely ignored
the part that doesn't. Here's the full sentence for you (the emphasis is
mine):

"WHEN MAXIMUM PRECISION IS USED, valid values are from - 10^38 +1 through
10^38 - 1."

That also defines a default scale of "0". If, however, you define a
different precision and/or scale, the valid set of values changes.
You're showing a bias to decimal numbers. There are perfectly valid
reasons to have that bias and to have that decimal type, but you should
acknowledge that the bias is there.

Of course I'm biased to decimal numbers, as are most of the people here who
were born with 10 fingers and 10 toes. Now let's talk about your bias
against irrational numbers, since you seem to be tip-toeing all around them
here.
1/3 is just as "real" a number as 1/10 - and yet you ignore the fact
that it can't be exactly represented by a decimal, even though it's in
the range. Why should the failure to represent 1/10 accurately make a
data type be regarded as "approximate" when the failure to represent
1/3 doesn't? From a mathematical point of view, there's very little
difference.

There is no "fraction" data type in SQL, which you might consider a
shortcoming of the language. There is also no "fraction" data type built
into C#, C++, or most other languages. The reason for that might be a huge
bias or conspiracy against vulgar fractions, or it might be the lack of a
useful vinculum in the original ASCII character set, or it might just be
that most people don't find it all that useful.

Regardless of the reason, 1/10 is not a repeating decimal in decimal (yes,
I'm showing my "bias" for decimal again). It has a finite representation in
decimal. 1/3 is a repeating decimal in decimal. Since its decimal
representation repeats infinitely, it's impossible to store it exactly
anywhere. However, with regards to Decimal and Numeric data types in SQL,
"0.33333" is not a member of the set of valid values for a DECIMAL (10, 2)
or a NUMERIC (10, 3) [for instance].
From a *business* point of view, there's a large difference in terms of
how useful the types are and for what operations, but that doesn't
affect whether or not a data type should be regarded as fundamentally
"exact" or not, IMO.

The definitions of SQL DECIMAL and NUMERIC specifically allow you to define
the precision and scale. By doing so you define the number of total digits
and the number of digits after the decimal point. Any numbers that are in
the set of valid values for the precision and scale that you define will be
stored *exactly*. Any numbers that are not in the set of valid values for
that precision and scale will not be stored exactly. SQL REAL and FLOAT
types do not come with this guarantee.
Look at it from the raw data point of view: both types essentially
represent a long sequence of digits and a point somewhere within the
sequence. You could choose *any* base - it so happens that the common
ones used are 2 and 10.

And there happen to be very good reasons for that...
Now, from that definition, and disregarding the fact that humans have
10 fingers, what's the difference between a sequence of bits with a
point in and a sequence of decimal digits with a point in that makes
you say that the first is approximate but the second is exact?

Quite simply because with NUMERIC and DECIMAL types I can define the
precision and scale, and every valid value in that range will be stored
*EXACTLY*. REAL and FLOAT come with pre-defined ranges, but have no
guarantees concerning the exactness with which your numeric data will be
stored. As we saw, even a number as simple as 28.08 is stored as an
approximation of 28.08 (or 2808/100 if you prefer your fractions). Whereas
with a NUMERIC (10, 2) it would be stored as -- well, "28.08" *EXACTLY*.
 
Aaron Bertrand [SQL Server MVP]

Jon Skeet said:
You're showing a bias to decimal numbers. There are perfectly valid
reasons to have that bias and to have that decimal type, but you should
acknowledge that the bias is there.

Well, yeah, Sherlock.

The OP passed 28.08 into the database. When he asked for the value back, he
got a different answer. He asked how to fix it, so I guess, yeah, our bias
in this case is toward using DECIMAL. Where, regardless of what fantasy
world you're converting to base 3 et al., when you input 28.08 and then ask
for the value back, you're still going to get 28.08. This isn't about math
theory or MSDN or base 3 or 1/3 or some religious war about whether you
think the standards' definition of "approximate" is acceptable to you. It's
about storing a known value (not an expression, like 1/3) with a fixed
number of decimal places, and getting the same number back, every time, in a
deterministic fashion.

A
 
Jon Skeet [C# MVP]

Mike C# said:
That seems like hair-splitting, for real.

I see nothing hair-splitting about recognising that 1/3 is a number.
People in this newsgroup refer to binary floating point numbers as
"approximate" because that's what the ANSI SQL standard calls them.

You may notice that there are multiple newsgroups involved here...
"The data types NUMERIC, DECIMAL, INTEGER, and SMALLINT are collectively
referred to as exact numeric types. The data types FLOAT, REAL, and DOUBLE
PRECISION are collectively referred to as approximate numeric types. Exact
numeric types and approximate numeric types are collectively referred to as
numeric types. Values of numeric type are referred to as numbers." -- ANSI
SQL-92

Just because ANSI says something doesn't mean it's right, IMO.
 
Jon Skeet [C# MVP]

Mike C# said:
And what about irrational numbers?

That's why I said you couldn't cope when square roots are involved.

Basically we don't *have* a good way of exactly representing numbers,
unless you want to keep everything at a symbolic level.
The SQL Decimal and Numeric data types are designed to store every single
last number in the range of valid values in an *EXACT* representation.

1/3 is in the range, but can't be stored exactly. Which part of that
statement do you disagree with?
Are we talking about information loss from the simple act of converting from
one base to another, which is what the Decimal and Numeric types address; or
are you talking about information loss from performing operations on data
which goes back to your 1/3 example earlier?

They're the same thing. Converting 0.1 base 3 (i.e. 1/3) into a decimal
*is* a conversion from one base to another, and the Decimal and Numeric
types *do not* store that number exactly.
You need to present your arguments about this to ANSI, since these are the
definitions they put forth.

I'm happy to disagree with any number of people, if what they're saying
doesn't make sense.
They can store *every single valid value in their defined set of values* in
an exact representation. With Numeric and Decimal types, the precision and
scale that you specify define the valid set of values.

In that case the same can be said for Float. Every Float value is an
exact value - you can write down the *exact* binary string it
represents, and even convert it into an exact decimal representation.
You might want to re-read that, because you seem to have picked out just
enough of a single sentence to support your argument, but completely ignored
the part that doesn't. Here's the full sentence for you (the emphasis is
mine):

"WHEN MAXIMUM PRECISION IS USED, valid values are from - 10^38 +1 through
10^38 - 1."

That also defines a default scale of "0". If, however, you define a
different precision and/or scale, the valid set of values changes.

I don't think that makes any difference - when maximum precision is
used, the range of valid values is given in terms of a minimum and a
maximum, and not every value in that range is exactly representable. If
you're going to treat it as "every value that can be exactly
represented can be exactly represented" then as I say, the same is true
for Float.
Of course I'm biased to decimal numbers, as are most of the people here who
were born with 10 fingers and 10 toes. Now let's talk about your bias
against irrational numbers, since you seem to be tip-toeing all around them
here.

No, I'm recognising the mathematical reality that the existence (and
value) of a number doesn't depend on what base you represent it in.
There is no "fraction" data type in SQL, which you might consider a
shortcoming of the language. There is also no "fraction" data type built
into C#, C++, or most other languages. The reason for that might be a huge
bias or conspiracy against vulgar fractions, or it might be the lack of a
useful vinculum in the original ASCII character set, or who it might just be
that most people just don't find it all that useful.

I wasn't talking about usefulness, I was talking about existence.
Suggesting that 1/3 isn't a number is silly, IMO.
Regardless of the reason, 1/10 is not a repeating decimal in decimal (yes,
I'm showing my "bias" for decimal again). It has a finite representation in
decimal. 1/3 is a repeating decimal in decimal. Since it's decimal
representation repeats infinitely, it's impossible to store it exactly
anywhere. However, with regards to Decimal and Numeric data types in SQL,
"0.33333" is not a member of the set of valid values for a DECIMAL (10, 2)
or a NUMERIC (10, 3) [for instance].

Indeed.
From a *business* point of view, there's a large difference in terms of
how useful the types are and for what operations, but that doesn't
affect whether or not a data type should be regarded as fundamentally
"exact" or not, IMO.

The definitions of SQL DECIMAL and NUMERIC specifically allow you to define
the precision and scale. By doing so you define the number of total digits
and the number of digits after the decimal point. Any numbers that are in
the set of valid values for the precision and scale that you define will be
stored *exactly*. Any numbers that are not in the set of valid values for
that precision and scale will not be stored exactly. SQL REAL and FLOAT
types do not come with this guarantee.

There's no reason why they shouldn't - why would it not be able to
guarantee that the number is essentially reproducible, which is all I
can understand we're talking about? The binary string "0.010101"
represents an exact number and no others.
And there happen to be very good reasons for that...

Of course, which I've never denied.
Quite simply because with NUMERIC and DECIMAL types I can define the
precision and scale, and every valid value in that range will be stored
*EXACTLY*. REAL and FLOAT come with pre-defined ranges, but have no
guarantees concerning the exactness with which your numeric data will be
stored. As we saw, even a number as simple as 28.08 is stored as an
approximation of 28.08 (or 2808/100 if you prefer your fractions). Whereas
with a NUMERIC (10, 2) it would be stored as -- well, "28.08" *EXACTLY*.

Do you regard 28.08 as in the "range" for FLOAT? It seems unfair to
demand that it is while saying that 1/3 isn't in the "RANGE" for
NUMERIC, but that's what you've basically got to do in order to claim
that FLOAT is "approximate" but DECIMAL is "exact".
 
Jon Skeet [C# MVP]

Aaron Bertrand said:
Well, yeah, Sherlock.

It seems to be a fact which is blinding everyone to the existence of
1/3 as a number.
The OP passed 28.08 into the database. When he asked for the value back, he
got a different answer. He asked how to fix it, so I guess, yeah, our bias
in this case is toward using DECIMAL.

And I never said that wasn't a good idea.
Where, regardless of what fantasy world you're converting to base 3
et. al., when you input 28.08 and then ask for the value back, you're
still going to get 28.08.

No, if you input a number in base 3, convert it to base 10, and then
later convert it back to base 3, you will *not* (always) get the same
number back, for exactly the same reason that 28.08 doesn't come back
as 28.08. Information is lost.
This isn't about math theory or MSDN or base 3 or 1/3 or some
religious war about whether you think the standards' definition of
"approximate" is acceptable to you.

That's precisely what this part of the thread is about. I never
quibbled with the solution to the OP's problem, merely the terminology
used.
It's about storing a known value (not an expression, like 1/3)

1/3 is a known value - just think of it as the literal "0.1" in base 3.
with a fixed number of decimal places, and getting the same number
back, every time, in a deterministic fashion.

You get determinism with floating binary point too - just not lossless
conversion from base 10, that's all.
 
