Your question doesn't quite make sense. You are correct that a decimal is not
(necessarily) as accurate as a fraction, but a fraction derived from a
decimal will always be only as accurate as the decimal it was derived from.
Let's look at your example:
0.3333 = 1/3
This is incorrect. 0.3333 and 1/3 are two entirely different values: 1/3 is
greater than 0.3333, so there is no way to derive 1/3 from 0.3333. You
*could* derive a fraction from it, but it would not be 1/3. Likewise, you
could not derive 0.3333 from 1/3, as they are not equal; 1/3 cannot be
expressed exactly as a finite decimal.
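To see this concretely, here is a quick check in Python (a sketch I'm adding;
the original arithmetic above is the real argument). The `fractions.Fraction`
type represents values exactly, so it can compare 1/3 against the exact value
3333/10000 that the decimal 0.3333 denotes:

```python
from fractions import Fraction

one_third = Fraction(1, 3)    # exactly 1/3
approx = Fraction("0.3333")   # exactly 3333/10000
print(one_third == approx)    # False: they are different values
print(one_third > approx)     # True: 1/3 is the larger of the two
```

The difference is small (1/30000) but it is not zero, which is the whole point.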
The formula is simple algebra. 1/3 is an expression that indicates 1 divided
by 3. So, let's start out with a fraction that *can* be converted to a
decimal:
0.25 = 1/4
This means that 0.25 is equal to 1 divided by 4. This could be expressed
using variables as:
x = y / z
From algebra, we know that to solve for y, we use:
y = x * z
To solve for z:
z = y / x
So, to derive 1 / 4 from 0.25, you would say:
0.25 = y / z
Now, it is important to note here that any number of fractions can be
derived from a decimal. For example:
0.25 = 1 / 4
0.25 = 2 / 8
0.25 = 3 / 12
So, all we have to do is plug in an arbitrary number into either the 'y' or
'z' variable to derive the other:
0.25 = 2 / z
2 / 0.25 = z
2 / 0.25 = 8
0.25 = 2 / 8
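The same plug-and-solve step is easy to sketch in Python (my own illustration,
not part of the original post). Note that 0.25 happens to be exactly
representable as a binary float, so the division below is exact:

```python
x = 0.25   # the decimal we start from
y = 2      # an arbitrary numerator we picked
z = y / x  # solve for the denominator: z = y / x
print(z)   # 8.0, so 0.25 == 2 / 8
```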
Of course, depending on which value you plug in, you may get a decimal rather
than a whole number for the other. For example, if you pick 3 as the
denominator and solve for the numerator:
0.25 = y / 3
0.25 * 3 = y
0.25 * 3 = 0.75
0.25 = 0.75 / 3
An easy fix is to use the decimal's own digits, shifted up to a whole number,
as the numerator; the denominator then comes out as a whole power of ten:
0.25 = 25 / z
25 / 0.25 = z
25 / 0.25 = 100
0.25 = 25 / 100
If you want to, you can reduce the fraction, but I'm sure you know how to do
that.
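For completeness, here is a small sketch that automates the whole-number
method above and then reduces the result with a greatest-common-divisor step.
The function name `decimal_to_fraction` is my own, and it assumes a
non-negative decimal written as a string:

```python
from math import gcd

def decimal_to_fraction(text):
    # Turn a decimal string like "0.25" into a reduced
    # (numerator, denominator) pair. Assumes a non-negative value.
    whole, _, frac = text.partition(".")
    denominator = 10 ** len(frac)    # one power of ten per decimal digit
    numerator = int(whole + frac)    # e.g. "0" + "25" -> 25
    g = gcd(numerator, denominator)  # reduce to lowest terms
    return numerator // g, denominator // g

print(decimal_to_fraction("0.25"))    # (1, 4)
print(decimal_to_fraction("0.3333"))  # (3333, 10000), already in lowest terms
```

Note that 0.3333 reduces no further, and nothing here turns it into 1/3; the
code only restates the decimal exactly, which is the point of the post.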
In any case, as I said, you're not going to gain any accuracy this way. A
fraction *can* be more accurate than a decimal, but a fraction derived from a
decimal carries exactly the accuracy of the decimal it came from, so the
conversion never improves it.
--
HTH,
Kevin Spencer
Microsoft MVP
Professional Chicken Salad Alchemist
What You Seek Is What You Get.