how to calculate image size

Nony Buz

Ok, I am considering buying the Nikon Coolscan 5000 ED. I am trying to
figure out how to calculate the size of a TIFF image based on the
scanning resolution. It looks like at 4000 dpi, it will scan a 35mm neg
( 36mm x 24mm) at 5669 x 3780. So do you simply multiply those two
numbers together and then multiply that product by 3?

5669x3780x3= 62.78 Megs

is that correct?
 
Mac McDougald

Ok, I am considering buying the Nikon Coolscan 5000 ED. I am trying to
figure out how to calculate the size of a TIFF image based on the
scanning resolution. It looks like at 4000 dpi, it will scan a 35mm neg
( 36mm x 24mm) at 5669 x 3780. So do you simply multiply those two
numbers together and then multiply that product by 3?

5669x3780x3= 62.78 Megs

is that correct?

Not sure if that's exactly the formula or not, but Photoshop estimates
61.4, so it must be pretty close at worst :)

Mac
 
Dances With Crows

figure out how to calculate the size of a TIFF image based on the
scanning resolution. It looks like at 4000 dpi, it will scan a 35mm
neg ( 36mm x 24mm) at 5669 x 3780. So do you simply multiply those
two numbers together and then multiply that product by 3?

5669x3780x3= 62.78 Megs

5669 pixels wide * 3780 pixels high * 3 bytes/pixel = 64,286,460 bytes
64,286,460 bytes / 1024 = 62,779.7 KB
62,779.7 KB / 1024 = 61.3 MB

....plus a few hundred bytes for the TIFF directory. That's for an
*uncompressed* TIFF. You can reduce this some if you use LZW
compression, but don't expect miracles from that; the compressed images
will be 30-50M in many cases. Whatever software you use may allow you
to use JPEG compression within the TIFF. This is not a good idea
because JPEG is lossy and lots of software won't handle a JPEGged TIFF
at all. If you want to use JPEG, just save the image as a straight
JPEG. I'd suggest keeping the original scanned image around in an open,
lossless format like PNG or LZW TIFF, just because you might want to
work with the image later on.
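
If you want to script the estimate, here is a minimal sketch in Python (the
function name is just my own, and it ignores the small TIFF header/directory
overhead):

    # Uncompressed image data size from pixel dimensions
    def uncompressed_bytes(width_px, height_px, channels=3, bytes_per_channel=1):
        return width_px * height_px * channels * bytes_per_channel

    size = uncompressed_bytes(5669, 3780)   # 64,286,460 bytes
    print(size / 1000 / 1000)               # ~64.3 million bytes
    print(size / 1024 / 1024)               # ~61.3 MB in the 1024x1024 sense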
 
Bart van der Wolf

Nony Buz said:
Ok, I am considering buying the Nikon Coolscan 5000 ED. I am
trying to figure out how to calculate the size of a TIFF image based
on the scanning resolution. It looks like at 4000 dpi, it will scan a
35mm neg ( 36mm x 24mm) at 5669 x 3780. So do you simply
multiply those two numbers together and then multiply that product
by 3?

5669x3780x3= 62.78 Megs

is that correct?

Yes, plus file header overhead, assuming uncompressed RGB 8-bit/channel
data. A program like VueScan also allows you to save the IR channel data.
16-bit/channel data will produce a file roughly twice that size, and it
will compress less well (or may even increase the file size).
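
To put rough numbers on that (a quick sketch; whether the IR data is stored
as a fourth channel in the same file is my assumption):

    pixels = 5669 * 3780            # 21,428,820 pixels
    print(pixels * 3 * 1 / 2**20)   # 8-bit RGB:        ~61.3 MB (binary)
    print(pixels * 3 * 2 / 2**20)   # 16-bit RGB:      ~122.6 MB
    print(pixels * 4 * 2 / 2**20)   # 16-bit RGB + IR: ~163.5 MB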

Bart
 
Wayne Fulton

Not sure if that's exactly the formula or not, but Photoshop estimates
61.4, so it must be pretty close at worst :)


The confusion is just that there are 1000x1000 = 1,000,000 bytes in a
million bytes, and 1024x1024 = 1,048,576 bytes in a megabyte... about 5%
more bytes in a megabyte, so about 5% fewer megabytes. Just divide the
millions-of-bytes figure by 1.049 to get MB.

The 62.78 number only used one 1024 divisor, instead of two, so it would
have been exactly correct if it had said KB, to be 62,780 KB.
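
To spell that out with the numbers from this thread (just a quick check):

    size = 5669 * 3780 * 3        # 64,286,460 bytes
    print(size / 1000 / 1000)     # 64.3     millions of bytes
    print(size / 1024)            # 62,779.7 KB -- where the 62.78 figure came from
    print(size / 1024 / 1024)     # 61.3     MB in the 1024x1024 sense
    print(size / 1e6 / 1.049)     # ~61.3    -- the divide-by-1.049 shortcut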

I feel a good rant coming on <g>

I don't know why we all still must use this powers-of-2 megabyte concept for
file size or for size in memory. There is nothing related to powers of 2
about file sizes, or about the product of 5669x3780 pixels. Powers of 2
were significant back when a 1K memory chip cost $1000, significant because
it necessarily had 1024 bytes in it. Every byte counted big time then, at
about one dollar each, in 1970 dollars. Memory chips must in fact be built
in capacities that are powers of 2, because each added address line doubles
the previous memory total. But that is only about the chip itself, not about
what we store in it. We can store 5 bytes in it, for example, and 5 is not a
power of 2.

Other than the memory chip itself, it is really not useful to continue this
anymore - instead it is outright inconvenient to have to do this silly
calculation to convert the actual real file size or real memory size from
millions to megabytes, just so we can say MB in the conventional sense,
just so that we are no longer exactly sure what it means. <g>

The prefix mega does literally mean million in the dictionary, and in all
other uses. It is only memory chips that change it to 1024x1024.

Digital cameras mean millions when they say megapixels.

Hard disks mean millions when they say megabytes.

We even hear some comments (which don't quite get it) criticizing hard disk
specs for using decimal millions instead of the powers-of-2 megabytes that
memory chips use (all the disks do this, but nevertheless the claim is that
all of them are supposedly wrong). This is claimed to be false marketing hype
just to inflate the "real" number by 5% so it "sounds better". That notion
seems dumb and funny to me, because the "real" count is in decimal units of 1
(file sizes too, same thing).

Million is what the word mega means, and we humans count in decimal, and
that is the actual correct size of the disk or file or image. We wouldn't
even realize the problem existed if it were not that our software divides
it by 1024 or 1024x1024 for the one purpose of showing it to us. <g>
This overt extra step is unnecessary, inconvenient and confusing.

We ought to do disk file sizes and image memory sizes as decimal, and
probably would if Microsoft didn't keep promoting K units for files,
and photo editors didn't keep promoting MB for images.

The Windows Explorer shows file size in K (units of 1024 bytes) but the DOS
Prompt DIR shows the size of the SAME file in units of 1 byte (decimal),
which of course is the exact size it is. It is silly that we continue
converting to 1,048,576 byte megabytes for file size or image size.

But unfortunately, that is the bothersome convention we chose, back when
only a few programmers knew about it. Today it is very mainstream however,
and it seems time to fix it.
 
Wayne Fulton

My rant should have also mentioned that the IEC international standards
organization has defined new prefixes for the powers-of-2 units:

2^10 kibi Ki kilobinary: (2^10)^1 kilo: (10^3)^1
2^20 mebi Mi megabinary: (2^10)^2 mega: (10^3)^2
2^30 gibi Gi gigabinary: (2^10)^3 giga: (10^3)^3
2^40 tebi Ti terabinary: (2^10)^4 tera: (10^3)^4

We should use those terms, if that is what we mean.
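
A small sketch of the difference in code (the helper names here are just mine):

    # Format a byte count with decimal (SI) and binary (IEC) prefixes.
    def si(n):
        for unit in ("bytes", "kB", "MB", "GB", "TB"):
            if n < 1000 or unit == "TB":
                return f"{n:.1f} {unit}"
            n /= 1000

    def iec(n):
        for unit in ("bytes", "KiB", "MiB", "GiB", "TiB"):
            if n < 1024 or unit == "TiB":
                return f"{n:.1f} {unit}"
            n /= 1024

    print(si(64286460))    # 64.3 MB
    print(iec(64286460))   # 61.3 MiB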
 
Nony Buz

My rant should have also mentioned that the IEC international standards
organization has defined new prefixes for the powers-of-2 units:

2^10 kibi Ki kilobinary: (2^10)^1 kilo: (10^3)^1
2^20 mebi Mi megabinary: (2^10)^2 mega: (10^3)^2
2^30 gibi Gi gigabinary: (2^10)^3 giga: (10^3)^3
2^40 tebi Ti terabinary: (2^10)^4 tera: (10^3)^4

We should use those terms, if that is what we mean.

Wayne,

First off, thank you for your rant, I found it VERY educational. I find
it sad that I have been a professional programmer for over eight years
and I never knew any of your rant! I will not be making that mistake
again!

So as to make sure I understand the chart above, a Ki would be what
folks are currently calling a Kilobyte, yes? And my file size would be
61.308345794677734375 Mi?

Thanks for the long overdue education!
 
Bart van der Wolf

SNIP
I feel a good rant coming on <g>

Yes, and it will do little to eradicate the confusion ;-(
We can only hope.

SNIP
The prefix mega does literally mean million in the dictionary, and in all
other uses. It is only memory chips that change it to 1024x1024.

Digital cameras mean millions when they say megapixels.

Hard disks mean millions when they say megabytes.

Hard disks are organized and addressed in sectors and clusters that come in
power-of-2 byte sizes (512/1024/2048 etc.), so there is some sense in
considering that. A 1000-byte file will occupy at least 1024 bytes on disk
(the unused tail of the last sector is the slack). So for large files the
difference is almost non-existent, and counting bytes in 1000s is accurate.
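
A quick sketch of that rounding (the 4 KiB cluster size is only an example;
real values depend on the filesystem and disk):

    import math

    # On-disk allocation rounds a file up to whole clusters; the unused tail is slack.
    def allocated_bytes(file_size, cluster_size=4096):
        return math.ceil(file_size / cluster_size) * cluster_size

    print(allocated_bytes(1000))      # 4096      -> 3096 bytes of slack
    print(allocated_bytes(64286460))  # 64286720  -> only 260 bytes of slack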

SNIP
The Windows Explorer shows file size in K (units of 1024 bytes) but the DOS
Prompt DIR shows the size of the SAME file in units of 1 byte (decimal),
which of course is the exact size it is. It is silly that we continue
converting to 1,048,576 byte megabytes for file size or image size.

Again, it has to do with the heritage of sector addressing, but I agree that
if we talk about megapixels, we mean decimal millions (most people do have
more than 8 fingers, so it seems 'handy' to stick to multiples of 10).

SNIP
But unfortunately, that is the bothersome convention we chose, back when
only a few programmers knew about it. Today it is very mainstream however,
and it seems time to fix it.

I'm not hopeful, but I do agree.

Bart
 
David J. Littleboy

Wayne Fulton said:
The confusion is just that there are 1000x1000 = 1,000,000 bytes in a
million bytes, and 1024x1024 = 1,048,576 bytes in a megabyte... about 5%
more bytes in a megabyte, so about 5% fewer megabytes. Just divide the
millions-of-bytes figure by 1.049 to get MB.

The 62.78 number only used one 1024 divisor, instead of two, so it would
have been exactly correct if it had said KB, to be 62,780 KB.

I feel a good rant coming on <g>

Rants. I love rants.
I don't know why we all still must use this powers-of-2 megabyte concept for
file size or for size in memory. There is nothing related to powers of 2
about file sizes, or about the product of 5669x3780 pixels. Powers of 2
were significant back when a 1K memory chip cost $1000, significant because
it necessarily had 1024 bytes in it.

No, your rant is misplaced. Computers work on powers of 2 since the space
they can address ends up being a power of 2. Memory and disks and computery
things come in powers of 2. It makes sense. Always has, always will. File
sizes are things that go in memory and on disks, and should be counted in
powers of two.

If you have to deal with disks and address spaces and computery things,
"mega" is going to mean 2^20.

What doesn't make sense is talking about digital images in terms of file
size. Digital images should be counted in pixels, and one should never
ever talk about the file size as being a measure of a digital image size.

If you have to deal with real-world phenomena that come in sizes we
measure in intuitive units, "mega" should mean 10^6.

Talking about images in terms of bytes is insane. Bytes have _nothing_
whatsoever to do with the image. An image consists of pixels. It doesn't
matter how those pixels are represented and stored as long as the
representation has enough bits for the _information_ contained in the image,
and the storage succeeds in storing those bits. The vast majority of scans
are so soft and noisy that you can store the image as either a 16-bit image,
an 8-bit image, or a 1:10 compressed 8-bit jpeg with absolutely no loss in
_information_. The file size is irrelevant. The pixel count isn't.

The prefix mega does literally mean million in the dictionary, and in all
other uses. It is only memory chips that change it to 1024x1024.

Yes. Well, address spaces, anything computery.

Digital cameras mean millions when they say megapixels.

Yes.

Hard disks mean millions when they say megabytes.

Maybe. Usually unformatted megabytes are smaller than even the 10^6 flavor
of megabytes.

But unfortunately, that is the bothersome convention we chose, back when
only a few programmers knew about it. Today it is very mainstream however,
and it seems time to fix it.

There's nothing to fix as long as you use the right notation for the thing
you are talking about. Using the wrong terminology is the problem.

David J. Littleboy
Tokyo, Japan
 
Wayne Fulton

Wayne,

First off, thank you for your rant, I found it VERY educational. I find
it sad that I have been a professional programmer for over eight years
and I never knew any of your rant! I will not be making that mistake
again!

So as to make sure I understand the chart above, a Ki would be what
folks are currently calling a Kilobyte, yes? And my file size would be
61.308345794677734375 Mi?

Thanks for the long overdue education!


I certainly wasn't fussing at you Nony, I was just ranting, in a rather futile
way. It was about what ought to be, instead of about what is. The fact and
problem is that everyone else thinks it is correct as is, so if you change
now, everyone else will think you are wrong. <g> I was just lamenting that
sad state of affairs.

Yes, the new Ki term is meant to replace the K term, in those uses that are
appropriate for the powers-of-2 unit (since kilo really means 1000 in all
other uses). I don't think anyone is using it yet. There can be no progress
in correction until the powers that be, Microsoft, Adobe, and such, also adopt
it. Hopefully, maybe 10 years??
See http://physics.nist.gov/cuu/Units/binary.html for more.
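
And yes, your arithmetic checks out (a quick check with the byte count from
earlier in the thread):

    print(5669 * 3780 * 3 / 2**20)   # 61.308345794677734 -> about 61.31 MiB (mebibytes)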
 
Wayne Fulton

No, your rant is misplaced. Computers work on powers of 2 since the space
they can address ends up being a power of 2. Memory and disks and computery
things come in powers of 2. It makes sense. Always has, always will. File
sizes are things that go in memory and on disks, and should be counted in
powers of two.

See what I mean? <g> This did have some significance 25 or 30 years ago,
when sizes were tiny and making things fit in the chip was difficult. As a
programmer I remember once working a week to make a 265-byte loader fit into
a 256-byte PROM (and still do what it needed to do). That was the maximum
PROM then, but today we'd just use an 8KB PROM and forget it. <g>

And similarly today, when our PC has perhaps 128MB to 1GB of memory, I really
feel absolutely no need to divide my 20 million byte file size by 1024x1024
to help me understand how it can be fitted in there. It doesn't help at all.
Didn't help when we only had 2MB of memory either. Same thing with a file on
a 120 GB disk drive. Even if I divide the real size to see a new fake number
that has no meaning, the fact is that it is still a 20 million byte file. I'm
not saying there isn't a 1024 number down deep in the chips, but we don't need
to see it anymore, users don't even need to know about it, and certainly we
are not required to count that way. This is just complexity and confusion
now, only for the sake of complexity and confusion, and totally unnecessary.

And it will change of course, but it will probably still be years away.
 
David J. Littleboy

Wayne Fulton said:
See what I mean? <g> This did have some significance 25 or 30 years ago,
when sizes were tiny and making things fit in the chip was difficult. As a
programmer I remember once working a week to make a 265-byte loader fit into
a 256-byte PROM (and still do what it needed to do). That was the maximum
PROM then, but today we'd just use an 8KB PROM and forget it. <g>

But today, you'd be told to do it in C++, and it still would be a pain to
fit the required functionality in the available space.
And similarly today, when our PC has perhaps 128MB to 1GB of memory, I really
feel absolutely no need to divide my 20 million byte file size by 1024x1024
to help me understand how it can be fitted in there. It doesn't help at
all.

Sure it does: if you don't want your disk accesses going all over the place
and trashing your access time, someone had better fit things into sensibly
sized chunks.

But the only time users have to deal with powers-of-2 Mbytes is when you
don't have quite enough space for something.

The vast majority of the time it's a non-issue, and the only time it's an
issue is when there's a need.
Didn't help when we only had 2MB of memory either. Same thing with a file on
a 120 GB disk drive. Even if I divide the real size to see a new fake number
that has no meaning, the fact is that it is still a 20 million byte file. I'm
not saying there isn't a 1024 number down deep in the chips, but we don't need
to see it anymore, users don't even need to know about it, and certainly we
are not required to count that way. This is just complexity and confusion
now, only for the sake of complexity and confusion, and totally unnecessary.

And it will change of course, but it will probably still be years away.

It doesn't hurt anything, as long as you don't count real-world parameters
in senseless sizes.

You'd never even have noticed that there was a problem if people didn't
insist on talking about images in bytes. That should only be a question when
you need to pack files onto a limited CD-R or whatever.

The problem is using the wrong units for the wrong information.

David J. Littleboy
Tokyo, Japan
 
Wayne Fulton

Again, it has to do with the heritage of sector addressing, but I agree that
if we talk about megapixels, we mean decimal millions (most people do have
more than 8 fingers, so it seems 'handy' to stick to multiples of 10).


I'm not hopeful, but I do agree.


Right, hard disk sectors do exist of course, and must be used in integer
multiples as you said. Actually, they are used in even larger clusters of
sectors by the operating system, which also must be an integer allocation,
maybe 8 to 64 sectors per cluster. This actual space usage is no big deal
today, we can add another 120GB disk for $100.

But regardless, the disk FAT table and the operating system and the API and
the programmers all see the file size in bytes, exactly 1037 bytes if that is
the actual size of the file data. The file may necessarily allocate 64
sectors (in some cases), but they all see only that exact 1037 byte number
that reflects the data size in the file. It doesn't tell us KB at all,
it tells us decimal bytes. The programmers never see this file size
redimensioned in KB until they choose to write code to compute it by dividing
by 1024. They could also compute the number of sectors used, which at least
does have actual meaning, but normally that is pointless to do too; we
simply don't care today.

The data size remains 1037 bytes. That 1037 number is real as relating to the
data therein. The programmer can recompute it to show us the 1.013KB
equivalent number, but this is rather imaginary, or at least it is foreign to
users. It serves no purpose. In the old days, the 1.013KB could tell us the
data would not fit into a 1K memory chip (by 13 bytes), but this concern has
been quite pointless for very many years now. Memory is very cheap too.
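
A quick check of those figures (the 64-sectors-per-cluster case is just the
example above):

    import math

    data_size = 1037       # exact byte count the FAT entry, API, and DIR all report
    cluster   = 64 * 512   # 64 sectors of 512 bytes = 32,768 bytes
    print(data_size / 1024)                           # 1.0127 -> the "1.013 KB" figure
    print(math.ceil(data_size / cluster) * cluster)   # 32768 bytes actually allocated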

So why we still want to do this silly division is what I question. <g> The
only reason is to show the number to us that way. It is not to understand
anything better; we understand it less well. The only reason is to match our
outdated standards, which we falsely imagine we need to see, even if we don't
quite get it. I hope we are seeing the faintest beginnings of change now.
 
