New hard disk architectures

Yousuf Khan

GSV said:
It would allow an even deeper level of coma than 'Hibernation' I guess
... you could turn the power off or pull the wall plug and still resume
where you left off. If the speed was right (which could be arranged)
then maybe you could use it as some place to store %bloatwaredir% and
get even cold boots going PDQ.

The problem you'd have with such a dynamically updated hibernate file is
that if you keep writing to the flash drive, it will quickly exhaust its
limited allocation of write cycles. Hard disks and RAM have virtually
unlimited write cycles; flash doesn't.

However, in a laptop environment, with a battery backup already
available, I can see them possibly going into save-to-ram (standby)
mode, followed by a save-from-ram-to-flash mode. You can completely turn
off the hard disk when power is lost, and make all updates only to the
flash disk, which would then proceed to update the disk when power is
restored. Very much like a journalled filesystem.
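The endurance concern above can be put in rough numbers. A minimal sketch, assuming a hypothetical flash part rated for 100,000 erase cycles per block and a hibernate image rewritten once a minute (both figures are illustrative assumptions, not specs for any real device):

```python
# Rough lifetime estimate for a flash block holding a constantly
# rewritten hibernate image. All figures are illustrative
# assumptions, not specifications of any real part.

ERASE_CYCLES = 100_000      # assumed endurance per block
WRITES_PER_HOUR = 60        # image rewritten once a minute

hours = ERASE_CYCLES / WRITES_PER_HOUR
days = hours / 24
print(f"{days:.0f} days until the block wears out")  # ~69 days
```

Even with generous assumptions, a naively updated block dies in weeks, which is why wear leveling or a battery-buffered save-to-RAM stage matters.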

Yousuf Khan
 
daytripper

The problem you'd have with such a dynamically updated hibernate file is
that if you keep writing to the flash drive, it will quickly exhaust its
limited allocation of write cycles. Hard disks and RAM have virtually
unlimited write cycles; flash doesn't.

However, in a laptop environment, with a battery backup already
available, I can see them possibly going into save-to-ram (standby)
mode, followed by a save-from-ram-to-flash mode. You can completely turn
off the hard disk when power is lost, and make all updates only to the
flash disk, which would then proceed to update the disk when power is
restored. Very much like a journalled filesystem.

Yousuf Khan

A solution looking for a problem...

With all of that in place, there's really no gain on the play: a laptop with a
functional battery shouldn't scram to disk just because it lost its mains
source, it should just pop a warning and keep on running. If the operator
decides to bail, (s)he can simply enter Hibernate. End of story, no flash
required.

At best, all the extra flash and the code to use it could hope to accomplish
is saving a few milliwatts of battery life when shutting down...

/daytripper
 
Arno Wagner

I don't think they're talking about using flash in the sense of a
dynamic disk cache, but as a static disk cache, or a ramdisk in other
words. Namely, they're aiming to cache the boot sequence into the
flashdisk to speed up boot times.

That would not make much sense IMO.
Well, they explained it in the article: they're saying that the reason this
is needed is that with only 512 bytes you don't have enough bits for
error-correcting code with today's big hard disks.

That is nonsense. The size of the disk has no impact on the per-sector
error correction. Maybe they mean that with 4096-byte sectors they
can use more efficient codes.
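The per-sector efficiency point can be sketched numerically. The ECC byte counts below are illustrative assumptions (roughly in the spirit of the later 4K-sector transition), not figures from the article:

```python
# Per-sector ECC overhead: eight legacy 512-byte sectors versus one
# 4096-byte sector carrying the same payload. ECC byte counts are
# illustrative assumptions, not vendor figures.

DATA = 4096                 # same payload either way

ecc_512 = 8 * 50            # assume ~50 ECC bytes per 512-byte sector
ecc_4096 = 100              # one longer code over the whole 4 KB

print(f"512-byte sectors: {ecc_512 / (DATA + ecc_512):.1%} overhead")
print(f"4096-byte sector: {ecc_4096 / (DATA + ecc_4096):.1%} overhead")
```

The gain comes purely from coding over a longer block, which is Arno's point: it has nothing to do with total disk capacity.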

Arno
 
George Macdonald

Well, flash isn't going to extend the life of the platters, it's only
good for the fast startup. In order to extend platter life you'd need
ram mostly.

I did not mean reduce wear of the platters but extend the lifetime of hard
disks in general as a mass storage solution, i.e. delay the switch over to
flash as a replacement for hard disks. It gets them a foot in the door
with the technology too... hopefully, from their POV, fending off SanDisk
et al. from taking over the mass storage market eventually.
 
daytripper

I did not mean reduce wear of the platters but extend the lifetime of hard
disks in general as a mass storage solution, i.e. delay the switch over to
flash as a replacement for hard disks. It gets them a foot in the door
with the technology too... hopefully, from their POV, fending off SanDisk
et al. from taking over the mass storage market eventually.

Until cost per bit for flash at least enters the same arena as magnetics -
never mind approaches parity - I doubt the magnetic media companies are all
that worried about flash encroaching in their bread-and-butter markets...
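The cost-per-bit gap is easy to sketch with assumed ballpark prices of the era (neither figure is a quote from any vendor):

```python
# Cost-per-gigabyte gap between flash and magnetic storage,
# using assumed ballpark prices, not real vendor quotes.

hdd_per_gb = 0.50     # assumed: e.g. a 200 GB drive near $100
flash_per_gb = 50.0   # assumed: flash cards of the era

print(f"flash costs ~{flash_per_gb / hdd_per_gb:.0f}x more per GB")
```

A two-orders-of-magnitude gap is why "entering the same arena" matters long before parity does.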
 
Alexander Grigoriev

Oh, wonders of a journalled filesystem... BTW, you can do the same tricks
with NTFS.

I'm afraid you folks are arguing about two different concepts:

1. Filesystem robustness against power failure or system reset;
2. Complete system state restore across power-off (which is achieved with
hibernation, but requires some time for writing the state).

IIRC, OS/360 allowed the applications to restart from a saved checkpoint,
but it's not what's discussed here.

If anybody hopes to restore the complete system state after an arbitrary power
failure (as if it didn't happen), they're out of luck without battery backup.
 
Keith

Until cost per bit for flash at least enters the same arena as magnetics -
never mind approaches parity - I doubt the magnetic media companies are all
that worried about flash encroaching in their bread-and-butter markets...

OTOH, do people really pay more for 200GB drives? Ok, I bought one on
Black Friday for $29 (I would have bought a smaller drive at $29). Will
people pay for a flash drive if it were a similar price and half the
capacity? ...forgetting the write-cycle issue. My bet is yes.

BTW, what happened to MRAM? I thought we'd be swimming in it by now. ;-)
 
daytripper

OTOH, do people really pay more for 200GB drives? Ok, I bought one on
Black Friday for $29 (I would have bought a smaller drive at $29). Will
people pay for a flash drive if it were a similar price and half the
capacity? ...forgetting the write-cycle issue. My bet is yes.

Geeze...."forgetting that the drive has a profound problem with wear-out"
kinda changes the nature of the comparison....But, ok, at only twice the
price, you possibly could be right - if in fact there's some perceivable
performance advantage, because capacity is still important. But at the current
rate of cost-per-bit closure, even you'll be an old gummer before that
happens.
BTW, what happened to MRAM? I thought we'd be swimming in it by now. ;-)

ahahahahahahahahaha!

Reminds me of the annual visits from the FeDRAM folks...

/daytripper
 
The little lost angel

OTOH, do people really pay more for 200GB drives? Ok, I bought one on
BlackFriday for $29 (I would have bought a smaller drive at $29). Will
people pay for a flash drive it it were a similar price and half the
capacity? ...forgetting the write-cyle issue. My bet is yes.

US$29 for a 200GB drive? New? I gotta get a truckload of these :p They
are going for like at least US$100 a piece here.
 
George Macdonald

Until cost per bit for flash at least enters the same arena as magnetics -
never mind approaches parity - I doubt the magnetic media companies are all
that worried about flash encroaching in their bread-and-butter markets...

Not wholly for a while yet of course but as interface/burst speeds go up
way beyond off-the-platter speeds, if a hybrid is on the cards in the
interim, I'm sure they'd rather be the ones selling the bits.
 
J. Clarke

George said:
Not wholly for a while yet of course but as interface/burst speeds go up
way beyond off-the-platter speeds, if a hybrid is on the cards in the
interim, I'm sure they'd rather be the ones selling the bits.

They get their cut regardless--the only way some outfit that is not a hard
disk manufacturer could make such a thing is to start with a hard disk
bought from one of the manufacturers.
 
Keith

US$29 for a 200GB drive? New? I gotta get a truckload of these :p They
are going for like at least US$100 a piece here.

Yeah, Black Friday (the Friday after the US Thanksgiving holiday) is a
huge shopping day in the US. Some stores have "loss-leaders"[*] to get
people in the stores, hoping they'll buy something else. In this case
Staples (an office supply chain) was selling 200GB Maxtor IDE drives for
$29. I also snagged a dual-layer DVD burner for $19 and a spindle of
50 DVD+Rs for $3. Of course there are "rebates" to be filled out
(on-line in this case), so there was more out of pocket than $29.

[*] A "loss-leader" is a product sold (usually at a loss, hence the name)
to generate traffic.
 
George Macdonald

They get their cut regardless--the only way some outfit that is not a hard
disk manufacturer could make such a thing is to start with a hard disk
bought from one of the manufacturers.

The two routes being considered here were in the HDD package or on the motherboard.
 
Yousuf Khan

Arno said:
That would not make much sense IMO.
Why?



That is nonsense. The size of the disk has no impact on the per-sector
error corection. Maybe they mean that with 4096 byte sectors they
can use more efficient codes.

Yeah, that's what they meant. ECC is taking up too much of the disk real
estate these days.

Yousuf Khan
 
J. Clarke

Yousuf said:
Yeah, that's what they meant. ECC is taking up too much of the disk real
estate these days.

???? It's taking up the same percentage it always took up. Disks today are
approaching the size of large datacenters 20 years ago, so I find the
"taking up too much real estate" argument to be kind of silly.
 
Yousuf Khan

J. Clarke said:
???? It's taking up the same percentage it always took up. Disks today are
approaching the size of large datacenters 20 years ago, so I find the
"taking up too much real estate" argument to be kind of silly.

They're just saying they can do a more efficient error correction over
4096 byte sectors rather than 512 byte sectors.

Yousuf Khan
 
Arno Wagner


How would you determine where the boot sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so there is almost no gain
from reading the boot data faster.
Yeah, that's what they meant. ECC is taking up too much of the disk real
estate these days.

I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.

Arno
 
Folkert Rienstra

Arno Wagner said:
How would you determine where the boot sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so there is almost no gain
from reading the boot data faster.



I think that is nonsense.

You have always been an idiot too.
ECC is something like 10%.

Right, that's huge. High time to cut that back.
It does not make sense to rewrite every driver

You don't have the faintest idea what this is about, do you.
and the whole virtual layer just to make this a bit smaller,
except maybe from the POV of a salesperson.

Completely engulfed in conspiracy theories.
From an engineering POV there is good reason not
to change complex systems for a minor gain.

This has been working in SCSI for years, stupid.
And it's already a reality in ATA/ATAPI-7.
 
Yousuf Khan

Arno said:
How would you determine where the boot sequence ends? What if
it forks? How far would you actually get (personal guess:
not far)? And does it really give you a significant speed improvement?
With Linux, kernel loading is the fastest part of booting. The
part that takes long is device detection and initialisation.
My guess is it is the same with Windows, so there is almost no gain
from reading the boot data faster.

You would manually choose which components go into the flash disk. Or
you would get a program to analyse the boot sequence and it will choose
which components to send to the flash. You can even pre-determine what
devices are in the system and preload their device drivers.
I think that is nonsense. ECC is something like 10%. It does not
make sense to rewrite every driver and the whole virtual layer just
to make this a bit smaller, except maybe from the POV of a
salesperson. From an engineering POV there is good reason not
to change complex systems for a minor gain.

You've just made the perfect case for why it's needed. 10% of a 100GB
drive is 10GB, 10% of 200GB is 20GB, and so on.

Yousuf Khan
 
