Wiping a hard drive?


Arno Wagner

Previously John said:
On Sun, 22 Oct 2006 07:25:17 GMT, "Timothy Daniels"
Over the years I have seen this issue debated many times and
invariably it degenerates into insults and name calling, so I would
like to throw out a practical question.
Has anyone ever read about or heard of any criminal or civil court
case, any place in the world, where overwritten disk data has been
introduced as evidence in the trial? It could even be one wipe with
either zeros or random data. I will leave it to you to draw your own
conclusion. My purpose is just to look at the problem from a different
direction.

Actually that is part of the "evidence" others and I are using.
There are a lot of people that would notice and it would go over
all the relevant security reporting services. To date there
has not been a single case. There have been some where attempts were
made to obscure how the information was obtained. In all cases that
were cleared up, conventional methods were used. Also c't
magazine contacted all the major data recovery outfits
under a fake identity 2-3 years back and tried to get a once
overwritten file back. All said that they did not have that
capability. Not all of them are based in the US.

There is the opposing school of thought that wants a proof
of "impossibility". These people typically believe that the
CIA/FBI/NSA can do everything. I talked to some NSA people
last year in an informal setting, and one said "If we could do
all the things people think we could do, the world would look
differently". I find that argument very convincing.

The third argument is based on a look at the theoretical
maximum amount of data a disk surface can hold reliably.
It is not that far removed from what actually gets
stored on the surfaces today.

Arno
 

Paul Rubin

Arno Wagner said:
Actually that is part of the "evidence" others and I are using.
There are a lot of people that would notice and it would go over
all the relevant security reporting services. To date there
has not been a single case. There have been some where attempts were
made to obscure how the information was obtained. In all cases that
were cleared up, conventional methods were used. Also c't
magazine contacted all the major data recovery outfits
under a fake identity 2-3 years back and tried to get a once
overwritten file back. All said that they did not have that
capability. Not all of them are based in the US.

Are there any data recovery outfits using really high-end recovery
techniques (e.g. magnetic force microscopes), or do they just run
Norton-like undeletion utilities?
The third argument is based on a look at the theoretical
maximum amount of data a disk surface can hold reliably.
It is not that far removed from what actually gets
stored on the surfaces today.

If this were true the capacity of disks could not keep increasing the
way it does.

The classic paper on secure deletion is

http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html

though it is now a few years out of date, it has a more recently
written epilogue. The paper concludes:

Data overwritten once or twice may be recovered by subtracting
what is expected to be read from a storage location from what is
actually read.

This suggests that overwriting with unpredictable data (random) is
better than overwriting with predictable data (all zeros). The
epilogue adds:

As the paper says, "A good scrubbing with random data will do
about as well as can be expected". This was true in 1996, and is
still true now.
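
As an aside, the "subtract what is expected to be read from what is
actually read" idea is easy to illustrate with a toy model. The sketch
below is only that: the residue and noise levels are invented numbers
for illustration, not measurements from any real drive.

    import numpy as np

    rng = np.random.default_rng(0)

    old_bits = rng.integers(0, 2, 64)   # the data that was "wiped"
    new_bits = rng.integers(0, 2, 64)   # the single overwrite pass

    # Toy analog read-back: the head mostly sees the new data, plus a
    # small residue of the old data and some read noise (both invented).
    residue, noise = 0.05, 0.01
    signal = (2 * new_bits - 1) + residue * (2 * old_bits - 1) \
             + noise * rng.standard_normal(64)

    # Subtract what is expected to be read from what is actually read:
    # an attacker who can predict the overwrite pattern (e.g. all zeros)
    # is left with something dominated by the residue of the old data.
    leftover = signal - (2 * new_bits - 1)
    recovered = (leftover > 0).astype(int)

    print("fraction of old bits recovered:", np.mean(recovered == old_bits))

In this toy picture the subtraction only works because the overwrite
pattern is predictable, which is exactly why the paper favours random
scrubbing over fixed patterns.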
 

Arno Wagner

Are there any data recovery outfits using really high-end recovery
techniques (e.g. magnetic force microscopes), or do they just run
Norton-like undeletion utilities?

They use whatever gets the job done at reasonable cost. I would
expect that reasonable can be up to 10'000 EUR/USD or more
per drive. Maybe even 100'000 EUR/USD per drive in some cases.
If this were true the capacity of disks could not keep increasing the
way it does.

The surface material is the key to storing more data today. It gets
improved all the time. Please familiarise yourself with the
technology before making this type of broad statement.
The classic paper on secure deletion is

Of course I have read that.
though now a few years out of date it has a more recently written
epilogue. The paper concludes:
Data overwritten once or twice may be recovered by subtracting
what is expected to be read from a storage location from what is
actually read.

There is no proof or demonstrated case that it can be done
with today's HDDs. There is indication that it may not work.
Also, RLL and MFM have not been used in HDDs for a long time
now. MFM is still used in floppies.

It is trivially possible with floppies and tape. Since Gutmann's paper
deals with "magnetic media", it needs to address these as well.
In fact I have looked at a raw tape signal way back (C64) and
you could very clearly see the last data layer with a normal
oscilloscope.

When Gutmann says that on modern disks maybe one or two layers
can be recovered, he is careful. Of course he (and I) cannot be
sure that the NSA does not have the "magic machine". But it
seems to be more and more of a theoretical possibility, and
even that is dwindling.
This suggests that overwriting with unpredictable data (random) is
better than overwriting with predictable data (all zeros). The
epilogue adds:
As the paper says, "A good scrubbing with random data will do
about as well as can be expected". This was true in 1996, and is
still true now.

Note that this only applies if you do several passes. A single pass
is just as secure with zeroes as with random data, since with a
single pass you know the random data.
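
For reference, a single overwrite pass is simple enough to sketch. The
snippet below is a rough illustration only (the device name in the
comments is a placeholder); from a shell, tools like dd or shred are
the usual way to do the same thing.

    import os

    def overwrite_once(path, pattern=None, chunk=1 << 20):
        """Single-pass, in-place overwrite of a file or block device.

        pattern=None writes random data from os.urandom(); pass
        bytes([0]) to write zeros instead.  As argued above, for a
        single pass the choice should not matter much, since whoever
        ran the wipe knows what was written either way.
        """
        with open(path, "r+b") as f:
            f.seek(0, os.SEEK_END)
            size = f.tell()      # also works for block devices on Linux
            f.seek(0)
            done = 0
            while done < size:
                n = min(chunk, size - done)
                f.write(os.urandom(n) if pattern is None else pattern * n)
                done += n
            f.flush()
            os.fsync(f.fileno())     # force it out of the OS cache

    # Destructive!  The device name below is only a placeholder:
    # overwrite_once("/dev/sdX")                      # random data
    # overwrite_once("/dev/sdX", pattern=bytes([0]))  # zeros

None of this says anything about reallocated sectors, which come up
further down the thread.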

Arno
 

Paul Rubin

Arno Wagner said:
They use whatever gets the job done at reasonable cost. I would
expect that reasonable can be up to 10'000 EUR/USD or more
per drive. Maybe even 100'000 EUR/USD per drive in some cases.

I think if they get a problem that would take that level of effort,
they just give up instead.
The surface material is the key to store more today. It gets
improved all the time. Please familiarise yourself with the
technology before making this type of broad statement.

Regardless of this, they have to read the data from the drive using a
disk head at enormous speed. Even if the disk comes right up to the
Shannon limit for getting data out of the read signal in normal operation,
if they switch over to some much slower recovery process involving
microprobes reading a few bits per minute instead of disk heads
reading megabits per second, they can possibly get a much better S/N
ratio and therefore get more data out.
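
The slower-read intuition is just the usual averaging argument: n
independent looks at the same spot cut the noise by roughly sqrt(n).
A throwaway sketch, with all the numbers invented:

    import numpy as np

    rng = np.random.default_rng(1)

    amplitude = 0.02   # weak residual signal, well below the noise floor
    noise_std = 0.50   # per-sample read noise; both figures are made up

    for n in (1, 100, 10_000, 1_000_000):
        reads = amplitude + noise_std * rng.standard_normal(n)
        print(f"{n:>9} samples: estimate {reads.mean():+.4f}, "
              f"residual noise ~{noise_std / n ** 0.5:.4f}")

With a million samples the 0.02 "signal" stands well clear of the
averaged-down noise; with one sample it is invisible. Whether any such
residual signal exists on a modern drive is, of course, the whole
point under dispute here.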

There is another possibility too, which is you write data to some
sector, it reads back with low-level errors that are corrected by the
drive ECC, and the drive notices the errors and decides to mark that
physical sector as bad and relocate the data to a spare sector. The
user application never gets wrong data or notices that anything
unusual happened. Thereafter, no erasure or overwriting in normal
operation ever touches the original sector, and the data is always
recoverable from that sector.
When Gutmann says that on modern disks maybe one or two layers
can be recovered, he is careful. Of course he (and I) cannot be
sure that the NSA does not have the "magic machine". But it
seems to be more and more of a theoretical possibility, and
even that is dwindling.

Yeah, it is speculative but not groundless. I think that
commercial recovery services don't even try stuff at that level.
Note that this only applies if you do several passes. A single pass
is just as secure with zeroes as with random data, since with a
single pass you know the random data.

True.

Seagate announced some drives a while back with built-in encryption.
I don't know if they're actually selling them now though.

http://www.seagate.com/cda/newsinfo/newsroom/releases/article/0,,2732,00.html
 

Aidan Karley

Well, good luck if that is ever put to the test. I think IANA could
be replaced pretty fast.
Indeed.
I was looking at the Indiana SpamHaus case, and thinking to
myself - could Indiana be paving the way to an experiment in re-rooting
the Internet. A pre-emptive test run, if you like.
It seems (or seemed - I haven't caught up on today's news
headlines, though I saw something going past in another part of my
morning news) as if Indiana courts are wanting to stop SpamHaus from
having any effect on an Indiana business's activities, and to do so by
having a detrimental effect on the Internet as a whole. The natural
response for the Internet would be to switch off service to all
organisations with registered addresses in Indiana (shouldn't be beyond
the wits of a PERL monk with a good WHOIS service) and let Indiana sort
out its own version of the Internet while the rest of the world gets
on with its business.
(This is illuminated by that guy TimSomething thinking that
contributions from countries outside America were of little consequence
in making the Internet work.)
 

Aidan Karley

On the other
hand, people with real secrets will more likely stay away from
computers or do physical destruction.
When was the last time someone mentioned "TEMPEST" to you?

Did you see that demonstration paper on arXiv a year or so back
of doing optical TEMPEST through a window onto a screen not facing the
window? Made me think a lot more about getting an LCD screen, that did.
 

Aidan Karley

The Swiss military abandoned the actual animals two or three years
ago, so the manual should still be around somewhere...
Hmmm, maybe not the best of examples then ; I'd not thought of the
fact that pack mules could still be the most effective way of moving
material in mountain areas. I wonder how India and Pakistan move material
in their continuing Himalayan war.
I'll try another example ... are there still manuals and
regulations for muzzle-loading cannons? Probably, they're in ceremonial
use. Muzzle-loading hand-weapons?
 

Arno Wagner

Previously Aidan Karley said:
When was the last time someone mentioned "TEMPEST" to you?

Did you see that demonstration paper on arXiv a year or so back
of doing optical TEMPEST through a window onto a screen not facing the
window? Made me think a lot more about getting an LCD screen, that did.

What does that have to do with my statement? And yes, I know what
TEMPEST is and what the developments of the last few years are.

Arno
 

Arno Wagner

Previously Aidan Karley said:
Hmmm, maybe not the best of examples then ; I'd not thought of the
fact that pack mules could still be the most effective way of moving
material in mountain areas. I wonder how India and Pakistan move material
in their continuing Himalayan war.
I'll try another example ... are there still manuals and
regulations for muzzle-loading cannons? Probably, they're in ceremonial
use. Muzzle-loading hand-weapons?

Duelling pistols? Maybe still in use?

Arno
 

Arno Wagner

I think if they get a problem that would take that level of effort,
they just give up instead.

10'000 EUR/USD as an upper limit is certainly realistic. We recently
asked for a quote on a disk with a specific problem and it was
something like 2000 EUR/USD.
Regardless of this, they have to read the data from the drive using a
disk head at enormous speed. Even if the disk comes right up to the
Shannon limit for getting data out of the read signal in normal operation,
if they switch over to some much slower recovery process involving
microprobes reading a few bits per minute instead of disk heads
reading megabits per second, they can possibly get a much better S/N
ratio and therefore get more data out.

That is not the way I read the (admittedly limited) information about
the current surface materials. The impression I got is that if you
make the bits smaller, then neighbouring ones start to cancel each
other out in a short time, i.e. spontaneous bit-flips become
likely. The perpendicular recording stuff is all about making the bits
larger (in 3D), while making their 2D surface footprint smaller. Your
argument is, however, certainly valid for tape and older HDD
technologies.
There is another possibility too, which is you write data to some
sector, it reads back with low-level errors that are corrected by the
drive ECC, and the drive notices the errors and decides to mark that
physical sector as bad and relocate the data to a spare sector. The
user application never gets wrong data or notices that anything
unusual happened. Thereafter, no erasure or overwriting in normal
operation ever touches the original sector, and the data is always
recoverable from that sector.

Agreed. That is a possibility and if your data is sensitive enough
that this is a problem, then you need physical destruction. As I
was just arguing that one overwrite is likely enough for a modern
disk, this is no contradiction to what I said, since reallocated
sectors will not be overwritten any better with multiple overwrites.

In order to quantify the risk, you have to ask yourself
the following questions:

- Do I have data that fits into one sector that an attacker
would still find valuable and could identify?
- How large is the probability that the data is in a reallocated sector?

For the second, an estimation like the following could be used:
A disk has (e.g.) 80GB. Assume sectors are reallocated at random and
assume no more than 1000 (e.g.) are reallocated in a disk's lifetime.
Now for each "single-sector secret" you get a probability of
roughly 1/150'000 that it is in a defective sector. Multiply by
the value of the secret and how often a secret gets rewritten
in a way that changes its place on disk (usually means being
written to a new file, but could be worse with a filesystem that
has data-journalling). Add up for all secrets on the disk.
If the resulting number exceeds the disk value, do physical
destruction.
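
Spelled out with those example numbers (80GB, 512-byte sectors, at
most 1000 reallocations over the disk's lifetime; the secrets in the
example are of course made up):

    DISK_BYTES      = 80 * 10**9   # 80GB, decimal as drive vendors count it
    SECTOR_BYTES    = 512
    MAX_REALLOCATED = 1000         # assumed worst case over the lifetime

    total_sectors = DISK_BYTES // SECTOR_BYTES         # about 156 million
    p_per_placement = MAX_REALLOCATED / total_sectors  # roughly 1/156,000

    def expected_exposure(secrets):
        """secrets: list of (value, rewrites) pairs for single-sector secrets.

        Each rewrite that lands the secret in a new place on disk is
        treated as another independent chance of ending up in a sector
        that later gets reallocated and so escapes the overwrite.
        """
        return sum(value * rewrites * p_per_placement
                   for value, rewrites in secrets)

    # Hypothetical secrets: one worth 10,000 (in whatever unit you value
    # it) rewritten 50 times, plus ten worth 100 rewritten 10 times each.
    exposure = expected_exposure([(10_000, 50)] + [(100, 10)] * 10)

    print(f"probability per placement: 1/{round(1 / p_per_placement):,}")
    print(f"expected exposure: {exposure:.2f}  (compare with the disk's value)")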
Yeah, it is speculative but not groundless. I think that
commercial recovery services don't even try stuff at that level.

There is evidence they did, but failed. Some company has a
four-year-old whitepaper about their upcoming universal disk surface
reader. (Sorry, forgot the reference.)

Some very well funded government agencies certainly will have tried
and continue to try with each new recording technology. Whether they
actually succeed is a different question. What is sure is that it
will be expensive and not available to most companies, and usually
not to law enforcement, without the fact becoming public knowledge
relatively fast. So protecting against this is relevant for
state-secrets (or the like), but not for almost all private or
commercial data. And if recovery is slow and expensive, it will
stay that way.
Seagate announced some drives a while back with built-in encryption.
I don't know if they're actually selling them now though.

Don't know. But I think I read about such a technology being already
broken recently in c't magazine. Might have been something different
though.

Arno
 

Arno Wagner

Previously Aidan Karley said:
Indeed.
I was looking at the Indiana SpamHaus case, and thinking to
myself - could Indiana be paving the way to an experiment in re-rooting
the Internet. A pre-emptive test run, if you like.

I had a similar thought. It would have been interesting to see
what happened. My guess is that they would have moved the
company to a different country and used a different domain.
Might have taken a few days though.
It seems (or seemed - I haven't caught up on today's news
headlines, though I saw something going past in another part of my
morning news) as if Indiana courts are wanting to stop SpamHaus from
having any effect on an Indiana business's activities, and to do so by
having a detrimental effect on the Internet as a whole. The natural
response for the Internet would be to switch off service to all
organisations with registered addresses in Indiana (shouldn't be beyond
the wits of a PERL monk with a good WHOIS service)

Or an ad-hoc created real-time blocklist....
and let Indiana sort
out its own version of the Internet while the rest of the world gets
on with its business.
(This is illuminated by that guy TimSomething thinking that
contributions from countries outside America were of little consequence
in making the Internet work.)

Certainly true for the first demonstration of the Arpanet. Not true
for what is required to make it work. And not true for the
precursor technologies.

And to use his argumentation every non-German having printed anything
would be a "foreigner" on the medium of words printed on paper, since
Gutenberg developed the printing press, and all others are just using
his (i.e. German) technology.

Arno
 

Aidan Karley

I think if they get a problem that would take that level of effort,
they just give up instead.
The last time I was approached to get involved in a DR project
projected to cost in that range, the client balked when presented with
the estimated bill and the first (sample) recovered file. I'd put in a
price for doing the work of £10/hour, cash-in-hand, and only 4 people
could be found who could be relied on to do the work, be discreet, turn
up for the next shift, self-QC, etc. And even then, it would have taken
about 3 weeks to get the data back (assuming that I could have got the
holiday time from my full-time job to moonlight on this work). The
client figured that the building work would have caught up with the
lost plans by that point, so he had no choice but to set AutoCAD
technicians in his architectural office to re-creating the lost plans
from their paper records and other paper data. So, if he paid for the
DR, he'd still have to spend these other resources anyway. Cancelled the
DR.
A *lot* of DR approaches to my friends in the computing business
died on the rocks of time and cost.

Which reminds me to see how HandyAndy is doing over in Canada.
 

Paul Rubin

Arno Wagner said:
And to use his argumentation every non-German having printed anything
would be a "foreigner" on the medium of words printed on paper, since
Gutenberg developed the printing press, and all others are just using
his (i.e. German) technology.

http://en.wikipedia.org/wiki/Printing

Printing was first conceived and developed in China. Primitive
woodblock printing was already in use by the 6th century in China. In
the Tang Dynasty, a Chinese writer named Fenzhi first mentioned in his
book "Yuan Xian San Ji" that the woodblock was used to print Buddhist
scripture during the Zhenguan years (627~649 A.D.). The oldest known
Chinese surviving printed work is a woodblock-printed Buddhist
scripture of Wu Zetian period (684~705 A.D.); discovered in Tubofan,
Xinjiang province, China in 1906, it is now stored in a calligraphy
museum in Tokyo, Japan. Printing is considered one of the Four Great
Inventions of ancient China... The world's first movable type metal
printing press was invented in Korea in 1234 by Chwe Yun-ui during the
Goryeo Dynasty. By the 12th and 13th century many Chinese libraries
contained tens of thousands of printed books. The oldest extant
movable metal-type book is the Jikji, printed in 1377 in Korea.
 

Paul Rubin

Arno Wagner said:
That is not the way I read the (admittedly limited) information about
the current surface materials. The impression I got is that if you
make the bits smaller, then neighbouring ones start to cancel each
other out in a short time, i.e. spontaneous bit-flips become likely.

Maybe I'm misinterpreting what you're saying. I mean if we're trying
to recover data from some disk that's been overwritten or is making a
weak signal for some reason, we can possibly get a stronger signal by
reading much slower. I don't mean making the bits smaller, I mean
trying to read existing (but overwritten) bits. Another way to
increase the S/N ratio in a lab situation might be to chill the disk
and the read probe, maybe even with liquid helium or something.
Agreed. That is a possibility and if your data is sensitive enough
that this is a problem, then you need physical destruction.

Right, this may put an upper limit on how much data recovery effort
is worthwhile.
For the second, an estimation like the following could be used:
A disk has (e.g.) 80GB. Assume sectors are reallocated at random and
assume no more than 1000 (e.g.) are reallocated in a disk's lifetime.
Now for each "single-sector secret" you get a probability of
roughly 1/150'000 that it is in a defective sector.

Sounds reasonable. I can imagine data structures where recovery of
any sector causes a real security failure, but it gets a bit contrived.
There is evidence they did, but failed. Some company has a
four-year-old whitepaper about their upcoming universal disk surface
reader. (Sorry, forgot the reference.)

http://www.actionfront.com/ts_whitepaper.aspx
(saved from here a few months back, thanks)
relatively fast. So protecting against this is relevant for
state-secrets (or the like), but not for almost all private or
commercial data. And if recovery is slow and expensive, it will
stay that way.

Yeah, it's an interesting topic, we had a big discussion about it on
sci.crypt a few months ago:

http://groups.google.com/group/sci.crypt/browse_frm/thread/d1c1ce279ad0ab4b/3fedf36049799af6

There wasn't a definite conclusion, but valid points were brought up
on both sides. The stuff I've mentioned here comes partly from that
thread. Of course on sci.crypt we're trying to defend against
realistically impractical attacks all the time (planetary-scale
parallel computers and so forth), something like proving things in
math vs. just being sure of them in practice.
Don't know. But I think I read about such a technology being already
broken recently in c't magazine. Might have been something different
though.

Hmm, interesting. Such products do get broken all the time, usually
due to silly design and implementation errors.
 

Odie Ferrous

Paul said:
I think if they get a problem that would take that level of effort,
they just give up instead.


Regardless of this, they have to read the data from the drive using a
disk head at enormous speed. Even if the disk comes right up to the
Shannon limit for getting data out of the read signal in normal operation,
if they switch over to some much slower recovery process involving
microprobes reading a few bits per minute instead of disk heads
reading megabits per second, they can possibly get a much better S/N
ratio and therefore get more data out.

There is another possibility too, which is you write data to some
sector, it reads back with low-level errors that are corrected by the
drive ECC, and the drive notices the errors and decides to mark that
physical sector as bad and relocate the data to a spare sector. The
user application never gets wrong data or notices that anything
unusual happened. Thereafter, no erasure or overwriting in normal
operation ever touches the original sector, and the data is always
recoverable from that sector.


Yeah, it is speculative but not groundless. I think that
commercial recovery services don't even try stuff at that level.


True.

Seagate announced some drives a while back with built-in encryption.
I don't know if they're actually selling them now though.

http://www.seagate.com/cda/newsinfo/newsroom/releases/article/0,,2732,00.html

Fascinating thread. However, this Seagate "encryption" will be broken
within days by some Russian or Chinese IT guru. Perhaps hours. Dare
say I might even be able to do it myself.

The only encryption I trust is that delivered by means of a steel
hammer.


Odie
 

Rod Speed

Aidan Karley said:
Arno Wagner wrote
Hmmm, maybe not the best of examples then ; I'd not thought
of the fact that pack mules could still be the most effective way
of moving material in mountain areas. I wonder how India and
Pakistan move material in their continuing Himalayan war.

Mostly using helicopters.
I'll try another example ... are there still manuals
and regulations for muzzle-loading cannons?

There never were 'regulations'.
Probably, they're in ceremonial use.
Muzzle-loading hand-weapons?

Yep, lots of those outside the military.

Plenty into suits of armour, chainmail etc too.
 

Rod Speed

Fascinating thread. However, this Seagate "encryption" will be broken
within days by some Russian or Chinese IT guru. Perhaps hours. Dare
say I might even be able to do it myself.

Mindlessly silly. No one has 'broken' what gets used for banking etc.
The only encryption I trust is that delivered by means of a steel hammer.

More fool you.
 

Aidan Karley

What does that have to do with my statement?
Errr, reinforces it?
And yes, I know what
TEMPEST is and what the developments of the last few years are.
Obviously you know about it. Anyone with more than a passing
acquaintance with electronics can see the difficulty of defending
comprehensively against a non-trivial investigator using TEMPEST-like
techniques. (The optical-TEMPEST attack I referred to used nothing more
sophisticated than a fast-acting photodiode and a high-speed data
capture device, plus some software.)
High-tech investigation, however, will not reveal what is written
in pencil on a piece of paper that they don't have sight of at a range
of less than a few hundred metres. A secret message that is not
written down, but is contained in a person's memory, cannot be
intercepted without the courier being aware of it, which means
that you're totally dependent on old-fashioned "HUMINT" techniques of
infiltration, subversion and/or theft. All of these are vastly more
expensive per bit of recovered data than, say, SIGINT (signals
intelligence), so less data can be recovered.
Going back to the previous topic: yes, if PGP etc. were proved to
be systematically breakable, then computer-savvy criminals and
surveillance-phobic people would simply stop entrusting their secret
data to PGP-encrypted channels, and would probably stop using emails at
all. If they're using emails at the moment.
 

Aidan Karley

My guess is that they would have moved the
company to a different country and used a different domain.
Might have taken a few days though.
Spamhaus?
Sure, they could move to a different domain name, but the Indiana
court could block service to the SpamHaus organisation as such. So
you'd end up in a cat-and-mouse game, until Spamhaus ended up with
nowhere to go.
On the other hand, if a .org domain root was established which
cared not a hoot about the rulings of some North American parochial
court, then Spamhaus.org could remain in the myriads of configuration
files that use it, and resolvers that couldn't resolve Spamhaus.org
would need to be re-configured to check this alternative .org root
service. Which is something that's being discussed with respect to the
rest of the Internet (well, DNS at least), in the event that the present
root servers were brought down for some reason. Some countries don't want
this to happen ; other countries see it as being highly desirable to not
be reliant on a single country for that sort of globally critical
service. It's the same logic that's leading the EU and Russia to be
putting up GPS-compatible satellites of their own.
Or an ad-hoc created real-time blocklist....
And to use his argumentation every non-German having printed anything
would be a "foreigner" on the medium of words printed on paper, since
Gutenberg developed the printing press, and all others are just using
his (i.e. German) technology.
I'll let a Chinese person take you up on that <G>.
 

Aidan Karley

Agreed. That is a possibility and if your data is sensitive enough
that this is a problem, then you need physical destruction. As I
Cast your mind back ... 2 years? ... to the spying/data-mishandling
scandals at Los Alamos.
Some of the comments made in public about those cases implied
that the actual crimes committed were pilfering discs from stocks
audited for *physical destruction*, and taking them home. Which
implies a mindset that would prefer to see the oxides scraped off the
platters and reduced to metal, while the platters are melted down and
cast into truck wheels.
 
