1000 year data storage for autonomous robotic facility

  • Thread starter Bernhard Kuemel

Nico Coesel

Bernhard Kuemel said:
Sorry for repost, I posted to sci.electronics before, which does not exist.

Hi!

I'm planning a robotic facility [3] that needs to maintain hardware
(exchange defective parts) autonomously for up to 1000 years. One of the
problems is to maintain firmware and operating systems for this period.
What methods do you think are suitable?

I'd go lo-tech. Etches in stainless steel for example. Maybe gold
plate them afterwards for an extra protective layer.
 

josephkk

Also:
<http://www.10000yearclock.net>

Does the 10,000 year clock come with a warranty?

At this time, there is a 16 second difference between GPS atomic clock
time and UTC, which is based on astronomical time.
<http://leapsecond.com/java/gpsclock.htm>
<http://leapsecond.com/java/nixie.htm>
They were identical on Jan 6, 1980 and are diverging at the rate of
about 2 seconds per year. Ignoring variations in the earth's speed of
rotation:
<http://tycho.usno.navy.mil/leapsec.html>
in 1000 years, the two clocks might be 2000 seconds or 33 hours
different.

Ummm. There are 3600 seconds in one hour, care to recheck that last
calculation?

?-)
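The arithmetic being corrected here is quick to verify (a sketch in Python; the ~2 s/yr divergence rate is the thread's own rough figure):

```python
# GPS vs. UTC divergence, using the thread's rough rate of ~2 leap
# seconds per year. Checks the 2000-second figure and its conversion.
rate_seconds_per_year = 2
years = 1000

total_seconds = rate_seconds_per_year * years
print(total_seconds)        # 2000 seconds
print(total_seconds / 60)   # ~33 minutes, not 33 hours
```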
 

josephkk

About 1000 years ago, we were just coming out of the dark ages.
Proposing a 1000 year preservation of the documents of, e.g., the Library
at Alexandria would have faced technical limitations equivalent to those
of your current proposal. I doubt that the dark ages religious establishments
could have succeeded given the wide range of totally inconceivable and
unpredictable threats that have arrived in the last 1000 years. It's
equally unlikely that you could defend your data against the next 1000
years of currently known threats, much less the unknown threats. All
it would take is a biological niche to open for bacteria that eats
silicon or lives on Epoxy-B, and your archive is gone.
One more time, the OP is talking about a self-maintaining robot (better make
that a robot society).

?-)
 

Bernhard Kuemel

At this time, this would require either hard-coding all the possible
modes in which this can break down, with recovery instructions (infeasible),

My computer's POST can tell me if there's a memory, keyboard, floppy,
disk, etc error. What's infeasible about identifying defective parts and
replacing them? Sure there are limits, but we can try to find a solution
with reasonable chance for success.
or true AI in there (likely infeasible as well).

Even humans can't solve all problems.

Bernhard
 

josephkk

Thanks. The 2000 seconds is correct. However, it should be 33
minutes, not hours. Sorry(tm).

De nada. I spent much of my working time in the last 30+ years looking
for others' slips and things. That amount of practice must produce some
little bit of skill.

?;-)
 

josephkk

That's one way to read between the lines. I just re-read all the OP's
postings in this thread and found in <[email protected]>
"This is about media being used during these 1000 years
as a source of firmware and operating systems to keep the
robotic facility functional."
Note the word "media".

His answer to my comments on the need for data verification in
<[email protected]>:
"The idea is to make the cold store for humans autonomous due
to a lack of trust in human reliability. If the autonomous facility
works, then there is no need for anyone to verify the data.
Verification is done via checksums internally. Ideally it would
be in a remote place, forgotten and eventually discovered, either
by chance, or by radio signals from the facility in case of
malfunction, or at a set date, like in 1000 years when technology
is expected to be able to scan/upload the minds of the frozen
humans. Actually I think 200 years probably suffice."

One does not normally refer to components, parts, firmware, etc as
"media". I can't tell what he's planning to accomplish with a 1000
year self-maintaining robot. The 1000 year self-maintaining robot is
not the problem; it's whatever the robot is supposed to be doing for
1000 years that's the problem. Again, reading between the lines, it looks
like a robotic Alcor:
<http://www.alcor.org>
or if the body or brain can somehow be reduced to data, a large data
time capsule. It's the political, social, and financial aspects of
operating such a robot that I find interesting.

That puts a really different spin on it. I wonder if the OP has read
"Silicon Beach", "The Two Faces of Tomorrow", Dr. Asimov's Robot series,
or any similar books.

?-)
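The internal checksum verification the OP mentions could be as simple as recording a CRC alongside each stored firmware image and re-checking it before use (a minimal sketch; the image bytes are placeholders, not anyone's actual firmware):

```python
import zlib

def store(image: bytes) -> tuple[bytes, int]:
    """Record a firmware image together with its CRC-32."""
    return image, zlib.crc32(image)

def verify(image: bytes, crc: int) -> bool:
    """True if the image still matches the CRC recorded at store time."""
    return zlib.crc32(image) == crc

image, crc = store(b"\x7fELF...placeholder firmware...")
assert verify(image, crc)              # intact copy passes
damaged = image[:5] + b"\x00" + image[6:]
assert not verify(damaged, crc)        # a corrupted byte is caught
```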
 

josephkk

All this prompts the question of whether human culture will last, to
the point that anyone will care about decoding 1's and 0's in 1000yr.

If it does, one might assume that there are times during that period
where interest is sufficient to copy to new or better media.

I still have files that have survived five generations of media tech.

Wow. Is that from magnetic tape or paper tape?

?-)
 

Arno

My computer's POST can tell me if there's a memory, keyboard, floppy,
disk, etc error. What's infeasible about identifying defective parts and
replacing them? Sure there are limits, but we can try to find a solution
with reasonable chance for success.

Your computer's POST cannot tell you if the POST itself is broken.
It has a table of specific (i.e. hardcoded) checks; other
problems will not even be attempted to be diagnosed.
Even humans can't solve all problems.

Depends on the intelligence and experience of the human involved.
But you are right, and I have been pointing out that this
very project is very likely among the problems humans cannot solve.

Arno
 

Rod Speed

All this prompts the question of whether human culture will last, to
the point that anyone will care about decoding 1's and 0's in 1000yr.

Bet it does.
If it does, one might assume that there are times during that period where
interest is sufficient to copy to new or better media.

That didn't happen much at all in the previous 1000 years.
I still have files that have survived five generations of media tech.

You don't see much of that over the previous thousands of years.
 

Rod Speed

I don't know. What I do know is that he's solving the wrong problem.

I don't agree. There is certainly more chance of an autonomous
robotic system lasting for 1000 years than trying to organise some
way of getting humans to maintain his cryogenic body storage
system over that length of time.
I don't believe it's possible to achieve 1000 year reliability
for electronics and mechanisms. If it moves, it breaks...
unless something extraordinary (and expensive) is employed.

He did say that cost was no object.
The list of probable hazards are just too great for such a device.

That stuff doesn't matter if it can repair what breaks.
If a species cannot change and evolve effectively,
environmental changes will guarantee extinction.

That's just plain wrong. There are plenty of
examples of species that have not evolved
at all over 1000 years and have survived fine.
The same can be said of all mechanisms, including electronics.

No, most obviously with the static storage of data
using a sufficiently stable storage mechanism.

That has in fact lasted MUCH longer than 1000
years already with some of the ways of doing that.
Mother nature, Microsoft, and satellite technology have provided
examples of long-term survivability that work. Mother nature
offers evolution, where a species adapts to changing conditions.

That's just one way its handled that problem.
Microsoft has Windoze updates, which similarly adapts a known buggy
operating system into a somewhat less buggy operating system.

But hasn't even managed 100 years, let alone 1000.
The satellite industry has dealt with the inaccessibility of satellite
firmware and in-flight RAM damage with reloadable firmware.

Yes, that's a rather better example, but none of that was ever
designed to last for 1000 years, and in fact we know it won't,
because the satellites won't even stay there that long even
if the electronics does work that long, and we know it won't.
None of the products of these technologies would operate
for very long in their original form without adaptation.

Adaption is just one approach.

Clearly an autonomous robot manufacturing facility can
just keep making more of what fails whenever it fails as
long as the raw materials are always available.
Building a sealed system also has its problems.

All approaches have their problems.

That's why we have engineers, to solve them.

That's just biological systems. Closed data storage systems work fine.
The story is always the same. They get 99.9999% there, and the whole
thing collapses due to some unexpected and uncontrolled trivial oversight.

Doesn't happen with closed data storage systems.
The closest electronic parallel is again the satellite technology,
where environmental considerations (space junk, cosmic rays,
tin whiskers, solar cell deterioration, etc) cannot effectively
be repaired and eventually kill the satellite.

And even if they don't, the satellite will eventually
return to earth and burn up in the process.

That's all irrelevant to what's possible on earth, though.

We know that there are plenty of examples of
stuff that's lasted a lot longer than 1000 years.
Sometimes, it is politically expedient to spend huge amounts of money
to repair satellites (i.e. Hubble space telescope), but those are rare.
If Hubble had been in geosynchronous orbit, the space shuttle would not
have been able to reach it and Hubble would have died on arrival.

All irrelevant to what's feasible on earth.
Therefore, in my never humble opinion, the trick to making
electronics survive beyond their "normal" lifetimes is to perform
constant and regular updates. That doesn't mean an infinite
supply of spare parts or 3D printing extended to its logical
extreme. It means small but constant improvements in the design.

That's just one way. Even just replacement
of what dies is another obvious approach.
For firmware, that could be improvements through self
modifying code as in "The Adolescence of P1".
<http://en.wikipedia.org/wiki/The_Adolescence_of_P-1>

No need to improve it.
The trick is to follow the example of evolution and not make
any radical changes. The risk of failure with small changes are
small and reversible. The risks with dramatic improvements in
technology are large and probably not reversible.

It would be a hell of a lot safer to not even attempt
any improvements, just replace what dies.
Applying this to the OP's automated Alcor system is difficult, but not
impossible. For example, parallel redundancy is an obvious way to
improve reliability, but also a good way to implement evolutionary
electronics. If there are 10 processors running majority logic to
reach a decision or perform a function, there would not be a loss
of function if one of those processors engaged in evolutionary
experiments and improvements. If the code or hardware changes are
successful, then the remaining 9 processors could be slowly replaced.

Still a lot safer to just replace what dies.
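The 10-processor majority logic described above can be sketched in a few lines (hypothetical values; the point is that one unit running experiments cannot outvote the other nine):

```python
from collections import Counter

def majority(results):
    """Return the value most of the processors agree on."""
    value, _count = Counter(results).most_common(1)[0]
    return value

# Nine conforming processors and one running an "evolutionary experiment".
readings = [42] * 9 + [99]
print(majority(readings))  # 42 -- the experimenting unit is outvoted
```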
Exactly how to create evolutionary electronics is probably worthy of a
Nobel Prize. It may also be our doom, as it would likely involve risks
such as nano technology "gray goo" or a Forbin Project style computer
takeover.
<http://en.wikipedia.org/wiki/Forbin_Project>
The principle is simple enough, but the devil is in all the details.
In its fully automated form, it also can be capable of initiating
resource exhaustion. If it needs some rare earth element to function,

In fact the rare earths aren't actually rare at all.
and it has access to the commodities futures market computers,
it could easily corner the market in that element for itself. Lots of
other things that could go wrong.

But not if you just replace what breaks and have enough of the
raw materials included in the original that you have calculated
will be needed to replace what breaks and say have 10 times that
for safety.
The technology to make evolutionary computing
work is well beyond my level of expertise.

But replacing what breaks isn't.
I suspect it's going to be a priority if we ever establish
space colonies as the problems are similar. What I do
know is that building something with a 1000 year
reliability is not going to be a usable solution.

It's worked fine quite a few times in the past now.
 

Jeroen

I don't know. What I do know is that he's solving the wrong problem.
I don't believe it's possible to achieve 1000 year reliability for
electronics and mechanisms. If it moves, it breaks... unless
something extraordinary (and expensive) is employed. The list of
probable hazards are just too great for such a device. If a species
cannot change and evolve effectively, environmental changes will
guarantee extinction. The same can be said of all mechanisms,
including electronics.

Mother nature, Microsoft, and satellite technology have provided
examples of long-term survivability that work. Mother nature offers
evolution, where a species adapts to changing conditions. Microsoft
has Windoze updates, which similarly adapts a known buggy operating
system into a somewhat less buggy operating system.

I beg to differ! Somewhat different bugs, sure. Somewhat less buggy,
surely not!

Jeroen Belleman
 

josephkk

As for long-term storage of information, two thoughts come to mind.
First, for the millennium celebrations, The New York Times decided to
make and widely disperse a number of time capsules, intended to be
opened 1,000 years hence. Basically nothing worked
except nickel sheets with natural-language texts engraved into the
surface using an electron beam. The text was rendered in English and a
number of other languages, so it would also serve as a Rosetta Stone.
Anyway, the whole process was described in a set of articles in the NYT
Magazine published in 1999.

More recently, the ability to convert arbitrary text into DNA, and to
read the text back gives us a way to store huge amounts of binary
information for millennia.

One can also store bulk binary on nickel sheet by writing code blocks
in hexadecimal, with embedded error correcting codes.

Joe Gwinn
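Joe's hex-blocks-on-nickel scheme might look roughly like this; as a sketch it uses a per-block CRC-32 for error detection rather than the full embedded error-correcting code he describes:

```python
import zlib

def to_engraved_blocks(data: bytes, block_size: int = 16) -> list[str]:
    """Render binary data as uppercase hex lines, each followed by its
    CRC-32, suitable for engraving and later verification on read-back."""
    lines = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        lines.append(f"{block.hex().upper()} {zlib.crc32(block):08X}")
    return lines

for line in to_engraved_blocks(b"1000 year archive"):
    print(line)
```

Reading back is the reverse: decode the hex, recompute each block's CRC, and flag any block that no longer matches.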


Some links:

<http://www.nytimes.com/1999/12/02/arts/design-is-selected-for-times-capsule.html>

<http://www.nytimes.com/1999/12/05/magazine/how-to-make-a-time-capsule.html?pagewanted=all&src=pm>

<http://online.wsj.com/article/SB10001424127887324539304578259883507543150.html>

<http://www.nature.com/nature/journal/vaop/ncurrent/full/nature11875.html>


All but the last link broke due to word wrap. If you can't read headers,
I am using Agent 6, one of the better respected news (and email) clients.

Carets "<>" ain't a perfect solution. Fortunately Joe Gwinn also placed
them on separate lines, making text selection reasonably easy. Using
copy and paste worked just fine.

?-)
 
A

Arno

edfair said:
Bernhard's question:
"What's infeasible about identifying defective parts and
replacing them?"
Arno's answer:
"Your computer's POST cannot tell you if the POST itself is broken."

No, not at all. Although that is a concern. More like,
"Your POST basically checks only if everything needed
to bring up is there and seems to be ok on a very superficial
test." I have had countless hardware problems, like memory
errors, defective USB ports, dying disks and POST (the long
version) told me nothing. I have had one RAM chip with a weak bit
that required 3 days of memtest86+ to be detected, but did show
up immediately in some calculations (unfortunately not localizable
to a specific machine, as this was in a cluster with process
migration).
Think we went through this with IBM's 360 series: triple redundancy, 2
of 3 answers must match. Everything is fine till the comparison circuit
fails.

Or until 2 of 3 are broken but agree in their output. At 1000
years, that is a real concern. 3-of-4, etc., will not really fix that
either, as a) more failures can occur while repairing is done, b)
these diversity approaches detect broken circuitry only when
it is used (and what if the repairing requires operations never
used otherwise, which turn out to be broken when they are desperately
needed), and c) software bugs are not covered, just hardware. Regarding
that assumption I refer to the excellent Ariane-5 failure report
(Biiig bada-BUMM! 800 Million Euros uninsured damage.), where
hardware redundancy was in place but software was assumed to
be perfect.

Arno
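Arno's coincident-failure worry can be put into rough numbers (a sketch; the per-unit failure probability is purely illustrative):

```python
# Probability that 2-of-3 voting itself fails over some interval,
# given each unit fails independently with probability p. If the two
# failed units happen to agree in their wrong output, the failure is
# silent -- the voter cheerfully reports the wrong answer.
p = 0.10  # assumed per-unit failure probability (illustrative only)

p_two_down = 3 * p**2 * (1 - p)  # exactly two units failed
p_all_down = p**3                # all three failed
p_system = p_two_down + p_all_down
print(f"system failure probability: {p_system:.4f}")
```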
 

Bernhard Kuemel

Actually, the biggest problem is the human operators. The Three Mile
Island and Chernobyl reactor meltdowns come to mind, where the humans
involved made things worse by their attempts to fix things. Yeah,
maybe autonomous would be better than human-maintained.


I once worked on a cost plus project, which is essentially an
unlimited cost system. They pulled the plug before we even got
started because we had exceeded some unstated limit. There's no such
thing as "cost is no object".

I said: "Price is not a big issue, if necessary." I know it's gonna be
expensive and we certainly need custom designed parts, but a whole
semiconductor fab and developing radically new semiconductors are
probably beyond our limits.
Repair how and using what materials?

Have the robots fetch a spare part from the storage and replace it.
Circuit boards, CPUs, connectors, cameras, motors, gears, galvanic
cells/membranes of the vanadium redox flow batteries, thermo couples,
etc. They need to be designed and arranged so the robots can replace them.
Ok, lets see if that works. The typical small signal transistor has
an MTBF of 300,000 to 600,000 hrs or 34 to 72 years. I'll call it 50
year so I can do the math without finding my calculator. MTBF (mean
time between failures) does not predict the life of the device, but
merely predicts the interval at which failures might be expected. So,
for the 1000 year life of this device, a single common signal
transistor would be expected to blow up about 20 times. Assuming the robot
has about 1000 such transistors, you would need some 20,000 spares to make
this work. You can increase the MTBF using design methods common in
satellite work, but at best, you might be able to increase it to a few
million hours.

It's quite common that normal computer parts work 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.
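The spares arithmetic in this exchange is easy to redo (a sketch using the thread's own round numbers; 1000 years divided by a 50-year MTBF gives about 20 expected failures per part):

```python
mtbf_years = 50        # round per-transistor MTBF figure from the thread
mission_years = 1000
transistor_count = 1000

failures_per_part = mission_years / mtbf_years      # ~20 per transistor
spares_needed = int(failures_per_part * transistor_count)
print(spares_needed)   # spares for the whole transistor population
```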

Also robots are usually idle and only active when there's something to
replace. The power supply, LN2 generator and sensors are more active.

I wonder how reliable rails or overhead cranes that carry robots and
parts around are. If replacing rails or overhead crane beams is
necessary and unfeasible, the robots will probably drive with wheels.
Geosynchronous satellites are unlikely to suffer from serious orbital
decay. However, they have been known to drift out of their assigned
orbital slot due to various failures. Unlike LEO and MEO, their
useful life is not dictated by orbital decay. So, why are they not
designed to last more than about 30 years?

Because we evolve. We update TV systems, switch from analog to digital,
etc. My cryo store just needs to do the same thing for a long time.
At the risk of being repetitive, the reason that one needs to improve
firmware over a 1000 year time span is to allow it to adapt to
unpredictable and changing conditions.

Initially there will be humans verifying how the cryo store is doing and
improving soft-/firmware, and probably some hardware too, but there may
well be a point where they are no longer available. Then it shall
continue autonomously.
True. However, not providing a means of improving or adapting the
system to changing conditions will relegate this machine to the junk
yard in a fairly short time. All it takes is one hiccup or
environmental "leak", that wasn't considered by the designers, and
it's dead.

Yes. We need to consider very thoroughly every failure mode. And when
something unexpected happens, the cryo facility will call for help via
radio/internet. I even thought of serving live video of the facility so
it remains popular and people might call the cops if someone tries to
harm it. Volunteers could fix bugs or implement hard-/software for
failure modes that weren't considered.
 

Jasen Betts

I said: "Price is not a big issue, if necessary." I know it's gonna be
expensive and we certainly need custom designed parts, but a whole
semiconductor fab and developing radically new semiconductors are
probably beyond our limits.

so you're prepared to take the performance hit and use thermionics
instead? it's not like they're going to work after 1000 years either
(unless perhaps stored in a vacuum.) but the fab is simpler.
Have the robots fetch a spare part from the storage and replace it.
Circuit boards, CPUs, connectors, cameras, motors, gears, galvanic
cells/membranes of the vanadium redox flow batteries, thermo couples,
etc. They need to be designed and arranged so the robots can replace them.

where are you going to get a cpu in 700 years time? the ones in the
store will have diffused away.
It's quite common that normal computer parts work 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.

the problem is that they do... keeping them on ice (or in liquid helium)
might help enough
Also robots are usually idle and only active when there's something to
replace. The power supply, LN2 generator and sensors are more active.

I wonder how reliable rails or overhead cranes that carry robots and
parts around are. If replacing rails or overhead crane beams is
necessary and unfeasible, the robots will probably drive with wheels.

build them from stainless steel, with the tracks thick enough to withstand
1000 years of traffic; perhaps have a few spare cranes parked at the end of
the track.
Because we evolve. We update TV systems, switch from analog to digital,
etc. My cryo store just needs to do the same thing for a long time.

So in 1000 years the robots have sold their charges on ebay and are
playing online poker to buy electricity.
 

Rod Speed

edfair said:
Bernhard's question:
"What's infeasible about identifying defective parts and
replacing them?"
Arno's answer:
"You computer's POST cannot tell you if the POST itself is broken."
Think we went through this with IBM's 360 series:
triple redundancy, 2 of 3 answers must match.
No.

Everything is fine till the comparison circuit fails.

Don’t need a comparison circuit, and it can be replicated anyway.
 

Rod Speed

Ok, not the best example.

Yeah, it's nothing like evolution in fact.
The problem is that features and functions
get added to software faster than bug fixes.

The real problem is that computing is so complicated that
even with a system that never gets anything added at all,
you can never get rid of all bugs, most obviously with
those that are only seen in the most unusual situations.

And it's just not possible to debug something
like the year-2000 'bug' until it occurs either.
The inevitable result is a product with cancerous growth,

That's silly.
feature bloat,

Yes, what people want to do keeps being invented.

Most obviously currently with air gestures with touch systems.
and plenty of bugs.

Because computing is so complex.
I sometime suspect that this is intentional,

Yes, that's certainly true with new features.
as the only reason the users upgrade to the next latest
version is in the futile hope that it will have fewer bugs.

That's just plain wrong. Quite a few of them upgrade to
get stuff that isn't in what they currently have, or is much
better done than in what they currently have. That's been
seen with USB and wifi alone.
MS made that mistake with XP,

No, just ****ed up some of the stuff done with Vista that XP doesn't have.
which is actually quite good and reasonably usable,

But can obviously be improved on.
causing large corporate users to ask "why bother to upgrade"?

Plenty can see good reasons to upgrade.
13 years later, it's still going strong,

But Win7 is a lot better in a great raft of areas.
despite numerous failed attempts by MS to kill it.

MS has not tried to kill it, just tried to encourage
people to upgrade. They would be stupid if they
did not given where their revenue comes from.
Certainly, they're not going to make that mistake again.

They already did with Win8.
However, I'll make it easy for someone to prove me wrong.
Just name me one software package or major application
that either continues to be sold in its original version,

Even if someone could do that, it proves nothing about
your original claim. It's just a fact of life with revenues.

We see the same thing with cars and almost everything else.

There is even still some product improvement
with stuff as basic as cutlery, for a reason.
or which has become smaller, faster, or both?
Win7.

Offhand, I can't think of any that are even close.

Then you need to get out more.
Evolution and growth drives the software industry, because it works.

Evolution drives the software industry because that's what produces revenue.

It even happens with stuff like Linux which doesn't have any revenue, for a
reason.
 

Rod Speed


Essentially because there are very few human systems that
have ever been organised to keep doing the same thing like
that for 1000 years, when doing that requires quite a bit of
human effort.

Even the world's great religions can't be relied on to
keep going, a hell of a lot of them have in fact imploded
over time. Some in a lot less time than 1000 years.

It's just the nature of human activity.
I'm constantly replacing dried out electrolytic caps from antique radios.

Sure, but we have seen some other approaches to
preservation last fine for much longer than 1000 years.
Anything that moves (pots, controls, rheostats, variable
caps, speaker cones, dial cord, etc.) is a constant source
of maintenance problems. Any switch or relay without
hermetically sealed contacts eventually oxidizes, pits,
arcs, or melts.

Sure, but we have seen some other approaches to
preservation last fine for much longer than 1000 years.
My maintenance free battery is really a throw away battery.
I've had some experience working on process controllers for
the food canning business. It's amazing how much rotting
muck can find its way into sealed NEMA enclosures.

Sure, but we have seen some other approaches
to sealing things do fine for well over 1000 years,
most obviously stuff sealed in glass.
I don't think it's possible to make an autonomous
anything that will work even 50 years,

There are plenty of examples of stuff that has done that.
much less 1000.

There are examples of stuff that has done
that for much longer than that too, most
obviously with the pyramids.
Actually, the biggest problem is the human operators.

Which is why it's better to do without those if that's feasible.
The Three Mile Island and Chernobyl reactor meltdowns come to mind,
where the humans involved made things worse by their attempts to fix
things.

But there are plenty more examples where attempting
to fix things worked fine. You don't have to have such an
unstable system where ****ing things up results in disaster.

Plenty of ancient churches and mosques etc have lasted
much longer than 1000 years with humans fixing things
that go wrong. The main problem is setting up a system
where the humans want to bother for more than 1000 years.
Yeah, maybe autonomous would be better than human-maintained.

No maybe about it if it's feasible.
I once worked on a cost plus project, which
is essentially an unlimited cost system.

Nothing is ever an unlimited cost system.
They pulled the plug before we even got started
because we had exceeded some unstated limit.

So it wasn't in fact an unlimited cost system at all.
There's no such thing as "cost is no object".

That's clearly true of the world's great religions.
Repair how and using what materials?

The same materials that were used to make it in the first place.

ALL you need is a situation where those are everywhere.
Like I said before, do you have a CK722 transistor handy
to fix my ancient 6 transistor AM portable antique radio?

It's obviously possible to make more the
same way that the one that failed was made.
I was lucky and found one that was made in the early 1960's.
Ok, that's about 50 years. In another 50 years, such replacement
devices will only be found in museums and landfills.
<http://ck722museum.com>

But we can still make more the same way the original was made
if we have enough of a clue to document how it was made.

That is in fact done with plenty of medieval stuff, even
when it was not documented how it was made then.
You could make a plug-in work-alike replacement using Si-Ge technology.

And that's all you need when you want to keep it going for 1000 years.
However, that would require that you upgrade (evolve) your
spare semiconductor fab production line to switch from your
original technology, to the latest technology, which didn't exist
when the original was made. Or, you could keep cranking out
Ge replacement parts, until your supply of Ge runs out.

Or you can just work out what the failure rate is likely
to be, multiply that by 10, make that many and just
keep using the replacements from stock as they fail.
You gave the example of the termite and the alligator.
I provided links which demonstrate that both have
evolved and changed over the millennia, not just the last 1000 years.

But you did not show that what evolution had happened over
that time was NECESSARY for the survival of that species.
Many species, including man, have not evolved much in 1000 years.

So your claim that the system being discussed would have
to evolve to survive for 1000 years has blown up in your face
and covered you with black stuff very spectacularly indeed.
However, for every one that hasn't evolved,
there are literally thousands of insects, bacteria,
fish, birds, and other species that have changed.

ALL we need is examples of species that have survived
fine for 1000 years without any evolution that had
anything to do with its survival to prove that your claim
that evolution is crucial to its survival is just plain wrong.
The list of extinct and endangered species
should offer a clue as to how it works.
<http://en.wikipedia.org/wiki/IUCN_Red_List>

All that shows is that a lot of evolution happens.

NOT that it's essential for survival over 1000 years.

It would be a hell of a lot more surprising if we had
not seen a massive amount of evolution given that
we ended up with something as sophisticated as
humans from what was once just pond slime.
If it were that reliable, we wouldn't need ECC (error correcting) memory.

We don't with data engraved on nickel plates.
Dynamic RAM and hard disk drive densities are now at
the point where the electronics has to literally make
a guess as to whether it's reading a zero or a one.

That utterly mangles the real story.

And there is no reason why you have to push the envelope that
hard with something you want to last for 1000 years anyway.
ECC is a big help, but all too often, the device guesses wrong.
Even cosmic rays and local radioactive sources can cause soft errors.
<http://en.wikipedia.org/wiki/Soft_error>

Yes, but soft errors are easily avoided.

And just aren't a problem with data engraved on nickel plates etc.
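For electronics that do have to ride out soft errors, the single-bit correction that ECC memory performs can be sketched with a Hamming(7,4) code (a minimal illustration, not how any particular DRAM controller implements it):

```python
def encode(d):
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword
    (parity bits at positions 1, 2, and 4, 1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Locate and flip a single erroneous bit via the parity syndrome."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed error position; 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = encode([1, 0, 1, 1])
hit = list(word)
hit[4] ^= 1                      # simulate a cosmic-ray bit flip
assert correct(hit) == word      # the flipped bit is found and reversed
```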
I see these all too often in a rather weird way. When one of my
servers experiences an AC power line glitch, it often flips a bit.

Just a lousy design. Doesn't happen with mine.

And again, doesn't happen with data engraved on nickel plates.
The bit is usually not being used by the OS or by an application.
Several days later, the machine crashes, without any warning or
apparent cause, when it needs to read this bit, and finds it in an
unexpected state. I've also run memory error tests continuously
for several days on various machines (using MemTest86 and
MemTest86+) and found random errors every few days.

Again, just a system with a fault.
You can probably build something that is reliable and stable,

We know you can because even the Egyptians did that.
but it will involve low density, considerable redundancy,
and plenty of error checking and error correction.

Not if you store the data by engraving it on nickel plates.
Example please?

The data the Egyptians left behind.
1000 years ago, we were in the tail end of the dark ages.

And the data the Egyptians left behind was around MUCH earlier than that.
If the satellite business had a good financial reason for the birds to
last longer,

There isn't, because technology improves
so dramatically in even just 100 years.
I'm sure they would have done it.

Yes, the Egyptians clearly decided they needed
that for various reasons, and achieved it.
Right now, the lifetime of LEO and MEO birds
are fairly well matched to their orbital decay life.

Because that approach makes sense.
A 1000 year lifetime on the electronics, won't make much
sense if the bird falls out of the sky at 20-30 years. There
are numerous orbital decay calculators online.
Name another approach that isn't a circular
definition, such as "making it more reliable".

Just use an approach known to last much longer
than that like storing the data by engraving it on
nickel plates that need no maintenance at all.
What design philosophy should be followed
in order to produce a 1000 year design that
does NOT evolve in some way?

See above.
Ok, lets see if that works.

Of course it works. That's how plant and animal species
survive for MUCH longer than 1000 years.

There are plenty of trees that last for more than 1000 years.
The typical small signal transistor has an MTBF
of 300,000 to 600,000 hrs, or roughly 34 to 68 years.

Typical is irrelevant. You'd obviously use very long
lived technology if you want it to survive 1000 years.
I'll call it 50 years so I can do the math without finding my calculator.

I'll go for engraved nickel plates so I don't even need to calculate
anything.
MTBF (mean time between failures) does not predict the life of
the device, but merely predicts the interval at which failures might
be expected. So, over the 1000 year life of this device, a single
common signal transistor with a 50 year MTBF would be expected to
blow up about 20 times (1000 / 50).

So all you need is, say, 200 spares.
Assuming the robot has about 1000 such transistors,
you would need 20,000 spares to make this work.

So you stock 200,000 and survive 1000 years fine.
You can increase the MTBF using design methods
common in satellite work, but at best, you might
be able to increase it to a few million hours.

So there is no problem.
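For what the arithmetic is worth, the spares budget being argued over can be sketched in a few lines. All of the figures here (50 year MTBF, 1000 parts, a 10x safety margin) are assumptions lifted from the discussion, not measured data:

```python
# Back-of-the-envelope spares budget for a 1000 year mission.
# All figures are assumptions taken from the thread, not measured data.
MTBF_YEARS = 50       # assumed per-transistor mean time between failures
MISSION_YEARS = 1000
PART_COUNT = 1000     # transistors in the machine
MARGIN = 10           # stock this many times the expected failure count

# With a constant failure rate, expected failures = exposure time / MTBF.
expected_failures_per_part = MISSION_YEARS / MTBF_YEARS            # 20.0
expected_failures_total = expected_failures_per_part * PART_COUNT  # 20000.0
spares_to_stock = int(expected_failures_total * MARGIN)            # 200000

print(expected_failures_per_part, spares_to_stock)
```

Failures in a large population are roughly Poisson-distributed, so a generous multiple of the expected count makes running out of a given part over the mission life very unlikely.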
Great. You're going to seal an engineer inside the machine?

No, you use more than one to design the system in the first place.
You mean like the cloud storage servers that are erratically having
problems?

No, like the way the Egyptians chose to store the data they
wanted to keep, which lasted much longer than 1000 years.
Please provide a single server farm or data dumpster
that operated on a sealed building basis.

Look at how the Egyptians did theirs.
The larger systems take storage reliability quite seriously.
For example, Google's disk drive failure analysis:
<http://static.googleusercontent.com...ch.google.com/en/us/archive/disk_failures.pdf>

Irrelevant to how the Egyptians did theirs.
Geosynchronous satellites are unlikely to suffer from serious orbital
decay. However, they have been known to drift out of their assigned
orbital slot due to various failures. Unlike LEO and MEO, their
useful life is not dictated by orbital decay. So, why are they not
designed to last more than about 30 years?

Because the technology evolves so much over that time that
you don't care whether they last longer; they become so
hopelessly obsolete that they get replaced for that reason alone.

We don't do it like that with stuff as
basic as books that we want to keep.
Please provide a few examples of devices that were
INTENTIONALLY designed to last more than 1000 years.

The stuff kept in Egyptian pyramids.
The 10,000 year clock is a good example. Got any more?

The stuff kept in Egyptian pyramids.
The methods that are used for satellite life extension
(reloadable firmware) are directly relevant to doing
the same on the ground in a sealed environment.
No.
At the risk of being repetitive, the reason that one needs
to improve firmware over a 1000 year time span is to allow
it to adapt to unpredictable and changing conditions.

The Egyptians didn't bother and theirs
survived for much longer than that.
True. However, not providing a means of improving or
adapting the system to changing conditions will relegate
this machine to the junk yard in a fairly short time.

It didn't with the machine the Egyptians made, the pyramids.
All it takes is one hiccup or environmental "leak", that
wasn't considered by the designers, and it's dead.

It wasn't with the machine the Egyptians made, the pyramids.
Stupid machines don't last and brute force is not
a long term survival trait. Ask the dinosaurs.

I'll look at the pyramids instead.
Various countries are doing a great job of making rare minerals both
difficult to obtain and expensive for political and financial reasons.
A commodity doesn't need to be scarce in order to be difficult to obtain.

They aren't in fact at all difficult to obtain, except for
a tiny subset that is potentially dangerous, like uranium.
Example please.

The Egyptian pyramids.
 
U

upsidedown

On 05/10/2013 07:44 AM, Jeff Liebermann wrote:

It's quite common that normal computer parts work 10 years. High
reliability parts probably last 50 years. Keep 100 spare parts of each
and they last 1000 years, if they don't deteriorate in storage.

Did you see the pictures of the Fukushima reactor control room ?

So 1970's :)

But generally, also in many other heavy industry sectors, with the
actual industrial hardware being used for 50-200 years, you might
still keep sensors, actuators, field cables and I/O cards that are
over 30 years old, while upgrading higher level functions, such as
control rooms, to modern technology.

The geostationary satellite lifetime is limited by the amount of
station keeping fuel on board. The earth is not a perfect sphere and
hence, sooner or later, the satellite would be moving in a figure of
eight, as seen from earth.

If the figure is larger than the ground antenna beam width, active
satellite tracking is needed, which would be unacceptable for at least
home receiver antennas. For these reasons, the satellite position has
to be maintained within a degree or two in both E/W as well as N/S
direction, which requires station keeping fuel, ultimately determining
the usable lifetime of a geostationary satellite.
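A rough illustration of that station-keeping fuel budget, using the Tsiolkovsky rocket equation. All numbers here are assumed, typical values (about 50 m/s per year for north-south station keeping, a 300 s specific-impulse chemical thruster, a 2000 kg dry mass), not data for any particular satellite:

```python
import math

# Rough GEO station-keeping fuel estimate via the Tsiolkovsky rocket
# equation. All numbers are assumed, typical values, not mission data.
G0 = 9.81             # m/s^2, standard gravity
ISP = 300.0           # s, assumed chemical thruster specific impulse
DV_PER_YEAR = 50.0    # m/s/yr, rough north-south station-keeping budget
LIFETIME_YEARS = 15
DRY_MASS = 2000.0     # kg, assumed satellite dry mass

dv_total = DV_PER_YEAR * LIFETIME_YEARS          # delta-v over the whole life
mass_ratio = math.exp(dv_total / (ISP * G0))     # Tsiolkovsky: m0/m_dry
propellant_kg = DRY_MASS * (mass_ratio - 1.0)    # fuel that must be launched

print(round(dv_total), round(propellant_kg))
```

With these assumptions the bird must carry on the order of a quarter of its dry mass in station-keeping propellant for 15 years, which is why fuel, not electronics, usually ends a geostationary satellite's life.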
Because we evolve. We update TV systems, switch from analog to digital
etc.

Since satellite transponders are simple "bent pipes", switching from
analog (FM) to DVB-S might have required some backoff in the TWT.
Going from DVB-S to DVB-S2 might require some further backoff. This of
course drops out some of the smallest receiving antennas.

The ana/digi switchover might require some higher power TWTs and/or
narrower satellite beams, otherwise the A/D change is not that
dramatic.
 
U

upsidedown

Yeah, it's nothing like evolution in fact.

If we are using the evolutionary model, several sites with different
technologies must be used.

Some of these sites are successful, some are not, but of course we do
not know in advance, which system will survive and which will fail.
 
