ROTFL! It was not the first encounter to begin with. Nor would I care.
It was not. It was about managing risk of software failures.
It was not, either. But apparently it has too many levels for you to grasp.
Nonsense. You apparently can't read what was written. You only
extrapolate from your lack of a clue. "Lucked out" is simply a
projection of your imagination.
Get your own advice. And be consistent. Your claims about financial
institutions' risk management and the crisis contradict your statement above.
ROTFL! First be consistent in what you talk about. Then buy a clue.
First one for free: risk estimation is just that (estimation), and an
estimate can come in over or under.
The general problem here is that you fall into the same trap about
probability and statistics that nearly everyone does, and it's why the
idea of economic "science" is so appalling. The idea of computer
"science" is nearly as appalling.
Science relies on the ability to reproduce results by way of
experimentation. If an experiment fails to reproduce a result, it
might be because one or the other of the experiments was flawed in
some way. You can examine the apparatus and the data. You can try
again. Others can try again.
In economics, you can do no such thing. *Maybe* your clients did well
because they have better methods, or maybe they were just lucky.
There is absolutely no way to tell, and no experiment you can perform
to prove or disprove either hypothesis, because conditions in
economics are never going to be the same way twice on a macro scale.
If you had said that you understood probability and were confident of
your own methods of estimating risk, I'd have had to admit that it's
your field, you're closer to the action than I am, and you have to
live with the consequences of your own misjudgments.
Instead, you referred to the judgments of "experts"--your customers in
finance. The entire world of financial engineering and economic
"science" is on the defensive right now because the entire methodology
of estimating risks and placing financial bets failed on a grand
scale.
Even in catastrophe, there will be some winners. There will always be
winners and losers in finance, but the fact that one group happened to
be a winner in some particular situation proves nothing. That the
entire system nearly collapsed and *didn't* collapse only because of
extraordinary and morally questionable intervention is another
matter. In retrospect, as with a collapsed building, it's easy to see
the structural flaws, which is a different matter from bad luck.
The point is that the methods of risk estimation failed to identify
the structural flaws and that that failure itself appears to be
structural.
The general problem is that you can never know if you left something
out. That's true even in laboratory science, but if a completely
different group sets up another experiment and gets nearly the same
result, you have some confidence that you have science and not mere
coincidence.
Climate "science" has the same problem. We can't perform experiments
with weather and climate (at least not yet). Yet people are forever
looking at irreproducible subsets of data and claiming to draw
conclusions from them, just as you are looking at an irreproducible
subset of data and claiming to be able to draw conclusions from it.
[...]
I was addressing your earlier expressed visions of how the software
world should be.
It is widely agreed that there is no silver bullet, but we could be
doing much better than we are now and the world would be much better
for it.
Since the world of software is dominated by overconfident snots like
you, I see very little chance of improvement, near or far term, but
that reality has little to do with whether we could be doing better or
not.
You have little grasp of reality.
The world of software is as it is because what you advocate is simply
not effective. It's plainly better, instead of spending money to bring
software error levels to your liking, to spend it on something
productive. That only pays for itself where the cost of failure is
extreme.
For example, theft runs at significant levels in supermarkets, yet
increasing security is simply more expensive than accepting that some
percentage of goods will get stolen. Similarly, bank robberies are
pretty frequent in Europe (in major cities there is one bank robbery
every 5 days), yet as the average amount stolen is less than one
year's wage for a security person, it simply does not pay to increase
security.
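The cost-benefit reasoning above can be sketched as a break-even
comparison. All the figures here (robbery frequency, average haul,
guard's wage) are illustrative assumptions of mine, not data from the
thread:

```python
# Illustrative break-even comparison: absorb expected losses vs. pay for
# extra security. Every number below is a hypothetical assumption.

def expected_annual_loss(incidents_per_year: float,
                         avg_loss_per_incident: float) -> float:
    """Expected yearly loss if incidents are simply absorbed."""
    return incidents_per_year * avg_loss_per_incident

def security_pays_off(expected_loss: float,
                      annual_security_cost: float) -> bool:
    """Extra security is rational only if it costs less than the loss it prevents."""
    return annual_security_cost < expected_loss

# A branch robbed roughly once every five years, average haul 20,000.
loss = expected_annual_loss(incidents_per_year=0.2,
                            avg_loss_per_incident=20_000)
guard = 45_000  # one year's wage for a security person (assumed)

print(loss)                            # 4000.0
print(security_pays_off(loss, guard))  # False: cheaper to absorb the losses
```

On these assumed numbers, a guard costs more than ten times the
expected loss he prevents, which is the thread's point: below some
cost-of-failure threshold, prevention does not pay.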
The analogy is not a good one. If someone takes a loaf of bread and
doesn't pay for it, your entire loss is the cost of putting that loaf
of bread on the shelves. If someone cracks your credit card
processor's security, the potential loss is simply incalculable.
The problem with software reliability, just as with placing financial
bets, is that there is no way of foreseeing or even bounding the
possible costs of a mistake. That you continue to insist you can
place bounds on the possible losses exposes the shallowness of your
thinking.
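The "no way of bounding the losses" point is the classic heavy-tail
problem: when losses follow a fat-tailed distribution, the average of
past losses bounds nothing. A quick simulation (the Pareto shape
parameter and sample size are my own illustrative choices, not data
from the thread) shows a single draw dominating the entire history:

```python
import random

# Sketch of why past losses don't bound future ones when the tail is heavy.
# A Pareto distribution with shape alpha <= 1 has an infinite mean: the
# largest single loss keeps dominating the running total no matter how
# much history you collect. (alpha and n are illustrative assumptions.)

def pareto_sample(alpha: float, n: int, seed: int = 42) -> list[float]:
    """Draw n Pareto(alpha) losses (minimum 1) via inverse-CDF sampling."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

losses = pareto_sample(alpha=1.0, n=100_000)
total = sum(losses)
worst = max(losses)

# One observation accounts for a large share of all losses ever seen,
# so "average past loss" is not a bound on anything.
print(f"largest single loss is {worst / total:.0%} of the total")
```

With a thin-tailed distribution the same ratio shrinks toward zero as
history accumulates; here it does not, which is why estimating risk
from a finite record of such losses gives false comfort.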
People in the business whom I respect say things similar to what
you're saying, but they say it with regret and humility: we get crappy
software because no one is willing to pay for good software.
Robert.