If you have a real-world constraint optimization problem of a size that
has any real meaning to its purpose, I'd hope your toolkit would be
something of substance like CPLEX or OSL, and include an appropriate
model management system. Anything else and you're wasting your time and
your computer's. OTOH, if you've never done any LP or Mixed Integer
Programming and you need results, it'd be a good idea to get help/advice
from someone who has.
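
For anyone who's never seen one, here is what a toy LP looks like in
code. The two-variable model is made up, and scipy's linprog (assuming
a reasonably recent scipy) stands in here for the CPLEX-class tools
under discussion:

from scipy.optimize import linprog

# Made-up toy LP:  maximize 20x + 30y
#   subject to     x + 2y <= 14
#                  3x +  y <= 18
#                  x,  y   >= 0
# linprog minimizes, so the objective is negated.
c = [-20, -30]
A_ub = [[1, 2],
        [3, 1]]
b_ub = [14, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)   # optimum at x=4.4, y=4.8, objective 232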
Constraint-based programming, functional languages, and in fact
spreadsheets in general are possible paradigms for more general
types of parallel programming.
Since I'm not in a group filled with theoreticians, I won't go to the
web to check that I'm using all the terminology in a way that wouldn't
get me a red mark on my final exam, okay?
Spreadsheets have the appealing property that in non-iterative
programming (not what happens in a solve), each memory location is
written just once. That single-assignment property is a magic door in
terms of the theoretical properties of the programming model.
If you have a model with the correct properties (and that fact is
checkable in an automated way), you can start anywhere and just start
computing. Thread creation becomes trivial. When you run into a piece
of data that isn't available yet, you just put that thread to sleep and
wait for the data to become available.
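
Just to make that concrete, here's a toy sketch of my own (not any
existing package): each cell is a write-once slot, and a thread that
needs a value simply sleeps until some other thread fills it in.

import threading

class Cell:
    """A write-once spreadsheet cell: readers block until it's filled."""
    def __init__(self):
        self._ready = threading.Event()
        self._value = None

    def set(self, value):
        if self._ready.is_set():
            raise ValueError("cell already written (single-assignment)")
        self._value = value
        self._ready.set()

    def get(self):
        self._ready.wait()   # sleep until the data becomes available
        return self._value

# Hypothetical three-cell sheet: C = A + B.  The C thread is started
# first; it just sleeps until A and B arrive, so evaluation order
# doesn't matter.
a, b, c = Cell(), Cell(), Cell()
threading.Thread(target=lambda: c.set(a.get() + b.get())).start()
a.set(2)
b.set(3)
print(c.get())   # 5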
People _have_ thought about this property of non-iterative
spreadsheets. They've also done work on ways to get around the
write-once limitation, but I personally think that's a bad idea, and I
think it will be more productive to think in terms of a conceptually
infinite spreadsheet.
I'm happy to admit that I haven't done any actual work in trying to
use the methods of OR as an approach to concurrent programming. I
stumbled upon the idea quite by accident because the acronym CSP has
two common expansions (Communicating Sequential Processes and
Constraint Satisfaction Problem), and it turns out that both of them
are applicable to concurrent programming.
Because this is a problem of great commercial interest, I'd be willing
to bet that the theoretical foundations for this kind of work are more
solid than they are for most other models of concurrent programming.
It's probably a very good place to go prospecting for good work that's
already been done, and I'm always happy to find such work, at least if
I can understand it.
It may be that what I'm talking about is all just old hat to someone
who does this for a living. If so, maybe this is my lucky day.
If I get lucky, I might find a package out there in a language I can
understand that illustrates all the nearly-magical properties that an
infinite spreadsheet would have. If you know of such a thing, I'd be
really interested in hearing about it. If I don't find such a thing,
I will be pursuing the idea from the other end of the telescope,
probably starting nearly from scratch.
<shrug> People have done it and people have sold the tools to do it.
Granted, most such models are too complex for such a simplistic approach and
need a modeling language. End users, however, like the spreadsheet idiom
for viewing and manipulating their raw data. There are even people who
believe that the failure of Lotus Improv was a terrible disaster.
The spreadsheet occupies a nearly unique place in the history of the
development of the computer, as far as I'm concerned. Aside from
providing the killer app for PCs, it made mathematics that would
otherwise have been inexplicable to the average user immediately
transparent. The appeal as a conceptual/pedagogical model is
undeniable. It just isn't a very efficient way to do computation.
M$ is the last company that is going to be able to supply optimization
software - as is often the case when things get complex, the Solver in
Excel was not done by them.
No big surprise. Neither did they do the speech-recognition software
in Word. For all the scathing criticism I've levelled at Microsoft,
they know how to shop for quality. What they do with it when they get
it is another matter.
You are just a naif about optimization and model management then. I don't
care what temperature that raises with you. There are many "models" which
are so bad they are laughable - usually written by the grant hunters for
the usual nefarious purposes; the optimization models used in business are
usually good, used rather well and perform an extremely valuable function
in the realm of strategic and tactical planning... IME.
I know of at least one example where models are routinely used in
business and where I'd have a hard time imagining how people would do
without them: businesses use them in price setting. How much skill is
involved in actually using one, I wouldn't have a clue, but I can
imagine it being used in the way that you seem to imply: as a routine
tool for solving day-to-day problems, where the user of the model gets
a lot of practice and probably benefits very little from understanding
all that much about how the model works.
The level at which I have seen models used that simply horrifies me
(and it's a horror that extends to practically every realm in which
computer models are used) is that people who don't understand the
model, the mathematics, or the limitations of either use a computer
model to make sophisticated, non-routine decisions. I've seen it in
business, and I've seen it in science and engineering. You can call me
a naif if you want, but I've seen people in action, and I haven't liked
what I've seen in most cases. The power of the spreadsheet as a
visualization tool is also its weakness. It gives people the illusion
that they understand something and that they've thought things through
when they haven't.
It's called shooting at the moon. M$ does not have the depth of experience
to even realize how far short of the mark they really stand. Basically,
for both M$ and Intel, 3rd party OEMing/distribution will not gain entry to
that market. That the PC allowed M$/Intel some penetration there was
simply inertia in the computer marketplace - it does not mean that they are
now in a position of guiding that market forward.
Both companies are in a very odd position. They both manifestly
understand some aspects of running a business because they actually
run one themselves. The fact that they have been so successful at
those businesses may lead to a certain myopia (of the kind that I
think was plainly the undoing of DEC).
I'm willing to give both of them more credit than you are. The
players strike me as hard-nosed people who are as unsparing of
themselves as they are of others. The absence of complacent arrogance
doesn't guarantee that they will be able to see beyond the limits of
their own success, but I would hesitate to bet against either player
in any enterprise they undertake. That is to say I would _hesitate_,
not that I would be completely unwilling.
Sorry, but the toolset you're going to build is not even in the
starting blocks when the race has already been run. You have no
conception of the effort involved. You can't just collect a bunch of
feel-right data and throw it at a "toolset". If you compare xFFT tools
against a Mathematical Programming package, you may find that the
nucleus of the core of the kernel of the system contains some code
which bears a resemblance; other than a few loops which do matrix
operations, it's not even close.
I was groping around for examples of software that takes account of
its environment (cache size, the actual nature of the parallel
environment it is to run in, etc.). In that respect the FFT packages I
mentioned are pretty advanced in comparison to other software I've
seen. They may not come close to what you need to solve your
problems, but they are an example of an approach you can take to make
software adapt to a hardware environment that isn't known ahead of
time.
There are established algorithms for the things which have to be done in
decision support, optimization in particular. For practical models, they
involve fairly intensive spurts of FP but, as I'm sure you're aware, matrix
sparsity is an important consideration. The results obtained so far for
Itanium are not favorable.
Searching for Itanium together with sparse matrix or matrices turns up
only a handful of papers in the ACM digital library. Google, though,
finds eleven pages of results. Maybe I find your attitude toward
Itanium programming as naive as you find my attitude toward decision
support software.
It would be foolish of me to say that there's anything there or not
there, because I haven't looked. Sparse matrices almost inevitably
involve indirect addressing. Just exploring and documenting the
possible strategies that could be used to solve the indirect
addressing (and the related pointer-chasing) problem with Itanium in
its current incarnation and possible future incarnations could keep an
entire team of researchers busy full time.
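
To make the indirect-addressing point concrete, here's a toy
compressed-sparse-row matrix-vector multiply (the data is made up).
The load x[col[j]] depends on an index that was itself just loaded,
which is exactly the pattern that's hard to schedule statically:

def csr_matvec(val, col, row_ptr, x):
    # y = A*x for a matrix stored in CSR form.
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += val[j] * x[col[j]]   # indirect load through col[j]
    return y

# 3x3 example:  [[2, 0, 1],
#                [0, 3, 0],
#                [4, 0, 5]]
val     = [2.0, 1.0, 3.0, 4.0, 5.0]
col     = [0,   2,   1,   0,   2]
row_ptr = [0, 2, 3, 5]
print(csr_matvec(val, col, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]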
The place this grabbed me is that I know how expensive that kind of
research is. Suppose you could invent a feedback algorithm that you
could just turn loose with a random set of training data, and it would
find the best approach automagically. People have done such work with
algorithms in general, but it's a long way from being competitive with
talented human programmers at the moment.
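
Here's the flavor of the thing as a toy sketch (the variants and the
workload are made up): time a few candidate implementations on
representative data and keep whichever wins on this particular machine.

import timeit

def sum_loop(data):
    total = 0.0
    for v in data:
        total += v
    return total

def sum_builtin(data):
    return sum(data)

def autotune(variants, training_data, repeats=5):
    # Empirically time each candidate and return the fastest,
    # in the spirit of the FFT packages' planners.
    best, best_time = None, float("inf")
    for fn in variants:
        t = min(timeit.repeat(lambda: fn(training_data),
                              repeat=repeats, number=100))
        if t < best_time:
            best, best_time = fn, t
    return best

chosen = autotune([sum_loop, sum_builtin], list(range(10000)))
print("selected:", chosen.__name__)   # whichever won on this machine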
Considering what you've revealed about what you know about decision support
software....<shrug>
Do put-downs help communication?
I don't have an agenda. This started out as a discussion about the
practicality of retrain/feedback compilation for software in general. I
still maintain that it does not fit the distribution model for
commercial software, and if you expect ILOG/CPLEX et al. to supply
their source code to end users so they can retrain the performance for
the various model types they have, it ain't gonna happen - it's, umm, a
trade secret.
This subject has been thrashed through on comp.arch. Another
possibility is to distribute code in some kind of intermediate
representation and have a compiler that could handle the intermediate
representation. The intermediate representation can even be
obfuscated, if necessary. Apparently, such things have already been
done with commercial software, although not with Itanium, as far as I
know.
If you think that a static retrain at system build time is going to
work - well, it'll work, but err, I thought the whole point was to get
the best out of the hardware for any given code/dataset... enter OoO!!
IOW, for me, retrain/feedback is a commercial red herring, which may,
OTOH, interest a few researchers who develop their own code. They
don't live in the business/commercial world which pays the bills for
Intel.
If you're going to run a whole bunch of iterations on the same model,
changing only parameters, I'd be fascinated to see how a binary would
evolve. As a practical matter, you'd have to worry about the overhead
of the time and effort involved in rebuilding, although that can be
completely automated. It is possible, even with non-open source
software as described above. If you're running on dedicated iron,
then there isn't much that OoO can do for you that constant retraining
wouldn't. It's the unpredictability of the environment that OoO can
handle and even constant retraining can't.
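
The rebuild cycle itself is easy enough to script with ordinary
profile-guided optimization. A minimal sketch, assuming gcc, with
model_run.c and its arguments as hypothetical stand-ins for the real
solver and its parameter sweep:

import subprocess

def rebuild_with_profile(src="model_run.c", exe="model_run"):
    # 1. Build an instrumented binary.
    subprocess.run(["gcc", "-O2", "-fprofile-generate", src, "-o", exe],
                   check=True)
    # 2. Run it on a representative parameter set to collect a profile.
    subprocess.run(["./" + exe, "--params", "training.dat"], check=True)
    # 3. Rebuild, letting the compiler use the recorded profile.
    subprocess.run(["gcc", "-O2", "-fprofile-use", src, "-o", exe],
                   check=True)

rebuild_with_profile()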
If Itanium survives, I expect speculative threading to solve a big
slice of the problems that full-up OoO would solve, and much earlier
than even limited OoO will be available.
RM