Cheap memory?

Scott T. Jensen

Scott T. Jensen said:
I was posting for possible unconventional ideas
on how to get it.

For example, the following is a suggestion someone emailed me after we
exchanged a few emails:

"Few inexpensive computers are going to let you have more than a small
number of gigabytes of memory today, the market just hasn't reached the
point where I can go buy a cheap motherboard and put in even 4 gigabytes,
let alone 64 gigabytes. Fortunately hard drives are cheap, and with add-in
IDE cards you can drop up to 8 drives into a box without much trouble. That
gives you perhaps 1.6 terabytes of disk for perhaps only $1000 if you hunt
carefully for bargains.

You might consider building your own cluster of machines. People are
getting enough experience with home-built clusters that there is some
understanding of how to do this. That can let you put 4 or even 32 or more
inexpensive machines in the same room, all communicating with 100 megabit or
even gigabit ethernet in a secure and dependable fashion. The cost of
networking has come down so far that this is feasible. And this can allow
you to use cheap motherboards. Fry's puts their cheap and sleazy ECS
motherboard & 2+ GHz AMD processor sets on sale for $60-$100 on a regular
basis. Those will easily accept 1.5 gigabytes of RAM for under $300 more.
And they have built in ethernet. The price of ethernet switches has fallen
so far that you can gang dozens of machines together with a switch for only
$100 or so. Get a good fan for the processor, a cheap case, and a good power
supply, and I'm putting 3 GHz number crunchers in a grid for under $400 each.
That has compute power, at a price, that I could never have dreamed of,
and cannot buy in a single machine today. It is also convenient in that I
can just keep buying more, $400 at a time, and dropping them onto the task.
And if the price of memory would drop back to 2/3 or 1/2 of what it is at
the moment they would be even more of a bargain."
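A quick back-of-the-envelope check of the figures in that email (a minimal sketch; the per-drive price is back-calculated from "8 drives ... 1.6 terrabytes ... $1000", not quoted from any vendor):

    # Sanity check of the email's disk math (assumed: 200 GB drives,
    # price inferred as $1000 / 8 drives = $125 each).
    drives_per_box = 8
    drive_size_gb = 200
    drive_cost = 1000 / drives_per_box        # ~$125/drive implied
    total_tb = drives_per_box * drive_size_gb / 1000
    print(f"{total_tb:.1f} TB of disk for ~${drives_per_box * drive_cost:.0f}")
    # -> 1.6 TB of disk for ~$1000, matching the email's estimate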

I'd appreciate input on this suggestion. Thanks.

Scott Jensen
 
David B. Held

Scott T. Jensen said:
[...]
For example, the following is a suggestion someone emailed me after
we exchanged a few emails:
[...]

That is hardly an "unconventional" suggestion. The main problem is
that you haven't specified any targets. You haven't said how much
you can spend, what capacity/performance ratio you need/want, how
much RAM/mass storage you need (minimum/ideal), what kind of
processors you want, or anything else that is of relevance to such a
system. And there is no single "fastest" design. It depends on your
application. So if you want people to offer constructive advice, you
have to describe the characteristics of your app. Think of it this way.
Suppose I said: "I'm designing a vehicle. I want people to tell me
how to get a lot of cheap materials. Be creative." Could you answer
that question in a useful way?

Dave
 
Scott T. Jensen

David B. Held said:
The main problem is that you haven't specified any targets.

Here's the best I can do for that. I've run this past my lead programmer
and he thinks it's fine. He also thinks it's hard to know what we'll need
until we try to implement it. Given that...

Processor: It doesn't really matter. Faster would be nicer, but it isn't
crucial to the AI concept. I plan to get the fastest one at the price point
just before it spikes up because it is close to the fastest one on the
market. However, if this will enable the RAM to be larger, then I'd be
willing to put money into it to get one that can handle the most RAM.

RAM: As much as possible for as little as possible.

Hard-drive: As much as possible for as little as possible.

Video card: Doesn't really matter. Need one for the human interface but
practically anything will do.

Parallel processing isn't needed.

Cannot be done over the internet. Neither distributed computing nor off-site
hard-drive (memory leasing). The reasons are many. Slows down the process too
much; if net connection goes down, AI goes down; susceptible to viruses
(especially if done distributed computing style); security; and other
reasons ... more of which I'm sure you can come up with yourself.

Can be done in a cluster style ... a LAN of computers. However, would only
do this if there was a huge cost savings.

Budget: Since this computer will only be used for the AI project, there's no
guarantee that the AI project will succeed, and I don't see any other use
I'd have for it otherwise, so I want to spend as little as possible. Would
like that to be under $2,000, but would consider spending as much as $10,000
... if there's a really good reason to go above $2,000.

Scott Jensen
 
Al Dykes

(huge list of groups edited out)

If you are actually going to try to develop something, the cost of the
manpower is going to dwarf the cost of your first PC to try to run it.
A thousand bucks gets you lots of computrons today, and the machine
you buy 6 months from now will be faster, for constant bucks.
 
Curt Welch

Scott T. Jensen said:
Would
like that to be under $2,000, but would consider spending as much as
$10,000 ... if there's a really good reason to go above $2,000.

Scott Jensen

Do you have any interest in talking about your AI idea? It might save you
$2000 or make you think about some important issues you have overlooked.
I'd be happy to chat off line if you would like.

Are you trying to solve the general AI problem of making a machine
intelligent, or are you just working on some AI application which you think
will have potential value?
 
ancra

Scott T. Jensen said:
Here's the best I can do for that. I've run this past my lead programmer
and he thinks it's fine. He also thinks it's hard to know what we'll need
until we try to implement it. Given that...
Processor: It doesn't really matter.

Why doesn't that matter? ...Or, why do you think it doesn't matter?
If the processor indeed doesn't matter, and your programmer doesn't see
any problems handling RAM, he is not considering anything beyond 1.5-2 GB -
I can tell you that!
And in that case you're fine!
Faster would be nicer, but it isn't
crucial to the AI concept. I plan to get the fastest one at the price point
just before it spikes up because it is close to the fastest one on the
market. However, if this will enable the RAM to be larger, then I'd be
willing to put money into it to get one that can handle the most RAM.

Ok, then processor does matter.
RAM: As much as possible for as little as possible.

Just curious: How do you propose to access/address that RAM?

I think Joe Legris already posted this highly crucial question. You
didn't seem to get the significance of it.
If your programmer doesn't have anything to say about that - in fact,
if he didn't volunteer something about it...
People considering very large memory requirements usually understand
they need a 64-bit system.
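The 64-bit point follows directly from the address-space arithmetic (a minimal illustration; the 2-3 GB per-process figure assumes a typical 32-bit OS of that era):

    # A 32-bit pointer can address at most 2**32 bytes; a typical 32-bit
    # OS gives a single process only a 2-3 GB slice of that, which is why
    # wanting "as much RAM as possible" pushes you toward a 64-bit system.
    print(f"32-bit address space: {2**32 / 2**30:.0f} GiB")
    print(f"64-bit address space: {2**64 / 2**30:.2e} GiB")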

ancra
 
Scott T. Jensen

Curt Welch said:
Do you have any interest in talking about your AI idea?
It might save you $2000 or make you think about some
important issues you have overlooked.

Right now, I'm researching what has already been done in the AI field to see
if my idea has been tried and proven wrong. I've read "Mind Matters:
Exploring the World of Artificial Intelligence" by James Hogan, read
numerous web articles on AI, and am currently reading "Artificial
Intelligence: A Modern Approach" (2nd Ed.) by Stuart Russell and Peter
Norvig. From what I've read so far in Russell's book (as of this post, I'm
on page 73), it doesn't look like I will find my idea in his tome. In
reading his history of AI, it would seem AI researchers haven't yet thought
of exploring the approach that I believe will lead to AI. However, I'll try
to finish reading it to be sure ... though I'm not math-friendly and that
handicap has increasingly slowed my rate of progress through the
book.

If, after reading Russell's book, I still haven't come across my AI idea,
I'll begin searching AAAI's archives for it. Focusing first on what has
been published since the release of the second edition of Russell's book and
then, if that proves unfruitful, hunting through the rest of the archive for
it. However, I'm not looking forward to paying the hefty AAAI membership
fee to do so and am hoping to find a friend, a friend of a friend, or
someone local that is an AAAI member who will let me use their account to
access the archives, such as a kind-hearted local professor at University of
Wisconsin - Madison, Edgewood College, or Madison Area Technical College ...
as I live in Madison, Wisconsin, USA.

If I do not find my AI concept in the AAAI archives, I will try to find an
AI professor or researcher within reasonable driving distance of where I
live that feels s/he has enough knowledge of what has been tried in the AI
field to be a good sounding board for my idea ... and one that I feel I can
trust not to run off with it. For you see, I've grown rather fond of my
little idea and would like to be the one that brings it to life. Yes,
that's a bit selfish on my part, but I never said I wasn't human. ;-)

Currently, while I do my reading, I'm mentally operating under the assumption
that my AI idea has been looked into, and thus I am just trying to find out when
it has been done and why it failed or why the basic concept was shown to be
faulty. My idea seems obvious to me and thus I assume it would be to others
... or at least to one AI researcher somewhere at some time in the past sixty
years. However...

I have three close friends who are experienced professional programmers and
who say they've kept up on the AI field. I've presented my AI idea to each.
After a rather thorough grilling of me by them, they've stated that they
haven't heard of anyone taking my approach and find it interesting. We've
even had a few evening sessions where they've started to chart out what
would be needed for the AI program. Software, not hardware. My take on how
to bring about my AI is that we just need to tweak several already-existing
computer programs and get them to work together to do what I
want them to do. I am quite comfortable at this stage with any
inefficiencies this might allow. However, my friends feel at least one of
the components for my AI will need to be built from scratch ... if not all
the components, as one of the three maintains. These programmer friends are
the programmers I've been referring to in this thread and who are willing to
help me do this project. That and they want me to just tell them what each
component is to do and let them figure out how they'll get that component to
do that for me. They keep telling me not to be concerned about what goes on
inside the "black box" as my contribution is setting up the rules for how
the AI is to think once all the components are set to go. Personally, I
feel like the captain of a ship with a loyal crew that wants to go wherever
I want to go, but none of whom want me to help out in the boiler room to
get the ship underway ... and who tend to get upset every time I poke my head
into the room to see how they're doing. *laugh* However, before I ask them
to volunteer their time any further, I've decided to try again to find if my
AI idea has already been attempted and proven wrong.

Just recently, I've emailed a few AI professors at MIT, Stanford, and
Carnegie-Mellon University. I selected these universities as they have
well-known AI labs. I read each college's AI department/lab's faculty
homepages and tried to find the faculty members whose expertise and/or area
of current research is closest to what I feel my AI concept would call home.
I then emailed them asking for reading recommendations that would help with
my search for my AI idea. The professors at MIT and Stanford never replied
back. Fortunately, the two professors I contacted at CMU did. One
suggested that I read Russell's book and the other suggested I dig through
the AAAI archives as outlined above. Both recommended that if, after following
their suggestions, I still don't find my idea, I present it to an AI
professor or researcher (whom I trust) to see if they've ever heard of it
being attempted. One of the two CMU AI professors then said that if that AI
expert says it hasn't been attempted, I should go ahead and "just
implement the thing and show us all." The other professor said that if the
AI expert said my idea hasn't already been attempted and holds promise that
I should pursue a doctorate in AI.

Say my search gets me this far; I'm not sure I'd pursue a doctorate in
AI. From my readings of AI graduate programs, they require a lot of math
and computer languages, neither of which interests me and both of which I
know I'd struggle through due to that lack of interest ... if not -- in the
attempt -- die of numbness of the mind. I'm a psychologist by nature
(possessing a humble little BA in psychology with a minor in marketing ...
having worked for many years now as a marketing consultant, and now being in
the process of launching my own talk show, www.scottjensenshow.com) and
barely know how to turn on my computer. That and my programmer friends
understand my limitations and are willing to provide the technical knowledge
and expertise to see what develops by attempting to bring about my AI idea.
My friends want us to do it along the lines of how the Wright Brothers
brought about the age of aviation. However...

Working with my friends to bring about my AI will mean the construction of
the AI will take a long and sporadic time since they'll be working on it in
their free time ... when they have free time. Whereas working in an AI lab
with dedicated researchers helping me would speed up the timetable
significantly. However, I'd only do that if I could get a full scholarship
(a.k.a. "full ride") to pursue a doctorate in AI. Then there's the issue
that I'd be a "bit" old for a graduate student at the age of 40. However,
still being single, I think I could handle the social ramifications of that
... especially if I could just come in and work on my idea with the help of
an AI professor and a couple of her/his lab's talented computer science
graduate students. That and not have to take any college courses in
computer science and math ... though I'd be up for attending some
graduate-level psychology courses. If I was able to get all that, I would
then not need to buy a computer for this project since I would then be
allowed to use one of that university's supercomputers. Yes, yes, I know this
paragraph is just wishful daydreaming. Anyway...

I am looking at this search for my AI concept as merely an intellectual
exercise and a nice way to motivate myself to educate myself further on this
topic. If I learn that my AI idea has been attempted and proven wrong, I
will be a bit down but not terribly so. First, I am still expecting to find
out that it has already been done and tossed aside. Second, I would very
much want to know why it is a bad idea. And, third, I feel the mere pursuit
of knowledge has a value in and of itself.

And while I do the above research, I've decided to address the hardware
aspect of this project and thus the reason for this thread.
I'd be happy to chat off line if you would like.

I appreciate the offer. If you would email me your credentials, I will look
forward to receiving them. Also, are you within reasonable driving distance
of Madison, WI, USA? I'd prefer to meet face to face. That and I'd like
you to sign a confidentiality agreement before I tell you my idea.
Are you trying to solve the general AI problem of making
a machine intelligent, or are you just working on some AI
application which you think will have potential value?

I'm attempting to do the first one.

Scott Jensen
 
Scott T. Jensen

Scott T. Jensen said:
Here's the best I can do for that. I've run this past my lead
programmer and he thinks it's fine. He also thinks it's hard
to know what we'll need until we try to implement it. Given
that...

Processor: It doesn't really matter. Faster would be nicer,
but it isn't crucial to the AI concept. I plan to get the fastest
one at the price point just before it spikes up because it is
close to the fastest one on the market. However, if this will
enable the RAM to be larger, then I'd be willing to put
money into it to get one that can handle the most RAM.

RAM: As much as possible for as little as possible.

Hard-drive: As much as possible for as little as possible.

Video card: Doesn't really matter. Need one for the human
interface but practically anything will do.

Parallel processing isn't needed.

Cannot be done over the internet. Neither distributed
computing nor off-site hard-drive (memory leasing). The
reasons are many. Slows down process too much; if net
connection goes down, AI goes down; susceptible to
viruses (especially if done distributed computing style);
security; and other reasons ... more of which I'm sure you
can come up with yourself.

Can be done in a cluster style ... a LAN of computers.
However, would only do this if there was a huge cost
savings.

Budget: Since this computer will only be used for the AI
project, there's no guarantee that the AI project will
succeed, and I don't see any other use I'd have for it
otherwise, so I want to spend as little as possible. Would
like that to be under $2,000, but would consider
spending as much as $10,000 ... if there's a really good
reason to go above $2,000.

The following is what someone, who wishes to remain anonymous but is willing
to allow me to post what they sent so I can get input from others on it,
emailed me about the above:

"Now, price seems to be the limiting factor on your project, that's sane.

Since you have a rough upper bound on the money to spend on the project it
seems that more ram means less hard drive, and vice versa. Fortunately you
can often use some of one of these for some of the other. But hard drives
are about 60 cents/gigabyte while ram is about 160 dollars/gigabyte. So I'd
suggest:

$30 power supply
$70 Fry's special sale ECS motherboard & AMD 2+ GHz processor
$30 fan (or they run specials with board & 2.5 GHz proc & fan for $100)
$80 512 meg memory
$480 4x200 gigabyte hard drives
$5 cheap pci video card
$5 cheap mouse
$5 cheap keyboard
$0 scrounged cabinet to put all this in
$0 scrounged free operating system
$0 scrounged network stuff to tie these together

That's about $700. Buy one now to get your programmer busy. Getting the
first one going will be lots more work than the rest. Every time he starts
to run out of room buy him another one until you have succeeded or concluded
the project won't work or have run out of money.

Then if the project didn't work take all the drives out of these, do scans
of each drive to ensure that they are all fine and sell them for a plausible
fraction of what you paid. Clean tested drives that size are easy to sell
and ship and even a year from now you will be able to sell these without
trouble. (Save the boxes they came in for shipping)

My testing shows there is only a 20% increase in performance when going from
the AMD 2000 to the AMD 3200 for CPU-intensive number crunching; the AMD
2600 Barton looks like the place to be right now. Buy big loud room fans to
blast air against the motherboard/heatsink/CPU fan; these run hot and you
don't want them hot.

So for your $2000 budget you get 2.4 terabytes with three fairly fast CPUs
to access it and share the processing load; 100 megabit networking to
communicate is built into the motherboard and is fairly zippy. Going to
gigabit networking is only a few dollars more and doesn't give vast
increases in performance, but you might notice. At the very top end of your
budget you might have 12 terabytes and 15 CPUs. And for any $ figure you
choose you could cut your storage in half if you wanted to triple your
memory, for the same price.

I don't think you can buy used servers that will beat these specs and price,
but you could certainly poke around and see. There have been lots of
bankruptcies in the internet business and you might stumble on a bargain if
you hunt long enough and find where to look."
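A quick check of that poster's arithmetic (a minimal sketch; the prices are the ones listed above, and the 3- and 15-machine counts are the email's own rounding of budget divided by per-box cost):

    # Sum the quoted parts list and see how many boxes each budget buys.
    parts = {"power supply": 30, "motherboard + 2+ GHz CPU": 70, "CPU fan": 30,
             "512 MB RAM": 80, "4 x 200 GB drives": 480, "video card": 5,
             "mouse": 5, "keyboard": 5}
    per_box = sum(parts.values())
    print(f"per box: ${per_box}")             # $705 -- "about $700"
    for budget in (2000, 10000):
        print(f"${budget}: ~{budget / per_box:.1f} boxes")
    # ~2.8 boxes at $2000 (the email rounds to 3: 3 x 0.8 TB = 2.4 TB, 3 CPUs)
    # ~14.2 boxes at $10000 (rounded up to 15: 15 x 0.8 TB = 12 TB, 15 CPUs)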

They then quickly sent me the following add-on note:

"Something I did think about this afternoon. The proposal has some degree
of flexibility in it.

You could, for example, only drop in additional hard drives as the
programmer filled them up. That would help keep costs in line and be a
monitor on progress. Likewise for extra memory sticks.

You could also make another change: you could jam more into a single case.
Instead of using three cases to hold the memory and drives you could
incrementally bump up memory half a gigabyte at a time and drop in pci hard
drive controller cards and drives. That saves buying more motherboards,
processors, cases, power supplies, and some network hardware. But really, with the
extra cost of drive controller cards and more expensive cases and power
supplies, it looks like the overall cost will still be just about exactly the
same. And you won't get the extra processing power. If you could find a
case and power supply that would handle it, without paying a fortune for it,
you could get up to 1.5 gigs of memory fairly easily without paying the
outrageous 1 gig/stick memory prices, and cram up to perhaps 4
terabytes of drives into a single huge server case."

Scott Jensen
 
Curt Welch

Scott T. Jensen said:
Currently while I do my reading, I'm mentally operating under the
assumption that my AI idea has been looked into thus I am just trying to
find out when it has been done and why it failed or why the basic concept
was shown to be faulty.

A very common problem in AI is that people think they have come up with a
new idea, but after enough study, it becomes obvious it is just a different
way of looking at something which has already been well studied. You
don't recognize it as being old, because you are just using different words
to talk about it.

However, most old ideas have never been proven faulty. They have just not
yielded the type of fruit which was hoped for. That could be because the
idea is just a very weak one to begin with, or it could be that the idea
was good, but the old approach was just lacking some important twist, or
new understanding.

So, no matter where your idea may lie in the range of everything AI, if you
have a new way to look at an old problem, you could still discover
something significant and new.

The important thing for you to do is exactly what you are doing: studying
all the old work on AI so you can understand for yourself how your ideas
are similar to, and different from, what has been done in the past.
My idea seems obvious to me and thus I assume it
would be to others ... or at least one AI researcher somewhere at
sometime in the past sixty years. However...

Everybody has a different perspective and if you just happen to have the
right one, you will uncover something new.
I have three close friends who are experienced professional programmers
and who say they've kept up on the AI field.

What I find is that there is a huge difference between people that have
been "keeping up" with AI vs. trying to build AI. You can't really
appreciate AI until you have had the ideas, like you are currently having,
built them, found out they didn't work, tried for years to make them work,
thrown them away, come up with new ideas, tried for years to make those work,
thrown those away, and repeated until you have AI.

Very few people are doing this in the world. Many programmers like your
friends have been exposed to past work, but I would be willing to bet that
none of them ever once came up with their own idea about how to solve AI,
and tried to build it.

A huge percentage of people working in AI have long since given up on (or
never even tried) solving the general AI problem, and are instead working
on mastering some interesting sub-domain problem. Many who have made AI
their work are even forced to do this, because if you want to continue to
get paid, you had better be producing results. And if the only problem you
are working on is the "big one", and you haven't produced AI, it's going to
be hard to get funding. So they identify sub-domain problems that seem
important to the big problem, and make good progress studying the
sub-domain.
I've presented my AI idea
to each. After a rather thorough grilling of me by them, they've stated
that they haven't heard of anyone taking my approach and find it
interesting. We've even had a few evening sessions where they've started
to chart out what would be needed for the AI program. Software, not
hardware. My take on how to bring about my AI is that we just need to
tweak several already-existing computer programs and get them to work
together to get them to do what I want them to do.

The one thing that makes me doubt your approach without even knowing what
it is, is the fact that you do not seem to be a strong programmer (or did
you say you don't program at all?). Solving AI is an engineering problem.
It's a very hard engineering problem. You need very talented and sharp
engineers to find solutions which nobody else has ever been able to find
after 50 years of looking at the problem.

Also, people who are not engineers, or scientists familiar with behavior
research, tend to be easily fooled by human behavior. They just have no
clue what intelligence really is. This is the problem we all face by being
"too close" to the problem. We think we know what intelligence is just
because we think "we" are intelligent. The truth is, we aren't
intelligent. Our brains are. And even though we think we know who "we"
are, we don't. It's all a very confusing thing to get your arms around.

AI projects fail not because the solution was mis-founded, but because the
problem was mis-defined. They didn't understand what intelligence is.

And that continues to be the real problem of AI - finding the right
definition of what intelligence is.

So the real question you need to ask yourself is not whether you have the
correct solution, but whether you have the correct definition of the problem.
I am quite
comfortable at this stage with any inefficiencies this might allow.
However, at least one of the components for my AI my friends feel will
need to be built from scratch ... if not all the components, as one of
the three maintains.

Programmers like to program. They don't like to waste time dealing with
somebody else's code. They will always push you to let them create new
code. That's just what we do. :)
These programmer friends are the programmers I've
been referring to in this thread and who are willing to help me do this
project. That and they want me to just tell them what each component is
to do and let them figure out how they'll get that component to do that
for me. They keep telling me not to be concerned about what goes on
inside the "black box" as my contribution is setting up the rules for how
the AI is to think

The black box issue is key. Because that's the definition of the problem.
You have to describe the problem of AI in terms of what the black box is
going to do. You have to take the extensional stance when you define the
problem. And, as I said above, getting the correct definition of the
problem is the AI problem.

If you get the black box definition wrong, then it makes no difference
whatsoever what you choose to put in the box. If you have framed the AI
problem incorrectly, you have no hope of solving it.

And that takes us to the next point. You wrote "how the AI is to think".

By saying those magic words, you have already framed your problem. You
have already made a decision that the black box must think. Or your
programmers have made that decision. And thinking about AI as a "thinking
machine" has long been shown to be the totaly wrong way to frame the AI
problem. This gets back to what I said about how we, as humans, are too
close to the subject of the problem (humans) to see it for what it really
is. We think we have free will. We think we "think". We think we use
logic to make decisions. We think we have goals. We think too much and
understand too little at times.

What you have to answer before you waste any time or money talking about
what you are going to put inside the black box, is what intelligence really
is. For any black box approach, you have to describe how the box
interfaces with the world, and you have to describe what the purpose of the
stuff inside the black box is. Not in terms of what you put inside the
box, but in pure extensional terms of what the black box is going to do.
And, ignoring what will need to be inside the box, you have to explain to
yourself why the black box, as you have defined it, is "intelligent", and
not just some typical computer doing typical computer things.

Once you are sure you have the right definition of intelligence in the form
of a black box description, then you can go about trying to figure out how to
build the inside of the box.
once all the components are set to go. Personally, I
feel like the captain of a ship with a loyal crew that wants to go
wherever I want to go, but none of which want me to help out in the
boiler room to get the ship underway ... and tend to get upset everytime
I poke my head into the room to see how they're doing. *laugh* However,
before I ask them to volunteer their time any further, I've decided to
try again to find if my AI idea has already been attempted and proven
wrong.

Just recently, I've emailed a few AI professors at MIT, Stanford, and
Carnegie-Mellon University. I selected these universities as they have
well-known AI labs. I read each college's AI department/lab's faculty
homepages and tried to find the faculty members whose expertise and/or
area of current research is closest to what I feel my AI concept would
call home. I then emailed them asking for reading recommendations that
would help with my search for my AI idea. The professors at MIT and
Stanford never replied back. Fortunately, the two professors I contacted
at CMU did. One suggested that I read Russell's book and the other
suggested I dig through the AAAI archives as outlined above. Both
recommended that if after doing their suggestions, I still don't find my
idea, that I present it to an AI professor or researcher (that I trust)
to see if they've ever heard of it being attempted. One of the two CMU
AI professors then said that if that AI expert says it hasn't been
attempted that I should go ahead and "just implement the thing and show
us all." The other professor said that if the AI expert said my idea
hasn't already been attempted and holds promise that I should pursue a
doctorate in AI.

Saying my search gets me this far, I'm not sure I'd pursue a doctorate in
AI. From my readings of AI graduate programs, they require a lot of math
and computer languages. Neither of which interests me and both of which
I know I'd struggle through due to that lack of interest ... if not -- in
the attempt -- die of numbness of the mind. I'm a psychologist by nature
(possessing a humble little BA in psychology with a minor in marketing

Ah, your psych background might help you understand us for what we are - if
you understand what people like Skinner were saying. That might be even
better than good engineering skills. If you can turn your knowledge into a
good black box description of what it is to be intelligent, then you can
let the programmers try to figure out what to put inside the box. If you
don't have the skills to create a good extensional black box specification,
then you will have problems getting your programmers to do anything
productive - unless they have the interest and skills to turn your ideas
into a good black box specification.
... and having worked for many years now as a marketing consultant and am
now in the process of launching my own talk show,
www.scottjensenshow.com) and barely know how to turn on my computer.
That and my programmer friends understand my limitations and are willing
to provide the technical knowledge and expertise to see what develops by
attempting to bring about my AI idea. My friends want us to do it along
the lines of how the Wright Brothers brought about the age of aviation.
However...

Working with my friends to bring about my AI will mean the construction
of the AI will take a long and sporadic time since they'll be working on
it in their free time ... when they have free time. Whereas working in
an AI lab with dedicated researchers helping me would speed up the
timetable significantly. However, I'd only do that if I could get a full
scholarship (a.k.a. "full ride") to pursue a doctorate in AI. Then
there's the issue that I'm be a "bit" old for a graduate student at the
age of 40. However, still being single I think I could handle the social
ramifications of that ... especially if I could just come in and work on
my idea with the help of an AI professor and a couple of her/his lab's
talented computer science graduate students. That and not have to take
any college courses in computer science and math ... though I'd be up for
attending some graduate-level psychology courses. If I was able to get
all that, I would then not need to buy a computer for this project since
I would then be allowed use one of that university's supercomputers.
Yes, yes, I know this paragraph is just wishful daydreaming. Anyway...

Many people have become addicted to the dream of AI. I'm one of them.
There are many others. Most are considered "kooks" because the AI problem
has a bad habit of looking easy. But the mistake most of the "kooks" make
is not understanding the importance of getting the definition of
"intelligence" correct. They all assume they "know" what intelligence is
just because they are human, and start to "solve" the problem based on
the fact that they obviously know what it is to be intelligent, so they
understand the problem.

AI is hard, because no one understands the problem. That's why it's so damn
hard to find the solution. We don't know what the problem is - past the
far too vague idea of "being smart".
I am looking at this search for my AI concept as merely an intellectual
exercise and a nice way to motivate myself to educate myself further on
this topic.

Trying to solve AI is the best way to educate yourself on the problem.
If I learn that my AI idea has been attempted and proven
wrong, I will be a bit down but not terribly so. First, I am still
expecting to find out that it has already been done and tossed aside.
Second, I would very much want to know why it is a bad idea. And, third,
I feel the mere pursuit of knowledge has a value in and of itself.

And while I do the above research, I've decided to address the hardware
aspect of this project and thus the reason for this thread.


I appreciate the offer. If you would email me your credentials, I will
look forward to receiving them.

I don't have any "credentials". :) I'm just an old fart like you addicted
to the dream of solving AI. I work on it as a hobby on the side. I've been
doing it on and off for close to 25 years.

I'm a professional programmer by trade. I founded and currently run a
small dot com business for a living.

I waste far too much time on AI, but like I said, it's an addiction.
Also, are you within reasonable driving
distance of Madison, WI, USA?

I'm in the Washington DC area. If you are in the area and want to get
together to chat, feel free to look me up. You can find contact
information on my web page: http://curtwelch.com/
I'd prefer to meet face to face. That and
I'd like you to sign a confidentiality agreement before I tell you my
idea.

I have no problem with that.

I think I know how to solve AI. All good kooks do. And we tend to believe
that all other kooks' ideas are crazy because they are not the same as
ours. If we get together and talk, what is most likely to happen is I will
spend an hour trying to explain to you why your framing of the definition of
intelligence is wrong (unless you happened to frame it the way I did).
But, I can also give you a lot to think about in terms of why or why not
your approach may or may not work.

I am not well read on the AI literature, so I can not give you good
pointers to other work that might be similar to yours. The contacts you
are making in the AI field should be good for that.

I talk about my ideas in comp.ai.philosophy which is where I'm posting
from. So you can read some of my posts there and get a good flavor of my
approach (learning machines). I don't have any good summary of my work on
the web I can point you to. Sorry.
I'm attempting to do the first one.

That's the fun one to work on. But it's also the one that makes people
know you are a "kook" without even knowing what you are doing.
 
David B. Held

Scott T. Jensen said:
[snipped hardware discussion]

The main thing I disagree with that poster on is going with the Barton
core. If anything, the Barton is overrated. You get more bang/buck
with Thoroughbred B. Or, at least, you did about 6 months ago. But
at the least, the Barton did not live up to the top billing.

The other observation is that this hardware talk is completely
irrelevant. When you said you are a psychologist, I realized why
you are worried about hardware. The hardware is the bicycle shed.
It's something that's easy to talk about.

I agree with almost all of what Curt said except for the part where you
should read up on the state of the art. No need to do that. You aren't
even to the point where you could appreciate the state of the art. The
idea that you could build AI without so much as knowing a single
computer language is fairly amusing. But really, you don't need to
know a computer language to build an AI. You do, however, need to
understand how a computer operates to appreciate the difficulty of
your project, so I will make the following suggestion.

Pick a problem within the solution domain offered by your hypothetical
AI. For instance, solving a syllogism. Draw a diagram of how your
AI will work, and create a series of "snapshots" on paper that show
how information flows through your system to solve the problem.
Start out at a high level of description, and then increase the detail
until you think you could get 5 year olds to act out your virtual AI. If
you can't trace out the execution of even a simple problem, then I can
pretty much guarantee that your idea has no merit. If you *can* trace
the execution, then show it to your programmers, and ask them how
hard it would be to turn it into code.
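A minimal sketch of where that exercise bottoms out (a hypothetical toy, not anything from Scott's design): even the textbook syllogism only runs once every fact, rule, and matching step has been spelled out explicitly.

    # Toy forward-chaining over one rule: "if X is a man, X is mortal".
    facts = {("man", "socrates")}
    rules = [(("man", None), ("mortal", None))]   # None stands for "any subject"

    def derive(facts, rules):
        new = set(facts)
        for (if_pred, _), (then_pred, _) in rules:
            for fact_pred, subject in facts:
                if fact_pred == if_pred:
                    new.add((then_pred, subject))
        return new

    print(("mortal", "socrates") in derive(facts, rules))   # True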

Odds are, they will either code it, and you will have a primitive portion
of your AI to look at, or they will point out where you haven't specified
enough detail. My bet is on the latter. What you will find is the
incredible pedantry of a CPU. You will realize that many things you
take for granted require elaborate specification and detail. But in the
rare chance that you actually manage to solve a trivial problem with
your idea, take it to the next level. Pick a harder problem from your
solution domain, and try the same thing.

At some point, you will realize that you don't have the expertise to
draw out the solution in detail. At that point, you might be motivated
to learn from the masters, including math, programming, and AI.
Then reading will bring some benefit. But right now, reading is like
studying Greek literature without knowing Greek. Even if you can
pick out a few words, you won't appreciate what they mean without
context. And that context only comes by doing.

It's fairly easy to come up with solutions where the building blocks
are hand-waving and hemming and hawing. Anyone can do that,
including dilettantes. It's much harder to put your money where
your mouth is and turn that vaporware into a product. Don't spend
a penny on an expensive computer. Solve a simpler problem on
existing PCs, and teach yourself something about how hard AI
really is. No, the simpler problem will not be the full AI. I understand
that. Yes, scaling issues may make it irrelevant to the final solution.
I understand that too. But what you have to understand is that AI
is *hard*. And you won't appreciate that until you try to solve the
simpler problems. Even if they are trivial and not entirely related
to what you think is the "final solution".

The benefit of a formal AI training is not that you learn about AI. It's
that you learn how hard it is. You learn about the theoretical
usefulness of the A* and Min-Max algorithms, and then you learn
about the limitations of those algorithms. You begin to see just
how narrow most solutions really are. And you begin to get a
better appreciation of just how amazing a human brain really is.
You learn about the Resolution algorithm, and you think that you
could solve any logical problem in the world. And then you
implement it, build up a large proposition database, and see that
it would both be too slow to be effective on a large scale, and too
rigid to be effective in a practical environment. These are lessons
that you don't learn by reading, but by doing.
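For what it's worth, here is roughly what "implement it" means for the Resolution case Dave describes (a bare-bones propositional sketch with no efficiency tricks; the pairwise clause comparisons are exactly what blows up on a large proposition database):

    from itertools import combinations

    def resolve(c1, c2):
        """Resolvents of two clauses (frozensets of literals like 'p', '~p')."""
        out = []
        for lit in c1:
            neg = lit[1:] if lit.startswith("~") else "~" + lit
            if neg in c2:
                out.append(frozenset((c1 - {lit}) | (c2 - {neg})))
        return out

    def unsatisfiable(clauses):
        clauses = set(map(frozenset, clauses))
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for r in resolve(c1, c2):
                    if not r:              # empty clause: contradiction
                        return True
                    new.add(r)
            if new <= clauses:             # nothing new: no refutation found
                return False
            clauses |= new

    # "p implies q", "p", and "not q" together are contradictory:
    print(unsatisfiable([{"~p", "q"}, {"p"}, {"~q"}]))   # True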

So do yourself a favor. Instead of jumping in head-first, take some
baby steps. Pick a simple problem, and try to solve it. Only then
can you begin to appreciate the types of issues that will arise in a
full-scale AI.

Dave
 
Scott T. Jensen

Hi Curt,

I found your reply quite interesting and entertaining. Thanks for giving
it.

Now let me address some points you raised and clarify some things about
myself and my approach ... without giving away the secret recipe. ;-)

Curt Welch said:
What I find is that there is a huge difference between people
that have been "keeping up" with AI, vs trying to build AI.

Yes, I agree, and that is why I'm continuing my research to find out if my
idea has been done before. I value my programmer friends' input and skills,
but do realize they might be lacking in complete overall and historical
knowledge ... which I'm sure they would be the first to admit.
In fact, none have objected in the slightest to me making another attempt to
see if my idea has been tried before.
The one thing that makes me doubt your approach without
even knowing what it is, is the fact that you do not seem to
be a strong programmer (or did you say you don't program
at all?).

Not in the slightest.
Solving AI is an engineering problem. It's a very hard
engineering problem. You need very talented and sharp
engineers to find solutions which nobody else has
ever been able to find after 50 years of looking at the
problem.

I agree ... though I think we might disagree about which discipline those
"engineers" should come from. Or rather, who should be in charge and who
should just be support. I think history will show that this is the main
reason why AI didn't develop over these last fifty years. The reason being
that it is currently dominated by the wrong academic discipline and people.
For it has always dumbfounded me why programmers think they can come up with
an AI. No one would ever go to them with a psychological problem and yet
they think they can solve the ultimate psychological problem. Baloney. I
would place AI research under psychology and merely use computer programmers
as tech support. The programmers sitting along the wall and only talking
when spoken to by the team of psychologists at the conference table ... or a
small team of programmers working for and underneath just one psychologist.
The black box issue is key. Because that's the definition
of the problem.

Of each component, yes.
You have to describe the problem of AI in terms of what
the black box is going to do.

If you mean by that statement: "given this input, it should come out this
way", yes. Not specifics, but a standardized way of processing input. For
example, I insert a banana and the banana comes out peeled. Any fruit I put
into it comes out peeled. Another component would be used to slice it.
Another component would drop it into a dish. Still another would pour Half
& Half on it. Etc. Etc. Etc.
And that takes us to the next point. You wrote "how the
AI is to think".

By saying those magic words, you have already framed
your problem. You have already made a decision that
the black box must think. Or your programmers have
made that decision.

No, I was just trying to get across a complex idea in what I thought was an
easy-to-understand and brief way. Essentially, what I meant by
"how the AI is to think" is how the components process input. All of the
processes do not take place in any single component. Only a little in each.
What's important is how all the components are used and how the overall
process works. Those are the rules I was referring to. Those are the rules
I'll contribute to the project. The rules each component follows when
processing their input before passing it along to the next component and
from where they are to acquire the initial input.
And thinking about AI as a "thinking machine" has long
been shown to be the totally wrong way to frame the AI
problem.

That would depend on how you define "thinking". I define thinking as simply
a process that uses memory (to include rules) to process input, add to
memory, and achieve goals.
Ah, your psych background might help you
understand us for what we are - if you understand
what people like Skinner were saying.

Behaviorism is no longer the golden child of psychology or artificial
intelligence research. Cognitive psychology is the current darling.
However...

I'm a behaviorist and view any meaningful contribution from cognitive
psychology as having a strong underpinning in behaviorism. Behaviorism
being "accepted" as a contributing factor to cognitive psychology but only
to a limited degree. Why this is the case in psychology today is because
behaviorism demystified the mind and many religious people didn't like that.
Behaviorism strips the mind of the "soul", "the touch of god", "the spirit",
"the thing that will live beyond death". Cognitive psychology adds a layer
of mysticism on top of behaviorism. Cognitive psychology is the mystics'
answer to behaviorism ... as limited-socialism is the now-discredited
communists' answer to capitalism.
That might be even better than good engineering skills.

I would agree.
If you can turn your knowledge into a good black box
description of what it is to be intelligent, then you can
let the programmers try to figure out what to put inside
the box.

That's essentially the approach my programmer friends and I are taking. I
lay down the requirements I need the component to meet, and they build me a
component that will meet those requirements.
If you don't have the skills to create a good extensional
black box specification, then you will have problems
getting your programmers to do anything productive...

I totally agree on this point.
...- unless they have the interest and skills to turn your
ideas into a good black box specification.

They have both, but they still want me to give them the specifications of
each component. Being the originator of the AI concept, I have a far easier
time adjusting the idea and explaining it more intrinsically when finer
details/definitions are needed. They just don't want me helping them build
the components for it. That's where they come in and I stay out.
Most are considered "kooks" because the AI problem
has a bad habit of looking easy.

The word "easy" would never be a word I'd use to describe what we're
attempting to do. If it was, I wouldn't need my friends and could just do
it myself. To me, "obvious" and "easy" are not necessarily
inter-changeable.
AI is hard, because no one understands the problem.

Made even harder by the wrong academic discipline and individuals trying to
understand and tackle the problem. Because of this...

What I've been thinking of late is that if I get all the way to the point of
telling my idea to an AI expert and s/he tells me that my AI idea hasn't
been attempted before and sounds promising, that I will ask them for a
letter of recommendation. A letter of recommendation that will vouch for my
AI idea as being original and promising and then encourage the receiver of
the letter to offer me a full scholarship to pursue a doctorate in AI.
However, the people I would send that letter to won't be at any computer
science graduate program, but a psychology graduate program. And I'd seek
not only a full scholarship but also funding to hire two to three
programmers to assist me in my quest. Then again...

I know of at least one owner of a computer software company that would, after
receiving such a letter of recommendation, very likely set me up with what
I'd need to make my attempt. I wouldn't get a doctorate in the process, but
would get a very decent salary and be able to hire better help to assist me.
From my own personal experience, there's a lightyear of difference between a
wet-behind-the-ears freshly-minted computer science graduate and a seasoned
professional. Then again...

I also know of a foundation that might just give me a grant based on such a
recommendation. Then I might be able to have the best of both worlds. Use
that grant to give me essentially a full scholarship at a psychology
graduate program (after using the recommendation to get accepted into their
program) and be able to hire really good help.

The above are just a few things I've been chewing over as I do my reading.
Reading which will reveal tomorrow that my idea has already been tried and
discarded, thus making all this a moot point. :)
I'm in the Washington DC area. If you are in the area
and want to get together to chat, feel free to look me
up. You can find contact information on my web page:
http://curtwelch.com/

I may just do that if I'm ever in town.

Scott Jensen
 
Curt Welch

Scott T. Jensen said:
Hi Curt,

I agree ... though I think we might disagree from which discipline those
"engineers" should be from. Or rather, who should be in charge and who
should just be support. I think history will show that this is the main
reason why AI didn't develop over these last fifty years. The reason
being that it is currently dominated by the wrong academic discipline and
people. For it has always dumbfounded me why programmers think they can
come up with an AI.

I think you are very correct on that last statement. And certainly there
are some other behavioral psychologists here in c.a.p who would agree with
that statement.

I'm sure the reason is that since we understand computers, and have a
brain, it just seems it should be "easy". :)

But, after 50 years, AI people have figured out it's not so easy. And a
lot of AI just fractured into interesting algorithm work, and stopped
trying to pretend they knew enough about the brain to work on the "big"
problem. But others have turned to fields like psychology to get the
missing answers. The amount of cross-pollination going on is improving. But
some would say that the mis-thinking AI people are just spreading bad ideas
about human behavior into other fields, instead of learning what they
should be from the experts. There could be truth to that.

The good thing however is that we have more and more people from different
fields looking at the problem. And that can only be good in the long run.
You need as many different people with different perspectives looking at a
hard problem as possible.

I don't think you understand how hard engineering these solutions is, even
if you have an understanding of what the brain is doing. But, if you work
on it, you will either solve it, or get that understanding, so you are
doing the right stuff.
 
Scott T. Jensen

David B. Held said:
The other observation is that this hardware talk is
completely irrelevant. When you said you are a
psychologist, I realized why you are worried
about hardware. The hardware is the bicycle shed.
It's something that's easy to talk about.

No, if you look at all the newsgroups the original post was cross-posted to
you will discover that all but one of them are hardware newsgroups. Thus
why I'm focused on (or, as you say, "worried about") hardware. And I was
simply posting in hope of possibly receiving some unconventional ways of
tackling the hardware aspect of the project. I like keeping my options open
and soliciting new and different ideas. While my team of programmers
thinks posting here is a waste of time, I disagree. However, since it only
"wastes" my time, they don't really care or object to me doing so. As this
thread has developed, I've forwarded some of what I believe are the best
posts and private emails to my lead programmer and he's discussed their
merits or lack thereof with me. They haven't yet changed what hardware he
is recommending I get, but he has said he'll look into a few of the hardware
suggestions that I've passed along. That and there's no real rush in
assembling the hardware since that will only be purchased and assembled
after the program components are ready to be installed.

And I'd like to again thank all those that have offered hardware suggestions
and constructive critiques of them, both by public post and private email.
They are very much appreciated.

Scott Jensen
 
David B. Held

Scott T. Jensen said:
[...]
I agree ... though I think we might disagree from which discipline
those "engineers" should be from. Or rather, who should be in
charge and who should just be support. I think history will show
that this is the main reason why AI didn't develop over these last
fifty years.

The idea that AI engineers don't study psychology is ridiculous,
insulting, and extremely naive on your part.
The reason being that it is currently dominated by the wrong
academic discipline and people. For it has always dumbfounded
me why programmers think they can come up with an AI.

It's simple. They understand something that psychologists don't:
computers. Until you understand what it takes to make a computer
do something, you will never be able to appreciate the full
complexity of AI. And I'm fully convinced that what you think is
"the solution" will turn out to be painfully underspecified when the
time comes to put it to the test. It's like a race car driver saying:
"I know how to build the world's fastest car because I drive them
every day. What do those stupid engineers know?" It's a
non-sequitur.
No one would ever go to them with a psychological problem and
yet they think they can solve the ultimate psychological problem.
Baloney.

What's baloney is that practice in clinical psychology would tell you
how to build an intelligence. It's baloney because building AI is *not*
the "ultimate psychological problem". It's not even a *psychology*
problem to begin with! It's an *engineering* problem. Which is
why *engineers* pursue it, and not *psychologists*. And that's why
*engineers* design and build race cars, and not *drivers*.
I would place AI research under psychology and merely use
computer programmers as tech support.

And that's why Rodney Brooks runs the MIT AI Lab and not you.
He has numerous successful projects under his belt. What do you
have? A half-baked idea? The fact that you are trying to solve the
hardware aspects of the problem before building so much as a
software prototype demonstrates that you have an extremely naive
view of how to approach AI.
The programmers sitting along the wall and only talking when
spoken to by the team of psychologists at the conference table
... or a small team of programmers working for and underneath
just one psychologist.

LOL!!! You are a thinly disguised AI crackpot in psychologist's
clothing. I haven't met a single psychologist that begins to have
the tools or experience necessary to tackle AI. That's why Steven
Pinker writes books, and leaves AI to...who? That's right, the
*engineers*. Antonio Damasio is a neurologist, not a psychologist,
but he's in a far better position to design an AI than a traditionally
trained psychologist; but even he would be a terrible AI engineer,
because he thinks that neurons are too low a level to understand
the processes of the mind. In some sense, he's right, of course.
But his position makes him unwilling to get his hands dirty and
actually experiment with neural networks.
[...]
What's important is how all the components are used and how
the overall process works.

Uhh...no. How each subsystem works is just as important as how
they are connected together. You seem to think that they are just
simple black boxes that your programmers and just whip together
like a chicken pot pie. In fact, you'll find that the limitations of a
simple component affect how the system as a whole works. And
I'll bet money that you will end up saying: "Why doesn't it do this?
Why doesn't it do that?" And the programmers will say: "That's
really hard."
Those are the rules I was referring to. Those are the rules
I'll contribute to the project. The rules each component follows
when processing their input before passing it along to the next
component and from where they are to acquire the initial input.

If you think you are going to specify a set of precise, hard-wired
rules, I think you'll find that your AI is not very general. And an AI
that doesn't adapt usually doesn't leave a big impression on
people, either. Such systems tend to be viewed more as
engineering solutions than AI.
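
To make the disagreement concrete, here is a minimal Python sketch of what a
hard-wired pipeline of black-box components with fixed hand-off rules might
look like. Everything in it (the component names, the rules, the tiny lexicon)
is hypothetical and invented purely for illustration; it is not anyone's
actual design.

# A hypothetical sketch of a fixed pipeline of black-box components.
# Each "rule" says how one component transforms its input before
# handing the result to the next component in line.

def perceive(text):
    # hypothetical first component: break raw input into tokens
    return text.lower().split()

def associate(tokens):
    # hypothetical second component: look each token up in a fixed table
    lexicon = {"hello": "greeting", "bye": "farewell"}
    return [lexicon.get(token, "unknown") for token in tokens]

def respond(labels):
    # hypothetical third component: produce output from the labels
    return "I recognised: " + ", ".join(labels)

PIPELINE = [perceive, associate, respond]   # the hard-wired ordering

def run(initial_input):
    data = initial_input
    for component in PIPELINE:
        data = component(data)              # each component's output feeds the next
    return data

print(run("Hello there"))   # -> "I recognised: greeting, unknown"

Note how anything outside the fixed lexicon falls through as "unknown": the
rules do exactly what they were told and nothing more, which is the brittleness
being pointed out above.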
[...]
Behaviorism is no longer the golden child of psychology or
artificial intelligence research.

That's because it's artificially constrained in a non-useful way.

Cognitive psychology is the current darling.

That's because it's a much more pragmatic approach.
[...]
I'm a behaviorist and view any meaningful contribution from
cognitive psychology as having a strong underpinning in
behaviorism.

It's easy to say: "Idea X came from behaviorism." But that's just
playing partisan politics with psychology. Does it really matter if
it "came from" behaviorism or cog. sci.? If the idea works, use
it. Arguing about whether it came from paradigm X or paradigm
Y is like arguing whether string theory or quantum gravity is the
better physical theory. Right now, we just don't know enough to
say. So run with what works.
[...]
Cognitive psychology adds a layer of mysticism on top of
behaviorism.

It does no such thing, and I defy you to substantiate this unfounded
claim. This makes me wonder if you are just Longley posing as
someone else.
[...]
That's essentially the approach my programmer friends and I are
taking. I lay down the requirements I need the component to do,
they build me a component that will meet those requirements.

And what I'm wondering is if any such components have yet been
built and tested. My guess is "no".
[...]
What I've been thinking of late is that if I get all the way to the point
of telling my idea to an AI expert and s/he tells me that my AI idea
hasn't been attempted before and sounds promising, that I will ask
them for a letter of recommendation.
[...]

Good luck! There are infinitely many AI ideas that haven't been
tried before, and considering the state of the art, it's pretty hard to
say whether any given approach is "promising" without having a
prototype. It's extremely easy to overestimate the power of a
proposed AI system. Take it from someone who *has tried*. And
what you'll find is that the people who are able to spend money on
AI are doing it secretly, and those doing AI in the open have no
funding to speak of.

Dave
 
D

David Longley

David B. Held said:
Scott T. Jensen said:
[...]
I agree ... though I think we might disagree from which discipline
those "engineers" should be from. Or rather, who should be in
charge and who should just be support. I think history will show
that this is the main reason why AI didn't develop over these last
fifty years.

The idea that AI engineers don't study psychology is ridiculous,
insulting, and extremely naive on your part.

The reason being that it is currently dominated by the wrong
academic discipline and people. For it has always dumbfounded
me why programmers think they can come up with an AI.

It's simple. They understand something that psychologists don't:
computers.

How the #### would you know?? You don't even know the history of your
own professed area of expertise!
 
M

Matt

Scott said:
I understand that. I'm just seeking suggestions for how to get a lot of RAM
and/or hard-drive space on the cheap. Namely unconventional suggestions ...
not that I'm saying any exist.

At one of the tables at a computer show at the state fairgrounds about a
year ago, I met a grizzled mumbling old man with thick glasses and a
pocket protector. For a hundred dollars more than the expected price he
sold me a RAM stick in a plastic package having a special property. You
remove the RAM stick and install it. Next time you sleep, you put the
empty package under your pillow and repeat to yourself three times "wham
blam free RAM". When you wake up you find that the package isn't empty
anymore: it has another brand new RAM stick in it. So now I have more
RAM than I will ever need. I don't need the package anymore, and I'll
make a deal with you. Generic 1G PC2700. I will email you.
 
M

Matt

Scott said:
Right now, I'm researching what has already been done in the AI field to see
if my idea has been tried and proven wrong. I've read "Mind Matters:
exploring the world of artificial intelligence" by James Hogan, read
numerous web articles on AI, and am currently reading "Artificial
Intelligence: A Modern Approach" (2nd Ed.) by Stuart Russell and Peter
Norvig. From what I've read so far in Russell's book (as of this post, I'm
on page 73), it doesn't look like I will find my idea in his tome. In
reading his history of AI, it would seem AI researchers haven't yet thought
of exploring the approach that I believe will lead to AI. However, I'll try
to finish reading it to be sure ... though I'm not math-friendly and that
handicap has increasingly slowed my rate of progress through the book.

If, after reading Russell's book, I still haven't come across my AI idea,
I'll begin searching AAAI's archives for it. Focusing first on what has
been published since the release of the second edition of Russell's book and
then, if that proves unfruitful, hunting through the rest of the archive for
it. However, I'm not looking forward to paying the hefty AAAI membership
fee to do so and am hoping to find a friend, a friend of a friend, or
someone local that is an AAAI member who will let me use their account to
access the archives, such as a kind-hearted local professor at University of
Wisconsin - Madison, Edgewood College, or Madison Area Technical College ...
as I live in Madison, Wisconsin, USA.

If I do not find my AI concept in the AAAI archives, I will try to find an
AI professor or researcher within reasonable driving distance of where I
live that feels s/he has enough knowledge of what has been tried in the AI
field to be a good sounding board for my idea ... and one that I feel I can
trust not to run off with it. For you see, I've grown rather fond of my
little idea and would like to be the one that brings it to life. Yes,
that's a bit selfish on my part, but I never said I wasn't human. ;-)

Currently, while I do my reading, I'm mentally operating under the assumption
that my AI idea has already been looked into, so I am just trying to find out
when it was tried and why it failed or why the basic concept was shown to be
faulty. My idea seems obvious to me and thus I assume it would be to others
... or at least one AI researcher somewhere at sometime in the past sixty
years. However...

I have three close friends who are experienced professional programmers and
who say they've kept up on the AI field. I've presented my AI idea to each.
After a rather thorough grilling of me by them, they've stated that they
haven't heard of anyone taking my approach and find it interesting. We've
even had a few evening sessions where they've started to chart out what
would be needed for the AI program. Software, not hardware. My take on how
to bring about my AI is that we just need to tweak several already-existing
computer programs and get them to work together to do what I want them to do.
I am quite comfortable at this stage with any inefficiencies this approach
might introduce. However, my friends feel at least one of the components for
my AI will need to be built from scratch ... if not all of them, as one of
the three maintains. These programmer friends are
the programmers I've been referring to in this thread and who are willing to
help me do this project. That and they want me to just tell them what each
component is to do and let them figure out how they'll get that component to
do that for me. They keep telling me not to be concerned about what goes on
inside the "black box" as my contribution is setting up the rules for how
the AI is to think once all the components are set to go. Personally, I
feel like the captain of a ship with a loyal crew that wants to go wherever
I want to go, but none of whom want me to help out in the boiler room to
get the ship underway ... and who tend to get upset every time I poke my head
into the room to see how they're doing. *laugh* However, before I ask them
to volunteer their time any further, I've decided to try again to find if my
AI idea has already been attempted and proven wrong.

Just recently, I've emailed a few AI professors at MIT, Stanford, and
Carnegie-Mellon University. I selected these universities as they have
well-known AI labs. I read each college's AI department/lab's faculty
homepages and tried to find the faculty members whose expertise and/or area
of current research is closest to what I feel my AI concept would call home.
I then emailed them asking for reading recommendations that would help with
my search for my AI idea. The professors at MIT and Stanford never replied
back. Fortunately, the two professors I contacted at CMU did. One
suggested that I read Russell's book and the other suggested I dig through
the AAAI archives as outlined above. Both recommended that if after doing
their suggestions, I still don't find my idea, that I present it to an AI
professor or researcher (that I trust) to see if they've ever heard of it
being attempted. One of the two CMU AI professors then said that if that AI
expert says it hasn't been attempted, I should go ahead and "just
implement the thing and show us all." The other professor said that if the
AI expert says my idea hasn't already been attempted and holds promise,
I should pursue a doctorate in AI.

Assuming my search gets me this far, I'm not sure I'd pursue a doctorate in
AI. From my readings of AI graduate programs, they require a lot of math
and computer languages. Neither of which interests me and both of which I
know I'd struggle through due to that lack of interest ... if not -- in the
attempt -- die of numbness of the mind. I'm a psychologist by nature
(possessing a humble little BA in psychology with a minor in marketing ...
and having worked for many years now as a marketing consultant and am now in
the process of launching my own talk show, www.scottjensenshow.com) and
barely know how to turn on my computer. That and my programmer friends
understand my limitations and are willing to provide the technical knowledge
and expertise to see what develops by attempting to bring about my AI idea.
My friends want us to do it along the lines of how the Wright Brothers
brought about the age of aviation. However...

Can we say that you hope your special vision will enable you to outdo a
lot of extremely smart people who have worked on AI problems for a long
time?

Working with my friends to bring about my AI will mean the construction of
the AI will take a long and sporadic time since they'll be working on it in
their free time ... when they have free time. Whereas working in an AI lab
with dedicated researchers helping me would speed up the timetable
significantly. However, I'd only do that if I could get a full scholarship
(a.k.a. "full ride") to pursue a doctorate in AI. Then there's the issue
that I'd be a "bit" old for a graduate student at the age of 40. However,
still being single I think I could handle the social ramifications of that
... especially if I could just come in and work on my idea with the help of
an AI professor and a couple of her/his lab's talented computer science
graduate students. That and not have to take any college courses in
computer science and math ... though I'd be up for attending some
graduate-level psychology courses. If I were able to get all that, I would
not need to buy a computer for this project since I would then be
allowed to use one of that university's supercomputers. Yes, yes, I know this
paragraph is just wishful daydreaming. Anyway...

I am looking at this search for my AI concept as merely an intellectual
exercise and a nice way to motivate myself to educate myself further on this
topic. If I learn that my AI idea has been attempted and proven wrong, I
will be a bit down but not terribly so. First, I am still expecting to find
out that it has already been done and tossed aside. Second, I would very
much want to know why it is a bad idea. And, third, I feel the mere pursuit
of knowledge has a value in and of itself.

And while I do the above research, I've decided to address the hardware
aspect of this project and thus the reason for this thread.




I appreciate the offer. If you would email me your credentials, I will look
forward to receiving them. Also, are you within reasonable driving distance
of Madison, WI, USA? I'd prefer to meet face to face. That and I'd like
you to sign a confidentiality agreement before I tell you my idea.

How can I put this? I believe you need somebody to wake you up. Find
somebody who knows about scientific research and is honest or cares
about you.

This group may be too polite or too far removed from scientific research
for your needs. You might also try posting your plans to some sci.*,
sci.research.*, and other groups.
 
S

Stephen Harris

http://www.autosophy.com/database.htm
"Autosophy databases work best with large Content Addressable Memories
(CAM). An ideal CAM would have storage capacities in the Terabit range,
fast access, cheap and rugged construction, and low power consumption.
The CAROM Content Addressable Read Only Memory, explained in a separate
tutorial, has all those characteristics. Such memory modules could be
implemented in complementary pairs to assure system reliability with
cross-verification and repair. Giant multimedia databases could thus
be built to virtually never fail. ...

Each location address can also be erased and relocated to other
sections of the memory. The CAROM is therefore usable both as a
Content Addressable Memory (CAM) and a Random Addressable
Memory (RAM). Memory locations are normally filled with data, by
the learning algorithms, starting from the bottom of the memory and
proceeding to the top. All network types (such as serial, parallel
and associative) are interleaved in the same storage device."

http://www.autosophy.com/carart.htm

"The new Content Addressable Read Only Memory (CAROM) is being
developed with help from the U.S. Army to replace the CD-ROM in
severe conditions, and as the memory for a future brain-like
Autosopher. The same device can be used as a Random Addressable
Memory (RAM) or Content Addressable Memory (CAM)."
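
For readers unfamiliar with the terms in the quoted material, here is a
minimal Python sketch of a memory that can be read both by address (like a
RAM) and by content (like a CAM), with new entries filling locations from the
bottom up as described. This is a toy model invented for illustration only;
it is not the CAROM design, and all names in it are hypothetical.

# A toy memory readable two ways: by address (RAM-style) and by content (CAM-style).
class ToyContentAddressableMemory:
    def __init__(self, capacity):
        self.cells = [None] * capacity   # fixed pool of memory locations
        self.next_free = 0               # learning fills from the bottom up

    def learn(self, pattern):
        """Store a pattern in the lowest free location and return its address."""
        if self.next_free >= len(self.cells):
            raise MemoryError("memory full")
        address = self.next_free
        self.cells[address] = pattern
        self.next_free += 1
        return address

    def read_by_address(self, address):
        """RAM-style access: give an address, get back the stored pattern."""
        return self.cells[address]

    def read_by_content(self, pattern):
        """CAM-style access: give a pattern, get back every address holding it."""
        return [a for a, stored in enumerate(self.cells) if stored == pattern]

mem = ToyContentAddressableMemory(capacity=8)
addr = mem.learn("cat")
mem.learn("dog")
print(mem.read_by_address(addr))    # -> cat
print(mem.read_by_content("dog"))   # -> [1]

A hardware CAM does the content lookup in parallel across all cells at once;
the linear scan here only mimics the behaviour, not the speed, and it ignores
the read-only and erase/relocate aspects mentioned in the tutorial.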
 
S

Scott T. Jensen

David B. Held said:
And what you'll find is that the people who are
able to spend money on AI are doing it secretly,
and those doing AI in the open have no funding
to speak of.

Would that secret funding be from the Men In Black, the Illuminati, or the
New World Order? Or would you have to kill me after you tell me?

Whoops! Sorry, everyone, I've been feeding the trolls. I'm done with this
person.

Scott Jensen
 
E

Eray Ozkural exa

Scott T. Jensen said:
I agree ... though I think we might disagree from which discipline those
"engineers" should be from. Or rather, who should be in charge and who
should just be support. I think history will show that this is the main
reason why AI didn't develop over these last fifty years. The reason being
that it is currently dominated by the wrong academic discipline and people.
For it has always dumbfounded me why programmers think they can come up with
an AI. No one would ever go to them with a psychological problem and yet
they think they can solve the ultimate psychological problem. Baloney. I
would place AI research under psychology and merely use computer programmers
as tech support. The programmers sitting along the wall and only talking
when spoken to by the team of psychologists at the conference table ... or a
small team of programmers working for and underneath just one psychologist.

Don't be ridiculous. You are trolling.

Check your mind out.

Cheers,
 
