Buffer overflow


Stephen

Is the i386 platform always open to buffer overflow? Or is it just the
languages presently used to build its OSes, etc.?

Stephen
 
Buffer overflows exist on platforms other than i386 as well.

--
--Jonathan Maltz [Microsoft MVP - Windows Server]
http://www.imbored.biz (DOWN - blackout) - A Windows Server 2003 visual,
step-by-step
tutorial site :-)
Only reply by newsgroup. If I see an email I didn't ask for, it will be
deleted without reading.
 
Thanks, but what I'm getting at is: is it built into the platform, so to
speak, or is it a matter of programming and programming languages?

Stephen

--

Drop 123 to email me.


 
Stephen said:
Is the i386 platform always open to buffer overflow? Or is it just the
languages presently used to build its OSes, etc.?

It is the way in which the system components have been written. Arrays
of bytes are allocated as buffers, as well as for many other purposes,
and in initial testing at least there will be 'debug' checks that
references to them do not go out of bounds. But once things are tested,
such checks are usually turned off in the interest of compactness and
efficiency, and references from one part of the system to another are
then treated as trustworthy. Where an array is used as a buffer, it
*ought* to be treated as a special case, with the procedures that write
into it making their own checks so that something *from outside*,
sending a message longer than specified, cannot overflow it. But that
point seems to have been missed.
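
To make that concrete, here is a minimal C sketch (the buffer size and
the store_message() routine are invented for illustration). In a debug
build the assert() catches an oversized write; compile with -DNDEBUG,
as release builds typically are, and the only bounds check vanishes:

    #include <assert.h>
    #include <string.h>

    #define BUF_SIZE 64              /* hypothetical buffer size */

    static char buf[BUF_SIZE];

    /* Copies a message into the buffer, trusting the caller's
       length once the assert is compiled out. */
    void store_message(const char *msg, size_t len)
    {
        assert(len <= BUF_SIZE);     /* gone in -DNDEBUG builds */
        memcpy(buf, msg, len);       /* writes len bytes, checked or not */
    }

A message longer than 64 bytes arriving from outside then sails
straight past the end of buf.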
 
OK then, it is a matter of programming languages and how they are used,
not the i386 machine. So you are saying that the problem can be solved
altogether with the right programming strategy, and the Microsoft gang
have their work cut out for them!

Stephen

--

Drop 123 to email me.


 
Computers do input and output. When a program does input, it has to reserve
some memory in which to deposit the data. There are basically two strategies
for input (both sketched in C below).

1. Reserve a fixed amount of memory (a "buffer"), and ask the O/S to read
any amount of input, but NO MORE than the size of that buffer. No problem.
Good strategy.

2. Reserve a buffer, as before, and ask the O/S to read any amount of input,
regardless of whether it will fit into the buffer. Bad strategy. This should
be abolished. But it's not :=(
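
Here is a minimal C sketch of the two strategies side by side (the
buffer size is invented for illustration). fgets() is told how big the
buffer is; gets() is not, which is why it became the classic offender
and was eventually removed from the C standard:

    #include <stdio.h>

    int main(void)
    {
        char buf[128];               /* fixed-size buffer */

        /* Strategy 1: the read routine knows the buffer size.
           fgets() writes at most sizeof buf - 1 characters plus a
           terminating NUL; longer input is simply cut off. */
        if (fgets(buf, sizeof buf, stdin) != NULL)
            printf("read: %s", buf);

        /* Strategy 2: the read routine has no idea how big buf is.
           Any line longer than 127 characters would overwrite
           whatever happens to follow the buffer in memory.
           Left commented out on purpose. */
        /* gets(buf); */

        return 0;
    }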

When more input comes in than can be accommodated in the buffer, one of two
things happens:

a. In the movies, the computer spits out sparks and catches fire.

b. In the real world, the incoming data overwrites memory that was reserved
for holding variables OTHER THAN the buffer. The result varies with the
system architecture. The overwritten area typically contained code, or a
function call return address, or saved CPU register values from the caller,
or all of the above. Those original data are now kaput, replaced with the
input data. Program execution is altered.
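
A contrived C sketch of case (b). The field names and sizes are made
up, and the overflow is strictly speaking undefined behavior; it is
shown only to illustrate adjacent memory being clobbered:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct {
            char buf[8];             /* the undersized buffer */
            int  authorized;         /* happens to sit right after it */
        } frame = { "", 0 };

        /* 11 characters plus the NUL make 12 bytes; the last 4
           spill out of buf and land in 'authorized'. */
        strcpy(frame.buf, "AAAAAAAAAAA");

        printf("authorized = %d\n", frame.authorized);  /* no longer 0 */
        return 0;
    }

Swap 'authorized' for a saved return address on the stack and you have
exactly the alteration of program execution described above.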

A "clever" hacker takes advantage of this. He overflows the buffer with just
the right data to cause the program to do whatever he wants ... which is
usually nasty.

This is doable on just about any computer, regardless of CPU or O/S. But it
depends upon having a program (written by a @&#*%^!@#&^ programmer) that
uses strategy 2, above.

Of course, you could blame the O/S makers (or compiler makers) for making
available library code that implements strategy 2. But then we could blame
matchstick manufacturers for fires, right?
 