Hardware is easier to program than software?


RayLopez99

So says a Computer Science programmer whose book I am reading. This is because you cannot make certain mistakes (or potential mistakes) when programming hardware (the physics won't let you) but you can in software, leading to problems down the road (this is why software often fails in 'mysterious ways', necessitating a reboot). Having done only a bit of hardware programming (on a breadboard) in college but lots of software programming, I found his statement counter-intuitive (it seems hardware programming is harder), but it may indeed be true. After a while, once you memorize and master the building-block templates used in hardware design (for example, an oscillator circuit, a Wheatstone bridge for balancing, etc., add your list here), hardware programming may indeed be 'easier'.

Paul, what say you?
 

Paul

RayLopez99 said:
So says a Computer Science programmer whose book I am reading.
This is because you cannot make certain mistakes (or potential mistakes)
when programming hardware (the physics won't let you) but you can
in software, leading to problems down the road (this is why software
often fails in 'mysterious ways', necessitating a reboot). Having done
only a bit of hardware programming (on a breadboard) in college but
lots of software programming, I found his statement counter-intuitive
(it seems hardware programming is harder), but it may indeed be true.
After a while, once you memorize and master the building-block
templates used in hardware design (for example, an oscillator circuit,
a Wheatstone bridge for balancing, etc., add your list here),
hardware programming may indeed be 'easier'.

Paul, what say you?

What a load of rubbish :)

(That's the short answer...)

I guess you'll have to wait for my new book now, where
I give my considered opinion on the habits and
qualities of Computer Science graduates :)

Paul
 

SC Tom

RayLopez99 said:
So says a Computer Science programmer whose book I am reading.
This is because you cannot make certain mistakes (or potential mistakes)
when programming hardware (the physics won't let you) but you can
in software, leading to problems down the road (this is why software
often fails in 'mysterious ways', necessitating a reboot). Having done
only a bit of hardware programming (on a breadboard) in college but
lots of software programming, I found his statement counter-intuitive
(it seems hardware programming is harder), but it may indeed be true.
After a while, once you memorize and master the building-block
templates used in hardware design (for example, an oscillator circuit,
a Wheatstone bridge for balancing, etc., add your list here),
hardware programming may indeed be 'easier'.

Paul, what say you?

I may be missing something, but how do you program hardware without
software? You build hardware, but if you need it to do a special function
(say, an electric door opener/closer), then you build parts into it
that make it perform the function you need. In this example, you would
probably use some kind of limit switch to stop the motor when the door
reaches its fully open or closed position. I don't really see that as
programming; it's just hardware.
Now if you're looking at a microcontroller, with maybe an electric eye
or two, to limit the speed at the end of the cycle, then you're
talking about firmware programming (still not hardware).
But simply putting resistors, capacitors, and ICs on a breadboard isn't
really programming; it's simple circuitry, even if it's designed to perform
a particular function.
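
Just to show how little "program" there is in it: the whole limit-switch
arrangement above boils down to a couple of gates. In Verilog notation
(signal names invented, purely illustrative):

// Door controller as plain combinational logic. The limit switches
// simply cut the motor drive; there's no state and no program.
module door_ctrl (
    input  wire open_cmd,     // user commands "open"
    input  wire close_cmd,    // user commands "close"
    input  wire limit_open,   // limit switch: door fully open
    input  wire limit_closed, // limit switch: door fully closed
    output wire motor_fwd,    // drive motor toward open
    output wire motor_rev     // drive motor toward closed
);
    assign motor_fwd = open_cmd  & ~limit_open;
    assign motor_rev = close_cmd & ~limit_closed;
endmodule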

Did I miss something?
 

Don Phillipson

SC Tom said:
I may be missing something, but how do you program hardware without
software? . . . Did I miss something?

The OP's textbook may be rather old. Its reference was to the
earliest generations of computers, programmed by manual
switches (or paper tape simulating many switches). Within a
decade, computer operators discovered that (1) languages
(e.g. ASM, Fortran), (2) Disk Operating Systems, and
(3) much-improved, larger memory (i.e. data storage) were
more efficient and more powerful.
 

SC Tom

Don Phillipson said:
The OP's textbook may be rather old. Its reference was to the
earliest generations of computers, programmed by manual
switches (or paper tape simulating many switches). Within a
decade, computer operators discovered that (1) languages
(e.g. ASM, Fortran), (2) Disk Operating Systems, and
(3) much-improved, larger memory (i.e. data storage) were
more efficient and more powerful.

It would be REALly old then :) Are we talking about computers like the
WWII language-decryption machines? If so, I don't think they had it any
easier than the originators of Fortran or DOS did.
 

RayLopez99

SC Tom said:
Did I miss something?

I was thinking of ASICs. It is true that ASIC design is software-driven, but the actual stitching together of the ASIC to do something useful is hardware design.

As for firmware, etc., that's usually a small part of ASIC design, typically for boot-up. That part is still hardware design in my mind, though you can quibble over definitions.

RL
 

Astropher

RayLopez99 said:
So says a Computer Science programmer whose book I am reading.
This is because you cannot make certain mistakes (or potential mistakes)
when programming hardware (the physics won't let you) but you can
in software, leading to problems down the road (this is why software
often fails in 'mysterious ways', necessitating a reboot). Having done
only a bit of hardware programming (on a breadboard) in college but
lots of software programming, I found his statement counter-intuitive
(it seems hardware programming is harder), but it may indeed be true.
After a while, once you memorize and master the building-block
templates used in hardware design (for example, an oscillator circuit,
a Wheatstone bridge for balancing, etc., add your list here),
hardware programming may indeed be 'easier'.

Paul, what say you?

Writing firmware for hardware can be easier, as long as you have the
appropriate tools (logic analysers, bus analysers, oscilloscopes, etc.).
It is easier in the sense that you are not working with an operating
system and you are totally in control of what the hardware is doing.
If it breaks, it is your fault, and it can be diagnosed. With an
operating system there are often many layers of abstraction and
third-party systems that you have to negotiate with, all areas that
are ripe for presenting problems.

Debugging race conditions in hardware can be hellish.
 

RayLopez99

Astropher said:
Debugging race conditions in hardware can be hellish.

Yes, race conditions are nondeterministic and hard to reproduce: the same hardware can exhibit a race condition one moment and, after a reboot, be fine the next. That's why designers impose timing constraints that include a safety margin. I think I read somewhere that with certain inputs and/or operating conditions (temperature, voltage swings, etc. at the limit) a race condition is in fact possible no matter what you do, or that was the implication. Every flip-flop, a backbone of hardware design, is by definition unstable under certain conditions, btw.

RL
 

Paul

RayLopez99 said:
Yes, race conditions are nondeterministic and hard to reproduce: the same
hardware can exhibit a race condition one moment and, after a reboot, be
fine the next. That's why designers impose timing constraints that include
a safety margin. I think I read somewhere that with certain inputs and/or
operating conditions (temperature, voltage swings, etc. at the limit) a
race condition is in fact possible no matter what you do, or that was the
implication. Every flip-flop, a backbone of hardware design, is by
definition unstable under certain conditions, btw.

RL

If this were true (that "flip-flops were evil"), your
computer wouldn't stay running for very long. Obviously,
it does stay running, so there's got to be a measure of
stability present.

Where a flip-flop can screw up, is if the data input changes,
at the same instant that the clock input is used to take a sample.
This typically arises in digital systems where two subsystems use
a different clock. Clock relationships can be synchronous (flip-flop "likes"),
plesiochronous (if phase is wrong, fails horribly and repeatedly),
or asynchronous (fails statistically, and potentially once in a blue moon).

In this case, Wikipedia doesn't have a good article, but if you're
a hardware guy, you'll recognize the subject matter on this page.

http://www.asic-world.com/tidbits/metastablity.html

My company had its own fab and CMOS process. I shared office space
in the fab building with some of the geniuses over there. One neat
gadget they built was a jig to measure metastability parameters, so
they could tell their customers (people who made use of the fab
products) what the statistics of metastability were for that
particular CMOS process. So if you visited there, you could be shown
an oscilloscope trace of the squiggles caused by metastability.
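
(Those statistics are usually quoted as a mean time between failures.
The standard model, which appears in app notes like the TI one linked
further down, is roughly:

    MTBF = exp(t_r / tau) / (T0 * f_clk * f_data)

where t_r is the time allowed for the flip-flop to resolve, tau and T0
are constants characterizing the flip-flop and process, and f_clk and
f_data are the clock and data rates. The exponential in t_r is why each
extra resampling stage buys so many orders of magnitude of reliability.)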

A way to reduce metastability is to use resampling. This is fine,
except in cases where the additional delay affects performance. Or,
in the old days, adding additional resampling stages to improve
the statistics cost more money, and that was an incentive not
to use eight stages of resampling (as suggested in some document
I read). Typically, we'd use the two-stage resampler, if a solution
was required, just like in this example. Certain late-model 74F
flip-flops could be used at a couple hundred megahertz to fix
problems like this. This was back when we were still doing
significant amounts of "board logic", rather than ASICs.

http://www.asic-world.com/images/tidbits/meta.h1.gif
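
A minimal Verilog sketch of that two-stage resampler (module and signal
names are mine, purely illustrative):

// Two-stage synchronizer: brings an asynchronous input into the clk
// domain. The first flip-flop may go metastable; the second gives it
// a full clock period to resolve before downstream logic sees it.
module sync2 (
    input  wire clk,      // receiving-domain clock
    input  wire async_in, // signal from the other clock domain
    output reg  sync_out  // safe to use in the clk domain
);
    reg meta;             // first stage: may be briefly metastable

    always @(posedge clk) begin
        meta     <= async_in;
        sync_out <= meta;
    end
endmodule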

Ah, this brings back memories. The good ole days...
Our home-brew metastability measurement jig may have
been based on the concept in this paper. Documents like
this would have been required reading at the time.

http://www.ti.com/lit/an/sdya006/sdya006.pdf

If your design is purely synchronous, the flip-flop is as stable
as can be. It's when you purposely change the data, as the clock
causes the flip-flop to sample, that bad things happen. That's
effectively a timing violation.

Paul
 

RayLopez99

Paul said:
If this were true (that "flip-flops were evil"), your
computer wouldn't stay running for very long. Obviously,
it does stay running, so there's got to be a measure of
stability present.

Where a flip-flop can screw up, is if the data input changes,
at the same instant that the clock input is used to take a sample.
This typically arises in digital systems where two subsystems use
a different clock. Clock relationships can be synchronous (flip-flop "likes"),
plesiochronous (if phase is wrong, fails horribly and repeatedly),
or asynchronous (fails statistically, and potentially once in a blue moon).

In this case, Wikipedia doesn't have a good article, but if you're
a hardware guy, you'll recognize the subject matter on this page.

http://www.asic-world.com/tidbits/metastablity.html

If your design is purely synchronous, the flip-flop is as stable
as can be. It's when you purposely change the data, as the clock
causes the flip-flop to sample, that bad things happen. That's
effectively a timing violation.

Thanks, that's interesting Paul. But is anything 100% synchronous these days? Don't they try to synchronize to the nearest clock edge, meaning your clocks can all be off by N periods, where N is an integer? So if you miss that clock edge... your circuit is no longer 100% synchronous, no? And you can have the metastability condition occur?

RL
 

Paul

RayLopez99 said:
The Wikipedia link is not as bad as you might think, and confirms that metastability is inherent in any design, especially if you are below a minimum clock, even in synchronous design. Check it out: http://en.wikipedia.org/wiki/Metastability_in_electronics

RL

"Metastable states are avoidable in fully synchronous systems when
the input setup and hold time requirements on flip-flops are satisfied."

What that means is: if you design a state machine in a chunk of silicon
clocked with one input clock, and you haven't overclocked the thing
(meaning it still meets Tsu and Th), then it will run stably, forever,
with zero probability of failure to work correctly. This assumes (as is
true in this day and age) that you package the circuit properly, and
feed it power properly.
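
That "meets Tsu and Th" condition is exactly what simulators enforce
with timing checks. A minimal sketch, with made-up setup/hold numbers
(real limits come from the process library):

`timescale 1ns / 1ps

// D flip-flop with simulation-time setup/hold checks: if d changes
// inside the window around posedge clk, the simulator reports a
// timing violation. The 1.2 ns setup and 0.8 ns hold limits are
// invented for illustration.
module dff_checked (
    input  wire clk,
    input  wire d,
    output reg  q
);
    always @(posedge clk)
        q <= d;

    specify
        $setup(d, posedge clk, 1.2);
        $hold(posedge clk, d, 0.8);
    endspecify
endmodule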

As an example of a state machine, I could take three flip-flops and
make the circuit count 0-1-2-3-4-0-1-2-3-4... What I'd be doing in
that case is detecting the value "4", and telling the circuit to
start at zero again. Three flip-flops, unconstrained, would be
able to count from 0 to 7 and start over again. But in this
example, we use feedback to change the behavior. And that is a
state machine (we haven't decoded the states or anything, or attempted
to use the information).

The flip-flops in that example could all be clocked from the same
clock signal. Say, a 100MHz clock with a 50% duty cycle. Every
10 nanoseconds, the output of the three flip-flops would change,
and the bit pattern observed would be the counting sequence
0-1-2-3-4-0-1-2-3-4...
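
In Verilog, that whole circuit is a few lines (my sketch, not a quote
from any real design):

// Mod-5 counter: three flip-flops plus feedback. Detecting "4" and
// forcing the next state to 0 yields 0-1-2-3-4-0-1-2-3-4...
module mod5_counter (
    input  wire       clk,
    output reg  [2:0] count = 3'd0  // three flip-flops
);
    always @(posedge clk) begin
        if (count == 3'd4)
            count <= 3'd0;          // the feedback: start at zero again
        else
            count <= count + 3'd1;
    end
endmodule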

Now, I could leave that circuit running, on a benchtop, virtually
forever.

And I've actually done something like that here. I have an FPGA on
a PCI card, which I bought for a few hundred dollars. I used the
programming tool, to make a simple circuit. The initial design counted
up, and I modified the circuit (text file with Verilog in it) to
count down instead. I connected a PCI Express parallel port card
to a JTAG programmer cable, over to the PCI card, while it was
sitting on an adjacent table. I downloaded the FPGA bit pattern,
and the circuit kicked off. It sat there counting in the programmed
sequence. And it ran for six months, before I got tired of it
and switched it off one day. I wasn't really that concerned
about the stability - it was more a matter of not bothering
to turn it off. That board is a little demo board, with
LED displays so you can verify (for sufficiently slowly
updating designs), that it's still alive, and working
properly. And that was a purely synchronous, digital circuit.
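
Incidentally, the count-up-to-count-down change described above is
typically a one-character edit in the Verilog source. A hypothetical
reconstruction, since the actual file isn't shown:

// Free-running demo counter; the upper bits change slowly enough
// to watch on the board's display.
module demo_counter (
    input  wire        clk,
    output reg  [23:0] count = 24'd0
);
    always @(posedge clk)
        count <= count - 24'd1;  // was: count + 24'd1 (counting up)
endmodule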

(Picture of my FPGA board... Has a two digit, 7 segment display.
JTAG cable connects to the top. Power supply plugs into the lower
right hand corner. PCI connector is only useful, if you have the
intellectual property block that runs it.)

http://www.assistelie.fr/realisation/JPG/170_dec03_31.jpg

Paul
 

RayLopez99

Paul said:
(Picture of my FPGA board... Has a two digit, 7 segment display.
JTAG cable connects to the top. Power supply plugs into the lower
right hand corner. PCI connector is only useful, if you have the
intellectual property block that runs it.)

http://www.assistelie.fr/realisation/JPG/170_dec03_31.jpg

Yes, it's stable "when the input setup and hold time requirements" are met, along with other parameters relating to the clock in some designs.

I've designed state machines in school, but back in the day we did not have Verilog and other such hardware description languages; instead we did design with paper and pencil, drawing state diagrams, followed by physical realization with flip-flops on a breadboard... Nowadays it's all done on a computer screen, probably even in high school and college, I would imagine.

So you, Paul, are a true hardware programmer... one of the few, compared to software programmers.

RL
 
