CPU temps stable, then rising w/o load?

John Doe

S. Whitmore said:
John Doe wrote:

Not when Windows isn't running, i.e., when the system abruptly
shuts down before the OS is loaded.

Which means before the BIOS setup option appears.

Hardware drivers load very early.
Sorry, the logic of it is escaping you.

And vice versa.

If you cannot reproduce the problem, you are likely going to have an
incredibly difficult time finding a solution.

When did you start dual booting to Linux? I had a bad experience
dual booting to Linux. It corrupted my disk so that PartitionMagic
couldn't recognize it.

How did you make the change to the new hardware? Did you reinstall
Windows/Linux or just switch the mainboard from under everything?

Good luck.
 
John Doe

....
FWIW, it's a dual boot system with Windows 2000 and GNU/Linux
(Slackware 10). When it crashed, I'm fairly sure that I had not
even seen the LILO prompt.

Do you have a backup of all/any important files?

If not, stop what you're doing now and make one.
 
JAD

Nice try. Even mentioning 'top posting' puts you in a 'troll' category.
Corrected? You have never corrected me, not that I am not correctable,
just not by you. You're just the polite spam front man; that's your con.
 
John Doe

Nice try. Even mentioning 'top posting' puts you in a 'troll'
category.

Top posting makes your attempts to help less effective.

I could have also mentioned you plonking me.
Message-ID: <[email protected]>
Message-ID: <c9_Bd.47729$R73.47639@fe06.lga>
Corrected? You have never corrected me,

http://tinyurl.com/3lneg
Message-ID: <[email protected]>
Message-ID: <8MABd.47038$pF1.42093@fe06.lga>
not that I am not correctable, just not by you. You're just the
polite spam front man; that's your con.

A troll making unsupported/ridiculous assertions.
 
S. Whitmore

John said:
If you cannot reproduce the problem, you are likely going to have an
incredibly difficult time finding a solution.

Agreed. And so far I can't consistently reproduce it.
When did you start dual booting to Linux? I had a bad experience
dual booting to Linux. It corrupted my disk so that PartitionMagic
couldn't recognize it.

Well, I've never had problems with a dual boot system and have been
using one off and on (for playing around with Linux and OS/2) since 1994
or so. I've also never used PartitionMagic.
How did you make the change to the new hardware? Did you reinstall
Windows/Linux or just switch the mainboard from under everything?

Here's what I did for this upgrade:

1. Backup everything "vital" to CD-R.
2. Move everything from drive 0 to drive 1.
3. Disconnect drive 1.
4. Swap drives so old drive 1 is now drive 0.
5. Install new mobo, PSU, and memory (and DVD burner).
6. Repartition and reformat drive 0.
7. Install Windows 2000 on drive 0.
8. Install drivers and a few key apps on Windows 2000.
9. Install UPS and monitoring software.
10. Connect drive 1.
11. Move everything from drive 1 to drive 0.
12. Repartition and reformat drive 1.
13. Install Linux on drive 1 (incl. installing LILO on drive 0 MBR, as
I've always done in the past).
14. Reorganize data where I want it.
15. Install applications that I want to have again.

(Steps 14 and 15 are still in progress.)

Given the timing of the first abrupt shutdown, I consider hardware a
more likely cause than software. I'd like to find a way to test the PSU
to "prove or disprove" its role, so that I can return it for replacement
(if it's the cause) while I'm still within that window -- but I don't
know of a way to do so. I could return it for replacement anyway, but I
don't want to claim that it's defective without being reasonably certain
that it's true.

As "luck" would have it, I haven't seen the problem again since I
started inquiring here. So it goes back to the "incredibly difficult
time finding a solution" by not being able to reproduce the problem. {sigh}
 
David Maynard

S. Whitmore said:
Ok, thanks -- this is the most salient point of this thread. So I will
continue looking at reasons other than a steadily rising "at-rest" CPU
temperature since that wasn't what I was seeing after all (i.e., it was
not "at rest").

The temp should, of course, eventually settle out after the heatsink
saturates and the case temp stabilizes.

Yeah, I haven't looked at the system logs yet, but I doubt there's
anything there; it's too abrupt a shutdown. Also, the first shutdown
was before any OS was loaded, which would preclude any OS-related logs.

Yeah, sure does. Have you tried 'safe' BIOS settings?
 
Mxsmanic

Matt said:
Thanks. Presumably the main or only advantage of
halting is to save energy.

Mainly, yes.

In the olden days, computers had synchronous processors that consumed
essentially the same amount of power all the time, whether they were
actually executing instructions or not. As designs advanced,
technologies like CMOS developed that consumed power only when the
circuits on the processor were changing state; the rest of the time,
only a tiny amount of current was drawn. So originally it didn't matter
whether you actually executed a machine halt instruction or spun around
in a loop--power consumption remained at the same (high) level in both
cases. But today, a halt instruction stops a lot of switching on the
chip and dramatically reduces power consumption and heat production, so
halting the processor when there is nothing to do becomes a very smart
idea.

Even so, large computers have used this method for ages, either because
they ran on hardware that _did_ consume more power when running or
because it was simply good programming practice.

There are a few other factors. It's somewhat easier to write code that
spins in a loop as compared to code that halts and must be awakened by
an interrupt. And today's processors are so fast that they are idle 99%
of the time on many systems, whereas processors on old computers were so
slow that they almost never went idle even under modest workloads.
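
To make the halt-versus-spin contrast concrete, here is a minimal C
sketch (my own illustration, not from any poster; hlt is privileged,
so a loop like this lives in a kernel's idle path, not in a user
program):

/* Busy-wait idle: gates keep toggling every cycle, so the chip
 * draws near-maximum power while doing nothing useful. */
void idle_spin(volatile int *work_pending)
{
    while (!*work_pending)
        ;  /* spin */
}

/* Halted idle: execution stops until the next interrupt, which
 * eliminates most of the dynamic (switching) power. */
void idle_halt(volatile int *work_pending)
{
    while (!*work_pending)
        __asm__ volatile ("hlt");  /* x86: sleep until an interrupt */
}

Both loops exit as soon as work_pending is set; the halted version
simply stops most on-chip switching between checks.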
 
Mxsmanic

John said:
Your dismissal of my suggestion to upgrade the operating system is
not based on the problem as described.

I didn't dismiss it, I ignored it. Not knowing what the OP was running
to begin with, discussion of an upgrade makes no sense.
The original poster recently upgraded to a new mainboard and CPU.
Like it or not, new hardware potentially benefits from a more recent
operating system, probably/typically more than any other change in
circumstance. Windows XP produces fewer failures. Windows XP is the
current technology. Windows 2000 and Windows NT are aging.

Windows XP, 2000, and NT all enjoy the same reliability. Windows 2003
probably has the edge in reliability; XP has the edge for
user-friendliness and compatibility in desktop environments. NT is
indeed out of the picture these days for new installations, but it is
still in very wide use around the world and may remain so for a long
time, thanks precisely to its stability and reliability (no need to
upgrade as long as NT continues to do the job).
 
John Doe

Mxsmanic said:
John Doe writes:

I didn't dismiss it, I ignored it. Not knowing what the OP was
running to begin with, discussion of an upgrade makes no sense.

From: "S. Whitmore" <[email protected]>
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US;
rv:1.7.5) Gecko/20041217
 
John Doe

Mxsmanic said:
Matt writes:

Mainly, yes.
In the olden days, computers had synchronous processors that
consumed essentially the same amount of power all the time, whether
they were actually executing instructions or not. As designs
advanced, technologies like CMOS developed that consumed power only
when the circuits on the processor were changing state; the rest of
the time, only a tiny amount of current was drawn.

Really? Yes, CMOS logic consumes a tiny amount of power, but that's
not just when idle. If you know of a discussion by chip makers like
National Semiconductor, Texas Instruments, Motorola, or any other
well-known manufacturer that supports your reasoning about the
difference in power consumption, I would enjoy reading it.

Please keep in mind that I am not trying to encourage you to further
a long off-topic discussion here.
So originally
it didn't matter whether you actually executed a machine halt
instruction or spun around in a loop--power consumption remained at
the same (high) level in both cases. But today, a halt instruction
stops a lot of switching on the chip and dramatically reduces power
consumption and heat production, so halting the processor when
there is nothing to do becomes a very smart idea.

If not for thermal cycling (the degrading stress that frequent
temperature variation places on integrated circuits), the halt
instruction would be a better idea.

I'm not saying that there will be a significant decrease in lifespan
even in fast systems like mine which are frequently at 100% CPU
usage. Components are made to run hot. Reducing the temperature at
every opportunity will not necessarily make components last longer.

I think we should leave it at "the halt instruction is good for
power reduction" and return to the more entertaining and appropriate
subject of homebuilt personal computers.
 
John Doe

I said:
Really? Yes, CMOS logic consumes a tiny amount of power, but that's
not just when idle.

If you haven't already replied, I think I know what you mean (or are
alluding to, or recalling, whatever). During the transition, they suck
current, so you have to make the transition quick.

That stuff is way out in left field, in my opinion.
 
David Maynard

John said:
Really? Yes, CMOS logic consumes a tiny amount of power, but that's
not just when idle. If you know of a discussion by chip makers like
National Semiconductor, Texas Instruments, Motorola, or any other
well-known manufacturer that supports your reasoning about the
difference in power consumption, I would enjoy reading it.

It doesn't take a heavy duty discussion from some chip maker. When
switching, CMOS power consumption comes from charging/discharging
circuit capacitances. In the static state, CMOS circuits draw only
leakage current.

Execute instructions and they consume maximum power due to the switching.

Stop executing instructions and power drops to static leakage.
 
John Doe

David said:
It doesn't take a heavy duty discussion from some chip maker.

Oh, it might. But I wasn't asking for heavy-duty, I was asking for
authoritative.
When switching, CMOS power consumption comes from
charging/discharging circuit capacitances. In the static state, CMOS
circuits draw only leakage current.
Execute instructions and they consume maximum power due to the
switching.
Stop executing instructions and power drops to static leakage.

My recollection is that the input transition being slow is what
causes CMOS to suck current. So you provide a fast switching input.
Complementary metal oxide semiconductor is extremely efficient even
while operating. I can't imagine why anyone would think otherwise.

This discussion is way off-topic.
 
David Maynard

John said:
Oh, it might. But I wasn't asking for heavy-duty, I was asking for
authoritative.

Check any textbook on CMOS circuitry.

My recollection is that the input transition being slow is what
causes CMOS to suck current.

That could be a problem, if it happened, because it could potentially
keep the circuit in the active region longer (increasing through
current), but that isn't the normal case: anything connected to a
'slow moving' signal would have Schmitt trigger inputs, and on-die
transitions are not 'slow'. I mean, you're right in that "if you did
this then..." but you wouldn't do it.

In traditional CMOS, power consumption, when 'doing work', comes primarily
from circuit capacitance charging/discharging.

I say 'traditional' because, as cycle times approach the rise and fall
times of the device, the transition period becomes a significant
portion of the duty cycle and, hence, of power consumption; but that
falls into the same category of power consumption due to switching and
is eliminated, along with the capacitance switching consumption, when
you stop executing instructions.
So you provide a fast switching input.

That's 'automatic' since it's all on-die.
Complementary metal oxide semiconductor is extremely efficient even
while operating.

'Efficient' is a relative term, but moot since gate switching is a major
source of power consumption and the original reason for 'low power' halt
and sleep states: power is cut by eliminating gate switching.

As devices shrink, with ever larger numbers of them on die, static
leakage has also become a major problem, which is why processors also
turn off power to sections when they're not being used.
I can't imagine why anyone would think otherwise.

Maybe because CMOS power consumption from switching is well documented.
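
A rough back-of-envelope sketch in C of the two components described
above (every number here is assumed, purely for illustration):

#include <stdio.h>

int main(void)
{
    double alpha  = 0.1;   /* assumed activity factor: gates toggling per cycle */
    double c_sw   = 1e-9;  /* assumed total switched capacitance, 1 nF */
    double vdd    = 1.2;   /* assumed core supply voltage, volts */
    double f      = 2e9;   /* assumed clock frequency, 2 GHz */
    double i_leak = 0.2;   /* assumed static leakage current, 200 mA */

    double p_dyn  = alpha * c_sw * vdd * vdd * f;  /* switching power */
    double p_stat = i_leak * vdd;                  /* leakage power */

    printf("dynamic %.2f W, static %.2f W\n", p_dyn, p_stat);
    return 0;
}

With these made-up numbers, switching burns about 0.29 W and leakage
about 0.24 W; halting removes the first term entirely, while the
second remains until power is gated off, which is the point above.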
 
John Doe

In as many words as possible, someone is apparently trying to tell me
that CMOS logic becomes abnormally inefficient as operating frequency
rises, and that it's only efficient when idle.

I have designed and built lots of circuits with CMOS logic (thanks to
National Semiconductor's 1988 CMOS logic data book). The family
seemed great for micropower devices including oscillators.

My main question is this:
As operating frequency rises within normal limits, does CMOS become
grossly inefficient compared to other typical forms of logic like
maybe TTL? I don't know much about typical logic families.

From what I recall, the main CMOS power consumption problem occurs
when inputs rise and fall slowly; otherwise it is extremely low power
during normal operation.

Thank you.
 
bill.sloman

It is difficult to define "efficiency" when you are talking about logic.

CMOS draws hardly any current when it isn't doing anything, while TTL
and ECL have a static current drain.

When CMOS switches, it draws current to charge up and discharge its
internal capacitors - which are not all that big - and at high
frequencies some CMOS parts can draw more current than some TTL.

IIRR CMOS data books and data sheets include formulas that let you
estimate current drawn as a function of operating frequency.

Basically, I've always used CMOS when it was fast enough to do what I
needed done.
TTL, ECL, and GaAs can be faster, but you start needing massive power
supplies if you have to use them, and loads of cooling fans to get rid
of the heat.
 
Jim Thompson

In as many words as possible, someone is apparently trying to tell me
that CMOS logic becomes abnormally inefficient as operating frequency
rises, and that it's only efficient when idle.

I have designed and built lots of circuits with CMOS logic (thanks to
National Semiconductor's 1988 CMOS logic data book). The family
seemed great for micropower devices including oscillators.

My main question is this:
As operating frequency rises within normal limits, does CMOS become
grossly inefficient compared to other typical forms of logic like
maybe TTL? I don't know much about typical logic families.

From what I recall, the main CMOS power consumption problem occurs
when inputs rise and fall slowly; otherwise it is extremely low power
during normal operation.

Thank you.

My limited logic experience is that, on-chip, the
efficiency/functionality crossover point from PECL to CMOS is at about
250 MHz.

...Jim Thompson
--
| James E.Thompson, P.E. | mens |
| Analog Innovations, Inc. | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| Phoenix, Arizona Voice:(480)460-2350 | |
| E-mail Address at Website Fax:(480)460-2142 | Brass Rat |
| http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.
 
Fred Bloggs

John said:
In as many words as possible, someone is apparently trying to tell me
that CMOS logic becomes abnormally inefficient as operating frequency
rises, and that it's only efficient when idle.

That would be right if you define "efficient" as Idd = 0. But in the
real world, when the CMOS operates at speed, all the complementary MOS
drivers must charge and discharge their output node capacitance C.
This means that each clock cycle, the CMOS must draw a charge
Q = C x Vdd from the power supply and deposit it on C. The next half
of the clock cycle discharges C to GND. The power supply then sees an
average current of charge/clock cycle x clock cycles/sec =
C x Vdd x Frequency for each switched node in the IC. You add these up
to get the total Idd draw at frequency; the manufacturer usually
specifies a Cpd value so that you can compute Idd = Cpd x Vdd x
Frequency for the whole chip. At a large enough operating frequency,
the CMOS Idd will overtake the TTL Icc.
From what I recall, the main CMOS power consumption problem occurs
when inputs rise and fall slowly; otherwise it is extremely low power
during normal operation.

That is true: slow inputs cause the input CMOS pair to dally in the
linear region where both FETs are on, and this creates a path from Vdd
to GND through the pair carrying an indeterminate and possibly large
current. In addition, the output of this pair may put other pairs in
the same state and/or disrupt the logic function of the chip. The
older 4000 series specifies input rise times no longer than 1 us, and
the newer HC types want input transitions no longer than 500 ns for
guaranteed proper operation.
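
Plugging assumed, datasheet-style numbers into the
Idd = Cpd x Vdd x Frequency relation above (values invented for
illustration only):

#include <stdio.h>

int main(void)
{
    double cpd = 22e-12;  /* assumed Cpd, 22 pF: HC-gate order of magnitude */
    double vdd = 5.0;     /* supply voltage, volts */
    double f   = 1e6;     /* switching frequency, 1 MHz */

    double idd = cpd * vdd * f;  /* average supply current */

    printf("Idd = %.0f uA\n", idd * 1e6);
    return 0;
}

That works out to about 110 uA at 1 MHz; since Idd scales linearly
with frequency, it reaches about 11 mA at 100 MHz, which is how the
CMOS line eventually crosses the flat TTL Icc.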
 
Ken Smith

John Doe said:
My main question is this:
As operating frequency rises within normal limits, does CMOS become
grossly inefficient compared to other typical forms of logic like
maybe TTL? I don't know much about typical logic families.

"grossly inefficient" applies to all logic families running at all speeds.
Nearly all the power that goes into the chip ends up as heat.

CMOS will only keep your fingers warm if it is toggling at something
like 100 MHz. TTL and ECL are nice and warm even when they are not
toggling.

The power loss in TTL rises a little more slowly than the loss in
CMOS, so at some high frequency (around 100 MHz) the two lines cross.
It is not something that the word "grossly" would normally be applied
to.

ECL has a non-switching loss about the same as its high-frequency
loss. The slope is much lower than even TTL's, but since it starts at
such a high point it doesn't matter until you are well above 100 MHz.
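
Where that crossing lands depends entirely on the values you assume.
A quick C sketch of the calculation (all figures invented for
illustration):

#include <stdio.h>

int main(void)
{
    double p_ttl = 10e-3;   /* assumed static power of a standard TTL gate */
    double cpd   = 22e-12;  /* assumed CMOS power-dissipation capacitance */
    double vdd   = 5.0;     /* CMOS supply voltage */

    /* CMOS dynamic power is roughly Cpd * Vdd^2 * f; the crossover is
     * the frequency where that equals the TTL gate's static power. */
    double f_cross = p_ttl / (cpd * vdd * vdd);

    printf("crossover near %.0f MHz\n", f_cross / 1e6);
    return 0;
}

These particular numbers put the crossover near 18 MHz; a smaller Cpd
or a hungrier TTL part slides it up toward the 100 MHz region quoted
above.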
 
Keith Williams

"grossly inefficient" applies to all logic families running at all speeds.
Nearly all the power that goes into the chip ends up as heat.
s/Nearly//

CMOS will only keep your fingers warm if it is toggling at something
like 100 MHz. TTL and ECL are nice and warm even when they are not
toggling.

Ever try to put your finger on a modern processor even when it's not
toggling? Leakage is a huge deal in high-end CMOS these days.
The power loss in TTL rises a little more slowly than the loss in
CMOS, so at some high frequency (around 100 MHz) the two lines cross.
It is not something that the word "grossly" would normally be applied
to.

Both are going to have an AC component of power proportional to
capacitance. TTL will have a higher DC component than SSI CMOS, but
when you get to 130 nm and below, CMOS starts looking pretty bad too
(though TTL doesn't play at all here ;).
ECL has a non-switching loss about the same as its high-frequency
loss. The slope is much lower than even TTL's, but since it starts at
such a high point it doesn't matter until you are well above 100 MHz.

As a percentage, perhaps.
 
