Power failure during Windows 7 defrag (x64 Ultimate edition)


Castor Nageur

Hi all,

Last night, Windows 7 started a disk defragmentation (by default, Windows
7 schedules a defrag once a week).
Unfortunately, Murphy's law was confirmed once again: the defrag
started at 1:00 AM and my house had a general power failure at 1:10 AM,
in the middle of the C:\ defrag.

Fortunately, I was able to reboot.
I scheduled a CHKDSK from Windows 7, which executed successfully on the
next reboot.
I have not yet found how to get the report (generated at boot time), but
I read that I could get it from the Event Viewer in a Wininit event.
Anyway, my system reboots fine and I do not notice any instability.
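
If what I read is right, something like the sketch below (Python wrapping
the built-in wevtutil tool; the provider name is my guess and may need
adjusting) should pull the most recent boot-time report out of the
Application log:

# Sketch: fetch the latest boot-time CHKDSK report from the Application
# event log. The provider name below is an assumption.
import subprocess

QUERY = "*[System[Provider[@Name='Microsoft-Windows-Wininit']]]"

result = subprocess.run(
    ["wevtutil", "qe", "Application",
     "/q:" + QUERY,   # XPath filter on the event provider
     "/f:text",       # plain-text output instead of XML
     "/rd:true",      # newest events first
     "/c:1"],         # only the most recent matching event
    capture_output=True, text=True, check=True)

print(result.stdout)  # the CHKDSK summary recorded at boot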

* Is the CHKDSK execution enough? Is there anything more that I
should do?
* I read that NTFS-based file systems cannot be corrupted even if a
crash or power failure occurs in the middle of a defrag process or
any write operation. Is that true?

Thanks in advance.
 

Arno

Castor Nageur said:
[...]
* Is the CHKDSK execution enough? Is there anything more that I
should do?

I don't think so.
* I read that NTFS-based file systems cannot be corrupted even if a
crash or power failure occurs in the middle of a defrag process or
any write operation. Is that true?

Very likely untrue, as that would require a perfect implementation
and a reliable flush-to-disk. In particular, the second is not
done on consumer-grade disks, as the Linux kernel folks
found out when they decided not to trust the specification but
actually try it out. As it turns out, typical consumer disks can
return from a write with disk-buffer flush _before_ the data is
on disk. This is an exceedingly stupid thing for the HDD
manufacturers to do, but hardly the first time they have messed up.

The most important practical consequence is that for reliable
database commits you either have to use HDDs with the write buffer
turned off (very slow; may be acceptable if you only put the
DB journal on them), a working flush (SCSI or SAS may have this
as one of their very few advantages), or a UPS with an orderly
shutdown on power failure and generous waiting times
before power is withdrawn from the disks after the flush
command (this should be standard on modern OSes).
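
To illustrate what a "working flush" means at the application level,
here is a minimal sketch (Python; the file name and record are made up)
of a durable commit, where the write only counts as committed once the
flush call has returned:

# Sketch of a durable commit: the record only counts as committed once
# os.fsync() has returned. This is exactly the step a lying disk
# undermines -- fsync() can return while the data still sits in the
# drive's volatile write buffer.
import os

def durable_append(path, record):
    with open(path, "ab") as f:
        f.write(record)
        f.flush()              # push Python's userspace buffer to the OS
        os.fsync(f.fileno())   # ask the OS (and the disk) to flush to media
    # Only now may the application report the commit as successful.

durable_append("journal.log", b"commit 42\n")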

As to a perfect implementation, Microsoft is unlikely to have
achieved that.

That said, it typically works nonetheless, in 99% or so of
cases. Unlike FAT, NTFS is a lot harder to mess up. If CHKDSK does
not complain, you should be fine.

Arno
 

cjt

Arno wrote:
[...] Unlike FAT, NTFS is a lot harder to mess up. If CHKDSK does
not complain, you should be fine.

My understanding is that ZFS is pretty robust, even on somewhat shoddy
disks.
 

Ed Light

Defragging is almost always close to useless, and is certainly not worth
doing once a week. Just because it is a Windows default does not mean
it makes sense.

My experience is the opposite. An NTFS computer will be noticeably
slower after some months.
--
Ed Light

Better World News TV Channel:
http://realnews.com

Iraq Veterans Against the War and Related:
http://ivaw.org
http://couragetoresist.org
http://antiwar.com

Send spam to the FTC at
(e-mail address removed)
Thanks, robots.
 

Rod Speed

Ed Light wrote
My experience is the opposite.

Like hell it is.
An NTFS computer will be noticeably slower after some months.

Pity a defrag makes no difference and you couldn't pick that in a randomised double
blind trial with you not being allowed to run a utility that displays fragmentation.

And the reason for that is that **** all of what most people do on their desktop systems
is affected at all by the very fast seeks between fragments on modern systems;
most stuff like playing media files isn't affected at all, and that's about the
only linear processing of large files most ever do on modern desktop systems.
 

Arno

Many systems are pretty robust, even in the face of not-quite-honest
hard disks and enabled disk write buffers. NTFS is not bad, and a world
ahead of FAT. ZFS has a reputation of being very solid, as you say.
But so also are most major Linux filesystems - ext3, ext4, xfs and jfs
(and newcomer btrfs). Ironically, xfs is viewed as being /less/ robust
on power failure, even though it is at least as good as the others -
this comes from misunderstandings of the safest combinations of write
buffers, barriers, etc., along with the xfs documentation which explains
the problems that other filesystems mostly gloss over. While NTFS
documentation implies that metafile journalling makes it safe, xfs
documentation goes to lengths to explain how it is still unsafe, and how
you can improve it.
The best advice is to start out with the most appropriate filesystem for
your system (NTFS for Windows, typically ext4 or xfs for Linux, and ZFS
for Solaris), depending on your requirements. Then read the advice in
the filesystem documentation. Sometimes you have to choose a balance
between performance issues and robustness in the face of crashes or
power outages - that's /your/ choice.
If you need solid reliability during power fails, get an UPS so that
your system can shut down cleanly. /No/ filesystem will be entirely
reliable without that.

That is not quite correct either.
a) If you have reliable flush, a filesystem can be reliable without a
UPS. There are some quite old research papers out there with
a mathematical proof of that (see the sketch further down for the
basic ordering idea).
b) A UPS helps, but is not entirely safe either. Take the following
scenario: a disk accepts quite a bit of data, then lies to the
OS about having written it to disk successfully. Then the
system is shut down, giving the disks about a minute or so
to flush their buffers. Again, the disk lies, but runs into write
problems afterwards and takes longer than that minute.
Unfortunately the OS turns the power off, having gotten
no indication that there was a problem.

The only thing that really helps is honest disks. That is why
it is so stupid to have the disks lie to the OS. Normally the
OS does _not_ request that flush, only when it is really
needed.
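
As a very rough sketch of the ordering idea behind a) (this is not
NTFS's or any real filesystem's code, just the principle, in Python):

# Sketch: why a reliable flush is enough for consistency. The intended
# change is written and flushed to a journal before the real data is
# touched; after a crash, complete journal entries can be replayed.
# If the flush lies, the ordering guarantee is gone.
import os, json

JOURNAL = "fs.journal"

def flush(f):
    f.flush()
    os.fsync(f.fileno())

def journaled_update(target, new_bytes):
    # 1. Describe the intended change and force it to stable storage.
    with open(JOURNAL, "ab") as j:
        entry = {"target": target, "data": new_bytes.decode()}
        j.write((json.dumps(entry) + "\n").encode())
        flush(j)                       # barrier: journal is durable first
    # 2. Only then touch the real data.
    with open(target, "wb") as f:
        f.write(new_bytes)
        flush(f)
    # 3. Mark the entry as done (here simply by truncating the journal).
    with open(JOURNAL, "wb") as j:
        flush(j)

journaled_update("settings.cfg", b"color=blue\n")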

Arno
 

Bill James

Arno said:
That is not quite correct either.
a) If you have reliable flush, a filesystem can be reliable without
UPS. There are some quite old research papers out there with
a mathematical proof of that.
b) A UPS helps, but is not entirely safe either. Take the following
scenario: a disk accepts quite a bit of data, then lies to the
OS about having written it to disk successfully. Then the
system is shut down, giving the disks about a minute or so
to flush their buffers.

A correctly sized UPS gives a lot longer than that.

Again, the disk lies, but runs into write
problems afterwards and takes longer than that minute.
Unfortunately the OS turns the power off, having gotten
no indication that there was a problem.

Not with a correctly sized UPS.
The only thing that really helps is honest disks.

A UPS that lasts a lot longer than a minute does too.

 

Arno

Bill R TechSpec said:
Castor Nageur;1303401 Wrote: [...]
As for defrag... the controversy rages on!
IMHO, it helps. Not because of who says so or what study says what, but
because I notice my system runs better with a defrag program that
transparently and continually defrags my disks (and my disks tend to
last MUCH longer than just about anyone I know.... Coincidence? Maybe,
but like I said, I observe a difference so I defrag....)
Since even Microsoft says defrag is necessary (otherwise they wouldn't
spend the resources to supply their OSes with a defragmenter) there
really shouldn't be any controversy or questions as to whether or not it is
needed (even the "perfect" Macs now have defragmenters for their OS X).
Bill R TechSpec

The main thing here is that modern filesystems do not
require defrag, but NTFS is not quite a modern filesystem.
That was supposed to be WinFS, which MS never managed to
get to work, despite something like more than a decade of trying.

Some people keep insisting that NTFS is a modern
filesystem and consequently does not need defragging.
This is, however, pure posturing, not technical fact.

So, yes, on Windows, defragging helps.

Arno
 

Rod Speed

Bill R TechSpec wrote
Castor Nageur;1303401 Wrote
Getting back to the original question, you should be fine if Check
Disk says all is OK.
If you are at all concerned, it seems to me that a system restore to
the point before the power outage would handle any complications
resulting from it.
The obvious long-term solution would be to get a UPS, as then your PC
would finish off any processes and shut down so as to minimize any
problems from an abrupt shutdown.
As for defrag... the controversy rages on!
IMHO, it helps.

Your hairy opinion is irrelevant. What matters is the evidence to support that opinion.

There isn't any except for the situation where you spend a lot of time furiously
copying very large files around, and if you are doing that, you shouldn't, because
it makes a lot more sense to organise things so you don't do that.
Not because of who says so or what study says what, but because
I notice my system runs better with a defrag program that transparently
and continually defrags my disks

You wouldn't be able to pick it in a randomised double blind trial where
you weren't allowed to run a utility that displays the fragmentation.
(and my disks tend to last MUCH longer than just
about anyone I know.... Coincidence? Maybe,

No maybe about it, given that a system that's defragged
weekly will exercise the drive more than one that isn't.
but like I said, I observe a difference

No, you don't.
so I defrag....)
Since even Microsoft says defrag is necessary

No, they don't.
(otherwise they wouldn't spend the resources
to supply their OSes with a defragmenter)

That doesn't mean it actually is necessary.
there really shouldn't be any controversy or
questions as to whether or not it is needed

Only a fool would claim that because MS has implemented a defragmenter,
it is necessary.
(even the "perfect" Macs now have defragmenters for their OS X).

Not from Apple, they don't.
 

Franc Zabkar

b) A UPS helps, but is not entirely safe either. Take the following
scenario: a disk accepts quite a bit of data, then lies to the
OS about having written it to disk successfully. Then the
system is shut down, giving the disks about a minute or so
to flush their buffers. Again, the disk lies, but runs into write
problems afterwards and takes longer than that minute.
Unfortunately the OS turns the power off, having gotten
no indication that there was a problem.

A disc's buffer size is around 64MB at most. Average write speed for
modern drives would be at least 50MB/s. Wouldn't this suggest that 1
minute is more than enough time to flush the cache?

- Franc Zabkar
 

Arno

Franc Zabkar said:
On 5 Aug 2011 08:28:46 GMT, Arno <[email protected]> put finger to
keyboard and composed:
A disc's buffer size is around 64MB at most. Average write speed for
modern drives would be at least 50MB/s. Wouldn't this suggest that 1
minute is more than enough time to flush the cache?

Unfortunately, no. First, if the accesses are non-linear, the
write speed can go way down even if everything works perfectly.

Second, say there is a seek problem because of, say, vibration.
As the nice Fishworks video on YouTube shows,
disk latency can go up to 500 ms in that case.
So, say, you have 1000 accesses in that 64MB (not the worst
case). That gives you 60ms maximum latency for each, totally
disregarding write times. With vibration, that can already
be far too long.
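
Putting rough numbers on that (these are just the figures from this
thread, not measurements; Python used as a calculator):

# Back-of-the-envelope numbers from this thread (illustrative only).
buffer_mb    = 64      # drive write buffer
linear_mb_s  = 50      # sustained linear write speed
budget_s     = 60      # time before power is withdrawn
accesses     = 1000    # scattered writes queued in the buffer
worst_seek_s = 0.5     # per-access latency under heavy vibration

print(buffer_mb / linear_mb_s)   # ~1.3 s: a linear flush easily fits
print(budget_s / accesses)       # 0.06 s: latency budget per access
print(accesses * worst_seek_s)   # 500 s: the worst case blows the budget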

Other potential problems exist. For example, there may be
a new surface problem with a sector that prevents
writing. The disk may take that minute (or longer) to
diagnose the problem and do something about it.

However, one thing is correct about your numbers: they are
the reason it typically works reliably despite the
design defects in the disks.

Arno
 

Rod Speed

David Brown wrote
Bill R TechSpec wrote
MS doesn't say defrag is necessary - but they do make weekly defrags a default setting for Win7 (assuming the OP was
correct there).

Yes, he is on that point.
They certainly make defragging easy to do.
But does that mean it is actually /useful/, either in reality or in
the minds of MS?
No, of course not - all it means is that customers /think/ defragging
is useful, and expect it to be easy and automatic. Whatever one may
think of MS and their software, they do listen to customer expectations.
Users of Windows, and DOS before them, have learned to see defragging
as essential - so MS gives them that.

It's a bit stronger than that; some of the MS KBs do say it's useful.

Doesn't mean they are right, though, or that they do any more than proclaim that.
There was a time in the DOS and Windows world when defragging /did/ have a significant impact. DOS and Windows (even
the latest versions) have never been very good at file allocation,

It's been much better lately.
so there has always been more fragmentation than necessary. When combined with small and badly designed disk, file
and directory entry caches, this meant that fragmentation led to a great deal more head movement

That's overstating it.
and disk latency than was necessary - defragging, and other disk optimisation (such as placement of files and
directories) could have a measurable difference.

And ever since XP, it's done that stuff automatically too.
But modern machines have much more memory, and newer Windows versions are better at using it as cache - greatly
reducing the head movement and the impact of fragmentation.

And large files don't fragment anything like as much either.
(This is the main reason why there have been very few defrag programs for non-Windows systems - *nix systems have
always been far better at using memory for caches,

And they have been more vulnerable to power failures etc. too.
as well as having better allocation policies in the first place, so that fragmentation has never been a big issue on
*nix.)

And it hasn't been a big issue in Windows for a long time now.
Even at its worst, you might find an impact of perhaps 5% on reading large files

It's nothing like that in practice, essentially because there isn't much
linear processing of large files on modern systems except when
playing media files, and in that situation a few extra seeks during
the linear movement through the file are completely invisible.

Even with the transcoding of large media files like video, where
you do move linearly through the file, so much work is going on in
the transcoding that you don't see anything like the transfer rate
that the hardware can do, so again, extra seeks are invisible.

And the system is normally doing extra seeks anyway, because
few people use a system for just the transcoding alone, so the heads are
moving around because the user is doing other things while the
transcoding is happening, if only catching up on the news or mail etc.

- but if it takes 10 seconds to start a big program or open a large file,

Anyone with even half a clue doesn't close those once they are opened,
so that only happens just after a reboot, which doesn't happen often either.
with a variation of +/- 20% due to other factors (other programs, anti-virus, automatic updates, network delays, etc.,
etc.), then the fragmentation overhead is lost in the noise.

Yep, completely invisible to the user.

Even with hibernation to the drive, where you do see linear processing
of quite a large file, a few extra seeks aren't visible either.
No one is saying that file fragmentation does not have a performance impact - only that it is rarely noticeable or
measurable in real life.
The myth of defragmenting being "essential" is hard to kill - there
are too many companies making good money from selling basically
useless defrag programs that do nothing that cannot be done using the inbuilt Windows software (or totally free
alternatives). Such companies put a lot of effort into their marketing to keep these myths alive - and to spread them
to other platforms.
There /are/ situations when defragging can be helpful - such as if you deal with a lot of large files.

Hardly ever, even if you do. I have a hell of a lot of very large files,
but they are what the PVR has produced, and even though I do process
them a bit, to delete what I have watched from an entire evening's
capture of a particular TV channel, a few extra seeks aren't even
visible, because only a fool hangs around twiddling his thumbs
waiting for the edit to happen etc.

Ditto with backups, which do involve very large files: the time
is dominated by the backup op, not by head seeks, and again,
only a fool sits there twiddling their thumbs while a backup happens.
But it is essentially useless on a busy filesystem with lots of small files and lots of deletions (especially on
Windows, due to its poor allocation policies)

That hasn't been true for a long time now.
- even if defragging helped noticeably, your new files would be fragmented shortly afterwards.

'Fraid not, even on the PVR, which does write a hell of a lot of very large files.
If you have a separate disk (or partition) used for storing large files that you need to access as fast as possible,
then it may be worth the effort defragmenting that on occasion

Nope, even on the PVR.
(/not/ weekly).

Not even yearly.
 

Arno

David Brown said:
Bill R TechSpec said:
[...]

The main thing here is that modern filesystems do not
require defrag, but NTFS is not quite a modern filesystem.
That was supposed to be WinFS, which MS never managed to
get to work, despite something like more than a decade of trying.
I think MS probably realised quite quickly that the basic idea of WinFS
- essentially a large Access database written directly to the disk - was
flawed. They have suffered from over-hyping WinFS, and now can't change
out NTFS until they have something that will actually /work/ (unlike the
original WinFS idea), have the features that WinFS was supposed to have,
be faster and more reliable than NTFS, and compete with ZFS and btrfs
(or at the very least, ext4 and xfs) on modern filesystem features.
That's no easy task.

The mere fact that MS came up with (and announced publicly) a set
of features they found they could not reliably implement means they
are incompetent. Not a surprise. AFAIK NTFS is not an original
MS design either; it was done by the VMS developers they hired.

I consider it quite possible that MS does not have the skills to
design and implement a modern filesystem at this time.
NTFS was definitely a modern filesystem when it came out, but that was
about 15 years ago, and it's only had small incremental changes since
(though the implementation of the filesystem in windows has improved
along the way).
The need (or lack of need) for defragging is not really an issue with
the filesystem - it is the /implementation/ of the file system that is
most important.

Of course "filesystem" was used as short form for "filesystem
and its current reference implementation".
For example, good file allocation policies reduce the
amount of fragmentation, and good caching and read-ahead policies reduce
the impact of fragmentation. And that /has/ got better as windows has
developed. There was a time when Windows would only use a small amount
of memory for caches no matter how much was needed for the rest of the
system - it would rather leave memory unused than use it for read
caches, which is absurd. But windows has got better here.

It would be extremely pathetic if they did not get at least
some improvement out of better hardware. But the buffer/cache
is not part of the filesystem implementation (at least not in
any sane design), but part of the virtual filesystem layer.

It is possible that, as MS supports so few different
filesystems, they do not have that layer, but I doubt it.
There are good technical arguments for why defragging has little
real-world impact, and is seldom noticeable. I have seen nothing of
recent age (say, XP or newer) indicating that defragging helps.

There are people that have different experiences. It seems to
be usage-pattern dependent.
Only theoretically - it /does/ help, but not noticeably so except in a
few cases. Of course, it's easy to make synthetic benchmarks showing a
difference.

Actually, for a _modern_ filesystem it is not. For example, getting
ext2/3/4 to fragment is pretty hard (and ext2 is 18 years old).
That is one reason that, while there was an ext2 defragger, it was
so unnecessary that the project died. And the standard Linux filesystem
check does display fragmentation on each filesystem check.

Arno
 

Castor Nageur

David Brown said:
There is also always a small chance of a more physical or electrical
problem with a disk during power failures, especially if there is a
period with poor-quality power (power that comes and goes, single-phase
failures, etc.).
But in general, if you don't see any problems, be happy.

CHKDSK was fine and I did not see any problem, so I am quite happy.
Indeed, I cannot imagine reinstalling my whole system every time I have a
crash or power failure.


Arno said:
The most important practical consequence is that for reliable database
commits you either have to use HDDs with buffer turned off (very slow,
may be acceptable if you just put the DB journal on it)

I think I won't turn it off because, as far as I am concerned, the
benefit does not justify it for the occasional power failure, especially if I
lose a lot of I/O performance.
I do not even know whether my BIOS or W7 allows me to turn the disk buffer
off.

* I am surprised that the OS/hardware level is not able to know whether
the data is still in the disk buffer or already written to the disk, and
then take the right action if a failure (crash, power) occurs.
If CHKDSK does not complain you should be fine.

Yes, it is.
I also ran an MD5 check on my C: dir, where I have plenty of big
archive files, and they were all OK.
Moreover, my computer works perfectly.
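
For anyone wanting to do the same kind of check, something along these
lines is enough (a Python sketch; the directory is just an example, and
the output still has to be compared against checksums recorded earlier):

# Sketch: compute MD5 checksums for every file under a directory so they
# can be compared against a previously saved list.
import hashlib, os

def md5_of(path, chunk=1024 * 1024):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

for root, _, files in os.walk(r"C:\archives"):   # example directory
    for name in files:
        full = os.path.join(root, name)
        print(md5_of(full), full)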


Yousuf Khan said:
Yeah, NTFS is a journalled filesystem; the defrag process would just
get replayed back to its most recent write operation. Nothing gets
erased until it is overwritten. Data will remain in its original
location until then.

Thanks, but as Arno and David said, when the data is still in the cache, the
disk may already have told the OS that the data is written.
Of course, the cache is lost on a power failure. I do not know what
happens in the case of an OS crash.

* Can the NTFS journalling system detect the problem anyway?


If you are at all concerned, it seems to me that a system restore to
the point before the power outage would handle any complications
resulting from it.

Yes, I agree. I think a system restore will not restore all the moved
file blocks on my C: disk.
In the past, I did a system restore and got many problems with my
software licenses due to the time difference between the restore point and
the real time.


To finish my post: I worked with Linux in the '90s (I do not remember
which filesystem I ran) and remember that when the system crashed,
there was a long filesystem check and recovery on boot.
I also remember that this step could fail (it never failed for me), so
I suppose you had to reinstall your system in that case.
 

Arno

Castor Nageur said:
problem with a disk during power failures, especially if there is a
period with poor-quality power (power that comes and goes, single-phase
failures, etc.).
CHKDSK was fine and I did not see any problem, so I am quite happy.
Indeed, I cannot imagine reinstalling my whole system every time I have a
crash or power failure.
I think I won't turn it off because, as far as I am concerned, the
benefit does not justify it for the occasional power failure, especially if I
lose a lot of I/O performance.
I do not even know whether my BIOS or W7 allows me to turn the disk buffer
off.

I was talking about databases here, as in Postgres, MySQL, Oracle, MSSQL,
etc., not general filesystem usage.
* I am surprised that the OS/hardware level is not able to know whether
the data is still in the disk buffer or already written to the disk, and
then take the right action if a failure (crash, power) occurs.

According to the standard, the OS is able to reliably flush.
It can tell the disk to only return from a write transfer
when the data is on disk. The problem is that the disk
manufacturers are violating the standard and are lying
to the OS by claiming the data has been flushed to the surface
when in fact it has not.

Unfortunately, the law is slow, conservative and
backwards-facing; otherwise this deliberately
introduced design flaw would make disk vendors liable
for the data loss it causes.
Yes, it is.
I also ran an MD5 check on my C: dir, where I have plenty of big
archive files, and they were all OK.
Moreover, my computer works perfectly.
Thanks, but as Arno and David said, when the data is still in the cache, the
disk may already have told the OS that the data is written.
Of course, the cache is lost on a power failure. I do not know what
happens in the case of an OS crash.
* Can the NTFS journalling system detect the problem anyway?

Not reliably, no. It typically (99%) works anyway. In the normal
situation, the disk gets enough time to write its buffer to disk.
This is not perfect (see my example about the disk encountering
problems during this flush), but it works in most cases.

The Linux people who found out about the disks lying did large
writes that were supposed to return only on completion, and then immediately
removed the power from the disks. This is not the normal
situation.
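
The write side of such a test is roughly the following (a sketch, not
the kernel developers' actual harness): write numbered records with a
forced flush after each one, pull the plug, and after reboot check
whether every record that was acknowledged really made it to the platters.

# Sketch of the write side of a pull-the-plug test. Each record is only
# reported as acknowledged after fsync() returns; if acknowledged
# records are missing after the power cut, the drive lied about the flush.
import os, time

with open("plugpull.dat", "ab") as f:
    seq = 0
    while True:                        # run until the power is cut
        f.write(("record %d\n" % seq).encode())
        f.flush()
        os.fsync(f.fileno())           # supposed to return only when on disk
        print("acknowledged", seq)     # note the highest acknowledged record
        seq += 1
        time.sleep(0.01)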
Yes, I agree. I think a system restore will not restore all the moved
file blocks on my C: disk.
In the past, I did a system restore and got many problems with my
software licenses due to the time difference between the restore point and
the real time.

To finish my post: I worked with Linux in the '90s (I do not remember
which filesystem I ran) and remember that when the system crashed,
there was a long filesystem check and recovery on boot.

Linux is careful.
I also remember that this step could fail (it never failed for me), so
I suppose you had to reinstall your system in that case.

No. You had to manually repair the files it could not fix. A
reinstallation was basically never necessary with UNIX filesystems.
Unlike Windows, UNIX filesystems are intended for 24/7
usage, which makes a power failure without warning a normal
shutdown situation, and the filesystems are all designed to
handle that gracefully. It is the substandard Microsoft
filesystems that can have severe problems with data loss
in this situation.

Arno
 

Castor Nageur

The problem is that the disk
manufacturers are violating the standard and are lying
to the OS by claiming the data has been flushed to the surface
when in fact it has not.

Now I understand better.
I suppose they lie in order to rank well in benchmarks, and
consequently, if one of them does it, they all must do it.
Not reliably, no. It typically (99%) works anyway. In the normal
situation, the disk gets enough time to write its buffer to disk.
This is not perfect (see my example about the disk encountering
problems during this flush), but it works in most cases.

Anyway, I disabled all the scheduled defrags and will always start them
manually when needed.
I also plan to use JkDefrag, as it is said to work better than the
Windows defrag.
I also partitioned my disk to isolate the system so I never have to
defrag it (and keep it safe :)).
The Linux people who found out about the disks lying did large
writes that were supposed to return only on completion, and then immediately
removed the power from the disks. This is not the normal
situation.

This is a very good test.

* And do they maintain a list of reliable disks (ones that do not lie)?
No. You had to manually repair the files it could not fix. A
reinstallation was basically never necessary with UNIX filesystems.
Unlike Windows, UNIX filesystems are intended for 24/7
usage, which makes a power failure without warning a normal
shutdown situation, and the filesystems are all designed to
handle that gracefully. It is the substandard Microsoft
filesystems that can have severe problems with data loss
in this situation.

I hope one day Windows will do as well as Linux!
 

Arno

[...]
* And do they maintain a list of reliable disks (ones that do not lie)?

Not to my knowledge. It would be far too much effort.
Anyway, they were debugging some "impossible" filesystem
corruption when they found this little gem.
I hope one day Windows will do as well as Linux!

I doubt it. MS has neither the technological competence
nor the will to be. And as long as the masses think MS trash is
the height of quality, nothing will change.

Just look at it: they force an only partially backwards-compatible
OS upgrade on their "customers" every few years
just to maximize profit. If any UNIX vendor did that,
they would go right out of business. And here is the dirty
secret: nobody runs critical stuff like bank core IT on
Windows. Well, maybe Citibank. But they are too greedy to
have even a single competent IT security expert look at
their customer-facing interfaces. The last attack against
them was in the standard stuff you check in the first
few hours of a pentest.

Windows is just a toy and everybody doing reliable IT
knows that.

Arno
 

helloworld

Not to my knowledge. It would be far too much effort.
Anyway, they were debugging some "impossible" filesystem
corruption when they found this little gem.

That's a pity!
So the only way to get around this is to buy a UPS, but that
will not protect against system crashes.
[...] And here is the dirty
secret: nobody runs critical stuff like bank core IT on
Windows. [...]

I cannot disagree.
I am an IT programmer and all our critical processes run on Unix
platforms.
We have some Unix workstations which have been running for 3 years
non-stop (and are still running!).
But I prefer the Windows GUI, especially the Windows 7 one :), so you
are right, it is a nice toy.
 
