Linux OS

Paul

RayLopez99 said:
Are you sure about that j? Perhaps Linux is faster because you're not running several programs at the same time, like with Windows. In Windows 7, with Task Manager, I typically see well over two dozen programs running in the background (resident in memory). Not sure in Linux has that sort of complexity. As I said in COLA, a paperweight is 'stable' and 'fast' in that it sits there, does not change state, and will always be a paperweight. Same with Linux--if it's doing one thing or two, it's gonna be fast, but not do much. Just asking, not flaming, which I reserve for COLA.

RL

So you're evaluating Linux, and have never run the "top" command ???

http://www.tech-faq.com/wp-content/uploads/images/linuxtop.png

That's the equivalent of Task Manager, at least one of the displays.

And "top", uses the same kind of info that

ps aguwwwx

might use.
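If you just want a quick look, any of these will do the job (the exact
flags vary a bit between ps versions, so check the man page):

top                              # interactive view, refreshed every few seconds
ps auxww                         # one-shot snapshot of every process, wide output
ps aux --sort=-%mem | head -15   # biggest memory users first (GNU ps)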

There is also a GUI interface for such things, perhaps gnome-system-monitor
or something with a similar name. That displays per-core usage, similar
to what Task Manager does, and it draws a graph for you. And as opposed to the
"total memory" displayed in "top", it shows the actual amount of RAM
in use (the kind of number vmstat might have reported).

http://files.cyberciti.biz/uploads/tips/2009/06/gnome-system-monitor.png
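The package name here is the Debian/Ubuntu one; other distros may call
it something slightly different. And the terminal tools give you the
same memory numbers without a GUI:

sudo apt-get install gnome-system-monitor   # Debian/Ubuntu package name
gnome-system-monitor &                       # launch the graphs
free -m                                      # actual RAM and swap usage, in megabytes
vmstat 5                                     # memory, swap and CPU stats every 5 seconds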

If you need information about disk read/write, you can use the Package
Manager to find a copy of "iotop". One of the options to iotop sums up
all the bytes transferred; by default, it might give megabytes/sec instead.
If you're evaluating storage transfer rates, that gives useful info.
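Roughly, it looks like this (iotop generally wants to be run as root,
and -a is the option that accumulates totals rather than showing rates):

sudo iotop        # per-process read/write rates, updated live
sudo iotop -o     # only show processes actually doing I/O right now
sudo iotop -a     # accumulate totals (bytes transferred since iotop started)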

If I were to draw a distinction, Linux seems to use a bit more processor
when reading or writing NTFS. But other than that, you might find comparable
performance for a program ported to both Linux and Windows. They should
be pretty close.

*******

And if you need a breath of fresh air, try running Windows 8 with a program
that needs a lot of RAM. GIMP behaves a lot better there when dealing
with large images (~1.5GB) than it does under any previous Windows. No swapping
was apparent for a job I was trying to do. I had to shut down WinXP and
boot up Win8 just to get the job to finish, it was taking so long.
This is the image I was working on at the time (7050 × 21296 pixels).
I needed to crop it a bit. Before it was cropped, the image was
causing indigestion for GIMP under WinXP. Relatively speaking,
it was smooth as butter under Windows 8. Perhaps if the image
had been even bigger, there would have been trouble. The
hardware in this case was 4GB of RAM.

http://img201.imageshack.us/img201/6113/kenprint.gif
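Some rough arithmetic shows why an image that size hurts on a 4GB
machine (assuming about 4 bytes per pixel):

7050 x 21296 pixels           = ~150 million pixels
150 million pixels x 4 bytes  = ~600 MB for a single uncompressed layer

Add GIMP's undo history and working copies on top of that, and the job
can blow well past what 4GB of RAM will hold, which is when the
swapping starts.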

Since being cropped, that file is now small enough that GIMP will open
it without a lot of fuss. But when the image was a bit larger (longer),
I was stuck waiting for the swapping to finish, and GIMP seemed to be
doing it in a way that caused the most disk I/O possible. It's when
you test the corner conditions of an OS (what it does when it runs out
of resources) that its true nature comes out.

So at least for that one test, Win8 won the contest.

*******

For more info on the Linux side, learn about the "OOM killer".
Essential information if you're faced with the "Linux firing squad".
One of the more stupid features of Linux.

http://lwn.net/Articles/317814/

The OOM killer is a bastard because it "kills the wrong process".
You could be editing in GIMP, say, execute an operation that causes
the system to run out of memory, and in response Linux kills Xorg
rather than GIMP (which means your desktop display disappears and your
session is lost). The program causing the memory shortage typically
isn't the one that suffers; any old program running on the computer
can be abruptly killed instead. Or Linux can become so sluggish when
tight on memory that no process dies, the keyboard and mouse become
useless, and your only (practical) option is to press reset, because
system responsiveness isn't coming back any time soon. (What's
happening in that case is that Linux is stuck in a loop, asking all
processes to return the few bytes of RAM they aren't using. I wasn't
patient enough to wait for the eventual outcome, which should really
have been the OOM killer stepping in.)

On some distros, the bad behavior exists because the distro designers
didn't "tune" their kernel properly. The people at kernel.org
don't know just how badly the average distro is treating their
kernel. The Linux kernel is a lot better than the average
distro ends up portraying it. The kernel does have tuning options,
and on some distros it's taken as long as five years for someone
to notice it isn't set up properly. (This is why some older
Linux LiveCDs would actually crash under memory pressure:
bad tuning prevented memory from being freed up in a timely
manner. You can now run a Linux LiveCD without a swap partition and
expect it to "stay up".)
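The tuning options in question are mostly sysctls under vm.* -- the
names below are real, but the "right" values depend entirely on the
workload, so take the example setting as illustration only:

sysctl vm.swappiness               # how eagerly the kernel swaps (0-100)
sysctl vm.overcommit_memory        # overcommit policy: 0 heuristic, 1 always, 2 strict
sysctl vm.min_free_kbytes          # how much RAM the kernel tries to keep free
sudo sysctl -w vm.swappiness=10    # example of changing one at runtime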

*******

Every operating system has some secrets it would rather you didn't know.
As an example, load up many copies of Prime95 under WinXP and
notice that not only do processes die, but more of them
die than is actually necessary. Which means WinXP doesn't
handle corner conditions that well either. I haven't tried
that test yet on Windows 8. I didn't set out to test the
condition on WinXP either; I was using multiple copies of
Prime95 to test the memory on the computer, and that's when I
noticed how poorly behaved it was. There was sufficient page
file available for what I was doing, but the paging didn't
work nearly as well as the computing load required,
and stuff started "dropping like flies".

Staring at an empty desktop is not a test of any OS.
You've got to come up with good test cases if you
want to see "what stinks underneath". And every OS
has features that make you go "what were they thinking" :)

Paul
 
