Kaspersky wins yet another comparison

Art

Tested by the AV-Comparatives organization, Kaspersky Antivirus gets
the best on-demand results with 99.85% of malware detected; McAfee is
second with 95.41%.

Looks like McAfee is second with 99.24%, not 95.41%.

Finally! I've been waiting for a "crud" detection category, and this
test has it (they call it "unwanted files"). I see McAfee is the super
crud detector here, "winning" by 75% to 68% over KAV :)


Art
http://www.epix.net/~artnpeg
 
Roy Coorne

Jari said:
Tested by the AV-Comparatives organization, Kaspersky Antivirus gets
the best on-demand results with 99.85% of malware detected; McAfee is
second with 95.41%.

The comparison looks quite professionally made.
http://www.av-comparatives.org/seiten/ergebnisse_2004_02.php

Jari

Many comparisons look quite professionally made - but many only include
detection rates.

For me, as a user who builds his rigs at home, several qualities of an
AV scanner are important:
- detection rate
- frequency of updating
- scanning of several POP3 accounts
- handling comfort (e.g. it must be easy to activate/deactivate the
background on-access scanner)
- easy registration (no activation)

And remember: Safe Hex is at least as important as AV scanning;-)

<my 2 cents> Roy <using NAV and Avast>
 
kurt wismer

Jari said:
Tested by the AV-Comparatives organization, Kaspersky Antivirus gets
the best on-demand results with 99.85% of malware detected; McAfee is
second with 95.41%.

The comparison looks quite professionally made.
http://www.av-comparatives.org/seiten/ergebnisse_2004_02.php

yes, so professional they neglected to provide any information on the
testing methodology used...

you can't really judge the quality of a comparative by how pretty the
table looks or how many significant digits they represent their
percentages in...

also, i'm quite suspicious of some of the numbers used... 217,000 dos
viruses? there aren't that many viruses...
 
Art

yes, so professional they neglected to provide any information on the
testing methodology used...

you can't really judge the quality of a comparative by how pretty the
table looks or how many significant digits they represent their
percentages in...

also, i'm quite suspicious of some of the numbers used... 217,000 dos
viruses? there aren't that many viruses...

Depends on your criteria. Does five bytes of code difference
constitute a different virus? I'm not surprised at numbers exceeding
250,000 for all viruses (just extrapolating from past data and the
rapid increases seen) but it does seem strange that there would be
that many in the DOS category.


Art
http://www.epix.net/~artnpeg
 
kurt wismer

Depends on your criteria.

the exact number may, but 217,000 is still outside of the ballpark...
Does five bytes of code difference
constitute a different virus? I'm not surprised at numbers exceeding
250,000 for all viruses

i am... that's nearly 3 times as many as i would expect to hear about...
(just extrapolating from past data and the
rapid increases seen)

rapid increases in the rate of virus writing?...

last year (or was it the year before) there was supposedly somewhere
around 80,000 total... 250,000 means that there were about 170,000
written in one year... that math works out to about one new virus every
3 minutes...

on top of that, however, 250,000 viruses would take an incredibly long
time to verify... at a modest 15 minutes per sample it would take more
than 7 years... to get it done in one year you'd have to cut that time
down to 2 minutes per sample on average...

all of this seems very unlikely to me...
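
(For reference, a quick Python sketch of that back-of-the-envelope
arithmetic, using only the figures quoted in this post - the 80,000 and
250,000 totals and the 15-minutes-per-sample estimate - and assuming
round-the-clock work:)

# Back-of-the-envelope check of the rate-of-writing and verification-time
# figures discussed above. All inputs are the numbers quoted in the post,
# not independently established facts.
MINUTES_PER_YEAR = 365 * 24 * 60                  # ~525,600 minutes

new_viruses = 250_000 - 80_000                    # ~170,000 new in one year
print(MINUTES_PER_YEAR / new_viruses)             # ~3.1 -> one new virus every ~3 minutes

total_samples = 250_000
verify_minutes_each = 15                          # "a modest 15 minutes per sample"
print(total_samples * verify_minutes_each / MINUTES_PER_YEAR)  # ~7.1 years to verify them all

print(MINUTES_PER_YEAR / total_samples)           # ~2.1 minutes per sample to finish in a year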
 
Art

kurt wismer wrote:
last year (or was it the year before) there was supposedly somewhere
around 80,000 total... 250,000 means that there were about 170,000
written in one year... that math works out to about one new virus every
3 minutes...

I used different data ... from here:

http://www.cknow.com/vtutor/vtnumber.htm

And I extrapolated the exponential increase using 50% per year from
the year 2000. So I got:

2000 50,000
2001 75,000
2002 112,500
2003 168,750
2004 253,125

An exponential increase is not only consistent with past history but
also the number of people (both users and vxers) involved. And who
knows? Maybe the number of vxers is growing at a larger exponential
rate than that of PCs and users.
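
(As a quick sketch, the extrapolation above is just 50% compound growth
per year from the 50,000 figure for 2000 - both of which are assumptions
taken from the page linked above, not established counts:)

# Compound 50%-per-year growth from an assumed 50,000 viruses in 2000.
total = 50_000.0
for year in range(2000, 2005):
    print(year, int(total))
    total *= 1.5
# 2000 50000
# 2001 75000
# 2002 112500
# 2003 168750
# 2004 253125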
on top of that, however, 250,000 viruses would take an incredibly long
time to verify... at a modest 15 minutes per sample it would take more
than 7 years... get it done in one year you'd have to cut that time
down to 2 minutes per sample on average...

He's a busy guy, all right :) Faster than the speed of light!
all of this seems very unlikely to me...

I'm just talking about the total number of viruses ... and I wouldn't
be surprised at 250,000 this year. But I dunno.


Art
http://www.epix.net/~artnpeg
 
Blevins

Jari Lehtonen said:
Tested by the AV-Comparatives organization, Kaspersky Antivirus gets
the best on-demand results with 99.85% of malware detected; McAfee is
second with 95.41%.

The comparison looks quite professionally made.
http://www.av-comparatives.org/seiten/ergebnisse_2004_02.php

Jari


I'm sure that for most users, both McAfee and KAV are equally adequate
regardless of the slight variations in results on any test. Both put a huge
amount of work into their products and both use quality scanning engines.
 
Clive

Jari said:
Tested by the AV-Comparatives organization, Kaspersky Antivirus gets
the best on-demand results with 99.85% of malware detected; McAfee is
second with 95.41%.

The comparison looks quite professionally made.
http://www.av-comparatives.org/seiten/ergebnisse_2004_02.php

Jari

Simple answer that many probably agree with......

Let's say you have an AV program that gains 100% success in every test - BUT
it slows your system down a lot! (Symantec, McAfee, KAV...)

AND you have an AV that scores 90% success, but with limited effect on
system resources......

I know which I would go for... :)

Clive
 
Jari Lehtonen

I have been reading people's reactions to this test, and they seem to
be quite critical and have the attitude that "they know better
how to test AV products".

I would really like those experts to tell me where I can find
an unbiased, professionally made, scientific and reliable test for
antivirus programs. So far I haven't found one. I personally think Virus
Bulletin's 100% test is crap. Nod32 always gets 100%, and so does Norton.
And from my own experience and what people have written here, Norton is
far from a perfect scanner.

Jari
 
Art

Simple answer that many probably agree with......

Let's say you have an AV program that gains 100% success in every test - BUT
it slows your system down a lot! (Symantec, McAfee, KAV...)

AND you have an AV that scores 90% success, but with limited effect on
system resources......

I know which I would go for... :)

But there's nothing to stop you from using a top notch scanner
on-demand as a backup. In fact, if you have a clue and practice safe
hex, on-demand scanning is all you need. You can (almost) do without a
resident scanner completely in that case, since KAV, RAV and Dr Web
(at least) have single file upload av scan sites. I say "almost" since
there are restrictions on file size you can upload for scanning. And
it's _always_ a good idea to get more than one "opinion" on suspect
files.

The realtime scanner market is aimed at the clueless and those
concerned with PCs not directly/completely under their control. In any
case realtime scanners are aimed at the clueless end user problem.
They've become bloated monstrosities. And no single av scanner
realtime offers the kind of real protection safe hex affords.

I know which _ones_ I go for :)


Art
http://www.epix.net/~artnpeg
 
Tweakie

Simple answer that many probably agree with......

Let's say you have an AV program that gains 100% success in every test - BUT
it slows your system down a lot! (Symantec, McAfee, KAV...)

AND you have an AV that scores 90% success, but with limited effect on
system resources......

I know which I would go for... :)

The big problem is: can you show us an independent and rigorous test
showing the impact of different AVs on system performance?

I know two recent tests that give some hints for a few AV products:

http://www.pcmag.com/image_popup/0,3003,s=400&iid=53574,00.asp
(impact of AVs on Winstone benchmark - October 03)

http://www.rokop-security.de/main/article.php?sid=693&mode=thread&order=0
(Number of processes, RAM used and scan speed - January & February 04)

On-access scan speed, which has an impact on system performance, is also
mentioned here:

http://www.f-secure.de/tests/ctvergleichstest0304.pdf

The fastest is eTrust using the VET engine. Nod, Sophos and McAfee
are less than 25% slower than it. The slowest are KAV and the AVs using
several scanning engines: F-Secure and AVK.
 
kurt wismer

Jari said:
I have been reading people's reactions to this test, and they seem to
be quite critical and have the attitude that "they know better
how to test AV products".

I would really like those experts to tell me where I can find
an unbiased, professionally made, scientific and reliable test for
antivirus programs.

personally, i think the virus test centre at uni-hamburg
(http://agn-www.informatik.uni-hamburg.de/vtc/naveng.htm) does a really
good job of documenting their methodology (absolutely necessary for the
review of a scientific test)...
So far I haven't found one. I personally think Virus
Bulletin's 100% test is crap.

it's not a test, it's a byproduct of their *real* comparative... an
example can be found in
http://www.virusbtn.com/magazine/archives/pdf/2003/200308.pdf ... i'm
not quite as thrilled with the availability of documentation for the
methodology of these tests though - there may be more documentation
than what i've found, but it should be easier to find...
Nod32 always gets 100%, and so does Norton. And
from my own experience and what people have written here, Norton is
far from a perfect scanner.

there is no test that can show how well an anti-virus will deal with
new viruses in the future, nor is there any test that can show how
usable or efficient an anti-virus is, nor how well it deals with
non-virus issues... comparative reviews are of only limited utility
when judging an anti-virus....
 
Nick FitzGerald

I used different data ... from here:

http://www.cknow.com/vtutor/vtnumber.htm

And what precisely is Tom's source for his numbers?

Don't get me wrong, it's a good page, but certainly not a reliable source
for the kinds of numbers (and other information) you need to make the
guesstimates you go on to make below...
And I extrapolated the exponential increase using 50% per year from
the year 2000. So I got:

Hmmmmm -- "exponential" maybe, but 50% per annum is way too high.
2000 50,000
2001 75,000
2002 112,500
2003 168,750
2004 253,125

An exponential increase is not only consistent with past history but
also the number of people (both users and vxers) involved. And who
knows? Maybe the number of vxers is growing at a larger exponential
rate than that of PCs and users.

Wrong.

For one, Tom's numbers leading up to that 2000 number of 50,000 are not
well bounded. Also, you appear to have failed to allow for the "some
schmuck wasted great gobs of his time generating approximately 15,000
trivial DOS viruses with a kit" factor a few years back. Some scanner
developers did not add 15,000 to their detection counts as they already
detected all (or almost all) of these "new" viruses because they had
good generic detection of that kit's output, but eventually all (?)
developers have added that 15,000 to their count to keep it (roughly)
in line with all the other developers. This happened over the course
of two or three calendar years and as Tom's source is unclear, it is
equally unclear whether his 2000 figure is exaggerated by this factor or
not. Of course, extrapolating from data that does or does not contain
such a massively distorting one-off "oddball" event is bound to be
fraught with problems (even if you otherwise get the right curve and
growth factor...).

I'm just talking about the total number of viruses ... and I wouldn't
be surprised at 250,000 this year. But I dunno.

I would be -- that number seems to me, as it clearly does to Kurt too,
to be out by a factor of approximately 2.5 to 3.

It seems that when Andreas says "X viruses" he means "X samples of some
unknown number of viruses". There is a _very_ important distinction
here that the editors of Virus Bulletin previous to me took great pains
to point out and ensure that the VB tests did not fudge...

Of course, properly classifying which proven viral files are samples of
the same, and which samples of different viruses, is a major research
undertaking and for very many viruses it will be a much more time-
consuming effort than actually proving that a sample is viral in some
real-world environment.
 
Art

And what precisely is Tom's source for his numbers?

I have no idea, but the exponential growth info and the number 50,000
by the year 2000 didn't seem out of whack to me.
Don't get me wrong, it's a good page, but certainly not a reliable source
for the kinds of numbers (and other information) you need to make the
guesstimates you go on to make below...


Hmmmmm -- "exponential" maybe, but 50% per annum is way too high.

Not based on the historical info he gave, where in some years there was
a 100% or more increase. I picked 50% as a kind of rough mean of the
historical info.
Wrong.

For one, Tom's numbers leading up to that 2000 number of 50,000 are not
well bounded. Also, you appear to have failed to allow for the "some
schmuck wasted great gobs of his time generating approximately 15,000
trivial DOS viruses with a kit" factor a few years back.

Wrong! :) That's precisely what I did and do think and brought up with
Kurt (not in so many words) when I asked if a five byte difference
counts as a different virus.
Some scanner
developers did not add 15,000 to their detection counts as they already
detected all (or almost all) of these "new" viruses because they had
good generic detection of that kit's output, but eventually all (?)
developers have added that 15,000 to their count to keep it (roughly)
in line with all the other developers. This happened over the course
of two or three calendar years and as Tom's source is unclear, it is
equally unclear whether his 2000 figure is exaggerated by this factor or
not. Of course, extrapolating from data that does or does not contain
such a massively distorting one-off "oddball" event is bound to be
fraught with problems (even if you otherwise get the right curve and
growth factor...).



I would be -- that number seems to me, as it clearly does to Kurt too,
to be out by a factor of approximately 2.5 to 3.

It seems that when Andreas says "X viruses" he means "X samples of some
unknown number of viruses". There is a _very_ important distinction
here that the editors of Virus Bulletin previous to me took great pains
to point out and ensure that the VB tests did not fudge...

I know that, Nick. It obviously all depends on exactly what you're
counting.
Of course, properly classifying which proven viral files are samples of
the same, and which samples of different viruses, is a major research
undertaking and for very many viruses it will be a much more time-
consuming effort than actually proving that a sample is viral in some
real-world environment.

Hey, I realized much of this years ago. And if it was far from a one
man effort years ago, it's far worse now.


Art
http://www.epix.net/~artnpeg
 
Jari Lehtonen

personally, i think the virus test centre at uni-hamburg
(http://agn-www.informatik.uni-hamburg.de/vtc/naveng.htm) does a really
good job of documenting their methodology (absolutely necessary for the
review of a scientific test)...
But it does not help much if your newest test is soon one year old. The
virus (and antivirus) world is quite different now than it was in April
2003.

So we come to the conclusion that AV products are quite impossible to
test or compare?

Jari
 
kurt wismer

Jari said:
But it does not help much if your newest test is soon one year old. The
virus (and antivirus) world is quite different now than it was in April
2003.

i was under the impression you wanted examples of good tests so as to
learn what they looked like and therefore be better able to judge the
quality of arbitrary tests you came across... the uni-hamburg tests
satisfy that criterion in at least one important way...

you did not ask for a *current* test... if you had i would have been
much more suspicious of what you ultimately hoped to do with it... the
main benefit of a current test over an older one is that you would be
able to compare the data points - but you can't really judge the
quality of a test by looking at the data points, you have to look at
how the test was performed...
So we come to the conclusion that Av products are quite impossible to
test or compare?

i fail to see how you've jumped to that conclusion...
 
Art

i was under the impression you wanted examples of good tests so as to
learn what they looked like and therefore be better able to judge the
quality of arbitrary tests you came across... the uni-hamburg tests
satisfy that criterion in at least one important way...

you did not ask for a *current* test... if you had i would have been
much more suspicious of what you ultimately hoped to do with it... the
main benefit of a current test over an older one is that you would be
able to compare the data points - but you can't really judge the
quality of a test by looking at the data points, you have to look at
how the test was performed...

But you're not addressing Jari's question. Where does one look for
quality _and_ recent detection tests of a broad (zoo, Trojans, etc.)
nature? Or don't you consider "recent" (the last 3 months or so) to be
important?

It's my impression that some scanners are changing rather rapidly.
Some are now adding Trojan detection, for example, where in the past
they ignored the category. Some may be moving away from broad
detection and focusing on ITW. Some may be dropping detection of old
and outdated (DOS) malware.

Lacking quality and recent broad based tests, how can we determine the
current trends?

Actually, the so called "unscientific" tests do at least indicate such
trends. And in this way they do have value. Also, imagine what would
happen in the improbable scenario that product X turned honest and
quit alerting on "crud". That would certainly show up in the
"unscientific" tests :)


Art
http://www.epix.net/~artnpeg
 
kurt wismer

But you're not addressing Jari's question. Where does one look for
quality _and_ recent detection tests of a broad (zoo, Trojans, etc.)
nature? Or don't you consider "recent" (the last 3 months or so) to be
important?

i consider it important, but i accept that there can't always be recent
data available...
It's my impression that some scanners are changing rather rapidly.
Some are now adding Trojan detection, for example, where in the past
they ignored the category. Some may be moving away from broad
detection and focusing on ITW. Some may be dropping detection of old
and outdated (DOS) malware.

Lacking quality and recent broad based tests, how can we determine the
current trends?

wait until another good quality test is performed...
Actually, the so called "unscientific" tests do at least indicate such
trends.

no they don't...
And in this way they do have value. Also, imagine what would
happen in the improbable scenario that product X turned honest and
quit alerting on "crud". That would certainly show up in the
"unscientific" tests :)

you clearly have the mistaken impression that 'unscientific' tests are
still well defined enough to derive some valid conclusions from them...

let's perform a thought experiment, shall we? let's consider a test that
has 3 samples... the first sample is virus A and the second 2 samples
are virus B... the test, being unscientific, counts all 3 as separate
viruses.... scanner X misses virus A, scanner Y misses virus B - the
results *should* be 50% for both but because of the improper
methodology it turns out to be 66% for scanner X and 33% for scanner Y...

this illustrates just one of the reasons why unscientific tests cannot
be trusted on *any* level... there is no way to be sure any
interpretation of the results is correct because of the lack of proper
methodology... if the conclusions one reaches happen to be accurate, it
is purely coincidental...
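
(The skew is easy to reproduce; a minimal Python sketch of exactly the
three-sample setup described above, nothing more:)

# Three samples, but only two distinct viruses: the second and third sample
# are both virus B. A per-sample percentage no longer reflects the per-virus
# detection rate.
samples = ["A", "B", "B"]

def per_sample_rate(missed_virus):
    detected = [s for s in samples if s != missed_virus]
    return len(detected) / len(samples)

print(per_sample_rate("A"))   # scanner X misses virus A -> 2/3, reported as ~66%
print(per_sample_rate("B"))   # scanner Y misses virus B -> 1/3, reported as ~33%
# Counted per distinct virus, each scanner detects 1 of 2 viruses -> 50% each.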
 
Art

i consider it important, but i accept that there can't always be recent
data available...


wait until another good quality test is performed...

That's your opinion. It's not mine.
no they don't...


you clearly have the mistaken impression that 'unscientific' tests are
still well defined enough to derive some valid conclusions from them...

Mistaken impression???
lets perform a thought experiment, shall we? lets consider a test that
has 3 samples... the first sample is virus A and the second 2 samples
are virus B... the test, being unscientific, counts all 3 as separate
viruses.... scanner X misses virus A, scanner Y misses virus B - the
results *should* be 50% for both but because of the improper
methodology it turns out to be 66% for scanner X and 33% for scanner Y...

You're off on a tangent that I wasn't even talking about.
this illustrates just one of the reasons why unscientific tests cannot
be trusted on *any* level... there is no way to be sure any
interpretation of the results is correct because of the lack of proper
methodology... if the conclusions one reaches happen to be accurate, it
is purely coincidental...

Bull. If I have a large test bed which includes many Trojans, and
product X failed to alert on 95% of them, and a year later product X
alerts on 70% of them, I'm justified in drawing the conclusion that
product X has been addressing their lack of Trojan detection, no?
It doesn't even matter for this purpose that 10% of my samples aren't
viable. I'm just looking for a major _change_ in the detection
characteristics of a product.

Similarly, if product Y alerted on 95% of the old DOS viruses in my
collection last year and today it alerts on 10% of them, I'm justified
in concluding that a major change in direction has been made by the
producers of product Y.

And so on. These are the kinds of things an unscientific test bed can
be useful for. There are others. I can use my unscientific test bed to
evaluate scanner unpacking capabilities (as a matter of convenience)
since vxer collections are full of unusual packers and the malware is
sometimes multiply packed and packed with more than one packer to
confuse scanners. It soon becomes clear to me which scanner(s) I want
to use for on-demand scanning (It was KAV DOS to anyone that cares :))


Art
http://www.epix.net/~artnpeg
 
