AV products tested vs 50K virii


harry wong

Pretty interested to hear any comments. Although this link is from the May
2003 test,

http://www.virus.gr/english/fullxml/default.asp?id=59&mnu=59

it seems that the results from the most recent test will be along similar
lines:

www.livepublishing.co.uk/content/page_325.shtml

This second link is the recent press release from the publisher that
sponsored the tests.

I personally found the results rather disconcerting, as I use Symantec
Corporate (90%). I also have crash problems with Kaspersky (ranked 1), and
absolutely hate the way F-secure gets its updates.
 

harry wong

Thanks for the declension lesson Dave!

"Virii" is the term used by nerds for such collections. However, in the future
I will not refer to my two chow-chow pups as my "dogii".

regards

harry
 

Nick FitzGerald

harry wong said:
Pretty interested to hear any comments. Although this link is from the May
2003 test,

http://www.virus.gr/english/fullxml/default.asp?id=59&mnu=59

Did you read how he selected "samples" for inclusion in the test?

By taking all files from some larger set of rubbish and throwing out
all those that none of his pre-selected "top four" scanners detected.

There are _many_ problems with such bullshit tests, but the two
biggest are the following...

Starting with a sufficiently shitty pile of original "suspect files",
the "pre-selection" scanners _should_ be the top four scoring (so
long as they have roughly equal detection rates among "only real
malware" sample sets). In fact, this is _EXACTLY_ what the test
results show!

"No they don't," I hear some moron object. They say:

1. F-Secure version 5.40 - 99.67%

2. Kaspersky version 4.0.5.37 - 99.55%

3. e-Scan Pro version 2.5.181.5 - 97.66%

4. McAfee version 7.00.5000 - 97.14%

5. RAV version 8.6.104 - 95.18%

6. F-Prot version 3.13 - 92.92%

True, but to accept that at face value is very disingenuous if
your concern (and expertise!) is impeccable product testing (as
mine is). To make sense of those results and my claim that they
in fact prove the presence of precisely the kind of testing bias
logical analysis suggests the employed testing methodology should
have, you have to "correct" that list for a couple of product
quirks. Note that eScan is actually an OEM'ed version of KAV.
Further note that F-Secure is a dual-engined product _combining_
both the F-PROT and KAV detection engines. Therefore, as neither
of these products is really separate or different from the four
pre-selection scanners, we will eliminate their results from the
list to get a "top four" just as the logical analysis suggested:

1. Kaspersky version 4.0.5.37 - 99.55%

2. McAfee version 7.00.5000 - 97.14%

3. RAV version 8.6.104 - 95.18%

4. F-Prot version 3.13 - 92.92%

So, the worst of the expected detection biases is proven in the
results.
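
For anyone who doubts the logic, here is a crude, throwaway Python simulation
-- all the detection rates and sample counts below are invented, and the
scanners' verdicts are (unrealistically) assumed independent -- of what
happens when you pre-filter a crud-heavy pile with four scanners and then
score everybody on what survives:

import random
random.seed(1)

# Hypothetical per-scanner rates: (real-malware detection rate, crud detection rate)
scanners = {
    "A (pre-select)": (0.98, 0.80),
    "B (pre-select)": (0.97, 0.70),
    "C (pre-select)": (0.96, 0.60),
    "D (pre-select)": (0.95, 0.50),
    "E (good, low crud)": (0.99, 0.05),
    "F (mediocre)": (0.90, 0.40),
}
pre_select = list(scanners)[:4]   # the four scanners used to filter the pile

def make_file(kind):
    # Decide once, per file, which scanners would flag it (verdicts assumed independent).
    return {name: random.random() < (real if kind == "real" else crud)
            for name, (real, crud) in scanners.items()}

# Original "suspect file" pile: 30,000 real samples plus 20,000 crud files (invented split).
pile = ([make_file("real") for _ in range(30000)]
        + [make_file("crud") for _ in range(20000)])

# The pre-selection step: keep only files that at least one of A-D flags.
test_set = [f for f in pile if any(f[s] for s in pre_select)]

print(f"{len(test_set)} of {len(pile)} files survive pre-selection")
for name in scanners:
    rate = 100 * sum(f[name] for f in test_set) / len(test_set)
    print(f"{name:22s} {rate:6.2f}%")

Run something like that and the four pre-selection scanners float to the top
while the scanner that deliberately ignores crud sinks, regardless of how
well it detects real malware.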

What is the second most likely bias in such a test?

Well, as the test producer clearly has no ability to determine
for himself what is a valid sample and what not, it also seems
highly likely that a scanner with what I have, in the past, referred
to as a "high crud-factor" should score above its real ability
(and, conversely, that some "OK" scanners will score low because
although they do not have stellar virus detection rates, several
of these pride themselves in having _very_ low crud factors).

At the risk of being sued I won't name the scanners I suspect
have unduly benefited in this test for being high crud
factor detectors (though I will note that they are mostly from
Central and Eastern Europe; the reverse does _not_ hold).
Further, it seems highly likely that KAV's rating highest (of the
single-engine scanners) is due to both a very high real detection
rate _AND_ its notoriously high crud detection rate. (Some
products are strongly suspected of deliberately chasing crud
detection for precisely the reason that ignorant testers doing
stupid tests such as this will greatly overrate these products
relative to products that actually have better detection but
that also actively avoid the crud factor. These are easy to
pick by profiling the results of high crud content tests against
more carefully run tests...)

Also, given that F-PROT was about 4% behind McAfee and 6% behind
KAV, I'd say it's a safe bet that _at least_ 6% of the "samples"
present in this "test" would not have been present in any
properly-run test.

www.livepublishing.co.uk/content/page_325.shtml

This second link is the recent press release from the publisher that
sponsored the tests.

And it shows that Live Publishing in general, and even more
disconcertingly, the editor of the group's "PC Utilities"
magazine in particular, is seriously devoid of clue when it comes to AV
(and especially AV testing) issues...

I found the following comment from Gavin Burrell, in that
release, especially telling:

Gavin concluded: “It’s important to note that none of the
antivirus programs tested proved 100% effective, and none
even managed 100% effectiveness in a single category. This
is partly because what some programs class as a virus,
others may class as harmless code.”

I couldn't fabricate more "clueless about AV" copy were I ever
required to!

I personally found the results rather disconcerting, as I use Symantec
Corporate (90%). I also have crash problems with Kaspersky (ranked 1), and
absolutely hate the way F-secure gets its updates.

I found the results very disconcerting, but not for the reasons
you did...
 

Art

Did you read how he selected "samples" for inclusion in the test?

By taking all files from some larger set of rubbish and throwing out
all those that none of his pre-selected "top four" scanners detected.

Would the results have been any different if he had included any and
all files that any scanner alerted on? I venture to say the results
would have been virtually identical.
There are _many_ problems with such bullshit tests, but the two
biggest are the following...

Starting with a sufficiently shitty pile of original "suspect files",
the "pre-selection" scanners _should_ be the top four scoring (so
long as they have roughly equal detection rates among "only real
malware" sample sets). In fact, this is _EXACTLY_ what the test
results show!

"No they don't," I hear some moron object. They say:

1. F-Secure version 5.40 - 99.67%

2. Kaspersky version 4.0.5.37 - 99.55%

3. e-Scan Pro version 2.5.181.5 - 97.66%

4. McAfee version 7.00.5000 - 97.14%

5. RAV version 8.6.104 - 95.18%

6. F-Prot version 3.13 - 92.92%

True, but to accept that at face value is very disingenuous if
your concern (and expertise!) is impeccable product testing (as
mine is). To make sense of those results and my claim that they
in fact prove the presence of precisely the kind of testing bias
logical analysis suggests the employed testing methodology should
have, you have to "correct" that list for a couple of product
quirks. Note that eScan is actually an OEM'ed version of KAV.
Further note that F-Secure is a dual-engined product _combining_
both the F-PROT and KAV detection engines. Therefore, as neither
of these products is really separate or different from the four
pre-selection scanners, we will eliminate their results from the
list to get a "top four" just as the logical analysis suggested:

1. Kaspersky version 4.0.5.37 - 99.55%

2. McAfee version 7.00.5000 - 97.14%

3. RAV version 8.6.104 - 95.18%

4. F-Prot version 3.13 - 92.92%

So, the worst of the expected detection biases is proven in the
results.

No, that's far from proven. It's simply expected.
What is the second most likely bias in such a test?

Well, as the test producer clearly has no ability to determine
for himself what is a valid sample and what not, it also seems
highly likely that a scanner with what I have, in the past, referred
to as a "high crud-factor" should score above its real ability
(and, conversely, that some "OK" scanners will score low because
although they do not have stellar virus detection rates, several
of these pride themselves in having _very_ low crud factors).

But do they claim that any malware they alert on is definitely viable?
I've never heard of any such claim. And until vendors can honestly
make such claims, demanding the use of only viable samples in tests is
actually quite irrational.
At the risk of being sued I won't name the scanners I suspect
have unduly benefited in this test for being high crud
factor detectors (though I will note that they are mostly from
Central and Eastern Europe; the reverse does _not_ hold).

Whacha got against super crud detectors? :) Personally, I much prefer
them. I like being alerted on all crud. After all, malware is just
"crud" . Anything typical users don't care to have or want to have on
their hard drives is "crud". Whether the crud is a result of botched
disinfections or whatever kind of crud you have in mind, I'd prefer to
have my scanner alert. I couldn't care less whether or not the file in
question is viable. I want to know about it.
Further, it seems highly likely that KAV's rating highest (of the
single-engine scanners) is due to both a very high real detection
rate _AND_ its notoriously high crud detection rate.

More power to it IMO.
(Some
products are strongly suspected of deliberately chasing crud
detection for precisely the reason that ignorant testers doing
stupid tests such as this will greatly overrate these products
relative to products that actually have better detection but
that also actively avoid the crud factor. These are easy to
pick by profiling the results of high crud content tests against
more carefully run tests...)

Or they may simply have the same attitude I have.
Also, given that F-PROT was about 4% behind McAfee and 6% behind
KAV, I'd say it's a safe bet that _at least_ 6% of the "samples"
present in this "test" would not have been present in any
properly-run test.

Are you suggesting that F-Prot detects as much as KAV when large scale
virus zoo and Trojan tests of a more "scientific" nature are
conducted? Please show me those test results. I've never seen them.


Art
http://www.epix.net/~artnpeg
 

Frederic Bonroy

(e-mail address removed) wrote:
Would the results have been any different if he had included any and
all files that any scanner alerted on? I venture to say the results
would have been virtually identical.

That wouldn't have been much better. You don't select samples by using
virus scanners (since the goal is to test virus scanners), but by
analysing the files and determining whether or not they can act as
viable samples.

When I was in school I once wrote a program that used Pi to calculate
Pi. :)
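
(In spirit, something like this made-up three-line Python rendition of that
exercise -- the "measurement" only works because the answer was fed in first:)

import math
radius = 1.0
circumference = 2 * math.pi * radius   # uses pi ...
print(circumference / (2 * radius))    # ... to "discover" pi: 3.141592653589793
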
No, that's far from proven. It's simply expected.

Why is it not proven? What further evidence do you need? Don't you think
that this list, if it's not a proof, is at least a very very strong
indication that the testing method is flawed?
But do they claim that any malware they alert on is definitely viable?
I've never heard of any such claim. And until vendors can honestly
make such claims, demanding the use of only viable samples in tests is
actually quite irrational.

That depends on the kind of crud we are talking about, no? :) Crud can
be "working crud" (in that it's able to cause damage of some sort) or it
can be "crud crud" that doesn't work at all. Why make any efforts to
detect the latter?
Whacha got against super crud detectors? :) Personally, I much prefer
them. I like being alerted on all crud.

I certainly would like AV scanners to detect working crud. But another
issue is naming: wouldn't it be great if virus scanners could be
modified to display *meaningful* messages and reasonably precise
designations of the malicious program they detect, so as to allow
differentiation between viable malware and crud?
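
For what it's worth, if scanners consistently attached qualifiers of the sort
mentioned elsewhere in this thread ("intended", "corrupted", "inactive"),
splitting a scan log into viable-malware and crud buckets would become a
trivial exercise. A throwaway Python sketch, with invented log lines:

# Qualifiers assumed to mark non-viable crud; the list is illustrative only.
CRUD_MARKERS = ("intended", "corrupted", "inactive", "damaged")

def classify(report_line):
    # Split a "filename<TAB>verdict" log line into (filename, "crud" or "malware").
    filename, verdict = report_line.split("\t", 1)
    kind = "crud" if any(m in verdict.lower() for m in CRUD_MARKERS) else "malware"
    return filename, kind

sample_log = [
    "sample001.exe\tW32/Klez.H@mm",
    "sample002.exe\tcorrupted infection of W32/FunLove",
    "sample003.com\tintended virus (non-working)",
]
for line in sample_log:
    print(classify(line))
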
I couldn't care less whether or not the file in
question is viable. I want to know about it.

So do I but doesn't it make sense to concentrate first and foremost on
the malware that actually works?
 

Art

(e-mail address removed) wrote:


That wouldn't have been much better. You don't select samples by using
virus scanners (since the goal is to test virus scanners), but by
analysing the files and determining whether or not they can act as
viable samples.

I know how it's allegedly done Frederic :)
When I was in school I once wrote a program that used Pi to calculate
Pi. :)

Not a good analogy. You're missing my point.
Why is it not proven? What further evidence do you need? Don't you think
that this list, if it's not a proof, is at least a very very strong
indication that the testing method is flawed?

Indications are not proof. I was picking nits.
That depends on the kind of crud we are talking about, no? :) Crud can
be "working crud" (in that it's able to cause damage of some sort) or it
can be "crud crud" that doesn't work at all. Why make any efforts to
detect the latter?

Why not? I don't care to have that crud on my hard drive. Do you? So
why not alert on it?
I certainly would like AV scanners to detect working crud. But another
issue is naming: wouldn't it be great if virus scanners could be
modified to display *meaningful* messages and reasonably precise
designations of the malicious program they detect, so as to allow
differentiation between viable malware and crud?

You're preaching to the choir there :)
So do I but doesn't it make sense to concentrate first and foremost on
the malware that actually works?

That's what "quality" testing agencies have claimed to do for ages. So
I don't know what you mean by "first and foremost" since it's been
done many times over.

What's missing are current published test results of a higher quality
than the frowned upon test in question. But I doubt if the quality
tests would show significant differences in the results. And that's my
point.


Art
http://www.epix.net/~artnpeg
 

Frederic Bonroy

(e-mail address removed) wrote:
I know how it's allegedly done Frederic :)

I know but I was confused by what you said.
Why not? I don't care to have that crud on my hard drive. Do you? So
why not alert on it?

There are many things on my hard drive that I don't want to have. :)

You would have to define precisely what crud is. You need to consider
that crud poses less of a problem than actually working malware. And
then you need to consider that working malware is already so abundant
that AV companies have a hard time trying to keep up.
What's missing are current published test results of a higher quality
than the frowned upon test in question. But I doubt if the quality
tests would show significant differences in the results.

Maybe not. The fact is, it could be a coincidence. You can't expect
meaningful results if your methodology is flawed. You can obtain them if
you're lucky, but you can't expect them.
 

koorb

Pretty interested to hear any comments. Although this link is from the May
2003 test,

http://www.virus.gr/english/fullxml/default.asp?id=59&mnu=59

it seems that the results from the most recent test will be along similar
lines:

www.livepublishing.co.uk/content/page_325.shtml

This second link is the recent press release from the publisher that
sponsored the tests.

I personally found the results rather disconcerting, as I use Symantec
Corporate (90%). I also have crash problems with Kaspersky (ranked 1), and
absolutely hate the way F-secure gets its updates.

This test does not cover only wild viruses. So even if an AV has a high
percentage, that does not mean it protects against 100% of the current
wild virii.
 

Art

(e-mail address removed) wrote:


I know but I was confused by what you said.

Sometimes I get confused by the responses :)
There are many things on my hard drive that I don't want to have. :)

You would have to define precisely what crud is. You need to consider
that crud poses less of a problem than actually working malware. And
then you need to consider that working malware is already so abundant
that AV companies have a hard time trying to keep up.

Nick has sort of defined his infamous term "crud" through the years. I
know he includes the remnants of botched disinfections, for just one
example. You and I have discussed this before. Remember when I pointed
out how F-Prot, with the /collect switch (for virus collection tests)
turned on, will alert on some "crud" files I sent to you? Some other crud in
typical vxer collections includes infected boot sector images, object
files, and even .ASM text files. Of course I've never seen an av alert
on a text file regardless of the file extension. I doubt if any av
product is that stupid.

But I view all malware as crud whether the crap is viable or not. I'm
not discriminating :)

As far as av vendors keeping up with crud as well as viable samples goes,
it seems that KAV and McAfee do a pretty good job of it. Others like
F-Prot also "know about" many vxer crud files but won't alert unless
asked to. I do prefer Frisk's approach.
Maybe not. The fact is, it could be a coincidence.

Obviously it _could_ be. I made no claim that it's _necessarily_ that
way. Merely that such "coincidences" are not at all unusual.
You can't expect
meaningful results if your methodology is flawed. You can obtain them if
you're lucky, but you can't expect them.

Obviously. I'm not defending flawed methodology, and you of all people
should know that.


Art
http://www.epix.net/~artnpeg
 

Nick FitzGerald

Actually, it now seems this method differs between his various tests. Some
of his tests have required at least two of his varying number of "preferred"
(that is, pre-selection) scanners to report _something_ for the file.

However, it seems "corrupt executable" and "intended virus" are more than
sufficient "somethings" to report. (Given no other information than that
F-PROT reported either of those results and some other scanner found what
it claimed was real malware, I know what I would do with such files --
unless there was enough time to analyse them all (F-PROT does _very_
occasionally get such things wrong, though incredibly seldom with its
"intended" ascriptions) I'd bin them...)

The test set also apparently includes ASM source listings and other such
drivel any marginally competent tester would automatically ditch as
irrelevant to a meaningful malware detection test.
Would the results have been any different if he had included any and
all files that any scanner alerted on? I venture to say the results
would have been virtually identical.

Yep. The poor but very crud-loaded scanners (I referred to these
elsewhere in my initial response) would have scored even more
disproportionately highly than they "deserve". Some products are
suspected of basically being able to detect virtually any file that has
ever been (publicly) posted on a VX site and given the source of VirusP's
initial test set (his own VX collection) the starting point was bound to
be replete with utter crap that, even by Kaspersky Labs' standards has
nothing to do with malware detection...
No, that's far from proven. It's simply expected.

Nit pick.

Had I written something like "So, there is compelling evidence that the
worst of the expected detection biases is present in the results" my
meaning would have been the same but you would not have been able to
disagree...
But do they claim that any malware they alert on is definitely viable?

Depends on the product. For example, F-PROT goes to great lengths to
make its detection reports as clear as possible, _particularly_ when
it is reporting non-viable crud that its developers have basically
only included under protest (usually to protect themselves against
such stupid, ill-informed and mal-intended tests and publications such
as this...)
I've never heard of any such claim. And until vendors can honestly
make such claims, demanding the use of only viable samples in tests is
actually quite irrational.

But the flip side is equally true. The test is presented as a test
of "malware detection" yet the test set is replete with unviable crap,
completely non-malicious stuff and huge scads of widely accepted
"grey area" stuff. If the tester and publisher were to be half-way
honest, common decency would, at the least, require them to separate
the non-malware and grey area stuff into separate test sets. Simply
lumping it all into a "malware test" and making snide comments about
how poorly some products do is, at best, grossly disingenuous.

And that reminds me, I meant to mention this earlier... IIRC, Live
Publishing (or some earlier incarnation of it) was the publisher
behind some of the virus distribution via cover CD instances. Their
position at the time was "we test with multiple up-to-date virus
scanners" (which, again IIRC, was provably false in at least one
case) and clearly wrong-headed anyway. You don't think that, as a
publishing group...
Whacha got against super crud detectors? :) Personally, I much prefer
them. I like being alerted on all crud. After all, malware is just
"crud" . Anything typical users don't care to have or want to have on
their hard drives is "crud". Whether the crud is a result of botched
disinfections or whatever kind of crud you have in mind, I'd prefer to
have my scanner alert. I couldn't care less whether or not the file in
question is viable. I want to know about it.

I understand this oddly idiosyncratic view of yours, but it largely
misses the point. Much of the crud we are talking about is unable to get
on your machine _unless you specifically and knowingly seek it out_.
Thus there is no point in detecting it _unless you want to boost the
egos of the VX community_ where the driving ethos is a minor variation
on the old "mine's bigger than yours" dick-waving ceremony...

In fact, had no scanner developer ever insisted on detecting any of
this crud, much of it would not "exist" today, because not "detected"
by AV means not collected by VX morons. (The "partial" infections,
munged disinfections, etc. is another issue -- that sort of crud would
still be with us...)
More power to it IMO.

I acknowledge that you want to know about all that weird stuff. Fine
-- you're a weird curmudgeon like that anyway -- but what is not fine
is Virus-P and the publishers presenting his results as a malware
detection test.
Or they may simply have the same attitude I have.

No -- they deliberately ensure they detect all files available from
(public) VX collections. Such products tend to be at the low end
of "real" detection rates too, but this tactic (which is very cheap
to implement) gives them a huge boost in shoddy tests, typically
leap-frogging them square into the middle of the "not top but
respectable looking" bracket and (to those who know better) oddly
putting them ahead of many products that have better "real"
detection rates.
Are you suggesting that F-Prot detects as much as KAV when large scale
virus zoo and Trojan tests of a more "scientific" nature are
conducted? Please show me those test results. I've never seen them.

No, not quite.

What I was suggesting was that there would be little crud that
F-PROT detects and KAV doesn't, _and_ it was unlikely such a poor
collection would contain many real malware samples that KAV detects
and F-PROT would miss. If you find that hard to believe, recall
that these are typically the two preferred scanners of most VXers
(at least historically -- not sure if this is still the case). As
VXers are hugely "protective" of their preferred scanner (because
they believe their own hype that the larger your collection the more
expert you are) F-PROT and KAV tend to be sent stuff from these
collections that the other misses. F-PROT should (based on its
history) have been less likely to add detection of outright rubbish
thus received, whereas it would quite likely be a matter of pride
with AVP/KAV to not allow a VXer to say "F-PROT detects this but KAV
doesn't"... Thus, a detection difference between F-PROT and KAV on
samples not well filtered for crud and (primarily) sourced from VX
will almost all be crud.

Of course, F-PROT does detect all manner of non-replicating and
otherwise non-malware stuff too. However -- as I said above -- its
developers generally go to great pains to ensure that such stuff is
reported as _not_ viral or _not_ malicious ("intended", "corrupted",
"inactive", etc). The precise level of this stuff in the collection
is unknown unless we can access full scanning logs for the products.

Thus, it should be clear that it is a safe bet that the actual level
of crud in such a test will be quite a bit higher than the detection
difference between F-PROT and KAV. In fact, we should expect the
crud level of such a test set to closely approach the difference
between these scanners' detection rates _added to_ F-PROT's "crud
detection rate" (that is, the rate of stuff F-PROT reported as
non-malicious, inactive, etc).
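
Put as crude arithmetic (using the published overall scores; F-PROT's
crud-report fraction is only a placeholder below, since the full scan logs
are not public):

kav_rate = 0.9955     # KAV's published overall score
fprot_rate = 0.9292   # F-PROT's published overall score
fprot_crud_reports = None  # fraction F-PROT reported as "intended"/"corrupted"/etc.; not published

lower_bound = kav_rate - fprot_rate
print(f"crud level of the test set >= {lower_bound:.2%}")   # roughly 6.6%

if fprot_crud_reports is not None:
    print(f"expected crud level ~= {lower_bound + fprot_crud_reports:.2%}")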
 

kurt wismer

That wouldn't have been much better. You don't select samples by using
virus scanners (since the goal is to test virus scanners), but by
analysing the files and determining whether or not they can act as
viable samples.

I know how it's allegedly done Frederic :)
When I was in school I once wrote a program that used Pi to calculate
Pi. :)

Not a good analogy. You're missing my point.

actually, it's an excellent example of why sample selection bias is a
bad thing... it's a self-selected sample and it's bad experimental
design...

[snip]
Why not? I don't care to have that crud on my hard drive. Do you? So
why not alert on it?

what a wonderful argument in favour of false positives...
 

Frederic Bonroy

(e-mail address removed) wrote:
Nick has sort of defined his infamous term "crud" through the years. I
know he includes the remnants of botched disinfections, for just one
example. You and I have discussed this before. Remember when I pointed
out how F-Prot, with the /collect switch (for virus collection tests)
turned on, will alert on some "crud" files I sent to you? Some other crud in
typical vxer collections includes infected boot sector images,

Why are boot sector images crud? Fine, they are not dangerous (or
functional, for that matter) in that state, but they can still be viable
viruses.
Of course I've never seen an av alert on a text file regardless of the file extension.
I doubt if any av product is that stupid.

Feed a text file with this string to NAV:

Eddie lives...somewhere in time!

I don't remember whether or not you have to give it an executable
extension. Probably yes.
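
(If you want to try it yourself, something along these lines -- the .com
extension is only a guess, and whether any given NAV version still alerts on
it is another question:)

# The string mentioned above; writing it to a plain file is harmless.
payload = "Eddie lives...somewhere in time!"

# The .com extension is a guess -- an executable extension may or may not be needed.
with open("eddie_test.com", "w") as handle:
    handle.write(payload)

# Then point NAV at eddie_test.com and see whether (and what) it reports.
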
 

Art

I know how it's allegedly done Frederic :)


Not a good analogy. You're missing my point.

actually, it's an excellent example of why sample selection bias is a
bad thing... it's a self-selected sample and it's bad experimental
design...

[snip]
Why not? I don't care to have that crud on my hard drive. Do you? So
why not alert on it?

what a wonderful argument in favour of false positives...

False positives are quite a different matter. I have tried to raise
the issue off and on through the years here concerning the question
"Does alerting on crud constitute a false positive?". Nobody ever
responded. So far as I can tell, the av professionals seem to take the
tack that alerting on crud does _not_ constitute a false positive. So
define your terms.


Art
http://www.epix.net/~artnpeg
 

Art

I understand this oddly idiosyncratic view of yours, but it largely
misses the point. Much of the crud we are talking is unable to get
on your machine _unless you specifcally and knowingly seek it out_.
Thus there is no point in detecting _unless you want to boost the
egos of the VX community_ where the driving ethos is a minor variation
on the old "mines bigger than yours" dick-waving ceremony...

In fact, had no scanner developer ever insisted on detecting any of
this crud, much of it would not "exist" today, because not "detected"
by AV means not collected by VX morons. (The "partial" infections,
munged disinfections, etc is another issue -- that sort of crud would
still be with us...)


I acknowledge that you want to know about all that weird stuff. Fine
-- you're a weird curmudgeon like that anyway --

LOL! I agree. And my wife and daughters wholeheartedly agree :)
but what is not fine
is Virus-P and the publishers presenting his results as a malware
detection test.

Also agreed. Thanks for the added info.


Art
http://www.epix.net/~artnpeg
 

Art

(e-mail address removed) wrote:


Why are boot sector images crud? Fine, they are not dangerous (or
functional, for that matter) in that state, but they can still be viable
viruses.

I tend to agree with Frisk in assigning them to the "crud" category,
which F-Prot does.
Feed a text file with this string to NAV:

Eddie lives...somewhere in time!

I don't remember whether or not you have to give it an executable
extension. Probably yes.

Interesting. It's been years since I took a look at any version of
NAV. Just the old NAVC for DOS long ago. What does NAV report?


Art
http://www.epix.net/~artnpeg
 

FromTheRafters

I tend to agree with Frisk in assigning them to the "crud" category,
which F-Prot does.

I was going to suggest this very thing.
Interesting. It's been years since I took a look at any version of
NAV. Just the old NAVC for DOS long ago. What does NAV report?

DA.1800.F

IIRC it is a detection of corruption caused by the payload of this virus.
 

virusp_at_

Hi everyone,

I guess I should explain some things over here as well, regarding the test I performed.
I once again admit that I still lack a lot of knowledge about viruses compared to professional av'ers.
Still, I will not accept people calling me stupid or moron or other
"specially chosen" nicknames, just because I do not know as much as they do.

Yes, my tests are not 100% flawless. But I also believe that most of the files I have in my collection
are not garbage, as some claim. If they were indeed garbage,
why does most respectable av software detect them as malware? Is this, too, my fault? Or is it that certain av companies
tend to add much more stuff than before, just to have a huge number of detectable viruses? And isn't
that a good way to increase sales? Or is that also my fault? And, when it comes down to it, how can
I trust ANY av software if I really cannot tell how many REAL viruses it detects?

Anyway, as some of you know, I have had quite bitter comments about this last test. So I decided to start
learning more about viruses, hoping that sometime soon I will
be able to maintain a really good collection, without any garbage files in it. What I have seen lately
is that most av'ers do not want to hear about us "vxers" and generally do
not want anything to do with us. I ask you now: if a guy like me wants and tries to learn all the stuff that
professional av'ers know, how can he accomplish it?

To sum up, I'd like to say that I performed the test using the best methodology I could, based on
my current knowledge. Yet that was not enough; I know it and I do not deny it.
I am really trying to catch up on the knowledge I lack, and I hope that in future tests my methodology will be better.
That is what I am aiming at anyway. Thanks for your support and comments, good and bad :)

Best regards
Antony Petrakis
a.k.a VirusP
 
