Browser Plug-in Warns Of Surfing Risks Before Clicking


Virus Guy

What I don't understand is why the maliciousness of HTML content can't
be analyzed (and blocked) in real time, after the content is
downloaded but before it is handed off to the browser to be rendered?

The method described below would entail additional bandwidth
load and latency in rendering search-page results as the individual
URLs are checked and rated (unless the database of known-bad URLs
is stored locally and updated periodically?)
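
The parenthetical idea above - a locally stored, periodically refreshed
database of known-bad URLs - could be sketched roughly like this (all
names are invented for illustration; this is not how any real product works):

```python
import time

class LocalBlocklist:
    """Keep the known-bad domain list on the user's machine and refresh
    it periodically, so rating a search result needs no per-click
    network round trip. fetch_list is a hypothetical callable standing
    in for whatever download mechanism a vendor would actually use."""

    def __init__(self, fetch_list, ttl_seconds=24 * 3600):
        self.fetch_list = fetch_list   # returns a set of bad domains
        self.ttl = ttl_seconds
        self.domains = set()
        self.fetched_at = 0.0          # forces a refresh on first use

    def is_bad(self, domain):
        if time.time() - self.fetched_at > self.ttl:
            self.domains = self.fetch_list()   # the periodic update
            self.fetched_at = time.time()
        return domain in self.domains
```

The latency cost then shows up only on the periodic refresh, not on
every search.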

The service described below seems to be tied into the search-page
results of the major search engines.

One wonders why the search engines don't perform their own
content analysis and throw up their own rating as part of displaying
search results - or go a step further and allow a user to set a
check-box to automatically filter out results from known-bad domains
or URLs (unless they fear liability issues - or media blow-back if
they erroneously ID a bad URL).

I guess it's only a matter of time until SiteAdvisor is bought by
Google and they tinker with incorporating it into their own search
engine.

---------------------

http://www.linuxpipeline.com/showArticle.jhtml?articleId=181500400

March 01, 2006
Browser Plug-in Warns Of Surfing Risks Before Clicking
By Gregg Keizer Courtesy of TechWeb News

A company founded by several MIT engineers launched free Internet
Explorer and Firefox plug-ins Wednesday that reveal dangerous Web
sites listed by popular search engines.

With the plug-ins installed, users see green, yellow, or red tags
beside hits in search results on Google, MSN, and Yahoo, said
Boston-based SiteAdvisor. The tags -- red represents sites that
heavily spam visitors, host spyware and adware, or hijack browser home
pages -- give users a heads-up before they click on a link.

"We believe consumers want to know, in plain English: 'If I download
this program, will it come with adware?' Or, 'if I sign up here, how
much and what kind of e-mail will I receive?'" said chief executive
Chris Dixon. "SiteAdvisor zeros in on the moment of decision, when
users are about to interact with a dangerous site. We can tell them:
'We've been here before, and here's what happened to us.'"

The company's ratings were compiled with the help of automated Web
spiders, which crawled the millions of sites that represent more than 95
percent of the Internet's total traffic. Nearly half a million
downloads were analyzed for spyware and other malicious code, and 1.3
million registrations were logged using unique e-mail addresses to track
spam from each site.

Users need a proactive approach to security, said Dixon, because of
the shift in attackers' strategies, from technical assaults such as
viruses and worms to for-profit attacks such as adware, spyware, spam,
and phishing.

Traditional security software "leaves a big hole in consumers' Web
safety armor because they don't know what's safe to click in the first
place," Dixon added. "We focus on the kinds of attacks that other
companies miss, so consumers can browse with confidence and stay safe
and in control online."

Although the plug-ins are free, SiteAdvisor plans to release more
powerful versions that will carry price tags. "In the future, we will
offer paid versions with additional premium features," the company
said.

The plug-ins can be downloaded from here.
http://www.siteadvisor.com/preview/index.html

For additional details on the inner workings of SiteAdvisor, check out
the recent review on InternetWeek.
http://internetweek.cmp.com/handson/181400665
 

kurt wismer

Virus said:
What I don't understand is why the maliciousness of HTML content can't
be analyzed (and blocked) in real time, after the content is
downloaded but before it is handed off to the browser to be rendered?

how do you define maliciousness programmatically? i'm sure the proxy
half of what you're talking about is possible, but the analysis part
can't be attacked any more intelligently than is currently done with
viruses...

plus, real-time analysis of that sort introduces latency which would be
annoying...

conventionally, net filters filter by domain rather than content, and
even that is prone to false alarms (as anyone at boingboing.net could
tell you)
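
kurt's point - that "defining maliciousness programmatically" collapses
into virus-style signature scanning - can be illustrated with a naive
check. The patterns below are illustrative placeholders, not real
signatures, and the false-alarm problem he mentions applies to them too:

```python
import re

# Placeholder "signatures" for demonstration only - real scanners ship
# thousands of these, and new obfuscation tricks evade them constantly.
SIGNATURES = [
    re.compile(rb"document\.write\(unescape\("),  # common obfuscation idiom
    re.compile(rb"window\.open\(.*\.hta"),        # HTML Application dropper
]

def looks_malicious(html_bytes):
    """Return True if any known-bad pattern appears in the page."""
    return any(sig.search(html_bytes) for sig in SIGNATURES)
```

Anything not matching a known pattern sails through, which is exactly
the weakness of the virus-scanning approach.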
The method described below would entail additional bandwidth
load and latency in rendering search-page results as the individual
URLs are checked and rated (unless the database of known-bad URLs
is stored locally and updated periodically?)

there *might* be a cache...
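
The cache kurt is guessing at could be as simple as memoizing a remote
rating lookup with a time-to-live; rate_url below is a hypothetical
stand-in for the network call:

```python
import time

def make_rating_cache(rate_url, ttl=3600):
    """Wrap a slow rate_url(url) network lookup in a TTL cache, so
    repeated hits for the same URL skip the round trip."""
    cache = {}   # url -> (timestamp, rating)

    def rated(url):
        hit = cache.get(url)
        if hit is not None and time.time() - hit[0] < ttl:
            return hit[1]                  # cache hit: no network traffic
        rating = rate_url(url)             # cache miss: ask the service
        cache[url] = (time.time(), rating)
        return rating

    return rated
```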
The service described below seems to be tied into the search-page
results of the major search engines.

that was my experience when i tried it out... it wasn't very useful to
me because of that - i'd rather see it mark up all links on all pages the
way it does search result pages...
One wonders why the search engines don't perform their own
content analysis and throw up their own rating as part of displaying
search results - or go a step further and allow a user to set a
check-box to automatically filter out results from known-bad domains
or URLs (unless they fear liability issues - or media blow-back if
they erroneously ID a bad URL).

i believe the word is censorship... combine censorship with false alarms
and see what a nasty mess you can make...
I guess it's only a matter of time until SiteAdvisor is bought by
Google and they tinker with incorporating it into their own search
engine.

maybe, maybe not...
 

Virus Guy

kurt said:
i believe the word is censorship...

So if Google comes across a URL that contains obvious or known browser
hijack tricks (or downright exploits) hidden among decoy (or legit)
content, then you would consider it censorship if Google didn't list
that URL among its results?

Doesn't the MVPS hosts file, or Ad-Aware, or Spybot immunization
perform more or less the same sort of blocking? Do you also call that
censorship?
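
For reference, the hosts-file blocking mentioned above works by
resolving known-bad hostnames to an unroutable local address, so the
browser never contacts them at all (the domains below are placeholders):

```
# Hosts-file entries (e.g. C:\Windows\System32\drivers\etc\hosts)
0.0.0.0  adserver.bad.example
0.0.0.0  spyware-download.bad.example
```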

What if there was a "don't include URLs with dangerous content"
check-box on Google's search page? Would you object to that?

At the very least, Google could throw up an icon beside each URL
result (green, yellow, red) to indicate what it thinks of the URL.
Then the user can decide whether or not to follow any given URL. If
Google determined the threat level at the time it spidered the URL,
then conveying that information as part of the search result would
entail essentially no extra bandwidth or latency, and would require NO
extra software on users' PCs.
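
The suggestion amounts to storing one extra field per index entry,
computed once at spidering time. A toy sketch (the index, titles, and
ratings below are all invented):

```python
# Toy search index: the "threat" field is computed once, when the page
# is spidered, so serving it adds nothing to per-query cost.
INDEX = {
    "http://example.com/":     {"title": "Example Site", "threat": "green"},
    "http://bad.example.net/": {"title": "Free Screensavers!", "threat": "red"},
}

def render_result(url):
    entry = INDEX[url]
    return "[%s] %s - %s" % (entry["threat"].upper(), entry["title"], url)
```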
 

kurt wismer

Virus said:
So if Google comes across a URL that contains obvious or known browser
hijack tricks (or downright exploits) hidden among decoy (or legit)
content, then you would consider it censorship if Google didn't list
that URL among its results?
yes...

Doesn't the MVPS hosts file, or Ad-Aware, or Spybot immunization
perform more or less the same sort of blocking? Do you also call that
censorship?

it's not quite the same thing... if google were *only* providing a
filter, as the hosts file and the anti-malware apps do, then it wouldn't
be a big deal... but that's not google - google is an information
provider and if they stop providing some of that information, no matter
how well intentioned, it is censorship...
What if there was a "don't include URLs with dangerous content"
check-box on Google's search page? Would you object to that?

if they did it in a manner similar to safesearch (or basically augmented
the functionality of safesearch) then i don't think there's a problem...
At the very least, Google could throw up an icon beside each URL
result (green, yellow, red) to indicate what it thinks of the URL.
Then the user can decide whether or not to follow any given URL. If
Google determined the threat level at the time it spidered the URL,
then conveying that information as part of the search result would
entail essentially no extra bandwidth or latency, and would require NO
extra software on users' PCs.

it would just make google's reindexing a much more computationally
expensive process...
 

David W. Hodgins

Virus said:
What I don't understand is why the maliciousness of HTML content can't
be analyzed (and blocked) in real time, after the content is
downloaded but before it is handed off to the browser to be rendered?

To an extent it can, but it requires rather cpu-intensive monitoring, and
it's far too easy to block legitimate (good) content.
One wonders why the search engines don't perform their own
content-analysis and throw up their own rating as part of displaying

Google does "censor" results. See
http://www.google.com/webmasters/seo.html and the spam-reporting
page at http://www.google.com/contact/spamreport.html

While some exploits are obvious, many can be hidden behind javascript,
or other methods.

Regards, Dave Hodgins
 

David W. Hodgins


kurt said:
it's not quite the same thing... if google were *only* providing a
filter, as the hosts file and the anti-malware apps do, then it wouldn't
be a big deal... but that's not google - google is an information
provider and if they stop providing some of that information, no matter
how well intentioned, it is censorship...

They already do. In order to avoid search stream pollution by spammers,
they've had no choice but to exclude obviously overrated search results.

Regards, Dave Hodgins
 

Virus Guy

kurt said:
it would just make google's reindexing a much more
computationally expensive process...

Google is constantly reindexing its database (i.e., spidering the
web). Performing a content analysis as part of that process wouldn't
take that much extra overhead (it would be done behind the scenes -
end users wouldn't be exposed to that overhead, and it wouldn't take
any extra bandwidth for Google to do).

As others have said - Google is already performing secondary analysis
on its spidering data, some of which results in URL censorship.

Problem is, some nasty domains (like crack/hack server hosts) might
feed Google's spiders custom content (free from malware, browser
exploits, etc.) but would still feed that shit to the average web
browser.

Perhaps Google (or any other search agent) will never really be able
to get an accurate picture of just what a given URL is serving up if
the agent uses its own browser ID tag line, making it vulnerable to
being served customized content.
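
That cloaking problem can at least be probed for: fetch the same URL
under a crawler User-Agent and a browser User-Agent and compare what
comes back. The fetch callable below is a hypothetical stand-in for a
real HTTP client, and a real check would also have to allow for ads and
other legitimate variation between fetches:

```python
import hashlib

def looks_cloaked(fetch, url):
    """fetch(url, user_agent) -> bytes is a stand-in for a real HTTP
    client. Identical bodies prove nothing, but wildly different ones
    suggest the server tailors content to the requester."""
    as_spider = fetch(url, "Googlebot/2.1 (+http://www.google.com/bot.html)")
    as_browser = fetch(url, "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)")
    return hashlib.sha1(as_spider).digest() != hashlib.sha1(as_browser).digest()
```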
 

kurt wismer

David said:
In order to avoid search stream pollution by spammers, they've had
no choice but to exclude obviously overrated search results.

you're talking about people who manipulate their ranking, right? that's
an abuse of google's service so i don't really think there's a problem
with google filtering those pages out...

as always, there are exceptions to rules and ideologies... just as you
can justify limiting free speech to the extent of excluding the yelling
of fire in a crowded theatre, you can justify certain other limits
too... but can you justify unconditionally filtering out exploit pages?
i don't think you can... there are certainly good reasons for it, but
there are good reasons against it too - like it being an impediment to
full-disclosure... already indexed exploit sites have value for security
research...
 

Offbreed

kurt said:
already indexed exploit sites have value for security
research...

That suggests a multiple tier search function. Google already provides a
"safe search" function, so adding a "fully unrestricted" option in the
preferences should work. I'd suggest (as if they'd ask <G>) that the
more dangerous option be something people would have to reselect each
session.
 

kurt wismer

Offbreed said:
That suggests a multiple tier search function. Google already provides a
"safe search" function, so adding a "fully unrestricted" option in the
preferences should work. I'd suggest (as if they'd ask <G>) that the
more dangerous option be something people would have to reselect each
session.

reselect each session? i think it's enough to save the option in a
cookie as they do with your safe-search setting...
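
The difference between the two proposals is just cookie lifetime: a
session cookie (no expiry) dies with the browser and forces
re-selection, while a Max-Age makes the choice persist. A sketch with
Python's standard library, with the preference name invented:

```python
from http.cookies import SimpleCookie

# Offbreed's version: a session cookie with no expiry is discarded when
# the browser closes, so the unsafe option must be reselected next time.
session = SimpleCookie()
session["unsafe_search"] = "on"

# kurt's version: give the cookie a lifetime (here one year) and the
# preference survives restarts, like Google's safe-search setting.
persistent = SimpleCookie()
persistent["unsafe_search"] = "on"
persistent["unsafe_search"]["max-age"] = 60 * 60 * 24 * 365
```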
 

Offbreed

kurt said:
reselect each session? i think it's enough to save the option in a
cookie as they do with your safe-search setting...

That would be better, but idiots abound. Google would get cursed for the
added hassle, and sued because someone was a fool.

Getting cursed is cheaper.
 
