Will never happen unless computers get the ability to read the user's mind.
There's simply no way to detect whether an action is wanted by the user or
not. Even if an action might be damaging to the system, there might still
be a valid reason for the user to do it.
That'd be nice, but like I said, without mind-reading capabilities, I just
don't see it. =)
And even *with* mindreading capabilities, I wish any computer luck reading
my mom's mind. She can't ever make a decision. =)
Well, there are kind of two separate issues rolling around at this point:
1) What well-defined action is the user taking (in fact)?
2) What is the user hoping to accomplish from the result of that action?
While #2 might need some voodoo hocus pocus to work, I strongly believe
determining #1 is entirely possible.
You're saying that when a user grabs a hot piece of metal and burns
themself, it's impossible to determine whether the user actually
wanted to burn themself or not (after all, they may have a good reason
for doing so).
I agree.
However, UAC is only about determining if the user did, in fact, grab
the metal - or not. Did somebody throw it in the user's hands, or did
they grab it?
The problem with UAC is not that the user can burn themself, but that
they can say that they grabbed the metal when they did not, or say that
they did not grab the metal when they did.
The problem isn't really that the user can make a mistake - it's that
the user can perform an action (or inaction, in the case of not
starting a program that runs anyway) which is correct by definition
(correctness being defined as whatever the user is or is not doing) and
then TURN it into a mistake.
I believe it is possible to get rid of this action/prompt mechanism and
replace it with a verifiable action mechanism, at least in the context
of a GUI - this would be much more difficult in a purely CLI
environment, I think.
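To make the idea concrete, here's one way such a verifiable action
mechanism could look. This is just a sketch under my own assumptions (the
function names and the HMAC scheme are mine, not anything UAC actually
does): a trusted input layer signs each genuine hardware input event, and
a privileged operation is only performed when it carries a token that
traces back to such an event. A program acting on its own can't forge
"the user grabbed the metal" because it doesn't hold the key.

```python
import hashlib
import hmac
import os

# Hypothetical sketch: the trusted input subsystem holds a secret key and
# signs each genuine hardware input event. A privileged action must present
# a token derived from such an event, so "the user did this" is verifiable
# rather than self-reported by the program.

INPUT_SUBSYSTEM_KEY = os.urandom(32)  # known only to the trusted input layer

def sign_input_event(event_id: str) -> str:
    """Called by the trusted input layer when real hardware input arrives."""
    return hmac.new(INPUT_SUBSYSTEM_KEY, event_id.encode(),
                    hashlib.sha256).hexdigest()

def verify_action(event_id: str, token: str) -> bool:
    """Called by the OS before performing a privileged action."""
    expected = sign_input_event(event_id)
    return hmac.compare_digest(expected, token)

# A genuine click: the input layer issued the token, so the action verifies.
click = "click:delete-system-file"
token = sign_input_event(click)
assert verify_action(click, token)

# A program trying to claim the same click without the key cannot produce
# a valid token.
assert not verify_action(click, "f" * 64)
```

Of course the real problem is plumbing that token through the GUI stack
without letting programs inject synthetic input, but it shows the shape of
"verifiable" I mean.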
This wouldn't stop people from using tools to hurt themselves. Nothing will.
But it starts to ensure that the user is indeed the one hurting
themself, as opposed to a program doing the hurting.
This starts to draw a line between the actions the user is taking and
the actions a program is taking, letting the system enforce different
security policies depending on which one is happening, and I really hope
we will see this sort of thing in future operating systems.
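The policy side of that could be as simple as keying decisions on the
action's origin. Again a toy sketch of my own (the action names and the
table are made up, not any existing OS policy):

```python
# Hypothetical sketch: the same operation gets a different policy depending
# on whether the verified origin is the user or a program acting alone.
POLICY = {
    ("user", "delete_file"): "allow",         # user explicitly acted
    ("program", "delete_file"): "deny",       # program acting on its own
    ("program", "read_own_config"): "allow",  # harmless program action
}

def check(origin: str, action: str) -> str:
    """Default-deny lookup: anything not explicitly allowed is denied."""
    return POLICY.get((origin, action), "deny")

assert check("user", "delete_file") == "allow"
assert check("program", "delete_file") == "deny"
assert check("program", "format_disk") == "deny"  # unknown -> default deny
```

The interesting part isn't the table - it's that the `origin` field is
trustworthy, which is exactly what the verifiable action mechanism above
would provide.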