[NCLUG] I was hacked!
John L. Bass
jbass at dmsd.com
Fri Dec 29 01:08:14 MST 2000
You say that like the up2date tools are the only options for remote update.
There are options, but most non-Debian users seem to be more of the
opinion that they don't want automated updates of their systems...
My point is to agree with the "most non-Debian users ... don't want automated updates"
sentiment. It doesn't really matter whether they are run via cron or manually under a
blind mandate.
>That means the distributions need to be clean, maintenance free, and lights out
>(AKA hands free) admin'able for a reasonable period (2-3 years).
You mean like Windows?
No.
With any OS deployment, you need to deal with "approving" updates,
distributing them to the desktops, and managing them. The ability to
remotely diagnose and resolve something is quite a benefit.
Remote updates/management is NOT hands free ... "hands free" is a distribution
which has been well enough shaken out that you CAN/WILL be willing to deploy it
for more than 12-24 months without changing it in any way.
>The problem with the OpenSource movement is an explosive version of the Unix
>problem ... too many idle hands with egos, pushing unnecessary (improvements)
>features which contribute bugs (AKA security advisories) at an alarmingly
>ever increasing rate. Rule of thumb says it takes 1-2 product years to wring
This statement seems to ignore the fantastic work of the SecurityAudit
folks. Something you can't do with closed source...
Chicken and Egg problem ... Open Source with open updates REQUIRES Audits.
Closed source, with audited, infrequent, minimum-necessary changes, creates an
environment that some claim has equal if not better odds against exploitable
errors. Auditors make mistakes, especially in large, complex updates written with
the intent to obscure a trojan. It quickly falls back to a trust issue. Can
you positively prove the NSA or KGB (or less organized hackers) do not have
a back door installed in the system, carefully constructed from several
unrelated subsystems?
>running kernel mode code ... did that twice. The ultimate Linux virus today
>would not touch a single file in the filesystem, but rather patch itself into
>a running kernel and remain undetectable by tripwire or other "security" tool.
A Linux virus that you can get rid of by rebooting? How... Windows...
Maybe, maybe not ... maybe a stateful attack includes a secondary boot hook too,
which current Linux tripwire tools do not look for. Maybe a fully stealth attack
just waits for the machine to boot again and promptly re-infects. So does rebooting
really "fix" the exploit?
I don't know why you brought Windows into this?? Was there a point I missed?
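To make that concrete, here's roughly the kind of check a tripwire-style tool performs --
a minimal Python sketch, purely illustrative; the watched paths and baseline file are
hypothetical examples, not anyone's actual config. An attack that only patches the running
kernel image in memory never changes any of these files, so a check like this happily
reports a clean system:

    # Minimal tripwire-style integrity check (illustrative sketch only):
    # hash a list of files and compare against a previously stored baseline.
    # The watched paths and baseline location are hypothetical examples.
    import hashlib, json, os, sys

    WATCHED = ["/bin/login", "/bin/ps", "/usr/sbin/sshd"]
    BASELINE = "/var/lib/integrity/baseline.json"

    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                h.update(block)
        return h.hexdigest()

    def snapshot():
        return dict((p, digest(p)) for p in WATCHED if os.path.exists(p))

    if __name__ == "__main__":
        if sys.argv[1:] == ["init"]:
            with open(BASELINE, "w") as f:
                json.dump(snapshot(), f)          # record the trusted baseline
        else:
            with open(BASELINE) as f:
                old = json.load(f)
            for path, h in snapshot().items():
                if old.get(path) != h:
                    print("MODIFIED: " + path)
            # Code patched into the running kernel's memory never touches
            # these files, so a check like this reports nothing.

A boot-hook re-infection of the sort mentioned above would likewise only show up if
whatever it modifies on disk happens to be on the watch list.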
>Lastly, the ultimate crack would be to divert DNS from the RH update site to a
>cracker-managed server offering "updated" packages containing trojans. And without
>even a thought, millions of machines would be updated overnight with the latest
Correct me if I'm wrong, but doesn't up2date use signed packages?
Not quite so easy to accomplish in that case... I would suspect that
the key is 1024 bits long -- let distributed.net chew on that... I don't
suspect that setting up a "distributed.net" key cracker could crack the
key before someone notices it, alerts RedHat, and RedHat ships out a
2048-bit key update.
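For reference, the signature check being described boils down to something like this --
a hedged sketch in Python; the package filename is made up, and it assumes rpm's
--checksig option, which verifies a package's embedded PGP/GPG signature against keys
already imported into the local keyring:

    # Sketch: refuse to install a downloaded package unless rpm can verify
    # its signature against the locally trusted keyring.  The package name
    # is a made-up example.
    import subprocess, sys

    pkg = "somepkg-1.0-1.i386.rpm"

    if subprocess.run(["rpm", "--checksig", pkg]).returncode != 0:
        sys.exit("signature check failed -- refusing to install " + pkg)

    subprocess.run(["rpm", "-Uvh", pkg], check=True)   # install/upgrade

The catch is where that "locally trusted" key came from in the first place, which is
the point below.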
Distributed trust is still trust that can be subverted or broken. If the individual
packages contain the signature, then it simply means that a shadow update server
also requires a shadow certificate authority to verify the trojan
packages. As keys get bigger, the obvious attack is against the key management
and verification infrastructure, not the keys themselves. The trust assumption is that
the network topology/management is relatively secure ... which is not the case today.
IPng is working toward that goal, but a solid solution is not deployed today.
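To illustrate the "attack the key management, not the key" point: the signature only
means something if the verifying key was pinned on the client ahead of time, out of
band. A sketch -- the file names and keyring path are hypothetical, the gpg options
are stock GnuPG:

    # Sketch: verify a detached signature ONLY against a keyring that was
    # installed locally ahead of time (e.g. shipped on the distribution CD),
    # never against a key fetched from the same server as the packages.
    # File names and the keyring path are hypothetical.
    import subprocess

    PKG = "updates/foo-1.0.i386.rpm"
    SIG = "updates/foo-1.0.i386.rpm.asc"
    PINNED_KEYRING = "/etc/vendor-pubring.gpg"   # trust anchor chosen in advance

    result = subprocess.run(
        ["gpg", "--no-default-keyring",
         "--keyring", PINNED_KEYRING,
         "--verify", SIG, PKG])

    # Importing "the vendor key" from the update server itself before this
    # check would defeat the purpose: a DNS-diverted shadow server can hand
    # out a shadow key that matches its trojaned packages.
    print("verified against pinned key" if result.returncode == 0 else
          "verification FAILED")

If the client instead fetches or refreshes the key over the same channel it downloads
packages from, the check verifies nothing more than the shadow infrastructure itself.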
Lastly, keybreaking is a matter of odds. People win the lottery every week against
odds of 100,000,000:1 by randomly choosing a single correct string of numbers (a key).
A 2048-bit key might be broken in the first 1,000 random attempts to locate it (and
it might never be broken). If the key is widely deployed (trusted), then it's also
widely exploitable (IE the grand prize) should it be broken.
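Putting the lottery analogy into numbers, as a back-of-envelope sketch -- the
100,000,000:1 figure is the one quoted above, everything else is just powers of
two and ten:

    # Back-of-envelope odds: one lottery ticket vs one blind guess at a key.
    lottery_odds = 10 ** 8                    # the 100,000,000:1 quoted above
    key_bits = 2048
    key_pow10 = int(key_bits * 0.30103)       # 2^2048 is roughly 10^616

    print("lottery:      1 winning ticket in 10^8")
    print("%d-bit key: 1 correct value in roughly 10^%d" % (key_bits, key_pow10))
    # Any single guess *might* hit (the point above), but the odds per guess
    # are about 10^608 times longer than the lottery's.
    print("ratio:        about 10^%d lotteries per key" % (key_pow10 - 8))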
Think of key-based electronic signatures this way ... a big company is betting its
entire reputation (and possibly fortune) on some hacker not hitting the jackpot,
which is a risk "in addition to" product bugs (IE an additional risk). You can
manage the introduction of product bugs, some claim even to the point of nearly
flaw-free security (the SecurityAudit ideal), but this is always an additive risk, with
absolutely no guarantee it will decrease risk exposure. At least with well-contained design
processes, review, and code management, you can bound product flaw risks substantially.
Now let's say some crypto nut wins a big lotto, takes his winnings as a lump sum,
sets aside a mil or two, and puts the rest up as a bounty for the RH/M$ key.
Since that has a bigger payback than the current RSA (or other distributed.net)
prize, it will be the focus of a LOT of effort. Presto, we take 2^70th or so off
the odds on a 2048-bit key ... still a big crack, but a 256-bit key with
nobody looking for it is much less likely to be broken than a 2048-bit key
with everybody looking for it (keyspace/cycles looking for it). With a $60mil
prize, someone might even be tempted to build a VLSI cracker tuned for the project,
which would take another 2^70th or so off the odds. Investing a year to partially
factor the edges of the solution could take off another 2^70th (or a lot more),
significantly simplifying the VLSI cycles required to test solutions.
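The arithmetic behind those reductions, as a rough sketch -- the three 2^70 factors are
the savings claimed above, and the guess rate is an assumption purely for scale, not a
measured number:

    # Rough arithmetic for the reductions described above: start from a
    # blind search of a 2048-bit space and peel off the three ~2^70 factors
    # claimed for massed distributed effort, a tuned VLSI cracker, and
    # partial factoring of the edges of the solution.
    import math

    key_bits = 2048
    reductions = [70, 70, 70]                  # the savings claimed above
    remaining_bits = key_bits - sum(reductions)

    guesses_per_second = 10 ** 12              # assumed aggregate rate, for scale
    seconds_per_year = 60 * 60 * 24 * 365
    years_pow10 = (remaining_bits * math.log10(2)
                   - math.log10(guesses_per_second * seconds_per_year))

    print("effective search space: 2^%d" % remaining_bits)
    print("at 10^12 guesses/sec:   on the order of 10^%.0f years" % years_pow10)
    # A 256-bit key nobody is hunting for has far less raw keyspace than
    # that, but it draws none of this concentrated effort -- which is the
    # comparison being made above.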
Lastly, there are always the paranoid, who are sure the NSA/KGB *ALREADY* have a backdoor.
And the pragmatists, who know that for $60mil, a RH/M$ employee will covertly
acquire the prize by conspiring with an arm's-length 3rd party after stealing the
original keys (or an organized crime or government figure will simply "purchase"
the key). Or someone will pay a similar employee to introduce a backdoor in the released
product with the sole goal of targeting several hundred Fortune 1000 companies' networks
for cyber-terrorism or extortion. The circle of trust gets bigger every day,
starting with thousands of OpenSource developers.
My point was, the current attacks aren't *EVEN* interesting ... the game is just
getting started ... do you really disagree? You seemed to imply that the ultimate
solution is already at hand, and that this form of hacking is slam-dunk folklore history
today.
Are you trying to support the original comment that today's tripwire is the ultimate
solution? My point was just that its form today doesn't even envision
a really good attack ... do you really disagree?
Is your position that all these "trust" issues have a technological solution?
John Bass