As a security person I have a lot of sympathy for his position. It is far easier to break a system than to design a secure one, and harder still to implement a secure design securely.
Too much of the security world is dominated by people whose only contribution is to point out flaws in everything, in many cases flaws that were known and understood by the system designers as necessary tradeoffs if certain goals were to be met.
As a result, much of the security infrastructure we have is overbuilt and unusable as far as ordinary users are concerned. In many cases we are still waiting for a solution to be deployed.
Some flaw discovery is useful and important, the early work on WiFi security for example, and sometimes public exposure is necessary. But people who discover flaws should not think that makes them cleverer than the people who design systems. Only people who design systems that are not broken have any right to feel smug about other people's flaws, and they are unlikely to do so, because they understand how hard getting security design right really is.
But I do take issue with Torvalds's depiction of what counts as a security bug. A bug that causes a system to crash is a security bug: a crash caused by accident is just as big a problem as one caused by malice. In fact I don't think any bug at the kernel level is likely to be anything other than a security bug; that is the nature of kernel mode, and it is why recourse to kernel mode should be minimized.
Thursday, July 17, 2008
Torvalds on security bugs