Tuesday, February 28, 2006

Send spam, charge recipients, $profit

The Malaysian Star reports on a company billing recipients for its own SMS spam.

Needless to say the company says it was a mistake, the charges are being reversed, apologies provided, etc. But what are we going to do when intentional criminals start exploiting these schemes? Premium number fraud is a major problem, but there the charges are outgoing, so the user at least has to be tricked into making the call. With this spam attack there was nothing the recipients could do to avoid the charge.

How Bookmarks should work

Bookmarks/Favorites lists have a habit of getting out of hand. The systems built into IE are next to useless when you have 1000+ favorites which is actually a small number when you think about it.

The new Google toolbar allows bookmarks to be easily shared between machines and kept in sync; it also supports labels rather than just folders. This is a big step forward, but the interface for adding labels is clunky: it takes too many clicks to do it right and there is no way to customize the interface.

What I would like from a bookmark scheme is for clicking 'add' to bring up a simple, one-click menu that allows any of my active labels to be applied. IE forces me to thread through nested layers of its folder tree. The Google interface suffers badly from mouse/keyboard transitionitis. Even on a laptop with a trackpoint built in it is tedious to have to switch from one to the other.

None of the systems has a good interface for bulk editing. Being able to add a bookmark to favorites with a single click is useful, but that only puts off the task of cataloging. There should be an interface that allows all the previously unfiled links to be cataloged in one go. Even on broadband, having to wait for the server to respond to each and every change individually is tedious.
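As a sketch of the sort of bulk interface I mean (all the names here are invented, not any real toolbar's API), the client could queue label assignments locally and push them to the server in a single batch instead of one round trip per bookmark:

```python
from collections import defaultdict

class BookmarkCatalog:
    """Queue label assignments locally, then sync them in one batch."""

    def __init__(self, server):
        self.server = server                 # hypothetical sync backend
        self.pending = defaultdict(set)      # url -> set of labels

    def label(self, url, *labels):
        # No network traffic here; just record the change locally.
        self.pending[url].update(labels)

    def sync(self):
        # One request for the whole editing session instead of
        # one server round trip per bookmark.
        batch = {url: sorted(labels) for url, labels in self.pending.items()}
        self.server.apply_batch(batch)
        self.pending.clear()
```

The point is that cataloging a hundred unfiled links becomes a hundred calls to label() and a single sync() at the end, not a hundred waits for the server.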

A deeper problem is that there is only one mechanism for 'favorites' and 'bookmarks'. These should really be two separate things. I want to be able to file away sites that I use regularly separately from pages that I have found and might want to revisit or use in citations later.

What I would like is a comprehensive citation system that allows me to add a paper to a citation database. It could then be catalogued according to author, title, keywords, etc. Citations could then be extracted as HTML or rich text markup as needed. The extraction process could be automated by tools; there could even be provision for exchanging citations collaboratively or doing group annotations wiki-style.
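To make the idea concrete, here is a rough sketch of a citation record that could be extracted as HTML on demand; the fields and rendering are entirely my own invention, not a description of any existing tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Citation:
    authors: List[str]
    title: str
    year: int
    url: str = ""
    keywords: List[str] = field(default_factory=list)

    def as_html(self) -> str:
        # Render a simple citation; a real tool would offer several
        # styles and a BibTeX or rich text export as well.
        names = ", ".join(self.authors)
        link = f' <a href="{self.url}">[link]</a>' if self.url else ""
        return f"{names} ({self.year}). <i>{self.title}</i>.{link}"

paper = Citation(["A. Author", "B. Author"], "An Interesting Result", 2005,
                 "http://example.org/paper.pdf", ["security", "protocols"])
print(paper.as_html())
```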

BBC NEWS | Europe | Irving expands on Holocaust views

The attempt to avoid a long prison sentence by recanting having failed, David Irving repeats his earlier Holocaust denial.

I am pretty sure that this was the plan all along: get the publicity for recanting under duress, then get more publicity for repudiating the recantation. Jailing Irving is only furthering the neo-NAZI cause.

There is a line that has to be drawn, but the action against Irving is counter-productive. Suppressing argument over facts only draws attention to the side being suppressed. It is clear that nobody, not even the neo-NAZIs, really believes the history they are concocting. Holocaust denial is just a code for anti-Semitism.

It is somewhat odd to see the prosecution of Irving taking place at a time when the Saudi-backed rent-a-mob calling for the murder of Danish cartoonists acts with impunity. Even more surprising is the fact that Blair and co. did not realize that failing to act would cost them support in the Muslim community, which regards the rioters much as it would football hooligans.

Monday, February 27, 2006

Educated Guesswork: MRE Menu 22: Jambalaya

What do meals ready to eat taste like? Eric and Fluffy find out.

Someone should get them a subscription to Zagat.

Diebold paranoia continues

Kevin Drum blogs the standard left-wing line on Diebold: how can we be sure the machines are secure when there are known serious flaws and the manufacturer's CEO has expressed strong partisan sympathies? Drum does not mention the last bit, but in the leftiesphere Diebold is synonymous with 'known GOP hacks'.

Concern over voting machines in the US is largely confined to the left. In other countries, particularly Venezuela, it is the right that is concerned. At some point there will be an election where the right feels that it has been unfairly treated (i.e. they lose an election narrowly). At that point expect concern over voting to become bipartisan.

What the concern over Diebold demonstrates is that the traditional academic treatment of voting scheme security is flawed at best. The primary concern is not the secrecy of the ballot, it's the integrity of the count. No system is going to be acceptable in the long term without the ability to perform an independent audit of the count.

Sunday, February 26, 2006

Gladwell gets blog

Malcolm Gladwell has a new blog, which makes this a good time to review his book Blink.

Gladwell takes seriously the first literary commandment: thou shalt not bore.

Taken as a scholarly thesis Gladwell's book does not score highly, but that is not the point. There are plenty of neuroscientists working on the problem of cognition; very few of them have the skill to explain their research in a form accessible to the ordinary reader, and when they do have the skill, the certitude with which their theories are presented suggests that they probably should not bother.

Gladwell's central thesis, that subconscious, split-second thought can be as valuable as, and frequently more accurate than, higher-level conscious reasoning, is surely accurate at some level. We still know far less about how the mind works than we ought to, but few would dispute that learned skills become automatic and subconscious with practice. We recognize faces instantly; why not recognize the authenticity of a statue, as in the opening chapter?

The capacity for learned skill might also cut the other way. Reading the book I had to wonder about Kuhn's Structure of Scientific Revolutions: if your subconscious is making split-second judgments according to the established view, it is going to be hard to learn or accept a new conceptual framework that requires the established one to be abandoned. That would explain why genuinely revolutionary theories tend to be conceived by people in their twenties and thirties. They know enough of the old view to critique it effectively, but it is not so ingrained in them that it runs on autopilot.

The da Vinci suit

Via the Huffington Post, the Observer writes about the forthcoming da Vinci Code plagiarism trial.

The case is going to be important for many reasons, not least the fact that if the plaintiffs win it will mark a huge change in copyright law. And what a pathetically ungrateful lot they are.

Most people who sold an extra two million copies of a book because of another author would be only too pleased. Baigent and Leigh want a cut of the da Vinci Code royalties.

Holy Blood/Holy Grail is an Oliver Stone history of the early church. Drawing on a range of wacky conspiracy theories, it is an entertaining read but definitely not a work of scholarship. That is not to say that all the conspiracy theories in the book are complete rubbish; even a stopped clock is right twice a day.

HB/HG was written at a time when the history of the early Christian church was being reexamined by serious scholars. The idea that Constantine's reasons for adopting Christianity might have been political rather than spiritual had considerable shock value, as did the idea that St Paul's theology might have been radically different from that of Christ and his apostles. Today these are perfectly respectable scholarly positions, albeit ones that are not likely to be endorsed by the Vatican any time soon.

The rest of HB/HG is considerably more speculative, to put it mildly. But even here Baigent et al. can hardly claim great originality, not least because when HB/HG went on sale they were claiming it as fact. The core of the HB/HG conspiracy theory is lifted wholesale from Pierre Plantard, the French confidence trickster who started the Priory of Sion and later admitted having fabricated much of the material. If Baigent et al. are due royalties, then what about Plantard's estate?

Dan Brown is not the only person to have borrowed from Plantard's imagination; one of the Lara Croft Tomb Raider games uses similar motifs.

The other feature of the da Vinci Code that has received less comment than it should is the virulent anti-Catholicism. The Catholic church is certainly not above criticism: it covered up for pedophile priests, allowing them to molest more children; it meddles in politics; it is reactionary and sexually repressed. There is however an important difference between factual and fictitious allegations. The attack on Opus Dei in particular has overtones of the Protocols of the Elders of Zion.

Discourse.net: French ISPs Found to Violate French Consumer Protection Law

Michael Froomkin blogs about an ISP having a teensy problem with the French consumer protection laws.

Although I seem to remember that Wanadoo started in France, the business practices are almost certainly copied from the US. That's not a happy situation.

DoJ vs. the botnets

The DoJ recently announced two successful cases against botnet herders. Both were in their early twenties and engaged in DDoS attacks and affiliate program fraud.

Anthony Scott Clark
Jeanson James Ancheta

It would be interesting to know how these perps were tracked down; the plea bargains mean the interesting details won't be coming out at trial. I suspect that it has something to do with the affiliate program fraud. DDoS attacks are hard to track to their origin. Affiliate programs are largely self-correlating.

M4 Message Breaking Project

The M4 Message Breaking Project is trying to break the last three unbroken Enigma messages; one message has already been broken.

The fact that it takes 100 Celeron processors four days to walk the key space demonstrates that the NAZI faith in the Enigma machines was not entirely misplaced. The codes were broken because the machine had two relatively minor weaknesses and because the cipher clerks were sloppy.

It appears that Michael Smith's excellent Station X history of the Ultra project is out of print. But this book, 'Codebreakers', looks interesting: a collection of personal accounts of the project by the codebreakers themselves.

Friday, February 24, 2006

A Brief History of Net.NAZIs

The jailing of David Irving for three years in Austria has resulted in the predictable defenses of freedom of speech.

The jail term is certainly a bad move, but none of the articles I have read on the net have made the one argument that everyone other than Irving and his supporters can agree with: sending Irving to jail is a bad idea because that is exactly what he wanted.

Let me clarify that: I would not be at all surprised if Irving is now having second thoughts about martyrdom. He did attempt to recant his claims at the last minute. But the trial in Austria is merely the latest act in a long history in which Irving and his supporters have invited prosecution as a political tactic. Without the gift of state prosecution Irving's ideas would have attracted considerably less publicity.

The publication of the first edition of Irving's Hitler's War in 1977 coincided with Ernst Zundel's self published Did Six Million Really Die, both are founding texts for the modern holocaust denial movement.

Irving disputed the minutiae of the history of the Holocaust: whether details of first-hand reports could be trusted, whether the atrocities were directed by the high command or were the result of mid-ranking bureaucrats working on autopilot. By this time Irving had become established as a writer of populist histories for the mass market. Few academic historians read them, let alone considered them worthy of rebuttal. Individually the claims made by Irving were not particularly surprising: history is messy. Taken as a whole, however, the book was an outright apologia, its claims only sustainable if there was a widespread conspiracy to create a false history.

Zundel took the direct approach: outright denial. Despite the inflammatory nature of the claims, Zundel remained practically unknown outside far-right circles and the anti-hate groups that track them until 1981, when his postal 'privileges' were withdrawn for publishing hate speech. In 1984 Zundel was charged with "disseminating and publishing material denying the Holocaust"; David Irving and Fred Leuchter testified for Zundel at trial. At the time Leuchter was acting as a consultant on execution methods for a number of US states, a position he was later discovered to have absolutely no qualifications for.

There can be little doubt that Zundel would have remained an obscure crank but for the prosecution. Very few people have heard of, let alone remember Dan Gannon, the net.NAZI behind Banished CPU.

Banished CPU began spewing Holocaust denial propaganda in August 1991 and continues posting today. Despite being the first major hate site on the Internet, and the earliest and for a long time the most prolific poster, Banished CPU is no longer prominent even in hate group circles.

Gannon started spamming USENET before the term 'spam' had even been coined. A trickle of posts under a variety of pseudonyms quickly became a flood. The most frequent posters were Pete Faust and Ralph Winston, together with Maynerd, the Main Nerd. Occasionally Maynerd's 'girlfriend' Foxy Roxy would appear, claiming to be a 'reformed' Jew.

The Banished CPU incident was the first of many demonstrating the problem of debating in an open forum where the only qualifications required of contributors are a keyboard and an opinion. As people quickly tired of Banished CPU, the question was asked: 'is it time for the Usenet death penalty?'

Which was of course Gannon's game. Virtually all the material he was posting was a byproduct of the Zundel trial. If there had been no trial there would have been no material.

Irving's subsequent history shows the same pattern. After the Zundel trial Irving was eclipsed. He can hardly have expected the libel case he brought against Lipstadt to result in anything other than his being forced into bankruptcy. The Austrian case would never have been brought had Irving not decided to deliberately court prosecution by returning.

In the end Irving and his ilk are simply a more offensive form of that well-known net phenomenon, the troll. Unlike the ordinary troll, their egos are not satisfied by a mere IP block. Only a prison sentence garners them the validation and publicity that they seek. Prosecution breaks the cardinal rule: do not feed the troll.

Tuesday, February 21, 2006

XML Key Management Specification (XKMS)

During my RSA talk 'Trusted Third Parties: the Next Ten Years' I mentioned ACC and promised a link. Here is the paper.

The point of ACC is that it should be easy for devices to automatically configure security settings and that XKMS provides a way to do that with minimal overhead.

Crime: The Real Internet Security Problem - Google Video

The talk on Internet Crime I gave at Google is available on Google video.

Internet crime is a serious problem, real money is being stolen. The Nigerian letters/419 advance fee frauds are comical until you find out about the victims who lost their life savings.

The main Internet crime threat at the moment is phishing: the theft of access credentials through a social engineering attack. There are three main approaches to stopping phishing:


  1. Stop attacks in progress: this is what we do in the takedown service. When a phishing attack is detected we try to get it shut down by the ISP as soon as possible. This does not drive the gangs out of business but it does limit their profits and it does encourage them to choose other targets.
  2. Disrupt the social engineering attack: Email allows the phishing gangs to plausibly send email that purports to come from any trusted brand they choose. Secure Internet Letterhead provides a trustworthy method of identifying content as from the trusted source. So an email will show the trusted Bizybank logo if and only if it is signed by a party authenticated as being Bizybank.
  3. Use theft-proof tokens: In the long term we will all be using OTP or smartcard technology to log in to our bank accounts.

The main thrust of the talk was on the Secure Letterhead concept, which I am presenting at NIST and at the W3C workshop. This talk was given before the InfoCard launch, which is a pity because InfoCard uses the letterhead concepts.
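The letterhead rule itself is a single check. A toy sketch of the decision a mail client would make, with the signature verifier and logo registry as placeholders rather than any real API:

```python
def display_logo(message, verify_signature, logo_registry):
    """Show a brand logo if and only if the message carries a valid
    signature from a party authenticated as owning that brand.

    verify_signature: callable returning the authenticated signer's
    identity, or None if the message is unsigned or the signature is
    bad. Both it and logo_registry are placeholder stand-ins here.
    """
    signer = verify_signature(message)
    if signer is None:
        return None                   # unsigned or bad signature: no logo, ever
    return logo_registry.get(signer)  # logo certified for that signer, if any
```

A mail client would plug in its own S/MIME or DKIM verifier and a registry of logos certified to particular signers; anything that fails verification simply renders without the trusted logo.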

It is important to attack the problem using all of these approaches; they are not exclusive. Stopping attacks in progress is all that we can do without completing a major infrastructure build-out. Disrupting the social engineering attack is essential if we are going to restore trust in the Web and email. Theft-proof credentials such as OATH are the long-term solution, but deployment will take time.

It is equally important to maintain pressure on the tool providers, botnets and dumps markets. Break up the trading sites, shut down as many bots as possible, prosecute the tool providers as well as the tool users. The two engines of Internet crime are botnets and spam. Both can be significantly reduced at little cost.

If every WiFi box and cable modem were required to have a reverse firewall built in to limit outbound attack traffic, the volume of spam and DDoS attacks from botnets would diminish significantly.
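A reverse firewall of this sort does not need to be clever. A sketch of the kind of policy I have in mind, with the thresholds picked out of the air purely for illustration:

```python
import time
from collections import deque

class OutboundRateLimiter:
    """Flag hosts on the home network that open outbound connections
    (e.g. to port 25) faster than any legitimate user plausibly would."""

    def __init__(self, max_conns=20, window_seconds=60):
        self.max_conns = max_conns        # illustrative threshold only
        self.window = window_seconds
        self.history = {}                 # host -> deque of timestamps

    def allow(self, host, now=None):
        now = time.time() if now is None else now
        recent = self.history.setdefault(host, deque())
        while recent and now - recent[0] > self.window:
            recent.popleft()              # forget connections outside the window
        if len(recent) >= self.max_conns:
            return False                  # looks like a spam bot: block or alert
        recent.append(now)
        return True
```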

Another powerful technique would be to require ISPs to filter out executable content in emails by default. There is no real need to use email to distribute executable code: most programs are much too big to fit in email anyway, and the people who are able to do it securely can easily circumvent simple blocking techniques. 99% of Internet users do not need this very risky feature; the fact that it is enabled for them by default creates a problem for all of us.
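Nor does the filtering itself require anything sophisticated. A crude sketch of the default blocking I mean; the extension list is illustrative, not exhaustive:

```python
BLOCKED_EXTENSIONS = {".exe", ".scr", ".pif", ".bat", ".cmd", ".vbs", ".js"}

def strip_executables(attachments):
    """Drop executable attachments by default; users who genuinely
    need them could opt out at the ISP."""
    kept, dropped = [], []
    for name in attachments:
        ext = "." + name.lower().rsplit(".", 1)[-1] if "." in name else ""
        (dropped if ext in BLOCKED_EXTENSIONS else kept).append(name)
    return kept, dropped

kept, dropped = strip_executables(["report.pdf", "invoice.exe", "notes.txt"])
print(kept, dropped)   # ['report.pdf', 'notes.txt'] ['invoice.exe']
```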

Tuesday, February 14, 2006

RSA Cryptographers panel

Last year we heard about the new attacks on SHA-1.

Rivest: This is 30 years since D-H invented public key and DES came out. Fifteen years ago, at the first RSA conference, people were attacking RSA and Tim was inventing the Web. He is looking forward to RFID, computer vision and speech.

Shamir: Has been beating up on RFID tags; how secure are they? Normal power analysis does not work, but it is possible to measure the amount of energy being absorbed from the environment. This allows the password schemes to be defeated: the chips have a surge in energy after the password fails, so an incremental attack can be used.

Diffie: Talking about NSA Suite B, the non-classified algorithms; expects them to squeeze out other schemes. Most notably they are using ECC. This is going to make it harder for cryptographers to propose new schemes; instead the focus is likely to be on analyzing existing schemes.

Hellman: Comments on the paradox of public awareness: if a disaster is averted, the money spent on prevention appears wasted. Comments on the 'small gene pool' for public key cryptography: there are in essence only two principal schemes, and ECC is really just a gene on the chromosome. Why don't we use key distribution centers and public key crypto in combination? Hey, that sounds very similar to my NIST paper.

Shamir: Arguing for raising general awareness of security risks rather than preventing attack.

Diffie: Points out that 9/11 was actually due to the failure to secure the cockpit door and the failure of the passengers to beat up the hijackers. The cowardice of giving planes to the terrorists. [Me: as Jeff Schiller pointed out after the attack, they did four the same day because they knew it was the last time it would work]

Shamir: Commenting on the Hash algorithm developments in the last year. Practical impact not yet apparent.

Rivest: A wakeup call. Need to design schemes from scratch, not just tweak existing designs. Unlike public key crypto the gene pool is very large. In the past we were skating too close to the design/efficiency edge.

Diffie: Cryptography is still the best hook for security despite the breaks.

Shamir: Imagine you are going to break AES: you can either spend $gazillion or $50K befriending the head of security and entrapping him. Side channel attacks in 55 ms if you can send the machine certain data.

Kaliski: Asks Rivest about the usability of security: did you invent the right thing?

Diffie: (Answering) Security is always political; security measures always advantage one group and thus disadvantage others. Security codes for legitimacy. Security channels them into particular relationships. [Good comment on DRM politics]

Hellman: Security needs to be built into an application. Could not get the O/S people involved at the time.

Rivest: We did the right thing with the math, but not the scenarios. The model was that the agent was the computer: Alice is really a computer. The problem is how does the user trust their electronic agent?

Shamir: You did the literature a favor by making it more user friendly with Alice.

Diffie: Write down your password, your wallet is much more secure than your computer.

Rivest: [on the next 15 years] We still have not proved the fundamentals, P vs NP etc. Answering the question raised by Diffie: the field is a lot richer and innovation is on different fronts, different constructs etc.

Shamir: The problem of long term security is much different from the problem of sending a message today that needs limited security in transit.

Hellman: Can't go forward, but can go back. Almost 30 years ago we said 2000 bits minimum for 30-year security, 4000 was preferable, 10K not too much. We did not do too badly.

Diffie: In some respects the signatures on the constitution and the signatures on the Magna Carta are still important.

Rivest: It takes about 15 years for ideas to go from concept to use. Identity-based crypto may be becoming the right approach to authenticated email.

Shamir: Multiparty computation is a good idea but existing schemes are too complex, e.g. voting. Protocols are too complex; they do a lot of proofs but they are not protecting against real-world attacks. Example of the Palestinian elections: the use of camera phones to demonstrate having voted for or against a certain candidate.

Diffie: Intel going to talk about covert channels.

Bill Gates at RSA

I watched Bill give his keynote at RSA, he has become a regular feature here. He started off by joking that he is glad he decided to speak here rather than go duck hunting with Dick Cheney.

Much of the speech was intended to say 'yes, we get the security message; do you folks understand the make-it-simple-for-ordinary-people message?' It was pretty effective.

One interesting note is that Microsoft is now using the term 'trustworthy computing' rather than 'trusted', so it's not just me anymore. Also interesting was that Bill disappeared off stage for both sets of demos. Trying to avoid CEO syndrome perhaps?

The first demo was mostly aimed at enterprise admins showing how an employee who lost their laptop and cell could be reprovisioned without recourse to central IT support. It was pretty slick and if it works would certainly avoid the need for every new machine to be shipped to IT support for configuration before issue. Even more important the configuration of the machine is checked each time it connects to the corporate net.

The second demo showed the use of InfoCard. I have seen several presentations on InfoCard but this was the first time I saw a live demo of the user interface portion in final form. It is pretty slick, particularly running on top of Vista.

From a security perspective there are lots of important details: security-sensitive information like private keys is locked down to secure partitions, and they are using PKI, certificates and the WS-* stack. The most important change though is that this is a user interface that ordinary people are likely to be able to actually use.

InfoCard does not have all the features of Secure Internet Letterhead that I would like to see, but it is clearly a step in that direction. We are starting to see icons being used to communicate trusted brands to users. But people don't yet get the need to hold the trust providers accountable in a system that depends so heavily on them.

Scott McNealy is up now, he is saying that Bill forgot to mention his invitation to go hunting with him...

Sunday, February 12, 2006

Bill Thompson on two-tier service

The BBC have an article by Bill Thompson on their site. He makes the case against two-tier Internet service.

This is an important battle and in the end I suspect that the carriers seeking rent from large content providers are going to lose. But this is not necessarily the Pareto optimal outcome.

Verizon's attempts to get Google to pay for bandwidth are doomed to fail. I doubt Google management bother to return the calls. What is Verizon going to do if Google refuses to pay up? If they try to cut access to Google they will face a subscriber revolt. If Google did pay off Verizon they would guarantee similar demands from other carriers, which would only escalate over time.

As a business proposition paying money for something Verizon's subscribers think they are paying for already makes zero sense. It is old style telcothink.

The idea that Congress is going to help with this scheme is even more absurd. Congressmen may be clueless about the net but their staffers are not. What tool does Verizon think the staffers use every day for their research? Even if Congress could be bribed into helping with the scheme the success would be shortlived. Whatever party went along with the scheme would be sure to get punished by the voters.

There is a possible win-win scenario but I doubt it is reachable. It makes no sense for Google to pay more for what subscribers are already paying for, but it might make sense for Blockbuster to pay more for Verizon to deliver a video-on-demand movie over the net at high speed. DVD-quality video requires about 1 GB/hour using MPEG-2. That works out at about 2.2 Mb/sec, rather more than the raw bandwidth that cable provides. Take into account framing issues and contention and the maximum sustained throughput one subscriber can expect is maybe 0.1 Mb/s. Even with MPEG-4 we are far away.
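The arithmetic, for what it is worth (rough figures only, taken from the estimates above):

```python
# DVD-quality MPEG-2: roughly 1 GB per hour of video.
gigabytes_per_hour = 1
bits_per_hour = gigabytes_per_hour * 8e9       # about 8 gigabits per hour
required_mbps = bits_per_hour / 3600 / 1e6     # sustained rate needed
print(round(required_mbps, 1))                 # ~2.2 Mb/s

# A two-hour movie fetched at the illustrative 0.1 Mb/s sustained rate:
movie_bits = bits_per_hour * 2
hours_to_download = movie_bits / 0.1e6 / 3600
print(round(hours_to_download))                # ~44 hours
```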

It would make every bit of sense for Blockbuster to be able to pay Verizon to boost a subscriber's bandwidth for a couple of hours to download a movie. Getting to that point would require development of a new set of protocols, net routing technology, etc. That is the easy part; much of the technical work is more or less done.

What I don't think is going to happen in time is settling the political issues. Steady increases in performance are what net users expect as a matter of course. Unless an agreement is reached soon the potential purchasers of the bandwidth are going to think that they can get what they want by just waiting a little longer until the standard net bandwidth rate is high enough for their needs.

Saturday, February 11, 2006

Why can't people do the obvious?

Why can't people do the obvious thing and make a computer monitor with a DVD player, media reader and wireless keyboard/mouse hub built in?

20" LCD monitors are becoming standard; a decent one can be had for $500 at Costco. Even the absolute top-of-the-range Apple screen is 'only' $2500. That is certainly a chunk of change, but it's not so long since the top of the range was 23" at considerably more.

The obvious thing to do when faced with a situation of this type is to differentiate the product by building in more features. Sony has taken this to an extreme with a range of PCs built into a monitor case. It's a cute but doomed concept, not least because squeezing the computer up against the monitor does nothing for heat dissipation. Computers last about three years before becoming obsolete; a monitor should be good for at least two computers.

I want a nice clean, uncluttered desktop. I don't want to have to buy a disposable $2000 computer to get that. Moving all the peripheral devices that the user needs access to onto the monitor means that the computer box can be off in a corner.

My current desktop system was an attempt to do something of the sort five years ago. The CPU sits off in a corner. On my desk I have an LCD monitor which happens to sport a USB hub. The DVD player plugs into the USB hub. That still represents the state of the art, although finding a monitor with the USB hub is more of a challenge than it should be.

The system looks OK from the front, but behind there is a rat's nest of cables and power supplies. I tried to get the direct DVI connection to the monitor to work, but the only way to get the necessary DVI cable in the length I need and with the right varieties of connector on each end is to hunt round the net and special order.

I want one cable for power and one fiber optic cable for data between the monitor and the CPU. The reason I want fiber is not because I need that much bandwidth, it's because then I can buy the cable as stock instead of paying Belkin $75 or more.

Moving all the peripherals to the monitor would allow the size of the CPU box to shrink. The basic size of the CPU box has remained unchanged for 15 years or so. Even though nobody uses 5.25 inch hard drives any more, the machine has to be wide enough for the DVD-ROM. Take that out and the machine can lose 2" of width and up to 4" of height.

Of course the reason these companies can't do the obvious is that for this type of monitor to be marketable it either has to be sold as part of an all-inclusive computer/monitor bundle, or the monitor manufacturers and the computer manufacturers have to come together and agree on how to do it.

Thursday, February 09, 2006

Why Linux must embrace trustworthy computing

The GPLv3 debate continues.

At this point the debate is framed as a simple question: are you for or against DRM? But as Linus has been trying to point out with characteristic reasonableness, there is much more to the issue than RMS makes out.

As a security specialist I see the issues from a somewhat different perspective. Whether justified or not, a large part of the Linux brand image rests on its reputation for security. Trustworthy computing represents the next generation of security technology. If Linux adopts license terms that foreclose support for trustworthy computing, it is almost certain to be overtaken in key markets by platforms that support it.

While DRM and 'trusted computing' are often treated as if they were synonymous they are in fact very different. Strong DRM requires trustworthy hardware but trustworthy hardware is good for much more than DRM.

Take the perennial problem of how to store the private keys for an SSL server. A typical Apache configuration stores the private key in a file on disk, which means that anyone with access to the backup tape can read the private key value. The file can be password encrypted, but that only postpones the problem: how do we protect the password?

The best answer to this problem today is to use SSL accelerator hardware that provides built in hardware protection for private keys. But this is an expensive solution if you only need security and not extra speed.

A trustworthy computing platform would provide the ability to lock down the private key so that it could only be accessed by the Apache executable and could not be exported off the machine in any circumstance.
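Conceptually the difference between a key on disk and a locked-down key looks something like this toy model; real trustworthy hardware does the sealing in silicon, and nothing here is a real API:

```python
import hashlib

class ToySealedStore:
    """Toy stand-in for a hardware sealing facility: the 'chip' stores a
    key together with the measurement (hash) of the one program allowed
    to use it, and refuses to release the key to anything else."""

    def __init__(self):
        self._sealed = {}   # name -> (key bytes, allowed program hash)

    def seal(self, name, key, program_bytes):
        self._sealed[name] = (key, hashlib.sha256(program_bytes).hexdigest())

    def unseal(self, name, program_bytes):
        key, allowed = self._sealed[name]
        if hashlib.sha256(program_bytes).hexdigest() != allowed:
            raise PermissionError("caller is not the approved executable")
        return key   # only released to the measured, approved program
```

With a key stored on disk, anyone who can read the file has the key; with a sealed key, only the measured Apache binary ever sees the plaintext, and a copy of the sealed blob on a backup tape is useless.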

So what, you might say; Linux is used for more than just Web servers. And here is the bigger point: in the future every major server application is going to involve cryptography and thus the need for key management. Email is going to be signed with DKIM, synchronous protocols such as SIP will be secured, and at some point even DNSSEC will get deployed. And this is before we even get started on Web services.

As the use of cryptography becomes ubiquitous, the number of private keys begins to proliferate. And as cryptography-based security controls are used to defeat professional Internet crime, a whole new set of incentives for stealing private key files is going to be created.

Securing private keys is just one example of a situation where trustworthy computing is invaluable. Another example is the famous question posed by Ken Thompson in his Turing Award speech: how do we know whether the system has been compromised or not? The problem of rootkits is not unique to Windows; it began on Unix. If a system has a rootkit installed it is impossible to tell that it has been compromised. Without a trustworthy computing base to work from, even programs like Tripwire can be fooled: Tripwire can only verify the file system as it is visible to Tripwire, and if a rootkit has been installed that view may have been corrupted.

Yet another example is document-level security. Traditional operating system security schemes control access to the file storage system rather than to the documents stored in it. This allows Alice to set the permissions on her home directory to stop Bob reading her confidential files. But once Alice sends a copy of a file to Carol she is depending on Carol to protect it correctly. The computer systems used by the military to protect access to classified information take a more robust approach: the security classification (label) is applied to the document itself. Encryption is used to ensure that the classified document can only be read on approved computers running approved applications that enforce the access restrictions placed on that document. When Alice sends the document to Carol, the access restrictions follow the document.
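The rule that travels with the document is simple to state. A sketch using an invented label lattice (real systems also handle compartments, not just a strict ordering of levels):

```python
# Illustrative, strictly ordered label lattice.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_open(document_label, reader_clearance, app_is_approved):
    """The document is decrypted only on an approved application whose
    user holds a clearance at least as high as the document's label."""
    return (app_is_approved
            and LEVELS[reader_clearance] >= LEVELS[document_label])

print(may_open("SECRET", "TOP SECRET", app_is_approved=True))    # True
print(may_open("SECRET", "CONFIDENTIAL", app_is_approved=True))  # False
```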

This type of system will become a regulatory requirement in an increasing number of applications and jurisdictions. HIPAA already requires healthcare providers to control access to confidential patient data. Sarbanes-Oxley requires companies to attest that the information systems used to prepare their accounts were trustworthy.

This type of system is also identical to a DRM system.

Security is a journey, not a destination. Microsoft is spending several billion dollars a year on security. Over the past five years they have hired many of the most prominent names in the field. That is not in itself a guarantee of success: the Yankees spent a squillion dollars on their team last year and had little to show for it. But their opponents would have done much worse if they had decided to leave their heaviest hitters on the bench because of a theological dispute.

And a theological dispute is unfortunately what the GPLv3 issue is likely to become. Even though the target is DRM the scope of the clause arguably covers every form of trustworthy computing and quite possibly more.

It does not help matters here that the discussion seems to be largely between RMS and Linus. Some of RMS's early salvos look to me as if they might be aimed more at trying to peel Linux developers away from the Linus camp and rally them to what RMS believes is the true, pure cause. Fine tactics for a political activist, but a lousy way to help promote the cause of developing an open source operating system with state-of-the-art security.

And these are the real stakes: if we want to secure the Internet the race must remain competitive. If Linux is forced to keep its heavy hitters on the bench there will be less incentive for the opposition to field their A team, and the supporters of both sides will lose.

Fixing the patent system part 3

Part One

The US Patent system is a mess, it is time to fix it.

Free software advocates frequently argue that software patents should be excluded completely. I don't think that is a sustainable line of argument: if software is to be excluded, then why not other types of invention? It's a slippery slope that quickly leads to the argument that the patent system should be abandoned altogether. This is not necessarily an invalid argument, the US patent system today is certainly doing more harm than good, but an argument for abolition is certain to fail.

It is hard to think of an example of a software innovation that is the result of the patent system. Even the RSA encryption algorithm, a rare example of a justifiable software patent and one of the most profitable software patents ever, was patented as an afterthought. This is not the case with biotechnology patents, where billions of research dollars are spent each year to discover patentable drugs. Take away the patents and you take away the research.

Patents are a valid incentive for innovative research. The European patent system works without causing the major problems that the US patent system does.

"The USPTO acts as if it were a modern day land office"

From the beginning the USPTO has seemed to have a rather different idea about the purpose of patents. European patent law is very clear: the purpose of patents law is to benefit society by encouraging innovation. The USPTO pays lip service to this objective but its actions suggest that its real goal is to create rights to intellectual property.


The difference is a very important one. Issuing junk patents does nothing to further innovation but it does ensure that no part of the technology frontier will be wanting for a private owner.


The USPTO acts as if it were a modern day land office: it is in the business of creating private rights to intellectual property. And like the Victorian land office it is not particularly interested in preserving property rights in the public commons.


Other patent offices do not cause the same amount of problems because they apply three very simple rules to patent applications. First to be awarded a patent you have to actually invent something. Second you have to invent something significant. Third your patent only covers the thing you actually invented.

According to the USPTO it observes these basic rules as well. The challenge is to make it apply them in practice.

Take the idea that you have to actually invent something. Every other patent office follows the rule that you have to file the invention before its first publication. This is an important safeguard against fraud. Inexplicably, the USPTO allows an inventor up to a year to file their patent, which in effect allows the applicant to backdate their purported invention date by a year.

This provides an inventor willing to perjure themselves (and there are many) with a near-effortless means of getting a valuable patent. Simply monitor the mailing list of one or more Internet standards working groups and, whenever you see an interesting idea described that might make it into the standard, submit a patent claim. After the patent is issued you can even sue the real inventor for royalties.

Sound far-fetched? Not at all; I have seen it done more than once. Even if you know this is the game being played it will cost you over a million dollars to prove it.

Manifesto Point: End the backdating of patent claims. Any prior art published at any time before the description of that particular claim is filed invalidates the claim.

Another point of departure is the idea that you have to actually invent something significant. The USPTO claims that this is a requirement; in practice, however, it regularly approves 'inventions' which are nothing of the sort.

Most of the Internet 'business model' patents currently causing so much difficulty consist of nothing more than a description of a business model that has been in place for several decades at least combined with 'do it on the Web'. According to the USPTO this constitutes a non-obvious novel invention.

To add insult to injury the 'doctrine of equivalents' means that the scope of an existing patent is effectively extended to include this type of trivial substitution. It is not in general possible to circumvent a patent that covers a way to broadcast music on digital radio and digital TV by using the idea on the Internet. So why should an idea that would be unpatentable on digital radio or digital TV because it is not novel in any way become patentable by taking it to the Internet?

Manifesto Point: Mere combination of obvious ideas should not be patentable. If the combination of ideas would be covered by the doctrine of equivalents it should not be patentable.

The third major problem created by the USPTO is the granting of broad patents on the basis of almost no intellectual effort whatsoever.

Although this is a big problem in the software field it is likely to become an even larger problem in the field of medicine. If steps are not taken soon the US may find that a significant proportion of medical research is taken offshore simply to avoid overly broad patents that should never have been granted.

The main area where this problem is seen is in the granting of 'DNA patents'. According to patent law a patent on a human DNA sequence should be a contradiction in terms. Human DNA sequences are physical observations and physical observations are explicitly excluded from patentability.

The loophole that the DNA patent barons have found is to file patents that lay claim to every imaginable application of a DNA sequence the minute it is identified. It does not matter that the 'inventor' may not have the slightest clue what the sequence actually does. If they enumerate every possible use then they are almost certain to list the valid ones.

The result is vast quantities of patents that disclose nothing of value whatsoever. The DNA sequences described would have been found without the 'assistance' of the money-grasping 'inventor'. The list of 'applications' is utterly useless because it is utterly indiscriminate.

A similar problem is seen in software patents. It is not unusual to find a patent that consists of hundreds of pages, claim after claim. The basic strategy here is to take one or two simple ideas and enumerate every imaginable variation of them. Even though one or two claims may be invalidated by prior art the patent holder effectively gets title to everything else.

Manifesto Point: The scope of a patent should only cover what was actually invented

Wednesday, February 08, 2006

Big scary numbers

Bruce blogs on check washing. The fraud itself is reasonably well known: take a check, alter the payee, increase the amount, repeat. What caught my eye though was the figure of $815 million a year lost to check washing fraud, and Bruce's request for a footnote.

After a bit of digging I found out that the page is a project of the National Consumers League, a hundred-year-old organization originally an offshoot of the union movement. Look for the union label, that's them.

So I have no doubt that the information is being offered in good faith. Bruce points out that the Web site looks rather amateurish, to me it looks like it hasn't been updated since 1999. You could get away with that type of stuff then, before Google demonstrated that black type on a white background has much to be said for it. The rest of the NCL site has been updated to a modern professional look.

But back to that $815 million figure. While it does not sound completely unrealistic as a figure for check fraud in general, it is the type of big scary number that gets endlessly repeated from one presentation to another, often turning out to have originated as either a guess or a tenuous extrapolation from some sort of official estimate.

Recently a speaker at a conference in Dubai claimed that cybercrime is now more profitable than narcotics. This was picked up by some of the trade press but fortunately didn't make it as a mainstream media meme. Once the figures came out it was clear that the claim was total nonsense. If you use an amazingly broad definition of cybercrime, including all types of credit card and bank fraud, not just the ones that touch the net, and including pedophilia, a very big number can be put on the 'damage caused by cybercrime'. This number is arguably less than the profits of the major narcotics dealers.

But this is not comparing like with like.

The profits from the drugs trade are far less than the amount of damage caused, and the same is true of every other type of crime, including cybercrime. Say that a carding gang buys a stolen credit card number for $1 and runs up $800 of charges on the card buying a fancy camera. The camera is then received by a package reshipper who sends it to Romania via an international shipper, paying the $200 charge with another stolen card. On arrival the camera is sold to a fence for $400. The damage caused is $1,000, but the revenue to the carding gang is only $400, of which maybe $350 is actual profit.

So now imagine that we are compiling figures for the 'size' of cybercrime: do we work off the $1,000 that was lost or the $350 realized as profit? Another common problem is double and triple counting. If we simply add the losses resulting from phishing and the losses caused by carding we end up counting the same money twice.
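Working through the made-up numbers from the example above shows how different the answers are:

```python
# Illustrative numbers from the carding example above; none are real data.
card_cost     = 1      # price the gang pays for the stolen card number
camera_charge = 800    # fraudulent charge on the victim's card
shipping      = 200    # paid with a second stolen card
fence_price   = 400    # what the fence pays for the camera
other_costs   = 49     # mules, reshipper's cut, etc. (again, invented)

damage  = camera_charge + shipping            # 1000: what the victims lose
revenue = fence_price                         # 400: what the gang takes in
profit  = revenue - card_cost - other_costs   # 350: what the gang keeps

print(damage, revenue, profit)
```

Quote the $1,000, the $400 or the $350 and you have three very different 'sizes' for the same crime; add a phishing figure and a carding figure that both include the same $800 and you have counted it twice.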

And yes, those figures were made up for purposes of illustration, not real figures. But often when I try to look into a crime statistics claim I find that what is presented as real figures in one presentation is an estimate in another, based on illustrative numbers in a third.

A factor of two is not a big deal when trying to estimate the size of a crime; all statistics that try to put a dollar value on criminal activity are pretty 'squishy'. But when numbers are shuffled back and forth between PowerPoint presentations that take a number from here and a number from there and add them together, doubling the size of the problem at each stage is a real problem.

The point here is not just that sloppy statistics and sloppy research are a problem. The point is that ultimately most of these figures are either unknowable or should be quoted with colossal error bars. I have very good first-hand evidence that tells me that direct losses from phishing in the USA are more than $50 million. I also have pretty good but circumstantial evidence showing that the direct losses are less than $1 billion. Between those figures I could make a guess, but that is all it would be.

The blogosphere abhors an information vacuum. If there is a demand for a precise statistic then the blogosphere will provide it. And after it has been repeated often enough it will be treated as fact regardless of what the original basis of the figures was.

Monday, February 06, 2006

The problem with walls

This article on OSN makes a point about Windows vs. Unix security that is important for Java and sandbox security models in general: walls are good but they only work so well; if your users have to let the wolf inside the wall to do their work, they get eaten.

The big challenge in developing an operating system for home use is how to know what to protect. Traditional O/S distinguish between mere users and administrators. But in the home those two people are one and the same. It may make perfect sense to the computer geek but there is no way to explain to the ordinary person that some of the time they log in as one person and some of the time they log in as another.

Windows XP and later do a great job of introducing ordinary people to the concept of 'accounts' and in particular the idea that you don't run as root all the time. Then PC software undoes all that work: most of my 4-year-old son's computer games insist on being installed as root, and some insist on running as root.

Protecting the O/S and core applications from unauthorized modification is good, but as the OSN article points out, all the user's crown jewels, the data they actually care about, sit outside the protection barrier. If Office gets corrupted by a virus they can re-install. But not that Word file they spent a month working on, or the pictures of Johnny aged 3 months.

Try to introduce controls inside the barrier and we have a major problem: we are outside the scope of traditional security systems, and Unix does not help any more than anything else.

Friday, February 03, 2006

Linus on GPL v3

It would be good if more people took the trouble to understand what DRM and 'trusted computing' can and cannot do.

First, the term 'trusted computing' is a misnomer: almost all computers in use today are 'trusted'. The question is whether they will be trustworthy.

The point of trustworthy computing is to be able to be sure that a computer is running the software that we think it is. This is no small matter when computers are as complex as they are.

Today security arguments invariably tail off into an infinite regress of 'well, what if someone had modified the browser code', 'what if someone had modified the operating system' and so on. It's an induction without a base case. Trustworthy computing provides the base case, that is all.

Contrary to speculation it will still be possible to buy a computer and run whatever operating system you like on it. One way to build a trustworthy computer system would be to build trust into the bootstrap system, so that the O/S will only boot if the O/S image has a valid signature. This would allow a hardware vendor to lock out non-approved operating systems by refusing to sign them. That is the obvious way but it is not how Microsoft's Palladium works.
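In outline a signed-boot check is just signature verification before handing over control. A toy sketch: a real loader would verify an RSA or ECC signature against the vendor's public key, but an HMAC stands in here so the example runs with nothing beyond the standard library:

```python
import hashlib, hmac

# Toy stand-in for the vendor's signing key; purely illustrative.
VENDOR_KEY = b"vendor-signing-key"

def sign_image(os_image: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, os_image, hashlib.sha256).digest()

def boot(os_image: bytes, signature: bytes):
    expected = hmac.new(VENDOR_KEY, os_image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise SystemExit("refusing to boot: image not signed by vendor")
    print("signature ok, transferring control to the O/S image")

image = b"\x7fELF...pretend kernel..."
boot(image, sign_image(image))
```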

A trusted boot scheme would be nice in principle but implementing it in practice would be very hard and the scheme would be compromised by the first unsigned device driver loaded onto the system. There has to be at least one system that can run unsigned device drivers or there would be nothing to develop new device drivers on.

Instead the Palladium nexus is a piece of code that runs in parallel with the O/S. The only thing that is special about the nexus is that a particular version of the nexus (and only that version) has access to a small amount of encrypted data stored in a cryptochip hanging off the low pin count bus. Most of the code in the nexus appears to be there to manage the scheme for upgrading from one version of the nexus to another.

I have seen several presentations on Palladium (aka next generation secure computing base). Each time we have been told that the code for the nexus will be available for open public examination. This means that people will be able to see that there are no trapdoors and will also allow equivalent technologies to be developed for Linux.

The other big fear about trustworthy computing is that it will be used as the ultimate copyright enforcement mechanism. I don't think this is as much of a problem as people think.

In principle strong DRM systems could be used to stop people copying the latest Disney movie and thus allow Disney to effectively enforce its copyright long after it has expired. In practice this is no easier on a computer than it is on a DVD player: copyright enforcement is break once, run anywhere. Trusted hardware is not uncrackable hardware. As soon as production samples are available there will be people opening up the chips and reading out the keys using electron microscopes and such. Hardware that is resistant to that type of tampering costs far more than people expect to pay for their PC.

If you want to keep a secret you have to restrict circulation of the information to a small circle. If a hundred people have the ability to decrypt a spreadsheet it is quite practical to prevent distribution beyond that point. If a million have the ability to watch a film a break in the dam somewhere is inevitable.

Thursday, February 02, 2006

Solving the patent mess (part 2)

Read Part One

Slashdot links to an article in IEEE Spectrum today that shows the wrong way to solve the patent mess. The idea is to create a new class of patent, valid for only four years, that is not required to show that the invention is non-obvious before the patent is issued.

I don't think the idea would be attractive to anyone but inventors peddling completely fringe ideas. The new patents would require completely new law to be established. They would provide no international protection and they would conflict with efforts to establish an international system.

The first step to solving the US patent mess is to recognize that it is almost uniquely a US problem. There are certainly overly broad and entirely bogus patents that get approved in other countries, but nowhere near as frequently.

The way to solve the US patent mess is to reform the US patent office.

The first problem is that the USPTO is under-resourced. A patent examiner has an average of 20 hours in total to review each application. That 20 hours is not just for the initial examination; it covers all the follow-up, drafting questions and drafting replies. Reviews in certain areas (including information technology and biotechnology) get more time, up to 40 hours, but this is still completely inadequate for the government to make a considered decision before awarding a private monopoly.

Patent examiners are rated on the speed with which they complete reviews. Diligence is rewarded; there are few penalties for letting a bogus patent through. The typical examiner will only serve for a year or two in any case. After that they can get a much more lucrative job in a private law firm as a patent attorney.

The USPTO lacks the resources it needs to do its job well, but it is not quite accurate to say it is underfunded. The USPTO generates much more revenue through filing fees than it spends. The problem is that it isn't allowed to keep it. For the past decade Congress has considered the USPTO to be a 'profit center'.

Before anything else the USPTO needs to be allowed to keep the resources to do its job. In return it has to do a better job than it does today. In particular it must act to protect the public interest and not just the interests of what it refers to as its 'customers'.

Part three

Wednesday, February 01, 2006

Globe and Worcester T&G customer credit info mistakenly released - The Boston Globe

The Boston Globe just released up to 227,000 credit card numbers of account holders.

It's the same old story familiar from the Internet, but this time it was the back office systems that were involved. For some reason they had printed out the details of large numbers of subscribers, including their credit card info. Then, instead of being shredded, the sheets were reused to wrap copies of the paper.

This is likely to be a very expensive mistake indeed. It costs the banks something like $50 to reissue a credit card and those costs are passed back to the merchant if they are found to be at fault. If all 227,000 cards are compromised that comes to over $11 million.

What this really shows is that businesses need to think about security in all their business processes, not just when they are on the Internet. Most e-tailers know that they have to protect card numbers after CDNow got slapped with a $1 million charge. It looks like this message has not got through to non-Internet merchants.

Traded on the black market the credit card numbers would be worth $1 to $5 each. The chance that the numbers involved in the reported incident will be used criminally is quite small; far more card numbers are stolen than can ever be used, and these particular numbers are going to be facing much tighter scrutiny. But what about the safety of the numbers before this happened? What does it say about the Boston Globe that this information was so easy to get to? A newsagent who finds a couple of hundred credit card account details wrapped around their stack of newspapers would have to be quite desperate to be tempted by a potential reward of a couple of hundred bucks. But a clerk working in the back office is a different story.