Thursday, March 30, 2006

Fingerprinting schoolchildren

Reminding us that unquestioning acceptance of technology by authority should be a major concern, The Guardian reports on schools fingerprinting children

The problem with these schemes is that they tend to be promoted by glib salesmen, whose only interest is making quota that month, to technologically naive administrators who want the latest shiny toys. It only takes a few people discovering unforeseen privacy holes to create a serious backlash.

I do not know whether Diebold's election systems are secure against tampering or not. But I am confident that at this point it does not matter a jot. It is not a question of if the whole lot ends up in the landfill but when. Public confidence has been lost and it is unlikely to be recovered.

BC symposium on Standards Law

Mark Lemley makes some interesting points about patents

In particular he is spot on when it comes to patent injunctions. There is absolutely no need for the courts to issue an injunction against RIM prohibiting the infringing conduct as there is no irreparable harm. When a patent troll with no product goes after a manufacturer it is purely a matter of money.

Wednesday, March 29, 2006

Shuttertalk:: Putting Together a Budget DIY Lighting System

Now this is a pretty slick idea: instead of paying $1000+ for a low-end lighting rig, use standard halogen lights with colour-correcting filters. Halogen bulbs with daylight filters are commonly sold in decorating stores because painters need them.

The big drawback to this type of system is likely to be heat output.

2.0 2.0

Paul Boutin expresses skepticism about Web 2.0

Apparently the big idea O'Reilly has is collaboration. Hmm, one might think that with that objective you would want to have one or two Web 1.0 luminaries at your conferences, if only to have something to take pot shots at.

The really bizarre thing here is that Web 2.0 appears to be what Web 1.0 was about before it was hijacked by folk whose pretty transparent motivation was to find an alternative platform for Interactive Television, i.e. television where the 'interaction' consisted of the ability to buy stuff.

Web Interactive Talk did everything that blogs are doing today back in 1994. What has happened since is not so much innovation as selection. Back in 1995 there were hundreds of conventions being used to organize information and dialogue on Web sites. People can't use a hundred different organization methods; choosing one is better than presenting people with a hundred.

The real point then is not innovation, it's selection. Back in 1995 people were focusing on the technology, and in doing so they were missing the whole point. The point of the Web is not technology, it's people. And people don't work on 'Web time'. The blogs of today are actually somewhat less sophisticated than WIT, which not only had threading, it had lightweight semantic links that made it possible to track discussions more effectively.

I don't think it's yet time for systems like WIT; people can only take so much at one time. When we watch a movie today we are interpreting it through our understanding of layers of convention that have evolved over decades. The staccato 'European style' of editing used in Die Another Day would have been considered incomprehensibly avant garde when the first Bond movie was made. The standard in the day of Goldfinger required every scene to have an establishing sequence: if a character flew to Rio there had to be a shot of the plane taking off and landing. Today we skip the plane entirely and cut straight from Q's office to Carnival.

Update: John Dvorak on Web 2.0

Tuesday, March 28, 2006

Fifth law?

I found that Krug has already tried to lay claim to 'laws of usability'. I don't think his really work as general design rules though; they are very much tied to the design of Web sites.

One point though: the exposition should run from cause to effect. For example: the user will be frustrated if they cannot find the information they need for their task. Same essential point, but an 'is' rather than an 'ought'.

While thinking about the point I thought of a fifth law: when a task that the user considers atomic requires multiple interactions with independent systems, inconsistency will result.

The reason I got to thinking about this was the current problem of administering a service such as email, which requires local service configuration, DNS configuration and quite possibly firewall configuration as well. From the network architecture point of view these are 'obviously' independent systems. From the network administrator's point of view they are all parts of one single task.

There is a heavy bias in network administration towards an imperative mode of configuration rather than the declarative. Instead of announcing to the infrastructure 'this machine is to be a mail server' we have to know what being a mail server is and configure the necessary support services and connectivity piecemeal.
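
To make the contrast concrete, here is a toy sketch in Python of what a declarative 'this machine is to be a mail server' announcement might expand into. The role table is entirely hypothetical, my own invention for illustration:

# Hypothetical sketch: expand one declarative role statement into the
# imperative steps an administrator currently performs piecemeal.
ROLE_RECIPES = {
    "mail-server": [
        ("dns",      "publish an MX record pointing at this host"),
        ("firewall", "open TCP port 25 inbound"),
        ("service",  "install and configure the SMTP daemon"),
    ],
}

def provision(host: str, role: str) -> None:
    # The administrator declares the role; the infrastructure knows
    # what being a mail server entails and derives the steps.
    for subsystem, step in ROLE_RECIPES[role]:
        print(f"{host}: [{subsystem}] {step}")

provision("mail.example.com", "mail-server")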

Politics of fishing

Asda has just taken cod off its shelves.

A sane fisheries policy would have imposed a total ban on cod fishing in the North Sea years ago. The stocks have collapsed and the species is on the verge of extinction. Instead of taking drastic action to save the fishing industry, the ministers take the coward's way out and give in to the fishermen's demands to keep quota reductions to a minimum. This year the quota is set at half the remaining stock. Cod do not spawn until they have reached seven years of age.

It does not take an expert in fisheries to realize that the inevitable result of this policy is going to be the complete destruction of the fishing industry.

Monday, March 27, 2006

Four laws of usability

1) The system must present a clear and consistent use model that reflects the tasks for which users are expected to apply it.

2) The response of the system to any given action should be predictable at all times.

3) The user must be presented with all the information they require to perform their task.

4) The user must not be presented with extraneous information.

MicroID - Small Decentralized Verifiable Identity

I am not sure what to make of MicroID. The problem is that the paper does not really seem to explain how the technology meets the intended use cases:

MicroID = sha1_hex( sha1_hex( "mailto:user@email.com" ) + sha1_hex( "http://website.com" ) );
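
For the curious, the formula renders into Python along these lines. This is my transcription of the published formula, not code from the MicroID paper:

# Minimal Python rendering of the MicroID formula quoted above.
import hashlib

def sha1_hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

def microid(identity_uri: str, site_uri: str) -> str:
    # Hash each URI separately, concatenate the hex digests, hash again.
    return sha1_hex(sha1_hex(identity_uri) + sha1_hex(site_uri))

print(microid("mailto:user@email.com", "http://website.com"))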

This provides me with a way to prove that I developed the content, but only if a plagiarist did not strip out the MicroID.

It does not provide me with a way to repudiate a third party claim that I created content I did not create. Anyone can create a MicroID for anyone.

It does not provide me with a way to find the copyright owner.

Washington Post Blogger on Domenech

Joel Achenbach comments on the Domenech affair.

The piece raises a much more fundamental question about the blogosphere than the question of whether one partisan hack should have been employed by the Washington Post. Is the blogosphere compatible with newspaper-style journalism or is it simply a new venue for partisan talk radio?

Hiring Domenech was a disaster in the making for the Post. If he had continued he would have quickly reverted to form. Whatever separation they may imagine exists between the online and print editions, there is no distinction in the minds of the online readers. For them Froomkin is the Post, no ifs or buts. If Froomkin were to leave most of his readers would move with him.

The Post brand is associated with journalism. Talk radio is entertainment. Mixing the two is a bad idea whichever side of the fence the partisan hackery comes from. The rapid-fire response encouraged by the blogosphere does not mix well with journalism. I suspect that a major reason for the frequent inanities voiced on talk radio is the sheer pressure of filling up a three hour show every day. Three hours of broadcasting does not exactly leave much time for research.

Fortunately there is a big difference between the blogosphere and talk radio. Talk radio is not a genuinely interactive medium. The host always has the power of the microphone. The blogosphere is much more democratic. Even if a particular blog does not allow comments (standard for partisan punditry blogs) there are plenty that do. Every statement that is made is going to be examined.

It is OK to make mistakes in the blogosphere, but leaving the record uncorrected is a different matter, as is attempting to erase the record. Campaigning on the fictitious charges or sloppy research that have made Ann Coulter and Michael Moore rich does not cut it in the blogosphere for very long. As the Post itself discovered, you can turn off comments on the Ombudsman's blog but that does not stifle criticism.

Friday, March 24, 2006

"Plagiarism is perhaps the most serious offense that a writer can commit or be accused of"

Rubbish!

Lying to the readers is much more serious. Taking bribes to peddle a particular point of view even more so.

The biggest problem with the mainstream media today is not plagiarism. It is journalists who trade access for favorable comment.

Friday, March 17, 2006

Rep. Brian Higgins, D-N.Y. is an utter fool

Of course Gerry Adams should be on a terror watch list. He leads a terrorist organization that has murdered more people than Baader-Meinhof, Action Directe and Timothy McVeigh combined.

How WiFi Sucks (and how to fix it)

I had to stop blogging the workshop after the WiFi broke down. That got me thinking about the fact that five years after people started pointing out how its security is broken they still don't get it.

Don't get me wrong, I am sure that the WPA cryptography is as good as anyone can make it. The problem is that fixing the cryptography is not the same thing as fixing the security.

The biggest problem with WiFi security is that it is simply too hard to configure.

Windows XP makes it especially hard to configure WEP. First you have to enter a 26-character hex key into a password form, then you have to re-enter it to confirm. Did the designer of this interface ever ask themselves why it is necessary to enter an authentication key twice? The machine should be able to discover whether the key is correct all by itself without bothering the user.
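
To illustrate the point: the machine can check the format of the key locally and then learn whether it is correct the moment it tries to associate. A toy sketch of the local check, my illustration rather than anything in the XP code path:

# A 104-bit WEP key is 26 hex digits: malformed entries can be rejected
# locally, and a mistyped-but-well-formed key shows up when association
# fails. Neither case needs a confirmation field.
import re
from typing import Optional

def parse_wep_key(entry: str) -> Optional[bytes]:
    if re.fullmatch(r"[0-9A-Fa-f]{26}", entry):
        return bytes.fromhex(entry)
    return None

print(parse_wep_key("00112233445566778899AABBCC") is not None)  # True
print(parse_wep_key("00112233445566778899AABBC") is not None)   # False: 25 digits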

That is the first lesson: Security is not an excuse for a crappy user interface.

Before Apple users get too smug about their O/S I will point out that this is only one egregious symptom. Mac users still have to deal with the same silly key.

In theory WPA should fix this problem. In practice it might as well not exist if you already have a WEP network deployed. All the access points I have used that support WPA support it as an alternative to WEP. These boxes will not run both systems side by side. I now have 6 machines connected to my home wireless network. Switching them over would take half a day or so, and if any one of them did not work at the end I would have to either leave that machine off the network or spend another half day getting back to where I started.

This is the reason that I had to enter the 26-character key at the workshop. Most people have machines that can handle WPA but some people are still stuck with WEP. The access points cannot do both at the same time so everyone has to go at the speed of the slowest.

That's the second lesson: A security improvement must provide an upgrade plan. Unless there is a way to upgrade a deployed network incrementally the system designers should be sent back to try again.

At most conferences the security is turned off altogether. The same is true of public access at hotels, coffee bars etc. The reason is that they want to control access to the network. Sometimes it is a paid service and they need to collect a credit card number. Almost all public access points want agreement to their terms of service (e.g. don't send spam).

In order to allow the initial contact to take place en clair the entire remaining session has to be en clair. Worse still, the customer has to agree to the silly terms of service on every visit. That is not good if they are paying for the service.

That's the third lesson: If you don't provide a good authentication interface that meets their needs, people will bodge together a bad one.

So how to fix it? First get some usability people into the loop. Don't consider the system secure until they are willing to sign off on it as having acceptable usability with security turned on by default.

Then provide a channel that lets the machine trying to connect to a network discover the types of authentication on offer. Next design access points that support multiple security schemes rather than making them exclusive. Finally define an extension to WPA that supports the type of rich authentication/agreement-to-terms dialogue that existing practice shows is in demand.

Thursday, March 16, 2006

Small Pieces Loosely Joined: A Unified Theory of the Web

If you are wondering about the lack of blogging for the second day of the workshop the answer is that the network has been flaky so live blogging was not happening.

Mike Beltzner from Mozilla just spoke and mentioned small pieces loosely joined. So I thought I would provide an Amazon link.

Wednesday, March 15, 2006

Usability Workshop Part 4

Part 1 Part 2 Part 3

At break we discussed SiteKey. The idea of presenting the user with a secret such as a user-selected picture in order to authenticate the log in process is useful. To be secure though it has to be in the chrome. Otherwise I can get the SiteKey through a phishing attack.

Another point brought up at break that I have to work into my talk is that we have to change the nature of the game so we are playing chess, not whack-a-mole. And I want to be able to extend any solution to address more than just the one type of crime.

Tyler Close demonstrated his Pet Name Tool. Pretty good idea: give the user the tools they need to customize their security context.

Amir is going to present a tool that essentially implements Secure Letterhead without the CA side support. I am going to focus on the CA side of the proposal.

He just mentioned the infamous browser list. Actually things are worse than he suggests: Microsoft can push out additional roots at any time and there is no mechanism to tell the user it has happened, let alone to agree or disagree.

The VeriSign authentication process is not merely passive validation of documents; there is an active part to the process. There has been one well-publicized failure, which was detected internally and brought to the attention of the impersonated party and the public by VeriSign.

[Reminder: make the I don't speak for my employer notice more prominent on this site]

Some discussion of 'high assurance CA' certs. People seem to be getting these mixed up with Server Gated Crypto premium certs. Must mention Tim's work at the CABForum.

The Ruhr Univ talk: lots of good points here, mostly low level incremental stuff, but that's what we mostly need: good solid incremental stuff.

If we could provide security for people who upgrade to a new browser version and do not have trojaned machines, that would be a very big step forward.

Usability Workshop Part 3

Part 1 Part 2

Now we are on to the issue of metadata. Most of this is better understood from the papers. Kenneth Wright is currently demonstrating a multi-level authentication scheme.

Now we are having the usual reductionist security analysis, "this will not work because". I think it misses the point entirely; we have to unlearn how to do that. It is the security of the whole system that matters. There is very little that can be done to eliminate man-in-the-middle attacks without secure chrome. But they can be very effectively controlled by an MSS provider. If each attack costs the attacker one bot that is pretty good from my point of view.

MEZ has just elaborated on the 'friends and family' idea. She thinks there is something there: address book filtering, Flickr, Kazaa (I would add PGP!), something like that.

One theme that keeps re-appearing though is a series of small measures that are relatively simple individually but in combination could have a big effect:

Chrome Protection: Don't let JavaScript or ActiveX content stomp on the trust indicators; don't allow frameless pop-up windows.

Trustworthy key storage: The O/S stores the private key in a secure compartment.

X.509 Logotypes: Hurray! It's not just me; most speakers seem to be thinking about them, several seem to assume they are just going to be 'turned on'.

Some others I would add:

Reverse Firewalls: Reduce the value of a bot to the minimum possible. The less bandwidth a bot has, the less value it has to the botmaster (see the sketch after this list).

Block unsolicited executable content: Get email providers to block unsolicited executables by default. Cut off the distribution of viruses at the source.
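
To make the reverse firewall idea concrete, here is a toy token-bucket throttle on outbound traffic. The rates are illustrative assumptions of mine, not recommendations:

# Toy reverse firewall: throttle *outbound* traffic so a compromised
# machine is worth less to a botmaster.
import time

class OutboundThrottle:
    # Tokens accumulate at 'rate' bytes/second up to a burst cap.
    def __init__(self, rate: float, burst: float):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # drop or delay: the bot's upstream value just fell

throttle = OutboundThrottle(rate=16_000, burst=64_000)
print(throttle.allow(1_500))      # an ordinary packet passes
print(throttle.allow(1_000_000))  # a DDoS-sized burst does not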

Usability Workshop Part 2

Part 1

Back from break. We are in the Citigroup HQ on Long Island. Why did they build there? Well, they have this amazing view of Manhattan. Reminds me of the story of Eaton Hall, the 150-room Gothic pile built by one of the Dukes of Westminster. A visitor suggested he build a small cottage next to it and go live in that so he could look at his magnificent palace.

Just had the Google presentation, essentially why all the existing mechanisms are flawed, particularly when based on passwords due to the low entropy of passwords that people actually choose. The graph they show on the simplicity of passwords is scary. An even scarier graph would show how many people use the same simple password for more than one account.

I disagree with some of this analysis: static passwords are certainly vulnerable to a man-in-the-middle dictionary attack, but a dynamic password is not. If the dynamic password is bound to the domain using it, the dictionary attack is no use to another party.
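
One way to do the binding, sketched in Python. This is my illustration of the principle, not a scheme anyone presented: derive the value actually sent from a master secret and the domain name, so a credential captured at phisher.example is useless at bank.example.

# The value sent over the wire is a function of the domain, so a
# dictionary built against one domain tells the attacker nothing
# about any other.
import hashlib, hmac

def domain_bound_password(master_secret: str, domain: str) -> str:
    mac = hmac.new(master_secret.encode(), domain.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

print(domain_bound_password("correct horse battery", "bank.example"))
print(domain_bound_password("correct horse battery", "phisher.example"))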

Static passwords are definitely bad juju. There may be no good way of fixing them that is compatible with the legacy infrastructure. If we get into the business of fixing the infrastructure why not just do the job properly with a technology such as InfoCard?

Now it's Yahoo! Mostly concentrating on the type of thing that we would like the infrastructure to support.

World Savings Bank is now giving the financial services provider point of view. The institutions are rushing to deploy two-factor solutions but there are no standards in place.

A point I would like to raise here is the division of responsibilities. It is our responsibility as a standards org to define a secure protocol that enables the browser provider to deliver a secure, trustworthy user experience. It is the joint responsibility of the platform providers and the browser providers to fend off chrome attacks such as bogus status icons, address bars etc.

Another point that seems to be lost is that we do not need the same level of security to protect a blog as to view a bank account, or to view an account as to transfer money out of it, add a new payee and so on.

At the W3C Workshop on Security Usability

So far we have had introductions and the requirements session.

I already know 80% or so of the people in the room from previous standards meetings. What is interesting though is that there are people here from all the major consortiums. There are quite a few academics and W3C people as you would expect but we also have quite a large IETF contingent, many AntiPhishing WG people, people who worked on SAML in OASIS, people from the FSTC and so on.

The main message so far is that we have to 'get out of this rut' as the speaker just put it. We have to stop avoiding these problems as being too hard.

The detailed comments mostly reflect the consensus that the Web user interface is a disaster where security is concerned. Features were added to the Web with negligible thought for the security implications. The security interface itself is a poorly designed afterthought.

As expected from the position papers two tracks are emerging, roughly speaking inbound and outbound authentication. The phishing phenomenon attacks both sides of the equation. The phishing email is a social engineering attack against the outbound authentication scheme (bank to user). The objective is to hijack the reverse authentication (user to bank) by stealing the password.

Some points I would like to raise here are: 1) This is not only about phishing; we should look at phishing as an example of an attack that is uncovering security vulnerabilities. 2) HTTP Digest auth did get quite a bit right, but the design was completely undercut by the downgrade attack to BASIC and form submission. 3) It's not just about fraud, it's about not making the user miserable when you don't have to. To access the WiFi I just had to type a stupid 26-character string into a blind password form twice.

Cleanup: We are now getting to the question of what W3C can do. One set of tasks that seems to be quite clearly needed is some form of cleanup so that, for example, banks start encrypting their front page rather than, as is common today, having a non-SSL front page with a form submission on it.

Automatic Form submission: All the web browsers offer to remember form information. As a result there is a ridiculous amount of risky information embedded in the browser.

Multi-Factor Authentication: Quite a bit of discussion on this. Most of the comments are unsurprising. Just had a comment from Tyler that really points to the need for tiered access. I should be able to log into my brokerage account pretty easily to do research. This does not have the same degree of risk as trading or liquidating the account. I seem to remember there was a proposal in the FSTC paper that had three-level access: A, B, C.

Tuesday, March 14, 2006

Theft by any other name

Wired has an article on the Pirate Bay, a BitTorrent index site located in Sweden.

Predictably this service is popular with the pirates and not popular with the people whose property is being stolen. Slashdot is rapidly filling up with supportive comments; it's easy to be popular when you are giving other people's valuable property away.

Most of these people would never steal a CD or DVD from a store. They don't see themselves as thieves even though taking the content off BitTorrent is exactly the same thing.

The RIAA was pretty flat-footed in their approach to online piracy. They also made complete fools of themselves by buying legislation from Congress that allowed them to effectively steal the copyrights of the artists by having millions of works retrospectively declared 'works for hire' and thus exempt from the 25-year copyright clawback that would otherwise have returned rights to the artist. They did not take the threat from Napster seriously; it was just a minor annoyance that would not be allowed to get in the way of other business.

Coercion is only a small part of how law and order are maintained. It is not lack of opportunity that stops most people from stealing, it is lack of any temptation to do so. The RIAA made a major error by engaging in theft itself at the same time it was attempting to persuade others to avoid the temptation to steal.

Disney's legislative grab of 20 years extra 'copyright' for Mickey Mouse from the public domain was certainly regrettable. If copyright holders expect their copyrights to be observed they must themselves be vigilant about observing the rights of the public domain.

Technology and DRM will certainly play an important role in copyright protection. But reliance on technology alone is not going to be a successful strategy. To be successful the strategy must contain the psychological element.

Monday, March 13, 2006

Why not me

Why? Why?

DoJ Watch

It's Monday, so it is time to look at the recent DoJ cases. No new cases on the Web site, but one of the sentences handed down in the Shadowcrew case has an amusing twist.

In Cleveland, Ohio Man Sentenced to Prison for Bank Fraud and Conspiracy (February 28, 2006) we read:

"Nugent sentenced Kenneth J. Flury, age 41, of 4692 West 149th Street, Cleveland, Ohio, to 32 months in prison, to be followed by 3 years of supervised release, as a result of Flury’s recent convictions for Bank Fraud and Conspiracy. Flury was also ordered to pay restitution to CitiBank in the amount of $300,748.64, and a $200 special assessment to the Crime Victim’s Fund"

That $200 extra...

The press release is quite interesting as it contains a lot of detail about how the crime was carried out. Flury was buying credit card numbers and PINs through Shadowcrew and using them to make up fake ATM cards. He took over $384,000 in cash advances and paid his suppliers approximately $167,000. That is interesting as it shows that the 'cut' for the phishing gangs at the time was close to half, despite the fact that carding is usually considered the riskier part of the crime. Although it is also likely that he had bought more card numbers than he was able to use before being caught.

The most significant part is that he made his $167K in the space of only 3 weeks, not a bad pay rate by film star standards.

Saturday, March 11, 2006

Another credit card snatch

This is another big card snatch

This is beginning to look like a pattern. Looks like the phishing tactic is no longer quite as effective as it was in the past. I am also starting to wonder about the earlier Boston Globe incident; it might be an attempt by an insider to cover something up.

This particular tactic has a major downside: the credit card companies have existing schemes in place to detect this type of thing very quickly. The history of each card that is used fraudulently is examined and clusters of fraud detected. A dishonest employee in a restaurant or shop is quickly identified.

The big change here is the existence of dumps markets where the stolen cards can be sold on. There was little point in stealing 200,000 cards at a time in the past; there was no way to dispose of them before the cards were blocked. The dumps markets like Shadowcrew allow the hacker to sell the cards on quickly.

Will Chip and PIN come to the US?

Chicago Tribune : Citibank uncovers debit card fraud

Europe has already deployed chip and PIN to address this issue. Deploying chip and PIN in the US is much harder because of the business relationships. The main cost of chip and PIN falls on the issuing banks. The cost of deploying new readers and so on falls on the merchants.

In Europe the banks are less sensitive to the allocation of costs and benefits because they all act as both issuing banks and merchant acquirers. In the US there are many card issuers, not all of which are even banks. There are fewer merchant acquirers. This means that any change to the system requires co-operation between the two sides and possibly a renegotiation of the business relationships. This in turn is difficult because antitrust prevents the type of coordination that is necessary to effect change.

The master returns

Schumacher is back on pole in Formula One

Last season the FIA got fed up with Schumacher winning practically every race, so they took a leaf out of the NASCAR playbook and fixed the rules.

This year we are back to allowing tire changes and the qualifying scheme sounds like it is actually going to be interesting.

Friday, March 10, 2006

An utterly bizarre story.

Thursday, March 09, 2006

A bizarre suggestion

BusinessWeek suggests that Apple appoint a CSO in response to their latest security issues

Appointing a CSO is a pretty good idea for any F500 company. Apple has a lot of important data that it should be ensuring is kept secure: product plans, sales data, customer data. It is easy for a company to say that everyone puts security first, but security is not the primary statistic used to evaluate executive performance, and if such a statistic exists at all it is because a serious problem was discovered. There is no way to measure the number of security issues that an executive has created that have yet to be discovered.

Unless there is a single person with primary responsibility for putting security first, security is going to suffer. A CSO/CISO reporting directly to the CEO or a division GM is an essential senior appointment for any major company.

But the job of a typical CSO has very little to do with platform security. There are some companies where the job of CSO includes responsibility for product security, but there is a huge difference between getting the company through a SAS 70 audit and running a research and development team responsible for platform security.

What Apple needs to do is to hire some people who are not 100% invested in the notion that Unix is automatically secure to shake the development teams out of their complacency. Their situation is actually quite precarious: they made huge strides in security a decade ago; they were the first company to support encrypted email in the operating system. But they really have not done anything memorable since and they are rarely present on the standards circuit.

Apple now have millions of Internet connected users and users are the weakest link in any system.

GPL 3.0: A bonfire of the vanities? | Perspectives | CNET News.com

Zuck makes some good points about the impending free software vs. open software split.

One of the big tensions in the open software movement has been that large numbers of people simply do not believe that the GPL means what it says, or that RMS can possibly mean what he says.

Until recently the only company that has really taken RMS seriously has been Microsoft. Gates may well be wrong in stating that the GPL is a viral license. But anyone who has talked to RMS for any length of time on the subject knows that viral is exactly what he intended it to be.

The GPLv3 is really an attempt by RMS to set the open source movement straight on this point. In addition to making the viral nature clear RMS has taken the opportunity to take a pot shot at trustworthy computing.

I suspect that RMS has left it too late to trust the loyalty of his followers. The most likely outcome at this point is for most of the open source movement to get behind the Apache and Creative Commons licenses.

Origami

Origami is here

It gets some things right but the first versions get two things very wrong. I will probably wait until they are fixed before I buy.

I like the form factor; it should be fine for doing email, surfing, blogging or even lightweight text editing. It will be great as a media jukebox, for showing pictures to the family, watching movies and so on. There will be plenty of ways to rip a DVD to play on this.

When the price point drops it will be the ultimate Gameboy killer. This is the way to keep the kids happy on plane trips and long car journeys.

What they appear to have got wrong is the lack of an external monitor port. Without that I can't use this as a way to give a PowerPoint presentation. It also cuts off the possibility of plugging into a full size monitor to do spreadsheet work or word processing.

The other omission is the lack of built-in GPS. With GPS built in, these devices are no-brainer replacements for the overpriced satellite navigation systems sold for in-car use. Hertz charges $15 per day for NeverLost; at $500 this machine would pay for itself in a year. GPS can be added on as an accessory but it is not the same.

Wednesday, March 08, 2006

Baader-Meinhof

This site is a great resource on the Baader-Meinhof gang, also known as the Red Army Faction.

One thing that caught my attention was the events of 1970. The RAF is formed in March. In April Baader is arrested. In May Meinhof and Ensslin break him out of prison. By June they are in Jordan getting terrorist training from the PLO.

Up to that point the members of the RAF have not amounted to much. Baader and Ensslin have bombed a couple of department stores. Before helping spring Baader from jail, Meinhof's most daring act of protest had resulted in a traffic violation charge.

After coming back from Jordan the first thing the new terrorist cell does is to rob three banks on the same day, netting 200,000 DM. The proceeds are used to buy guns. After half the gang is arrested in a police raid the remainder robs another two banks, taking 115,000 DM.

The gang does not issue its manifesto until after the first five bank raids. The bombing and kidnapping campaign that the gang is best known for only begins in 1972.

The key ingredient that separated the RAF from the numerous other Marxist-Anarchist groups of the time was access to money. Access to cash allowed the RAF to absorb the Socialist Patients Collective (SPK) that later became the 'second generation' of the RAF. The SPK began its bombing campaign a year before the RAF performed its first purely political (as opposed to fund-raising) action.

Another point that the RAF history demonstrates is the fact that some terrorists are insane in the literal sense. The SPK was a group of mental patients being treated by the psychiatrist Wolfgang Huber. There are plenty of hackers whose grasp on reality is similarly tenuous.

It is highly unlikely that a 'cyber-terrorist' attack could ever have the same impact as a bombing campaign. Even a 'spectacular' attack such as shutting down the New York Stock Exchange would have considerably less impact than planting a bomb outside it.

There are already nationalist hacker groups that latch on to pretty much every international incident. Their activities are mostly unremarkable except for the possibility that they might tip an international incident into an international crisis, possibly a war.

But what if a nationalist hacker group latches onto Internet crime as a method of raising money instead of robbing banks as the RAF did or using inherited oil wealth as Bin Laden did?

A group that operated in that mode could establish a web of criminal contacts giving it access to the type of 'day zero' attacks that could then be used for serious, high profile cyber attacks.

One of the slides I often use to introduce Internet crime has a brief history of hackers. A lot of time and effort went into stopping the professional Internet criminal in the mid 1990s. They didn't show up until about 2000, by which time practically everyone agreed that the real Internet crime problem was script kiddie hackers like Kevin Mitnick. As a result the professional criminals got a free ride until phishing attacks made professional Internet crime impossible to ignore.

What if there were like minded groups out there today operating under the covers?

Are we even looking to see if they are there?

Wrong Wrong Wrong

Why do people obsess about integrating every feature into a cell phone? Samsung's new 8GB phone is positioned as an iPod killer.

I don't want to be running my cell phone battery flat using it as an MP3 player and according to Molly Wood, 70% agree.

The obvious device to integrate the music player into is the headset. Not the tiny earbud-style headphones of course, they are way too small. But there is plenty of room in those big comfy noise-cancelling headphones like Bose sell. Get rid of the music player and you get rid of the wires.

When I suggest this people always start harping on about the need for controls. This is odd if you look at how little real estate there is on an iPod nano. I have never found the controls on MP3 players to be worth very much anyway.

My RCA Lyra was designed by a total moron: to scan through songs you have to keep hitting the cheapest, nastiest little button ever to move forward song by song. To make it especially painful the button is very stiff with a rounded top, so after a few songs you have a little dimple in your finger. My aging Archos device is somewhat more user-friendly, but not by very much. It's still at the level of navigating folders. A Google-style keyword interface would be so much better.

So what I want is an MP3 headphone player device with an iPod user interface, Google knowledge organization and Bose headphones.

Tuesday, March 07, 2006

Origami

CNET has pictures

This might be interesting at certain price points but only if the hardware supports some pretty critical I/O.

I would probably get one of these machines as a dedicated presentation machine if there is a way to hook it up to an external projection device.

I can imagine that a lot of people would be interested in using the device as a mobile picture/video album, but only if the necessary SD card slot and FireWire interface are there.

If all you can do is to surf the Web or read eBooks it is likely to crash pretty badly.

I can do without a keyboard in some situations, but on a plane I need a keyboard and something to hold the display at the right angle.

The Space shuttle is Concorde

CNN reports further cuts in the NASA science program. The elephant in the room here is the manned space program and the decision to continue making shuttle flights.

The shuttle was amazingly popular when it was first launched; after the first disaster everyone was asking 'when can they get it flying again?'. After the second disaster people finally seem to be asking the questions that should have been asked before the thing was built:

  • Does the reusable orbiter concept actually reduce per launch costs?
  • What is the scientific mission of the space station?
  • Why not shut down manned exploration entirely and use robots?

The shuttle has become the space equivalent of Concorde: for thirty years the acme of technological achievement. But despite being relaunched at great expense after a disaster, things were never the same again.

After furious arm-twisting NASA has agreed to 'consider' a shuttle mission to service the Hubble Space Telescope. If the shuttle is unable to do that it should be shut down immediately; servicing the HST is the only contribution to science it can make at this point.

The rational policy at this point would be to fly one more mission to service the HST. The space station should be de-orbited; it is never going to achieve what it was meant to do. The Mars shot should be cancelled as premature. The money saved should be invested in cutting edge robotic exploration.

We may have the technology to send a person to Mars but we certainly don't have the technology to get them there alive. The radiation would kill them before they were halfway there.

What we are close to having the technology to do is to set up a self-sustaining robot colony that can manufacture its own replacements. Establishing a colony of that kind would be an essential first step towards establishing any form of long term lunar presence. The colony does not need to be 100% self-sustaining. If the robots could fabricate the body parts of replacement robots the control systems could be supplied. The key thing is to start fabrication. Until it is possible to build essentials on site rather than ship every pound from Earth the process is going to be hopelessly uneconomic.

Monday, March 06, 2006

Hacker Outsmarts Kinko's ExpressPay Cards

Oops

Some companies have a real problem understanding that the technology to read and write mag stripe cards is commonplace these days. It would not take much to make the cards a lot harder to fake: a MAC code generated using a system-wide shared secret would cost next to nothing but hold off a large proportion of attacks.
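
For illustration, a sketch of that cheap fix with made-up field layouts (the real card format is unknown to me): append a truncated MAC over the card fields, keyed with the system-wide secret, so a casually rewritten stripe fails verification.

# Toy stripe MAC. A shared secret in every terminal is weak protection,
# but it defeats the hobbyist with a card writer.
import hashlib, hmac

SYSTEM_SECRET = b"example-shared-secret"  # would live in every terminal

def stripe_mac(card_id: str, balance_cents: int) -> str:
    data = f"{card_id}:{balance_cents}".encode()
    return hmac.new(SYSTEM_SECRET, data, hashlib.sha256).hexdigest()[:8]

def write_stripe(card_id: str, balance_cents: int) -> str:
    return f"{card_id}:{balance_cents}:{stripe_mac(card_id, balance_cents)}"

def verify_stripe(stripe: str) -> bool:
    card_id, balance, mac = stripe.rsplit(":", 2)
    return hmac.compare_digest(mac, stripe_mac(card_id, int(balance)))

good = write_stripe("0042", 2500)
print(verify_stripe(good))                    # True
print(verify_stripe("0042:999999:deadbeef"))  # False: rewritten balance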

The terminals do have some controls in place. They can only be accessed from the Kinko's locations. That means that a person using a fraudulent card is at personal risk. It would not take a great deal to detect fraud and arrest persistent perpetrators.

The bigger, more worrying threat is that criminal gangs appear to have been picking up on this type of network and using it as a means of money laundering. That means that the security controls are going to be subject to a much higher level of scrutiny than the designers likely expected.

Glass houses

What is amazing is not the fact that it took a hacker less than 30 minutes to get root in the Mac security competition.

What is amazing is the fact that the people who launch these competitions don't first ask some security professionals how common rooted Mac or Linux boxes are. Sure, there are a heck of a lot of owned Windows boxes on the Internet. But there are also a heck of a lot of owned Unix machines.

A constant refrain at the IETF is 'bad security is worse than no security'. The point being that bad security leads people to rely on a machine thinking that they are safe, and the consequences of doing so can be worse than having no security at all.

Judging an operating system by the frequency of security patches is the worst possible approach to security. All the operating systems we use today are hopelessly complex from a security perspective. None are really designed for use on an open international network. We do not even fully understand the requirements for security on such a network yet.

It only takes one security hole for a good computer to be turned bad.

Big screen vs little

The organizers of this year's Oscars clearly told the presenters to tout the value of the big screen over the little. Throughout the evening there were references to DVD.

I found this somewhat odd. First, the message itself was muddied by the fact that the actors sent to deliver it could not read from the teleprompter with conviction. Perhaps they were afraid that on TV you don't get the chance for a second take. So instead we had a bunch of actors who are unused to using a teleprompter delivering flat, lifeless lines and often misreading them.

This may have magnified the odder aspect: a basic rule of marketing is to tout the benefits of your own product. If you must run down the opposition, get someone else to do it. Running down the opposition makes you look afraid of them. Running down the opposition at your main annual event is simply crass.

There are so many ways that the same message could have been got across without mentioning DVD. Talk up the 'big canvas' that the sound editors have to work with in the theatre. Talk up the social aspects, talk up the big picture. They tried to do this but each time they did they then had to ruin it by mentioning DVD; worse still, they mentioned DVD last.

Even more peculiar: the event was being hosted on TV, in the living room. So the people they were reaching out to were the TV watchers.

Will DVD kill film? Yes and no.

DVD is certainly going to kill the current business model for the film industry. It might even kill the mega-budget blockbuster, but I doubt it. The theatres themselves will move away from 35mm celluloid and use digital projection from a high definition DVD or a network source. And regardless of attempts to stop it the same technology will turn up in people's homes.

But old business models are replaced by new ones. Once you get rid of the celluloid and the $5,000-plus cost of striking each print a lot of things change. Every film will 'open big', initial runs will be shorter but reruns will be much easier to set up. When Harry Potter V opens, I through IV will do a rerun in the same theatre.

The people who are threatened by a shift in the business model are not the people who make movies; it's the middlemen, the distributors, producers and facilitators who get squeezed. Middlemen have to add value to survive in the online economy.

Even the theatre owners are likely to survive. People want a social experience as well as entertainment. Cinema may lose out to DVD but theatre owners will look for new forms of social entertainment. In the 1930s theatres were turned into cinemas. In the 1970s UK cinemas were being turned into bingo halls. In the 1980s the bingo halls were turned back into cinemas.

There is no shortage of demand, no shortage of technology. What we do lack is the paradigm. It took 20 years after the invention of the cinema camera for cinema to discover drama as its principal model.

What is so shocking about cinema in the digital age is the total lack of interactivity. The best audience is the one that makes the least noise and disturbance to others. So what is the point of being in a group?

We have massively multiplayer games, why not have a massively multiplayer game that is genuinely cooperative and played in situ? A game designed to stimulate discussion and cooperation amongst the players?

Friday, March 03, 2006

How not to chase bots

EWeek describes a hunt for bot command and control.

I don't think this is going to do much more than force the bot-herders to develop a peer-to-peer control mechanism. In principle every member of a botnet could be a controller. The only thing the bot controller needs is an up to date list of IP addresses of the bots under control, and that is easy enough to maintain as the bots are established. The only 'central control' you need is a public key pair for authenticating the bot control channel commands.
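
A sketch of why decapitation fails against that design, illustrative only (using the Python cryptography library): what makes a command authoritative is the signature, not the address it arrives from, so any surviving peer can relay it.

# Signed command verification: taking down relay nodes achieves nothing,
# and a takedown team cannot inject commands without the private key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

controller = Ed25519PrivateKey.generate()
PUBLIC_KEY = controller.public_key()  # baked into every bot

command = b"update peer-list"
signature = controller.sign(command)

def bot_accepts(cmd: bytes, sig: bytes) -> bool:
    try:
        PUBLIC_KEY.verify(sig, cmd)  # which peer relayed it is irrelevant
        return True
    except InvalidSignature:
        return False

print(bot_accepts(command, signature))  # True, from any peer at all
print(bot_accepts(b"forged", signature))  # False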

Of course this is not necessarily what is really going on. Journalists don't always ask the right questions (Larry is usually good here) and people don't necessarily answer entirely accurately.

Botnets may look like armies but that does not mean that military style decapitation techniques are going to work. They work in the military because armies have to work within chain of command principles that are constrained by the limits of human communication and interaction. Every order has to flow through the officers. Take out the NCOs of the other side and the engagement is usually won. Take out the mid ranking officers and the battle is usually won. Take out the high command and the campaign is usually won. It takes a long time for an army to replace officers, but it only takes a few seconds to replace a bot controller unless you can arrest the human perpetrator. I would follow the money trail rather than the easier-to-confuse bit trail.

The way to go after the bots is to go after the human operators at the top and to restrict the value of the worker bots to the bare minimum. I have written about reverse firewalls in the past. I still think that is part of the necessary approach. The other part is to have a reaction system so that when a bot starts banging away with a DDoS attack we can identify it and shut it down pronto.

The Feds have been effective against botherders recently. I would like to know how.

Thursday, March 02, 2006

An easy choice

"You either give up your cheap trips to Majorca, or you give up astronomy. You can't do both."

I suspect the voters will find this an easier choice than Prof Gilmore thinks.

Just don't look

Bundled or a la carte

Apparently Conservatives want pay-by-channel cable — except the TV preachers

This is not so surprising: most people don't want to see home shopping or pseudo-religious broadcasts, but these are crammed into the basic cable package because they don't cost the cable company anything.

The problem with the a la carte model is that 1) people don't evaluate the price of a commodity in a rational fashion, no matter what idiot economists think, 2) the costs of distribution are fixed regardless of the number of channels, and 3) existing technology does not support attractive pricing models.

If cable went a la carte most people would cancel the home shopping and religious shows immediately. Only a few hardened news junkies would want more than one 24 hour news channel, only a few hardened music fans would want MTV and all the clones, and only people with kids would want the children's shows.

Offered a choice between the 120 channels I allegedly get on satellite TV at $60 and a la carte at $1 per channel, I would choose the a la carte option and cut down to about 25 channels. I certainly would not be paying for Fox News, home shopping, ESPN, MTV, VH1 or ABC Family. Court TV is lame, the SciFi Network inept, and I don't speak Spanish.

Where the a la carte logic falls down of course is that the provider's costs are fixed. If the number of subscribers declines the price has to go up. My guess is that the price would be at least $2 per channel; at best I would be no better off.
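
The arithmetic behind that guess, as a toy model. The numbers are the figures above plus my own simplifying assumption that the whole $60 is fixed distribution cost:

# Fixed cost per subscriber divided by channels kept gives the
# cost-recovering per-channel price.
FIXED_COST_PER_SUBSCRIBER = 60.0

def per_channel_price(channels_kept: float) -> float:
    return FIXED_COST_PER_SUBSCRIBER / channels_kept

print(per_channel_price(120))  # $0.50 while everyone nominally takes 120
print(per_channel_price(25))   # $2.40 once viewers cut back to 25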

This would in turn hit the finances of the channels, their revenue would decline and program quality would fall. Faced with higher prices for worse content most people would further reduce the number of channels they subscribe to, and the cycle would repeat.

This particular market ends up with a result that is worse for every participant. The reason is that the goods being sold are not really as fungible as the market theology demands. There is finite bandwidth on the satellite or on the cable distribution drop. Content is not a rival good: my watching a program does not deprive another viewer. Attempting to allocate costs for bits as if they were atoms fails.

Enigma codes

The project to crack the remaining three Enigma messages has completed

A while ago Whit Diffie gave a talk on the Venona decrypts to Ron Rivest's reading group at MIT and tried to get me and others interested in analyzing the problem from the point of view of the complexity of the effort.

The Venona codes demonstrate the problem of using a one time pad in practice. Used correctly, one time pads provide perfect confidentiality; there is not even a theoretical possibility of breaking the code. Used incorrectly, they reduce to one of the simplest forms of cryptographic problem: an alphabetic substitution cipher where one text is used as the 'key' to encrypt another.
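
The failure is easy to demonstrate. Reuse the pad once and it cancels out entirely, leaving the XOR of the two plaintexts for the analyst to pry apart (a toy demonstration, with XOR standing in for Venona's additive key):

# Two-time pad: c1 XOR c2 removes the pad completely.
import os

p1 = b"ATTACK AT DAWN"
p2 = b"MEET IN BERLIN"
pad = os.urandom(len(p1))  # a perfectly good pad, fatally reused for both

c1 = bytes(a ^ b for a, b in zip(p1, pad))
c2 = bytes(a ^ b for a, b in zip(p2, pad))

depth = bytes(a ^ b for a, b in zip(c1, c2))
assert depth == bytes(a ^ b for a, b in zip(p1, p2))  # pad is gone
print(depth)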

Breaking the Venona codes led to the discovery of the Cambridge spy ring, the exposure of Fuchs and the execution of the Rosenbergs. The M4 group might be interested in taking on this challenge as a follow-up.

Wednesday, March 01, 2006

Atrios

Atrios takes a brave stand

For the record: I agree.