Monday, October 27, 2008

Why did Microsoft break my machine?

Last night my machine was configured completely correctly. I could see all the files on my Windows Home Server from my Vista box and from my MacBook.

Today I can still see the home server from my MacBook, but the Vista machine downloaded and automatically installed an 'update'. Now the files on the network drive are visible in the folder view, but some cretin has decided that I need to be protected from them.

Needless to say, the machine does not tell me why the access attempt is denied or how to fix it. These two questions are part of the critical gap in security usability.

This really exemplifies a naive approach to usability that is unfortunately all too common. The naive approach assumes that the user's 'problem' is that they are stupid, and that it's the job of the designer to help the poor stupid user by removing as much confusing knowledge as possible.

The result of this approach is systems that often test quite well in controlled lab settings, where the user's reactions are measured over short periods against a series of tasks designed to show them using the product exactly as the designers intended, but that fail completely whenever an eventuality arises that the designers didn't think of.

'Ask your system administrator' is not an acceptable response in a system dialog presented to someone with administrator privileges.

Update: I have now discovered the actual cause of the problem - a printer that had been disconnected was reconnected. It was hardwired to the same IP address as the home server, and the resulting address conflict caused the interference.
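
For what it is worth, detecting this kind of conflict is not hard. Here is a rough Python sketch that checks whether the machine answering on the server's address has the MAC address you expect; the address and MAC below are placeholders, and the ping/arp flags assume a Windows box.

    # A minimal diagnostic sketch: check whether the MAC address answering for
    # the home server's IP matches the MAC we expect. The address and MAC are
    # hypothetical placeholders; a real tool would discover them rather than
    # hard-code them.
    import re
    import subprocess

    SERVER_IP = "192.168.1.10"          # hypothetical address of the home server
    EXPECTED_MAC = "00-1a-2b-3c-4d-5e"  # hypothetical MAC of the home server

    def mac_answering(ip):
        # Ping first so the ARP cache is populated, then read the cache entry.
        subprocess.run(["ping", "-n", "1", ip], capture_output=True)
        arp = subprocess.run(["arp", "-a", ip], capture_output=True, text=True).stdout
        match = re.search(r"([0-9a-f]{2}[-:]){5}[0-9a-f]{2}", arp, re.IGNORECASE)
        return match.group(0).lower() if match else None

    if __name__ == "__main__":
        mac = mac_answering(SERVER_IP)
        if mac is None:
            print("No device is answering on", SERVER_IP)
        elif mac != EXPECTED_MAC.lower():
            print("Address conflict: %s is answered by %s, expected %s"
                  % (SERVER_IP, mac, EXPECTED_MAC))
        else:
            print("Server MAC matches; no conflict detected")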

But it is still the machine's responsibility to identify and report such issues, not the user's. From the user's point of view, the most likely cause of the problem was the last major change to the system - the software update.

Friday, October 24, 2008

Lessons in mud-flinging

It is probably not a good idea to attack the academic credentials of your opponent if you are peddling false academic credentials yourself.

In particular, in Washington's 8th district, one Rep. Dave Reichert is attacking Darcy Burner for describing a Harvard degree in 'Computer Science with a special field in Economics' as a degree in economics. Burner took five courses in Economics; a straight economics degree requires only seven, and the joint degree is considerably harder to get. [Crooks and Liars]

Meanwhile, Reichert has been claiming to have a bachelor's degree from an obscure junior college that wasn't even entitled to grant them until ten years after he left. Methinks the difference between a two-year associate's degree from a junior college and a four-year bachelor's degree is rather greater than the difference between Computer Science with Economics and Economics.

Kind of like Palin accusing Obama of 'palling around with terrorists' when in fact it was her own party's convention that featured a keynote speaker who really did pal around with, and help raise millions of dollars in funds for, terrorists.

Tuesday, October 21, 2008

The agony and the ecstasy of citations in Microsoft Word 2007

Microsoft Word 2007 has citations! At last, it is possible to use Word to write an academic paper without having to pay for an overpriced, overcomplicated plug-in citation manager.

Such a pity, then, that the actual implementation sucks unless the included citation styles match your requirements exactly.

By exactly, I mean precisely that. Want to put your list of references at the back of your paper in a section headed 'References'? Well, you can't. The references section must be called 'Bibliography'; that is what Microsoft has decided, and if Word 2007 provides a way to choose anything different (other than by converting the references to flat text and editing the result), I have not found it in many hours of trying.

This is not a minor issue either, as a bibliography is not the same thing as a list of references; in fact many books have both. A list of works cited in the text is almost invariably headed 'References', while the term 'bibliography' refers to a list of works on the same subject matter, whether or not they are cited.

Did nobody on the design team ever ask the folk in Microsoft Research whether they could use this citation manager to write their papers? Apparently not, as I find it difficult to see how such a basic requirement could otherwise have been overlooked.

Equally annoying are the instructions given for choosing your references section style:

"Choose the style format that is required by the instructor or the publisher of the written material that you are presenting. When you insert a citation in your Office Word 2007 document, Word provides the correct inline format for your citation. Word then provides the associated bibliography style when you generate the bibliography from the sources that you cited."


In other words, the developers have chosen the styles you are going to use, and don't you go thinking that you may change them 'cause you won't. In fact it is worse than that, as the styles are all described by name: 'APA - American Psychological Association' and so on.

The style I need for my document isn't listed, which is hardly surprising: in the academic world practically every publisher has their own idea of what a references section should look like, and they are going to use their own name for it, not 'Chicago' or 'ISO 690'. So even if Word 2007 does support the style you need, there is no way of knowing that without trying each of the ten versions in turn.

After several attempts it turns out that ISO 690 seems to be the closest to the style this particular journal requires. But unlike practically every journal I have read, this style uses round parentheses (1) rather than the square brackets [1] that are used in the real world.

Note to Microsoft: nice try, but why didn't you try to actually write an academic paper with it before you released it?

Wednesday, October 08, 2008

Gigapixel depth of field

Continuing to consider the future of the DSLR: how will future cameras cope with the diffraction limit as pixel counts increase?

The diffraction limit is a softening of focus that occurs at small aperture sizes due to the wave nature of light. The actual point at which a camera becomes diffraction limited depends on the pixel pitch of the sensor. According to one calculator, a 12MP DX format camera is diffraction limited below f/5.6, while a 12MP FX format sensor is diffraction limited below f/8.
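
As a rough back-of-the-envelope check (assuming green light at 550nm and a 23.6mm-wide DX sensor with about 4288 pixels across, roughly 12MP), here is the arithmetic behind numbers like these. Where exactly the limit 'kicks in' depends on how many pixels you are prepared to let the Airy disc cover, so the sketch prints both the one-pixel and two-pixel criteria.

    # Back-of-the-envelope diffraction arithmetic. The sensor width, pixel count
    # and wavelength are assumptions stated above, not manufacturer figures.
    WAVELENGTH_MM = 550e-6      # 550 nm green light, in millimetres
    SENSOR_WIDTH_MM = 23.6      # Nikon DX sensor width
    PIXELS_ACROSS = 4288        # roughly a 12MP DX sensor

    pixel_pitch_mm = SENSOR_WIDTH_MM / PIXELS_ACROSS   # about 5.5 microns

    def airy_disc_mm(f_number):
        # Diameter of the Airy disc (first minimum) for a given f-number.
        return 2.44 * WAVELENGTH_MM * f_number

    for pixels_allowed in (1, 2):
        # f-number at which the Airy disc spans this many pixels
        f_limit = pixels_allowed * pixel_pitch_mm / (2.44 * WAVELENGTH_MM)
        print("Airy disc covers %d pixel(s) at about f/%.1f" % (pixels_allowed, f_limit))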

These are not particularly small aperture sizes. In fact my main DX lens has a maximum aperture of f/5.6, meaning that I have to keep it wide open to avoid diffraction effects completely. Fortunately the effects are gradual and only become noticeable/uncorrectable at around f/11 and 100% magnification. But it puts a hard limit on pixel resolution for that particular lens at 25MP or so.

Going to higher pixel resolutions will require larger apertures, which will in turn limit depth of field. That's fine for portraits, where shallow depth of field is usually the objective. But for landscapes and architecture, deep depth of field is more often the desired effect. What is the use of 80MP if you have to use f/2.8? Wide angle lenses help of course, but sometimes the desired effect requires a narrow field of view.

One answer to the problem is of course to increase the sensor area, and that will be one of the reasons that Nikon and others have returned to the full frame sensor format. But that only postpones the problem.

What I expect will be the eventual solution is to adopt a technique used for many years in macro photography: combining a sequence of pictures taken at different focus distances, known as focus stacking. Today that is a technique that requires the full version of Photoshop or similar. But there is no reason that it could not be applied in the camera.
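
To be clear about what I mean by combining, here is a minimal sketch of the idea using OpenCV: for each pixel, keep the value from whichever frame is sharpest at that point, with the Laplacian as a crude sharpness measure. It assumes the frames are already aligned (which real macro stacks usually are not), and it is certainly not what an in-camera implementation would actually look like.

    # Minimal focus-stacking sketch: pick, per pixel, the frame with the
    # strongest Laplacian response. Assumes aligned, same-sized BGR frames.
    import cv2
    import numpy as np

    def focus_stack(frames):
        sharpness = []
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=5)
            # Smooth the response so the choice of source frame changes gradually.
            sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))
        best = np.argmax(np.stack(sharpness), axis=0)   # sharpest frame per pixel
        stack = np.stack(frames)                        # shape (n, h, w, 3)
        rows, cols = np.indices(best.shape)
        return stack[best, rows, cols]                  # composite image

    # Usage (file names are placeholders for frames focused near to far):
    # result = focus_stack([cv2.imread(p) for p in ("near.jpg", "mid.jpg", "far.jpg")])
    # cv2.imwrite("stacked.jpg", result)

(The per-pixel frame index chosen above is itself a crude depth map, which leads to the next point.)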

A secondary benefit is that a potential byproduct of the process is a 3D image map. It's not quite stereo vision (the picture is taken from a single point of view and does not contain the same information as a two-lens stereo camera provides), but it's close enough to be faked in software.

Tuesday, October 07, 2008

Deep thought for the day

Tina Fey is better at being Palin than Palin herself.

Monday, October 06, 2008

Utility of a global shutter

The most notable party trick performed by the new Nikon D90 is the ability to capture short high definition video clips.

Unfortunately, the HD video mode, while perfectly functional for certain types of cinematography, has a 'wobble' problem: different parts of the image frame are captured at slightly different times. If the camera pans quickly, vertical lines become slanted. If the camera pans back and forth, a jelly-like effect is created.
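
The geometry of the effect is easy to simulate. The toy Python sketch below 'reads out' a grid of vertical lines one row at a time while the camera pans; the frame readout time and pan speed are made-up numbers, purely to illustrate how each row ends up shifted a little further than the one above it.

    # Toy rolling-shutter simulation; all numbers are illustrative assumptions.
    import numpy as np

    HEIGHT, WIDTH = 720, 1280
    FRAME_READOUT_S = 1.0 / 30          # assumed time to read out the whole frame
    PAN_SPEED_PX_PER_S = 600            # assumed horizontal pan speed in pixels/second

    # Scene: white vertical lines every 80 pixels on a black background.
    scene = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
    scene[:, 200::80] = 255

    captured = np.zeros_like(scene)
    for row in range(HEIGHT):
        # Each row is read slightly later, by which time the scene has panned further.
        t = FRAME_READOUT_S * row / HEIGHT
        shift = int(round(PAN_SPEED_PX_PER_S * t))
        captured[row] = np.roll(scene[row], -shift)

    top = np.flatnonzero(captured[0])[0]
    bottom = np.flatnonzero(captured[-1])[0]
    print("A line that is vertical in the scene leans by %d pixels top to bottom" % (top - bottom))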

The same effect is in fact present in some consumer-level camcorders, and some are better than others at hiding it. But the only way to eliminate the wobble completely is to implement a global electronic shutter in the image sensor.

The drawback to having a global shutter is cost: an extra transistor or two per image cell. If the image sensor is pushing the limits of the manufacturing process this reduces the number of megapixels that can be squeezed in.

Video is currently an 'experimental' feature on DSLRs. Early results suggest that it is going to be a major success, but at this point video mode is not driving DSLR design to the extent that a 10MP camera with good video would outsell a 12MP camera with occasional wobble issues.

But what if a global shutter could be of advantage to still photography?

One obvious advantage of a global shutter is the elimination of a component that is costly to make and has a limited lifespan. The D300 shutter is 'only' rated for 150,000 actuations. In the film days nobody cared, because 150,000 actuations would cost in the region of $50,000 in film. Today a professional photographer can easily take that number of pictures in a year and spend less than $200 to store the results.

Replacing a mechanical shutter with a global electronic shutter also means that a DSLR can finally achieve the type of flash sync speeds that were previously only available with medium format cameras with in-lens shutters.

And as the global shutter is a purely solid state device it should be possible to achieve even higher shutter speeds than 1/8000th. Traditionally, flash was used to 'stop motion'. What if the same effect could be achieved with available light?

Going solid state also means going silent, provided the mirror is locked up.

A 12MP camera with a mechanical shutter may still outsell a 10MP camera with a solid state shutter. But what about 24MP versus 20MP? Unlike some, I do see real value in going to higher-resolution sensors, but it is clearly a case of diminishing returns.

Going a stage further and designing the sensor to allow a sequence of images to be captured in a short time allows for even more interesting effects. Need high dynamic range? Why not take a bracket of shots at different gain (i.e. ISO) settings and compose the results in the same RAW file?
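
Composing such a bracket need not be exotic either. As a sketch of the sort of thing I mean (and not, of course, what an in-camera implementation would actually do), here is Mertens exposure fusion via OpenCV applied to three frames shot at different gains; the file names are placeholders.

    # Sketch: fuse a gain (ISO) bracket with Mertens exposure fusion.
    # The three file names are placeholders for low/normal/high gain frames.
    import cv2

    frames = [cv2.imread(name) for name in ("iso200.jpg", "iso800.jpg", "iso3200.jpg")]

    # Mertens fusion weights each pixel by contrast, saturation and well-exposedness,
    # so no knowledge of the actual exposure settings is required.
    fusion = cv2.createMergeMertens().process(frames)

    # The result is floating point in roughly [0, 1]; scale back to 8 bits to save.
    cv2.imwrite("fused.jpg", (fusion * 255).clip(0, 255).astype("uint8"))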

Saturday, October 04, 2008

Time for a programmed-ISO mode on DSLRs

Last Thursday I was taking a picture of a sunset when suddenly a policewoman went past on a Segway. So breaking off from the sunset I tried to take a quick snap of policewoman on Segway in the few seconds before she was gone.

Shots like that are always something of a hit-and-miss affair; this one missed. Even with a VR lens, it is not possible to take decent hand-held shots indoors at ISO 200 and f/5.6 without flash.

Which got me thinking about the fact that the controls on my Nikon D300 are essentially the same as the controls on my Nikon N90s, which in turn only added an aperture priority mode compared with my Nikon FG, now 25 years old. Given a shutter speed, the D300 will choose the aperture; given the aperture, it will choose the shutter speed; in program mode the camera will set both automatically, and in manual mode neither.

But the exposure on a camera, whether digital or film based, is determined by three camera settings (plus the available light): the aperture, the shutter speed and the ISO setting.

On a film camera the ISO setting is determined by your film stock; once set, you can only change it by changing the film. But on a digital camera the 'ISO setting' is actually the gain setting on the A-to-D converters inside the camera, and that can be changed on every exposure.

In most shots I have a very definite opinion about the aperture: usually either wide open to minimize depth of field, or stopped down to the diffraction limit to maximize depth of field. In certain shots I also have a particular opinion about the shutter speed. The ISO setting does have some impact on the end result of course, but with the introduction of modern CMOS sensor cameras like the D300 it is generally much less of a factor than the aperture or shutter speed.

So here is my question: why not have a new program mode in which the camera uses the ISO setting to adjust to the available light?
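
Roughly, such a mode would hold the aperture and shutter fixed and solve the standard metering equation for the ISO, falling back to adjusting the shutter only when it runs out of ISO range. Here is a sketch of the sort of logic I have in mind; the equation is the usual reflected-light formula N^2/t = L*S/K with K around 12.5, and the ISO limits are made-up numbers rather than anything Nikon publishes.

    # Sketch of an 'ISO-priority' program mode. N = f-number, t = shutter time in
    # seconds, L = scene luminance in cd/m^2, S = ISO, K = meter calibration
    # constant (about 12.5). The ISO range below is an assumption.
    K = 12.5
    ISO_MIN, ISO_MAX = 200, 3200

    def iso_priority(f_number, shutter_s, luminance_cd_m2):
        # Hold aperture and shutter fixed; return (ISO, shutter) for a correct
        # exposure, adjusting the shutter only if the ISO range is not enough.
        iso = K * f_number ** 2 / (shutter_s * luminance_cd_m2)
        if iso < ISO_MIN:
            # Too bright even at base ISO: shorten the shutter to compensate.
            return ISO_MIN, shutter_s * iso / ISO_MIN
        if iso > ISO_MAX:
            # Too dark even at maximum ISO: lengthen the shutter to compensate.
            return ISO_MAX, shutter_s * iso / ISO_MAX
        return iso, shutter_s

    # Example: f/5.6 at 1/60s in dim indoor light (roughly 20 cd/m^2)
    print(iso_priority(5.6, 1 / 60, 20))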

The D300 does have an 'Auto-ISO' mode, but it is nowhere near as easy to use as the other camera modes. I simply cannot predict whether the camera will respond to the light level by changing the ISO setting or the shutter speed, which makes it not very useful.