If you want an excellent example of the triumph of hope over experience, look no further than the generally rapturous welcome given to the news that Google has produced a driverless car with no brakes and no steering wheel.
According to Chris Urmson, the director of Google’s self-driving car project:
“They won’t have a steering-wheel, accelerator pedal, or brake pedal . . . because they don’t need them,” he wrote of the prototypes. “Our software and sensors do all the work.” (Google steers new course with driverless car)
In fact, all the car’s “driver” has to do is press a button to start or stop. Presumably he or she also has to programme in the route, unless that has already been determined, e.g. on a set run from A to B.
This is a brilliant proof of concept, but to think it could become a reality on mainstream roads any time soon is nuts. Oh, and before someone says, “Well, what about the Docklands Light Railway, which uses driverless trains?” — actually, it doesn’t. The trains travel without anyone driving them, but they always have an operator on board who can drive the train in an emergency.
This provides an excellent case study for students of computing. The programming is incredible. According to an article in The Guardian,
“Google's cars use an array of sensors to map the world around them in real-time. On the roof, a spinning laser creates a 3D model of every major object surrounding it, be they fellow road users or potential hazards such as pedestrians and cyclists. Cameras on the front and sides supplement that model by looking out for important visual information such as road signs or traffic lights.” (Self-driving cars face a long and winding road to success)
(See Look, Ma, no hands: Google to test 200 self-driving cars for more information on how it all works.)
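For students of computing who want a feel for what that sensing involves, here is a purely illustrative Python sketch. Everything in it — the class, the function, the data — is my own invention for teaching purposes, not Google’s code: it merges simulated laser range points into a rough 3D obstacle model, then tags each obstacle with a classification from a simulated camera, which is the general idea behind the description above.

```python
# Illustrative sketch only: a toy "world model" that fuses simulated
# laser returns with simulated camera labels. All names and data are
# hypothetical; the real system is vastly more sophisticated.

from dataclasses import dataclass

@dataclass
class Obstacle:
    x: float       # metres ahead of the car
    y: float       # metres to the left (+) or right (-)
    height: float  # metres, taken from the 3D laser model
    label: str     # classification supplied by the cameras

def build_world_model(laser_points, camera_labels, radius=1.0):
    """Group nearby laser returns into obstacles, then attach the
    nearest camera classification (e.g. 'cyclist', 'pedestrian')."""
    obstacles = []
    for (x, y, z) in laser_points:
        # Merge the point into an existing obstacle if it is close enough.
        for ob in obstacles:
            if abs(ob.x - x) < radius and abs(ob.y - y) < radius:
                ob.height = max(ob.height, z)
                break
        else:
            obstacles.append(Obstacle(x, y, z, "unknown"))
    # Attach each camera detection to the closest obstacle.
    for (lx, ly, label) in camera_labels:
        nearest = min(obstacles,
                      key=lambda ob: (ob.x - lx) ** 2 + (ob.y - ly) ** 2)
        nearest.label = label
    return obstacles

# Two clusters of laser returns and two camera detections (made up).
points = [(10.0, 2.0, 0.5), (10.2, 2.1, 1.7),   # something tall on the left
          (20.0, -1.0, 0.1)]                     # something low on the right
labels = [(10.1, 2.0, "cyclist"), (20.0, -1.1, "kerb")]

world = build_world_model(points, labels)
for ob in world:
    print(f"{ob.label}: {ob.height:.1f} m tall at ({ob.x:.1f}, {ob.y:.1f})")
```

The real system has to do this for every object around the car, many times a second, which gives some sense of why the programming is so impressive — and why the maps and sensors have to be so accurate.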
But there are numerous obstacles to overcome:
- In order to work properly, the maps the car uses must be accurate to within inches, e.g. it has to know the height of kerbs.
- The maps also have to be absolutely up-to-date in real time.
- If a situation develops on the road that means the car has to choose between hurting one person or another, which one should it choose?
- If or when someone is hurt as a result of that choice, who is responsible?
- Do we really want Google not only knowing what we like, but where we go -- and when we go there -- as well?
- Driverless cars are still illegal in many parts of the world, and I can’t see that changing any time soon.
Based on experience, even if the challenges above are overcome, what about the following?
What happens when…
- someone steals the spinning laser sensor on top of the car?
- the software goes wrong?
- someone develops a hack that can take over your car? (What a great way of abducting someone, or worse.)
- you need to accelerate out of a dangerous situation?
- you need to get away from someone who is trying to harm you?
- you need to get someone to the hospital quickly because they have gone into labour?
The idea that you don’t need a brake pedal or a steering wheel or an accelerator is, frankly, ludicrous if driverless cars are to be used in everyday driving. I can see something like this being useful as a shuttle service between, say, the aircraft and the departure lounge, but beyond that I think we need to think about the wider issues. At the very least, there should be the option for the driver to exercise his or her own judgement where necessary.
Your newsletter editor is hard at work sifting through the submissions for Digital Education, the free newsletter for education professionals. Have you subscribed yet?
Read more about it, and subscribe, on the Newsletter page of the ICT in Education website.
We use a double opt-in system, and you won’t get spammed.