The New York Times has an article on AI-driven cars* developed by Google, whose outsized presence has the effect of making the technology seem simultaneously much more immediate and slightly ludicrous. The capability to engineer and drive such vehicles has, of course, been with us for some time now, and
[…] robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue.
Indeed. Minor vehicle accidents are caused, I claim, by one or more drivers acting in a manner that the drivers of other cars did not anticipate. While it seems clear that the software is currently capable of dealing with the unanticipated behavior of other (human) drivers, I'm not sure how the software is supposed to control for the fact that the robotic car will itself be driving in a manner that humans will have difficulty anticipating.
Two possibilities: modifying the software to more faithfully emulate the mischievous and erratic driving behavior of humans, to which we are all (self-)accustomed; or forging ahead in the hope of re-adjusting driving expectations to a higher standard. The latter sounds like a good bet in the long run, and like social engineering, too.
But then, driving is a social act, as well as a mechanical one.
* The article itself seems to be having some display issues (at least in Chrome), so make liberal use of arc90's wonderful experiment, Readability.