Driverless cars summon up a utopian vision of the future in which you can wave goodbye to the children as the family vehicle takes them off to school, blind people can take to the road, and you can even be carried home safely after one too many in the bar.

The reality may be very different, however. According to the University of Michigan Transportation Research Institute, you may still have to pass a driving test.

Researchers Michael Sivak and Brandon Schoettle say that most driver-licensing tests evaluate visual performance, knowledge of rules and regulations related to driving and traffic in general, and driving-related psychomotor skills.

They suggest that sensing hardware, spatial maps and software algorithms will vary among manufacturers of self-driving vehicles, resulting in variability of on-road performance. Visual pattern recognition is a potential problem for current sensing systems in self-driving vehicles: would they, for example, be able to recognise downed power lines or flooded roadways?

The researchers said that current technology has not yet been tested thoroughly under a variety of demanding conditions, and that the on-road performance of some current test vehicles is not yet perfect, even in good weather.

Sivak said: “The underlying logic for the use of graduated driver licensing systems with novice drivers does not apply to self-driving vehicles. A self-driving vehicle either has the hardware and software to deal with a particular situation or it does not. If it does not, experience in other situations will not be of benefit.

“On the other hand, the GDL approach would be applicable should a manufacturer explicitly decide to limit the operation of its vehicles to certain conditions, until improved hardware or software become available.”

So, the end of the driving test is not close – and maybe the test should continue anyway. We already have vehicles that can operate themselves – automatic pilot is positively ancient technology in the aviation industry, but you still need a human sitting in the cockpit in case anything goes wrong.

What happens when the technology goes wrong in an autonomous car? I highlight the word when, rather than if, because it will. Whatever the technology and whatever the safeguards, history shows it is inevitable that something, sometime, will go wrong.

When it does, somebody will have to know how to drive the thing.

Another piece of autonomous car research to emerge this week says that cars that drive themselves could face a big ethical and moral decision when trying to avoid an accident.

A research paper from Cornell University asks whether it would be ethical for the car to kill its occupant or occupants in order to save a larger group of people. The question goes to the heart of the greater-good argument and could pose problems for researchers, developers and potential buyers.

A scenario put forward for the research involves an unavoidable crash in which a group of 10 people could be killed or injured. The only way to prevent such a disaster is for the car to smash into a wall, possibly injuring or killing the occupant.

As self-driving cars are designed to protect the driver and occupants as well as pedestrians, the question is: will the car allow more or fewer people to die? Any such moral algorithm would have to be applied consistently, without provoking public outrage or discouraging potential buyers.
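To make the dilemma concrete, here is a minimal, purely hypothetical sketch of what a crude "greater good" rule could look like in code. The Manoeuvre class, the casualty figures and the strictly utilitarian cost function are all assumptions invented for illustration; no manufacturer has published logic like this.

```python
# Purely hypothetical sketch of a crude "greater good" rule for an
# unavoidable-crash scenario; no real vehicle uses logic this simple.

from dataclasses import dataclass


@dataclass
class Manoeuvre:
    name: str
    occupant_fatalities: int    # expected deaths inside the car
    pedestrian_fatalities: int  # expected deaths outside the car


def expected_deaths(m: Manoeuvre) -> int:
    """A strictly utilitarian cost: every life counts equally."""
    return m.occupant_fatalities + m.pedestrian_fatalities


def decide(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick the manoeuvre that minimises total expected deaths."""
    return min(options, key=expected_deaths)


# The scenario from the research: plough into 10 pedestrians,
# or swerve into a wall and sacrifice the single occupant.
options = [
    Manoeuvre("continue into crowd", occupant_fatalities=0, pedestrian_fatalities=10),
    Manoeuvre("swerve into wall", occupant_fatalities=1, pedestrian_fatalities=0),
]

print(decide(options).name)  # -> "swerve into wall"
```

Even this toy version hides the hard choices (should occupants be weighted differently from pedestrians, and should age matter?), which is exactly what the survey described below tries to tease out.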

The researchers put a series of moral questions to several hundred people, varying the number of lives to be saved, the age of those involved in the accident and other characteristics. The results showed that 75% of respondents thought swerving to avoid the group of pedestrians, at the cost of the occupant, was the right decision, while 65% believed the cars would actually be programmed to do that. But will they? Can they be?