Experts in AI for autonomous vehicles poured cold water on expectations that the self-driving future is just around the corner.
BRUSSELS — At the closing session of the AutoSens Conference here Thursday, two experts on the limitations and testing of artificial intelligence (AI) in autonomous vehicles (AV) gently poured cold water on a longstanding hope among automakers and technology companies that the self-driving future is just around the next corner.
The prospect of AI-driven cars, operating in a context far more complicated and perilous than a board game, is constrained both by artificial intelligence’s “common sense” gap and by humans’ inability to understand how “black box” AI systems think.
Fellow panellist Therese Cypher-Plissant of the Alliance Innovation Lab Silicon Valley (Renault-Nissan-Mitsubishi), which rigorously tests AV systems, concurred. She predicted that full autonomy (Level 5) might never be accepted or practical: "I don't know if Level 4 or Level 5 will ever make sense in all situations."
Both Selman and Cypher-Plissant suggested that the public infrastructure and vehicle-to-vehicle technology required for effective widespread Level 5 autonomy are pretty much a pipe dream.
These judgments threatened to throw the AutoSens conference, an annual showcase for advances in automotive technology, into what organizer Robert Stead called “the dread trough of disillusion.”
But Cypher-Plissant’s counterpoint was that progress short of total autonomy, including improvements in advanced driver assistance systems (ADAS), can make driving substantially safer. Indeed, she said, they already have.
The crux of the matter, in a panel called “Artificial Intelligence Safety and Its Limitations,” moderated by EE Times international editor Junko Yoshida, was that AI systems make choices — based on massive collection and interpretation of data — without knowing why. “A car,” said Selman, “doesn’t understand why it’s driving anywhere.”
As an example of how an AV system fails to understand its circumstances, Selman cited a medical emergency. A human driver, rushing to the hospital with a pregnant wife or an injured child, would exceed the speed limit and flout as many rules of the road as necessary without taking crazy risks. An autonomous car, said Selman, cannot grasp the difference between an emergency and normal driving. "Cars," he said, "will not be able to understand this for a very long time."
Yoshida asked whether the AI system should "ask for help" in such exceptional cases. Cypher-Plissant answered emphatically that a machine should be able to say, "I need help thinking."
She added that getting people to accept technology requires the possibility that a person can “interact and react” to the computer’s choices, and alter them if need be. This is customarily referred to as Level 3 autonomy, but it is proving brutally difficult to implement.
Selman expanded on this dilemma, noting that an AI system can make the right choices as much as 85 to 90 percent of the time, as is true, for example, in language-translation applications. This is impressive but, he said, "The key to remember is that the machine has no idea what is being said [in either language]. It has no clue to the meaning."
He went on, “The system is right for the wrong reason. It does not know what’s going on. It might not realize what it doesn’t know. Humans can do this. Machines are very far from that.”
Both Selman and Cypher-Plissant stressed that a car with no human at the wheel, and no understanding of where it is going or why, poses a profound obstacle to the universal adoption of autonomous vehicles.
In Silicon Valley, she said, "We just love the challenge of technology, and we have accepted it. But my relatives live in Montana — and good luck convincing them to accept autonomous cars!"
Yoshida noted a further complication: just as AI systems do not understand the whys and wherefores of what they "think," humans are equally ignorant of what goes on inside the AI system's "black box." Selman concurred, noting that AI algorithms are often impossible to verify.
“Part of their magic is we don’t know how they do it,” he said.
Both expert panellists agreed that advances in automotive AI, probably limited to Level 3 in the foreseeable future, will make next-generation cars “sufficiently safe,” possibly reducing fatal accidents in such vehicles by 95 percent.
"People aren't stupid," said Cypher-Plissant. When perfection is the marketing pitch, she added, "Every accident stops progress."
This is why, said Cypher-Plissant, her lab tests every vehicle with a human driver who takes control "as soon as we feel something's wrong. We know when it went wrong. We don't wait. We record data. We don't take risks."
Cypher-Plissant's description suggests that the process of trial, error, "feel" and common sense will not happen overnight; she used the phrase "a really, really, really long time." Moreover, as research continues, regulations will emerge and vastly alter an auto industry that has always been largely self-policed.
She made clear that companies like Uber, trying to rush the proliferation of AVs to replace human drivers, are taking a huge risk.
“With Uber, you can be the big winner,” she said. “But you can be the big loser.”