“I doubt there will be another AI Winter…but significant new innovations will be required for continued progress. I think most top AI researchers would agree.”
This is quite different from most of his earlier writing, such as “How AI Will Become Omnipresent” ( https://venturebeat.com/2017/07/24/how-ai-will-become-omnipresent/ ).
Ford now points to Piekniewski’s blog, in which Filip Piekniewski writes that “AI Winter is well on its way.” For the Pied Piper of Artificial Intelligence (AI) and robots to say “…significant new innovations (in AI) will be required for continued progress” is quite an acknowledgement that the leap from Narrow AI to General AI is not happening anytime soon, if ever.
Piekniewski continues, “In my opinion there are such signs visible already of a huge decline in deep learning (and probably in AI in general as this term has been abused ad nauseam by corporate propaganda), visible in plain sight, yet hidden from the majority by the increasingly intense narrative. How “deep” will that winter be? I have no idea. What will come next? I have no idea. But I’m pretty positive it is coming, perhaps sooner rather than later.”
In a Financial Times article that appeared at about the same time, entitled “Why we are in danger of overestimating AI,” Richard Waters writes, “The more fundamental case against deep learning is that the technology cannot deal with many of the problems that humans will want computers to handle. It has no capacity for things the human mind can do easily, like abstraction or inference that make it possible for us to “understand” from very little information, or instantly apply an insight to another set of circumstances.”
“A huge problem on the horizon is endowing AI programs with common sense,” says Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence. “Even little kids have it, but no deep learning program does.”
Dr. Etzioni’s comment is rather bizarre because the idea that AI developers are attempting to “endow AI programs with common sense” is quite hyperbolic. It distracts from the real issue, which is that “human intelligence” is a poor model for artificial intelligence, and this is the real point on which Martin Ford’s tweet zeroes in. The issue might be best summarized by saying, “AI is never going to say Eureka!”
“We are a biological species arising from Earth’s biosphere and one adapted species among many; …however rich and subtle our minds, however vast our creative powers, the mental process is the product of a brain shaped by the hammer of natural selection upon the anvil of nature… Otherwise we would not use the term humanities to denote the study of those very special phenomena that make us human.” — E. O. Wilson
Horace Walpole, a member of the British House of Commons in the 18th century, recognized in himself a talent for finding what he needed just when he needed it. For example, he needed a coat of arms with specific elements to decorate a new picture frame and accidentally found what he was looking for in an old book. Thrilled with this coincidence, Walpole wrote to his cousin, Horace Mann, giving a name to his ability to find things unexpectedly: “serendipity”.
Walpole got the name from a fairy tale called “The Travels and Adventures of Three Princes of Sarendip.” The king of the fable recognizes that education requires more than learning from books, so he sends his sons out of the country to broaden their experience of the world. Throughout the story, the clever princes carefully observe their surroundings, and then use their observations in ways that save them from danger and death. For Walpole, serendipity meant finding something by “informed observation” and by accident.
AI developers may not want to acknowledge it, but much of human intelligence is serendipitous. It’s not organized or planned out like an algorithm. While thoughtful people can create the contours of an intelligent act, they’re most likely to encounter an “a-ha” moment when their thinking leaps to what they see as a solution or revelation.
Serendipity has two related meanings:
- Looking for something and finding just what you needed, and
- Looking for something and finding something even better.
The Eureka effect, also known as the ‘aha! effect,’ refers to the common human experience of suddenly understanding a previously incomprehensible problem or concept. It’s named after the myth that the Greek polymath Archimedes, having discovered how to measure the volume of an irregular object, leaped out of a public bath and ran home naked shouting ‘eureka’ (‘I found it’).
Like luck, serendipity requires perseverance, preparation, and opportunity, only two of which can be reduced to “code”. AI researchers, including Martin Ford, are coming to realize this. Human intelligence is not like a complicated statistical problem, which is what “Weak AI” is. It may not even be a multidimensional random walk, which is what “Strong AI” is thought to be. Very often human intelligence is looking for something and finding something better. It’s an a-ha moment. When encountering an a-ha moment, human beings do two things computers do not:
- They serendipitously acquire information and knowledge on their own (i.e., without being “trained”)
- They “imagine” different scenarios using the knowledge they acquire.
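To make the “multidimensional random walk” image above concrete, here is a minimal sketch in Python. This is purely an illustration of the mathematical notion being invoked, not a model of any actual AI system; the function name and parameters are invented for this example:

```python
import random

def random_walk(dims=3, steps=1000, seed=42):
    """Simulate a simple multidimensional random walk: at each step,
    pick one dimension at random and move +1 or -1 along it."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    position = [0] * dims              # start at the origin
    for _ in range(steps):
        axis = rng.randrange(dims)     # choose a dimension uniformly
        position[axis] += rng.choice((-1, 1))  # step up or down
    return position

# Example: where a 3-dimensional walk ends up after 1000 steps
print(random_walk(dims=3, steps=1000))
```

The point of the contrast in the text is that even this kind of high-dimensional wandering is still a mechanical, step-by-step process; the serendipitous “a-ha” leap is precisely what such a process lacks.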
“To imagine is the characteristic act, not of the poet’s mind, or the painter’s mind, or the scientist’s, but of the mind of man.” – J. Bronowski, The Reach of Imagination
“What is now proved was once only imagined.” – William Blake.
As a consequence, Martin Ford and his AI colleagues will be disappointed if they expect any computer to leap out of a bathtub shouting ‘eureka’ when its code reaches “end of job”.
- “AI winter is well on its way,” Piekniewski’s blog, 6/2/18