
Under what circumstances might we still see another AI winter, despite the advent of deep learning?



Sridhar Mahadevan, Fellow of AAAI


I believe there is a good possibility of another AI winter, although of course it is impossible to predict such an event with any certainty.

The most worrying phenomenon is the degree to which AI and ML technology is being “oversold”. Nothing illustrates the hype better than some of the PR surrounding deep learning.

One of the most respected researchers in visual cognition, Alan Yuille, the Bloomberg Distinguished Professor at Johns Hopkins, recently published a frank assessment of the effectiveness of deep learning in computer vision, the very area that set off the current media blitz over deep learning.

The Limitations of Deep Learning for Vision and How We Might Fix Them

His article contains frightening examples of the utter failures of deep learning. None of these failures surprise me, since I have experienced first-hand how poorly deep learning performs at recognizing everyday objects in my own home. The MATLAB deep learning demo program, which lets you hook a webcam up to your laptop, classified my living room as a “barbershop”.

Training a classifier on the fixed ImageNet dataset and expecting it to perform well on real-world data is a fool’s errand. It will not work, because the data distribution of real-world images is very different from ImageNet’s curated, test-tube images.
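
To see this failure mode for yourself, here is a minimal sketch of the kind of test described above, written in Python with PyTorch and torchvision rather than the MATLAB demo (the model choice and the “living_room.jpg” filename are illustrative assumptions, not what I used):

    import torch
    from torchvision import models
    from PIL import Image

    # Load a classifier pretrained on the fixed ImageNet training set.
    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()  # the exact preprocessing the model expects

    # "living_room.jpg" is a hypothetical photo of an ordinary room.
    img = Image.open("living_room.jpg").convert("RGB")
    batch = preprocess(img).unsqueeze(0)

    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]

    # Print the top-5 ImageNet labels. On out-of-distribution photos these
    # are often wildly wrong ("barbershop"), despite high reported accuracy.
    for p, idx in zip(*torch.topk(probs, 5)):
        print(f"{weights.meta['categories'][idx]:30s} {p.item():.3f}")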

Consider the following images in Yuille’s article, where placing a simple occluder like a guitar in front of a monkey causes a deep learning network to mislabel the monkey as a human! A two-year-old child would never make such a fundamental and blatant error.

https://thegradient.pub/content/...

In a second telling experiment, Yuille shows how changing the orientation of an everyday object, like a living room sofa, can dramatically alter the success rate of deep learning.

https://thegradient.pub/content/...
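
Both failure modes are easy to probe with the same pretrained model as above. The sketch below (the file names, paste position, and rotation angle are illustrative assumptions) composites an occluder onto a photo, rotates the original, and reports how the top-1 label changes:

    import torch
    from torchvision import models
    from PIL import Image

    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    def top1(img):
        """Return the model's top-1 ImageNet label for a PIL image."""
        with torch.no_grad():
            probs = torch.softmax(model(preprocess(img).unsqueeze(0)), dim=1)[0]
        return weights.meta["categories"][int(probs.argmax())]

    # Hypothetical inputs: a scene photo and a cut-out occluder with alpha.
    scene = Image.open("monkey.jpg").convert("RGB")
    occluder = Image.open("guitar.png").convert("RGBA")

    occluded = scene.copy()
    occluded.paste(occluder, (120, 80), mask=occluder)  # arbitrary position

    for name, variant in [("original", scene),
                          ("occluded", occluded),
                          ("rotated 45 deg", scene.rotate(45, expand=True))]:
        print(f"{name:>14s} -> {top1(variant)}")
    # A label that flips under such trivial edits is exactly the brittleness
    # Yuille documents.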

Yuille asks the following basic questions, which go to the heart of the challenge facing deep learning.

“(I) How can we train algorithms on finite sized datasets so that they can perform well on the truly enormous datasets required to capture the combinatorial complexity of the real world?

(II) How can we efficiently test these algorithms to ensure that they work in these enormous datasets if we can only test them on a finite subset?”
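
A back-of-the-envelope count shows why these questions bite. With purely illustrative numbers (all assumed, not Yuille’s), the number of distinct scene configurations dwarfs any dataset we could ever collect:

    import math

    # Illustrative assumptions: 1000 object categories, 5 objects per scene,
    # 10 coarse viewpoints and 10 coarse occlusion patterns per object.
    categories, per_scene, viewpoints, occlusions = 1000, 5, 10, 10

    scenes = math.comb(categories, per_scene) * (viewpoints * occlusions) ** per_scene
    print(f"scene configurations: {scenes:.2e}")  # ~8.25e+22
    print("ImageNet images:      ~1.4e+07")       # nowhere near enough

Even with these crude numbers, exhaustively sampling the space for training, let alone testing, is hopeless.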

These questions have enormous ramifications for real-world, life-and-death tasks like autonomous driving. Companies like Tesla and Waymo are hurtling us towards a future of self-driving cars and autonomous taxis on the basis of testing the underlying deep learning solutions on finite-sized datasets or simulations of cities.

Yuille’s conclusion is a pessimistic one:

“It seems highly unlikely that methods like Deep Nets, in their current forms, can deal with the combinatorial explosion. The datasets may never be large enough to either train or test them.”

If this conclusion is true, and I for one, with 30+ years of experience in ML research, tend to agree with Yuille, we are looking at a frightening scenario in which self-driving cars eventually fail, causing a huge pullback in funding for AI across many industries. Such an event would certainly precipitate an AI winter. That is exactly what precipitated the AI winter of the late 1980s, when knowledge-based expert systems failed.

The billions of dollars being spent on deep learning solutions are no guarantee that the problems outlined by Yuille will be solved. My most recent PhD student spent several years studying the theory of GANs. His arXiv paper proving the convergence of a highly restricted class of GANs contains almost 50 pages of difficult math! While his work is highly impressive, it shows how far we have to go before we begin to understand if and when the full versions of GANs will reliably converge.

Global Convergence to the Equilibrium of GANs using Variational Inequalities
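
To get a feel for why convergence is so delicate, consider a standard toy illustration (mine, not taken from the paper above): in the simplest min-max game f(x, y) = xy, with one player minimizing over x and the other maximizing over y, naive simultaneous gradient updates spiral away from the unique equilibrium at (0, 0) instead of converging to it:

    import math

    # Toy min-max game f(x, y) = x * y: the "generator" minimizes over x,
    # the "discriminator" maximizes over y; the equilibrium is (0, 0).
    x, y = 1.0, 1.0
    lr = 0.1  # step size, chosen for illustration

    for _ in range(200):
        gx, gy = y, x                    # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy  # simultaneous descent/ascent step

    # Each step multiplies the distance to the equilibrium by sqrt(1 + lr**2),
    # so the iterates spiral outward rather than converge.
    print(f"distance from (0, 0) after 200 steps: {math.hypot(x, y):.2f}")  # ~3.83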

Listening to Ian Goodfellow’s wonderful summary of adversarial machine learning research, including GANs, at the recent Deep Learning Summit in San Francisco and at the AAAI 2019 conference in Hawaii just served to increase my anxiety. He gave many examples of GAN technology in use, from self-driving cars, where daytime images are converted to nighttime images, to GAN “manufacturing”, where 3D models of teeth are made using GANs.

This “Cambrian” explosion of applications, to use Goodfellow’s apt terminology, would be heartening, even exciting, if we had suitable theory showing that GANs converge reliably. But we don’t, and all the theory I have read, including my student’s PhD dissertation, points to their weaknesses. Heaven help us if autonomous cars, medical procedures, and who knows what else, developed around such unproven technologies, begin to fail along the lines shown in Yuille’s article.

No one would be more disappointed than I if a second AI winter arises because deep learning spectacularly crashes in such real-world tests. But hoping it won’t is not good enough. Richard Feynman, the world-class physicist and iconoclastic safecracker of the Manhattan Project, wrote a haunting appendix to the Challenger disaster findings, in which he remarked that for a successful technology, reality must take precedence over public relations, “for nature cannot be fooled”.

Feynman’s words ring true to my ears; he was probably the most honest physicist of his generation, one who heard “the beat of a different drum”, as the title of his biography reads.

Human vision took millions of years of evolution to perfect. It is a startlingly beautiful example of the power of natural selection. It will not be easy to reverse-engineer the human visual system, not to mention the other areas of the brain, from language to behavior, whose functioning we understand even less.

President Trump just signed an executive order regarding AI, whose opening statement reads:

Trump Signs Executive Order Promoting Artificial Intelligence

“Artificial Intelligence (AI) promises to drive growth of the United States economy, enhance our economic and national security, and improve our quality of life. The United States is the world leader in AI research and development (R&D) and deployment. Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.”

In a premature rush to roll out imperfect, poorly understood solutions, we may inadvertently set back the promise of AI for generations to come. AI is the greatest scientific challenge of our time. Let’s not squander this opportunity, and let’s remember Feynman’s wise words that “nature cannot be fooled”.

https://science.ksc.nasa.gov/shu...

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”



