This week's Nature News Feature column: Can we open the black box of AI?

Artificial intelligence is in the news constantly, in newspapers and magazines alike. A short piece on it appeared in this week's Nature, in the News Feature section, so I will introduce it here. Its focus is how we should think about deep learning and how we should handle it. Deep learning is a technique in which the AI itself builds up a network from big data. It is already in commercial use, in advertising and in self-driving cars, and it is being applied to science as well.
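As a concrete illustration of "the AI itself builds up a network from big data" (my own toy example, not from the article): below, a minimal two-layer network learns the XOR function by gradient descent. No rule is programmed in; the weights start out random and are adjusted automatically until the outputs match the data. Real deep-learning systems differ mainly in scale.

```python
# Minimal sketch of deep learning's core idea: the network's internal
# "rules" (its weights) are not written by a programmer but fitted to data.
# Toy example: a tiny two-layer network learning XOR with plain NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, standing in for "big data".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: the network starts out knowing nothing.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute predictions with the current weights.
    h = sigmoid(X @ W1 + b1)      # hidden layer
    p = sigmoid(h @ W2 + b2)      # predicted output
    # Backward pass: gradients of the squared error, layer by layer.
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    # Gradient descent: the "learning" in deep learning.
    W2 -= 0.5 * (h.T @ dp); b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

print(np.round(p, 2))  # close to [[0], [1], [1], [0]] after training
```

The trained weights are exactly what the article calls the black box: they solve the task, but nothing in them reads like a human-interpretable rule.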

<Can we open the black box of AI?>

http://www.nature.com/polopoly_fs/1.20731!/menu/main/topColumns/topLeftColumn/pdf/538020a.pdf

Here I introduce the article's reflections on deep learning as applied to science.

<Eventually, some researchers believe, computers equipped with deep learning may even display imagination and creativity. “You would just throw data at this machine, and it would come back with the laws of nature,” says Jean-Roch Vlimant, a physicist at the California Institute of Technology in Pasadena.

But such advances would make the black-box problem all the more acute. Exactly how is the machine finding those worthwhile signals, for example? And how can anyone be sure that it’s right? How far should people be willing to trust deep learning? “I think we are definitely losing ground to these algorithms,” says roboticist Hod Lipson at Columbia University in New York City. He compares the situation to meeting an intelligent alien species whose eyes have receptors not just for the primary colours red, green and blue, but also for a fourth colour. It would be very difficult for humans to understand how the alien sees the world, and for the alien to explain it to us, he says. Computers will have similar difficulties explaining things to us, he says. “At some point, it’s like explaining Shakespeare to a dog.”>

What is raised here is the black-box problem. Next, I quote the article's discussion of the dangers that this problem carries.

<To scientists who have to deal with big data in their respective disciplines, this makes deep learning a tool to be used with caution. To see why, says Andrea Vedaldi, a computer scientist at the University of Oxford, UK, imagine that in the near future, a deep-learning neural network is trained using old mammograms that have been labelled according to which women went on to develop breast cancer. After this training, says Vedaldi, the tissue of an apparently healthy woman could already ‘look’ cancerous to the machine. “The neural network could have implicitly learned to recognize markers — features that we don’t know about, but that are predictive of cancer,” he says.

But if the machine could not explain how it knew, says Vedaldi, it would present physicians and their patients with serious dilemmas. It’s difficult enough for a woman to choose a preventive mastectomy because she has a genetic variant known to substantially up the risk of cancer. But it could be even harder to make that choice without even knowing what the risk factor is — even if the machine making the recommendation happened to be very accurate in its predictions.

“The problem is that the knowledge gets baked into the network, rather than into us,” says Michael Tyka, a biophysicist and programmer at Google in Seattle, Washington. “Have we really understood anything? Not really — the network has.”>

There is, apparently, some research aimed at solving the black-box problem, but a solution is still a long way off. After all, AI was modelled on the neural networks of the human brain, and our brains themselves have the character of a black box. So, the article concludes, neither the brain nor AI can be used unless we "trust" it.
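To give one concrete flavour of that research (my own illustration; the article names no specific technique): one common family of methods asks how sensitive the model's output is to each input feature, so-called saliency. The sketch below fits a one-layer model to invented synthetic data in which only one feature actually drives the label, then reads the sensitivity off the learned weights. For deep networks the same quantity is computed by backpropagation, and interpreting it is far less straightforward.

```python
# Sketch of a simple "black-box opening" idea: saliency, i.e. how strongly
# each input feature influences the model's output. Data and setup are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "cases": 3 features, but only feature 1 determines the label.
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0).astype(float)

# Logistic regression (a one-layer network) fitted by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

# Saliency: the gradient of the output with respect to each input. For this
# linear model it is simply |w|, and it correctly flags feature 1 as the
# "marker" the model relies on.
print(np.abs(w))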

<Despite these fears, computer scientists contend that efforts at creating transparent AI should be seen as complementary to deep learning, not as a replacement. Some of the transparent techniques may work well on problems that are already described as a set of abstract facts, they say, but are not as good at perception — the process of extracting facts from raw data.

Ultimately, these researchers argue, the complex answers given by machine learning have to be part of science’s toolkit because the real world is complex: for phenomena such as the weather or the stock market, a reductionist, synthetic description might not even exist. “There are things we cannot verbalize,” says Stéphane Mallat, an applied mathematician at the École Polytechnique in Paris. “When you ask a medical doctor why he diagnosed this or this, he’s going to give you some reasons,” he says. “But how come it takes 20 years to make a good doctor? Because the information is just not in books.”

To Baldi, scientists should embrace deep learning without being “too anal” about the black box. After all, they all carry a black box in their heads. “You use your brain all the time; you trust your brain all the time; and you have no idea how your brain works.”>

The article gave me a detailed understanding of AI's problems, but the conclusion, that in the end we should simply "trust" it, was a letdown. Is the author saying that AI, too, should be admitted into the kind of society Niklas Luhmann analysed in his book Trust? To the scientists I would say: having gone and built AI, is that not altogether too irresponsible?