Adrian Zidaritz is the author of AIbluedot.com, a blog that provides an overview of AI, with a mix of math, ethics, politics, and "everything" in between. The articles contain a minimal amount of technical material; they are aimed not at specialists but at the general public. AI is misunderstood by non-specialists and is either hyped up or talked down in the media; it is nevertheless the most consequential technology of our time.
What initially attracted you to AI?
AI development requires a wide array of expertise, unlike any other modern technology. It feeds on research from statistics, neuroscience, applied mathematics, computer science, software development, psychology, and more. That challenge is what attracted me, combined with the fact that I was lucky enough to dabble in many of these fields in my previous career: mathematics, computer science, software development, and statistics.
You’ve had an extensive career working in AI. Could you discuss some of these highlights?
This is in a way a continuation of question 1. Almost every middle-aged person working in AI today comes from somewhere else. Until around 2005 there was no AI. (By the way, the success of AI is due mainly to neural networks, i.e., deep learning; all the other techniques pale by comparison. So for all practical purposes, when we say AI we mean deep learning.) As a result, many of us who work in AI bring unique perspectives to the field. I come from a mathematics background coupled with leading practical AI projects, in which big data engineering plays a very large role (sometimes more than 80% of total project time). My background sandwiches AI between a questioning of its mathematical foundations (very theoretical) and the very practical aspects of leading teams of data scientists and machine learning engineers. There are other researchers who know more about the AI technologies in the middle of the sandwich.
You’ve stated that AI has either been hyped up or watered down in the media. Why do you believe there is such a disconnect between the media’s reporting on AI and the actual realities of the technology?
Because AI is misunderstood even by some people working in AI, let alone the press. It is a very young discipline, with very young workers. The various opinions of these young workers make their way into the media, feeding a misalignment of objectives. It is sufficient to mention The Social Dilemma documentary on Netflix, in which these conflicted views of AI, from a Silicon Valley perspective, are well documented.
Currently, the bulk of the progress that we have seen in AI has come from deep learning. What are your views on the black box problem of deep learning?
That’s a big problem. Basically, we do not have a theoretical (i.e., mathematical) understanding of the process of learning. We do not know how deep learning algorithms actually learn; we just see that they do. There have been attempts, of course, to develop a theory, but none have gained wide acceptance. So in the absence of that basic understanding, all we can do is say, “see, it works.” Giving a white-box explanation is impossible at this time. Other algorithms (not deep learning) are better understood, and for them it is possible to give explanations of the results. Not for deep learning.
What are your views on AI bias and how do we prevent it?
Right now AI is all about data, not about algorithms. The algorithms know no bias; the bias is in the data. The data reflects society’s composition and also its stratification, and the data collection process has bias in it as well. These biases are, by the way, naturally occurring. What has to happen is a gradual inclusion of people of all sorts of backgrounds in the data collection process, so that the data reflects a correct representation of the population.
What type of machine learning do you find of most interest?
As I said earlier, machine learning is now ceding ground to its most successful inner branch, deep learning. Neural networks, through their versatility, are dominating.
You’ve stated that Universal Basic Income (UBI) will be absolutely necessary to deal with the job losses that result from AI. Could you elaborate on these views?
Society will suffer huge repercussions from automation (applied AI). We have seen momentous shifts even in the political upheaval since 2016. There simply will be no way to go back. Many jobs will simply disappear. It makes no sense to train as a radiologist these days: AI can read X-rays, MRIs, and all sorts of other scans much better than a human. What will happen to people when there simply isn’t a job that they can do? UBI guarantees that humans will not suffer needlessly when automation becomes pervasive. And there is no need to suffer, because AI will deliver the work necessary for society to still function.
Do you believe we can ever achieve Artificial General Intelligence (AGI)?
Yes, many people argue that DeepMind’s software already borders on AGI. I do not subscribe to that idea, but even for me the answer is yes. AGI does not mean emotions or consciousness; the I in AGI is simply cognitive intelligence. And for that level of intelligence, the answer seems to be yes.
Do you believe that there is a probability that we live in a simulation?
A possibility? Yes, meaning that the probability of us living in a simulation is not 0. It is also intellectually appealing. But is it likely? No, to me it is not likely; that is, the probability, although not 0, is very, very small.
Thank you for the interview. Readers who wish to learn more about Adrian’s views on different aspects of AI should visit AIbluedot.com.