Reading a technical paper written by Peter Norvig is surprisingly entertaining. From tackling the problem of solving every Sudoku puzzle (and making it look quite easy) to describing the challenges of organizing research and development in computer science at Google, Peter might just be one of the most creative computer engineers I know. After all, it takes someone out of this world to lead the Computational Sciences Division at NASA.
I have so many questions for Peter about innovation, his time at Google as Director of Research, random curiosities, and how he would solve the world’s biggest problems without using mathematical equations. But there are so many questions and so little time, so here are five:
01: You have been talking about AI for a while, so much so that you first published
the book Artificial Intelligence: A Modern Approach back in 1995. How much had
AI changed by the 4th edition, published in 2020?
I would say there are two big changes. One is computer hardware, not in AI per se: In 1995, we would say “it is not practical to take approach A, because it would require something like 10 gigabytes of main memory, so we recommend the more complex approach B.” By 2020 we would say, “Approach A requires a mere 10 gigabytes of memory, so it is preferred over the more complex approach B.” The other change is that more problems can be solved by an end-to-end neural network that does the whole thing (such as a chatbot) rather than by a human engineer breaking the problem down into components and solving each piece separately.
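To make that contrast concrete, here is a minimal, purely illustrative Python sketch of my own (not Peter’s): a toy “chatbot” built first as a hand-engineered pipeline of separate stages, then as a single end-to-end model. Every name in it (classify_intent, compose_reply, model.generate) is a hypothetical stand-in rather than a real library.

```python
# Purely illustrative sketch: the same toy "chatbot" task solved two ways.
# Every name below is a hypothetical stand-in, not a real library.

def classify_intent(text: str) -> str:
    # Stage 1 of the engineered pipeline: a hand-written rule.
    return "greeting" if "hello" in text.lower() else "other"

def compose_reply(intent: str) -> str:
    # Stage 2 of the engineered pipeline: a hand-written template.
    return "Hi there!" if intent == "greeting" else "Can you rephrase that?"

def pipeline_chatbot(text: str) -> str:
    """1995-style decomposition: a human designs and wires each stage separately."""
    return compose_reply(classify_intent(text))

def end_to_end_chatbot(text: str, model) -> str:
    """2020-style alternative: one learned model maps input text straight to a reply.
    `model` stands in for any trained sequence-to-sequence network."""
    return model.generate(text)

print(pipeline_chatbot("Hello, Peter!"))  # -> Hi there!
```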
02: In the article Ethics in AI: Navigating the Moral Landscape of Tomorrow’s
Technology, I approached the topic of ethics in AI, and I know you also believe
strongly in responsible development and deployment to mitigate potential
risks. What are some of the most vital risks to consider?
There is a whole checklist of risks that we outline in the AI text and the Data Science text, including:
– Abuse of the technology by malevolent actors (as in election disinformation or revenge porn)
– Privacy risks of leaking personal data
– Societal upheaval through the concentration of wealth in the hands of a few winners
– Geopolitical instability and war enabled by autonomous weapons
03: With the rise of interest in STEAM careers, people seem to be in a rush to learn
how to code, and the demand for more talent certainly seems to be there.
Programming bootcamps have become extremely popular. Some people might
say that programming is easier to learn today than it was 10 years ago. What are
some of the challenges and opportunities that you see today when it
comes to learning how to be a programmer?
I applaud the bootcamps and other educational opportunities, but I don’t love the phrase “Learn to Code.” I would rather say “Learn to Problem-Solve” or “Learn to Design Systems.” I think that the current set of programming languages and tools is one way to achieve those goals, but the tools will change over time, and the important thing is the underlying thought process of designing solutions.
04: What problems do you believe present ambitious opportunities today for
us to serve others and create value over the next 20 years?
I would encourage the reader to go after the opportunity that appeals to them. There are so many: in medicine and health, environment, education … pick the one that speaks to you and get started!
05: One of your favorite quotes is “Computers are useless. They can only give you
answers” by Pablo Picasso. Why is that?
That’s from my list of quotations on my norvig.com site. I agree with Picasso that the nascent computers of 1968 were used mostly to solve rigidly defined problems. But that is no longer true today; now computers can tackle ill-defined problems and assist you in gaining understanding and insight, and indeed can guide you to more interesting questions.
Learning leads to leverage. Stay hungry!