Yaron Singer, CEO of Robust Intelligence and Professor of Computer Science at Harvard University – Interview Series

Yaron Singer is CEO of Robust Intelligence and Professor of Computer Science and Applied Mathematics at Harvard. Yaron is known for his breakthrough results in machine learning, algorithms, and optimization. Previously, Yaron worked at Google Research and earned his Ph.D. from UC Berkeley.

What initially attracted you to the field of computer science and machine learning?

My journey started with math, which led me to computer science, which put me on the path to machine learning. Mathematics first caught my interest because its axiomatic system gave me the ability to create new worlds. Computer science taught me not only existential proofs but also the algorithms that underlie them. From a creative perspective, computing is about drawing boundaries between what we can and can't do.

My interest in machine learning has always been rooted in an interest in real data, almost the physical aspect of it: taking things from the real world and modeling them into something meaningful. We could literally design a better world through meaningful modeling. So math gave me a foundation to prove things, computer science showed me what can and can't be done, and machine learning lets me model those concepts in the world.

Until recently, you were a professor of computer science and applied mathematics at Harvard University. What were the main lessons learned from this experience?

What I take away most from being a Harvard faculty member is that it develops an appetite for doing great things. Harvard has traditionally had a small faculty, and the expectation of tenure-track faculty is to tackle big problems and create new areas. You have to be bold. It ends up being great preparation for launching a category-creating startup that defines a new space. I don't necessarily recommend going through the tenure path at Harvard first, but if you survive that, building a startup is easier.

Could you describe your “aha” moment when you realized that sophisticated AI systems are vulnerable to bad data, with potentially significant implications?

When I was a graduate student at UC Berkeley, I took time off to create a startup that built machine learning models for social media marketing. It was 2010. We had massive amounts of data from social media, and we coded all the models from scratch. The financial implications for retailers were quite significant, so we closely monitored the performance of the models. Since we used data from social media, there were many input errors as well as outliers. We saw that very small input mistakes led to big changes in model output and could lead to poor financial results for the retailers using the product.

When I switched to working on Google+ (for those of us who remember), I saw the exact same effects. More dramatically, in systems like AdWords that predicted the likelihood of people clicking on an ad for given keywords, we noticed that small errors in the model's input led to very poor predictions. When you witness this problem at Google scale, you realize it is universal.

These experiences strongly shaped my research focus, and I spent my time at Harvard studying why AI models make mistakes and, most importantly, how to design algorithms that can prevent models from making mistakes. This, of course, led to more “aha” moments and, ultimately, the creation of Robust Intelligence.

Could you share the story of the genesis of Robust Intelligence?

Robust Intelligence started with research on what was initially a theoretical problem: what guarantees can we have on decisions made using AI models? Kojin was a student at Harvard, and we worked together, initially writing research papers. So it started with papers describing what is fundamentally possible and impossible, theoretically. Those results were then extended into a program for designing fault-tolerant AI algorithms and models, and we built systems capable of running these algorithms in practice. After that, starting a business where organizations could use a system like this was a natural next step.

Many of the problems Robust Intelligence tackles are silent errors. What are they, and what makes them so dangerous?

Before giving a technical definition of silent errors, it's worth taking a step back and understanding why we should care about AI errors in the first place. The reason we care about errors in AI models is the consequences of those errors. Our world uses AI to automate critical decisions: who gets a business loan and at what interest rate, who gets health insurance coverage and at what rate, which neighborhoods police should patrol, who is most likely to be the best candidate for a job, how airport security should be organized, and so on. The fact that AI models are extremely error-prone means that by automating these critical decisions, we inherit a lot of risk. At Robust Intelligence, we call this "AI risk," and our mission as a company is to eliminate it.

Silent errors are AI model errors where the AI model receives input and produces an erroneous or biased prediction or decision as output. So on the face of it, everything looks fine to the system, as long as the AI model is doing what it's functionally supposed to. But the prediction or decision is wrong. These errors are silent because the system does not know there is an error. This can be much worse than the case in which an AI model does not produce results, because organizations can take a long time to realize that their AI system is flawed. Then AI risk becomes AI failures that can have disastrous consequences.

Robust Intelligence essentially designed an AI firewall, an idea that was previously considered impossible. Why is this such a technical challenge?

One of the reasons the AI firewall is such a challenge is that it goes against the paradigm of the ML community. The previous paradigm of the ML community was that to eradicate errors, more data, including bad data, had to be fed to the models. By doing this, the models would train on that data and learn to correct mistakes themselves. The problem with this approach is that it dramatically degrades the model's accuracy. The best-known results for images, for example, drop the accuracy of the AI model from 98.5% to around 37%.

The AI firewall offers a different solution. We decouple the problem of identifying an error from the task of making a prediction, which means the firewall can focus on a single job: determining whether a data point will produce an erroneous prediction.
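The decoupling idea can be illustrated with a toy sketch. This is not Robust Intelligence's implementation; the class name, the wrapper design, and the simple out-of-distribution heuristic are all hypothetical, standing in for a far more sophisticated error predictor. The point is only the structure: a separate check runs on each input before the model's prediction is trusted.

```python
import numpy as np

class FirewallWrappedModel:
    """Hypothetical sketch: decouple error detection from prediction.

    A separate "firewall" check decides whether an input is likely to
    produce an erroneous prediction. Here the check is a crude heuristic
    (flag inputs far from the training distribution); a real system
    would use a learned error predictor.
    """

    def __init__(self, model, train_inputs, threshold=3.0):
        self.model = model
        self.mean = train_inputs.mean(axis=0)
        self.std = train_inputs.std(axis=0) + 1e-9
        self.threshold = threshold

    def is_suspect(self, x):
        # Flag inputs that deviate strongly from the training data.
        z_scores = np.abs((x - self.mean) / self.std)
        return bool(z_scores.max() > self.threshold)

    def predict(self, x):
        if self.is_suspect(x):
            return None  # blocked: likely to yield an erroneous prediction
        return self.model(x)

# Usage: a toy "model", plus one normal input and one corrupted input.
train = np.random.default_rng(0).normal(0.0, 1.0, size=(1000, 4))
fw = FirewallWrappedModel(model=lambda x: float(x.sum()), train_inputs=train)

normal_input = np.zeros(4)
outlier_input = np.array([0.0, 0.0, 0.0, 50.0])  # e.g. a misplaced decimal
print(fw.predict(normal_input))   # model output passes through
print(fw.predict(outlier_input))  # None: blocked by the firewall
```

Note how the detector knows nothing about the model's internals: it only asks whether this data point looks like one the model can be trusted on, which is exactly the separation of concerns described above.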

This was a challenge in itself, because making a reliable judgment about a single data point is hard. There are many reasons why models make mistakes, so building technology that can predict these mistakes has been no easy task. We are very lucky to have the engineers we have.

How can the system help prevent AI bias?

Model bias arises from a discrepancy between the data the model was trained on and the data it uses to make predictions. Going back to AI risk, bias is a major problem attributed to silent errors. For example, this is often a problem with underrepresented populations. A model may be biased because it has seen less data from a given population, which greatly affects that model's performance and the accuracy of its predictions for that group. The AI firewall can alert organizations to these data discrepancies and help the model make the right decisions.
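One concrete form of the train/serve discrepancy described above is underrepresentation: a group that is rare in training data but common in production traffic. A minimal sketch of such a check (the function name, the 2x flagging rule, and the label-array interface are all assumptions for illustration, not Robust Intelligence's actual method):

```python
import numpy as np

def representation_gap(train_groups, serve_groups):
    """Hypothetical sketch: flag groups whose share of serving traffic
    exceeds their share of the training data by more than 2x, i.e.
    groups the model has seen comparatively little of.

    Both arguments are sequences of group labels. Returns a dict
    {group: (train_share, serve_share)} for flagged groups.
    """
    def shares(labels):
        groups, counts = np.unique(np.asarray(labels), return_counts=True)
        return dict(zip(groups.tolist(), (counts / counts.sum()).tolist()))

    train_shares, serve_shares = shares(train_groups), shares(serve_groups)
    flagged = {}
    for group, serve_share in serve_shares.items():
        train_share = train_shares.get(group, 0.0)
        if serve_share > 2 * train_share:
            flagged[group] = (train_share, serve_share)
    return flagged

# Usage: group "B" is 2% of training data but 30% of serving traffic.
train = ["A"] * 98 + ["B"] * 2
serve = ["A"] * 70 + ["B"] * 30
print(representation_gap(train, serve))
```

Here group "B" would be flagged, since its serving share (30%) far exceeds its training share (2%); a monitoring system could then alert the organization before the model's predictions for that group are trusted.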

What are some of the other risks to organizations that an AI firewall helps prevent?

Any business using AI to automate decisions, especially critical decisions, automatically introduces risk. Bad data can be as minor as a zero entered instead of a one, yet have significant consequences. From incorrect medical predictions to faulty loan decisions, the AI Firewall helps organizations prevent these risks entirely.

Is there anything else you would like to share about Robust Intelligence?

Robust Intelligence is growing rapidly, and we get many great candidates applying for positions. But something I really want to emphasize for people considering applying is that the most important quality we look for in candidates is passion for the mission. We come across a lot of candidates who are technically strong, so it really comes down to whether they're genuinely passionate about eliminating AI risk to make the world a safer and better place.

In the world we are heading towards, many decisions that are currently made by humans will be automated. Like it or not, it’s a fact. Given this, all of us at Robust Intelligence want automated decisions to be made responsibly. So anyone who is enthusiastic about making an impact, who understands how it can affect people’s lives, is a candidate we are looking for to join Robust Intelligence. We are looking for this passion. We are looking for the people who will create this technology that the whole world will use.

Thank you for this great interview; I enjoyed hearing your views on AI bias prevention and the need for an AI firewall. Readers who want to learn more should visit Robust Intelligence.
