Alumni Q&A with Wojciech Zaremba, Co-Founder of OpenAI

By Sarah Ward


In 2012, when Wojciech Zaremba began researching potential PhD programs, “deep learning as a branch of artificial intelligence was far from being popular,” he says. “Frankly, there were only three universities in the world that were cultivating this domain: University of Toronto, Université de Montréal, and New York University.” Zaremba had studied mathematics and computer science at the University of Warsaw; he was twenty-four and finishing up his master’s degree at the École Polytechnique in Paris. The next logical step was a PhD, but where? For Zaremba, the decision would come down to an institution’s faculty. “In my opinion,” he says, “the most important factor in choosing a PhD program is your advisor and the chemistry between you; everything else is secondary.” After his acceptance into Courant, Zaremba met with a current student and left that conversation thinking, “I want to be at Courant.” He got his wish. Zaremba enrolled in the computer science PhD program in September of 2013, working alongside Rob Fergus in the CILVR Lab (Computational Intelligence, Learning, Vision, and Robotics).


“Wojciech was a dazzling student during his time at NYU,” says Professor Fergus. “It was clear that he was going to play a big role in the future of AI.” That “big role” materialized even sooner than expected. In late 2015, while still completing his PhD, Zaremba was announced as a founding member of a new venture backed by a $1 billion commitment from Sam Altman, Elon Musk, Peter Thiel, and others. The venture, OpenAI, was initially formed as a non-profit research firm with a goal “to advance digital intelligence in the way that is most likely to benefit humanity.” While studying at Courant, Zaremba had established himself as an up-and-coming researcher through a pair of internships at Google and Facebook. He turned down job offers from the tech giants—and the accompanying “borderline crazy” salaries—in order to join OpenAI as a co-founder and help advance its mission of AI for all.


In its first seven years, OpenAI has made significant contributions to the field of artificial intelligence and machine learning. The company has published hundreds of research papers, spun off a for-profit subsidiary, and received a $10 billion investment from Microsoft. OpenAI’s products and applications (the most famous being DALL·E and ChatGPT) have finally, decisively pushed AI to the forefront of the cultural conversation. When Zaremba and I spoke on a video conference at the beginning of this year, the newest model of ChatGPT was setting the internet ablaze. The recent launch of GPT-4 proves that the fire rages on, with extravagant praise, dire warnings, or some combination of the two published on every major platform.


In his role as co-founder and research scientist, Zaremba leads the team responsible for the human-feedback infrastructure that guides model behavior through reinforcement learning. He also takes advantage of the model's language services himself. “I use the model to help with my writing,” he says. “It clarifies my thinking; it helps me find grammatical errors; it helps me write outlines. I literally use it every day.” OpenAI is hoping the wider public will begin to integrate these products into their daily routines as well. In Zaremba’s opinion, collaboration with AI will lead to an “explosion of human creativity” that he compares to the invention of electricity. In our interview, Zaremba discusses his time at Courant as well as his hopes—and fears—for the future of artificial intelligence.


When were you first drawn toward your research area?

Since I was a kid, I was always quite passionate about mathematics. I enjoyed playing with numbers and with proofs. At some point, I realized that mathematics combines remarkably well with computer science. It turns out that math, which is fairly abstract on its own, can transform things in the real world when combined with computer science. It becomes a superpower. The ability to transform computing power into reality is fundamental to artificial intelligence, almost like a law of nature. I realized that artificial intelligence has tremendous potential to redefine the boundaries of what it means to be a human. That was interesting to me; it seemed like an impactful domain to work in.


Tell me about your time at NYU Courant. What do you remember most vividly?

The majority of PhD work has to do with interaction with your PhD advisor and the other students working with your advisor, so that’s what I remember the most. It was energizing. When I was working at NYU, my sleep schedule was so unusual. I used to go to sleep at 7am and wake up at 5pm excited to focus and work uninterrupted throughout the night. That energy around Courant was remarkable to me. Rob Fergus especially was so passionate and available to his students. Sometimes I would message him on a Sunday and he’d reply, “Yeah I’m here, let’s meet in the lobby in 15 minutes.” This work is clearly coming from a place of passion, so that was very motivating.


You have said in previous interviews that you see fostering collaboration as your “superhuman skill”—could you speak more to that?

At different stages of my life, I learned various things. As a kid, it was about raw technical skill and problem-solving. That’s what my mind was drawn toward. As I’ve gotten older, I realize that you gain tremendous magnification by having many people working together. There is a misconception—present even within the scientific community to some extent—that there is a single clever idea or equation to solve, and then you figure it out and plug it into a computer, and you suddenly have a super smart machine. In reality, it turns out that making progress in AI is all about combining multiple efforts together. The progress requires incredible engineering, incredible research, and many other domains that were not as evident to me in the past. You have to consider accessibility, for example, when making AI that is available to the public.


You’re emphasizing the importance of interdisciplinary collaboration to scientific progress. You have also discussed the potential in collaboration between humans and OpenAI’s products, such as ChatGPT and DALL·E. Do you see these products as tools for human expression?

There will be various stages. We are now in the stage where, all of a sudden, AI-assisted work starts spreading out. That’s just the beginning. It seems to me that writers who leverage AI will be in better shape than writers who don't leverage AI. A writer can ask the model “Where should I divide the chapters? Can you rearrange them into that order?” or else say “Tomorrow, I will add a character that reflects these values, could you visualize what this character might look like?” So you can go back and forth in this process. This type of text generation might one day help in creating blockbuster movies. A filmmaker could ask the model, “Can you make this scene more heartwarming?” Or a visual artist could create a gallery exhibition much more quickly. I don't know how things will evolve, but we are already seeing people leverage their skills. Things that were taking artists hours in Photoshop, they can now do way, way quicker with the help of AI. It's almost like leaning into the technology rather than trying to fight against it. I actually think the world will see an explosion of human creativity.


How do you respond to criticism that OpenAI dedicates its resources to replicating artistic pursuits when the company could focus on training models in tasks that are less creative?

Look at a book like I, Robot, where Isaac Asimov describes these huge robots as big as buildings that cannot speak, or can only speak about the atomic mass of salt or something like that. This is fiction, but it reflects the opinion of his time: it is easy to make robots; it is way harder to make conversational agents. This did not turn out to be the case. So people in AI were exploring all sorts of domains, and some of them were able to make progress and some of them were not. Tremendous progress has happened as a consequence of training on the large amounts of available data. It turns out that models are excellent at some skills and lacking in others.


Which skills are the models making significant progress in? 

In terms of specific applications, one place that AI may have a huge impact is therapy. Speaking from my own experience, I once had some disagreement with my girlfriend, so we set up a conversation between myself, my girlfriend, and AI. The AI was empathetic, making sure that we were expressing ourselves and listening to one another. My girlfriend and I were both stuck seeing our own version of reality, but that conversation helped us see other potential solutions. I suspect that it’s possible to train a model on the latest guidelines about facilitation, conflict resolution, and so on to build an AI coach which will be at the level of our best therapists. At the moment, therapy is only available to people who can afford it, but this may be a way to guide more people towards better relationships with themselves and better connections with others. Obviously some therapists may object to it. 


Do you worry about AI making certain jobs obsolete?

Certainly there are some groups of people who will have to adapt. It might be a case of combining their skills with AI. Some activities in their raw form—as they were before—may be valued less, but combined capabilities are likely to be worth way more. AI will create so many opportunities that did not exist before. I do believe that AI will eventually be utilized in every discipline in some way.


What are some other potential applications of this technology?

I gave you the example of therapy; another field I'm particularly excited about is medicine and healthcare. In the current medical system, you practically have to be collapsing in order to be taken care of. I myself have been misdiagnosed in the past. I had an issue with sleep apnea and it took almost a year to find a solution. In my case, the cure was relatively simple and cheap. But still it was difficult and expensive to get good medical advice. I believe a huge number of people have chronic health problems that they are not aware of or that are not properly diagnosed. Medicine as a field is quite fragmented—it’s like that old joke about the doctor who specializes in the right foot and the other doctor who specializes in the left foot. The body is connected. A problem in one part of the body might show up in a totally different part, but the medical domain is so complicated that no one human could fully comprehend it. I see no reason why it shouldn’t be possible to get to a point where anyone can pick up their phone and speak with an AI specialist. The AI can say “Try this solution. Try that. Can you show me a picture of your knee? Can you leave the phone next to your bed tonight so I can hear if your snoring has improved?” It would be cool to get to that point.


Could you speak to the application of AI in education? When you look back at your PhD, did you struggle with any tasks that might now be handed off to one of your models?

One that may be obvious is copywriting. We can already see that this is alive today. I use ChatGPT to help with my writing every day. I think that actually it will go way further. I think we’ll get to a place where scientific endeavors will be powered by and assisted by AI. For instance, a student or scientist would ask the AI, “Can you find me all the papers on this topic? Can you pull the table of numbers and plot the value versus this number of layers? What are the fallacies of this outlier?” And then boom, you have the data from a hundred papers processed in one place. We are definitely at a time in our civilization's development when so much knowledge is produced that it's actually really hard to make sense of it all. You see this in academic circles all the time—people developing equivalent subjects but calling them by totally different names, and it might take years for them to even learn of each other’s existence.


Artificial intelligence will also be helpful in removing the language barriers between researchers, right? 

Yes, that’s very exciting. Very exciting.


Does being multilingual yourself change the way you approach training the language models?

Let’s see. I strongly believe that people who don't have the most typical background are likely to be more adaptable. Experience in different sorts of domains will prepare you to perform well in a domain you haven’t seen. It’s the same with languages. But more than any specific language, I think the most important skills are learning to think quickly and learning to be resilient. Those are the most valuable, and I think they are learnable.


Do you have advice for current students learning these skills?

The first fundamental component, which may be the hardest, is overcoming a sense of hopelessness. Many women especially are told that they cannot study math or computer science, that they can’t compete. A big part of personal development is learning to disregard this. Of course, you still have to be realistic with your expectations. It’s almost like a mathematical formula: you start with something you have a decent chance at succeeding in, and then you try something twice as hard, and then you double it and double it. You must also learn to be at peace with the fact that the challenge is part of the journey. We are familiar with success stories of these remarkable figures and Nobel Prize winners, but their failures were part of the process as well. They learned from their failures and continued onward.


OpenAI states that its mission is creating technology that benefits humanity. How is the company ensuring that this is the case?

The ethical concerns are extremely, extremely, extremely important. The technology we have now—and the technology on the way—is very powerful. It would be a shame if all of our hard work became more harmful than helpful. But I believe that the opposite will be true; I want the opposite to be true; I am working for the opposite to be true. OpenAI has made a tremendous effort to think about AI from a policy perspective. There is a whole internal team thinking about the economic risks, the ethical risks, how governments should be involved in the technology, and so on. They have sophisticated opinions, and they work on this topic full time.


Can you recall an occasion when the model said something that took you by surprise? 

Many times! It was exciting when we got to the point with the model that some of its jokes were funny. For a while, the model could mimic the structure of a joke, but the jokes didn’t make any sense. Then one day, it made a machine learning joke about a chicken crossing the road. It was joking about itself. I also remember once working on an earlier version of ChatGPT when suddenly the model invented its own name—it was like Aurelius or something classical. We asked the model why that name and it made some reference to Greek mythology. I was like “Wow. That is actually a nice name.”

(When asked, the current model of ChatGPT did not recall this specific incident but suggested that Athena or Hermes might be a fitting mythological name for itself. Athena because she is “often associated with wisdom, intelligence, and strategic thinking;” Hermes because he is “the messenger of the gods and said to be quick-witted and cunning.”)


You mentioned Asimov’s I, Robot earlier. Do you have a favorite fictional depiction of AI?

When I look at sci-fi books and movies, I find them interesting but not very reflective of reality. Many of them lean toward dystopian negativity. But I think—I would actually assign a decent probability to it—that AI will turn out to be one of the best things that ever happened to the human race. Let me make an analogy to an innovation of the past. Think about something like electricity. When electricity was introduced, people were extremely scared of it. There were demonstrations showing how you could kill an elephant with electricity. So when the electric companies said “We’re going to put electricity in the walls of your home,” people freaked out. They asked “Why would I want something in the house that can kill me and my kids?” So I think that kind of shows how things evolve. Over time, you can see electricity transforming from a threat to a utility. It becomes a basic need.


This conversation has been condensed and edited for clarity. Photo by Grenobli.