The Infra Play #121: The age of AI research
I've often advocated for adopting the concept of mental models that Charlie Munger popularized as the best way to assess opportunities. Rather than focusing on facts alone, thinking through different mental models gives you the ability to evaluate a situation from multiple points of view. When it comes to AI, that means we cannot ignore how AI researchers see the world and how they think about the role they have to play in the coming years.
For the purpose of this deep dive, we will think through the point of view of Ilya Sutskever, a man who has played a key role in two important moments in the history of AI research (he co-authored the AlexNet paper and co-founded OpenAI). He is currently the founder and CEO of Safe Superintelligence (SSI), a well-funded AI research lab focused on going straight for advanced AI models.
The key takeaway
For tech sales and industry operators: "There are more AI research companies than ideas" is your competitive intelligence. When everyone's pitching the same scaling narrative, the differentiated vendors are those building scaffolding that makes specialized applications actually work in production. Find them. They're the ones who will still exist in five years.
For investors and founders: Most of the discussion around AI on the investor side over the last year has been concentrated on compute, as business leaders tried to turn the concept of scaling into an actionable investment narrative. The obvious question is what happens if this directional bet turns out to be a very inefficient way to progress toward what most actually want: AI capable of delivering significant economic advantages to its users. It's a bit crazy to say this, but compute is quickly becoming a commodity input; genuine research insight is not. The catch is that the moment actual breakthroughs come out of that research, several other organizations will likely catch up quickly, and governments will get involved to reduce the risks for the general public. So picking winners here is difficult, but it's safe to say that the infrastructure buildout trade is over (as the Oracle stock dump demonstrated), and the next alpha is in the best research teams (assuming you can actually invest in them). For founders, it's important to highlight that building smaller companies around narrow, specialized AI is still a strong play, as long as the category can support multiple winners.
Age of research
The quotes below are taken from Ilya's recent appearance on the Dwarkesh Patel podcast.
Ilya Sutskever: You know what’s crazy? That all of this is real.
Dwarkesh Patel: Meaning what?
Ilya Sutskever: Don’t you think so? All this AI stuff and all this Bay Area… that it’s happening. Isn’t it straight out of science fiction?
Dwarkesh Patel: Another thing that’s crazy is how normal the slow takeoff feels. The idea that we’d be investing 1% of GDP in AI, I feel like it would have felt like a bigger deal, whereas right now it just feels...
Ilya Sutskever: We get used to things pretty fast, it turns out. But also it’s kind of abstract. What does it mean? It means that you see it in the news, that such and such company announced such and such dollar amount. That’s all you see. It’s not really felt in any other way so far.
It's important to understand that while OpenAI, Anthropic, and Google have productized and monetized LLMs exceptionally well over the last two years, the leaders behind AI research mostly see this as a necessity: it funds the compute required to do their work. From their point of view, the AI bubble is simply a reflection of governments and enterprises willing to commit ever-larger budgets to scale this scientific breakthrough. This is a moment many have dreamed of, but now, in the midst of it, they realize it's just table stakes toward the real goal.
Dwarkesh Patel: When do you expect that impact? I think the models seem smarter than their economic impact would imply.
Ilya Sutskever: Yeah. This is one of the very confusing things about the models right now. How to reconcile the fact that they are doing so well on evals? You look at the evals and you go, “Those are pretty hard evals.” They are doing so well. But the economic impact seems to be dramatically behind. It’s very difficult to make sense of, how can the model, on the one hand, do these amazing things, and then on the other hand, repeat itself twice in some situation?
An example would be, let’s say you use vibe coding to do something. You go to some place and then you get a bug. Then you tell the model, “Can you please fix the bug?” And the model says, “Oh my God, you’re so right. I have a bug. Let me go fix that.” And it introduces a second bug. Then you tell it, “You have this new second bug,” and it tells you, “Oh my God, how could I have done it? You’re so right again,” and brings back the first bug, and you can alternate between those. How is that possible? I’m not sure, but it does suggest that something strange is going on.
I have two possible explanations. The more whimsical explanation is that maybe RL training makes the models a little too single-minded and narrowly focused, a little bit too unaware, even though it also makes them aware in some other ways. Because of this, they can’t do basic things.
But there is another explanation. Back when people were doing pre-training, the question of what data to train on was answered, because that answer was everything. When you do pre-training, you need all the data. So you don’t have to think if it’s going to be this data or that data.
But when people do RL training, they do need to think. They say, “Okay, we want to have this kind of RL training for this thing and that kind of RL training for that thing.” From what I hear, all the companies have teams that just produce new RL environments and just add it to the training mix. The question is, well, what are those? There are so many degrees of freedom. There is such a huge variety of RL environments you could produce.
One thing you could do, and I think this is something that is done inadvertently, is that people take inspiration from the evals. You say, “Hey, I would love our model to do really well when we release it. I want the evals to look great. What would be RL training that could help on this task?” I think that is something that happens, and it could explain a lot of what’s going on.
If you combine this with generalization of the models actually being inadequate, that has the potential to explain a lot of what we are seeing, this disconnect between eval performance and actual real-world performance, which is something that we don’t today even understand, what we mean by that.
This is a provocative statement, aimed at what Ilya perceives as an obsession with benchmarks over the practical behavior of the models. There is a perception among users that while the models are getting demonstrably smarter on specific topics (helpfully visualized with appropriate evals), there hasn't been a consistent, significant step up in answer quality across large context windows. I think Ilya is both right and wrong here, because as a power user of two specific features (coding agents and deep research), it's quite obvious to me that over the past year, newly released models have consistently delivered better outcomes. At the same time, it's easy to see why the inability of these models to maintain coherence over long contexts remains a significant concern.
Ilya Sutskever: Somehow a human being, after even 15 years with a tiny fraction of the pre-training data, they know much less. But whatever they do know, they know much more deeply somehow. Already at that age, you would not make mistakes that our AIs make.
There is another thing. You might say, could it be something like evolution? The answer is maybe. But in this case, I think evolution might actually have an edge. I remember reading about this case. One way in which neuroscientists can learn about the brain is by studying people with brain damage to different parts of the brain. Some people have the most strange symptoms you could imagine. It’s actually really, really interesting.
One case that comes to mind that’s relevant. I read about this person who had some kind of brain damage, a stroke or an accident, that took out his emotional processing. So he stopped feeling any emotion. He still remained very articulate and he could solve little puzzles, and on tests he seemed to be just fine. But he felt no emotion. He didn’t feel sad, he didn’t feel anger, he didn’t feel animated. He became somehow extremely bad at making any decisions at all. It would take him hours to decide on which socks to wear. He would make very bad financial decisions.
What does it say about the role of our built-in emotions in making us a viable agent, essentially? To connect to your question about pre-training, maybe if you are good enough at getting everything out of pre-training, you could get that as well. But that’s the kind of thing which seems... Well, it may or may not be possible to get that from pre-training.
Dwarkesh Patel: What is “that”? Clearly not just directly emotion. It seems like some almost value function-like thing which is telling you what the end reward for any decision should be. You think that doesn’t sort of implicitly come from pre-training?
Ilya Sutskever: I think it could. I’m just saying it’s not 100% obvious.
Dwarkesh Patel: But what is that? How do you think about emotions? What is the ML analogy for emotions?
Ilya Sutskever: It should be some kind of a value function thing. But I don’t think there is a great ML analogy because right now, value functions don’t play a very prominent role in the things people do.
Dwarkesh Patel: It might be worth defining for the audience what a value function is, if you want to do that.
Ilya Sutskever: Certainly, I’ll be very happy to do that. When people do reinforcement learning, the way reinforcement learning is done right now, how do people train those agents? You have your neural net and you give it a problem, and then you tell the model, “Go solve it.” The model takes maybe thousands, hundreds of thousands of actions or thoughts or something, and then it produces a solution. The solution is graded.
And then the score is used to provide a training signal for every single action in your trajectory. That means that if you are doing something that goes for a long time—if you’re training a task that takes a long time to solve—it will do no learning at all until you come up with the proposed solution. That’s how reinforcement learning is done naively. That’s how o1, R1 ostensibly are done.
The value function says something like, “Maybe I could sometimes, not always, tell you if you are doing well or badly.” The notion of a value function is more useful in some domains than others. For example, when you play chess and you lose a piece, I messed up. You don’t need to play the whole game to know that what I just did was bad, and therefore whatever preceded it was also bad.
The value function lets you short-circuit the wait until the very end. Let’s suppose that you are doing some kind of a math thing or a programming thing, and you’re trying to explore a particular solution or direction. After, let’s say, a thousand steps of thinking, you concluded that this direction is unpromising. As soon as you conclude this, you could already get a reward signal a thousand timesteps previously, when you decided to pursue down this path. You say, “Next time I shouldn’t pursue this path in a similar situation,” long before you actually came up with the proposed solution.
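To make Ilya's description concrete, here is a toy Python sketch of the two credit-assignment schemes he contrasts. Everything in it is made up for illustration: the states are integers, the value estimates are hand-set rather than learned, and no real environment is involved.

```python
# Toy illustration: credit assignment with and without a value function.

def naive_rl_signal(trajectory, final_reward):
    """Naive RL: every action waits until the end of the episode and
    receives the same grade, however long the rollout was."""
    return [final_reward] * len(trajectory)

def td_signal(trajectory, value, final_reward):
    """With a value function, each step gets feedback immediately:
    the change in estimated value acts as a local learning signal
    (a one-step temporal-difference error, no discounting)."""
    signals = []
    for t in range(len(trajectory)):
        v_now = value[trajectory[t]]
        v_next = final_reward if t == len(trajectory) - 1 else value[trajectory[t + 1]]
        signals.append(v_next - v_now)
    return signals

# A chess-like story: the estimated value collapses at step 2 (the lost
# piece), long before the game actually ends in a loss.
states = [0, 1, 2, 3]
value = {0: 0.5, 1: 0.5, 2: -0.8, 3: -0.9}  # hand-set estimates

print(naive_rl_signal(states, final_reward=-1.0))  # [-1.0, -1.0, -1.0, -1.0]
print(td_signal(states, value, final_reward=-1.0))  # blame concentrates at the blunder
```

In the naive case the blunder and the reasonable moves before it all receive the same -1.0 at the very end; with the value function, the large negative signal lands exactly on the step where the position was judged to go bad, which is the "short-circuit" Ilya describes.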
It's important to understand that while AI researchers are not trying to build a synthetic brain, they are open to any technique and idea that could improve the behavior and knowledge of a model. Building these models is not a settled science with an obvious path; it's a constant game of trial and error. Subtle (or major) differences in how researchers approach learning can have massive implications for how the models behave (and ultimately think).
Dwarkesh Patel: People have been talking about scaling data, scaling parameters, scaling compute. Is there a more general way to think about scaling? What are the other scaling axes?
Ilya Sutskever: Here’s a perspective that I think might be true. The way ML used to work is that people would just tinker with stuff and try to get interesting results. That’s what’s been going on in the past.
Then the scaling insight arrived. Scaling laws, GPT-3, and suddenly everyone realized we should scale. This is an example of how language affects thought. “Scaling” is just one word, but it’s such a powerful word because it informs people what to do. They say, “Let’s try to scale things.” So you say, what are we scaling? Pre-training was the thing to scale. It was a particular scaling recipe.
The big breakthrough of pre-training is the realization that this recipe is good. You say, “Hey, if you mix some compute with some data into a neural net of a certain size, you will get results. You will know that you’ll be better if you just scale the recipe up.” This is also great. Companies love this because it gives you a very low-risk way of investing your resources.
It’s much harder to invest your resources in research. Compare that. If you research, you need to be like, “Go forth researchers and research and come up with something”, versus get more data, get more compute. You know you’ll get something from pre-training.
Indeed, it looks like, based on various things some people say on Twitter, maybe it appears that Gemini have found a way to get more out of pre-training. At some point though, pre-training will run out of data. The data is very clearly finite. What do you do next? Either you do some kind of souped-up pre-training, a different recipe from the one you’ve done before, or you’re doing RL, or maybe something else. But now that compute is big, compute is now very big, in some sense we are back to the age of research.
Maybe here’s another way to put it. Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling—maybe plus or minus, let’s add error bars to those years—because people say, “This is amazing. You’ve got to scale more. Keep scaling.” The one word: scaling.
But now the scale is so big. Is the belief really, “Oh, it’s so big, but if you had 100x more, everything would be so different?” It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.
This is an important point because it's a different perspective than the one that the CEOs of frontier labs are selling to the market. Their version of the story is that scaling clearly works, so they will keep training scaled models and then monetizing them to recoup the training costs. Since the focus is also on delivering products to the market, a lot of compute is going toward inference and toward making the products more usable or efficient. The statement here from Ilya is that compute has become so massive that it's no longer going to magically solve problems for the labs; they need to do actual groundbreaking research work. Since we've not yet seen scaling laws stop working, this is more of a hunch and a thesis than an obvious truth in the field.
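To see why another 100x might underwhelm, consider the standard power-law shape of scaling laws, where loss falls as a small negative power of compute plus an irreducible floor. The constants and the compute figure below are illustrative placeholders, not measured values:

```python
# Loss under a compute power law: L(C) = a * C^(-b) + irreducible.
# a, b, and the irreducible term are illustrative placeholders.
def loss(compute, a=10.0, b=0.05, irreducible=1.7):
    return a * compute ** (-b) + irreducible

c = 1e25  # rough order-of-magnitude guess at frontier training FLOPs
for mult in (1, 10, 100):
    print(f"{mult:>3}x compute -> loss {loss(mult * c):.4f}")
# Each additional 10x of compute shaves off a smaller slice of the
# remaining loss as the curve flattens toward the irreducible floor.
```

The diminishing absolute improvement per 10x is the intuition behind "it would be different, for sure, but not transformed": once you are far along the curve, buying more compute is a progressively weaker lever than finding a better recipe.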
Other AI researchers like Yann LeCun have been vocal about the idea that LLMs are a dead end for AGI and instead are pursuing different approaches like Advanced Machine Intelligence (AMI). Yann recently left Meta and is seeking funding to develop AI systems that understand the physical world, maintain persistent memory, reason effectively, and plan complex action sequences.
Ilya Sutskever: We’ve already witnessed a transition from one type of scaling to a different type of scaling, from pre-training to RL. Now people are scaling RL. Now based on what people say on Twitter, they spend more compute on RL than on pre-training at this point, because RL can actually consume quite a bit of compute. You do very long rollouts, so it takes a lot of compute to produce those rollouts. Then you get a relatively small amount of learning per rollout, so you really can spend a lot of compute.
I wouldn’t even call it scaling. I would say, “Hey, what are you doing? Is the thing you are doing the most productive thing you could be doing? Can you find a more productive way of using your compute?” We’ve discussed the value function business earlier. Maybe once people get good at value functions, they will be using their resources more productively. If you find a whole other way of training models, you could say, “Is this scaling or is it just using your resources?” I think it becomes a little bit ambiguous.
In the sense that, when people were in the age of research back then, it was, “Let’s try this and this and this. Let’s try that and that and that. Oh, look, something interesting is happening.” I think there will be a return to that.
Another loaded statement, this time directly critiquing the majority of AI researchers for being very inefficient with their compute. At a time when OpenAI is trying to build out as much capacity as possible across the world, this raises questions about the vision behind this singular focus on scaling.
Ilya Sutskever: One of the things that you’ve been asking about is how can the teenage driver self-correct and learn from their experience without an external teacher? The answer is that they have their value function. They have a general sense which is also, by the way, extremely robust in people. Whatever the human value function is, with a few exceptions around addiction, it’s actually very, very robust.
So for something like a teenager that’s learning to drive, they start to drive, and they already have a sense of how they’re driving immediately, how badly they are, how unconfident. And then they see, “Okay.” And then, of course, the learning speed of any teenager is so fast. After 10 hours, you’re good to go.
Dwarkesh Patel: It seems like humans have some solution, but I’m curious about how they are doing it and why is it so hard? How do we need to reconceptualize the way we’re training models to make something like this possible?
Ilya Sutskever: That is a great question to ask, and it’s a question I have a lot of opinions about. But unfortunately, we live in a world where not all machine learning ideas are discussed freely, and this is one of them. There’s probably a way to do it. I think it can be done. The fact that people are like that, I think it’s a proof that it can be done.
There may be another blocker though, which is that there is a possibility that the human neurons do more compute than we think. If that is true, and if that plays an important role, then things might be more difficult. But regardless, I do think it points to the existence of some machine learning principle that I have opinions on. But unfortunately, circumstances make it hard to discuss in detail.
Dwarkesh Patel: Nobody listens to this podcast, Ilya.
There are two important takeaways here. The first one is to note how AI researchers try to reason through concepts by clearly indicating what they have confidence in and what is pure speculation that could be very wrong. We don't see a lot of this on the commercial side of AI. The second one is related to his mysterious statement about "not all machine learning ideas being discussed freely." The obvious interpretation is that he sees this as a proprietary secret for his company and he can't discuss it because of his obligations. Though there may be more to it beneath the surface.
Ilya Sutskever: One consequence of the age of scaling is that scaling sucked out all the air in the room. Because scaling sucked out all the air in the room, everyone started to do the same thing. We got to the point where we are in a world where there are more companies than ideas by quite a bit. Actually on that, there is this Silicon Valley saying that says that ideas are cheap, execution is everything. People say that a lot, and there is truth to that. But then I saw someone say on Twitter something like, “If ideas are so cheap, how come no one’s having any ideas?” And I think it’s true too.
If you think about research progress in terms of bottlenecks, there are several bottlenecks. One of them is ideas, and one of them is your ability to bring them to life, which might be compute but also engineering. If you go back to the ‘90s, let’s say, you had people who had pretty good ideas, and if they had much larger computers, maybe they could demonstrate that their ideas were viable. But they could not, so they could only have a very, very small demonstration that did not convince anyone. So the bottleneck was compute.
Then in the age of scaling, compute has increased a lot. Of course, there is a question of how much compute is needed, but compute is large. Compute is large enough such that it’s not obvious that you need that much more compute to prove some idea. I’ll give you an analogy. AlexNet was built on two GPUs. That was the total amount of compute used for it. The transformer was built on 8 to 64 GPUs. No single transformer paper experiment used more than 64 GPUs of 2017, which would be like, what, two GPUs of today? The ResNet, right? You could argue that the o1 reasoning was not the most compute-heavy thing in the world.
So for research, you definitely need some amount of compute, but it’s far from obvious that you need the absolutely largest amount of compute ever for research. You might argue, and I think it is true, that if you want to build the absolutely best system then it helps to have much more compute. Especially if everyone is within the same paradigm, then compute becomes one of the big differentiators.
If the framing that scaling becomes a trap when treated as the main path to progress is correct, then the real value will sit with the most talented and creative AI researchers in the space.
Dwarkesh Patel: So then why is your default plan to straight shot superintelligence? Because it sounds like OpenAI, Anthropic, all these other companies, their explicit thinking is, “Look, we have weaker and weaker intelligences that the public can get used to and prepare for.” Why is it potentially better to build a superintelligence directly?
Ilya Sutskever: I’ll make the case for and against. The case for is that one of the challenges that people face when they’re in the market is that they have to participate in the rat race. The rat race is quite difficult in that it exposes you to difficult trade-offs which you need to make. It is nice to say, “We’ll insulate ourselves from all this and just focus on the research and come out only when we are ready, and not before.” But the counterpoint is valid too, and those are opposing forces. The counterpoint is, “Hey, it is useful for the world to see powerful AI. It is useful for the world to see powerful AI because that’s the only way you can communicate it.”
Dwarkesh Patel: Well, I guess not even just that you can communicate the idea—
Ilya Sutskever: Communicate the AI, not the idea. Communicate the AI.
Dwarkesh Patel: What do you mean, “communicate the AI”?
Ilya Sutskever: Let’s suppose you write an essay about AI, and the essay says, “AI is going to be this, and AI is going to be that, and it’s going to be this.” You read it and you say, “Okay, this is an interesting essay.” Now suppose you see an AI doing this, an AI doing that. It is incomparable. Basically I think that there is a big benefit from AI being in the public, and that would be a reason for us to not be quite straight shot.
One of the theories floating around the industry right now is that the labs are sitting on significantly more capable models, and that what gets released are watered-down versions suitable for public use. There is a good argument that this doesn't make much sense, because the labs need every competitive edge to survive in a very capital-intensive business. On the other hand, it's not difficult to see how AI researchers could view those models as "not ready" for the public, even though demonstrating progress is helpful.
Ilya Sutskever: Well I think on this point, even in the straight shot scenario, you would still do a gradual release of it, that’s how I would imagine it. Gradualism would be an inherent component of any plan. It’s just a question of what is the first thing that you get out of the door. That’s number one.
Number two, I believe you have advocated for continual learning more than other people, and I actually think that this is an important and correct thing. Here is why. I’ll give you another example of how language affects thinking. In this case, it will be two words that have shaped everyone’s thinking, I maintain. First word: AGI. Second word: pre-training. Let me explain.
The term AGI, why does this term exist? It’s a very particular term. Why does it exist? There’s a reason. The reason that the term AGI exists is, in my opinion, not so much because it’s a very important, essential descriptor of some end state of intelligence, but because it is a reaction to a different term that existed, and the term is narrow AI. If you go back to ancient history of gameplay and AI, of checkers AI, chess AI, computer games AI, everyone would say, look at this narrow intelligence. Sure, the chess AI can beat Kasparov, but it can’t do anything else. It is so narrow, artificial narrow intelligence. So in response, as a reaction to this, some people said, this is not good. It is so narrow. What we need is general AI, an AI that can just do all the things. That term just got a lot of traction.
The second thing that got a lot of traction is pre-training, specifically the recipe of pre-training. I think the way people do RL now is maybe undoing the conceptual imprint of pre-training. But pre-training had this property. You do more pre-training and the model gets better at everything, more or less uniformly. General AI. Pre-training gives AGI.
But the thing that happened with AGI and pre-training is that in some sense they overshot the target. If you think about the term “AGI”, especially in the context of pre-training, you will realize that a human being is not an AGI. Yes, there is definitely a foundation of skills, but a human being lacks a huge amount of knowledge. Instead, we rely on continual learning.
So when you think about, “Okay, so let’s suppose that we achieve success and we produce some kind of safe superintelligence.” The question is, how do you define it? Where on the curve of continual learning is it going to be?
I produce a superintelligent 15-year-old that’s very eager to go. They don’t know very much at all, a great student, very eager. You go and be a programmer, you go and be a doctor, go and learn. So you could imagine that the deployment itself will involve some kind of a learning trial-and-error period. It’s a process, as opposed to you dropping the finished thing.
The most bearish argument against LLMs today remains their inability to learn from context over time. Solving this, either in model training or through scaffolding (i.e. building applications that work around the issue), would be the next big leg up in the economic value generated by AI.
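As a sketch of what such scaffolding can look like, here is a minimal external-memory wrapper around a stateless model: distill each exchange into a note, persist it, and re-inject recent notes into the next prompt. The class, the prompt format, and `call_model` are all hypothetical illustration, not any lab's or vendor's actual design:

```python
# Minimal external-memory scaffold around a stateless model.
# `call_model` is a stand-in for any text-in/text-out LLM API.

class MemoryScaffold:
    def __init__(self, call_model, max_notes=5):
        self.call_model = call_model
        self.notes = []            # persisted "lessons learned"
        self.max_notes = max_notes  # how many notes to re-inject per call

    def ask(self, user_message):
        # Re-inject recent notes so the stateless model "remembers".
        context = "\n".join(f"- {n}" for n in self.notes[-self.max_notes:])
        prompt = f"Notes from earlier sessions:\n{context}\n\nUser: {user_message}"
        answer = self.call_model(prompt)
        # Distill the exchange into a note for future calls.
        note = self.call_model(
            f"Summarize in one line what to remember:\n{user_message} -> {answer}"
        )
        self.notes.append(note)
        return answer
```

This is the application-layer workaround: the model's weights never change, but the system around it accumulates state, which is exactly why Ilya frames true continual learning in the model itself as the bigger prize.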
Ilya Sutskever: One of the ways in which my thinking has been changing is that I now place more importance on AI being deployed incrementally and in advance. One very difficult thing about AI is that we are talking about systems that don’t yet exist and it’s hard to imagine them.
I think that one of the things that’s happening is that in practice, it’s very hard to feel the AGI. It’s very hard to feel the AGI. We can talk about it, but imagine having a conversation about how it is like to be old when you’re old and frail. You can have a conversation, you can try to imagine it, but it’s just hard, and you come back to reality where that’s not the case. I think that a lot of the issues around AGI and its future power stem from the fact that it’s very difficult to imagine. Future AI is going to be different. It’s going to be powerful. Indeed, the whole problem, what is the problem of AI and AGI? The whole problem is the power. The whole problem is the power.
When the power is really big, what’s going to happen? One of the ways in which I’ve changed my mind over the past year—and that change of mind, I’ll hedge a little bit, may back-propagate into the plans of our company—is that if it’s hard to imagine, what do you do? You’ve got to be showing the thing. You’ve got to be showing the thing. I maintain that most people who work on AI also can’t imagine it because it’s too different from what people see on a day-to-day basis.
I do maintain, here’s something which I predict will happen. This is a prediction. I maintain that as AI becomes more powerful, people will change their behaviors. We will see all kinds of unprecedented things which are not happening right now. I’ll give some examples. I think for better or worse, the frontier companies will play a very important role in what happens, as will the government. The kind of things that I think you’ll see, which you see the beginnings of, are companies that are fierce competitors starting to collaborate on AI safety. You may have seen OpenAI and Anthropic doing a first small step, but that did not exist. That’s something which I predicted in one of my talks about three years ago, that such a thing will happen. I also maintain that as AI continues to become more powerful, more visibly powerful, there will also be a desire from governments and the public to do something. I think this is a very important force, of showing the AI.
That’s number one. Number two, okay, so the AI is being built. What needs to be done? One thing that I maintain that will happen is that right now, people who are working on AI, I maintain that the AI doesn’t feel powerful because of its mistakes. I do think that at some point the AI will start to feel powerful actually. I think when that happens, we will see a big change in the way all AI companies approach safety. They’ll become much more paranoid. I say this as a prediction that we will see happen. We’ll see if I’m right. But I think this is something that will happen because they will see the AI becoming more powerful. Everything that’s happening right now, I maintain, is because people look at today’s AI and it’s hard to imagine the future AI.
There is a third thing which needs to happen. I’m talking about it in broader terms, not just from the perspective of SSI because you asked me about our company. The question is, what should the companies aspire to build? What should they aspire to build? There has been one big idea that everyone has been locked into, which is the self-improving AI. Why did it happen? Because there are fewer ideas than companies. But I maintain that there is something that’s better to build, and I think that everyone will want that.
It’s the AI that’s robustly aligned to care about sentient life specifically. I think in particular, there’s a case to be made that it will be easier to build an AI that cares about sentient life than an AI that cares about human life alone, because the AI itself will be sentient. And if you think about things like mirror neurons and human empathy for animals, which you might argue it’s not big enough, but it exists. I think it’s an emergent property from the fact that we model others with the same circuit that we use to model ourselves, because that’s the most efficient thing to do.
The best indicator of a clear step up in model performance will be when users start to have far more security concerns and essentially begin to fear AI. Whether that's positive or negative is a different topic.
Ilya Sutskever: Here’s one reason why I liked “AI that cares for sentient life”. We can debate on whether it’s good or bad. But if the first N of these dramatic systems do care for, love, humanity or something, care for sentient life, obviously this also needs to be achieved. This needs to be achieved. So if this is achieved by the first N of those systems, then I can see it go well, at least for quite some time.
Then there is the question of what happens in the long run. How do you achieve a long-run equilibrium? I think there is an answer there as well. I don’t like this answer, but it needs to be considered.
In the long run, you might say, “Okay, if you have a world where powerful AIs exist, in the short term you could say you have universal high income, and we’re all doing well.” But what do the Buddhists say? “Change is the only constant.” Things change. There is some kind of government or political structure, and it changes because these things have a shelf life. Some new form of government comes up and functions, and then after some time it stops functioning. That’s something we see happening all the time.
So I think for the long-run equilibrium, one approach is that you could say maybe every person will have an AI that will do their bidding, and that’s good. If that could be maintained indefinitely, that’s true. But the downside with that is then the AI goes and earns money for the person and advocates for their needs in the political sphere, and maybe then writes a little report saying, “Okay, here’s what I’ve done, here’s the situation,” and the person says, “Great, keep it up.” But the person is no longer a participant. Then you can say that’s a precarious place to be in.
I’m going to preface by saying I don’t like this solution, but it is a solution. The solution is if people become part-AI with some kind of Neuralink++. Because what will happen as a result is that now the AI understands something, and we understand it too, because now the understanding is transmitted wholesale. So now if the AI is in some situation, you are involved in that situation yourself fully. I think this is the answer to the equilibrium.
Do you understand now, tech anon? If you don't immerse yourself in different mental models, you'll miss some rather relevant pieces of information such as "if we can't force the models to behave, we might need to merge with AI."
Ilya Sutskever: The main thing that distinguishes SSI is its technical approach. We have a different technical approach that I think is worthy and we are pursuing it.
I maintain that in the end there will be a convergence of strategies. At some point, as AI becomes more powerful, it’s going to become more or less clear to everyone what the strategy should be. It should be something like: you need to find some way to talk to each other, and you want your first actual real superintelligent AI to be aligned and to somehow care for sentient life, care for people, be democratic, one of those, or some combination thereof.
I think this is the condition that everyone should strive for. That’s what SSI is striving for. I think that by this time, if not already, all the other companies will realize that they’re striving towards the same thing. We’ll see. I think the world will truly change as AI becomes more powerful. Things will be really different and people will be acting really differently.
I think the big question here is whether they are actually training an LLM or doing something very different. Long term it may not matter much: the moment they publish a whitepaper or release a product, the rest will catch up.
Dwarkesh Patel: By default, you would expect the company that has that model to be getting all these gains because they have the model that has the skills and knowledge that it’s building up in the world. What is the reason to think that the benefits of that would be widely distributed and not just end up at whatever model company gets this continuous learning loop going first?
Ilya Sutskever: Here is what I think is going to happen. Number one, let’s look at how things have gone so far with the AIs of the past. One company produced an advance and the other company scrambled and produced some similar things after some amount of time and they started to compete in the market and push the prices down. So I think from the market perspective, something similar will happen there as well.
We are talking about the good world, by the way. What’s the good world? It’s where we have these powerful human-like learners that are also… By the way, maybe there’s another thing we haven’t discussed on the spec of the superintelligent AI that I think is worth considering: you can make it narrow, and it can be useful and narrow at the same time. You can have lots of narrow superintelligent AIs.
But suppose you have many of them and you have some company that’s producing a lot of profits from it. Then you have another company that comes in and starts to compete. The way the competition is going to work is through specialization. Competition loves specialization. You see it in the market, you see it in evolution as well. You’re going to have lots of different niches and you’re going to have lots of different companies who are occupying different niches. In this world we might say one AI company is really quite a bit better at some area of really complicated economic activity and a different company is better at another area. And the third company is really good at litigation.
Dwarkesh Patel: Isn’t this contradicted by what human-like learning implies? It’s that it can learn…
Ilya Sutskever: It can, but you have accumulated learning. You have a big investment. You spent a lot of compute to become really, really good, really phenomenal, at this thing. Someone else spent a huge amount of compute and a huge amount of experience to get really good at some other thing. You apply a lot of human-like learning to get there, but now you are at this high point where someone else would say, “Look, I don’t want to start learning what you’ve learned.”
Dwarkesh Patel: I guess that would require many different companies to arrive at the human-like continual learning agent at the same time, so that they can start their tree searches in different branches. But if one company gets that agent first, or gets that learner first, it does then seem like… Well, if you just think about every single job in the economy, having an instance learning each one seems tractable for a single company.
Ilya Sutskever: That’s a valid argument. My strong intuition is that it’s not how it’s going to go. The argument says it will go this way, but my strong intuition is that it will not go this way. In theory, there is no difference between theory and practice. In practice, there is. I think that’s going to be one of those.
Dwarkesh Patel: A lot of people’s models of recursive self-improvement literally, explicitly state we will have a million Ilyas in a server that are coming up with different ideas, and this will lead to a superintelligence emerging very fast.
Do you have some intuition about how parallelizable the thing you are doing is? What are the gains from making copies of Ilya?
Ilya Sutskever: I don’t know. I think there’ll definitely be diminishing returns because you want people who think differently rather than the same. If there were literal copies of me, I’m not sure how much more incremental value you’d get. People who think differently, that’s what you want.
This is quite an important discussion, since it revolves around the idea that the next big step in AI research is to get models good enough to run "AI researcher agents" that can be scaled into millions of copies and invent AGI. Ilya is again being provocative, essentially arguing that the idea won't scale unless you can figure out how to make those millions of agents unique in their perspectives and approaches.
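Ilya's intuition about diversity can be made concrete with a toy simulation (entirely my own construction, nothing from the interview): if every "copy" explores the idea space with the same bias, the pool discovers far fewer unique ideas than the same number of agents with different biases.

```python
import random

def unique_ideas(num_agents, tries_per_agent, idea_space=1000, shared_seed=None):
    """Toy model: each agent explores a narrow slice of a 1000-idea space,
    centered on a point set by its random bias. Identical copies share one
    bias (shared_seed); diverse agents each get their own."""
    found = set()
    for agent in range(num_agents):
        rng = random.Random(shared_seed if shared_seed is not None else agent)
        center = rng.randrange(idea_space)
        for _ in range(tries_per_agent):
            # Each try lands within a narrow window around the agent's bias.
            found.add((center + rng.randrange(50)) % idea_space)
    return len(found)

copies = unique_ideas(100, 20, shared_seed=42)   # 100 identical "Ilyas"
diverse = unique_ideas(100, 20)                  # 100 different researchers
print(copies, diverse)
```

The identical pool can never find more distinct ideas than a single agent would, which is one way to read the "diminishing returns" point: duplicating the searcher duplicates the search.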
Let's close this article with what is probably the most useful lens on Ilya's AI-researcher mental model: the idea of "taste".
Dwarkesh Patel: Final question: What is research taste? You’re obviously the person in the world who is considered to have the best taste in AI research. You were a co-author on the biggest things that have happened in the history of deep learning, from AlexNet to GPT-3 and so on. What is it? How do you characterize how you come up with these ideas?
Ilya Sutskever: I can comment on this for myself. I think different people do it differently. One thing that guides me personally is an aesthetic of how AI should be, arrived at by thinking about how people are, but thinking about it correctly. It’s very easy to think about how people are incorrectly. So what does it mean to think about people correctly?
I’ll give you some examples. The idea of the artificial neuron is directly inspired by the brain, and it’s a great idea. Why? Because you say the brain has all these different organs, it has the folds, but the folds probably don’t matter. Why do we think that the neurons matter? Because there are many of them. It kind of feels right, so you want the neuron. You want some local learning rule that will change the connections between the neurons. It feels plausible that the brain does it.
The idea of the distributed representation. The idea that the brain learns from experience, therefore our neural net should learn from experience. You kind of ask yourself whether something is fundamental or not. It’s about how things should be.
I think that’s been guiding me a fair bit, thinking from multiple angles and looking for almost beauty, beauty and simplicity. Ugliness, there’s no room for ugliness. It’s beauty, simplicity, elegance, correct inspiration from the brain. All of those things need to be present at the same time. The more they are present, the more confident you can be in a top-down belief.
The top-down belief is the thing that sustains you when the experiments contradict you. Because if you trust the data all the time, well sometimes you can be doing the correct thing but there’s a bug. But you don’t know that there is a bug. How can you tell that there is a bug? How do you know if you should keep debugging or you conclude it’s the wrong direction? It’s the top-down. You can say things have to be this way. Something like this has to work, therefore we’ve got to keep going. That’s the top-down, and it’s based on this multifaceted beauty and inspiration by the brain.
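As a footnote to the "local learning rule" Ilya mentions when talking about the artificial neuron: the textbook example of such a rule is the Hebbian update, in which each connection changes using only the activity of the two neurons it joins ("neurons that fire together wire together"). A minimal sketch, purely illustrative and not a rule Ilya endorses in the interview:

```python
import numpy as np

def hebbian_step(weights, pre, post, lr=0.01):
    """One Hebbian update: strengthen each weight in proportion to the
    product of its presynaptic and postsynaptic activity. Purely local:
    no global error signal is needed."""
    return weights + lr * np.outer(post, pre)

w = np.zeros((2, 3))              # 3 input neurons feeding 2 output neurons
pre = np.array([1.0, 0.0, 1.0])   # input activity
post = np.array([0.5, 1.0])       # output activity
w = hebbian_step(w, pre, post)
# Only connections between co-active neurons grow; every weight touching
# the silent input neuron (index 1) stays at zero.
```

The appeal, in the spirit of the passage above, is exactly its plausibility: a rule this simple and this local is the kind of thing a brain could implement.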