AGI with Carlos
What makes human minds uniquely powerful? How do we create such minds?
AGI = Darwin + Turing + Darwin
What makes something an AGI? What requirements should it meet?
1. Darwin. It should be able to explore many possibilities, via variation and selection.
2. Turing. It should have the widest set of possibilities and opportunities. It should be able to think anything that is thinkable. Compute anything computable.
3. Darwin. Having found a new and good idea or theory, it should be able to use it to improve how it searches. It should be able to engage in variation and selection in new ways.
Follow me on Twitter! @dela3499
Listen on Apple Podcasts, Spotify, YouTube, and more: https://carlos.buzzsprout.com/share
Howdy, howdy. Welcome to AGI with Carlos. I'm Carlos, and let's talk about AGI. My main research question is: what makes human minds uniquely powerful? Why do we go to space and squirrels don't? I find that personally interesting, and I think it might be interesting to you too, because you get to learn more about your own mind. It's odd to think about, but each of us is walking around with what seems to be the most powerful possible kind of system in the universe: a brain like ours. That's a bit wild, and you can learn more about it by studying AGI. There are also a lot of different fields that come into play here, things like epistemology, evolution, and computation. These are intrinsically interesting fields, but it's also fun to get a unique lens on them by seeing them applied to this very specific problem: what makes human minds uniquely powerful?

With that, let me talk about what I've been working on recently, which is trying to come up with a list of all the requirements for what makes an AGI, everything it should be able to do, and then, having made such a list, trying to figure out what it takes to meet each of those requirements: if you need some capability, how do you implement it? That's actually been getting a little confusing for me. But before I get to that, I should say that this approach of trying to identify specific requirements an AGI should meet is a hot topic now, with Francois Chollet's introduction of the ARC-AGI challenge. And it's a general question people ask: what makes an AGI? How would you tell if you saw one? The most obvious answer is something behavioral, like the Turing test: see how it behaves. What does it do? How well does it score on this exam? Can it reason like humans?

I take a different approach. Instead of looking at behavioral things and performance metrics, I'm more interested in the structure of a system. For an analogy, think about Turing machines, universal machines. If you tested one on different inputs and gave it different programs, you would see that it can do a lot of different things. But no finite number of such tests could tell you that the machine can do anything. Behavioral tests can't establish that, no matter how many you run; you would have to run an infinity of them, so you just can't do it. Meanwhile, if you understood the theory of computation and then opened up the computer and saw that it implements everything it needs to implement, you could look at that and say: this thing can compute anything. And you might not have run a single program. That's what I'm interested in for human minds: what is it about their internal structure, about the way they work, about what is inside them, that is special? Answering that may not involve running them or seeing how they behave at all.
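To make that point about finite behavioral tests concrete, here is a toy Python sketch. It's my own illustration, not something from the episode: a lookup table that memorized a finite test suite passes exactly the same behavioral tests as a genuinely general function, so only inspecting the internal structure can tell them apart.

```python
# Toy illustration: a finite behavioral test suite cannot tell a genuinely
# general system apart from a lookup table that memorized just the answers
# the tests happen to ask for.

def general_square(n):
    """Actually general: computes n squared for every input."""
    return n * n

MEMORIZED = {0: 0, 1: 1, 2: 4, 3: 9}

def lookup_square(n):
    """Only 'knows' four answers; anything else raises KeyError."""
    return MEMORIZED[n]

TEST_SUITE = [0, 1, 2, 3]  # any finite suite has the same blind spot

for f in (general_square, lookup_square):
    passed = all(f(n) == n * n for n in TEST_SUITE)
    print(f.__name__, "passes the behavioral tests:", passed)

# Both print True. No finite set of behavioral checks certifies generality;
# only looking inside (n * n versus a finite table) reveals the difference.
```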
I think if you were to look at a baby brain, the brain of a human baby, you would see that there are things happening there that are special. If you looked at the baby's behavior, it would seem quite dumb: it can't do complex reasoning or navigation or any of the things you might want to test for. And yet it has the generality that adults have. So what is it about the human baby that makes it special and different from, say, a wolf baby? Because there is a difference: you can put a wolf baby in a human setting and it will not grow up the same way. Babies are already doing something special, and it's very hard to tell what it is. But that's what I'm interested in.

So, with that said, there are two different approaches to asking what makes an AGI and how you would tell if you found one: the behaviorist approach, and the explanatory approach that looks at internal structure, at how the thing works. The explanatory one is the approach I'm interested in. Like I said, I'm trying to find a list of all the things such a brain should be able to do, all the things that make our minds special, and then trying to find out how to implement them. And it's gotten kind of confusing. The reason is that the number of requirements, while not huge, not a hundred, is getting to be around five to ten. It's hard to see how they all fit together, hard to see how to achieve each of them individually, and especially hard to see how to achieve them all together.

Part of what I've been up to recently is working my way through that and looking for a simpler way of putting all this stuff together, so I can actually reason about it and move forward. What I've come up with is a simple little AGI equation. It's a bit jokey, but I think it's still useful: AGI equals Darwin plus Turing plus Darwin. You could also say AGI equals evolution plus computation plus evolution. The reasoning behind it is this. First, you need the ability to create things, to do variation and selection, to explore the possibilities. Second, you want to explore possibilities not just of DNA and the like, as in biological evolution, but of the biggest set there is: the set of all possible computations. I'll say more about that later, but I think that's the biggest, most important, most valuable search space there is, so that's the one I want to focus on. And the third part of the equation is Darwin again. The point there is that you don't just want computation as your search space and evolution to navigate it; you also want the ability to take all the things you discover along the way, all those computations and theories and ideas, and use them for more evolution. You want to fold what you discover back into all the work you do going forward.

So that's why I say AGI equals Darwin plus Turing plus Darwin: Darwin for searching, Turing for the search space, which is the set of all computations and theories and all that, and Darwin again to indicate that you want to use everything you've discovered to do more generation of ideas.
I'll talk about each of these in a bit more detail now, starting with the idea that everything is computation, or that what matters is the computational view of things. The real thing to have in mind here, stepping further back, is the G in AGI: artificial general intelligence is about generality. So ask: what is the most general possible thing? In particular, what is the biggest possible set of opportunities for discoveries, the biggest possible set of theories and ideas? I think of that as the set of all possible computations, everything that's computable. That's the biggest set. General relativity is in that set. Special relativity is in that set. All of Newton's theories are in that set. Everything that's thinkable, by humans or by anything, is in that set. So that seems like the thing to focus our attention on: the largest possible set of all ideas that currently exist, that will ever exist, that humans will ever discover, that aliens will ever discover, for the next trillion years, on and on. Everything we can think and feel is in the set of what is computable.

So if we want an AGI that's worth the name, that's really general, it needs to be able to handle that entire set of everything that's computable; in other words, everything that's thinkable. And by "handle" I mean it should be able to generate, run, and work with anything in that set, whether it's relativity or some kind of intuition or a hunch or a number. Anything that's thinkable, an AGI should be able to create it, evolve it, run it, and use it in practice.

There's a problem with that, though, and in a way it's the central problem of AGI. The larger the search space, the more good stuff it contains, but also the more bad stuff. If you concern yourself with the biggest possible set of things to search, you bring more things like relativity into play; more good ideas are out there. But the set then becomes very, very hard to search, because most of the things that are thinkable are garbage. More gold, but also more garbage. To put it differently: when it comes to finding opportunities, you want the biggest possible set of things to search, because the bigger the set, the more opportunities. But for an easy search, you want the smallest possible set of things. The problem is that a small set omits all the valuable things: if you can only search a tiny set of ideas, general relativity probably wouldn't be in it. All that is to say: if you want something truly general, it should be able to think anything that's thinkable by anything. But that is a really big set.
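To put a toy number on that gold-versus-garbage tradeoff, here is a rough Python sketch, my own illustration with an arbitrary token set: sample random token strings as candidate programs and check how many even run. The longer the programs, the richer the space, and the more thoroughly a random sample is dominated by garbage.

```python
import random

# The "more gold, more garbage" tradeoff: as programs get longer, the space
# contains richer computations, but a random sample from it is almost always
# junk. (Toy setup: random token strings evaluated as Python expressions.)

TOKENS = ["x", "1", "2", "+", "*", "-", "(", ")"]

def random_program(length):
    return " ".join(random.choice(TOKENS) for _ in range(length))

def runs_at_all(src):
    try:
        eval(src, {}, {"x": 3})  # does it even run on one input?
        return True
    except Exception:
        return False

random.seed(0)
for length in (1, 3, 7, 15, 31):
    trials = 2000
    ok = sum(runs_at_all(random_program(length)) for _ in range(trials))
    print(f"length {length:2d}: {ok / trials:6.1%} of random programs even run")

# The runnable fraction collapses as length grows, even though the longer
# space is the one containing expressions like x*x + 2*x + 1.
```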
So actually finding specific good ideas in that set is the challenge, and most things are not up to it. That's why most animals aren't general. And it's a good thing too, because they couldn't handle it: if you gave them the ability to compute anything, they would probably quickly lapse into computing garbage and kill themselves. They would shoot themselves in the foot. Programmers can see this too: if you give people too much power over their computer, or over whatever, they may inadvertently cause themselves problems.

This is a bit of a sidetrack, by the way, but I had a friend who accidentally downloaded some weather app or something on their Android phone, and it totally took over their home screen and messed up the phone in a way that confused me. Then I realized that, as an iPhone user, I'm used to a system that doesn't allow apps to mess with the home screen. Android does allow it, and if you give that extra control to an app, maybe you can do some cool things with it, but it also means you can download an app that deliberately messes with the home screen to make itself really hard to remove. It was a case where too much power was given to individual apps, and it caused real problems. I had to do quite a bit of Googling to sort out the issue, and it came down to the permissions of that system being too powerful. So if you have all those extra options and you don't know how to use them, you can shoot yourself in the foot, as a programmer or as an animal or whatever. With great power comes great responsibility, in some sense.

Anyway, the broad point is that the G in AGI is about generality, and what could be more general than the set of all ideas thinkable by anything for all time? That's what the set of all computable programs is. So that's the set AGI needs to be concerned with, the set of things AGI needs to be able to create and use. That, by the way, is the Turing part of my equation AGI equals Darwin plus Turing plus Darwin. That's the search space; that's all the opportunities.

Moving on from that: like I said, the hard part is that it's one thing to have this great big search space with all that treasure and all that trash in it, but because of all the trash, it's really hard to find the good stuff. This is where Darwin, the first part of the equation, comes in. You need to engage in variation and selection. You've got to try some stuff, and having tried some things, you've got to figure out which parts were good, which were bad, which to get rid of, which to keep. That's not trivial. You have to have specific ways to come up with ideas and specific ways to tell the good from the bad, and in every situation you face, you'll have to come up with new ideas in a new way and distinguish between the options in a new way. This whole problem of searching the space of possibilities is super hard, and only humans really do it well in this universal way. And along with variation and selection, you need attention: bringing the relevant ideas to bear in the situation.
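Here is what a bare-bones variation-and-selection loop can look like in code. This is a minimal sketch of the general pattern, not anything specified in the episode: programs are tiny arithmetic expression trees, variation is random subtree replacement, and selection keeps whichever candidate better fits a target function on a few sample points.

```python
import random

# Minimal variation and selection over a space of small programs
# (arithmetic expression trees built from x, 1, 2, and three operators).

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "-": lambda a, b: a - b}

def random_tree(depth):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1, 2])        # leaves: the raw material
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def run(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](run(left, x), run(right, x))

def mutate(tree):
    """Variation: replace a random subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def error(tree, target):
    """Selection signal: squared error against the target on sample points."""
    return sum((run(tree, x) - target(x)) ** 2 for x in range(-3, 4))

random.seed(1)
target = lambda x: x * x + x          # the "gold" we are searching for
best = random_tree(3)
best_err = error(best, target)
for step in range(5000):
    child = mutate(best)              # vary
    child_err = error(child, target)
    if child_err <= best_err:         # select
        best, best_err = child, child_err
print("best found:", best, "with error", best_err)
```

Note that both halves had to be chosen: a specific way of generating variants (subtree replacement) and a specific way of telling good from bad (squared error). Change the situation and both typically have to change with it.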
There are two things you can do with an idea once you've found it: you can use it for its direct purpose, or you can use it to generate new ideas. That second use is what I'll turn to next. So that covers the Turing part of the equation and the first Darwin part, but let me say a bit more about how it fits together. If you were to draw a diagram, there would be a big circle in the middle, and that's Turing, all the computations. Then there's an arrow going into it, and that's the Darwin part: how you create all those computations. And lastly, there's an arrow going out of it, which is those individual computations having effects on the world after they've been created. With that picture in mind, the last bit of the equation, the final Darwin, is that last rightward arrow, and it's the most intricate and interesting piece. But I'll put it off for just a second.

I wanted to say a bit more about what finding good stuff involves. Part of the problem is having a system that can generate any idea. One detail I can add is that you need all the right building blocks of computation, and you need to be able to put them together in different ways. What you need, in other words, is a model of computation; Turing machines are an example. And then you need a way of taking the basic elements of that system and combining them. For a Turing machine, that means you have a tape and you can write different symbols on it, and the ability to create any computation involves being able to write any symbol anywhere on that very long tape. There are other models of computation too, and I like to think of them all as consisting of a small set of elements plus a way of combining them, into a sequence, a grid, or a network. As long as you have all the core building blocks, all the Lego pieces, and the ability to assemble them into arbitrarily big structures, you can assemble any computation. It's a Lego-based view of how to generate computations. So that's one of the requirements for AGI: the ability not only to run any program, but to generate any program.
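As a concrete, entirely toy illustration of that Lego view, here is a tiny Turing-machine simulator in Python. The machine itself is nothing but a small finite table of rules, and writing down different tables gives you different programs. The example machine is my own invention, not from the episode; it just flips the bits of a binary string.

```python
# The Lego view: a Turing machine is just a small finite table of rules,
# each mapping (state, symbol) -> (write, move, next state). Any computation
# is some such table; assembling rules assembles programs.

def run_tm(rules, tape, state="start", steps=1000):
    tape, head = dict(enumerate(tape)), 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")                 # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]  # look up the one rule
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# The entire program: three Lego-brick rules.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(FLIP, "10110"))  # -> 01001
```

Every other computation is just a different rule table, which is exactly why generating any program reduces to being able to assemble any table out of the same small kit of pieces.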
But perhaps that's a bit more detail than I want to get into at the moment. Zooming back out, we come back to the AGI equation, AGI equals Darwin plus Turing plus Darwin, and the piece I want to look at now is the last Darwin. This is where things get interesting, or a little meta, or recursive, however you want to call it. The point is that once you have created a new idea, you should be able to use it to create more new ideas.

Consider the alternative: a world where you can create any idea, anything in that vast space of possible theories and feelings and everything else, and you can use each one for its specific purpose, but never to generate new ideas. If it's a recipe, you can use it to bake that cake, but in this thought experiment you can never use it to generate new recipes. It's just a thing you pull out of the bucket and execute, and then it's done. You can't take its pieces apart and reassemble them in new ways. Think of it as a bunch of Legos that get glued together the moment you assemble them: you can't pull them apart and play around with them; they're glued in place. That would be bad, because it would mean that whatever ways you have of coming up with new ideas, whatever your methods of evolution, you would be stuck with them. Whatever you started with would be the way you generate ideas forevermore.

My mental image of that is walking in the middle of the desert. You have the ability to walk, and you can walk in any direction, which means you can walk anywhere in the desert given enough time. But you would need a million steps to get a million yards away, and there's no way around that unless you invent a new mode of transport. If you invent a plane or a little parachute or something else, there may be ways of traveling those million yards far faster. That's what new ideas get you: the ability to navigate the space of ideas in all sorts of new ways. In other words, you make progress not only in what you know and the kinds of cakes you can bake, but in the way you come up with new ideas. With the recipe, that would mean being able to take its pieces apart and see: here are some things I can tweak, how long to cook it, how much sugar, how much of this or that. You could start tweaking it in your head and trying out those variations, if you had the ability to dig into that knowledge rather than being stuck with it in its original form.

So again: if you can create new ideas but never use them to generate new stuff, you can never make progress in how you come up with new ideas. That would be a problem. It would be like trying to navigate this infinite set of ideas, this infinite desert, by only taking three-foot steps. The way human minds work is more analogous to taking those three-foot steps to begin with, but then discovering: ah, if I do this, I can put on a parachute, catch some wind, and travel ten feet with a single step. Or I can make a little bike and use the same energy I would have used in one step to go twenty feet. And having done that, maybe you get more ideas and more resources, and eventually you design yourself a rocket, and the energy you previously would have expended to go three feet now gets you three hundred miles. Human minds build on their previous discoveries in creating new discoveries.
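Here is a small sketch of that folding-back step in code, again my own toy construction rather than anything from the episode: an idea generator draws from a library of building blocks, and when the search discovers a useful function, that function is added to the library, so later searches reach ideas that build on it in far fewer steps.

```python
import random

# The second Darwin: fold a discovery back into the generator itself.
# Once "square" has been found and added to the library, ideas that build
# on squaring are one short hop away instead of a long walk.

LIBRARY = {"x": lambda x: x, "1": lambda x: 1}

def random_idea(depth=2):
    """Generate by combining whatever the library currently contains."""
    if depth == 0 or random.random() < 0.4:
        return random.choice(list(LIBRARY))
    return (random.choice("+*"), random_idea(depth - 1), random_idea(depth - 1))

def run(idea, x):
    if isinstance(idea, str):
        return LIBRARY[idea](x)
    op, a, b = idea
    return run(a, x) + run(b, x) if op == "+" else run(a, x) * run(b, x)

def draws_to_find(target, tries=100_000):
    """How many random draws until we stumble on the target function?"""
    for i in range(1, tries + 1):
        idea = random_idea()
        if all(run(idea, x) == target(x) for x in range(-3, 4)):
            return i
    return None

random.seed(0)
print("draws to find x*x from raw pieces:", draws_to_find(lambda x: x * x))

# Discovery! Fold "square" back into the library of building blocks.
LIBRARY["square"] = lambda x: x * x

print("draws to find x**4 afterward:   ", draws_to_find(lambda x: x ** 4))
# With "square" in the kit, x**4 is just ("*", "square", "square") away:
# the parachute, the bike, the rocket. The glued-Legos world never gets this.
```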
So, the way I've been thinking about this, zooming out a little further, goes back to that equation, AGI equals Darwin plus Turing plus Darwin. First, you care about how things come about: how you create new ideas, how you engage in variation and selection, how you go about the search. Second, you want that search space to be really big: the entire set of everything that's thinkable, everything that's computable. And lastly, you don't want to stop at those first two. You want to be able to use everything you've discovered going forward, not just for its original purpose (if it's a recipe, baking the cake) but for the purposes of variation and selection and searching again.

So I like to divide the concern into the search space on the one hand and the way you search it on the other. Looking at my equation: you need to be able to search things; you want the search space to be as big as possible, everything that's thinkable; and then you want to improve the way you search that space using whatever you discover. That's really the key. Just to repeat it: explore and find new things, make the search space you're exploring as big as possible, and use everything you discover to improve the way you search as you go.

Zooming back out now, I have this longer list of individual requirements, but I can slot each of them into one of these three categories: one about Turing and the space of computations, and the others falling into the two Darwinian sections, how you generate things and, having generated them, how you fold what you've discovered back into the search process. It's a little tough to figure all this out, but I think this three-piece approach, AGI equals Darwin plus Turing plus Darwin, is the slickest, slimmest, simplest way I've found so far to group what ends up being seven or eight different requirements. We'll see if it ends up being useful over the coming week in figuring out what AGI requires and how to implement it. So with that, I'll call it quits for today. Until next time.