AGI with Carlos

Delphinus Sapiens: human brains in dolphin bodies

1. Dolphins. Transplant human brains into dolphin bodies. How long before they build printing presses? More generally, what matters more: our hands or our brains?

2. Set. Point. Search. Human power depends on three things: having many possibilities, good actual ideas, and the ability to find good ideas.

3. The Attention Machine. Accelerating search by focusing on what's relevant.

Follow me on Twitter! @dela3499

Listen on Apple Podcasts, Spotify, YouTube, and more: https://carlos.buzzsprout.com/share

Howdy howdy, welcome to AGI with Carlos. I'm Carlos, and let's talk AGI. So one interesting question, which sounds like a science fiction experiment but turns out to be surprisingly relevant, is this: if you took a human brain and put it in a dolphin body, how long would it take before they figured out how to build a printing press? Or would they never figure it out? Is there just no way to control a dolphin body to do that? The question is interesting because it gets at whether it's the human body, the existence of hands, that is really important, or how our minds work. It's not at all obvious that a dolphin with a human brain could build a spaceship, a printing press, these kinds of things. I think the answer is that it could. In other words, it's not because dolphins lack hands that they aren't as advanced as us. That could be one excuse, one explanation for why humans have this civilization and dolphins don't. But I don't think that's the answer, and part of the reason is that I don't think human hands are really the important innovation. It's human minds. The flip side of that is that I don't think it's the weakness of dolphin bodies and their lack of hands that matters. It's their lack of cognitive power. Something about the way they think isn't up to the task. And the reason I think that's the case is that building a spaceship or a printing press is ultimately about performing one big transformation: taking all these raw materials and assembling them in just the right way to build a very specific object. There are lots of other ways to assemble those same materials that are useless, that would turn out to be a heap of metal or wood, or even a slightly incorrect version of the machine that would jam and never work.
And here's the thing: each of those big transformations, taking that big set of raw materials and turning it into a finished product, involves not one really big action but a whole bunch of little ones: moving this piece to that location, adding a little glue here and there. It's a bunch of small things. You can see that if you watch a building get built. They don't drop the whole building into place or assemble it with super-advanced machines; usually it's a bunch of people doing a bunch of specific small things. And when you put it all together over the course of several months, you end up with this very large object, a building that can hold many people. But that building consists of many, many small pieces, and many, many small operations on those pieces. So ultimately, the fact that humans are small doesn't limit what we can do; we can build these very large objects. Likewise, the fact that a dolphin doesn't have opposable thumbs doesn't really mean it can't build spaceships. All that matters is: can it string together the things it can do in the proper order, in the proper way? If it can, then it can build anything. But that is the question, and I think that's exactly what a dolphin can't do. It can't recognize where to put the pieces, or under what conditions it needs to change what it's doing. That's what human intelligence allows us to do. All of that is to say that if you think about the transformative power of knowledge across different kinds of species, you have to look at a few things. One is the core building blocks they have, and then how they can assemble them. By building blocks, I don't just mean physical pieces; I mean the operations they can perform. And I think almost any small set of building blocks could be used to build anything.
You know, if you were put into a squirrel's body or a dolphin's body, you could do anything, as long as you could string together all those little operations the right way. That said, a better starting point makes things a lot easier. It could be that if I drop you someplace, you'd do very well if you have a knife or certain kinds of tools, and if you don't have them, that will hurt you enormously in practice. You might eventually figure out how to make a knife, but it could take you a long, long time. So in practice, it does matter what kinds of tools you have available to build from. But in a more fundamental way, give the dolphins a few extra hundred years, a few thousand years, and they could build themselves artificial hands. Then they'd be on more or less the same footing we are. And speaking of footing, maybe literally: maybe they could build themselves artificial apparatus to breathe and to walk, if they really found that useful and decided that building on land was a lot easier. I think that's basically what humans would do, and you can see it's what we're doing in space. We can't live there automatically, just from our biology, but we can build what it takes to manufacture in space if we want to. So when you ask what dolphins can do and what humans can do, and it seems like the answer is that humans are more powerful, is that because of our minds or because of our hands? I think our minds are the really fundamental thing, even if hands sure do help a lot. You'd find yourself very frustrated in the body of a dolphin, but given enough time, you'd figure it out, and rockets might be in your future.
So now I want to talk about a set of three things (I guess you could call it a trichotomy instead of a dichotomy) that I find helpful in my research. First, there's the idea of a huge set, a big space, of all possible thoughts, all possible computations in the universe. Second, there are particular points in that space: particular ideas, particular computations, particular mathematical operations, things like that. And third, there's the navigation through that big space, that big set of possibilities. Each of these is a distinct kind of thing with its own distinct concerns. Just to repeat: first you have this big space of all thinkable thoughts, all computable computations, all possible programs. Then there are particular points in that space: particular ideas, programs, theories, feelings, all that kind of stuff. And lastly, there's the way you move through that space, the way you find things in it by trying new things and throwing out other ideas. You can almost imagine a vast branching tree, where you have a couple of ideas, mix them together, and go someplace new. Maybe you branch off further from that new location, or maybe that location isn't very good, so you don't branch from it anymore; you forget it and move on. So there's the big space, there are particular points in that space, and there's all this branching within it. That's the overall context for a lot of these AGI questions. If I focus on each one of those, the set, the points, and the search, for a minute, you can see what role each plays. In the first case, let's just focus on that big space. It's really a set, not a space, but it's a big bunch of possibilities. The reason this is relevant to AGI is this: if only a few ideas are available to you, if your mind is something like a rabbit brain that can really only think, let's say, five things...
I don't know how much a rabbit brain can think, but let's say it's five. Then most ideas that exist will be unavailable to that rabbit. All the designs for interstellar spaceships, or even just things that would help it eat extra grass; all the engineering ideas that might be open to it; all the cultural ideas rabbits could maybe exploit: none of that will be conceivable to that system, to that rabbit, and so it can't make use of it. And maybe its survival depends on those ideas. Maybe there are rabbits that would survive a tough winter if they had a few better ideas, but they can't even conceive of them. There's no way of even programming their brains to have those ideas, let's say. In that case, the rabbit is much worse off. And likewise for us humans: if we couldn't conceive of new cultural and technological ideas, those very useful things would be unavailable to us. So ultimately, when it comes to thinking about AGI and building something which is really, really general, you want it to have the greatest possible scope, the greatest possible opportunity for ideas. Really, that means it needs to be able to compute anything that's computable, and I think you can translate that mentally into: I can think anything that's thinkable, feel anything that's feelable, that sort of thing. You want to have the most possibilities you can, because there are all kinds of ideas out there, technological and cultural, that we'll want to make use of, and we want our brains to be capable of representing, computing, and using those ideas. So that's the set, or the space, of possible ideas, and why it's important to AGI. Next, let's look at individual points in that space, and why they're important.
You know, ultimately there's a cash-value, bottom-line point here, which is that having great general abilities, let's say a great IQ or something (I don't know what I think about IQ, but say you have some kind of general capacity), is basically worthless in a given moment when you need specific knowledge. If you're the smartest person alive but you need to know exactly the right thing to do in the next 1.2 seconds, when a tiger is about to eat you or your rocket ship is about to explode, you need to press the right button at the right time, and there's no time to come up with new ideas. In that case, there is no substitute for knowing the right answer in that moment. And that's what each of those individual points in the space of all possible ideas holds: specific knowledge about how to run that spaceship, how to bake that cake, whatever it is. Ultimately, that is what you need. Even if you can think of everything, that doesn't mean you have in fact thought of, learned, and know how to use a particular point in that space. Universality, computational universality, Turing completeness: ultimately, that is just a potentially useful thing which says that one day, if you need this idea, it will be available for you to arrive at. But it doesn't mean you have it today. So when it comes to those moments when you need particular knowledge right now to get some task done, you either have it or you don't, and universality won't help you. Having universality is like having a great big tool shed where you can make anything. But in certain moments, the question isn't whether you can make it in the future; the question is whether you have it now. And you either do or you don't.
So those are the first two of the three things I mentioned: the great big set of possibilities, which you want so that you have the maximum possible opportunity to exploit all possible ideas; and then particular ideas, particular points in that space, which you need because there is no substitute for having the right answer when you need it. And lastly, there's the idea of search. It's about how you navigate that big space, and once you have particular ideas, how you use them. The point here is that you really only start off with a few ideas. Even if you start off with a trillion ideas, that's still a few compared to infinity. So no matter how advanced you are, whatever your starting point is, it's infinitely far from what it could be. And you're going to confront new situations, at least if you live in a dynamic society or the universe throws something new at you, and you'll need to be flexible and come up with something new to solve that problem. Whatever the case, you may find yourself needing to come up with new ideas, and no finite set of prior ideas will be enough. So you need not just the infinite possibilities I mentioned in the first case, that great big set; we're assuming here that the particular points, the particular ideas you already have, aren't good enough for the present situation. Now you need to strike out, come up with new ideas, and hopefully find a good one. And this is really the meat of my thinking on AGI. This is where Karl Popper comes into play with his evolutionary approach to knowledge creation. It's all about variation and selection: coming up with new ideas, hopefully some of which are good, and keeping the ones that work and building on those. And neither of those processes is trivial. You know, there are better and worse ways to come up with good ideas.
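To make the variation-and-selection picture concrete, here's a toy sketch in Python. None of this is from the episode; the fitness function, the vector "ideas," and all the numbers are illustrative assumptions. Variation proposes randomly modified copies of existing ideas, and selection keeps only the ones that best survive criticism, modeled here as a numeric score.

```python
import random

random.seed(0)  # make this toy run reproducible

def variation(idea, rate=0.2):
    """Variation: copy an existing idea with small random changes."""
    return [x + random.gauss(0, 1) if random.random() < rate else x
            for x in idea]

def selection(pool, fitness, keep=5):
    """Selection: retain only the ideas that best survive criticism."""
    return sorted(pool, key=fitness, reverse=True)[:keep]

def evolve(seed, fitness, generations=200, offspring=20):
    pool = [seed]
    for _ in range(generations):
        # Keep the parents too, so a good idea is never lost by accident.
        pool = pool + [variation(p) for p in pool for _ in range(offspring)]
        pool = selection(pool, fitness)
    return pool[0]

# Toy problem: grope toward a hidden target "idea" (a vector of numbers).
target = [3.0, -1.0, 7.0]
fitness = lambda idea: -sum((a - b) ** 2 for a, b in zip(idea, target))
best = evolve([0.0, 0.0, 0.0], fitness)
```

Both halves can be done better or worse, which is the point being made here: a cleverer `variation` explores more promising directions, and a stricter or looser `fitness` changes what survives.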
And there are better and worse ways to detect which ones are good. So it matters how you do them, and I think that's where a lot of the bread and butter of AGI research lies: in how you perform variation and selection. But rather than getting into the details there, I just wanted to sketch out these three things. The set, or space, of ideas, which you want to be as big as possible. Particular points in that space, which you ultimately need to get anything done; the space doesn't do any work, the points are what do the work. And then the ability to explore that space: to try new things, to build on what works, and to ignore what doesn't. These are three different things, and each has its own role to play. So now I want to turn to something different, which I like to call the machinery of attention. To see what this is about, imagine your brain as a big circle in front of you with a whole bunch of dots in it. The circle is your brain; all the points inside it are particular ideas, particular facts, theories, and other things. The point about this circle is that it's really, really big. It has a lot of things inside it, and almost all of them are irrelevant to any given situation you find yourself in. If you're at work, most of the ideas in that circle, about artworks and astronauts and everything else, just aren't going to help you at work. And vice versa if you're on a golf course. In any particular situation, some very small set of ideas is relevant to that situation, and most everything else is irrelevant, totally laughable, just not going to help you at all.
Of course, there's always a question of whether a particular idea you think is relevant will turn out to be relevant in some way you don't know. But regardless of whether your assumptions about which ideas are relevant are correct, most of them won't be that relevant. So what do you do with that information? Why is it practically important? It's important because if you want to solve problems quickly, you need to be using the things that will actually help, and if most things won't help you, you'll be more efficient if you ignore them and instead focus all your energy on the things that will. If you're a carpenter trying to fix something in your wood shop, you want to be using the tools that are in the shed; you don't want to be grabbing things from an electronics store. Maybe what you need is not a soldering iron or a new hard drive but a nail and a wrench, or a hammer, these kinds of things. So with that in mind, it really does matter, by a factor of thousands or millions or billions, that you choose the ideas that are relevant and not the ones that are irrelevant. Really, what I mean here is the ideas that will help you, not the ones that won't. And this, incidentally, is what biological evolution does not do. It doesn't have a way of determining which mutations might help and which won't. It just does them all; it essentially picks one at random and goes from there. And that's exactly what we don't do. The reason this matters, again, is efficiency. So with that in mind, I think it's helpful to see that evolution really consists of two essential processes: on the one hand, variation, trying new things; and on the other hand, selection, distinguishing the good from the bad.
And in my research, thinking about AGI, I like to think about, (a), the fact that we have these mechanisms of attention to help us determine which ideas are relevant, which is a huge improvement over biological evolution itself, and, (b), how that works, by constructing some kind of mental image of it. Like I said, on the one hand you have this process of variation, creating new things, and on the other hand you have this process of filtering, distinguishing the good from the bad. You can almost imagine this as a machine. Let me set up a little thought experiment. On the left side of your mental image, picture a big circle with all of your ideas in it; astronauts, aardvarks, and all those kinds of things are in there. Then you have some mechanism of attention, which we'll imagine as a little robot that picks some of those ideas out, puts them on the right side of the page, and starts using them. When it picks up an idea, it can either make it into a little idea generator, which you can imagine as a little faucet, or into a little filter, which you can place near the bottom of your mental image. So in the end, you're in some situation, and that little robot in the middle picks from the big bucket of all your ideas. Let's say it picks ten of them and starts using them. As it picks each one up, some of them become these little faucets, with little raindrops falling out of them, and some of them end up in a series of filters.
Each filter prevents certain drops, certain falling objects, from reaching the bottom; it cuts down how much gets through. The point of this picture is that depending on which ideas you pick, you'll get a different mix of ideas being generated. Say you pick out ideas related to carpentry: different tools, different raw materials, different ways of putting them together. Those are the faucets up on top. Now they're operating and mixing together, and the things falling from them are all these weird combinations of nails and hammers and other things, the different kinds of objects that can be formed from combining them. As they fall in this imaginary gravity, they hit various filters. Those filters may be relevant or irrelevant, depending on how good your little robot, your attention mechanism, is. But some of them might say: this object is unbuildable, or this object is useless. Or, if you're trying to create something artistic, maybe you want something you haven't seen before, so one of those filters might ask: is this object new, or has it been done before? You might have all kinds of different filters. And in the end, some things make it through the whole process. The original ideas came together and combined into new stuff that fell through the system; it hit some filters, some of it got stuck there and was weeded out, and some of it made it through.
And there you have the result of this whole process: starting from your big bucket of ideas, picking out a few as relevant, using some of them to generate this rain of new ideas, and using others to filter out some of that rain. Then at the bottom of the bucket, you collect what's left. What's interesting to note is that this whole process can be done in a better or worse way. That little machine in the middle, the little mechanism of attention, could pick ideas that are indeed relevant to each other, or ones that are not. Maybe it picks good, relevant things, like the nails and hammers and woodworking stuff I mentioned earlier. But what if it instead pulls out, say, spaceships and rhinoceroses? Most combinations of those two concepts aren't going to be useful for any situation you'd face, unless you're going to build a zoo in space; and since most people aren't doing that, it's not going to work too well. Another thing that's perhaps less obvious is that even if things are relevant in a vague way, the needs of the situation can change. Depending on what you're doing with carpentry, maybe you don't want to be grabbing any old tool from your shop. Maybe you have a very specific task in front of you, and only two or three of your tools are actually relevant to that task.
In both generating ideas and filtering them, you can be either too narrow or too wide in your selection. Either you'll pick from too broad a range of tools, most of which are irrelevant, or you'll pick from a range that's too narrow, leaving out things that are important, so you never think of them. Both of those failures can happen on the idea-generation side, and they'll also happen on the selection side. If your criteria, your filters, are too stringent, they cut everything out and you're left with nothing. It's like telling somebody who's building a rapid prototype that the thing has to work, and look good, and be cheap, all at once. Well, you can't do all of that at once, so nothing will result from the process; everything will be thrown in the garbage. Whereas if you say, okay, it doesn't have to look good, it just has to solve one problem, then that's a good enough rapid prototype. But then again, you can also be too lax with your constraints. That would be like shipping a product that only has to look good and doesn't have to work. That's a constraint which isn't stringent enough: your customers expect you to meet all sorts of requirements, and if you're meeting only a few of them, it's not a good product. So these are aspects of the big machine I'm talking about: the set of all ideas, a little machine in the middle, this machinery of attention, picking out what's relevant; some of those ideas creating new ideas, and some acting as filters. And both of those things can be done in better and worse ways, ways which are too narrow or too wide.
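The attention machine described here can be sketched as a little pipeline. This is a toy illustration only; the situation tags, the idea names, and the "must involve a tool" rule are all made up for the example. An attention step picks the relevant subset of a big idea pool, generator ideas rain down candidate combinations, and filter ideas weed the candidates out.

```python
from itertools import combinations

# The big bucket: everything the mind knows, mostly irrelevant right now.
all_ideas = {"nail", "hammer", "plank", "glue", "saw",
             "astronaut", "rhinoceros", "spaceship", "aardvark"}

def attention(situation, ideas):
    """The little robot in the middle: pick out the ideas that look
    relevant to the current situation (here, a crude tag lookup)."""
    relevant_tags = {"carpentry": {"nail", "hammer", "plank", "glue", "saw"}}
    return ideas & relevant_tags.get(situation, set())

def generate(relevant):
    """Faucets: rain down candidate combinations of the relevant ideas."""
    return [frozenset(pair) for pair in combinations(relevant, 2)]

def keep(candidate):
    """Filters: weed out candidates that fail the situation's constraints.
    Too strict and nothing survives; too lax and junk gets through."""
    return "hammer" in candidate or "saw" in candidate  # must involve a tool

relevant = attention("carpentry", all_ideas)
survivors = [c for c in generate(relevant) if keep(c)]
```

Swapping in a stricter `keep` (say, requiring several properties at once) or a looser `attention` demonstrates the too-narrow and too-wide failure modes directly: the former leaves almost nothing at the bottom, while the latter floods the pipeline with spaceship-rhinoceros combinations.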
And the point of all this is that human thinking is highly tuned to the situation it faces, and it picks out ideas that are relevant. And not just in the sense that there are relevant ideas, ideas which will be useful, and then you do specific things with them. You don't necessarily just pick a tool directly off the shelf and have it solve your problem; human minds create new ideas from old ideas, and then have to select the good from the bad among those. So the point there is just to be more detailed about how ideas get used once they're selected as relevant to a situation. And like I said, the upshot is that if you have a very good way of telling what is relevant, what is useful for creating ideas and for selecting among them, then you'll be vastly, vastly more efficient. You'll be better at rapid prototyping, or better at coming up with weird connections between ideas. And that is hugely valuable. So with that said, I'll recap real quick. First, I talked about this thought experiment with the dolphin: a human brain in a dolphin body. The point there was that it's really how you put together basic transformations, basic abilities, that determines what you can eventually build. It doesn't matter that much whether you have hands or fins, because both allow some amount of control over physical reality, and if you have a mind capable of stringing those basic units together in the right way, you can do anything. Second, I talked about breaking down thinking about AGI into the set of all possible thoughts, and particular thoughts within that set.
And then the way you navigate that set, the way you try new things and distinguish what works from what doesn't. Each of these three is different and distinct. You want the biggest possible set of possible thoughts, because many of those thoughts are useful and you want them to be available to you. Second, it doesn't matter if you have infinite abilities in theory if you never come up with the correct ideas in practice; you need specific recipes to bake specific cakes. And lastly, you need to be able to search that space effectively, because as you face dynamic situations you will need new ideas, and you'd better be good at finding them. Then I talked about the attention machine, or the machinery of attention: the idea that you have many ideas, most of them irrelevant to your situation, so you need a way to find the ones that are relevant, the ones that would be helpful, so you can generate that falling rain of new ideas and then filter it in the proper way. The way you generate ideas and the way you filter them can be too narrow, too wide, or just right. And if you do that job well, you will focus your resources very efficiently: thousands, millions, billions of times more efficiently than natural selection does with its more random mutations. So with that said, until next time, adios.