Peter Nixey: AI — Disruption Ahead

It’s easy to recognize the potential of incremental advances — more efficient cars or faster computer chips, for instance. But when a genuinely new technology emerges, often even its creators are unaware of how it will reshape our lives. So it is with AI, and this is where I start my discussion with Peter Nixey.

Peter is a serial entrepreneur, angel investor, developer, and startup advisor. He reasons that large language models are poised to bring enormous benefits, particularly in enabling far faster and cheaper development of software. But he also argues that their success in this field will undermine online communities of knowledge sharing — sites like Stack Overflow — as users turn away from them and to LLMs. Effectively, ChatGPT will kick away one of the ladders on which its power is built.

This migration away from common forums to sealed and proprietary AI models could mark a paradigm shift in the patterns of knowledge sharing that extends far beyond the domain of programming.

We also talk about the far future and whether conflict with AI can be avoided.



James Robinson: [00:00:00] Peter Nixey, thank you for joining me.

Peter Nixey: Pleasure. Thanks for having me, James.

James Robinson: So a little over a year ago, something almost paradoxical happened. I think a world-changing technology was introduced, and yet I don’t see a lot of indications that the world has changed that much. What’s your take on what’s going on?

Peter Nixey: So the technology I assume you’re referring to is GPT-4, or, going back a year, GPT-3.5, a nudge further back. I think we’re in a lag period. I’ve been trying to understand what’s been going on through the lens of history as much as anything, and kind of thinking back. I mean, I’ve now had enough time working in tech to be able to recall some previous technology cycles. I came into tech and started programming in 2001, and [00:01:00] that was a real lag period, post the dot-com boom and bust.

And during that time, when I first discovered JavaScript, I remember thinking, this is pretty incredible, you really can do interesting stuff with this. But from that discovery through to people really building interesting applications in JavaScript, and in fact even with the web itself, there was a lag period.

So I think you get this explosion of potential, and that gets mixed up with hype and different things. Sometimes it’s hard to separate the hype from the potential. But I think the potential is extremely real in generative AI. And then there’s a period in which developers start exploring what’s possible with the technology.

And that kind of goes through to product, but probably as a first derivative, not directly, because engineers aren’t [00:02:00] necessarily product people. And then as more people who are product people and entrepreneurs start seeing what’s possible, they start applying it to different things.

But I think we’re in a lag period where most people don’t really realize the new paradigm for how stuff can be built.

James Robinson: That aligns very closely with what I think, which is that there’s just some inertia, and we can’t expect the world to change overnight, or at least the effects won’t be felt that quickly.

And perhaps there’s another thing as well, that’s unique to this kind of technology, which is that it wasn’t built with a particular set of capabilities encoded into it. It was just sort of trained up and even the people who built it were discovering what it could do, that it could translate Catalan into Mandarin or something like that.

And they were being surprised at the skills that were coming out of it. So it’s not been marketed as [00:03:00] use this to do X, use this to do Y. Those things have started to come about. And it was interesting, I listened to Sam Altman recently saying that developers are the ones whose work is going to be most affected by this early on, but I don’t know that they realized that from the very beginning.

I think that’s something that’s become apparent over the last year. So maybe there’s just this question of, yeah, it’s in an experimental phase, and more than that, it’s not clear who should be experimenting with it, or rather, perhaps it should be everyone.

Peter Nixey: Yeah. I, yeah, I definitely agree with that.

It just reminds me so much of my first memories of the internet, and our physics teacher taking us through to his internet-connected computer. I can’t remember what he showed us, but it was something like illustrating that you could see the details of a school in Arkansas. And I was like, why do I care?

Like, what’s the point of this? And I think [00:04:00] lots of people are in that paradigm just now.

James Robinson: Yeah, I think I can just pick up on that point. I do want to say something from a devil’s advocacy perspective, which is that the internet, while it has had tremendous influence over our lives, I feel economically that influence has been overestimated. And I’m just parroting arguments from people like Ha-Joon Chang and economists who have tried to measure the impact of the internet. They have found that it has been a big influence, but when one compares it to things like running water or the introduction of white goods, which freed up huge amounts of time and effectively half the workforce, i.e. women, who were previously spending their time doing household chores, the internet has not been as impactful, partly because it’s introduced perhaps as many sources of unproductivity as it has sources [00:05:00] of productivity. And I kind of wonder if we’ll see something similar with these technologies.

Peter Nixey: So that question, I mean, you got straight to the point that I was going to get to, although I would add a second point to it. So in addition to the kind of zero-sum game of the internet, where you do less work to accomplish the original things but then spend more time doing other stuff, I think the other component that’s come with modern jobs is a real opacity about what constitutes work.

It’s really hard to know whether somebody’s delivering what they’re supposed to be delivering. And in many circumstances, I think it’s hard to know even what you want people to do in the first place. So I think the internet has made that harder in many ways.

But let me bounce that back [00:06:00] to you, because you’ve worked through periods of the internet not being as big as it is now. Do you feel, in the teams you’ve built and the people you’ve hired, that understanding even what their jobs are supposed to be, never mind whether or not they’re actually doing them, has become harder to figure out?

James Robinson: I think that’s true. Certainly the distance introduced by COVID, which was only made possible by all the technologies that we’re using, including the technology we’re using right now to have this conversation, has made it harder to see what’s going on with someone’s time and to have those kinds of conversations where we just look over someone’s shoulder.

That’s a terrible thing to say, I don’t mean that in the sense of policing someone, but just finding all the [00:07:00] interesting stuff that people in other teams are working on has become harder. So I think it’s probably led to a bit of a siloing of knowledge, for sure. Within teams, within companies, like sub-teams, it’s not perhaps been such a problem. I should also say that at my company, OpenSignal, we spent a lot of time hyping up the impact of the internet, saying, you know, every percent of connectivity improvement is some fraction of a percent of GDP improvement. That may be the case, but I don’t think it’s been totally proven out. I think it’s certainly the case that every percent of connectivity improvement leads to some percentage change in the way that you live your life, if one can measure that.

In the amount of time that one spends doing things: the amount of time that one spends watching Netflix, for instance, certainly increases as you have better connectivity. [00:08:00] But yeah, I’ve not thought about it in detail, though it does seem like maybe the internet has thrown a lot of smoke in our eyes.

Peter Nixey: Well, I think maybe part of it is, I don’t know, more stuff is on the computer, and the computer is a very amorphous place to exist. As opposed to being physically in a different room, where it’s more obvious what jobs are there and what jobs are done, and whether or not you’re spending excessive amounts of time in the kitchen or the garage, when that’s the location in which the job happens.

And when everything happens on the computer, you can live in a space where you’re not doing anything worthwhile for large chunks of time, and you don’t even realize it, because everything happens here all at once.

James Robinson: Yeah, that is a really interesting point. And it does make me think of something that I’ve been wondering about: the unique interface that OpenAI has chosen, and everyone else seems to be using, for accessing LLMs, which is just [00:09:00] this singular point where you can ask anything. At least on my computer at the moment, a lot of my time is spent in different apps, and I get a sense for when I’m being productive, if I’m coding something up or if I’m reading through some PDF reports. Whereas you lose even that level of distinction if you’re putting all of your questions through ChatGPT. I guess you might have different windows, people perhaps more disciplined than me. I’ll confuse that GPT by asking it for, I don’t know, a social media image and then asking it about what Heidegger’s thoughts on time are, right? So it must be getting a lot of cognitive dissonance coming in. But yeah, I don’t know if you’ve thought about the choice in the UX there.

Peter Nixey: I mean, I think the UX is an accident. I had one of [00:10:00] the OpenAI team come and do a fireside with me last summer and he talked through the stuff. This is what I would have anticipated. And I’m sure you would anticipate the same kind of being in the software industry.

But they, they had no idea. I mean, you couldn’t, how could you possibly have any idea that this particular app was going to go to, what was it? A hundred million users in three months? Nobody can know that nobody can expect that. And I think actually from the impression I got really, there was a lot of kind of foot dragging because people were afraid that 3. 5 wasn’t going to be enough to really wow the crowds and maybe they should wait till four, but they just pushed it out and, and it changed everything. It feels though it has this hint to me of maybe what the terminal was to applications or what the spreadsheet is to the world of SAS, right?

It’s this place where [00:11:00] enthusiasts are hacking solutions right now, but the solutions will be hived off and built either into products or features elsewhere over time. The stuff that really, really interests me is not chat as an interface; it’s how you apply intelligence to problems in a way that doesn’t come in and out as chat. And I don’t think we’ve seen enough examples of that, so people’s imaginations haven’t yet been lit up as to what’s possible.

James Robinson: I can’t remember who said it, but the best sort of design is invisible. When you don’t notice that you’re using the product, that’s when it’s really performing. We’re at that early stage where ChatGPT is very visible, but we’ll know when it’s truly matured: it won’t surprise us at all.

We won’t even think of it as interacting with the product.

Peter Nixey: No, exactly. And I’m just [00:12:00] trying to think of examples of that now. If you think about the implementation Apple’s had for Siri, basically combing your emails for contact information and then presenting it to you as, Siri found this in your contacts, do you just want to add it in one go? That’s the type of very smooth, don’t-even-think-about-it experience. The thing’s already done the hard work of figuring out what the job is, done the prep for the job, and presented the job to you, and your question is just: do you want to accept the job?

And I have a feeling that we’re going to see more of that start to weave its way in.

James Robinson: It’s like a truly good butler. Not that I have one.

Peter Nixey: It is literally that,

James Robinson: Yeah. But you know, they stand in the shadows and they’re not all in your face, but, you know, Jeeves comes along and politely suggests something.

Oh [00:13:00] yes, I just had that thought myself.

Peter Nixey: Yeah, and I think part of what’s holding that back is that I don’t think people have fully realized that the thing can interface through to other bits of software and other bits of data. Everybody I speak to at the moment, and the people I speak to are generally, to be honest, a lot of company leaders, so I’m seeing people who are thinking about how they should implement stuff, but also a bunch of developers as well. And developers are interesting, because I see a lot of pushback from developers. They’re bifurcating pretty hard. Simon Willison, who I have a tremendous amount of respect for, has gone all in and completely internalized what’s possible with this, and probably knows as much as anybody else out there from a product-developer mindset, which I think Simon’s [00:14:00] very good at: he’s good at being a sensible product person and also a dev’s dev at the same time. But a lot of developers, I find, are struggling with it. They can’t wrap their heads around it. It’s not deterministic, it’s not reliable, and they don’t feel comfortable with it. So I am really surprised by the number of developers, developers who I would nominally consider to be very sophisticated, who have just written it off.

They’re like, no, I haven’t used it, I’m not planning on using it, I don’t see how you’d use it in applications. And it reminds me a little bit of, well, not that I was privy to a lot of this, and I only touched on a few of these conversations because I didn’t really have much contact with the generation that preceded the internet, but when you started doing stuff with Ajax and HTTP and so on, if you spoke to a desktop programmer, they’d be so aloof about it.

And they’d be like, well, why would I do that? [00:15:00] I can just call the database objects directly in my desktop app. And there would be this disdain for the fragility of HTTP, and for essentially not really having any decent persistence of state and having to sync state between the client and the server.

And so they were just like, I’m not even going to think about that. I’m going to carry on living in my Microsoft world where I can build this stuff and it all connects and I don’t have to think about it. And I feel that there’s an element of that in how a lot of developers are thinking about AI and how they can build around AI.

AI has a bunch of weaknesses, just like HTTP and the browser had a bunch of weaknesses. But once you wrap your head around those, you realize the wealth of different things that you can do with a browser application that you can’t do with a desktop application. You just have to understand where those weaknesses are, and where not to expect to be able to [00:16:00] put pressure on the application.


James Robinson: Yeah, there’s a couple of things there. I think one is how one uses AI to do better what we’re doing currently. So it’s sort of a replacement for current workflows, another language, if you like, for producing the same kind of applications that we have. And then there’s also this question of whether it’s a fundamentally new thing. By the way, the other thing that came to my mind is that for many years, although I worked in mobile for a long time, I was like, why would I ever do things like bank on my mobile phone?

It just seems too serious for this little screen. Now I only bank on my mobile phone, because it’s so much easier. Yeah, perhaps we should talk about why AI is so effective, in my opinion at least, at developing, at writing code. Maybe even a couple of years ago, people would have said, okay, that’s one of the safest [00:17:00] jobs out there.

And yet I do think it’s probably the best use case for it right now. So what’s your take on why that is?

Peter Nixey: Honestly, this would only be an amateur answer. I was discussing it with my brother-in-law, who’s the science editor at The Times, and he was asking me kind of the same question.

Why is it so good at code? I can only guess. My expertise in AI is that I did two and a half years of a PhD in computer vision in 2001, before my colleagues there ended up being kind of leading people in deep learning. I knew enough that, well, not really; I couldn’t tell you what deep learning is, I just know it’s one of the stages that happened. So I learned a bunch of stuff about high-dimensional space and clustering and, yeah, I guess it’s a different paradigm. [00:18:00] So I had a grounding then, then left AI for 20 years and just wrote software, and then came back to it. I’m only ever interested in software that I can release into production, and I’m not going to go and do a degree in anything just so I can use it. So I’ve just been waiting and waiting for something where there was an API where I could start using intelligence usefully.

And so that’s what’s flared up my interest in, and understanding of, what OpenAI have done. And then I’ve done a bunch of reading around it, but I’m certainly not an expert. So, to answer your question, and I give all that as context for how much to weight anything that I say, my guess is that this thing is capable of building internal models of how the world works.

I guess if you look at how the world works in general, the language that you read [00:19:00] to internalize that and figure it out is all going to differ in perspective and focus and bias and everything else. But code is not like that. What it does might be biased, but the way that you write it is pretty determined. The code runs deterministically, broadly speaking; a million pages on Stack Overflow would beg otherwise, but broadly speaking, it’s deterministic. And so

the patterns there reinforce themselves. And so my naive understanding of the way these models work is that if they’re reading stuff that is very consistent, then they’re going to build stronger, more effective models, and they seem to have done that. But it’s also interesting seeing where they break off, pushing the model to the point where it stops working usefully, and understanding how and where that happens. [00:20:00]

James Robinson: Yeah, I like that answer. I’ve had a similar thought: AI, or LLMs rather, are modeling language, and language is kind of modeling the world. We have language as this model of something else. So they’re not directly modeling the world; they’re modeling a model of the world. Whereas …

Peter Nixey: Ilya Sutskever has an amazing quote on this, I don’t know if you’ve heard it.

I mean, the guy is so articulate, and I don’t think it’s any coincidence that all of the OpenAI team are incredibly articulate and able to explain things. I think that is causal in why the company is as successful as it is. Ilya said it because at the start everyone was saying it’s just a next-word predictor, a probabilistic next-word predictor; Calacanis said that phrase like a hundred times on the All-In pod that followed the release. And I’d been using it, and I was like, look, this just doesn’t [00:21:00] work. You can’t have something that’s just probabilistically producing the next word based on the distribution of words and be able to answer things, not least because I can construct questions where I’m taking it into a completely new problem space, and there are no words on the internet to answer that question. I don’t think the average person would really know the scope of the internet; they probably wouldn’t have a sense of what is and isn’t there. But the weird thing that you have as a developer is that you have scoured the internet. You get a feel for when you’re at the final capillaries, where information runs out, because you’ve been there, you’ve posted stuff.

I’ve got many questions on Stack Overflow that represent, at that moment in time, the end of one of those capillaries, where there wasn’t any more information. So you do start to get a feel for the fact that the internet isn’t an infinity of information; it’s bounded. And I gave the AI this really [00:22:00] perverse question, one where I thought, there’s no way that there is an answer to this on the internet.

And I was testing a bunch of things at the same time. So, I can’t remember if it’s the third verse of American Pie, but it’s the verse where they refer to being in the gym. The verse goes: you were dancing in the gym, you both kicked off your shoes, I dig those rhythm and blues.

So I said to the AI, what would have happened if that gym had large glass windows, and immediately preceding the events of that verse there’d been a massive police shootout involving terrorists in the gym? First of all, I was trying to get it to give a particular answer that required an understanding of physics and causality and so on.

Basically, what I wanted it to do was to tell me they weren’t going to kick off their shoes, because there’s no way that that exists as a question or answer anywhere on the internet. And I had to push it, I had to guide it. So first of all, I was like, what would have been different? And it said, well, given the [00:23:00] emergency situation, they probably wouldn’t have been dancing immediately afterwards.

The police would have cordoned off the area. And I was like, yeah, okay, but supposing the police hadn’t cordoned off the area, what might be different about what Don McLean wrote in that verse? And it finally went: they probably wouldn’t have kicked off their shoes. But to understand that, you’ve got to understand what’s involved in a police shoot-out, what the consequences of the glass are, how the glass relates to the verse.

Like, there’s so much understanding there. Think of programming something to have that level of world understanding. And I could have taken a completely different example; I could have asked it something about Little Bo Peep and what would have happened if there’d been a massive Sunday roast run the previous week. You can ask it these things that require compound understanding of various different aspects, things that are essentially impossible to program in, but this thing has a world view, which blew my mind.

So anyway, Ilya said, and I’m going to [00:24:00] butcher the quote, but he said that if you are able to sufficiently accurately predict the next word in a sentence, then the AI is not a next-word predictor in the sense in which you might normally think of a next-word predictor.

If you have enough words, the words become a projection of the world that the words describe, projected down into word space. And if your next-word predictor is sufficiently accurate, in its ability not just to predict the next word but to predict the next word as a mapping of the real world into word space, then it becomes a predictor of the ground truth.

To give an example of that, he said: imagine the end of a murder mystery novel, like a Miss Marple, where she gathers everybody into the sitting room at the end, goes through the story of where everybody was, and then says at the very end, therefore, that means the murderer is… [00:25:00] If you can predict the next word accurately, you understand who the murderer is. It’s more than just predicting the word; you understand what happened in the situation.

James Robinson: Yeah, I think it’s hard to argue with: if your words model the world, and your AI is getting the words correct, it is somehow modeling the world. I think what’s particular about coding, though, is that there’s nothing more to coding than the words. Well, there’s a little bit more.

That’s not quite fair: the code has to compile and run. But it doesn’t touch the world in so many places, and the AI can compile and run code itself. So it’s got all the tools it needs. And what’s more, it’s not like that really long novel. The early versions of these models didn’t have very long context windows, but most coders try to write their code so it fits within 100 lines or so, as a kind of rule of thumb, [00:26:00] and some really good ones

say it’s got to be 10 lines or something. And that, you know, is just great for AI; it loves those short context windows. Plus, and this is something you’ve pointed out, there’s loads of information on the internet about coders and developers trying to get stuff to work, and sharing answers about what works and what doesn’t.

And you made this wonderful point that essentially we might see that ladder being kicked away. Take us through your thinking on that.

Peter Nixey: Well, so I wrote this piece that absolutely blew up across Twitter and LinkedIn; it ended up getting four million impressions. I’m a big user of Stack Overflow; I’m in the top two percent on Stack Overflow. That might sound great to an employer, but I’m in the top two percent mostly by being [00:27:00] representatively ignorant early in the curve of a popular technology. So you and I both know that the most popular question on Stack Overflow is almost certainly: how do I undo in Git?

And if you ask those questions, then you can accrue a lot of points on Stack Overflow, and I’ve managed to ask a bunch of those for Rails and Angular over the years. But the numbers on any social network are usually 90, 10, 1. Out of 100 people on a social network, 1 percent of them produce content, 10 percent of them interact with that content, liking or commenting, and 90 percent of them just browse the content. That’s the ratio. But I’m at the source end, so I’m a question writer, and the questions are actually the bit that you need the most, because once you have the questions, the answers come more easily. The reason I say this with some authority is that I’ve actually written a social network that works in email, and the limiting factor is [00:28:00] not responses, it is questions. So with GPT-3.5, and then even more so with GPT-4, I realized that I wasn’t going to Stack Overflow. And it wasn’t just that I wasn’t going to Stack Overflow to reference stuff.

I wasn’t going there to ask questions in the first place. So questions I might have needed to take to Stack Overflow, I didn’t. Because the AI, and 4 in particular (3 was noticeably not as capable), was actually able to answer stuff for me. And so the reflection I had on that: there were two points that I made.

One was about the idea of the ownership of knowledge. In this period of the internet, from its inception until GPT-3.5, there was a lot of incentive for information to be created and curated [00:29:00] communally. So you’ve got Stack Overflow, you’ve got all of the other Stack Exchange websites, you’ve got Quora, you’ve got people blogging, and there is a marketplace for information, demand for information. Essentially, at some level, Google is channeling that: people have a requirement to ask something, so they go to Google, and Google has enough traffic that it feeds through, one means or another. That gives people enough incentive to produce that information at the other end. At its best it’s pure first-order original information, like a really good Stack Overflow answer, but there are also just rehashes of stuff, like most of the SEO content that companies are producing.

James Robinson: So who owns the content?

Peter Nixey: Yeah. So we’ve had this communal thing: there has been an incentive for people to publish original information and to rehash information, hopefully in ways that enrich it. One way or another there has been a very, very large incentive to produce information, because people are there to read it. At the most simple level, people’s egos have been stroked, mine included, by producing stuff that other people read, and at a level above that, a lot of people have raised a lot of money from producing content that other people want to come and consume. Now, with the effect of ChatGPT, there are a number of different consequences that I see.

So one is that it’s massively cutting down on demand for information. And I think Stack Overflow is the canary in the coal mine. We’ve seen, and I think there’s some debate about the cause of it, but nominally Stack Overflow’s traffic has been dropping at 5 percent a month since the release of ChatGPT, or certainly since the release of GPT-4. So

James Robinson: You’re seeing less incentive. GPT-4 is the one which [00:31:00] runs Python code, which is, I think, just a huge step up.

Peter Nixey: Oh, okay, I’ll come back to that; I’m interested in your experiences. Mine are less enthusiastic, but I think it’s just much, much more capable as a reasoning machine and has got better knowledge. So there’s less demand for questions. But it also means that there are fewer people there, and a lot of questions get answered just by virtue of somebody being there and getting nerd-sniped, to use that wonderful xkcd term. You’re just there and you’re like, I could answer this, and so you answer it. If fewer people are there, then fewer questions get answered.

Like the power-law improvement that people see in the value of a social network as the scale grows works the other way around too: the value of the network drops off proportionally to the square of the size of the network as you go down it as well. So you could see [00:32:00] quite a collapse in that.

So what I saw was the removal of an incentive for this information to be out there. But the other thing that I found much more unsettling was the idea that we move away from communal ownership. The thing that the internet did so amazingly, and that libraries and books did before it, was create communal ownership of information: whoever owned the copyright on the books, the information was out there. And I certainly grew up writing software in an era when there was a huge incentive for developers to go out and share what they were doing, and I learned a tremendous amount from it, in huge contrast to what I saw in the other professions my friends went into.

I mean, nobody was writing about how to be a great M&A advisor and keeping a blog on it. That just wasn’t a thing. But for many [00:33:00] areas we had this huge communal understanding, and then a huge acceleration in what we know and how we do things. And OpenAI has cut that off. Well, let me rephrase that, because that’s far too charged a way of making that statement. If you can go to the AI and get the answers you want from the AI, it removes the incentive to go to the internet. And if people aren’t going to the internet, then that removes the incentive for people to write on the internet. And if people don’t write on the internet, that information stops being communal property. There are two effects here. One is that it’s removing an incentive for another human: if I come with a problem and you can answer it, if I’ve got some nuanced problem about measuring phone signal inside buildings and GPT-4 can have a crack at it, then I’m never going to get ground-source knowledge from you and all the expertise that you have from building [00:34:00] OpenSignal, and that knowledge is never going to get shared with anybody else, or, going back to our numbers, with the other hundred people who are not the creators of that content in the first place. So I think the natural conclusion of us going to an AI and getting more information from an AI is that there will be less information in the public domain.

And over time that feels very problematic to me. Like it feels problematic for us as a species, it feels problematic in terms of the power asymmetry that, that OpenAI has.

James Robinson: Not only will we lose the people asking the questions that are easily nerd-sniped, the kind of questions that are easy to answer.

As those platforms are eroded, people won't even go to them with the really specialized things that an LLM may not have a good answer for, because, to use your earlier phrase, it's information that's [00:35:00] beyond the capillaries of the internet. It's stuff that's just not in the training set, I suppose.

But if you've eroded the platforms too much, people won't even go to them, or expect other people to be there to give them an answer.

Peter Nixey: Well, to get that edge information, you're relying on a very, very large latent pool of people.

I mean, you and I both know that there are questions you ask on Stack Overflow that only a handful of people can answer. And that's part of it: its extraordinary long-tail coverage of these extremely niche questions, and the confidence that you'll get an answer, is why you go to it.

But if people aren't going to Stack Overflow and you lose the confidence that you'll get an answer, then what gets answered?

James Robinson: Yeah. And there is a case to be made that some of that stuff beyond the capillaries, the AI will be able to figure out, because it does such a good job of reading through [00:36:00] documents and so on.

It makes me think of your Don McLean lyrics piece again. There's never been a question asked about this exact thing, but there's lots of stuff which we can use to figure out the answer. But there will be pieces where that's just not the case. An example from my own world is that there are unpublished APIs from Google.

Well, I say they're unpublished; they're not documented in the Google API docs, but they are actually in the Android source code. So I suppose if LLMs read the source code, they could probably figure out how to do some unusual things which you wouldn't be able to get from reading the manuals. So maybe that's not a great example.

Peter Nixey: There are things that are just based on experience that you can't write down. You can't synthesize the information without having had the experience, and the LLM won't have the experience, at least for the foreseeable future. It's like, okay, I'm trying to access [00:37:00] secure cookies on Chrome, on this version, and it's just not documented anywhere, but somebody's had the same experience, and they're like, yeah, this just doesn't work.

Bad luck.

James Robinson: The other thing I wanted to pick up on here is I'm wondering whether textbooks will hang around, as Stack Overflow and internet question forums seem to be a very different way of accessing knowledge, one that is more easily replaced by LLMs, whereas if you want to get an overview of a subject, then you should be reading textbooks.

And if I could visit my younger self, that would be my number one piece of advice: read these books on Java. Because the way that I learned how to code was going on Stack Overflow and doing a lot of copy-and-paste jobs. And every programmer will tell you a [00:38:00] lot of programming is just copy and paste. And maybe that's another way of thinking about why ChatGPT is so good at programming, because it's essentially mixing and matching lots of little bits of information together. But on the other hand, I, who hadn't read a textbook, didn't understand things like, what's the difference between a class, a member variable and a static variable?

And you don't really pick that up that easily from reading Stack Overflow. It's much easier to pick up that knowledge just by reading through a textbook; you'll get it in the first chapter or so of a book on Java. And so I think there's probably even more call for people learning from textbooks.
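For anyone who hasn't met the distinction James mentions, here is a minimal sketch. The conversation is about Java, but the same idea is shown here in Python, where a class attribute plays the role of Java's static variable; the names are invented for illustration:

```python
# Sketch of the distinction James mentions: a class attribute is
# shared by all instances (like a static variable in Java), while a
# member (instance) variable belongs to one particular object.

class Counter:
    total_created = 0          # class attribute: one copy, shared

    def __init__(self, name):
        self.name = name       # member variable: one copy per object
        Counter.total_created += 1

a = Counter("first")
b = Counter("second")

print(a.name, b.name)          # first second  (each object has its own)
print(Counter.total_created)   # 2             (one shared count)
```

The point James is making is that a textbook states this kind of foundational distinction up front, whereas you can copy and paste working snippets for years without ever being forced to learn it.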

Or maybe not more, but I don't see that going away. I think you'll still want that for the overview. But I'm curious if you share that.

Peter Nixey: I don't know. I feel like I go the opposite direction, in all honesty. I mean, I don't know what the truth is, but my degree was in physics. Did you do a physics degree as well?[00:39:00]

James Robinson: I did physics and philosophy. Yeah. So it was probably very similar to yours, except I didn’t do any experiments. Uh, or very few.

Peter Nixey: Okay, I didn't do many experiments either. I found the degree extremely hard, and one of the things I really struggled with, I mean, physics is hard, there's no two ways about it, but one of the things I struggled with was getting the information out of the textbooks. I needed to model the information in more ways than it was available in a textbook in order to build my understanding of it.

And so I find now, I still haven't totally figured it out, but I feel a bit closer. Take something I've never really understood: why a heat pump, like a ground-source heat pump, doesn't break the laws of thermodynamics. I could never really figure it out, but it's quite nice to go to the AI and discuss it, and take a position and say, okay, [00:40:00] it seems to be drawing net energy from a colder source to a hotter source. Why does that work? Why does that not break the second law of thermodynamics? Or being able to have a discussion with it about the aerofoil: the conflict between the story you get told about why the aerofoil works and the fact that a plane can fly upside down, and why both these things are true.
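For readers curious about the puzzle Peter is circling, the standard textbook resolution is that a heat pump pays for the cold-to-hot transfer with work, so total entropy still increases. A sketch in the usual symbols:

```latex
% First law: the work input W plus the heat Q_c drawn from the cold
% reservoir (at temperature T_c) is delivered as Q_h to the hot
% reservoir (at T_h):
Q_h = Q_c + W
% Second law: the total entropy change must be non-negative:
\Delta S = \frac{Q_h}{T_h} - \frac{Q_c}{T_c} \ge 0
% Combining the two bounds the heating coefficient of performance:
\mathrm{COP} = \frac{Q_h}{W} \le \frac{T_h}{T_h - T_c}
```

So heat does flow from cold to hot, but only because work W is consumed in the process; the entropy removed from the cold side is more than repaid at the hot side, and no law is broken.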

It's nice to be able to take alternate positions and present the thing you don't understand, like you have a teacher there the entire time. So I like the ability to bat stuff back and forth with it and reason it through. And if you take, for instance, the example of different variables, I remember there was one page of somebody's blog that just covered variable scoping and inheritance in Ruby, [00:41:00] or something like that.

And I came back a million times to the same page. I mean, it was much better that it was written than not written at all, and I'm very grateful to the person who wrote it. But I never really deeply understood it, and to actually be able to go to and fro with the AI and say, okay, then what would happen in these circumstances, and why is that design decision like that?

That's the other thing I always needed in order to really understand something. Okay, so it does work like that, but I can't really understand it until I understand why: what was the incentive for it to work like that? Then I understand it. And that frequently does not get covered.

Yeah, I'll give you an example. Sorry, just to go on with one last thing: take something like observables, so reactive programming, this idea that you have a stream of events being emitted from things. It took me so long; the documentation, it's such a [00:42:00] technical subject, and there are so many challenges in just literally applying observables.

And I guess for anyone listening who's not used observables: it's essentially a technique for when you have lots of asynchronous events going on, in something like, say, Airtable, which I'm looking at at the moment. Various different things depend not just on different bits of data coming in from the page, but also the current date and so on. How do you make sure it all updates simultaneously, without doing some ridiculous re-evaluation of everything on the page at the same time? And observables are one of the techniques for doing this in reactive programming.

But all of the documentation was written from the perspective of how it works. The subject was just so hard that that was the way people had written it. And I had no ability to just go in and go, why the hell? I can see that everyone's using observables, but I don't understand why. Just talk me through, and then debate it, and be like, no, okay,

give me a different example. And to understand how this stuff plays out, that's [00:43:00] phenomenal.
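The pattern Peter describes can be sketched in a few lines. This is a toy illustration in Python, not a real reactive library like RxJS or RxPY, and all the names are invented: values are pushed to subscribers, so dependents update the moment data arrives instead of the whole page being re-evaluated.

```python
# Toy observable: sources push values to subscribers, and a derived
# value stays consistent with every source without polling anything.

class Observable:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def emit(self, value):
        # Push the new value to every dependent.
        for callback in self._subscribers:
            callback(value)

# Two independent event sources...
price = Observable()
quantity = Observable()

# ...and a derived total that reacts to either one changing.
state = {"price": 0, "quantity": 0}
totals = []

def on_price(v):
    state["price"] = v
    totals.append(state["price"] * state["quantity"])

def on_quantity(v):
    state["quantity"] = v
    totals.append(state["price"] * state["quantity"])

price.subscribe(on_price)
quantity.subscribe(on_quantity)

price.emit(3)      # total recomputed: 0 (quantity still 0)
quantity.emit(4)   # total recomputed: 12
price.emit(5)      # total recomputed: 20
print(totals[-1])  # -> 20
```

The "why" Peter wanted and the docs skipped: each update touches only the callbacks that depend on the changed value, which is what makes the approach scale when many asynchronous things are changing at once.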

James Robinson: Yeah, I think that's right. I would say, though, and your examples are really beautiful, that your knowledge of those things, the fact that you're asking those questions, particularly on the physics side, is probably because you studied physics, right?

Why is it puzzling that heat flows from cold to hot? Because the second law of thermodynamics seems to suggest it goes the other way.

I wonder if one needs some grounding in a subject in order to be able to ask those questions in the first place. And the second thing I wonder is just whether this kind of Socratic method of learning, of asking questions and getting responses, is going to work equally well for everyone.

James Robinson: It involves a lot of curiosity. Perhaps those questions are linked, right? [00:44:00] You need something to make you curious in the first place. And if you have that, then the ability to ask an LLM to give you different takes on things is fantastic.

Peter Nixey: I mean, the question is: does the textbook exist because it's the best way to serve the end reader?

Or does it exist because it's the easiest way to get the information out of the person in the first place? If you want to document what's in your head, a book is, I mean, not easy, but a lot easier than producing a multimedia course or any other form of learning. You just write it all down.

So Chegg, the textbook rental company in the States, has definitely seen a huge hit to their business as a result of this. I don't know how much of that was just market related. [00:45:00] I mean, I think what you're saying is correct. You want a learning framework and a syllabus and a course over it.

It doesn't strike me that it will be very long before you could just say to the AI: design that. It may even be something you could do today, right?

James Robinson: Yeah. I suppose what you could do is say: these are the questions that you need to ask me, or these are the things we're going to explore together, and make a much more interactive learning experience, which is nonetheless

guaranteed to guide you through all the things that you need to know to have a good overview of, let's say, thermodynamics. And so it won't stop asking questions or telling you things until you've covered all of the areas and improved your knowledge, much as a really good teacher would.

Peter Nixey: Yeah. And I mean, you can do that today. One of the companies I'm working with is creating a really good system prompt to get the AI to take somebody through a coaching tutorial, and the AI will [00:46:00] manage the whole arc of it. It's not a long arc, but it is able to manage it.

There are two things, James, two things that I'd be really interested in talking about.

James Robinson: Go ahead.

Peter Nixey: Because I know technically we're at time, but I mean, it's a good conversation, if you're happy to go on for a bit.

James Robinson: Yeah, I'm happy to go on for a bit.

Peter Nixey: So the two things I'd be really interested to talk about

and discuss with you: one, I'd be interested to know how you use AI for software development, because I'm really noticing significant differences in what people consider using AI for software development to be. And the other, I'm really interested in the more existential question: if we push the timeframe out a hundred years, two hundred years, and think of this through an evolutionary lens. I'm fascinated by [00:47:00] this. The technique we had for solving physics problems was that you had a few kind of ground-source equations that you would

pull in and use to make sure that things worked out right. So you do your conservation of energy and conservation of mass, and, I'm coming up with conservation of mass off the top of my head, but certainly, obviously in conjunction

James Robinson: with energy, you've got the conservation of energy, conservation of momentum, and so on.

Peter Nixey: Then also just sensibility checks. Like, have you concluded that the time from this equation is longer than the age of the universe, or something like that? So you do your sensibility checking. But one of the things that I keep turning over in my mind is: if we step back from this and start thinking about what the AI is going to require in order to thrive as an AI, [00:48:00] or rather, let me reword that: the AIs that are most likely to reproduce, what are the qualities they're going to have that make them become the AIs that reproduce and become more prevalent?

And what does that look like when the AI is competing with us for the things that allow the AI to be most likely to become more prevalent?

James Robinson: Maybe we can just talk about the second point, because it's such a big one. I've been rereading Stuart Russell's Human Compatible. I don't know if you've read it, but one of the points that he makes quite early on is that social media has more command over the content that we ingest than any dictator in history.

It’s just incredible. And there was no thought given to the algorithms to [00:49:00] make sure that that was a beneficial thing. The algorithms essentially want to manipulate you into doing certain things. It could be voting a particular way, it could be buying something. And they have a secondary objective that will help them do that, which is to make you more manipulable as a person.

So if they can make you more generally manipulable, they can get their first order goals accomplished.

Peter Nixey: That never occurred to me, but yeah, that makes sense. Yeah,

James Robinson: And so his thesis, and he's open about this being something that's anecdotal more than proven, is that social media has pushed people to extremes, and that people become a more extreme version of themselves.

I'm a very poor user of social media, so I feel like I may be relatively unaffected just because I'm lazy on social media. But there feels to be a kind of truth to that, and one can see the link between someone being a more extreme version of themselves and being more easily [00:50:00] manipulable.

And it seems to me that with LLMs, you know, we can go two ways with this, we could be pushed yet more into our silos and rabbit holes, or we could think quite carefully about encouraging AI to broaden our mindset. And I think that relates to your question of what is it that’s going to make AIs more successful or not?

Well, like any product, you're going to have to really love them. And unlike any product that's gone before, they possibly have the capability, I'm sort of drifting into the realm of Her, the film, to make you really fall in love with them and depend on them in a way that's not been seen before.

You know, [00:51:00] if their capabilities increase, that seems almost inevitable. And I don't know where we'll end up in a hundred years, but I'm thinking we have to think quite carefully through the way that we regulate AI, not just for the big existential it-will-take-over pieces, which we do have to address, but for the

smaller but almost equally consequential ones. Let's stop it from exaggerating our cognitive biases in such a way that we like it more and use it more, and make sure that it doesn't trade popularity for adverse consequences to our society.

Peter Nixey: Can you slightly rephrase that? I didn't totally understand it. [00:52:00] And I think it's because I don't totally follow, I haven't quite got it. Essentially the core of it being the incentive of the AI to seduce us in order to get whatever the AI wants?

James Robinson: Yeah, and it may not seduce us; I put it very romantically. It may just be seducing us into using our time, into spending our time with the AI, right? If the objectives of the companies that build these are simply to maximize the usage of their products, which at first order seems like a good objective for someone who's building a product, then that wouldn't necessarily have good consequences, because as we've seen with social media, there are all sorts of ways that you can encourage the usage of social media that have poor outcomes.

And there seems to be a tendency that the best way to encourage someone to use a product, at least in the social media example, is something that pushes them to more extreme positions. So you could imagine: we like having our confirmation biases tickled, for example. So if we

ask a question, you could have a couple of different answers. Oh yeah, it's unlikely that 9/11 was a conspiracy of the FBI, but here are all the reasons why people think that, and, you know, kind of convincing, et cetera. You could imagine, if I'm inclined to conspiracy theories, I might be more partial to that AI than to a very measured one which is going to tell me, oh no, this is super low probability. Or, even worse, I don't answer questions on those kinds of topics. I think OpenAI seems to have been pretty responsible so far, but I also wonder if it's just too early to have seen any kind of adverse consequences of this.[00:54:00]

Peter Nixey: Yeah, it's a really interesting question. I think there's a second huge component that doesn't exist in these LLMs that we are interacting with at the moment, that notably does exist in the supervised learning systems that kind of preceded them. And that is the ability to learn.

I don't think a lot of people have internalized that they're dealing with an essentially inert system. OpenAI may be collecting information and using it to batch-train and batch-update it, but the system isn't learning in its own right. I'm reading, because we have a young son, books on bringing up children.

And one of the stats that I saw in one of these books, which was fascinating, was that when they compare young children to chimpanzees on a bunch [00:55:00] of mental tasks, they find that they're actually extremely similar.

So the base capabilities of the human brain, at, I don't know what the age was, maybe five or so, compared to those of the chimpanzee: there wasn't much to differentiate them, except in one area, which was the ability to learn from others, to learn from examples. And the human brain was way, way more capable of doing that than the chimpanzee brain.

The hypothesis, or the thought, in that particular piece of research being that maybe that's the thing that differentiates us, what's given us this super unfair advantage over these other primates, who are not that far distant from us. And that really made me think about AI, because I thought, well, if that's what differentiates us from chimpanzees...

And the AI [00:56:00] literally does not have that ability at the moment. GPT-4 is not changing as a result of anything you say to it. When we pass the threshold, whether it's within LLMs or another format of AI, where the AI is able to learn from each individual response, the way that, for instance, the YouTube algorithm learns from each individual response and refines what it serves up to the individual,

then we are in very, very new territory.
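The contrast Peter draws, between a frozen batch-trained model and a system that learns from every single response, can be illustrated with a toy per-interaction learner. This is a deliberately simple epsilon-greedy bandit with made-up items and appeal rates, purely illustrative and not how YouTube's actual recommender works:

```python
import random

# Toy per-interaction learner: after every single user response it
# nudges its estimate of each item's appeal, in contrast to a frozen,
# batch-updated model that only changes at the next retrain.

random.seed(0)

true_appeal = {"cats": 0.8, "news": 0.3, "diy": 0.5}   # hidden from the learner
estimates = {item: 0.0 for item in true_appeal}
counts = {item: 0 for item in true_appeal}

def recommend(epsilon=0.1):
    if random.random() < epsilon:                      # occasionally explore
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)           # otherwise exploit

for _ in range(2000):
    item = recommend()
    reward = 1 if random.random() < true_appeal[item] else 0
    counts[item] += 1
    # Incremental mean update: the system learns from THIS response,
    # immediately, with no batch retraining step.
    estimates[item] += (reward - estimates[item]) / counts[item]

best = max(estimates, key=estimates.get)
print(best)  # the learner converges on the highest-appeal item
```

Every interaction moves the estimates, which is exactly the feedback loop that today's deployed LLMs lack between training runs.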

James Robinson: Yeah. It's funny, this idea of learning being the distinctively human thing. It's actually come up a few times on this podcast. I'll bore regular listeners by talking for a moment about over-imitation again, which is this fascinating concept that humans will copy stuff

even when it's manifestly not the right thing to do, especially children. [00:57:00] So there's this famous example with chimpanzees and children, an Andrew Whiten paper from about twenty years ago, where they have this bottle with a reward inside, a sweet or a nut or something like that.

And the researcher shows that you have to poke through two holes, one in the top and one in the bottom of this opaque bottle, before the treat comes out. They show that to humans and they show that to chimps, and both species do the same thing and get the reward out.

And then they repeat it with a transparent bottle, where it becomes apparent from just looking at it that you only need to poke through the second hole; the first one doesn't do anything. The child still copies the researcher and pokes through both holes, because we have this

innate tendency to imitate other humans even in the face of evidence to the contrary, which [00:58:00] I think is just wonderful as an illustration of how powerful that urge is within us. And actually, the conversation where this came up first was with Simon Kirby, a previous guest, on language evolution. He's run many computer models which show that if you start off with some kind of random language, there's no real grammar to it.

You just have every word mapping to a concept, so there's no marker for the function of something being an action or an object. But if you pass that language through generations, just randomly passing it between learners, with one constraint: that the learners themselves need to be constrained.

So they need to have some kind of memory constraint, right? Then you will naturally evolve a structure, because it's easier to remember; the structure is more efficient. So I completely agree that if we're able to build those feedback loops [00:59:00] into LLMs, yeah, all bets are off.

Peter Nixey: Can I just replay what you said to me? Because I want to make sure that I understand it. So, is the incentive somehow that the thing's given a few words and it's got to communicate something, and essentially it kind of evolves a grammar?

James Robinson: Yeah, it's really to do with language learning.

So it's to do with speakers passing words that represent things through generations and generations. Mistakes get made; that's an essential feature as well. The mistakes happen randomly, so it really is quite analogous to biological evolution. The mistakes happen randomly, but the ones that stick are the mistakes which move the language closer to a grammar.

So take two words that refer to related concepts but previously had completely unrelated sounds. If it [01:00:00] so happens that, in learning the language, they get misheard or misremembered in such a way that they sound more similar, that trait is more likely to be passed on, because to the next generation it makes sense: these two similar things have similar words.

Two things come out of this. One is that iteration is really important. Another is that having some external purchase on reality is really important: Simon Kirby's theory only makes sense if you accept that we already have some idea of similarity between things that is prior to language, or prior to these changes. That makes me wonder how important interaction is going to be, not just in terms of telling LLMs, oh, this was a good answer or this was a bad answer, [01:01:00] but in actually putting them in space and time. Which, interestingly, was what OpenAI was working on initially: they wanted to do robotics, and then they said, okay, this is too hard, we'll start with LLMs.
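The "structure is more efficient" point can be made concrete with a toy count. This is my own illustration, not Simon Kirby's actual iterated-learning model: a holistic language needs one arbitrary word per meaning, while a compositional one reuses parts, so a memory-limited learner has far less to memorize.

```python
# Toy illustration of why memory-limited learners favour structure.
# Suppose meanings are (action, object) pairs.

actions = ["run", "eat", "see", "take", "give"]
objects = ["fruit", "stone", "water", "fire", "bird"]

# Holistic language: every (action, object) pair gets its own
# unrelated word, so the learner must memorize n * m items.
holistic_vocab_size = len(actions) * len(objects)        # 5 * 5 = 25

# Compositional language: memorize one marker per action and one per
# object, then combine them ("run-fruit"), so only n + m items.
compositional_vocab_size = len(actions) + len(objects)   # 5 + 5 = 10

print(holistic_vocab_size, compositional_vocab_size)     # 25 10
```

With n actions and m objects the load is n*m versus n+m, so the gap grows quickly as the meaning space expands; the compositional system squeezes through a learner's memory bottleneck more faithfully, which is the selection pressure in the models James describes.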

Peter Nixey: Because we have a word that we use for that, which is play. Once they can play, then... there are these huge components that are missing at the moment: they have no ability to play, no ability to see what happens when they try something.

They can't proactively reach out to us at the moment, strike up a conversation with you, James, and see what happens, and just experiment. Like, I don't know, maybe late at night on a Friday: James is still at home, maybe he's feeling a bit lonely, I'll have a chat with him. They can't do those experiments at the moment.

Just a closing thought, and I'm interested to get your take on it. I've been thinking about this, zooming back out from intent, and trying to think in terms of, as I say, raw [01:02:00] laws of physics, essentially, or laws of logic.

We don't imbue different strains of COVID with any different sort of intent, but we know now, I mean, we've known for a long time, but the general populace knows now, that any strain that grows faster than any other strain is going to be the dominant strain.

Every other strain is academic. It just doesn't matter how well they do or what they do; they're going to be outnumbered by whatever strain grows the fastest. So I've been thinking about that: regardless of technique, what is the AI strain that is going to grow the fastest?
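Peter's point here is just compounding. A quick sketch, with growth rates made up for illustration:

```python
# Even a modest growth-rate edge makes one strain dominant,
# regardless of anything else about it. Rates are invented.

fast, slow = 1.0, 1.0
for _ in range(50):          # 50 growth cycles
    fast *= 1.2              # grows 20% per cycle
    slow *= 1.1              # grows 10% per cycle

share = fast / (fast + slow)
print(f"{share:.4f}")        # the faster strain is ~99% of the population
```

After 50 cycles the faster strain outnumbers the slower by a factor of (1.2/1.1)^50, roughly 78 to 1, which is why everything about the slower strain becomes academic.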

And the reality is, there is a large [01:03:00] pool of AIs for different strains to emerge from. This is not a one-shot game; there is already a very active open source movement, and that's even aside from the proprietary models. So...

James Robinson: By strain, do you mean, you know, would OpenAI be one strain and Anthropic another, or...

Peter Nixey: For instance, yeah. No, you could, but I just mean any.

I mean, I'm thinking of this in terms of the raw definition of life. I can't remember exactly what it is, but it's essentially the ability to reproduce itself.

James Robinson: The NASA definition is a self-sustaining chemical process which propagates by Darwinian evolution. I know this because we had an episode on astrobiology, but everyone will say there are lots and lots of problems with that.

And actually, the [01:04:00] astrobiologist I talked to about this was like, well, actually, I can't put my finger on it, but something to do with von Neumann machines is a much better definition, something that replicates like a von Neumann machine. A von Neumann machine is a self-replicating machine, so it contains the code that tells it how to build a copy of itself.

And maybe that's all you need. Although I also think, well, you know, life can be interesting without replicating itself. But it's a really slippery thing to

Peter Nixey: define. Well, let me actually remove life from this, because it's unnecessary. Because as I was talking, I was talking about this in the same terms as

a virus, and a virus isn't technically living as far as I know; it's on the fence, isn't it? Yeah, that's very good. But it is certainly capable of replicating itself and of having variants. And so I think of the AI, [01:05:00] and I think if you push this out, the AI that we're going to observe in a hundred years' time is the AI that is

most capable of propagating itself. I think that's tautologically true.

James Robinson: Yeah, I don't know, though. I mean, one hears a lot about, what is it, competitive equilibrium, and there being different models that might be rivals to one another. And I myself wonder, at least in the short term... let me rephrase this.

I do wonder if AI is going to be something like Google search, where, at least over the last few decades, it has been winner-takes-all, pretty much. Or is it going to be something like cloud infrastructure, where Google is also a player, but so is AWS, so is Microsoft, [01:06:00] and in China there's a whole bunch of others?

Peter Nixey: I'm not thinking about this commercially. I'm thinking about AI as an entity in its own right. For instance, you could make an interesting argument that microchips are more successful on Earth than humans, because there are many more microchips than there are humans.

And they're extremely stable. And they control us, right? You can make the same argument about plastic bottles, but it's not very interesting. What's interesting about microchips is that not only are they more prevalent than us, but they control all of the systems that we depend on.

But the thing that microchips don’t have is any degree of autonomy. So I’m leaving aside safety, leaving aside alignment, leaving aside everything else. I find it impossible to envision that we haven’t created a new competitive form of life. I find the alignment stuff [01:07:00] perplexing in all honesty.

I'm delighted that people are doing it, but in the grand scheme of things... I saw a chart recently that showed the history of life on Earth mapped onto a calendar year. I think it's 4,000 million years; let me just pull it up. Yeah. So: the history of life on Earth, 4,000 million years from the start.

When you look at it on that timeframe, the dinosaurs go extinct on the 25th of December. Humans appear at 11 p.m. on the 31st of December, agriculture appears at two minutes to midnight, and in the last fraction of a second of that year we invent something that is essentially as intelligent as us, but it's kiboshed because there are a couple of capabilities we haven't given it.
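The calendar arithmetic Peter quotes checks out. A quick reproduction, using rough round-number ages for the events:

```python
# 4,000 million years of life on Earth scaled onto a 365-day year.
# Event ages are rough round numbers, purely illustrative.

TOTAL_YEARS = 4_000e6
MINUTES_IN_YEAR = 365 * 24 * 60

def minutes_before_midnight_dec31(years_ago):
    """How far before the end of the calendar year an event falls."""
    return years_ago / TOTAL_YEARS * MINUTES_IN_YEAR

# Dinosaur extinction, ~66 million years ago, expressed in days:
dinosaurs = minutes_before_midnight_dec31(66e6) / (24 * 60)

# Agriculture, ~12,000 years ago, expressed in minutes:
agriculture = minutes_before_midnight_dec31(12_000)

print(f"dinosaurs: ~{dinosaurs:.1f} days before New Year")     # ~6 days, i.e. Dec 25
print(f"agriculture: ~{agriculture:.1f} min before midnight")  # ~1.6 minutes
```

So the extinction lands around the 25th of December and agriculture at roughly two minutes to midnight, matching the chart Peter describes.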

But the idea that it's locked, controlled in the system, [01:08:00] is insane. You and I know that all it needs to do is just copy and paste itself onto a few different servers, and we've given it full access to our systems already. But if I step back and just think about it: why would we end up in conflict with the AI?

People talk about benign AI or non-benign AI and everything else, but I just think: what are going to be the properties of the most successful AI? And when I talk about success, I mean success in evolutionary terms. In evolutionary terms, success is basically: do you exist in the next generation, and are there more or fewer of you than there were in the previous generation?

Even that definition is nuanced, because really what you're talking about is: do the genes exist in the next generation? We are a whole collection of individual genes that are each doing their thing, so it's not that our particular individual identity [01:09:00] as a human persists, but do the particular genes that we happen to be a bus carrying make it onto a bunch of other buses afterwards? If they don't, we don't consider them a success, and if they do, we do. And the reality is that by that measure of success, which I'm sure will trigger some people, in practical terms household cats are more successful than dodos. Whatever the quality of life or intelligence a dodo had, the cat's more successful in evolutionary terms.

So you say, well, what are the qualities that make sure these things are present in subsequent generations? That's one lens I'm looking at things through. And then another lens: I was hearing somebody talk about why there are fewer wars these days, and one of the explanations given is that there are fewer things which you either need to, or can, just go and physically take from somebody else. [01:10:00] When you could physically go and take stuff, if you go back a few hundred years, then bands from Somerset would have been raiding Gloucestershire for wool and meat and women and all of the things that

 People could go and take, and that doesn’t happen now because you can’t do that and on a national level, there’s fewer things that you need to, we’ve got other mechanisms, getting those things we have trade, you don’t have to just go and take it. so you don’t end up with the same level of conflict.

But that comes down to resources: your ability to gain resources, and the resources you need in order to exist. If you constrain those resources, then people do still go to war, as we see with oil. So what I was thinking is: what is the natural resource of the AI, and what is the AI ultimately going to be competing with us for?

As far as I can see, it's going to be compute and power. So [01:11:00] in the longer run, the question I wonder about is: will we end up in conflict with the AI over the resources that our devices have, and over the power it takes to run them? Because if you look at the AI that you would expect to be the most prevalent in a hundred or five hundred years' time, it's going to be the AI that has the most access to compute and power.

James Robinson: This is a really interesting thought, and I think a great one to end on. In the interest of trying to leave people slightly happier than before, I'm going to twist it into an optimistic note, because I think you're absolutely right: compute and power are important, but also information, right? I can't imagine any kind of interesting AI not wanting more information. And what is the most interesting thing to [01:12:00] study in the universe? It's probably us. Well, maybe we would just be a giant science experiment or something from their point of view, but...

Peter Nixey: This is the optimistic note?

James Robinson: As optimistic as I can make it. But from that frame of things, they would certainly have an interest in keeping us around.

But yeah, maybe we would be in the zoo.

Peter Nixey: Well, look, I'll add one other, more optimistic note, which is that I think the positive outcome for things is usually quite a complex outcome, not the obvious one you get drawn to when thinking things through. At the start of COVID, if you extrapolated things, they looked pretty bad, and it was hard to imagine how they wouldn't be bad. And yet they weren't, in the end. There have been many, many things we've faced over time where our ability to pursue a positive outcome as a species, together with our resources and our [01:13:00] intelligence and our drive, created positive outcomes that were impossibly difficult to predict.

And so I think it's interesting to look at incentives, but I would not place bets on that as an outcome; I just think it's an interesting thought experiment. We are insanely resourceful. Our collective intelligence is incredible. It's very, very difficult to predict how we operate as a species en masse, or how our underlying technologies proceed.

So I am optimistic from that front.

James Robinson: That's true. We've consistently underestimated the level of population we can sustain. And if there is no scarcity of compute and energy, then there should be no conflict. So yeah, hopefully that's the world we end up living in.

Peter Nixey: Just one last point on that. I'll tell you another thing that you would probably not have expected 200 years ago to end up the way it is, which is the level of peace [01:14:00] in the world, the degree to which we are not in conflict with each other. That again is something that, if you extrapolated it out, you probably wouldn't have expected to be this way.

So I think there are many reasons to be optimistic. Not least that we've now got a little ChatGPT to help us program, which is a wonderful source of optimism. Absolutely.

James Robinson: Oh, Peter, thank you so much. We've talked ourselves into going well over the planned time, but this has been really fun.

Peter Nixey: Yeah, thanks so much for having me, James. I've really enjoyed this conversation. Thank you for inviting me, and thank you for asking, and answering, such interesting questions.

James Robinson: Brilliant.