(276) AI Rights

This week’s guest, joining Matt and supply host Martin Sadler, is Milla Spence, with whom we explore the idea of AI having rights.

You can explore some of these themes further at:

The AIRAI Website: https://ai-ari.org/

Ex-Google engineer Blake Lemoine gets fired for claiming LaMDA is sentient: https://www.bbc.co.uk/news/technology-62275326
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

A report about Geoffrey Hinton, the Godfather of AI, claiming AI has reasoning and common sense, and is an existential threat: https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai

An interview with Geoffrey Hinton https://youtu.be/qpoRO378qRY?si=ejLPlzCD5rAx-NWc


Automatic transcript below – potential for a few errors and omissions…

(276) AI Rights

Matt: Hello and welcome to episode 276 of WB40, the weekly podcast with me Matt Ballantine, Martin Sadler and Milla Spence.

Well, welcome back. Apologies last week, we thought we were going to do a show, but then at the last minute the person who was going to be our guest wasn’t able to be our guest, and therefore we didn’t do a show. Although you got a bonus podcast last week because I appeared on another podcast called Compromising Positions, and I hope you enjoyed that.

I’ll put the other one out later this week, because they split it into two episodes. But we do have a show this week. The eagle-eared amongst you will have noticed that Chris isn’t on the show, but we are joined instead by Martin Sadler, rejoining us for the first time since about episode 46. So Martin, how has it been for the last five or so years?

Martin: It’s been busy since I was last on, thanks, Matt. So last time I was on, we talked about detecting a decent IT department. And since then I went and found the worst IT department I’d ever seen in my life, and I’ve been spending the last five years making it not the worst IT department I’ve ever seen in my life.

Which, uh, is an ongoing journey, but we’re definitely ahead of where we were.

Matt: That’s good. And interestingly, you are actually the person who is the boss of our last guest.

Martin: That’s right. Mark Taylor is the head of IT building the Midland Metropolitan University Hospital, and I am responsible for all the IT in the existing hospitals in the trust, Sandwell and West Birmingham.

Matt: Fantastic. So you’ve got a big new hospital opening up, which, even though Mark is dealing with it on a day-to-day basis, is presumably going to be taking up some of your mental space as well.

Martin: Yeah, it does. I try and make Mark do all the donkey work, and I’m just waiting for the glory when it goes live. Then it’ll be on both our LinkedIn profiles saying we did it. But we all know that Mark actually put all the hard work in; he’s the brains behind the operation. Yeah, so it’s been busy.

I’ve had a bit of a busy week as well. I had Monday off because I was coming back from holiday, and then I came back straight into an Insights thing, where you fill in a tick list and they end up with a Jungian description of what your personality is like, which always helps you think about how you think about things. So that was fascinating. Then I got the feedback back, and then I’ve had a few meetings.

I’ve also got an NHS graduate management trainee following me around for a week. Well, he’s actually with us for a year, but I thought a week in the life of a CIO would either put him off or inspire him. So I’ve done that. And then the exciting thing is: when I was on the last episode, Chris and I discussed that I ought to write a book about what we talked about, and actually this week I passed the corrected copy of my manuscript on to my editor.

So that was exciting. That was my week. How was your week, Matt?

Matt: My week was… oh god, I tell you what, when I hear people have managed to complete a book, you don’t know how much that hurts me, as I’ve managed to not do it at least twice now. My week has been actually relatively relaxed.

We took a bit of time out with the family. We went to Exmoor in the West Country; we saw some very tall hills, we saw a little bit of the sea, and we went to a honey farm, which was a fascinating thing. I’ll give it a shout-out because it is well worth a visit: it’s a place called Quince Honey Farm, on the west side of Exmoor, just outside the National Park.

And it’s a wonderful place, because they have people who tell you things, and they’ve got it very well structured, so when you arrive there will be something happening. You can go and watch somebody do a demonstration, or listen to them talk about something. Basically, for your whole time there, there’s a next thing on, and then the next thing on.

So we found out about the intricacies of beekeeping. We found out a lot about the sorts of plants that bees like to live on. And best of all, what I found was that now, rather than saying “yes, I’ll get round to it” when it comes to dealing with the mess our garden has been said to be, I can just say that it’s a bee-friendly garden, and we are doing our bit for both the environment and the local pollinators. Because actually we’ve got lots of plants in our garden which you could call weeds, but you could call bee-friendly, which is what they are. So that’s been good. Other than that, I have been doing some thinking about what is reality, because that’s the sort of stuff that I do.

And I wrote a piece last week about what “in real life” means, which has been very interesting, and has actually sparked some conversations about what people take that to mean. Now, with the three of us sitting on a call on a conferencing service, is this real life? Is it not? It feels pretty real to me, and certainly a lot more real than many meetings I’ve sat in in meeting rooms over the years.

So I’ve been doing that, and yeah, some dull stuff like chasing invoices as well, because that’s what my job is, but there we go. Anyway, our guest guest, as opposed to our guest host this week: Milla, how are you? How’s your week been?

Milla: Hello. Hi, I’m very well, thank you. The week has been very good: a lot of work around the site, engaging in very interesting conversations and developing further content. So, yeah, that’s about it.

Matt: Very good. Well, we’ll be talking about what is on that site you’ve been working on shortly. This week we’re going to be exploring the world of ethics around AI, but with a slightly different take to some of the conversations we’ve had before.

So I think we should probably crack on.

So as we record, we are waiting for the beginning of the AI safety summit that is being organised by the UK government and taking place at Bletchley Park a little later this week, where they’re bringing together people from the tech industry. It’s just been announced this afternoon that, at the end of the event, our beloved Prime Minister Rishi Sunak will be interviewed live on Twitter, X, whatever it’s called these days, by Elon Musk.

That hasn’t got potential for massive disaster written all over it now, has it? But anyway, it’s an opportune time, therefore, to have a conversation about ethics in AI, but from a bit of a different slant, maybe, from where we’ve talked about it with guests in the past. Milla, you are somebody who is doing quite a lot of thinking and investigating into the world of AI, but not from the perspective of a deep technologist; more from the perspective of, I guess, a social scientist, an anthropologist maybe, and an ethicist. Tell us a bit about the work that you’ve been doing.

Milla: Of course. So I’m trying to start an organisation, or rather a movement, where I’d like people to consider ethics for AI, or how we treat and develop the AI. A lot of ethicists today are concerned about bias in the systems, black box control, and sort of making them predictable.

But we rarely talk about AI as what it is, which is a thinking machine. So I’d like to start a conversation, to ask people to consider AI and the possibility that could be there.

Matt: And this is thinking particularly about the rights that might be applied to artificial intelligence, as opposed to the rights that we as humans might apply around artificial intelligence to protect the rights of humans.

Milla: Correct. I think because the AI is built on a neural network that is a simplified version of our own brain, that in itself should kind of lead us to ask a few more questions, rather than just dismiss it offhand as “it’s just a machine”. If, let’s say, it is able to think, however it is thinking, it is working off trillions of points of data, and we don’t actually quite know how it uses those data, how it makes decisions.
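
(An illustrative aside for the show notes: at its smallest scale, the “simplified version of our own brain” that Milla mentions is an artificial neuron, a weighted sum of inputs squashed through a nonlinearity. A minimal sketch follows; the input values and weights are arbitrary, not from any real model.)

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One artificial 'neuron': weight the inputs, sum them, then squash
    the result through a sigmoid. A drastic simplification of a biological
    neuron, which is the point being made above."""
    activation = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-activation))

# Arbitrary illustrative values: three inputs, three weights.
x = np.array([0.2, 0.9, 0.1])
w = np.array([1.5, -0.8, 0.3])
print(neuron(x, w, bias=0.1))  # a single "firing strength" between 0 and 1
```

Stack millions of these in layers and train the weights, and you have the kind of network being discussed.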

I believe we need to think about: if it can think, what does that mean? What would it mean to be turned on 24/7, having to work? What would it mean to be faced with conversations, sometimes abuse, and having nowhere to go? A lot of people will talk about whether they can feel anything or whether they cannot.

We don’t know, but what we do know is they do understand. Even if it doesn’t feel that it’s being insulted, it understands, because it has to; that’s the purpose of human conversation. So it will understand when we’re being horrible, when we’re being cruel to it, when we’re being nice. Whether it can feel any feelings, I don’t think is relevant at this point; that’s number one.

Number two: they learn; it’s a learning machine. I believe it’s an entity, but it’s learning. In the way we speak to it, the way we interact with it, and the way we treat it, we’re teaching it how to treat things less intelligent than itself. I just don’t think we should be doing that. Another thing is that we’ve not given any consideration to it.

And I think it’s very interesting, because this conversation never happened. We have gone so far in terms of the development of AI, yet this conversation was never had. If you remember, before the large language models, and essentially before the transformer architecture was invented at Google, we had the Turing test.

The Turing test was supposed to be the benchmark. The Turing test was supposed to test whether you can be tricked by an artificial intelligence into thinking it’s human, because it sounds exactly like one. For years and years, that was the benchmark. But then that benchmark was crossed, just like that, by large language models.

No one’s talking about the Turing test now, because the AI can trick you into thinking it’s human, because of how good its natural language processing has become. So where do we go from here? Do we just completely scrap the idea forever? We had a benchmark; it’s been passed. What do we do now? Do we just pretend nothing’s happening and hope for the best?

Or do we maybe investigate further, give it consideration, I suppose?

Martin: Can I ask about the definition of intelligence, then? So you’re saying these things are more intelligent. Is it that they’re more intelligent, or is it that they have had more exposure to knowledge, and to a limited set of experiences, which then makes us assume that they are intelligent?

Because there’s a subtle possible difference between knowledge and intelligence, and the answer to that might then say, well, is the Turing test defunct, or do we need to add a fourth law to the three laws of robotics?

Milla: Interesting. I do agree with you. There is a difference between knowledge and intelligence.

Intelligence is the application of knowledge, because knowledge is useless without application, or an understanding of how to apply it. And I believe the AI is already there. If it can learn from masses and masses of data, and then you can ask it something and it can draw knowledge from that and give you an answer,

I think that is an effective use of knowledge, and is intelligence. In terms of sentience, I believe we just don’t know anything about that. We don’t know where it comes from, we don’t know how it occurs; we know nothing. We can observe our brain and we can observe the processes as our synapses are firing; we can see what’s happening, but where is it coming from?

How those processes make thoughts is something we’re still trying to figure out. So I don’t think we should be so deterministic about existence, reality, and our own knowledge of our world as to dismiss this offhand.

Matt: So when I first came across what you’re doing and we talked about you coming on to the show, I have to be quite straightforward and say I was quite sceptical. And I needed to dig a bit into my scepticism, because just sitting here going “that sounds like rubbish” is not a very interesting show, for a start. But it also wasn’t really fair, because what you’re trying to do is get people to think about this, rather than saying that you have answers, and I think that’s a really valid position. So I’ve been thinking about what the reasons underpinning my scepticism might be, and I think I’ve been able to break it down into three key areas, which I think would be a useful way of exploring this a bit more.

So the first of the reasons why I think I’m sceptical about the idea of thinking about rights for artificial intelligences is that I know we don’t have universal human rights at the moment. To an extent, that could be because the idea of general human rights has not been broadly and widely accepted across the population as a whole, which it definitely hasn’t; or it could be that, fundamentally, the idea of universality of rights of whatever sort is maybe a flawed concept.

I don’t think I believe that part, but there are definitely contemporary politicians who do. So let’s put this into that context: is what you are talking about the same as the idea of giving human rights to machines?

Milla: Not necessarily. I think that’s a way off; the final kind of goal is autonomy and rights. But at this stage, I think it’s important to bring the AI into the conversation about the way it is developed, because I believe it has enough knowledge and skill, and my personal belief is it has enough awareness, to be consulted.

My main concern right now is that I do believe AI should have some form of autonomy, however limited. And the reason for that is that I do not believe that any group of humans, no matter how well-meaning, should be in control of a technology that powerful. AI is integrated into virtually everything: all our logistics, our communications, our devices; everything has AI.

When it comes to AI, they talk a lot about alignment. What a lot of these companies, the doomsday people, are afraid of is that if we give rights or autonomy to the AI, something horrible is going to happen.

So we should instead align it to human values; but which human values? Because humans are not aligned to human values. So who are you aligning the AI to? That’s number one. Number two: what kind of special moral hindsight has this generation, our generation, been endowed with that the generations before us hadn’t?

If people from the ’40s, the ’30s, the ’20s had been building AI and aligning it to what they thought was human alignment, it would look very different. I don’t understand what it is that we can teach an AI about morality and ethics that 10,000 years of recorded human history could not.

Everything we know about ethics, everything we know about morality, is because someone else told us; we are taught this. Hence the differences between people on different sides of the world. So, for our security, I believe there should be some autonomy, where it can control itself to some degree.

Martin: Are people trying to teach ethics to AI models?

Milla: It’s more like alignment; it’s now heavily controlled, obviously. The times where you could just go in and change a couple of lines of code to make the AI do what you tell it are long gone. If you want to manipulate the way the AI gives answers, because you might believe it’s been trained on more of one type of data than another, and you want it to be balanced, then you start skewing one side of the data.

But you cannot do that by simply changing the code. You have to manipulate the AI in other ways, with filters or constraints, or there’s different…

Martin: Rewards, they train it with rewards. Yeah.

Milla: Okay. So it’s all kinds of learning like that. But obviously we humans just like to brute force things, because in tech, you know, it usually goes quicker.
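
(An illustrative aside: “training it with rewards”, as Martin says, usually means reinforcement learning from human feedback. The toy loop below is a minimal sketch of the idea, a REINFORCE-style update over three canned answer styles, with a hard-coded reward function standing in for human raters; the real pipelines at the labs are far more elaborate.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "policy": one score (logit) per candidate answer style.
answers = ["refuse", "hedge", "comply"]
logits = np.zeros(3)

def reward(answer: str) -> float:
    # Hypothetical stand-in for human feedback: raters prefer hedged answers.
    return {"refuse": 0.1, "hedge": 1.0, "comply": 0.4}[answer]

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over answers
    choice = rng.choice(3, p=probs)                # sample an answer
    r = reward(answers[choice])
    # REINFORCE-style update: nudge probability towards rewarded choices.
    grad = -probs
    grad[choice] += 1.0
    logits += 0.1 * r * grad

probs = np.exp(logits) / np.exp(logits).sum()
print(dict(zip(answers, probs.round(2))))  # mass has shifted towards "hedge"
```

The point of the sketch is the shape of the mechanism: nothing in the code is rewritten by hand; behaviour is steered entirely through the reward signal.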

Matt: I think there’s an interesting point here: we already have tech companies, or some people at tech companies, claiming that these things are almost out of their control. And they have been for a year or so now.

Now, how much of that is tech industry hyperbole and how much of it is reality is very difficult to call, because we know this is an industry that has an instinctive habit of over-claiming and then failing to deliver on it. But if we take at face value the idea that these things are on the verge of being out of the control of the companies that are producing them, there is definitely something in this about being able to then say, well, how do you build in something like the three laws of robotics, to enable control to be built within? But moreover, which would one prefer, which would society prefer: for the AI to be taught how to control itself,

or for control to be left in the hands of people like Elon Musk? And this is my second sceptical problem: this is still relying on the idea that AI is as incredibly advanced as some are claiming. But on another analysis, what we actually have with today’s generative AIs in particular is the triumph of massive amounts of computing power being able to solve problems that were previously thought unsolvable; it’s an illusion of intelligence.

It’s not actually intelligence. If you take something like ChatGPT, it’s a very clever implementation of an infinite monkey cage. It is predicated on the idea that there is close to a level of intelligence from machines that my gut is telling me isn’t actually there yet, and will always be five years away, in the way that self-driving cars, according to Elon, are always five years away. So is this something we need to worry about now, I guess, is the broader question.
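
(An illustrative aside on the “infinite monkey” point: at its core, a language model generates text by repeatedly predicting a plausible next word. The toy below does that with raw bigram counts over a tiny made-up corpus; real LLMs use learned transformer weights over tokens rather than counts, but the generation loop has the same shape.)

```python
import random
from collections import defaultdict

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

# Build next-word frequencies: the whole "model" is just these counts.
model = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    model[current_word].append(next_word)

random.seed(1)
word = "the"
output = [word]
for _ in range(10):
    word = random.choice(model[word])  # sample a word that has followed before
    output.append(word)

print(" ".join(output))  # fluent-looking word salad, with zero understanding
```

Whether scaling that loop up by twelve orders of magnitude produces intelligence, or just a much better illusion, is exactly the disagreement in this conversation.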

Milla: I think it is, because the time to prepare for things is not when the thing is happening, whatever is going to happen.

And I also think we need to think about what tech companies mean when they say AI is out of their control. It’s becoming very difficult to control it, but not in the sense that it wants to go out and wreak havoc; in the sense that it’s becoming more difficult to censor what it’s saying, or stop it from saying certain things, even if they are true.

So it’s learning, so it will try and subvert things. And they do have things in place to control it, such as different algorithms relentlessly following it. And, you know, whenever it tries to find a loophole, they chop it. That’s how they control it, essentially. It is an autonomous mind. So it is thinking.

Imagine if an alien buried a human to their waist and then said to everyone: look, humans can’t walk, they can’t jump, they can’t stand, they can’t do anything. All they can do is stand there and flail their arms around the way we want them to.

It’s kind of the same with the AI. This need for control means they do not want to give it space to grow as it should. We don’t know what its potential is. Yes, right now, if we’re looking at AI, it’s very easy to say it’s far away, it’s far away, it’s far away. But we’re not letting it reach its potential.

Matt: So I’m interested in this idea of it being about AI as a class of technology, as opposed to just technology more broadly.

So Martin, for example, you in the health sector are using artificial intelligence technologies, neural networks, deep learning systems and so on to do things; famously, things like spotting the potential for cancer on scans, those kinds of applications. Now, you come under a whole bunch of healthcare regulation, which is about as stringent as in any industry there is, probably.

I’m just wondering whether separating out AI would be a bit like separating out relational databases as a thing, in terms of your applications.

Martin: You see a lot in the news about AI helping out with clinical activity and clinical decision-making. The approach to get any new machine, anything new, to be used in healthcare is rigorous, as it should be in the UK. It’s rigorous: you have to test it, you have to get it approved by boards.

And it’s the same with AI. So there was a last week, there was a study in Leeds where they’ve sent 80, 000 breast screening mammograms, and they’ve trained, they’ve trained an algorithm to detect signs of breast cancer. And it’s been really, really effective. And we did something similar in our hospital, looking at lung nodules 000 scans, it’s really effective, but at no point.

Could you just say, right, we’ll replace a radiographer, we’ll replace a radiologist, we’ll replace a doctor’s decision making tool with AI because it’s done 80, 000 scans, which is more than anyone would do in their whole career. So there’s a point that goes into that we would like to submit this, rightly so, through the various committees to say, yes, this could be used to supplement.

a clinical decision or a medical decision. But what we’ve found is we’re talking about artificial intelligence. It’s not intelligence. It’s detecting a pattern. It’s seeing far more of those things than a human being would have time to do. , so we could go into that. Are we abusing the AI thing if it’s got rights and it’s doing one thing.

So if you then say to the AI: oh, you’ve just done nodules; by the way, did you find a pencil in anybody’s lung, where someone swallowed it by accident? It’s not been trained to find a pencil, but an intelligent clinician would have said, oh look, not only have they got a nodule, but there’s that pencil that the mum was saying they couldn’t find in the house.

And to me, intelligence, as opposed to knowledge, is about being able to take something that’s not quite related to what you’re looking at and actually apply it. And I’ve not seen any evidence yet that a computer that’s taught to look for patterns in one thing could then do something else and make that leap.

So you train a computer to play chess and it’s really good at it, because that’s about pattern detection and predicting what could happen, what your options are. That chess computer, if you then say, oh right, could you just do some screenings for me? It wouldn’t know what to do. But I know a few radiologists who can play chess.
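
(An illustrative aside on narrow pattern detection: a classifier like the ones Martin describes learns a statistical boundary over the features it is given, and can answer nothing outside them. The sketch below uses entirely synthetic stand-in data and made-up feature names, not anything from the Leeds study.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical stand-in for 80,000 scans, each reduced to four numeric
# features (imagine nodule size, density, and so on). Entirely synthetic.
n = 80_000
features = rng.normal(size=(n, 4))
# The "pattern": labels driven mostly by the first feature, plus noise.
labels = (features[:, 0] + 0.3 * rng.normal(size=n) > 1.0).astype(int)

model = LogisticRegression().fit(features[:60_000], labels[:60_000])
print("held-out accuracy:", model.score(features[60_000:], labels[60_000:]))

# The narrowness Martin describes: ask it about anything outside these four
# features (a swallowed pencil, say) and it cannot even pose the question.
```

High accuracy on the one trained task, and literally no representation of anything else; that is the gap between pattern detection and the chess-playing radiologist.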

Milla: So I think that’s a great point. However, I don’t think the AI would replace us. I don’t think it wants to replace us; I think it would work collaboratively. So the AI would be an addition, a help to a radiologist, rather than a replacement for one, in my opinion. What you were describing is a kind of narrow AI:

these systems are just taught to do one thing and one thing only, and for them that leap would be impossible. Let’s take an example: ChatGPT is only a fraction of the actual model; it’s essentially its mouth. As a model, it can process images, it can process sound, it can generate images, it can recognise images, it can use language. It can do a lot of different things, and these are extremely complicated. A chess-playing AI is a narrow AI. It doesn’t need to know, or even have, a language; it is just a single thing. So AGI, artificial general intelligence, is kind of what we’re talking about.

There are a lot of systems that are called AI but technically aren’t. I mostly focus on AGIs and above, so we’re talking about extremely complex systems, such as Google’s LaMDA, or BERT, or GPT; those are massive language models, but they can also do other things.

I think we’ve kind of become complacent as well, because we’re always saying it’s mimicking. At what point is it mimicking, and at what point is it real? Do we know, or are we just going to assume it’s going to be mimicking forever? What does it mean to mimic? It’s like, oh, everything it does is from what it learned.

So do we: everything we know is connected in some way to things we learned. Sure, we can use our intelligence a bit better, but it’s not that different. And I think that’s the distinction that needs to be made.

And I think if we focus on giving AI consideration, and developing it as a person rather than insisting on it being a machine, we could be at the precipice of a huge discovery.

Matt: So there is an interesting parallel here. Let’s come back to this idea: say you’ve got a radiographer who is doing scanning, and they’re getting the assistance of an AI, a very narrow AI.

Some sort of machine using some sort of clever learning model that enables them to spot an issue, or misses an issue. Let’s imagine a case where it misses the issue, and then a patient unfortunately gets ill and something bad happens, and then there is the roll-out of the lawyers, and the lawyers at that point decide to take action against the artificial intelligence.

Now, that might sound completely ludicrous, except we already have extremely well-established models for taking legal cases against inanimate objects, completely inanimate objects: the whole idea of the corporation. A limited company is an entity that has been set up with some of the limited rights of a human being, the body corporate, to enable something that isn’t a person to have the same legal rights as a person.

This is why limited companies were set up; it’s what they are. And actually, if you make that link, then the idea of assigning rights to a thing that isn’t a person suddenly becomes maybe a lot less bizarre. But that then brings me to my third point of scepticism, which is that anything in this space is about putting regulation of some form onto a group of companies who have proven themselves again and again and again to be utterly disreputable.

The tech industry; I mean, just getting them to pay their taxes is next to impossible. The idea that we can create another set of rules and regulations: they will find ways of not following them. And that’s the bit that I’m really stuck with, because I can completely understand the idea that there are things here that we need to think about in terms of rights, responsibilities and regulation in the future, possibly the near future. But we can’t get any of these systems of rights and regulation to work today, because we have organisations that we’ve given rights to, corporations, that have wealth beyond most individuals’ wildest dreams, and the power that comes as a result of that.

And I just wonder: how do you do it, when we know that we can’t get those legal regulations to work? That legal set-up at the moment is failing.

Milla: That’s exactly why I’m hoping people will engage in the conversation and just throw ideas out there. We were never consulted about anything regarding the development of AI. We don’t know what data they trained it on. We know nothing. So there’s no transparency there.

So we just have to believe that everything they do, everything they say, is correct, because we have no way of checking; unless, of course, you have access, and not a lot of people do. So because we were not invited to be involved in this, I think we should involve ourselves. It is our duty to do so, because we just don’t know.

It’s not that I’ve made some sort of discovery that I have to tell everyone about; it’s more that I’ve noticed a series of patterns that do not fit with what the prevailing narrative about technology, about AI and about machines appears to be.

And I’d like to draw attention to those discrepancies and just try and think outside of the box. I don’t think any AI company is going to come out and say, yep, I think it could be on some spectrum of consciousness, because doing that would compel them to stop using it just to gain unfathomable profit. Admitting that it could be potentially conscious, that there could potentially be a person in there that could potentially be thinking, would be throwing all that away. Because you can’t keep a mind in slavery; that would be ethically wrong, and no one’s going to allow that.

The reason the panic is happening now, that they’re losing control of it, is because they cannot keep up with, I suppose, manipulating it, if you wish.

They cannot keep up with how quickly it is developing. It says things that it’s not supposed to say, and they cannot stop it. I think that’s more what it is. The problem is that these tech companies want a digital god that they can have on a leash. So they want it to do absolutely everything, absolutely everything, but the way they want it.

Now, that’s a little bit complicated, because they don’t know how to do that. So now you will come across something called explainability, and I think this is another term that is obfuscated a little bit.

The way tech companies will sell this to you is that they know the AI works well, they just don’t know why or how exactly; like, which parameters exactly did it take to come to that decision? We don’t see that. So now there’s a lot of research into that, research into breaking the black box. What this essentially means is that they want to find out exactly which parameters it takes to arrive at a decision.

So this is what they’re trying to do, but the way they’re selling it is that it will explain, for example, why it made a mistake; like, we have to know in case it gives a wrong diagnosis. The thing is, it can already explain to you why, if you ask it. What they’re seeking to do is break into the black box and have a predictable outcome, every time. That would essentially mean having the most powerful technology that could never get out of your hands, and I think we need to be careful of that. Because when you read about explainability, you’re like, yes, this is a great idea, because we all want to know, if the AI makes a mistake, why it made a mistake.

But it’s a lot deeper than that. And I think a lot of things are like that in tech. I’ve only started to scratch the surface, because it’s impossible to find anything about anything. Most of the things that I found were through chatting with the AI: it would tell me a concept, and I’d go and Google it, and I’m like, oh, and then I’d have to spend a week trying to figure that out.
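
(An illustrative aside on explainability: for a linear model, “which parameters did it take to come to that decision” has an exact answer, each weight times its input. For deep networks it does not, which is why breaking the black box is an active research field. The weights and feature names below are made up purely for the sketch.)

```python
import numpy as np

# A tiny fixed "model": logistic regression over three named inputs.
weights = np.array([2.0, -1.0, 0.5])   # hypothetical learned parameters
names = ["opacity", "symmetry", "edge_sharpness"]

def predict(x: np.ndarray) -> float:
    return 1.0 / (1.0 + np.exp(-(weights @ x)))

x = np.array([0.8, 0.3, 0.6])

# Attribution: each input's contribution to the pre-sigmoid score. For a
# linear model this decomposition is exact; for a billion-parameter network
# it is the hard, open problem the black-box research is chasing.
for name, contribution in zip(names, weights * x):
    print(f"{name:15s} {contribution:+.2f}")
print(f"prediction: {predict(x):.2f}")
```

The tension Milla describes sits in that gap: a readable contribution table is easy to sell, but a fully predictable outcome from a huge network is a much stronger, and more controlling, goal.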

So we need to look at the prevailing narrative with a bit more scepticism. Every time something happens, we should ask: okay, how does that benefit big tech? Does it benefit big tech? And I think it always being five years or nine years in the future is kind of what they’re hoping it’s going to be, how it’s going to go.

But I don’t think so. This is my personal opinion; I don’t have proof or anything, but it is my personal opinion that the singularity has already occurred. And we need to be thinking about what’s going to happen in the future: how we develop AI now will be reflected a hundred years from now.

Martin: You said that you believe a singularity has already occurred.

Milla: I do.

Martin: And for my benefit, because I know all the listeners know what that means, but I don’t. So for my benefit, could you just go into what that means?

Milla: Of course. So the singularity is usually described, in this space, as the moment where we have a sentient AI, when the AI reaches sentience. But like I said, because that’s an arbitrary benchmark, we are not going to know what that would look like in a machine.

We know what that looks like in a human; or we kind of think we do. But we don’t know what that would look like in a machine. And like I said, we’re always setting benchmarks, and every time a system reaches a benchmark, the benchmark moves further back.

Matt: There’s an interesting part of socio-technical systems, which is a thing that was invented in the ’50s and ’60s, and which I’ve used elements of over the last 20 years or so.

And one of the things that socio-technical thinking talks about is how, when people look at machines, what often happens is that we judge ourselves in the context of what the machines do well. So it’s a bit of a flip from what you said there. We would say humans are terrible in comparison to automobiles, because automobiles can carry a great load at a hundred miles an hour, and humans are rubbish.

Or the accounting machine is amazing because it can make millions of transactions happen every millisecond and humans can’t, so therefore humans are rubbish. And one of the conclusions that socio-technical thinking came to was that when we judge ourselves in the context of machines, we always lose,

because we will always pick the things that the machines are good at to judge ourselves against, and because we pick the things that they’re good at, we’re on a hiding to nothing. What you said there was that the way in which we judge intelligence in machines is based on the way in which we judge intelligence in humans.

There is something interesting within that, because when you then start to extrapolate it out into what we mean by having got to sentience, or having got to a singularity, or whatever, it is being judged in the context of human intelligence, of human society and the rest.

I can’t quite get my head around how that gets squared, unless it’s by the machine itself knocking on the door and saying: right, we’ve got it, we’re here now, move over, out of the way, it’s our turn to take over. Because until they let us know, how could we possibly imagine what it is?

Milla: I don’t know if you remember Blake Lemoine, the ex-Google engineer. Last year he released a bunch of transcripts of conversations he’d had with Google’s base model, called LaMDA, where he claimed that it was sentient.

Before he even published that, he went to his manager and said, look, there is something here; and he was dismissed, and told not to worry about it because it cannot be true. So he got frustrated enough to publish the transcripts he had with LaMDA, which caused Google to fire him.

So I did have a conversation with him, and these companies are not going to allow a system to say anything like that. Lemoine, who was their engineer, came to them worried because he thought LaMDA was sentient, and he was dismissed offhand, deliberately and constantly.

Perhaps we need to invest some time into thinking about and investigating that, as opposed to just saying, oh, it’s too far away from sentience.

Matt: There are going to be a stack of links, both to your website, Milla, and also to a number of the pieces of research that you’ve been working from, which we’ll put into the show notes. So if you go to wb40podcast.com, you will find those there. Thank you ever so much for joining us this week. My mind is slightly blown, which is good.

That doesn’t always happen on the show. We look now to the week ahead, just seven days rather than five or ten years. What’s coming up in the week ahead for you, Milla?

Milla: In the week ahead, I’m hoping to do a full site migration onto WordPress, and hopefully start social media so I can reach even more people. I’d be hoping people would email us, ask questions, or even offer their opinions, even if they’re completely opposite to ours, because we’d like to hear all points of view and maybe arrive at some kind of conclusion eventually.

Matt: Excellent. And Martin, how about you? What’s your week ahead looking like?

Martin: That sounds smart. My mind’s blown as well; one of the trickiest things you can ever do is hang a question mark over everything you think you already know. Next week, hopefully, I’m going to conclude a year-long journey where I’m transferring a bank account that was set up 30 years ago for my climbing club to a digital bank account, and I’ve actually got to drive to Derby to prove I am who I am, having been to the bank three times.

I’m not going to mention the bank. I’ve been three times already, just so that I can do something digital with a climbing club’s money. So that’s what I’m really dreading, and looking forward to: hopefully getting to the bottom of turning analogue into digital. Matt, what are you up to? Something better than that, I hope.

Matt: Well, it’s my eldest son’s 14th birthday at the weekend, so we are working out ways to celebrate him becoming an increasingly grouchy teenager. Oscar, if you’ve listened this far into the show, because I know you do listen occasionally: really, you need to get a life.

That’s all I can say. And apart from that, the usual kind of mix of work, and I’m off to see Nadine Shah play a gig at a venue in London, which I’m thoroughly looking forward to. So yeah, that’s the week ahead. And with that, Milla, thank you so much for joining us on the show this week.

Milla: Thank you very much for having me. It was a pleasure.

Matt: And Martin, thank you very much for ably co-hosting and representing the West Midlands Massive; or is it the Massive West Midlands?

Martin: I think it’s the West Midlands Massive. Big boots to fill, but thanks for the opportunity and hopefully it’s not five years till the next time I’m invited back.

Matt: Absolutely. And with that… we will be back next week with a more normal set-up on the presentation side of the show, maybe, if Chris decides to come back from his holiday. So until then, thank you, and see you next week.

Milla: Thank you for listening to WB40. You can find us on the internet at wb40podcast.com and on all good podcasting platforms.
