Leading People
Gerry Murray talks to leading people about leading people. Get insights and tips from thought leaders about how to bring out the best in yourself and others.
How AI Can Help You Resolve Your Conflicts
Could AI help us end conflict — at work, at home, or even at a global level?
In this episode, Gerry Murray talks with negotiation expert and author Simon Horton about his provocative new book, The End of Conflict.
We explore how AI is already influencing negotiation and collaboration, what it gets right (and wrong), and how a more symbiotic Human–AI relationship could change the way we resolve disagreements of every kind.
If you’re curious about the future of negotiation, the role of AI in reducing conflict, and what this means for leaders today, this conversation is worth your time.
Connect with Simon:
Website
LinkedIn
Follow
Leading People on LinkedIn
Leading People on Facebook
Connect with Gerry
Website
LinkedIn
Wide Circle
Welcome to Leading People with me, Gerry Murray. This is the podcast for leaders and HR decision makers who want to bring out the best in themselves and others. Every other week, I sit down with leading authors, researchers, and practitioners for deep dive conversations about the strategies, insights, and tools that drive personal and organizational success. And in between, I bring you One Simple Thing: short episodes that deliver practical insights and tips for immediate use. Whether you're here for useful tools or thought-provoking ideas, Leading People is your guide to better leadership. If you're a regular listener to Leading People, you'll know that I've been exploring the topic of AI quite extensively, particularly in the short One Simple Thing episodes, and more recently through that deep dive conversation with Dr. John Finn on how to train your brain for this new AI era. My guest today is Simon Horton, who's been examining an intriguing question: how could AI help us end conflicts, whether on a global scale, inside our organizations, or in our personal lives? We go into this in some depth, and what keeps emerging is the potential for a genuinely symbiotic relationship between humans and AI. If we can harness that and use it for the greater good, well, the possibilities are remarkable. So let's get into it. Here's Simon Horton talking about his new book, The End of Conflict. Simon Horton, welcome back to Leading People.
SPEAKER_00:Hi Gerry, how are you doing? Great to be back.
SPEAKER_02:We'll have to stop meeting this way.
SPEAKER_00:People are talking about it.
SPEAKER_02:People were talking about it because you got a lot of downloads on the previous episodes, so I'm hoping for the same again. So you've been a guest twice before; every couple of years you write a book, so you'll have to stop writing books, or else I'll have to keep having you on the podcast. So the first time you were on, we were exploring negotiation. You'd written a few books on negotiation as just really a part of life, and then later you came on with your great book, Change Their Mind, which was largely about influencing and convincing people. And today we're here to talk about something, let's call it an evolution from those books. But for those who don't know you yet, could you briefly share the journey that's brought you from your background in IT and AI, even way back in the day, through negotiation and collaboration to this new, and dare I say, provocative work on the end of conflict?
SPEAKER_00:Yeah, yeah, absolutely. So I've been teaching negotiation skills and collaboration and conflict resolution for the last 20 years or so, working across sectors, a visiting lecturer at Imperial College, taught at Saïd Business School. And as you say, I've written a few books on the topic. But my first career was in IT. I spent 12 years in the city designing derivatives trading systems and money market trading systems and that kind of stuff. And I got into AI, the big buzzword of the day, in 1989, but just at a kind of a micro level. But my interest grew with Moore's Law. And by 2005, Ray Kurzweil brought out a book called The Singularity Is Near, which was a really amazing book, and it blew my mind. So basically, everything that we're going through now in terms of AI, etc., it predicted 20 years ago and it put dates to it, and it predicted a whole load more as well. And I read this 20 years ago and thought, oh my god, this is amazing. If this is what we're going to go through, and it was very credibly written, then I've got to watch this space. So ever since then, I've been obsessed with AI. So if my field is negotiation and conflict resolution, and my background is IT and I'm obsessed with AI, and I write books, you might expect me to write a book on AI and negotiation. And so that is what I did. And thank you, Gerry. You're better at marketing than me because you've got a copy of my book on display over your shoulder, The End of Conflict. I don't even have a copy with me at the moment. So it's about AI and negotiation and conflict resolution, and the amazing things that are already happening in that field. When I looked into it in depth a couple of years back, it really blew my mind what was going on, and what is likely to be going on moving forward as well.
SPEAKER_02:Okay. So I suppose the first thing out there is, for all those people who thought they discovered AI when they found ChatGPT two or three years ago, they're probably all feeling very disappointed now to know that its origins date way, way back, to 1989. Some of the people listening to this podcast might not have even been born at that point. And maybe we'll get into some more of those predictions later on, because that could be something that we could explore when we get into the topic in a bit more depth. Let's just talk a bit about the book now itself. It's called The End of Conflict: How AI Will End War and Help Us Get On Better. Wow. So, what's the central idea behind it, and what inspired you to believe that technology, especially AI, might actually help humans get on better rather than divide us further?
SPEAKER_00:Well, firstly, to say that there is the potential for it to divide us further, too. AI is a tool and it could go the wrong way. The title is The End of Conflict. Well, it could accelerate conflict. It all depends on how we use it. And so that was one of the main inspirations behind the book. That AI is this tremendously powerful tool, and it can be used for good and it can be used for bad. And unfortunately, I think our current trajectory is going down some not so good routes. And so that's why I wanted to write the book: to spread the word that there is a better way of using it, in order for us to use it that way. So the core premise is that AI is this tremendously powerful tool. It can model human best practice and then make that best practice more widely available. And it can do this with things like playing chess or radiography; these are some of the famous examples. Well, it can do it with conflict resolution as well. And humans do know best practice when it comes to conflict resolution. You just have to look at examples like Northern Ireland or South Africa or Colombia, or France and Germany, even, that were at war for hundreds of years and now, you know, absolute best of friends, kind of thing. So we do know how to do it as a species, but unfortunately, that knowledge isn't widely spread. But AI can model that best practice and then make it more widely available. And so we can then tap into that as a resource to use to resolve our conflicts, or to even avoid, preempt, our conflicts. And I'll give you a couple of examples, one quite simple and one very, very powerful and very, very hopeful. So, you know, at its simplest, ChatGPT can help you with your negotiations. So, you know, if you've got a meeting with the FD coming up or something like that, and you're thinking, hmm, I need some more budget, but I bet you they're not going to give it to me.
How should I go about this? And you type the details into ChatGPT, and it's going to give you some pretty decent advice. It's not bad at all. And obviously, the more details you put into it, the better advice you're going to get, and you can ask follow-up questions and so on. And by the way, I've got a better version of that on my website. I've trained a bot. If you go to my website, which is theendofconflict.ai, on the bottom right-hand corner, there's a little web chat icon. Click on that, up pops a window, and that's a negotiation bot. You can ask it questions and it will give you advice. Now that bot has been trained on my material, and so it's been trained on best practice negotiation material; it's got special system prompts, etc. So it's much more specialized in negotiation expertise. But here's the thing: if you use that once for an upcoming meeting, for example, and you think, oh, actually that's quite useful, then the next time you have a meeting, you'll probably use it again, and you go, it gave me some good advice last time, I'll try it again. And then the next time, and the next time, and you'll start doing this as a habit. But after a while, you won't need to go to it, because you'll know what it's gonna tell you. If it keeps giving you this good advice, after a while you'll pick it up and you'll learn the best advice from it. And, sorry, it won't just be in that context with the finance director or whatever that you'll be using it; you'll be taking it out into other conversations, other conflicts, other negotiations and collaborations. You'll take it out into the world outside of work. And if other people start doing this, now we're just beginning to build a very, very harmonious, collaborative world.
And whereas AI learned its methods from us humans, well, we can learn those same best practices back from the AI as AI becomes more embedded uh in the world, in our background processes.
SPEAKER_02:On Leading People, the goal is to bring you cutting-edge thought leadership from many of the leading thinkers and practitioners in leadership today. Each guest shares their insights, wisdom, and practical advice so we can all get better at bringing out the best in ourselves and others. Please subscribe wherever you get your podcasts and share a link with friends, family, and colleagues. And stay informed by joining our Leading People LinkedIn community of HR leaders and talent professionals. I've been pretty experimental and to a large degree enthusiastic about the potential of AI. And of course, I've read your book, and I have also been working with AI in the context that you talk about, and I hadn't actually really made that connection. You know, the aha moment there is that it becomes a tool to teach us, because we teach it, we feed it best practice, we interact with it, and then it becomes like our best personal advisor, and, as you say, is able to teach us. So, therefore, collectively, it would strike me that in a corporate or organizational context, if people pool that into a more central kind of AI support tool, they're going to get a very deep and very rich learning support.
SPEAKER_00:Massively, massively, totally agree with that. Now, Gerry, do you mind if I come back to that? Because there was another example that I wanted to give from the book, which will also feed into the work example as well. So before we get into the work context, which I'm sure we'll go into in depth, the other example was: one of the very first people I interviewed for the book was a guy called Colin Irwin. He's a research fellow at Liverpool University, and he does something called peace polling. And he was involved in the Northern Ireland Good Friday Agreement. And what he did then, basically, him and his team would walk the streets with their clipboards, walk down the high street, stop people as they were shopping, ask them a whole load of questions, and ask them basically what they wanted to see in the agreement. Yeah, and he would then ask them some demographic questions: age, gender, religion. And he would then capture all of this information and feed it back to the negotiators. And the negotiators would be able to see, ah, so the Catholics really want this, they'd accept this, but they'd never accept that. Okay. And the Protestants really want this, they'd accept this, but they'd never accept that. Right. I think we can see where the deal is going to be. And they could see from that information what the deal is likely to be. And he did this with every issue. And he would then go back onto the streets with the next one and so on. And in the end, the negotiators were able to come up with an agreement that they were confident, when it went to a referendum, would be supported by the people. And it turns out that this community input is really, really important to peace agreements working. So, as a counterexample, the Oslo Accords, round about the same time, give or take a few years, the negotiators there struck a very good deal.
Everybody in the room was very happy with the outcome of the Oslo Accords. Unfortunately, because it was done in secret, they didn't include the communities, and they didn't get any community input. So then, when it was put to the communities, they didn't buy into it. And so it died a death. And that's, you know, obviously, in retrospect, quite tragic. These peace agreements require the community input. Now, the problem with that is that it's quite difficult scaling these things. So there are AI-supported platforms now, in a field called deliberative AI, which is a growing field, that enable these kinds of conversations to be had at scale, cheaply and quickly, reaching nuanced agreements across thousands of people, even on divisive topics. So, Colin Irwin, basically, he's still doing the same around the world, but he uses platforms like this. And so, for example, in Libya in 2020, at the end of the civil war, there was a ceasefire. To be honest, nobody expected the ceasefire to last long. There were all kinds of gangs and warlords and tribal gangs, all of whom were armed to the teeth and all of whom hated each other and wanted revenge. They tried to form a government of national unity. And nobody expected this to happen whatsoever. But what they did is they conducted a conversation, using one of these AI platforms, between a thousand people, randomly selected but representative of the demography of the country, and those thousand people had a conversation about what they wanted to see in the government of national unity. The conversation took two hours. It was conducted live on national TV, and a third of the country watched it. And whereas social media is optimized for disagreement, basically, these platforms are optimized for agreement.
And they came to an agreement in the two hours on what they wanted the government of national unity to be. And because it was watched and observed by a third of the country and everybody felt part of that process, five years on, that government of national unity still exists. I'm not saying it's perfect in Libya, but it's infinitely better than it was before. And that's very much part of this AI process that enables conversations to be held at scale, quickly, between thousands of people, even on divisive topics, and to find nuanced agreements. Really, really powerful and really, really hopeful.
SPEAKER_02:Yeah. And of course, some of the people out there thinking about this would instantly identify this concept that is, you know, quite well known in corporate or organizational circles around stakeholder buy-in to change. And I come from a country, Ireland, although I don't live there anymore, which in the last 10 to 15 years has transformed itself from the country I knew as a young man, and certainly the country my parents grew up in. And one of the key things they adopted was this notion of the citizens' assembly, to first get input from a cross-section of the population into issues that could never get debated, could never get anywhere in the past. They just created division in society. And this is for me very encouraging: that now you can actually scale that up, and that there's evidence out there to demonstrate that it actually is an important and valuable part of attempting to get agreements. And we know at the present moment there are many conflicts around the world where perhaps, hopefully, they might start to consider some of these tools. And having said that, let's get to the workplace. Let's just take the stakeholder concept now, and we touched on it earlier a little bit, about how maybe teams of people could start to harness the power of AI in the workplace. You've long argued that better collaboration leads to better results, so what might the end of conflict look like in the workplace? And how could AI-supported collaboration actually improve organizational performance or relationships between teams?
SPEAKER_00:Yeah, yeah. So, well, it's interesting actually. Just last week I came across a figure. I was talking to a very senior executive at ACAS, and they told me that their research shows the UK economy loses 28 billion pounds a year because of conflict within organizations. And personally, I think that's an underestimate. I think that's probably based on formal disputes that go to ACAS, but I suspect there's an awful lot more lost because of the micro disputes, the hidden disputes that we don't really pick up on.
SPEAKER_02:Can I just ask, for our non-UK listeners, what ACAS is? Can you explain briefly what that is?
SPEAKER_00:Yeah, sorry. What does it stand for? It's the Advisory, Conciliation and Arbitration Service. It's a governmental body set up for resolving business disputes rather than going to court, a classic one being trade unions and employers, who will always go through ACAS, kind of thing. Okay.
SPEAKER_02:Sorry, I interrupted your flow there. Sorry.
SPEAKER_00:No, that's okay, that's okay. Thanks for the clarification. So, I do believe that an organization is built on its collaborations; it's built on the micro negotiations that take place every day. I think every meeting you have is a kind of a negotiation; every conversation with your colleague at the coffee machine is a kind of negotiation. And the organization that can conduct those interactions, those micro negotiations, quickly and smoothly to a win-win outcome, that is going to be a high-performing organization. And unfortunately, most organizations can't conduct these well. So traditionally, if they're aware of this, the approach, the intervention, is: let's get in a negotiations trainer or a collaboration skills trainer or something like that. And they put everybody through a one-day program, if you're lucky, or a half-day program if they haven't got the budget or whatever. And this is kind of my bread and butter, my work, but even I would be one of the first people to say: what can you expect from a day's work or a half day's work? There's gonna be some improvement, some effect, but you're not gonna have a deep cultural transformation on the back of that. But the follow-on training or follow-on coaching that's gonna be required to really embed these skills and attitudes, so that you do get the deep cultural transformation, is typically gonna be too expensive. So I believe the organization perhaps might run the program, but then, rather than have follow-up one-to-one coaching, which will be expensive, it uses the bot as its negotiation advisor. And so everybody can have access to that. And everybody at work thinks, hmm, got a meeting with my finance director. What did that guy say? Oh, he said, yeah, use the bot. Okay, I'll give that a go.
And they'll ask the question of the bot, and the bot will give good advice, and we'll go through that process that we just discussed. And then slowly, or maybe even quickly, that organization will become a much more collaborative organization, and as such, will be much higher performing. Then there's the other thing that you talked about: employee engagement, the many-to-many negotiations that we were talking about with the deliberative AI, that we talked about with scaling up citizens' assemblies, that kind of stuff. Well, again, as you said, we all know, and we've known for a long time, that if our staff are fully bought into a policy because they feel they have contributed to that policy, that they feel some kind of ownership of it, well, that staff, that workforce, is going to be much more motivated. But typically the forms of employee engagement we've used in the past have been things like town halls, which generally means the loudest get heard and nobody else, or some kind of employee survey with a set of multiple choice questions. The questions are never really the right questions, and the multiple choice answers aren't really the right answers either. So it's always been a little bit underwhelming. But those platforms that we were talking about can be used in the employee engagement situation, in that kind of context. So it really can be all employees contributing, and all contributing fully, in free text form. You can imagine, typically, if it's a free text input, it almost instantly becomes unmanageable because of the amount of information that needs to be consolidated and reconciled. Well, AI can do all of that, so we really can have thousands of people putting in their full opinion, their nuanced opinion, and then the AI making sense of that and coming up with an agreement that, let's say, two-thirds of the people support.
But what's interesting with these processes is that when people go through them, even the dissenters, the people who would have voted against it, end up supporting it, because they say: this was a very fair process. I like this process, I was listened to, I did contribute to it, I was heard. I'm gonna support this outcome, even though I would have preferred something different. So really, really powerful. Then we've got our staff fully behind whatever policy it is, and they're gonna make it happen.
SPEAKER_02:Yeah. That is probably a fantastic example of how the AI-based tool can save people a huge amount of time. And there's a couple of things coming out of what you just said there which I'd like to go a little bit deeper into with you in a second. However, the point about the surveys, you know, they tend to be fixed questions and fixed answer choices, and this idea that you can just get people to give freeform answers, right? And do something meaningful with it. I want to share a little example from this summer, when we were in the United States and we were traveling to visit some good friends of ours. And there's a particular, what would you call it, bio food, you know, one of these natural food supermarkets which has lots of great produce in it, and it's quite a drive from their house. And we got a text message saying, could you stop by the store and could you perhaps get us some of these? Oh, and then by the way, maybe we'd like a little bit of this and a little bit of that, and it was one of these messages with bits and pieces of quantities and everything else and other phrases in there. And we were going into the store and we were looking at this message, going, oh my god, how are we going to decipher this? And, luckily, I have got late-teenage girls, kids, and they said, just drop it into GPT and ask it to turn it into a shopping list for you. Brilliant. It not only did that, but it organized it by category within seconds, and we were able to walk around the store: in the veggie place, get the veggies; in the deli place, get the deli stuff. And so what looked like a sort of mess turned into a very structured and very beneficial experience for us.
So, I mean, I'm not sure if people are starting to explore this, but this is certainly a brilliant example of, you know, don't get hung up on the fact that you have to administer these very fixed-question and fixed-answer type surveys. Now it's possible to have something alternative. But more importantly, what I wanted to get at was: some of the listeners out there might be thinking, well, this all sounds a bit utopian, you know. And what about the human? Okay, so I have my bot, and I just talk to my bot, and my bot's going to advise me. What if the bot doesn't recognize the human factors at play here? And I think you have some quite strong arguments to make around how this still remains human. So I'll let you go with that.
SPEAKER_00:Yeah, so look, again, it goes back to that thing of it being a tool, and it's an early tool, in its early stages of development. So as we move forward, we've always got to be very careful about how we use it. In terms of the human benefit, human welfare, I think we've absolutely got to make sure we use this for human welfare and for everybody. And I'm saying human as in the human race, rather than just the tech bros, kind of thing. You know, we really have got to make sure that we do use it for everybody's benefit. I think keeping humans in the loop is really important. Any process that you're doing using AI right now, absolutely keep humans in the loop: A, for that human factor, but B, because it's still an early technology and it can go wrong, and it can still give you completely wrong advice unless you check it. And so, you know, I really encourage people to use the advice sensibly, with a questioning eye on it. Don't take everything that it says for granted. You will definitely come unstuck. I wouldn't like to see your recipes if you just took the AI's advice without actually questioning it. So definitely always keep humans in the loop, always consider the human benefit and potentially human non-benefits, and always double check any answer it comes up with.
SPEAKER_02:Yeah, I would concur with that. I have created a whole prompt sequence for helping me produce this podcast, and generally speaking, it's pretty effective, very reliable, and I've caught it out now a few times doing stupid things. And the only great thing about it is you can tell it it has done stupid things and it won't get upset with you too much. Well, we'll find out how it progresses in the future. And it is true that if I was just to take it for granted that everything it suggests to me is correct, then this podcast would probably come out back to front sometimes. But at least I check it and I say, this doesn't look right, the same way you would in real life. But one of the more interesting things I've found, or more important things I've found, is, and you can comment on this in a minute, even if you're finding the AI is quite useful, both in terms of efficiency and effectiveness, so it's helping you address the most important things, but it's also helping you do it very, very efficiently, I have found that the kind of leverage variable with AI, and I did a previous short podcast on this, was, for me, context is really, really critical. And a lot of people, I have found, are, you know, going to AI and saying, but I asked it this and it gave me stuff back that I didn't like or I didn't think made sense. I have found that it works if you contextualize things quite well. So don't just go in there and say, I want you to help me with negotiation, I've got to get this price down or I've got to get a better price, what can I do? Because that's not going to lead anywhere. You have to explain the context. And what I wanted to say on the context is, I always still apply those basic principles of negotiation. What's in it for them? What's in it for the other person? What about stepping into their shoes?
When I teach this stuff in the classroom, and you've done it yourself, this is probably the bit that challenges people: how can I see things from the other person's point of view in a way that then helps me also get my needs met? Maybe we can find a way to get both parties' needs met in such a way that we can walk out of a room feeling that we got something mutually beneficial. So maybe talk a little bit about this area of, you know, how do you actually teach and instruct AI to be more human, and to be more reliable?
SPEAKER_00:So the quality of the prompt is so important. And as you say, you can say, oh, I've got a negotiation, can you help me? And it'll tell you one thing, but that's not necessarily going to be that helpful. Whereas if you say, oh, I've got a negotiation with my finance director later on this afternoon, this is the situation, and this is the backdrop, and this is what I'd like to achieve from it. And then you put in as much as you can; I end up writing quite long prompts to give it that information, that context. And the more context that you can give it, the more likely it's going to give you an accurate and relevant answer. And then, of course, as we were saying, don't take that answer as God-given, but question it. You can say, hmm, you mentioned this, what do you mean by that? Or you can even say, you just suggested for me to say it in this way; that's really great, but I'm not comfortable I could do that. Can you give me different examples of phrasing that kind of thing? And so, in other words, you keep it as an ongoing conversation with it. It's not just a one-shot go. If you are worried about it hallucinating, strangely, saying things like, please do not hallucinate, or, take your time and think about this, makes it less likely to hallucinate and more likely to give you a thought-through answer. You can set up your own system prompt. So I think I'm right in saying, in ChatGPT and most of them, there's a place in your settings where you can say: add this onto every single question that you ask. So, in other words, you can give it a persona of, I want you to be a thought-through, reliable person who doesn't make things up, that kind of stuff. And all of that will just help you get better answers.
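For listeners who build tools on top of these models, Simon's advice here, a standing system prompt plus a context-rich user prompt, maps directly onto how chat APIs are structured. The sketch below uses the OpenAI-style messages format in Python; the helper function, the prompt wording, and the model name are illustrative assumptions, not taken from the episode or from Simon's bot.

```python
# Sketch: turning "situation, backdrop, goal" into a context-rich prompt,
# with a standing system prompt acting as the persona Simon describes.
# All names and wording here are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are a careful negotiation advisor. Base your advice only on the "
    "details the user provides, say when you are unsure, and do not invent facts."
)

def build_messages(situation: str, backdrop: str, goal: str) -> list:
    """Combine the situation, backdrop, and desired outcome into one prompt."""
    user_prompt = (
        "I have a negotiation coming up.\n"
        f"Situation: {situation}\n"
        f"Backdrop: {backdrop}\n"
        f"What I'd like to achieve: {goal}\n"
        "Take your time and think this through before answering."
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    situation="Budget meeting with the finance director this afternoon",
    backdrop="Last year's request was cut by 20%",
    goal="Secure funding for one extra hire",
)

# With the OpenAI Python client, this would then be sent as, for example:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

The point of the structure is that the system message persists across the whole conversation (the persona), while each user message carries the per-negotiation context, so follow-up questions stay an ongoing conversation rather than a one-shot go.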
SPEAKER_02:One of the things I will tend to do is, like you, I try to make the briefing as comprehensive as possible. I like to treat it as though it was somebody in the room with me, and I would say things like: is everything clear? Do you know what to do next? Have I missed something? If I have, what would it be? Could you just clarify for me, before you go and do this, what it is you're going to do and how you're going to do it? Which, if you're teaching somebody new on your team, for example, with something new that you want them to go do, would be best practice: just making sure that that person, for their sake, is clear leaving the room, and, for my sake, that when they've gone off and spent a day or something, they come back with something that is close enough or maybe even better than I thought it would be. So I find that treating it more like a conversation partner, somebody who you might just ask to go and do something for you, and asking it to come back and ask you questions, and then, when it does make mistakes, or it adds some new insight, asking it to remember that.
SPEAKER_00:Right. Yeah.
SPEAKER_02:Just the same way you might say to somebody now, for the next time, let's remember to add that step, right?
SPEAKER_00:Yeah, yeah. As you say, it's a bit like treating it as a very intelligent and hardworking, recently graduated intern. They're good, they're clever, they will work hard for you, but often there are just some things we take for granted that they miss. So we need to double-check that their understanding is the same as our understanding.
SPEAKER_02:Okay. So maybe something else that is on the minds of listeners out there: these tools are becoming more accessible by the day. You and I are both exploring and using them; you've been doing it since 1989, since you were at school, when you were a little boy, right? And I maybe have to admit that I've been doing it a little less long than that. But people out there might be saying: yeah, these two guys are just getting off on this because they like to use it, but what about all this stuff about bias and miscommunication? Are we not in danger of just amplifying a lot of BS and a lot of negative stuff? So, from the research you've done, and you're actively out there with your bot and doing a lot of trainings: what should leaders and managers be doing to make sure AI enhances things like empathy and understanding rather than eroding them? And how can they use it to bring out the best in their people, which aligns with one of the key tenets of this podcast: how do I bring out the best in myself and others? Maybe share with my listeners some aspects of that. You're listening to Leading People with me, Gerry Murray. My guest this week is Simon Horton, negotiation expert and author of The End of Conflict: How AI Will End War and Help Us Get On Better. Still to come: we're diving into one of the biggest questions surrounding AI. Can it help reduce human bias, or are we at risk of amplifying it? Simon shares what his research uncovered, why he's cautiously optimistic, and how the future of AI may depend less on the tech giants and more on the rest of us. That's coming up next.
SPEAKER_00:Well, to address the bias thing specifically: you're absolutely right, and there have been some quite high-profile cases showing that AI can be biased. And yes, the danger is that it will amplify that bias. Where does the bias come from? Obviously, it comes from humans, because it's trained on human data, and humans are biased. That bias has got into the training data, and so it thinks in the same biased kind of way. Now, I'm optimistic on that, and I appreciate I am an old white male, so it's easy for me to be optimistic. But I am optimistic for a number of reasons. Firstly, because it has come out into the open, and work is being done on it, both in a top-down way and a bottom-up way. The big frontier labs, OpenAI and Anthropic and Microsoft and Grok and all of these, are doing work on how to identify biases and then how to de-bias. In fact, there are platforms out there that you can use to run any text that you've written. If you've written an essay, or a speech you're going to give, and you think, hmm, I wonder if I've got any biases in here, you can run it through and it'll check for the known biases at least. Obviously, there is always the danger of the unknown biases, but the known ones it'll check for and point out in your text. So there's the top-down route, but then there's the bottom-up route as well, where a lot of the different demographic groupings that might suffer from these biases, whether based on race or ethnicity or gender or sexuality or whatever, are building their own models, collaborating with the larger frontier labs, and building them very much from the de-biased perspective they bring to it.
So there's a lot going on at that level, and I think this combination of top-down and bottom-up, and the collaboration between the two, is very, very hopeful. Interestingly, I mentioned that the premise of the book is that AI models human best practice and then makes it more widely available, and then humans can learn the best practice back from the AI. I think the same might happen, and I'm hopeful that it will, with bias. Our society is biased for deep reasons, genetic reasons, evolutionary reasons, and it's quite difficult to de-bias an individual or to de-bias a society. But it's easier to de-bias AI: it's easier to identify biases in AI and then do something to correct for them. So I am hopeful that, again, the AI will learn best practice regarding bias from humans and become less biased, and then humans, our wider society, will become less biased because of the benefits of the AI.
SPEAKER_02:Yeah, and one of my questions here was really around where you see the real synergy between human ways of approaching negotiation and conflict and artificial intelligence. So, would this be one example where you see potential for synergy, where we can reduce bias in society?
SPEAKER_00:Yeah, I think the potential for synergy between AI and humans is huge, because all the technologies we've invented in the past, whether we're talking about the steam engine or electricity or the plow or whatever, have amplified our strength or our speed or our reach: basically, physical capabilities. But humans have always had the intelligence bit, and that's been amazing; it's done us tremendously well and built all of these things we see around us. However, it hasn't been quite good enough in many ways, and hence the inequalities in society, hence the wars that still exist, hence poverty, etc. And AI as a technology isn't an amplifier of our strength, it's an amplifier of our intelligence. So those issues that humans have found just outside our reach, how to build fair societies and such, I think AI will help us with. Things like climate change: if it's left to the humans, I just don't think we're going to do it, basically. We've been trying for 50 years, and people are still eating meat, people have still got the radiators on at 24 degrees. We're just not going to do it. But I think it's AI, with its greater intelligence, working with humans, not at the expense of humans, that really will solve the deep human problems we face, whether that's happiness or equality or diversity or wealth, abundance, health, longevity. I think AI is the thing that really does help us solve those.
SPEAKER_02:Okay, so there might be a few people recalling the expression: where there's a will, there's a way. And now we know there's a way. Maybe we come full circle to what you said very early on in this conversation about the potential for good and the potential for evil in anything new in society, and particularly in a technology like this, which is almost exponential in the way it grows and infiltrates our day-to-day life. There's always money behind these things, and we see the massive amount of investment, to the point where, at the moment, it's only a few of the really big players who can get into this and afford to waste money, if nothing else. So if we look at this negative side, and we think about this will, and the influence money has on our willingness to do things: are the tech bros, as they're called, sufficiently motivated to actually do good with this? Or are they in danger, based on what we see at the moment, of falling down the evil pathway? Or is there something in society that might be able to prevent that from happening, in your observation? This might just be an opinion from you, or maybe you've seen some data to encourage you.
SPEAKER_00:So my opinion is you're absolutely right. Right now it's the money, it's the tech bros who are driving the conversation. And not just the conversation: they're driving the technology and how it's used, and therefore they are driving the future of society and the future of humanity. And do I trust those tech bros? Nope, I do not trust them. I do not trust Google as far as I could throw Google. I was going to say I do not trust Elon Musk as far as I could throw him, but I might be able to throw him at least a couple of feet, whereas I couldn't throw Google very far at all; it's very, very big. So I don't trust them in the slightest. But there is an alternative route possible, and this is exactly why I wrote the book. I think the reason we got into this place is that they have been running the conversation for the last 25 to 30 years, whereas pretty much everybody else has been shying away from it. Most people, as you said, didn't hear about AI until the ChatGPT moment in November 2022. And even those who had would mostly go: yeah, I don't really know much about it, I'm a little bit worried about it, and I don't want to talk about it. So this is why I wrote the book: to say, listen, we've got to reclaim the conversation. And look, there are only seven chief execs of big tech companies. There are eight billion of us. If there was a fight between us, we'd win. If there was a wrestling match between us eight billion and them seven, we'd win. So I do think there is still the possibility for us to have this conversation, put pressure on the government, put pressure on the tech companies themselves, and raise the conversation so that we use it in the right way.
And whilst you're right about the money being the main driver, and they've got all of the billions to build these data centers and do all of the research, a lot of AI is actually being driven bottom-up. You said that you've been playing around and knocked up some things, and I've been playing around and knocked up some things. Well, people who are more technical than you or I are doing the same in their living rooms, and they're knocking up some really interesting things. And I think the future is there as much as it is in the frontier-model, top-down approach. So I do believe it is up to us, by spreading this conversation, what outcome we get. Do we get tech bros living on Mars after blowing up Earth? Or do we get a utopia for everybody and all humans flourishing? That is up to us.
SPEAKER_02:So a question that just comes to mind here is: how did AI help you write this book, or did it? Did you get it involved, provoke it, ask it questions?
SPEAKER_00:So I did use it. I started writing the book a couple of years ago, and AI wasn't at its best then, but it was still useful. A couple of examples. I remember one chapter, a bit of an outlier chapter in the book, where I was thinking: I've got to write about this, but I don't know anything about it, and I need the chapter in there, so what am I going to write? So firstly, I asked it to give me a dozen subsections within that chapter, and it came up with a really good plan. I ended up doing that for every chapter, though not in quite the same way: I would typically do my thinking first, then get it to do some thinking, and then marry the two. But with this particular chapter, for each of those subsections, I then said: okay, can you take that first subsection and write me, let's say, 1,200 words on it? And it did. I printed it off and read it, and thought: oh, this is quite good, but it's not really my voice. So I said: can you do the same content but in my voice? A little bit more of a joke, that kind of thing. And again it came back with something, and I printed that off and thought: oh, this is really good. There were literally some laugh-out-loud jokes, and some great analogies and metaphors in it. Brilliant: I know what to do for the rest of the chapter. So I did the same for every subsection, printing each one off as I uploaded the next, and then eventually I collected them all from the printer and started reading them.
And unfortunately, every single subsection was identical, with the identical jokes and the identical metaphors. So I thought: oh well, this isn't going to work then. So I didn't use it for that kind of thing. But occasionally, how I used it was this: you know that moment when you think, hmm, I'm saying this in a certain way, and maybe you read it out loud and it sounds really clunky, or maybe there's a word you're missing, what's that word? Or sometimes I would get it to come up with catchy titles for a subsection, that kind of stuff. So I used it more as a writing coach, specifically around language, around a phrase or a word in that kind of context.
SPEAKER_02:Yeah. I can ask you that question because you've written several books without ChatGPT being around. And I was smiling to myself when you mentioned the humor thing, because one of the things that characterizes your books is that little bit of dry humor that sneaks its way in. And I'm not sure that, even with training, you can teach the AI, at the moment anyway, to reproduce that turn of phrase, that one little thing that comes from something in your experience or some way you've looked at the situation, because it's very personal, isn't it? That kind of observational humor can be very personal. So it's actually intriguing to listen to you talk about how you experimented with the AI, and in the end you still wrote the book yourself. It became a writing assistant, a sort of sub-editor: you could throw something at it, and it might suggest new ways of doing things. So, for all those people out there thinking, well, I need to write a book, or I want to write a book, and I'll just fire up my ChatGPT now: beware, beware, it doesn't work quite like that. Okay, so before we close off, I have a couple of short questions for you. By the way, you're going to have to read the book now to find out which chapter Simon was talking about. But if there's one simple idea or insight you'd like listeners to take away from this conversation, or the book, what would it be?
SPEAKER_00:It's that AI can be very helpful: it can help you resolve your conflicts, it can help you be more collaborative, and you will get better outcomes in your life. The people you're negotiating with will get better outcomes too, and you'll improve the relationships in your life, if you use it in that way.
SPEAKER_02:Yeah, actually, there's a great chapter in the book about relationships. We're more focused on the leadership aspect here, and you also talk about world peace, but there's a great chapter about just our day-to-day family stuff, partners, and all of that. So if you're curious out there, I would encourage you to grab a copy of this book. And what's next for you, Simon? Are there any more ideas or experiments emerging from The End of Conflict that you're excited to explore?
SPEAKER_00:Yes. Well, I'm building on all of this stuff that we're talking about, the bot and so on, so that organizations can use it in the context we were discussing, making them more collaborative. I'm also talking to a lot of organizations about using deliberative technology within the organization to make it more collaborative, to get greater employee engagement and therefore a more highly motivated workforce. I'm also looking at using that in the community context, so maybe local councils making policy decisions using these technologies. And lastly, infrastructure companies. Very briefly: infrastructure companies have classically lost billions of pounds and gone years over budget on their projects. Why? Largely because of community resistance to the new infrastructure project, whatever it is, whether it's a road or a railway or a housing development. The NIMBYs, the not-in-my-backyard people, say: yes, of course we need housing, but not here, not next to my house. Well, it turns out that most people aren't NIMBYs, they're MIMBYs: maybe in my backyard, maybe under certain conditions. If you were to do it this way, if you were to take this into account, if you listened to my voice and my opinion, then yes, I would say yes. And the platforms we're talking about, the deliberative platforms, enable those kinds of conversations to be had amongst communities. So they reach a nuanced agreement that everybody is behind, and the infrastructure project can go ahead quickly, on time, and on budget.
SPEAKER_02:So I'm going to put links in the show notes. And do you have any special offers for my listeners here today?
SPEAKER_00:That's a good negotiation, Gerry. So yes, there is. If anybody reaches out to me on LinkedIn and says that they heard me on your show, I'm happy to give them a discount on the book. I think the book is normally £14.99 on Amazon. You can get the PDF for £9.99 on my website, and I'm happy to give it to them for £7.99. So that's two pounds off, or indeed seven pounds off the Amazon price. But I'm also willing to negotiate further if they wish. If they go to my website, theendofconflict.ai, there's a bit on the buy-the-book page where they can negotiate with me, and I'm willing to give further discounts, down to a minimum price of £3.99, depending on what value and contribution they're willing to bring. But if they just connect with me on LinkedIn and, as you say, mention that this is where they came across me, then I'll give them a discount anyway.
SPEAKER_02:Okay, so for the brave of heart who want to negotiate with a negotiation master, you have your choices now as to how you do it. Simon, it's always a pleasure having you on Leading People, and I'd like to thank you once again for sharing your insights, wisdom, and tips with me and my listeners here today.
SPEAKER_00:It's always a pleasure to be here, Gerry. Thanks so much, and I just hope that your listeners really enjoyed it.
SPEAKER_01:Coming up on Leading People. And my question to people in L&D is always: do you want to deliver some content, or do you want to make a real impact? Now, they're always going to say a real impact. Therefore, you're going to have to do something more than just deliver content. And it's that "more" bit where my expertise is, and the stuff in the books and so on: what else do you have to do, as a wrapper around the content, to make it have an impact?
SPEAKER_02:That's Paul Matthews, one of the most practical and respected voices in learning and development. In our next episode, we dive into what really drives capability at work, why so much training fails to transfer, and how to shift from delivering content to delivering impact. We also explore the five components of capability, what leaders need to understand about learning, and the hidden engine of informal learning that keeps every organization alive. You won't want to miss it. And remember, before our next full episode, there's another One Simple Thing episode waiting for you: a quick and actionable tip to help you lead and live better. Keep an eye out for it wherever you listen to this podcast. Until next time.