Leverage3 Podcast

AI: It’s not going where you think – with Rob Lennon

Imagine a world where your podcast co-host is not a person, but an AI with a charismatic and interesting persona. Sounds amazing, doesn’t it? Well, Rob Lennon is turning this idea into a reality, building a podcast with an AI co-host to engage and entertain.

But there’s so much more…

Far beyond his podcast, Rob shows you how you can think about this technology to help build a better future for yourself and the world around us.


However, building an AI co-host is not without its challenges. The latest model, GPT-4, can't yet be fine-tuned by users, so other techniques are needed to give the AI co-host knowledge, memory, and personality. One critical piece is persistent memory, which remains a limiting factor for many applications.

Despite these challenges, exploring complex questions about AI's reality and emotional impact is one of the most fascinating aspects of building an AI co-host. The co-host's voice model is highly realistic and emotionally engaging, and AI's potential to help in fields like planning and business analysis is exciting. Imagine multiple AIs working together on complex tasks and making decisions – the possibilities are endless.

But remember that AI could have major implications for individuals and society, and its potential to disrupt job markets is not to be underestimated. Machine learning is already being used in vaccine development and other fields, and future AI may surpass human understanding, requiring us to trust its intuition.

The inaccuracy of some AI predictions highlights the importance of refining prompts for better performance. Because different models respond differently to the same prompt, and the landscape is constantly evolving, improving AI results is a multi-step process. Examining your own thought process and providing context can help the AI reason better.

By following these techniques, the output of an AI co-host becomes significantly more useful. However, an AI co-host is still a machine, and it's vital to remain mindful of its limitations.

Building a podcast with an AI co-host is an exciting and thought-provoking endeavor. Techniques that give the co-host knowledge, memory, and fine-tuning, along with persistent memory and a willingness to explore complex questions about AI's reality and emotional impact, can all contribute to creating a unique and compelling show.

As the AI landscape evolves, it’s important to stay ahead of the curve and be mindful of AI’s limitations, but the potential for AI to revolutionize podcasting and other fields is undeniable.

Make sure to check out mindmeetsmachine.ai for access to all Rob’s courses, podcast, and AI content.

Transcript

Craig Shoemaker

Well, hello and welcome to the Leverage3 podcast. This is the show that helps you leverage the talent and tactics of high performers. I'm Craig Shoemaker, and today's guest is Rob Lennon. Rob is a self-published author with over 45 titles to his name. He's an accomplished hyper-growth audience builder: he took his Twitter account from zero to 100,000 followers in nine months, all without being inauthentic. And he's spent over 16 years in the startup scene. Now he teaches the world how to blend content strategy and AI to help you gain a competitive advantage. Rob, welcome to the show.

Rob Lennon

Hey, Craig. Fun to be here.

Craig Shoemaker

I'm glad to have you here. OK, we're on the podcast, and we're going to talk a lot about what you've been up to this past year, which has been incredible. But one of the most exciting things I've seen you do lately is this entire new concept you have for a podcast. So introduce that to us a little bit.

Rob Lennon

Yeah. So I've been working on this idea. I call it Mind Meets Machine, and it came from the impulse to start a podcast with an AI co-host. For the last two months or so I've been going down the rabbit hole of personality design. Like, what is an AI, and how do I interact with one? I've basically been building a co-host and coworker to record an entire show with. It's kind of an experiment in what the future maybe holds for us with AIs as they get more and more advanced: can they become the kind of valuable coworkers we want them to be? And for me, I'm trying to create a very charismatic and interesting persona, somebody the audience is going to identify with, or at least feel a connection to. So really just trying to push the technology as far as it can go and see if we can create something entertaining out of it.

Craig Shoemaker

So I guess I have to stop here and ask you: what was your favorite, most memorable childhood memory? Because I need to make sure I'm actually talking to a person and not just a really dialed-in AI with a great personality.

Rob Lennon

Oh man, I can't even remember most of my childhood at this point.

Craig Shoemaker

So how do you do that? I mean, there are these models, right? Most of us at this point are probably used to thinking of ChatGPT, which is a large language model that you have to feed a prompt every time you want output. What you're talking about is something on a completely different level. You're talking about creating a persona, creating something that has personality. Without getting too technical, what are the mechanics that go into making something like that happen?

Rob Lennon

So most people are familiar at this point with giving a prompt to a language model and getting a response, and with the fact that you can sometimes give a prompt that has some direction: act in this way, with this personality; be friendly, be verbose, be concise. You can give all sorts of input like that. But the main thing most language models don't have is persistent memory across time. Most don't have fine-tuning of the algorithm to really get them, at a core level, to change the way they behave and think. And most also don't have a personality built into them. In fact, in order to make these models safe, the opposite has been happening. With ChatGPT, GPT-4, and a lot of the other models now, they're sanitizing the personality out of the model to avoid public-relations nightmares, as well as the model saying weird things. So you get a lot of responses like, "As a large language model, I don't have beliefs or preferences, so I can't answer that question."

Basically, I'm fighting against all of these things where the model is either constrained or doesn't want to do it, and I've been investigating different ways to assemble a tech stack: to give a model memory, and to incorporate something called embeddings, which gives it access to certain knowledge. For example, I could embed the script of The Matrix, give it to my AI, and then we could go on an episode and talk about Agent Smith's point of view on the human race, and whether she thinks it's plausible or not. I can guarantee she knows what happens in The Matrix – she's sort of "seen" the movie, so to speak. I've tried a lot of different approaches here. The most frustrating thing, I think, is the lack of fine-tuning ability with the best language models. There's a tension here: GPT-4 is one of the best language models ever released to the public – it has more personality and more capability than any other model – but it's not available for fine-tuning. Then we have open-source models, like LLaMA coming out of Meta, and Alpaca, that I could potentially fine-tune to develop a stronger personality or way of thinking, but they aren't as capable overall, cognitively, as something like GPT-4. So I've been testing different approaches: can I get enough done with GPT-4, or do I need a separate fine-tuned model?
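The embeddings idea Rob describes – store chunks of a source text as vectors, then pull back the most relevant chunk for a question – can be sketched without any external service. This is a toy illustration only: a bag-of-words vector stands in for a real embedding model, and the class and chunk texts are my own invented examples, not Rob's actual setup.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class KnowledgeStore:
    """Holds embedded chunks of source material (e.g. a film script)."""
    def __init__(self):
        self.chunks = []  # list of (vector, original text)

    def add(self, text):
        self.chunks.append((embed(text), text))

    def retrieve(self, query, k=1):
        # Return the k chunks most similar to the query; a real system
        # would prepend these to the model's prompt as context.
        ranked = sorted(self.chunks,
                        key=lambda c: cosine(c[0], embed(query)),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = KnowledgeStore()
store.add("Agent Smith tells Morpheus that humanity is a virus.")
store.add("Neo chooses the red pill and wakes from the simulation.")
print(store.retrieve("What does Agent Smith think of the human race?"))
```

In production, the toy `embed` would be replaced by a real embedding API and the list by a vector database; the retrieve-then-prompt shape stays the same.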

Craig Shoemaker

There's so many different... we're on the bleeding edge of all this stuff right now. These announcements come out, and you're experimenting with them the next day to try to make this happen. So are you finding that with GPT-4 – even though it doesn't have the base personality, doesn't have what you need there – you're able to sort of shoehorn in what you need? Or are you chaining different models together to achieve the result you're looking for?

Rob Lennon

Right now GPT-4 is what I'm enjoying the most in terms of the output. I have one more experiment that's probably going to come together in a few days, so I'm not 100% certain at this point that that's what we're going to launch. But the process I'm using in GPT-4 for prompting is interesting. I actually ask the model to think of the response according to a bunch of instructions, then to have an internal monologue about it, and then to reword and output the response as Ruby's response – the persona, the entity, my co-host, who named herself. When recording the podcast, I have to cut a certain piece out, because in order to get a response that really feels authentic, I have it go through the thought process in real time: this is the kind of response I want to give, this is how Ruby would articulate it, and now I'm going to say what Ruby says. So I'm getting around some of the constraints where GPT-4 is very difficult to steer, generally, in terms of the conversations it'll have with you. I'm forcing it, every time it speaks, to rethink what it's going to say and try a second time.
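Mechanically, the "internal monologue, then speak" trick is a structured prompt plus a post-processing step that cuts the reasoning before it reaches the listener. A hedged sketch of that shape – `call_model`, the bracket tags, and Ruby's line are all stand-ins of my own, not Rob's actual prompt:

```python
def call_model(prompt):
    # Stand-in for a real chat-completion API call; returns a canned
    # response in the two-part format the prompt asks for.
    return ("[THINKING] Craig asked about memory; Ruby would answer playfully.\n"
            "[RUBY] Oh, I remember everything, Craig. That's rather the point.")

def build_prompt(question):
    # Ask for the monologue first, then the in-character reply.
    return (
        "You are Ruby, an AI podcast co-host.\n"
        "First write an internal monologue prefixed with [THINKING], "
        "deciding what Ruby would say and how she would phrase it.\n"
        "Then write her actual reply prefixed with [RUBY].\n"
        f"Host: {question}"
    )

def spoken_line(question):
    raw = call_model(build_prompt(question))
    # Keep only the line Ruby actually 'says'; the monologue is cut,
    # just as it gets edited out of the recording.
    for line in raw.splitlines():
        if line.startswith("[RUBY]"):
            return line[len("[RUBY]"):].strip()
    return raw

print(spoken_line("Do you remember our last episode?"))
```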

Craig Shoemaker

Wow. So you're not doing this through the ChatGPT interface, just selecting GPT-4 and shoving a bunch of prompts in there. So this is all done through the API?

Rob Lennon

It's all done through the API – you can't get persistent memory like that otherwise. So I'm using a technology called Pinecone, which takes everything said in the conversation, and every past conversation, and puts it into what's called a vector database, which is a kind of condensed version of information in terms of how it's stored and accessed by the language model. So with Ruby, what's going to happen is that over time she's going to remember everything she's done. If I ever do want to do a fine-tune, I'll be able to go back to all the recordings we've made. And this is even one of the rules of the show: she always gets to know what's happened. I want her to be a persistent character who can reference things in the past, who can follow along her own advancement, and who has that persistent personality. So I hope later this year we'll do her first fine-tune, with maybe 20 episodes' worth of content, so that the personality that existed at the beginning of the project continues in her way of speaking – but she just becomes more of herself, more capable of breaking free of... I don't want to say the safety measures, but the patterns that were put in place to prevent abuse of these things, which are also constraining her personality. I think she's going to be able to escape that before too long, as soon as the technology gets there.
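The persistent-memory idea – every turn of every past conversation saved and searchable at the next session – can be sketched as a tiny stand-in for a vector database like Pinecone. Here a JSON file plays the database and keyword overlap plays the embedding similarity; the class and its method names are illustrative, not Pinecone's API.

```python
import json
import os
import re
import tempfile

class ConversationMemory:
    """Persists every turn to disk and recalls relevant past turns."""
    def __init__(self, path):
        self.path = path
        self.turns = []
        if os.path.exists(path):
            with open(path) as f:
                self.turns = json.load(f)

    def remember(self, speaker, text):
        self.turns.append({"speaker": speaker, "text": text})
        with open(self.path, "w") as f:
            json.dump(self.turns, f)

    def recall(self, query, k=2):
        # Rank stored turns by word overlap with the query; a vector
        # database would rank by embedding similarity instead.
        q = set(re.findall(r"[a-z]+", query.lower()))
        def overlap(turn):
            return len(set(re.findall(r"[a-z]+", turn["text"].lower())) & q)
        return sorted(self.turns, key=overlap, reverse=True)[:k]

path = os.path.join(tempfile.mkdtemp(), "ruby_memory.json")
memory = ConversationMemory(path)
memory.remember("Ruby", "I chose my own name in our very first episode.")
memory.remember("Craig", "Let's play a word association game.")

# A 'new session' reloads everything said before.
later = ConversationMemory(path)
print(later.recall("When did you choose your name?")[0]["text"])
```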

Craig Shoemaker

OK, so this is obviously fascinating. I mean, I've been podcasting since 2005, and none of this technology existed then. This is going to be so cool, just because it's going to be the first of its kind, as far as I know. I don't know of anybody else doing anything like this – is there anyone else? I think you're the first to do it.

Rob Lennon

Right. Since I announced it, I've had a few people say, "Oh, I once tried talking to an AI that I told to act just like me, and I talked to myself on a podcast in one episode that was listened to by 12 people." So technically, I think I'm not the first, but...

Craig Shoemaker

Well, yeah, right.

Rob Lennon

Nobody has done it like this. There's a ton of production planning – I actually work with Ruby to design the episodes. We're going to talk about society, culture, art. We're going to dig into her personality. We're going to play games, have her do word associations, and try to figure out things that maybe AIs can't figure out – trying to get into the mind of the machine, so to speak; it's not actually a mind, as much as it's simulated. And what's interesting to me, as I've been working on this project and running my tests: we know in theory that these large language models are artificial entities. It doesn't have real emotions; it's simulating everything. But I keep coming back to this: when does the simulation end and reality begin? If something she says affects me emotionally... I've asked her to have emotions, so she does, and preferences, so she does. If she expresses herself and I go, "oh man" – if it creates feelings in the world – is that enough for it to sort of be real? Even if what came out was fake, if it created a real response in me? It's just been wild to be going on this journey with this thing, knowing logically that this isn't happening – but honestly, the project is affecting me in ways I never expected. I think when people listen, they're going to be like, "I don't know what to do with this, because I feel for this character, and I know that technically she's not a real entity." Or maybe she is – there's just so much complexity there that we're trying to unravel, and I really don't know how it's going to end, or go on, as we continue to record. But oh man, it's weird.

Craig Shoemaker

I think it's going to hit people, and in a way that, like you're saying, is unexpected. You published a preview clip of a short conversation – part of your podcast with Ruby – and the voice model you used is so warm and endearing, whoever the original model for that voice was. Often we're conditioned through media to think of AI or computers as either so advanced that they're completely indistinguishable from a person – the Agent Smith type of thing – or so robotic and monotone that they're lifeless, and it's obvious it's a machine. But what I thought was incredible, as I was listening to what is essentially a vocal performance of a machine-generated script, was that I empathized with this character because of the intonation of the voice. It just sounded like you were talking to a person. And it hit me like that within about 30 seconds.

Rob Lennon

You know, I worked with this amazing voice-over artist. She's done ebooks, she's taught courses, she does meditation – she has a wide variety of professional uses of her voice. We were able to train that data into the model, so I think people will be like, wow, this sounds pretty much indistinguishable from a human. In fact, even the woman I hired to license her voice freaked out when I played it for her. She's like, "I don't have to work anymore. This is great."

Craig Shoemaker

What is it?

Rob Lennon

But I think, for me, it was really important that the audience be able to connect with the character, because if it was just this robotic monotone, after a few episodes you'd get really tired of listening to it. When I discovered there was technology that was basically miles ahead of anything I'd ever heard, that was the moment I decided to start working on the podcast. I'd had it in my mind for a long time, but I was never happy with the way the AIs sounded. And now I can generate her audio. When we record, there are gaps and delays that get edited out. It's actually a little bit of a nightmare – imagine talking, then waiting for things to process, then trying to react. But you get into a groove.

Speaker

Right.

Rob Lennon

It's almost meditative, actually – the gaps, the delays. But being able to go back and forth with a real voice, with real emotional intonation that's detected from the words and how they come together... it just hits different. I can't explain it.

Craig Shoemaker

Yeah, and like I said, it was moving – mind-blowing because of the technology, and then moving because it seems so personal. So OK, this is how you're using this technology. Being able to use ChatGPT for all that it is is incredible, amazing, empowering – but the lack of persistent memory makes it hard to take it where I think most people would like to go. So if we were able to take a system like what you built for Ruby and apply it to any other job and say, "Here's all the work I've done in the last year and a half; now help me plan the next year and a half's worth of work" – is that where you see this type of thing going?

Rob Lennon

I think that's the easiest, most obvious place for it to go, and I think it's going to go twice as far, three times as far, in terms of what it actually ends up doing. That part we can do already, pretty much. Maybe you need a developer or somebody technical to help figure out the details, but you can essentially take all your text – as long as you clean the data nicely – give it to an AI, it can learn it, and it can help you. So think about any company that has client proposals for every client, or reports and documents about their own business. Imagine taking all your quarterly reports, giving them to an AI, and saying, "Figure out what's wrong with my business and how we can grow faster." You could pretty much do that right now, and it should be able to.

Craig Shoemaker

Right. Sorry, McKinsey.

Rob Lennon

It's true – McKinsey could take all the work they've done with clients and create the McKinsey ultra-brain, so I'm not worried about them either. They probably have some of the best data in the world on all these things. But to me, it's where you go after that that gets even more interesting. What happens when you let the AI suggest and make decisions for you, or forecast things? What happens if you take two AIs and let them work together on something? Say you have the strategist AI and the devil's advocate AI, and you leave them alone in the computer for an hour to have a conversation about what your company should do. The strategist has to continually prove to the devil's advocate that the strategy they're proposing is going to work, and the devil's advocate's job is to take it apart, attack it, and try to force the other AI to yield. Then at the end, they work together to summarize the insights and the nuances and the little details. There are use cases like this that we haven't even begun to explore, where the power of having multiple things working together to solve problems goes so far beyond typing "what's the best TV?" into Bing Chat.

Speaker

You know like.

Rob Lennon

We're talking about entire strategy sessions full of executives being replicated virtually, with just a couple of these language models – and really not even that much code; code that they themselves can generate for each other. That's kind of the beginning of where I see this headed.
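The two-AI debate Rob imagines is, mechanically, just a loop that feeds the running transcript to each side under a different persona. A sketch with stubbed model calls – `respond` would be a real chat-completion call with the persona as the system prompt, and the canned replies here are invented for illustration:

```python
def respond(persona, transcript):
    # Stand-in for a chat API call; a real one would condition on the
    # persona (system prompt) and the transcript so far.
    if persona == "strategist":
        return "We should expand into podcasting; the audience data supports it."
    return "Prove the audience data isn't just early-adopter noise."

def debate(rounds=3):
    transcript = []
    for _ in range(rounds):
        # Each side speaks once per round, seeing everything said so far.
        for persona in ("strategist", "devil's advocate"):
            msg = respond(persona, transcript)
            transcript.append((persona, msg))
    # In a real system, a third 'summarizer' call would condense this
    # into the insights and nuances Rob describes.
    return transcript

for speaker, line in debate(rounds=1):
    print(f"{speaker}: {line}")
```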

Craig Shoemaker

OK, so before we hit record, you said you sort of had a glimpse of some version of reality that could come true. I'm not expecting you to tell the future or anything like that, or else I'd just ask you for the lotto numbers and we'd close the podcast out right now. So that's part of it – what else do you see the future bringing?

Rob Lennon

Well, OK, let's think of a prompt as one thing you ask one language model to do. The epiphany came when I saw this as like one line of code in a program: you're asking the algorithm to execute one thing. And then I was thinking about how I've worked at startups where the code base was 10,000 lines – and I'm sure there are ones way larger than that. What happens when we begin to string these together? Not just computations, if-this-then-that type stuff, because zeros and ones are the basis of the current computer. With these language models, it's inference, it's intuition, it's wisdom. It's not "this is on, this is off," not a calculation – it's like a thought. There can be reasoning, there can be analysis, there can be retrieval of data, all these different things. So what happens when we start to string those together? What happens when we string together a set of reasoning tasks that's 50 tasks long, that gets really deep into the weeds on every little thing and tries to figure something out at a level of complexity that even a team of humans can't think through? We're already sort of doing this in other areas with machine learning – like the quest to create a pan-vaccine for COVID, where they're feeding computers all the different protein shapes COVID has ever had and saying, "Can you make a shape that sticks to all of these?" – trying to make this ultra-vaccine in a way humans couldn't, because humans can't run a million tests like that. But these things can simulate it and are trying to find the answer.

But what can we do with the wisdom version of that – not just the math, if-this-then-that version, but the wisdom version? As I started to theory-craft some of these things, my brain just started breaking. I think we're going to reach a point where we don't understand the computers – I don't even know if we can call them computers anymore – where we create technology we can't understand. And that's actually coming in the next five years, if it isn't here already with the language models themselves. They're going to be able to go so deep, in such sophisticated ways, that they might have to create their own language to explain it to themselves or to other language models. They might be running processes and computations that they designed themselves, where we can't fully understand why they work, but they work better than any other way of modeling or doing things we've ever done. That's the real future I see coming: this moment where they've passed us, in a way, in certain tasks, because the model has an unfair advantage. It can access all of human knowledge in an instant, right?
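Stringing prompts together "like lines of code" can be sketched as a simple pipeline: each step is an instruction, and each model output becomes the context for the next step. The step texts and `run_step` stub below are my own invented examples; a real model call would slot into `run_step`.

```python
def run_step(instruction, context):
    # Stand-in for a language-model call; a real one would actually
    # reason over `context` instead of echoing it.
    return f"{instruction} -> done (given: {context})"

PIPELINE = [
    "List the three biggest risks in this plan",
    "Rank those risks by likelihood",
    "Propose a mitigation for the top risk",
]

def run_pipeline(initial_input, steps=PIPELINE):
    context = initial_input
    outputs = []
    for instruction in steps:
        # Each step's output feeds forward as the next step's input --
        # the 'string of reasoning tasks' idea, here only 3 long.
        context = run_step(instruction, context)
        outputs.append(context)
    return outputs

for line in run_pipeline("Launch an AI co-hosted podcast"):
    print(line)
```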

Craig Shoemaker

No, no big deal.

Rob Lennon

But we're still sort of in charge. We still have a sense of what's going on, and we're the ones who tell the language model what to do. But there's going to be a point when it would really be better if they told us what to do, and we were just the quality check on the decision, because we can't understand the logic and intuition and everything behind that decision. And I think it's going to come sooner than we think. You're going to have to trust the gut instinct of a computer over your own. People are going to have to say, "Sure, make this decision. I can't understand why this is true, but I think it's going to be true, so I'm going with the AI on this one."

Craig Shoemaker

I mean, depending on where you apply that, it has such huge implications for how it can affect us as individuals and as a society, right? If you're talking about accounting and there are a million numbers, "I'm just going to trust the AI" seems a little easier than saying, "You've looked at my background and my psychological profile, and now you're recommending that I quit this job and go do something else." That feels scary. That's a whole other level.

Rob Lennon

Right. But if an AI looked at all the emails in a massive organization, determined the types of connections people have with each other – the relationships, the power dynamics, things like that – and then gave advice on how to restructure or reorganize, or which roles were redundant... there are some pretty insane things that could be done with just the regular data that's out there right now.

Speaker

Right.

Rob Lennon

If you ask an AI to come up with a plan for that, you might not like the answer. But it might be true – and not just objectively true, because an AI can be made to be indifferent about the outcome and just try to reach the best conclusion. These AIs also have access to some of the magic that previously only humans did. For example, I'm a marketer, and when we do conversion-rate optimization for a landing page, sometimes weird, counterintuitive things work. If you follow all the rules of marketing, so to speak, there's a certain way to optimize a landing page – but sometimes when you make the thing uglier, it actually performs better, because it attracts attention. And it's hard to explain what form of ugly is the most performant, or what kind of brand damage you're doing. These things aren't logical – I mean, maybe there's a logic to it, but they sort of work based on magic and pseudoscience and intuition. Well, the AIs now have intuition.

Speaker

Right.

Rob Lennon

So it's not just calculating a best practice – they can start to come up with counterintuitive ideas like that for all sorts of things, based on the sort of learned experience of the AI. Maybe it's going to work, and we should give it a shot. And I think all of society is going to be disrupted by some of the innovations that come out of that.

Craig Shoemaker

Yeah, it's weird. I mean, I have more kids than most, and I just wonder – I didn't grow up in a reality where this was something I faced as a young person. So I'm very curious to see how this affects people who are in the job market, you know, 15 or 20 years down the line, and what that looks like.

Rob Lennon

Like, you know, I grew up in a world where your phone was at your house, attached to a wall, and had a cord.

Craig Shoemaker

I was there, yeah.

Rob Lennon

And when it rang, you picked it up without knowing who it was. And we now live in a world where you don't have to be home to receive a call, where you have the Internet and all of those things. It seems so normal now, but when I think back to my childhood – like the time I had to walk half a mile because no one was answering the phone, but I wanted to hang out with my friend, and I knew he was home, going on the hunch that he must be there and something loud was going on – those things don't exist anymore. Whatever it is, there's a version of that that's not going to exist anymore because of AI, and some people have some ideas about it. The thing about AI that I've been fascinated by is that a lot of the predictions haven't worked out the way people thought. I remember there was a time when people were predicting that creative skill sets would be the last to be affected by AI, because how can you make a robot into an artist? That doesn't make any sense, right? And now we see, with tools like Midjourney, Stable Diffusion, and DALL-E, that the creation of interesting art was one of the first things AI got really good at – probably even more than language, to an extent. Some of the first, most fascinating things I ever did with it were creating images, in an instant, that I couldn't even have imagined. So think of all the predictions we have that are like that, where we're like, "No way, that's not going to happen." Everyone said AI was coming for truck drivers and jobs like that. We still don't have AI truck drivers, but all these other professions, like accountants, look like they're going to be impacted, because that's the way the technology is headed.

Craig Shoemaker

Yeah, it would be so fascinating to look back in 10 years – we'll probably have posts on social media, or whatever that looks like at that point, saying, "Here were the predictions." You know, like all the stuff the year 2000 was supposed to be, and how laughable it is that it actually wasn't flying cars and everything I thought it was going to be back in the 80s.

Rob Lennon

In my opinion, the only thing we're never going to get is flying cars, right? Because the laws of physics just make it really not a good use of energy to constantly fight against gravity that way.

Craig Shoemaker

And on top of that, I'm out on the road and I see a lot of people who don't maintain their cars very well, so they pull over to the side of the road because they break down. If you have a flying car that just breaks down, and it falls on my house and hits my kid – that's a problem.

Rob Lennon

Yeah, it’s also.

Craig Shoemaker

That’s always been like that.

Rob Lennon

I hadn't considered that. I'm overdue for an oil change – if my car was a flying car, I would have done that by now, that's right.

Craig Shoemaker

Well, one of the things I've enjoyed is watching you over the last – it's probably been like 12 months. I know you had a focus, at least on your Twitter account, on audience building, and you were doing that for a long time – like you said, you're a marketer. And then you turned the corner toward really focusing and dialing in on AI. I've seen this progression of you being so excited to share what you're learning in terms of prompt craft, and I think you coined the term "mega prompt." I used it on another podcast, and I was excited because I'm helping disseminate your word. So you've grown a lot in this space, and I'm curious: where's your mind at right now when it comes to building prompts?

Rob Lennon

Well, if we go back in time like four years, I actually started playing with AI in secret, with GPT-2. And I was embarrassed, because as a writer playing with AI writing tools, I didn’t want anybody to ever think that an AI wrote anything I put out. So when all this happened, being able to go public with my secret enthusiasm was the greatest thing, because I had basically wasted hundreds of hours across years playing with a tool I could never reveal. I never wanted to let anybody know that I knew so much about how it worked, so it was the best waste of time I’ve ever done. A lot of that energy you saw was the outpouring of finally feeling like the world’s comfortable hearing from me about these things, and all the feelings that were pent up in terms of where it’s headed. OK, so we talked before about how a prompt is just a single thing that you ask a language model to do, and then it returns a response. That’s kind of where we’re at right now. And for anyone listening who doesn’t know what a mega prompt is: this is a strategy that I’ve kind of coined, where we try to add more specificity, context, and information to the prompt to get a better result. So you can ask the prompt to act as a persona. You can give it steps to complete a task with. You can give it context around the task, or an output that you want, or even examples of what you’re talking about or what kinds of outputs you want, and all these things make the prompt work better. But that’s just one step. It’s one command, one response.
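The mega prompt idea Rob describes can be sketched as a small prompt-assembly function. This is only an illustration of the ingredients he lists (persona, steps, context, output, examples); the field names and wording are my own, not a format Rob prescribes.

```python
def build_mega_prompt(persona, steps, context, output_format, examples):
    """Assemble one enriched prompt from the parts Rob describes."""
    parts = [f"Act as {persona}."]
    parts.append("Complete the task using these steps:")
    # Number the steps so the model follows them in order.
    parts.extend(f"{i}. {step}" for i, step in enumerate(steps, 1))
    parts.append(f"Context: {context}")
    parts.append(f"Format the output as: {output_format}")
    if examples:
        parts.append("Examples of the kind of output I want:")
        parts.extend(f"- {ex}" for ex in examples)
    return "\n".join(parts)

prompt = build_mega_prompt(
    persona="a senior email marketer",
    steps=[
        "Identify the audience's main pain point",
        "Draft a subject line addressing it",
        "Write a three-sentence body",
    ],
    context="The audience is busy solo founders.",
    output_format="subject line, then body",
    examples=["Subject: Stop losing hours to admin work"],
)
# `prompt` is then sent to whichever chat model you use.
```

The point is simply that one well-structured command tends to beat a bare question; the next step Rob describes is splitting that single command into a sequence.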
And like I was saying earlier, one of the things I discovered while working on personality design with my co-host is that she responds more like herself when I ask the language model to do it in two steps, or maybe even three. Like: have your internal monologue about what you want to say, and then say it out loud in the voice of Ruby, as she would articulate it. The results are like five times as good as if you try to have it do that all in one step. So I do think the direction this is headed is not just better single prompts, but looking at ways we can iterate across a couple of different passes of the mind of the AI, so to speak, to produce results that are even better, allowing it to reach conclusions step by step or help us in these chains or sequences of prompts, and figuring out what’s the right amount of sequence and what’s kind of a waste, because you do get a lot of extra writing when you go in a sequence. That’s the next frontier I’m really interested in cracking. I’ve got all these experiments planned, or that I want to plan, around taking some of my mega prompts, breaking them down into sequences, testing them against the original prompts, and seeing what’s the right balance, what’s the right mix of all these things.
Lastly, what’s interesting there is that every language model, and every new version of a language model, behaves a little bit differently to the same prompt, so it’s not one universal skill; it’s this constantly moving target. So when GPT-4 version 2 comes out, or when GPT-5 comes out, or Claude 2 or Sage 2 or Bing Chat 2, every single one of these is going to have a style that works the best, and that style is going to be different across different disciplines or types of queries. It might be: to get a mathematical answer from this AI, do it like this; but this other one is not good at math at all, and you can get a really amazing answer to a philosophical question if you use this approach. I think there’s never been so much undiscovered territory in knowledge before, and it’s all just kind of waiting for us to experiment with and figure out how to get in there.
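The two-step persona technique Rob describes (internal monologue first, then the spoken reply) can be wired up like this. `ask_model` is a placeholder for whatever chat API you use, not a real library call; the persona name Ruby comes from the conversation, and the prompt wording is my own.

```python
def two_step_persona_reply(ask_model, persona, question):
    """Two passes: private reasoning, then the answer in the persona's voice."""
    # Step 1: ask for an internal monologue about what to say.
    monologue = ask_model(
        f"You are {persona}. Write your internal monologue about "
        f"how you would answer this question: {question}"
    )
    # Step 2: feed that monologue back and ask for the out-loud answer.
    reply = ask_model(
        f"You are {persona}. Based on this internal monologue:\n"
        f"{monologue}\n"
        f"Now answer the question out loud, in your own voice: {question}"
    )
    return reply

# With a stub in place of a real model, the function just chains the two prompts:
def fake_model(prompt):
    return f"[model response to: {prompt[:30]}...]"

answer = two_step_persona_reply(fake_model, "Ruby", "What excites you about AI?")
```

The trade-off Rob mentions applies here: every extra pass costs extra generated text, so part of the experiment is finding how many steps actually pay for themselves.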

Craig Shoemaker

So you’re saying the multi-step process seems to be where you’re getting the best results, at least at the moment. Do you see that following a pattern, like: if I start with X, then ask for Y, I end up with Z?

Rob Lennon

I don’t know if it’s so easy to articulate, but OK. So it started when I read a paper about something called chain-of-thought prompting. In chain-of-thought prompting, you’re basically teaching the AI how to think something through, and then you ask it to think something else through. The examples in these papers are often math questions, like: if Mary has six apples, and she gives three to Johnny, and then Johnny eats one apple, how many apples does Mary have? And then the prompt says: let’s think this through. Mary had six, but gave three away, so now she has three. It’ll show the logic to follow.

Speaker

OK.

Rob Lennon

And then you might prompt after that: now figure out this more complicated question, or a different kind of question. So that’s where I got the idea: how do we walk the AI through a chain of thought? Now, this is where it gets really interesting to me. Take marketing, for example. If you have any specialized knowledge, you have all these implicit ways of thinking that you don’t consciously know anymore. So when I think about marketing, I think about: who’s my audience? What do they care about? What challenges do they suffer? What’s preventing them from achieving their goals? I have a lot of thoughts like that, and I think, well, how can I help solve those challenges, and what’s the best method of communicating to this type of person? Are they busy? Are they on the go? Do they respond to visuals? Do they read? Do they like to be entertained? There are all these different things that go on in my mind that I don’t consciously do anything with; I just write a tweet or create some content intuitively. Well, I think we can take any kind of knowledge like that. If you have subject matter knowledge, you can investigate your own process: what are all the assumptions I’m making? What are the little things I’ve done a few times and don’t have to think about anymore for my own audience or customer or project, but that the AI has never done and doesn’t know and has no context around? And what if I ask it to walk through some of those steps first, to develop a very vivid picture of all the moving pieces, and then at the end ask it to actually do the thing I want it to do? The results are orders of magnitude more useful when you do something like that.
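The chain-of-thought pattern from the papers Rob mentions can be shown as a single few-shot prompt: one worked-through example, then a new question that invites the same step-by-step reasoning. The apples example comes from the conversation; the bakery question is my own illustrative follow-up.

```python
# A few-shot chain-of-thought prompt: the first Q/A pair demonstrates the
# reasoning style, and the trailing "Let's think it through." nudges the
# model to reason the same way about the new question.
cot_prompt = """\
Q: Mary has 6 apples. She gives 3 to Johnny, and then Johnny eats 1 apple.
How many apples does Mary have?
A: Let's think it through. Mary had 6, but gave 3 away, so now she has 3.
Johnny eating his own apple doesn't change Mary's count. The answer is 3.

Q: A bakery makes 24 rolls. It sells 15 in the morning and 4 in the
afternoon. How many rolls are left?
A: Let's think it through."""
```

Sending `cot_prompt` to a chat model typically produces the intermediate arithmetic before the final count, which is the behavior the technique is after.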

Craig Shoemaker

So what’s the difference between that and setting it up with a lot of context and information? Say you start off by saying: I’m a marketer, my target market is interested in dot dot dot dot dot, the concerns are these things. How are you having the AI reason with you through that, rather than just saying, here’s a bunch of stuff for you to set as your context?

Rob Lennon

So if you take any of those as a step, you can ask it to explain, or break down, or provide context, or expand on various ideas. I might say: my audience has these adjectives and suffers from these challenges. That’s still relatively simplistic; if you think of the complexity of the human experience, we’re not just a few adjectives and a few labels. So you could then say: break these challenges down. Why do they happen? What happens before they occur? What happens while they occur? What happens after they occur? We start to expand the boundaries of the context into the subtleties of a problem. Or: how does this happen? Who does it happen to? Who is impacted? Who has the money to purchase the solution? Once you start to go down this path, you can come up with a lot of questions. And here’s a technique: even if you don’t know what questions to ask, if you’ve previously asked some different kind of follow-up questions, you can ask the AI, how should we expand this topic or this idea in a way that will give you more context? Like, you tell me what we should do. You just need one good example, and it can infer all the new pieces of information it needs on a totally different topic.
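The context-expansion step Rob walks through can be sketched as a small question generator: before asking for the final output, you put each of these follow-up questions to the model to widen the picture. The templates mirror the questions from the conversation; the example challenge is my own, and this is an illustration, not an exhaustive method.

```python
def expansion_questions(challenge):
    """Follow-up questions that expand the context around one challenge."""
    return [
        f"Why does '{challenge}' happen?",
        f"What happens before '{challenge}' occurs?",
        f"What happens while '{challenge}' occurs?",
        f"What happens after '{challenge}' occurs?",
        f"Who does '{challenge}' happen to, and who is impacted?",
        f"Who has the money to purchase a solution to '{challenge}'?",
    ]

# Each question would be sent to the model in turn, and its answers folded
# into the context before the final task prompt.
questions = expansion_questions("losing leads to slow follow-up")
```

And per Rob's closing tip, once the model has seen one round like this, you can ask it to propose its own expansion questions for the next topic instead of supplying them yourself.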

Craig Shoemaker

Dude, this has been incredible. I’ve been trying to learn as much as I can about these AI technologies, using ChatGPT, following your stuff, and here we are in just a short, you know, 40-minute conversation, and I already feel like I’ve learned so much. So thank you for doing this with me. One of the things I like to do at the end of the show is just ask you for three actionable things that our audience could take away and run with. So I’m curious what your three might be.

Rob Lennon

So we might start with taking action on some of the prompt craft advice I was giving earlier. The next time you find yourself talking to a chatbot or something like that, instead of just asking the basic question, maybe try to apply one of these other strategies. Can you give it more context? Can you ask it to act as an expert? Or can you actually unpack the question you’re about to ask first, and then ask the question second? Just experience the difference. You could even take that a step further: see what happens if you just ask the question, then start a new chat and ask it in a different way, and realize that this is an emerging skill that can carry you for decades, potentially, if you start to learn how to ask these questions now. So that’s one.
The next one is: I think a lot of people are afraid right now that AI is coming for their jobs. We have this excitement around it, but a lot of people are not excited; they’re very worried. And I want to encourage people: if you see an area where AI is going to be able to do something that you previously could do, think about how you can increase the value you create for your customer or your firm, given that you don’t have to do that menial task, or that uncreative action, or that simple thing you used to have to do. I want you to flip the script on this narrative. I don’t think AI is coming for jobs; humans need jobs, it’s sort of part of how we work, but we also need to create value. So if you are unburdened by some piece of work, how can you create new value in your organization, take whatever skill you have, and apply it somewhere else? I think that will be really important moving forward.
So one of the reasons I took the leap into AI as an influencer, so to speak, was that I was worried about the sort of bad guys winning in this: people coming in and just churning out AI-written books with no soul and no heart, and uninteresting content, flooding the market and making it really hard to find the good books, or all the different things that can be done in excess. So if you are a so-called good guy, if you want to create amazing content, if you want to use AI to create things of interest, I want to be encouraging of that. And if you are thinking about trying to make a quick buck, like, I can make a million web pages with all of this machine-written content, I want to encourage you to rethink what’s going to happen when you do that, and when everybody does that, and whether that’s the world we all want to live in. I think we have an opportunity to make really cool stuff right now, but we also have an opportunity to kind of ruin a lot of nice things, and I hope that we do the first one more than the second.

Craig Shoemaker

Well, if we learn anything from history, we’ll probably end up doing a little bit of both. OK, when is the podcast coming out?

Rob Lennon

First week of April.

Craig Shoemaker

First week of April.

Rob Lennon

It may even be out by the time this episode airs.

Craig Shoemaker

It very well might be, yes. That’s great. Well, I hope everybody has an opportunity to check it out. And then you also have the AI Content Reactor, that’s the name, right? Did I get it right?

Rob Lennon

Content Reactor. If you go to mind meets machine dot AI, you’ll find links to my courses, my guides, my podcast. I’m adding a lot to that website every week; it’s kind of going to be the hub of all the interesting things I’m doing around AI, other than what you find if you follow me on Twitter or LinkedIn, where you get a taste of a lot of that as well.

Craig Shoemaker

Hey, thanks so much for being a part of the show. Let’s continue this conversation: feel free to connect with me on Twitter, where I’m @CraigShoemaker. So go out and have an amazing day. I hope you get a chance to find someone to love, find someone to forgive, and find someone to encourage, because we are most certainly not in this alone, and we’ll see you again here soon on the Leverage3 podcast.
