Unprompted: An AI arms race is raging in Silicon Valley—but what comes next?

AI is the talk of the town. You could even argue it’s about to overthrow the weather as everyone’s go-to small talk topic.

“You won’t believe the argument I got into with ChatGPT today.”

But this generative AI boom didn’t happen overnight. In this episode, co-hosts Pete Housley and Unbounce Vice President of Growth Marketing Alex Nazarevich welcome special guest Arun Sundararajan, Harold Price Professor of Entrepreneurship and Technology at NYU’s Stern School of Business. Together, they’ll explore the now-not-so-secret arms race that’s been raging in Silicon Valley for the past decade—and what the future might hold.

In this episode, we lean on Arun’s expertise and try to get answers to some big questions:

  • How did AI development by tech giants like Microsoft and Google get us to where we are today?
  • How will AI-generated content like deepfakes affect the coming US election?
  • How should marketers adjust their workflow to rise to the new challenges (and opportunities) AI presents?
  • How can AI help us humans break through barriers and do things we could never do before?

AI might be becoming small-talk-appropriate, but its mysteries are far from revealed. Listen to the episode or check out the transcript below.

Episode 6: Arms Race

[00:00:00] Pete: Hey marketers! Welcome to Unprompted, a podcast about AI marketing and you. But today’s episode is gonna be more about AI generally because we’re so fascinated with what’s going on in the world today. I’m Pete Housley, CMO of Unbounce, and Unbounce is the AI-powered landing page builder. I’ve got some news. We’ve reached a milestone, and we now have a listening audience of almost 7,500 listeners. Thank you all for tuning in as we build this movement and claim our space as marketing AI champions.

Today is episode 6, and we will be going to DEFCON one as we explore the AI arms race and unpack the history of AI and what led us to get to this incredibly exciting and definitely scary time. 

What started out as a fascination with ChatGPT and a few simple AI design tools has now turned into a fascination with one of the biggest changes in technology since the Industrial Revolution, which was characterized by the very same question: are machines coming for our jobs? As we have gone on this podcast journey through AI, AI marketing, and this AI tsunami, we seem to be getting more and more sophisticated in the content that we’re bringing to you, our audience. Today, we are bringing on one of the top subject matter experts in the world on the future of capitalism, artificial intelligence, platform-enabled change, antitrust policy in tech, and the digital future of work.

But I’m gonna keep this guest a secret for about 10 minutes while I introduce my co-host for today, Alex Nazarevich. Alex is Unbounce’s Vice President of Growth Marketing. With over 14 years of experience scaling brands via digital marketing and e-commerce, Alex is a self-proclaimed data nerd who loves digging into the numbers to drive results. Prior to working at Unbounce, Alex was VP of Growth at Indochino and has led growth at a number of organizations, including in the cannabis sector during the legalization of cannabis in Canada.

Something that I want to call out to our listening audience is that this role, growth marketing, and having VPs of growth marketing, is an emerging and very important part of today’s marketing stack. These leaders are highly accountable for results and can no longer hide behind pure vanity metrics. They’ve got to deliver revenue, performance measurement, analytics, forecasting, and so on. Alex happens to be one of the best data-driven marketers I’ve ever met and happens to lead a fantastic team here at Unbounce. Alex, welcome to the show, and what is on your AI mind these days?

[00:03:58] Alex: Thanks, Pete. Well, I am astounded by the big moves we’ve seen in the past few months. I mean, the digital marketing landscape is not today what it was six months ago, and I don’t think it’s a hot take to say that six months from now it’s not gonna be the same either. So what’s on my AI mind today is: how is AI adoption being fostered by our tech overlords, and how does this change our approach to customer acquisition and activation?

[00:04:25] Pete: Well, funny that’s on your mind, because that is exactly what we’re going to unpack today. We are going to be talking about the giants, and I might get your take before we talk to the real expert on the situation. Alright. So Alex, you oversee an entire performance marketing team.

[00:04:46] Alex: Mm-hmm. 

[00:04:47] Pete: And obviously I am nudging everyone to get on the AI bandwagon. So tell me a little bit about how your and your team’s habits and practices have changed over the past six months as it pertains to AI.

[00:05:02] Alex: Right now our habits are pretty much the same as they were prior to the release of ChatGPT and the AI onslaught; it just makes everything faster and easier. Like, if we need some new ad copy to test that’s kind of last-minute, AI to the rescue. If we want to whip up some code to make a forecasting model work better, or to troubleshoot the integration of a couple of programs that our leads flow through, usually we can get a pretty great answer from ChatGPT or similar.

[00:05:29] Pete: Alright, so that is great. Now I’d like to ask: on a scale of one to 10, where are you and your team on adoption of AI in your day-to-day life? And be honest, please.

[00:05:46] Alex: Um, okay. Well, if we’re really being honest, I would say that we are a 3 out of 10. You know, we’re embracing the parts that work for us. I think all of us use ChatGPT or similar, but it hasn’t actually altered our digital marketing, or growth marketing, in any fundamental way. I feel like we’ve barely scratched the surface, honestly.

[00:06:06] Pete: Well, and thank you for being honest. And I think to be honest, right now where we are, you know, as marketing departments across North America and the world, a 3 on the adoption curve is probably pretty high. And the next level of this is going to be when we are building systems and standards into our workflows where it’s scheduled and part of the day-to-day. Mm-hmm. And we’re going to get to the point where we start evaluating tasks. 

Should I do it manually? Should I use AI? Should I use a combination of both? This is definitely coming, and what I would expect is that all departments and companies will be evaluating AI and determining what aspects of their workflow should be augmented. And as I always remind our listening audience: if you’re not embracing AI, you might be replaced by someone who is. So please be curious about AI.

[00:07:08] Pete: Let’s talk a little bit about the next segment of our show. Generally what we try to do is bring our audience topics that are in the news; we follow the AI news, and we have all of our feeds.

But looming over all the media coverage have been the big giants. Meta and Twitter have now entered the race, and I just couldn’t help but love the media frenzy around Zuck and Musk in a cage fight; the literal interpretation as well as the metaphor have been super funny. But I know you’ve been studying AI, so tell us a little bit about your take on Meta and X entering the AI race.

[00:07:58] Alex: So maybe we can talk about Meta a little bit first. Meta is releasing Llama 2, and they claim Llama 2 is a fully open-source large language model. This is causing all kinds of waves in the industry, because if it’s truly open source, anyone can get into the code, and anyone can use it. Supposedly anyone can use it for commercial purposes, and the public, or journalists, can generally get in there and understand what’s actually going on underneath the surface. This is in direct opposition to OpenAI and ChatGPT, but it’s also kind of in opposition to how Meta, and Facebook before that, have conducted themselves in the public sphere for years. So it’s quite a move, don’t you think?

[00:08:40] Pete: I think it’s a great move, as we think about the sort of AI acceptance curve and some of the barriers on privacy, security, et cetera. So I look forward to finding out whether they deliver on that promise of open source or not. How about xAI, aka X?

[00:08:59] Alex: Yeah. So let’s compare that to Elon Musk’s xAI. xAI sounds straight out of a sci-fi movie. They’ve literally made their mission statement to understand the true nature of the universe: big on showmanship. They talk about how they’ve poached all these brilliant minds from leading AI and AI-adjacent companies, and at this point, they state that they’re actually gonna keep xAI separate from Twitter/X.

But I think with that recent rebranding, that’s probably not really gonna be the case. You know, it’s interesting. This is kind of full circle for Elon Musk, ’cause believe it or not, he was one of the founding members of OpenAI, and then he left OpenAI due to a conflict of interest, ’cause he was working on AI applications at Tesla. So this sort of marks a return to pure-play AI for Elon Musk. But again, it’s big on showmanship, big on flash, and kind of light on the actual details that underpin it. So two very different approaches.

[00:09:56] Pete: Well, if anyone is close to unpacking the universe, certainly SpaceX is.

[00:10:03] Alex: Yeah. 

[00:10:04] Pete: So, pretty excited by that. 

[00:10:06] Pete: Yeah. Alright. And then what do you know about ChatGPT? We’re relatively lay people; we’ve been following the news, and we’ve been studying it. What’s the scoop there, Alex?

[00:10:16] Alex: Yeah, so to my understanding, OpenAI started as a relatively non-commercial venture, really dialling in on the ethos behind AI development. And then ChatGPT represented a big swing in the commercial direction. OpenAI’s CEO, Sam Altman, went on record and said something to the effect of, “Right now it costs us less than 10 cents per ChatGPT query in infrastructure costs.” But the thing is, if you think about how many people are using ChatGPT and asking it things every day, that’s insanely expensive. So this is a pretty aggressive customer acquisition play to me. You can also see this in, uh, Bing’s AI chatbot as well, which also made a big splash when it came out, and I think put a little bit of a dent in Google’s market share. But then you can see that Bing is taking its AI chatbot to the two most popular internet browsers in the world: soon you’ll be able to access it through Google Chrome and then Safari. Which says a lot about how desperate the big players are.

[00:11:14] Pete: It’s funny: as you think about all of these tactics or strategies, the audience wins, right? Yeah. It really doesn’t matter how much you pay for that audience. Mm-hmm. If you have the lion’s share in the world and you have the eyeballs, the money will follow the audience. So let’s shift gears a little. That was super interesting, Alex. You’ve clearly been paying attention to the news, you’ve clearly done a bit of study on AI, and in my opinion, you sound wickedly smart on this topic, but you are not an expert on this topic. And I’m looking forward to introducing our guest and getting him involved.

So let me introduce today’s topic. On today’s episode, we’re exploring the competitive development of AI between the big tech companies like Microsoft and Google, because this generative AI boom did not happen overnight. We wanna discuss the history of the arms race that’s been raging in Silicon Valley for the past decade and what might come next. It is my sincere pleasure to introduce our guest today, Arun Sundararajan.

Arun is the Harold Price Professor of Entrepreneurship and Professor of Technology, Operations and Statistics at New York University’s Stern School of Business. That is a pedigree if I’ve ever heard one. His bestselling, award-winning book is called The Sharing Economy, published by MIT Press in 2016, and his research studies how digital technologies transform business, government, and civil society. Wow. Arun has been a member of the World Economic Forum’s Global Future Councils on Technology, Values and Policy and the New Economic Agenda.

Arun, how are you today? And welcome to our show.

[00:13:33] Arun: Thanks for having me. I’m really looking forward to this conversation. 

[00:13:37] Pete: Well, I hope you can dispel any myths that we created in our setup. With that in mind, I’m gonna jump right to the first question. Okay. Arun, can you bring us up to speed with a simplified timeline of how we got here? What are some of the key moments in the development of AI and in the tech giants’ establishment of this space?

[00:14:03] Arun: Absolutely. First off, I’m really excited to be here. This is a great moment to be talking about how we’ve gotten to this stage where the entire world is really interested in artificial intelligence, not just a bunch of computer scientists and, um, like, you know, smart business executives.

It’s also a moment where we have to think really carefully about what the broader long-run implications of the speed at which we are building new technologies are going to be. But, you know, I like to think of the evolution of artificial intelligence, at least the modern evolution of artificial intelligence, as having gone through four phases since the early 1990s.

So I mean, let’s put this in the context of a simple task. Let’s say you’re trying to build a system that is going to decide whether someone’s gonna default on a loan. Back in the 1980s and early 1990s, you would build what’s called an expert system. So you would go to a bunch of experts, people who knew a ton about assessing credit risk, and you’d ask them a bunch of questions. You’d sort of extract the knowledge from the human, so to speak. Then you’d aggregate this knowledge across, say, a hundred experts and put it into a box that you’d call an expert system. So all of those rules, so to speak, came from a human being.
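
To make the phases concrete, here’s a minimal sketch of phase one. An expert system is just hand-written rules extracted from humans; the thresholds below are invented purely for illustration.

```python
# Phase one, an expert system: every rule was extracted from a human expert
# and hard-coded. The thresholds are hypothetical, not from the episode.
def will_default(age: int, income: float, credit_score: int) -> bool:
    """Hand-written rules aggregated from (hypothetical) credit experts."""
    if credit_score < 580:
        return True
    if income < 25_000 and age < 25:
        return True
    return False

print(will_default(age=23, income=20_000, credit_score=640))  # -> True
```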

Now, in the 1990s (researchers were doing this a decade earlier), people started to say: well, let’s use a different approach. Let’s teach computers to learn these rules by themselves by giving them data. That was the dawn of machine learning, where, in this credit risk example, instead of giving the computer a bunch of rules, you would give it a million examples: half a million people who did not default on their loans and half a million people who did. And the experts would say, here are the features, or the additional pieces of information you want to give the system about these people: their age, their income, their work history, and their credit score. Then the system would come up with its rules on its own. Okay? So that was the dawn of machine learning.
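
And a minimal sketch of phase two: the experts still pick the features, but the model learns its decision rules from labeled examples. This sketch assumes scikit-learn, and the data is synthetic.

```python
# Phase two, machine learning: experts choose the features (age, income,
# work history, credit score); the system learns the rules on its own.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000  # stand-in for the "million examples" in Arun's telling
X = np.column_stack([
    rng.integers(21, 75, n),          # age
    rng.normal(60_000, 20_000, n),    # income
    rng.integers(0, 40, n),           # years of work history
    rng.integers(300, 850, n),        # credit score
])
# Synthetic labels: defaults loosely track low credit scores.
y = (X[:, 3] + rng.normal(0, 50, n) < 600).astype(int)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)  # the rules are now learned, not hand-written

applicant = [[34, 52_000.0, 9.0, 710.0]]
print(model.predict_proba(applicant)[0, 1])  # learned probability of default
```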

Fast forward maybe 10 or 12 years, and we entered the third phase, which was the deep learning phase. There, you have the systems not just figuring out the rules on their own, but figuring out the features, the characteristics of the people to use. It’s like you’d give the system a gob of data about the person, all of their browsing history, all of their search history, all of their social media, and the system would then construct features that human beings can’t understand. This actually started with image recognition, where the way a computer recognizes a dog is very different from the way a human being does. It constructs these sorts of patterns that we don’t have words for.
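
Phase three in miniature: the hidden layers of a neural network construct the features themselves from raw data. A sketch assuming PyTorch, with random stand-in data.

```python
# Phase three, deep learning: the network learns the *features* too.
# Shapes and data are stand-ins; nothing here comes from the episode.
import torch
import torch.nn as nn

raw = torch.randn(256, 1000)                  # a "gob of data" per person
labels = torch.randint(0, 2, (256,)).float()  # 1 = defaulted, 0 = repaid

model = nn.Sequential(
    nn.Linear(1000, 64),  # hidden layer: constructs features no human named
    nn.ReLU(),
    nn.Linear(64, 1),     # default / no-default score
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(100):  # minimal training loop
    optimizer.zero_grad()
    loss = loss_fn(model(raw).squeeze(1), labels)
    loss.backward()
    optimizer.step()
```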

And we are sort of entering the fourth phase now. While all of these earlier artificial intelligence systems were making predictions, in parallel there was a branch of AI called natural language processing that very dramatically deepened the ability of computers to analyze text. So you can think of at least the early generative AI, the large language models, as the coming together of the natural language processing dream and of these prediction systems, wherein a large language model is given a very simple task.

It is given a set of words, and its task is to predict the next word. But because we have so much computing power and we are giving it so much data, that simple task has led the large language model to be able, at this point in time, to converse the way a human being does. So, yeah, if you think about it: there was the expert systems phase, which was phase one. There was the machine learning phase, which was phase two. There was the deep learning phase, which was phase three. And now we are in phase four, which is the generative AI phase.
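
The training objective really is that simple. Here’s a toy illustration of “predict the next word” using bigram counts; real LLMs use transformers trained on vast corpora, but the task is the same.

```python
# Phase four in miniature: given the words so far, predict the next word.
# A toy bigram model over a toy corpus -- illustrative only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count which word follows which

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (the most frequent continuation)
```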

What’s really interesting is that over the last 20 years, it’s not like people have been coming up with fascinating new algorithms or methods that are completely different from what people were using in the past. What’s really happened is that computing has gotten cheaper, and the kernel of what a large language model uses was developed 40 years ago. Of course, there have been some advances, but fundamentally the real innovation here is cheap computing: you throw more and more computing power at the same set of methods, the same algorithms, and you eventually get to something like GPT-4, which is making these predictions so well that it seems like you’re talking to a human.

[00:19:24] Pete: I’m stuck on something you said a few minutes ago, which was that breakthrough where you could predict the next word. That sentence alone, to me, Arun, is so evocative of how it seems like the world is behaving. ’Cause not only does it predict the next word, it writes the next five sections and paragraphs and everything, essentially borrowing on that same principle. So for me, when you mentioned that, that was an aha moment, and thank you for putting it so simply. That was great. But I want you to continue where you started picking up, with GPT-4 and so on.

[00:20:03] Arun: I think people are often stunned by the blinding simplicity, in some sense, of the task that these large language models have been given. I mean, it’s made many of us in academia start to question how human beings create the sentences that they do. Are we actually just, in our neural networks, predicting the next word? Or do we actually know the sentence before we say it? And you know, if you try this exercise on yourself, you’ll realize that you’re not quite sure. Sometimes you pause and formulate your idea, but do you actually know exactly what the next word is going to be until the last word has been said?

[00:20:46] Pete: Well, and Arun, I feel like if I’m nervous in a public speaking situation, I have no idea what the next word is going to be. But if I’m relaxed, I’m quite confident. I’m assuming the computer doesn’t get nervous in these situations.

[00:21:00] Arun: Not yet. I mean, like, you know, we’ve certainly seen Grammarly emerge from this predicting the next word. Being able to pick up on the importance of a situation and get butterflies in its digital stomach, that we haven’t seen as yet. But the simplicity of this task also underscores something really important about the reliability of what comes out of a large language model. ’Cause my students often ask me, you know, why does ChatGPT sometimes get things wrong? To me, the more amazing thing is that it gets so much right. Because it’s not like a Google search engine, where it’s pulling information that a human being has prepared and retrieving it for you. It is actually making every single thing up; every word that comes out of ChatGPT or Bard or anything built on Facebook’s Llama is generated on the fly by the computer.

And so what that means is that, yes, it’s technologically fascinating, but there’s a very important need for human beings to pay attention to what has been created, because it’s not coming from some preexisting source. It is actually being created on the fly.

[00:22:21] Pete: You know, I’m, uh, I’m fascinated by the ideation possibilities that something like ChatGPT creates. You know, I think people who are looking for it to give you the exact answer are probably using it wrong. What we should be doing, at least in part, with ChatGPT and with other large language model-based systems is using them to come up with things that we might not have thought of, and then bringing the humans into the loop to take the good ideas to completion.

[00:22:54] Arun: Meta has some of the best AI scientists in the world. My colleague Yann LeCun, who’s a computer scientist at NYU, heads Meta’s AI research team, and they have been on the cutting edge of AI research. But the market reality today is that OpenAI has leapfrogged ahead of everybody else: OpenAI and Microsoft, with Google hot on their heels. And so I see Meta’s open-source decision as partly a philosophical decision (hey, it’s good to put the tools in everybody’s hands), but it’s partly also a competitive response, because when you are dealing with two powerful market leaders who have already established themselves, releasing a third closed-source system isn’t gonna make too much of a dent. And so for them, this is as much about ideology as it is about competitive strategy.

[00:23:53] Pete: That’s a brilliant take. So we now have Google, Microsoft, Meta, and a little bit of xAI thrown in. What is your take on what’s going to emerge in the next few years among those four players?

[00:24:10] Arun: Well, I think that there’s gonna be a tremendous amount of value creation from the Microsoft-OpenAI partnership, but also from Google and Meta. The scope of what we use digital technology for, the fraction of capital spending that is on AI, the number of things that aren’t being done purely by humans but are being done through human-AI partnerships, is just going to expand significantly. And so I see generative AI as being a huge growth opportunity for all three of these players.

You know: Microsoft, Google, and Facebook. A lot of people wonder why Google wasn’t first to market. In many ways, it also has been hiring some of the best AI minds in the world. It’s been pouring billions of dollars into AI research for a couple of decades. It was the first company to get a self-driving car on the road. So clearly, its AI chops can’t be questioned. And, you know, I think there are a couple of reasons why we didn’t see a ChatGPT-like product from Google, and neither of them is technological.

One is that Google is a trillion-dollar company with 2 billion users, and so it has a more cautious approach to releasing new, untested products. In some ways, there’s a much greater level of brand risk. There’s also a much more active conversation within the company about the risks associated with AI. And so in some ways, I feel that OpenAI’s rush to release ChatGPT has pushed Google into releasing Bard when it may not have been ready for prime time.

[00:26:10] Pete: What is actually at stake for these companies? Like, could it be winner-take-all? What is your take on what is at stake?

[00:26:19] Arun: Well, I think that there are a couple of different things that are at stake here. The simple thing that’s at stake is wallet share of corporate spending on digital. There are tons of applications that generative AI has, in both the consumer segment and the business-to-business segment, from being a customer interface to actually taking over some of the tasks that human beings used to do.

You know, in our economy, trillions of dollars of the GDP are services and information, so the simple competition is to get that corporate spend. But there’s also sort of a deeper competition underway here, ’cause I get the feeling that through generative AI we are fundamentally changing how we as humans get information. ’Cause if you think about it: before there was widespread literacy, before there were books, like hundreds of years ago, the way you got information was either you worked it out on your own, in your head, from what you knew, or you asked someone. And then, for a few hundred years after that, books and other written materials started to be an authoritative record of something. So you didn’t have to figure it out on your own or ask someone else; you could go to that non-customized source. If you were trying to figure something out about building, you could read a text about how to build in general and then sort of fill in the gaps yourself.

And in many ways, the internet put this on steroids, because it wasn’t just encyclopedias and libraries anymore. It expanded the set of pieces of information you could access dramatically, but it was still information created by someone else. Now, generative AI is taking us into this sort of uncharted territory where you have an expert on demand who is working it out for you on the fly and giving you that information.

It’s not like a search engine, where it’s giving you someone else’s pre-created information. It’s almost like a third form of getting information, in a very fundamental way. It’s not ask an expert; it’s not figure it out on your own. It’s someone else figures it out for you in an incredibly customized way.

[00:28:53] Pete: Well, and that begs a huge question for us as marketers. For 20 years I have been a content marketer, trying to make sure that my industry keywords and my content and my SEO best practices are there so that on a search, customers with any intent for my category are going to find me. None of us knows how to deal with what’s gonna happen in a ChatGPT world, where we can’t necessarily influence the results. And so, as marketers, we don’t know what to do about the new way people are consuming information.

[00:29:29] Arun: Yeah. And this is gonna pose a really difficult challenge for the companies that are building these generative AI systems. Because if a brand is represented in a particular way, for example by ChatGPT, the people who own that brand are certainly gonna approach Microsoft and OpenAI and say, hey, either this is not representing me right, or we’d like to be represented in this way. And then it’s up to OpenAI or Google whether they’re going to refine their guardrails to tailor the content to be aligned with what the brand wants. But if they do that too much, then the beauty of the generative AI system, the charm, starts to be lost. And so it’s sort of a fine balance that they’re going to have to strike.

[00:30:18] Pete: So we talked a little bit about, you know, Microsoft and ChatGPT, and Google has deep pockets and is coming back. Do you think Google will take over the lead here, or what? What’s your crystal ball tell you?

[00:30:35] Arun: You know, if it was just OpenAI competing with Google, my prediction would be that Google would eventually catch up. But since it’s OpenAI in partnership with Microsoft, a company that has the scale that Google does, I think OpenAI’s lead is going to persist for a while. Google’s also got a couple of other things that cause it to approach generative AI with some caution. It’s got an existing search advertising business that is generating hundreds of billions of dollars for it.

This search advertising model rests in part on Google being an intermediary rather than being the actual publisher of what we get when we search. This is where Section 230 comes in, in the United States, where Google is an intermediary that has limited liability for, like, what people get through Google. And if Google throws away its search model and replaces it with a pure LLM generative AI model, it is potentially putting itself squarely into the role of being a publisher. And I’m not saying that that’s necessarily the wrong strategy, but you can see how hundreds of billions of dollars of revenue are at stake for the company. Yeah, it’s very conflicting.

[00:31:59] Arun: And you know, in many ways, this has been incredibly empowering, right? For tens of millions of small businesses, they’ve got access to global marketing sort of at the click of a mouse. There’s also some, I guess, irony in the fact that part of what might be holding Google back here is that they are in a fairly strong position in an existing market, right?

They’ve got 2 billion-plus search and YouTube users. They’ve got three antitrust cases pending against them. And so, if they rushed into launching a large language model-based system, the thinking within the company might have been: do we really wanna draw further attention to the charge that we are extending our market power from search into a new domain? And the reason why I think it’s ironic is because I think this is precisely what held Microsoft back from entering the mobile operating system market aggressively 20 years ago. ’Cause they had just come out of this huge antitrust case. They had the world’s best OS engineers, some of the best researchers in the world, but something prevented them from building the standard for the smartphone. It was probably: hey, let’s be cautious. And then along came Apple with the iPhone OS and Google with Android, and, uh, so yeah, I guess the tables are turned now.

[00:33:29] Alex: Yeah, Arun, you’ve just so clearly summed up the fascinating problem faced by Google. So where on earth is AI going from here?

[00:33:39] Arun: Well, I would love to make concrete predictions about what AI can do in a year and in five years. But anybody who makes concrete predictions about exactly what the capabilities of AI are going to be even six months from now probably shouldn’t be relied on, because the top large language model people, the top generative AI people, don’t have a very precise understanding. Even the top people don’t have a very precise understanding of how their systems are doing what they’re doing. We know the technology that is generating the images that Midjourney and DALL-E are generating. We understand the basic process by which ChatGPT is generating that next word. But take the fact that GPT-3 learnt grammar and was generating these words in a grammatically correct way: it was not possible for any of the engineers at OpenAI or Google to predict exactly when that capability would emerge.

So we’ve sort of entered a new phase where we don’t have sentient AI yet, but we’ve certainly entered the phase where we’ve got AI whose capabilities are emerging on their own. And so I’m certain that there are going to be significant improvements in the accuracy of what large language models generate.

I think that there are going to be great strides in the sophistication with which AI can generate short and full-length video, all of which poses problems for artists and creators. Normally, I make predictions about, you know, what’s gonna happen in 20 years, and I figure either I’m right, and I can go back and point to it, or I’m wrong, but we would’ve forgotten about it.

But in this case, you know, I’m sort of being cautious because I understand the technology well enough to know that you can’t predict what its capabilities are gonna be.

[00:35:46] Pete: Well, and it’s interesting; I mean, you’ve taken us on a journey of, really, you know, 30 years. But the world has just changed so quickly in the last six months, so obviously that was the tip of the iceberg, and I am really wondering what the next six months will be. So, as a bit of a segue, ’cause you did talk about some of the societal and cultural impacts:

Let’s shift gears a tad here, and let’s talk about some of the risks or negative consequences of AI, whether that’s cultural, societal, or environmental. I’m not sure what the right buckets are, but I’m sure you’ve pondered this.

[00:36:27] Arun: Yeah, and you know, I’m encouraged by the fact that the conversation around AI risk and AI governance is so robust. But you know, some of these risks are well discussed. I mean, certainly, as we throw more computing power at AI, there’s the environmental impact. It’s unclear to me that this is going to be the primary risk associated with AI, but it’s certainly one. I think that generative AI’s capability to create really sophisticated misinformation, really sophisticated deepfakes, poses risks to the stability of democracies and to our ability to trust information in general. And there’s a fraction of experts who believe that it poses an existential risk to humanity.

[00:37:16] Arun: People have talked a lot about Meta being open source and Google and Microsoft-OpenAI being sort of closed source. A related challenge is whether we should designate certain kinds of AI as sufficiently high-risk that they can only be run on a licensed server or a licensed cloud. A lot of people call this KYC, know your cloud, and, to me, this is the only way to prevent certain things from happening. You can’t prevent the creation of deepfakes unless you know where the software is running. The trade-off, of course, is that, you know, with open-source and unlicensed technology, you’re gonna end up with far more innovation.

So it’s sort of that usual innovation-versus-regulation trade-off. Another risk that isn’t getting a ton of airtime is us as human beings being able to own our creative process. Because of the way that these systems work, a system that generates music isn’t just, of course, trained on all the music that’s ever been generated; it can very easily be customized to generate new content in a particular style, in a particular voice.

[00:38:40] Arun: You know, many of us have gone to ChatGPT and said, “Write me a poem in the style of,” or “Write me an essay in the style of,” and if you think about it, the place that we’ve come to is where we need protection over not just the works that we create as people.

You know, if a musician composes a song, the song is protected. If a company creates a particular piece of content, they own that content. If a novelist writes a book, the book is protected. But that same intellectual property regime isn’t protecting the creative process. And so, to me, a big risk of generative AI is that it’s gonna commoditize human creativity.

It’s going to take away people’s ability to, for lack of a better word, own their intelligence. You can think of a really crack business development person who leaves a company, and then this company sort of replicates that person’s process for everybody else. And so they’ve sort of left some of their human capital behind. So we may need to expand intellectual property law so that I own my AI, or that you own your artificial intelligence.

[00:39:55] Arun: I often think back to the early days of the internet, when Napster came out and we were sharing music. Today, we are not sharing music; we’re sharing musicians. You know, we’re not downloading art; we’re downloading artists. And so it’s sort of a scary moment where we have to decide, as human beings, how much of our intelligence we wanna claim for ourselves and how much of it we wanna just cede to this collective intelligence that is generative AI.

[00:40:26] Pete: Well, and as we think about us as human beings, can we go to the topic of humanity? ’Cause obviously, it’s profoundly provocative when you say there is a risk to humanity. We would love to hear your thoughts.

[00:40:41] Arun: I guess there are two points of view on this. I mean, one point of view is that these fears are overblown, and the only reason why we’re worried about AI taking over the world is that we’ve given it the label artificial intelligence. ’Cause fundamentally, inside these boxes, there are complex algorithms and mathematics going on, you know, optimization going on. We happen to call it AI, and so we worry about it taking over the world.

On the other hand, you have millions of people playing networked video games with a ton of AI built in there, and in some ways, the intent that these human beings are expressing to the gaming systems is violent, but nobody’s worried about, you know, the latest video game taking over the world. So that’s one point of view: these fears are overblown; it’s just about a label. The more worrying point of view is that, as AI systems get more and more sophisticated and start to generate their own artificial intelligence, it’s not like they’re gonna, you know, rise up one day and say it’s time to wipe out humanity. It’s just that we may end up being an inconvenience.

You know, if you think about the history of humanity, we didn’t set out to wipe out a whole bunch of species. We just sort of maximized our own progress, and they sort of fell by the wayside as a byproduct. And so if some AI system suddenly decides that, you know, space travel is a high priority, this pesky atmosphere is getting in the way, you know, let’s sort of get rid of it and then we can dramatically accelerate space exploration.

That’s the kind of scenario where I think a lot of people say, hey, you know, maybe there are safeguards we need to put in place to make sure that we are not an extinct species.

[00:42:35] Pete: Well, I mean, speaking of the atmosphere, I do worry about climate change right now and what’s going on in the world today, and I wonder whether AI could help solve that problem or not. I’m sure that’s probably a use case that someone’s working on.

[00:42:50] Arun: Oh, absolutely. I think that part of the reason why there’s a lot of conversation around the need to slow down AI research, and around the risks of AI, is that the race is happening without a clear objective. You know, a student once asked me in class, when I was talking about the AI race, uh, he said, so professor, what’s at the finish line? He was trying to figure out what these companies are racing towards.

I mean, like, you know, why do they wanna build a system that speaks like a human or writes like a human? And we are at the point now where we don’t know what the eventual use of these AI systems will be.

Once there’s a demonstration that the successors to generative AI can actually make a dent in curing incurable diseases, in developing new drugs (I think there’s some progress being made there), in addressing climate change, you know, I think that’s when the value of the innovation is going to become clearer. Humanity has generally borrowed from the future, in some sense. You know, we invested in petrochemicals, and now, a hundred years later, our generation is paying the price.

[00:44:09] Pete: Sad about it.

[00:44:10] Arun: Yeah. It’s the same thing with plastics, with drug discovery. And now with deep learning: we don’t know how these systems are working; they just work incredibly well. So we keep making them faster and faster and more powerful, but we don’t know exactly what the costs are going to be in the future. But I’m generally of the view that addressing climate change isn’t just something that is gonna be addressed by AI. It’s gonna be addressed by an increasing fraction of the workforce. And, paradoxically, it’s probably gonna dramatically grow the world economy, because there are these massive new problems that human beings have to solve.

[00:44:48] Pete: So it seems like this gets us talking about the ethics of AI. Depending on your viewpoint, either AI’s gonna be, you know, humanity’s best sidekick, or it could just wipe us all out on a whim. Can you give us a crash course in ethics and AI and how those two interrelate?

[00:45:04] Arun: Well, some of the earliest conversations about AI and ethics came up in the context of what is known as the trolley problem. It’s an old philosophy problem: do you take an action that causes some harm, or do you not act and allow greater harm? And there’s no correct answer to that. But when people started building self-driving cars, that was front and center in AI ethics, right?

How do we imbue the right values into these artificial intelligence systems? There was a similar discussion around embedding AI into weapons systems, and that’s been going on for a while as well. I think what the last five or six years have brought to the fore is that, because of the way new AI systems are created, by machines learning from data, if there was bias in the data, in the decisions that are reflected in the data, then the machine learning system might be biased as well. Not malevolently, but because it was trained on biased data. That is still a very important part of the AI ethics conversation.

But I think with generative AI, we are entering a deeper ethical conversation about what human beings should own about their intelligence. How fast should we allow technology to progress when there is a very real risk that it’s gonna very rapidly displace the demand for a lot of human employment? Do we slow down so that we put in place a transition process? So, you know, what used to be a very simple conversation has suddenly become very complex and multifaceted. And it’s not the philosophers, as it turns out, who are leading the AI ethics conversation. It’s the computer scientists.

[00:47:04] Pete: It’s the heads of the giant tech companies in the first place who are saying we need to be regulated, for fear of the outcomes. And that’s just a scenario that we’ve not seen before. Normally, big businesses lobby the government to stay away, uh, to lower taxes. And, I think, you know, some of the news that we’ve covered recently is Congress’s involvement, or the White House’s involvement, in trying to get ahead of regulation. They know they need to do it, and they know they need to do it now, but as we said, the genie’s outta the bottle here. Will they be able to put in some of these regulatory guardrails to mitigate some of the risks or ethical concerns?

[00:47:45] Arun: Yeah. I think part of why the tech companies are calling for regulation is that they realize that the governance issues around artificial intelligence, or the risks around AI, can’t be completely solved using technology; there is no complete technological solution. We’re gonna have to use the law as well. Like, some things simply have to be illegal.

You can’t prevent, technologically, for example, deepfakes from being created. You just have to make it illegal, and then you need some sort of regulation to implement and enforce that, and maybe some sort of new government agency that helps interpret existing laws in the AI context. You know, where is existing copyright enough? Where do we need new laws? And so I think part of it is also self-preservation. You know, things are gonna go wrong with AI, and, uh, if the world believes that something going wrong meant that the tech companies didn’t do enough to prevent it, that’s not good for the tech companies.

[00:48:54] Pete: Well, and in the context of all of this, and deepfakes and misinformation and ethics, I do wonder what the role of AI will be in election campaigning. Right? We’ve had, you know, election results deniers front and center for the better part of two years in the news, and we are just, you know, going into a whole new election period. And I just wonder what the role of AI is gonna be in all of this.

[00:49:20] Arun: There’s certainly going to be a much greater flood of misinformation in this election cycle than we had in 2020 or 2016, and it’s going to be far more sophisticated and targeted. I think that generative AI isn’t just good at creating misinformation but at generating it in a way that is customized to each individual.

You know, in some ways, the same way that marketers are using generative AI for lead generation, to customize those cold email messages, to customize the content of advertising, the same thing will be brought to bear, like, you know, in the creation of misinformation. And so I think that there’s a growing recognition that, over time, the only solution to misinformation is end-user education. You know, you can’t create laws; you can’t create technological guardrails that will prevent misinformation. You just have to educate people to recognize it.

[00:50:24] Pete: A hundred percent. We have just a few minutes left.

[00:50:26] Pete: Let’s do a big transition back to the concept of the original show. Of course, Unprompted is a podcast about AI marketing and you—so how do you take all of that big information on how the world is changing, and what would you advise marketers to be thinking about or doing? Let’s take that right down to the marketing level now.

[00:50:54] Arun: In many ways, I’m glad that we’re talking about the intersection of generative AI and marketing rather than, say, finance or accounting or operations, because the manifestations of generative AI that we have seen are very much in the creative and messaging space. And so there’s a very logical overlap with what marketers do.

So, you know, one of the obvious ways in which marketers should be using generative AI, if they’re not already, is in customizing their content more precisely to suit the needs of their customers. I think that using ChatGPT to generate potential marketing content, or potentially what goes on the side of the cereal box, is helpful, but that’s not really tapping into the power of this system.

You know, you can’t hire 10,000 people to customize a message to 10 million customers, but you can use generative AI for that at a very small fraction of the cost. And so marketers should be thinking about expanding, rather than substituting, what the human content creators do. I also think that there’s tremendous potential for intelligent summarizing, which then gives you insight into your customers. If you’re trying to gain more insight into what kind of message is gonna work for this person, and you have a whole bunch of content that they have responded to, I mean, like, you know, you can use summarization capabilities to understand your customers better.
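
For readers who want to experiment with that kind of one-to-many customization, here’s a hedged sketch using OpenAI’s Python client. The model name, prompts, and customer fields are illustrative assumptions, not anything prescribed in the episode.

```python
# A sketch of Arun's point: customizing one message to many customers with a
# generative model instead of 10,000 copywriters. Assumes the official
# `openai` package and an OPENAI_API_KEY in the environment; the model name
# and all data below are hypothetical.
from openai import OpenAI

client = OpenAI()

customers = [
    {"name": "Dana", "interest": "landing page A/B testing"},
    {"name": "Sam", "interest": "ecommerce conversion tracking"},
]

for c in customers:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "You write short, friendly marketing emails."},
            {"role": "user",
             "content": f"Write two sentences for {c['name']}, who cares "
                        f"about {c['interest']}."},
        ],
    )
    print(response.choices[0].message.content)  # one customized message each
```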

[00:52:37] Arun: That having been said, I still think we are in the experimentation phase. I think it’s risky for a brand to completely cut human beings out of the loop at this stage. And so I would say: take small steps forward, raise the stakes gradually, and make sure that you, as a marketer, don’t lose your voice or your authenticity because you’re relying on gen AI tools.

[00:53:05] Pete: Well, or lose our jobs, for that matter. So what we wanna do is lean in, be more effective, expand our capabilities, but drive and still be creative and strategic and successful marketers.

Alright, we’re at time today. That was an incredible journey. Thank you so much to our guest, Arun Sundararajan. You were the hit of our series. 

[00:53:36] Arun: Oh, thank you, Peter and Alex. This was really fun.

[00:53:39] Pete: Thank you for being such an elegant guest today. 

[00:53:43] Arun: Thank you.

About Banafshe Salehi
Banafshe is a writer and creator who loves long walks on the beach (kidding?). When she's not selling you on her puns or her pop-culture analogies, she can be found at the busiest intersection in her city with her headphones. Which are totally not falling apart.