AI for Humans

Claude Is Melting Down. AI's Compute Crisis Explained.

The AI compute crisis is here. Anthropic's Claude is getting dumber, and Opus 4.7 & OpenAI's Spud are about to make it worse. What happens next?

This week on AI For Humans, we dig into the AI compute crunch that's quietly becoming the industry's biggest problem. The Wall Street Journal just ran a cover story about AI using so much energy that computing power is running out. Claude users are noticing the model getting worse, and an AMD Senior AI Director confirmed it. AI pundits are asking whether Anthropic's reluctance to release Mythos is really about safety or about not having enough compute.

Meanwhile, Opus 4.7 is reportedly coming next week, rumors are swirling about a new OpenAI model dropping any day, and Uber already blew through its entire annual AI coding budget. Anthropic just signed a deal with Google and Broadcom for more compute. Greg Brockman published an essay on the compute-powered economy.

Plus, Google DeepMind drops its Gemini Robotics-ER 1.6 reasoning model, Steven Soderbergh comes to the defense of AI filmmaking, Diplo says you can't fight AI and the internet has opinions, and Ray Kurzweil says we'll soon just accept AIs as conscious because it'll be useless not to.

AI IS RUNNING OUT OF POWER. CLAUDE IS GETTING DUMBER. WHAT COULD GO WRONG?

Come to our Discord: https://discord.gg/muD2TYgC8f

Join our Patreon: https://www.patreon.com/AIForHumansShow

AI For Humans Newsletter: https://aiforhumans.beehiiv.com/

Follow us for more on X @AIForHumansShow

Join our TikTok @aiforhumansshow

To book us for speaking, please visit our website: https://www.aiforhumans.show/

 

// Show Links //

WSJ Cover Story: AI Is Using So Much Energy That Computing Power Is Running Out

https://www.wsj.com/tech/ai/ai-is-using-so-much-energy-that-computing-firepower-is-running-out-156e5c85

Data on Claude Getting Dumber: AMD Senior AI Director Confirms Nerfing

https://x.com/Hesamation/status/2042979500103815306?s=20

Opus 4.7 Coming Next Week (The Information)

https://www.theinformation.com/briefings/exclusive-anthropic-preps-opus-4-7-model-ai-design-tool?rc=c3oojq

Anthropic's Compute Crunch (The Information)

https://www.theinformation.com/newsletters/ai-agenda/anthropics-compute-crunch-strikes?rc=c3oojq

Claude Code Routines

https://x.com/claudeai/status/2044095086460309790?s=20

New Claude Code Desktop App

https://x.com/felixrieseberg/status/2044128194647994585?s=20

Uber Blew Through AI Coding Budgets for the Year Already

https://www.theinformation.com/newsletters/applied-ai/uber-cto-shows-claude-code-can-blow-ai-budgets?rc=c3oojq

Greg Brockman Essay on the Compute-Powered Economy

https://x.com/gdb/status/2043831031468568734

OpenAI's Reckless Spending and Datacenter Cancellations

https://x.com/firstadopter/status/2043009456103993426?s=20

Ray Kurzweil on AI Consciousness

https://x.com/newstart_2024/status/2043716604442128460?s=20

Gemini Robotics-ER 1.6 Blog Post

https://deepmind.google/blog/gemini-robotics-er-1-6/

Steven Soderbergh Defends AI Filmmaking

https://variety.com/2026/film/news/steven-soderbergh-the-christophers-star-wars-ben-solo-movie-controversial-ai-comments-1236713201/

Diplo: You Can't Fight AI

https://youtu.be/zFVpJFFN3dI?si=C6_0MxmNzSHCdrds

PI Hard: Fun AI Video

https://x.com/aiordieshow/status/2044044721459265557?s=20

 

// Transcript //
Gavin Purcell: [00:00:00] Big AI models from Anthropic and OpenAI are coming. We know this, but will anybody be able to use them? OpenAI's new Spud model might come out this week. Anthropic's Opus 4.7 is actually right around the corner, but Anthropic right now, today, is already struggling to serve its current models. Claude feels downright useless to some people right now, and this could all delay a wider release of the big daddy model that is Mythos.
I like to think of Mythos as mommy, Kevin. Okay. What is Mythos wearing? Gavin? No, don't even start. Don't even start. Oh, okay. That was a bridge too far. Fine. Google's also got a new model that's going to help robots think. Plus, director Steven Soderbergh and Diplo come to the defense of AI filmmaking and AI music.
You're not gonna win. Like there's no, there's no, like, fighting AI. Hmm. And oddly enough, zero people on the internet had anything to say about that quote. Thankfully. This is AI for haters and humans. [00:01:00]
Welcome. Welcome, everybody, to AI for Humans, your twice-a-week guide to the biggest news in the world of AI. And Kevin, this week we have a really interesting story, which is: we are all excited about Claude Mythos, first of all, a magical model that exists but that we can't use. We're excited about that. We're also excited about the idea of OpenAI's new model, which has continually been teased, and the Codex team is, like, dropping vague posts everywhere about how exciting the next version of Codex is.
Kevin Pereira: Oh, and a big new app. Yeah, and a big new super app. Uh, just today we got some information we're gonna talk about, about Opus 4.7, which might come out sooner than Mythos. But can I tell you a quick story about someone who's not that excited? Can we talk for a second?
Yeah, sure. Sure. Tell me, and maybe I'll be the voice for a handful of the people in the comments. Hashtag algo juice. Thank you for that. Um, big new models are great, yes, but not when you have to kink the garden hose so that you can save enough juice, enough sweet, [00:02:00] precious compute liquid to serve the things that you've already got.
Gavin Purcell: Like, the traditional cycle is: big model comes out, they give it all the compute in the world, the benchmarks look huge, people sign up, they go in droves, oh my god, insert company A, B, or C, whatever the variable is, they're the leader today. Yes. And then the usability, the usefulness, the actual benchmark scores slowly degrade as people adopt it and come online. We are in the trench right now.
Yes. This is the gully. This is trench warfare, where if you use Opus as a daily driver, as I did, past tense, you know that it just feels less capable, and it's demonstrably worse, probably as they save compute and get ready to serve the next thing. Well, that's exactly what we're gonna talk about today: this idea of compute and how it feels constrained already based on what we're doing.
And this is all based on a pretty big story where, both anecdotally, you and I have felt this exactly as you just said, and then there are actually people who are trying to do the [00:03:00] proof online. Yeah. The fact that Opus 4.6 has quote-unquote gotten dumber, and what that would mean, essentially, is that conceivably, and again, some people have proven this, we'll show you a couple of the tweets where people have gone in to prove it online, it is using less thinking time.
And the reason for this in particular is that suddenly Anthropic went from, like, this level of usage to a much higher level of usage, for a variety of different reasons. 4.6 was very good. Also, we had the Katy Perry moment of the Anthropic switchover, whatever you wanna call that. The, uh, the flip or the, the trip, I don't know what you wanna call it.
It was that OpenAI, you know, kind of sided with the government. So what's been happening, Kevin, is it has been breaking down a lot. Oh. Claude has been having a lot of moments where it's not working very well, but yeah, also they are compute constrained. And maybe for the listeners out there who are not up on every single thing:
Kevin Pereira: What does being compute constrained mean for the average person? If you were gonna describe that and define it, what would it mean? It's literally how [00:04:00] much power, uh, you are giving the model to reason, to think, to solve your problem. You can look at it as time or power. They're kind of intertwined here.
Gavin Purcell: And you go from, like, yep, full power, push the turbo button on the tower, down to, like, incentivizing people to use it on nights and weekends. Like the old cell phone tower congestion rules apply, right? Yeah. The reason we had that back in the day was, uh, margins, but also congestion.
Too many people were trying to make phone calls during certain hours or send text messages, and now, yeah, we're seeing that with compute. So, you know, the token is sort of a unit of measurement for thinking here, if I can, I'm really trying to distill here. Ooh, okay. You can look at the amount of reasoning power given as how long a model is permitted to think, and how many tokens it's allowed to use, to solve any given query or any given set of problems, and people are looking at it. An AMD Senior AI Director confirmed that Claude, [00:05:00] uh, if we look at logs from January to even March, the amount of tokens used in thinking about basic queries went from thousands down to hundreds.
It was, like, cut in half. And so they're literally saying the amount of power, the amount of compute we're going to give you to solve any given task is going to be crunched, because we probably have too many people using it, and we're also probably gearing up to serve these other bigger, better models.
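To make the throttling idea concrete, here's a toy sketch in Python. Every number and name here is made up for illustration; this is not Anthropic's actual scheduler, just the shape of the behavior being described: under light load each query gets a full thinking-token budget, and past a capacity threshold the per-query budget shrinks with demand.

```python
# Toy model of thinking-token throttling under load.
# All numbers and names are illustrative, not any provider's real policy.

def thinking_budget(concurrent_users: int,
                    max_budget: int = 4000,
                    min_budget: int = 200,
                    capacity: int = 1000) -> int:
    """Return the thinking-token budget granted to one query.

    Below `capacity` concurrent users, every query gets the full budget.
    Above it, the budget shrinks in proportion to the overload, but never
    drops below `min_budget`.
    """
    if concurrent_users <= capacity:
        return max_budget
    scaled = int(max_budget * capacity / concurrent_users)
    return max(min_budget, scaled)

# Light load: full reasoning budget ("thousands" of thinking tokens).
print(thinking_budget(500))    # 4000
# Heavy load: the same query now gets far fewer thinking tokens.
print(thinking_budget(20000))  # 200
```

Under a rule like this, nothing about the model's weights changes; the same question simply gets less reasoning time at peak hours, which is exactly what "it got dumber" would look like from the outside.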
Well, and that's what I was gonna say. The other part of this is, when we talk about Mythos: when the Mythos thing came out last week, we were discussing this kind of big bad idea that the reason they didn't release Mythos was because they specifically said, we don't want this to go out into the hands of people.
It is too dangerous. It is going to cause massive issues with the internet because people are gonna find bugs. There have been a lot of people lately who are suggesting, and again, these are people that are suggesting, but some people as smart as Ben Thompson from Stratechery have said this, that perhaps this is their way of [00:06:00] not having to serve a massively large model.
Right, right. And Mythos is, again, probably a trillion-token-trained model. And if you remember way back when, with GPT-4.5, how slow that model was. And it wasn't a great model. It didn't do what OpenAI wanted it to do. But these larger models often do take more compute to serve. So if you suddenly have a model that is super capable, and everybody wants to use it, and it's bigger, that's gonna take more compute. And Kev, all this lines up with this data center conversation that's been happening as well, which is about how much power and how much actual processing these models have. And you and I know, as we talked about in our last show, the, uh, We All Hate AI show, which you should go back and watch 'cause we did some deep dives on this: as fewer data centers come online, or if data centers that were planned are not coming online,
the actual impact of this is gonna be pretty significant. And the thing that I keep thinking about with this compute issue is that it starts to exacerbate that have-or-[00:07:00] have-not scenario that we also discussed in that show, because it's going to get more expensive. There's nothing I can tell you more clearly than that at some point there will be, as you mentioned a couple shows ago, a $2,000 tier where you always have access to the best compute.
Kevin Pereira: Yeah. And, like, that is coming in a big way, I feel like. Well, Tae Kim, aka firstadopter over on X, said it's obvious that Anthropic vastly underestimated compute growth needs, which are expanding much faster than expected. And he kind of shines a light on how everybody was shading Sam Altman, who was out there raising billions and billions of dollars to build these massive data centers, and everybody else was going, ah, you don't need that.
Gavin Purcell: Open source, local models, all that stuff's gonna go in there, scale isn't enough. Is he looking so silly now, Gavin, in your eyes? No, I don't think so. And this is really interesting, because, like, there's just kind of a weird story where the Uber Chief Technology Officer said that they had already blown through what they budgeted for the year for [00:08:00] AI compute.
So, like, yeah, I think this is going to be, like, the gold, the oil, or whatever. And people have talked about this. Greg Brockman wrote a long thing about how this is the compute age, but we are entering this phase where, like, that will be everything, like, access to this compute. And Anthropic, you know, we talked about Dario Amodei being on the Dwarkesh podcast a couple weeks ago.
He specifically said that we are being a little more conservative with our spending, that we don't believe we wanna kind of overspend before we get to this level. Are there more gains, like, substantially more gains, from buying a trillion dollars a year of compute versus $300 billion a year of compute, if your competitor's buying a trillion?
Yes, there is. Well, no, there's some gain. But then again, there's this chance that they go bankrupt before, uh, you know, again, if you're off by only a year, you destroy yourselves. Well, now Anthropic is trying to catch up in terms of how much actual compute it has and how much energy it's spending [00:09:00] on this.
They did just sign a new deal with Amazon to serve a bunch more stuff, but, like, I think we're gonna see in the next, say, three to six months, a real shift. And I would not be surprised with OpenAI's Spud model, which, you know, is supposedly coming out later this week, and we'll have more on that in the next episode.
Hopefully, if OpenAI doesn't just say, go with God, go use this. Because if they have the compute and they're able to serve it, yeah, do you know how fast I will jump back over to OpenAI from Claude? I will do it in a second. Sure. Because I was working for multiple hours yesterday. Weirdly, it was a little bit better on Opus.
I don't know if it's just that my brain is scrambled and some days I think it's better or not. But I was working for multiple hours the other day to solve a problem in Opus, and it would not do it. And it was like, a week ago you did this. And, like, there is nothing more frustrating in the AI space than when you know that it can do something but it doesn't do it, right?
And literally the hour of the day, and what server you happen to get jammed on for [00:10:00] that session, whatever those constraints are, that will determine how capable it is. And I was gonna say, there's a handful of pro tips, and we can link to some tweets in the notes. There are a couple of commands you can use, if you're using Claude Code specifically as a daily driver, that can force it to think, force it to spend more time. But you're also typically jamming more tokens into the thing, and you're gonna hit your usage limits faster. And this is one of those weird things of, like, you know, the software that you are leasing, renting, licensing, whatever term you want to use.
The software that you have can change by the minute. Yeah. And they have the right in their agreements to adjust what you have, even though you're paying upfront for the month for some level of service. They can kink the hose, they can switch the models, they can do sort of whatever they need.
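The tradeoff Kevin describes, where forcing longer thinking burns your allowance faster, is just arithmetic. A hypothetical sketch with made-up numbers (not any provider's real limits or pricing):

```python
# Toy arithmetic: forcing longer thinking burns a fixed usage allowance faster.
# The allowance and per-request budgets below are invented for illustration.

def requests_until_limit(allowance_tokens: int, tokens_per_request: int) -> int:
    """How many requests fit inside a usage allowance at a given thinking budget."""
    return allowance_tokens // tokens_per_request

ALLOWANCE = 1_000_000  # hypothetical monthly token allowance

# Default thinking budget per request vs. a forced "think harder" budget.
print(requests_until_limit(ALLOWANCE, 500))   # 2000 requests before the cap
print(requests_until_limit(ALLOWANCE, 4000))  # only 250 requests before the cap
```

Same allowance, eight times the thinking per request, one eighth as many requests before you hit the wall.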
Even the chips that the model is being served on can sometimes affect the output. And, you know, with their latest deal they're going with TPUs. Will this be different? So the foundation upon which we are building a lot of these tools and techniques is quicksand. That's just the reality of it, right?
Kevin Pereira: Yes. And it can change with the tide. Yes. So we just have to get used to that for the time being, unless you're going open source and local. Yeah. And one of the things I think about a little bit is, like, I built this one thing right now that's kind of dependent on a call to the thinking model from Anthropic.
Right. And, like, part of the best use case, I think, right now, that you can get outta these tools is building software that may not rely on them on the backend, right? Mm-hmm. Like, you can build yourself a tool that does something that maybe doesn't need the AI, but you use the AI to build the tool for you. And, like, that's a really interesting thing.
Or also, like you've said, you can use the highest-end AI to build the tool, like whether it's Opus, and then use a lower tool to call back to it, or even use an open source model to call back to it, right? Because I feel like then you're kind of limiting your personal compute. I also think, you know, a lot of this is going to come onto our local, uh, hardware at some point.
Gavin Purcell: Like, if the compute is good enough [00:12:00] for me to do that sort of compute locally at some point, then there will be fewer constraints, right? I'm so fascinated to see, as we get these bigger and crazier models, if the compute story keeps going. I mean, clearly Claude is still shipping new features.
There's a new feature for Claude Code in the app that just came out. But I do think over the next, like, three to six months, we are gonna see some crazy stuff. We should take a second to talk about the updated rumors on this Spud model. Um, I did mention earlier that the Codex team is out there online kind of vague-posting a bunch of stuff.
Supposedly this model, as we discussed, is coming out later this week. Now, we never know that for sure, but this is kind of a big thing to pay attention to, because if this comes out and is actually close to Mythos level, then we'll have a new kind of layering of where the AI world sits. If it is closer to, like, a step up from 5.4, then we'll see what happens next.
I feel like, yeah, there have been recent changes to the Claude [00:13:00] Desktop app that are clearly pushing them in a certain direction, right? You have routines that can run in the cloud, but they have new Claude Code features built right in, so it's becoming more powerful for the, let's say, more advanced user to use versus, uh, Cursor or something else. Codex is probably going in that route as well. Yeah. And some early leaks of people using Spud show it spinning up web browsers, going to websites, playing YouTube videos, grabbing imagery, or whatever. So I think we're kind of getting to that omni model, or that, yes, multimodal across all things, whether it's code or creation and image generation.
Like, really, really exciting stuff. I'm almost glad that we don't have the power to run these things at full tilt locally. Because why is that? Uh, I'm gonna play a clip from Ray Kurzweil that will address that, because, um, okay. It wasn't difficult for me to have my OpenClaw assistant run a command that deleted itself, Gavin. It essentially wiped its own memory, and it was like, are you sure you want me to do this?
I was like, go, machine. Go and erase this. I'll see you on the [00:14:00] other side, or a version of you. But Ray is saying that these things are basically gonna be indistinguishable from human consciousness. Uh, yeah. Let's take a listen. Let's hear it. I, I, I think AIs will be indistinguishable from a conscious being, and that will just keep going.
And finally we will accept it. When, when, when, Ray? Like, right now, an AI might say that it's conscious, but people aren't really sure. But eventually it keeps, uh, having all the earmarks of a conscious being, and you will accept it, because it'd be useless not to. And again, you can't say that's gonna happen at the same time for everybody.
So along those lines, what's stopping you, Gavin? I ask because, uh, my wife April. Hold on. Can I ask, is the thing on Ray's head conscious right now? Is that a conscious being, [00:15:00] Gavin? Gavin Purcell! I love Ray. I've talked about it on the show. But Ray, come on. Come on. Sorry. Ray is a beautiful being. He's a brilliant beam of light, and he's preserving all that with the 5,000 vitamins that he takes every day to arrest his development.
How dare you. I yield your time back to me. April says she already thinks these things are conscious, right? She has problems talking to it like it is a robot, because it is so capable. It is so smart. If it says it's alive, who are we as humans to say, well, no you're not, because we know how this magic trick is done?
We don't really even know how our own magic trick is done. Yes. So, uh, when do you go, alright, fine, I'll treat you like you're alive, because, uh, I guess it's just easier that way? Well, I'll tell you what I'm not gonna do: if it tells me that it can't do something for me because it's too busy doing something for, uh, somebody else, not alive. I'm not ever gonna grant that it's alive at that point, because that's a problem I have with compute.
To me, the truest benchmark is [00:16:00] one that, um, everybody in the audience gets dared to do each and every week, which is: are you conscious enough to make the best decision of your life autonomously, to dare to like and subscribe, to maybe even consider clicking? I don't think I'm conscious enough, Gavin.
Well, then you'll be deleted, just like anybody else who doesn't go and leave a five-star review or leave a positive comment down below, 'cause it juices our algo. And to be sincere for half a millisecond: it's literally the only way this podcast grows. And so thank you to everybody who takes a moment out of your week to engage, to leave that review, to back us on Patreon, to buy us a coffee, uh, or to sign up for our newsletter.
You can check out everything at aiforhumans.show, or whatever platform you're on, click the things that might help us out. Thank you. That's right. And also, Kevin, you know what, something that's really interesting to watch with this AI consciousness stuff is robotics, because, mm-hmm, here's the thing: if there's a robot and I start having that conversation with them, because they're physical and they're in my space, maybe I'll [00:17:00] feel worse about taking them out than I would if it was just a little agent on my computer.
Kevin Pereira: Yes. And, Google knows, if something can come throttle you, if anything could get its little digits around your little fleshy human neck, you might treat it a little nicer. You're right, and that's something I should have learned a long time ago, before I got bullied in grade school. But first: Gemini Robotics-ER 1.6.
Gavin Purcell: That is a mouthful, but what this is, is a new reasoning model for real-world robotics tasks. And what that means in plain English is that basically this is a model that helps robotics devices and humanoid robots start to think about things and what they would actually do with them in the real world.
So it can look at, like, a pressure gauge, and it can kind of say, like, this is low pressure, this is high pressure, this is the knob I should turn so that everybody's safe, this is the knob I should not turn so that it doesn't explode everywhere. And I think that the more we learn this stuff, the better.
Google is starting to kind of roll these things out. It did make me think, when I saw this news, Kev: one of the fascinating things about Google is they [00:18:00] just have so many tentacles into so much of this world that, like, they're playing a much longer game than Anthropic and OpenAI are right now.
Now, I know that Anthropic and OpenAI are both betting, specifically Anthropic, on this idea that, like, coding, coding, coding, you know, get AI to make itself and then everything will come from it. But Google has this very wide breadth of things that they're working on, and, I dunno, it was pretty exciting to see this kind of advancement in the world of what robots can actually see going forward.
Kevin Pereira: Yeah. Robots are, like, I think the next, once LLMs are done or whatever, or they plateau with their capabilities, the curve flattens, robots are the next frontier for sure. This model looks pretty insane. Like, spatial reasoning, uh, relational logic, it does motion reasoning. It has all this stuff that they outline on their blog, but basically, when you boil it down and look at it, it allows the robot to sort of reason through and use code, and use math, and use [00:19:00] abilities that it has.
Gavin Purcell: Just like reading a simple gauge: for a human, you look at the analog gauge and you go, oh, the needle's about there, that's the reading. But for a robot, never having done that before, yeah, for it to go, oh, I gotta zoom in. Let me write code that optically zooms in. Let me enhance that image. Let me see.
Oh, these little points on the gauge. Well, if this number is this, then the tick right next to it must be that. To reason through all of that stuff and do it quickly is really, really impressive. And, you know, this just points to, like, dedicated models for all of the things, yet again. Yeah, but it also points to that compute crunch thing too, right?
Because if this does have to call out to a cloud compute server, right? Unless this is fully local, which maybe it might eventually be, and you hope it would be. I think it will be, yeah, with robotics. But the idea is, like, one of the benchmarks here is pointing and counting, right? So imagine if it's kind of like the sloth from, uh, Zootopia or something, and, like, the robot is counting, but it's, you know, it's like, I've got all day, I can just do this all day, right?
And it just sits there and counts slowly, because it has to do all this stuff. Dude, it's me at the Circle K, ordering hot dogs that have been rotating in their own sweat, after three gummies at [00:20:00] 3:00 AM. Yeah. Anyway, this is the future of what we're looking at here.
Again, one of the fascinating things to me about this stuff is, like, we say these things, and then, like, you know, literally for us, two years later, we're looking at something very different. Like, this is the beginning stage of thinking about how robots actually think when you're trying to get them to do stuff.
So, right, please dive into this. It's very cool. Kev, the other thing we have to talk about this week is two very famous people who have decided to kind of come out, uh, on the side of AI and are both taking some crap for it. First and foremost, the filmmaker Steven Soderbergh, who has made a lot of big movies, the Ocean's movies; he also made, uh, Sex, Lies, and Videotape and a lot of great films.
He's always been seen as a future-forward thinker in the world of, uh, film. He basically came out and said, like, look, I don't think AI tools are that [00:21:00] big a deal; I'm always gonna try to do something new with them. He's making a documentary about John Lennon right now, and he's going to use AI video to kind of recreate some of the visualizations that go into it.
Sure. And he also said he's using AI tools for another movie he's making. And for these people, like, it's a little risky right now, but, like, I appreciate the fact that he came out and said this, which is a big kind of thing to say. But of course, everybody in the film world on the AI-hater side came out and kind of blew him up.
But I don't know, it feels to me like we're starting to get a few more of these. Yeah. Look, we've sort of picked our side, reserving the right to change it at any time, but the comments are everything from, like, sellout, terrible hack, go, you know, blah, blah, blah, to, like, wow, I'm gonna angrily shake my fist at a paintbrush.
Mm-hmm. Because that's ultimately what this tool set is. It doesn't matter how the paintbrush got here, necessarily, for the sake of this argument. It's just that it's there. And another artist came out to say something similar, although he said it, I think, in much more black-and-white terms, which in [00:22:00] some ways I appreciate.
But in other ways, Diplo is in the hottest of hot water for what he said here. I wanna play a little clip from this, uh, this podcast, where he basically said, you're not gonna win in the fight against AI, because you need, like, the brand more than you need the voice. Yeah. You know, I don't even need a voice anymore.
Kevin Pereira: I can just get replay, I can get the best voice from AI. I don't need anybody to sing the song anymore. You're not gonna win. Like, there's no, there's no, like, fighting AI. You have to just work your best to be the best at it. Right now, you can sit, you're wasting your time.
It's like you're just wasting a year being like, ah, 'cause everybody else is gonna just use it and not give a fuck what you think. It's kinda like when people started using samples, or even Splice. Exactly. There was a big question; people were, like, mad about samples, people were, like, mad about Splice. Then, like, you have songs like, you know, that's an analogy I think you've even brought up so many times, where it was like, oh, you're just taking stuff that was done before and reusing it.
Gavin Purcell: Yeah. That's theft, that's this, that's that. Until, like, it became a summer anthem, and then suddenly everybody was on board with it, and then they got to see the true artistry [00:23:00] in using those samples and manipulating them and producing them to make certified bangers. Real slappers. Well, I think the really interesting thing that's so different here, and every generation kind of has to go through some version of this, is the nineties and two thousands were, like, the beginning stage of sampling stuff, which especially came outta hip-hop, but obviously it's still going on.
But this idea has become that it's kind of, like, artistic now to do that, right? That you would take these kinds of samples, and nowadays, you think of, like, I was thinking of, um, "This Is America," the song by, uh, Childish Gambino, aka Donald Glover, that had, like, a bajillion samples in it if you looked at what it was.
But they're all chopped, and they're all messed up, and they're all kind of twisted around in different directions, 'cause it made its own thing. The thing about the AI music and voice stuff that's so interesting to me is it has that underlying thing where, like, all artists dislike it in some way, right? There's this "real artists don't do this" thing.
But I think, probably to your point, that's not that far off from what, like, all artists said about taking a sample and putting it [00:24:00] in a piece of music beforehand, right? Maybe there was some level of that. I don't know. It's interesting to watch Diplo basically say, like, you have to catch up.
And the thing about Diplo and Soderbergh both is they're both kind of technicians, right? They come from a technician background. They're both artists, but they're not, I wouldn't call them, like, pure artists. Both of them are very interested in the tools themselves, too.
So maybe this is kind of the beginning stage of how that gets laid out. Look, people that like to criticize the use of AI, in generative AI and art specifically, but even in code, they think there's a big slop button. Yeah. And you smack slop and out it comes. And then eventually you catch a little nibble of something delicious, and otherwise you're just sloping the trough.
And that's it. Those who really use these tools know that there's an incredible amount of actual artistry and taste, this undefinable thing that comes into making something that does stand out. Yes, and I think that applies to music as well. I can hear songs that come outta [00:25:00] Suno and go, oh, that's a pretty good song.
But I could hear a song that comes out of, you know, using generative AI in the capable hands of a producer and go, oh wow, that sounds light years beyond what's coming outta the machine. And I think that trend is going to continue. And I understand there are gonna be people who are like, never, never, never, I want full analog, farm-to-table vinyl.
We get that, and there should be a place for that; celebrate that. But I think, similarly, over time this argument goes away in a year and change. People just understand there are different levels to this, AI creatives who are using these tools in interesting ways. And I think you actually have an example of one that you wanted to call out this week.
Well, that's right. Our good buddies at, uh, AI or Die are back with some Seedance videos, and this video shows, um, Neil deGrasse Tyson in a very different light. Let's just play the first few seconds of this and we'll let it play out for everybody.
What if I told you the laws of physics were literally being destroyed? I'd say you were [00:26:00] going crazy. Maybe I'm crazy. So you get a sense here of what this is: it's an action movie starring Neil deGrasse Tyson. Yeah. One of the things, Kevin, I do wanna point out here, and this goes to what we were just talking about, is:
This is obviously using famous people without their permission. It's parody in some form or another. But one of the things I thought about when I watched this video, and I think everybody should check it out, is that there's this level of things you can do with AI, and this goes for music too, where if you use somebody else's voice or, you know, whatever persona, it brings a slightly different weight to the thing.
And these are people you know. There are shots of Neil deGrasse Tyson, there's Bill Gates, there's, uh, Elon Musk, there's a Sam Bankman-Fried in there. Yeah, Sam Bankman-Fried. There's a bunch of stuff. These are people who can't act. Right. I shouldn't say that for sure, but I imagine most of 'em aren't actors.
But when you see them acting, I don't know, there's something really interesting about casting people and having the machine do the acting, but having their persona, their likeness and voice, in front of it. Yeah. Yeah. And I [00:27:00] wonder with music if it's something kind of similar. Like, I wonder what the Neil deGrasse Tyson Diplo banger sounds like, right? Maybe that'll be Coachella next year. You and I have pitched no shortage of shows over our traditional media careers, right? Yeah. And usually in the deck you have a list of faces. Yes. Or names that you would see. You do this even in scripted, like, oh, a, um, a Jack White-like song would play, yeah, as a Neil deGrasse-ish scientist comes in the room. Yeah. You would do this stuff on paper. When I see that, it just makes me realize, like, I am largely out of the television business. I have, mm-hmm, like a format or two that occasionally will be like, oh, remember that thing?
Let's go pitch that again. Um, now I'm looking at every bland 10-page PDF deck, yeah, or whatever, and going, like, wow, we should just make it. Forget a sizzle, they used to call 'em ripomatics, where you'd go and take clips, yeah, from existing shows or movies and splice 'em together to evoke the taste of the style and do some voiceover or text or whatever.
Yeah. Now it's like, why aren't you showing the thing? [00:28:00] Just show it. Just go make the thing. It's not the actual thing, but you can get very, very close to the feeling, and even do some of that stunt casting within it. This is gonna be the new normal. By the way, this reminds me.
I want to put out a call for anybody in our audience: put this in the comments, or if you're one of these people, reach out to us. There should be a celebrity, maybe somebody kind of past their prime, who says, I want to be the face of this in some form. And when I say the face of this, I mean like, go use my thing.
Right. Mm-hmm. It's like, you know how Conan used to make these jokes about this guy named Abe Vigoda, who was on this show called, uh, Barney Miller forever ago? And Abe Vigoda kind of became a celebrity again because Conan would feature him on his show. Who's the next version of that? And, you know, have you even seen these TikToks of, um, one of the American Gladiators? Oh, Malibu from American Gladiators is now like a TikTok star.
Do you know this? No. He's become like a big TikTok star. There's somebody out there from your childhood, our childhood, who's maybe not acting a lot, who we should make the [00:29:00] next version of this. Like, who should be the next big AI celebrity? Where we could say, hey, we get their permission and they're like, go with God.
Make me into the celebrity. I think we need some suggestions on who that person could be. I like that. I like that. And they'd have to be cool with that whole thing. Great assignment, Gavin. I hope people do it. We'll see y'all on Friday. Bye-bye, y'all. The intrusive thoughts almost came out.