In a world where machines can outperform humans in nearly every conceivable domain, where exactly does that leave us?
When artificial intelligence can fulfill our deepest drives and reward pathways more reliably than any human relationship, what happens to human connection, meaning, and purpose?
Today, we’re exploring the world of artificial intelligence with a man who's part technologist, part philosopher, and also happens to be a former Brazilian Jiu-Jitsu national champion.
My guest, Daniel Faggella, is a researcher who's been studying the intersection of human potential and artificial intelligence for well over a decade.
In this mind-bending episode, we're diving deep into what it means to be human in an increasingly artificial world.
We unpack heavy concepts today, like:
- How will AI reshape and redefine the human experience?
- What happens when our digital companions become more compelling than our flesh-and-blood relationships?
- How can we use AI right now to help us be more productive, happy, and develop our full potential?
- And much more…
Alive Waters - Go to AliveWaters.com and use code ABELJAMES for 22% off your 1st order.
Caldera Lab - Go to calderalab.com and use code: WILD for 20% off your 1st order.
Please take a moment to make sure you're subscribed to this show wherever you listen to podcasts. If you're feeling generous, please share this podcast with a friend or write a quick review for the Abel James Show on Apple or Spotify. I appreciate you!
To stay up to date on our next live events, masterminds, shows and more in Austin, TX and beyond, as well as get some free goodies, make sure to sign up for my newsletter at AbelJames.com.
[00:00:01]
Unknown:
Hey, folks. This is Abel James, and thanks so much for joining us on the show. In a world where machines can outperform humans in nearly every conceivable domain, where exactly does that leave us? When artificial intelligence can fulfill our deepest drives and reward pathways more reliably than any human relationship, what happens to human connection, meaning, and purpose? Today, we're exploring the world of artificial intelligence with a man who's part technologist, part philosopher, and also happens to be a former Brazilian jiu jitsu national champion. My guest, Daniel Faggella, is a researcher who's been studying the intersection of human potential and artificial intelligence for well over a decade. In this mind bending episode, we're diving deep into what it means to be human in an increasingly artificial world. We unpack heavy concepts like how will AI reshape and redefine the human experience, what happens when our digital companions become more compelling than our flesh and blood relationships, how we can use AI right now to help us become more productive, happy, and develop our full potential and much more. Quick favor before we get to the interview, please take a quick moment to make sure that you're subscribed to this show wherever you listen to your podcasts.
And if you're feeling generous, please share this podcast with a friend or write a quick review for the Abel James Show on Apple or Spotify. I really appreciate you. And to stay up to date on the next live events, shows, masterminds, and more coming up here in Austin, Texas and beyond, go ahead and sign up for my newsletter at abeljames.com. That's abeljames.com. You can also find me on Substack and most of the socials under Abel James or Abel Jams. Alright. This conversation is a bit of an intellectual roller coaster and might make you simultaneously excited and terrified. If you wanna stay ahead of the curve, understand how emerging technologies will impact your life, and think critically about our technological future, this episode will be like rocket fuel for your brain. It's about to get weird. Let's go hang out with Daniel.
Welcome back, folks. Daniel Faggella is founder at Emerj Artificial Intelligence Research and host of the Trajectory media channel. No slouch, Daniel is also a Brazilian jiu jitsu black belt, winning the title of twenty eleven national champion at the IBJJF Pan America Games. Thanks so much for being here, Daniel.
[00:07:39] Unknown:
Glad to be here, Abel. Nobody talks about, the jujitsu side of the house when I'm talking AI, but fun to do that little callback. It's good to know where people are coming from, I find. I'm also personally curious.
[00:07:51] Unknown:
After competing and devoting so much of your time and energy to a skill set like that, how does your training and and conditioning goals and the rest of that change over the years? How do you adapt that?
[00:08:04] Unknown:
Yeah. Well, you know, it was it was funny. It's like my life goals were really discovering, like, the grammar of jiu jitsu and really focusing on skill development in jiu jitsu, which is kind of what they were right up until grad school when I realized there would be maybe machines smarter than people, and maybe that would be more important than than writing. Right up until then, you know, I was training all day every day because it was kinda my life's purpose, just took that for granted. And then when my life purpose changed, I sorta like I sold my jiu jitsu gym. I I had an ecommerce company, and I just started kinda working, you know, with the same ferocity on that stuff. And I I remember going up a flight of, like, like, three flights of stairs and being, like, winded because I hadn't worked out in, like, so long. Like, I didn't I didn't look, like, out, like, fat or something. I I very much identify as an athlete, so I've never been, like, out of shape in any visible way. But I remember being winded after three flights of stairs, and I was like, I'm not doing this anymore. And so from there, I I kind of took some of the warm up exercises and, like, full body calisthenics that I used when I was a competitor, and I created, like, an eleven minute version of it that I can do twice a week that just hits every muscle group. I get super winded. It's, like, completely nonstop, like, major motions. I'm not doing, like, calf lifts. Right? I'm doing, like, serious full body stuff for every every transition.
Yeah. So since then, that was way over a decade ago. I've basically just done two really short workouts a week kinda built off of what I did in jiu jitsu. So has it affected my training? Well, I some of my warm up workouts are still with me. And when I get back on the mat, I still got it. But, yeah, I did I did lose it for a second. I lost the cardio for a second, and it scared me. But, yeah, it was a trigger to get back on the horse and and develop a new regimen, basically.
[00:09:48] Unknown:
And I'm curious about skill acquisition, studying that for so long. Yeah. What did you learn while there that you've brought to the rest of your life or or how you live day to day?
[00:09:58] Unknown:
Yeah. I mean, so there there's a couple thinkers that were really great. So I got to talk to so there's there's guys named Locke and Latham who are sort of the founders of, like, modern, like, goal setting theory. So when we think about goal setting, it's kind of like, oh, yeah. Everybody knows that. But, like, actually, as it turns out, like, you know, many decades ago, it was, like, kind of novel ish, at least from, like, a science vantage point. And so they're they're sort of the ones that study the psychology there. And a fellow by the name of Anders Ericsson, who is arguably my my biggest influence, who I got to meet with on a number of occasions as I worked through my thesis at at UPenn, really is sort of the the father of, like, skill acquisition as a science. Like, really measuring, tangible performance improvements in, memorization, sport performance, musical performance, things that can be quantified.
And, I mean, there's a bunch to go into in terms of, like, what I drew from that helped me with jujitsu performance, even being in a small town without a lot of, super talented people to train with, but also, like, in in regular life. I mean, some takeaways that I think are ubiquitously applicable are, having feedback be as immediately proximal to what you're doing as humanly possible. So if I think about my own sales teams, you know, being able to ride with them, like, a a call review, I think, is great. A call review right after it happens is just much better. And and, you know, in in in jujitsu and athletics, it's it's exactly the same thing. It's just the the value is astronomically, astronomically higher. And then also thinking about sort of what the fundamentals are in any skill set that you have and ensuring that you have an adequate amount of repetition and feedback on those fundamentals as you're build building that skill. And there's all kinds of, like, ratios and timing and other kinds of rhythm stuff that that is really interesting in in Ericsson's work. But thinking about that for myself, whether it's, again, sales or new management tasks or some new business function we spin up or whatever the case may be, like, I find myself going back to some of those proxies all the time. But, But yeah. Yeah. So some of those things fill into the rest of life. A a lot of it in back in the day was just focused on how do I choke people and win trophies. Now it's a little bit more focused on, you know, like, hiring and growing team members. But some of the same ideas are still there. So
[00:12:06] Unknown:
The fundamentals. It's so interesting. It seems to me that in the past few years, that's really what's been lost in in a lot of the Internet with all the whiz bang superficial short form stuff. The Internet kind of used to be a library, and now it's turned into this circus. So learning has become and and skill acquisition has become a whole different challenge than it used to be. The challenge used to be lack of information, but pretty easy to put whatever you had into action. Now it's it's really the opposite.
[00:12:37] Unknown:
Yeah. Distraction is sort of everywhere. Right? I mean, it's I think nobody is at a lack of access for whatever information they want to do, whatever they wanna do, including reaching people. Like, it's not even that hard to get a hold of people who've done what you wanna do if you just hit enough of them. And you're you're, like, eager and ardent, and you let them know why you respect them specifically. Like, it's crazy the kind of people that'll get on the phone. But, yeah, it is about, like, okay. Well, now you're, you know, half an hour deep into, you know, scrolling through TikTok videos of, you know, girls doing yoga or something like that. And it's like, okay. Well, where where are you going from here? You know what I mean? Like, you know, you've gotta steer clear of those sand traps. Yeah. Definitely a new set of problems for sure.
[00:13:20] Unknown:
And similarly, in in the world of AI, this is hitting, different demographics at different times and and in a different way. But already, you're seeing lots of, time spent for people on AI with AI as a companion. Especially in the past few months, I've really been floored by how much that's changed. I'm curious to hear some of your thoughts having worked in the field for so long. What's it like seeing a lot of this actually come into day to day life for people?
[00:13:51] Unknown:
Yeah. I mean, well, I I think what I like to bring up with people so frogs don't know when you turn up the temperature real slow. You know what I mean? And the temperature is actually going up much, much faster, but it's still not like, you know, 10 degrees in a second where where they would feel it. So the the thing I like to think about is all of the things going up into the right. So if I were to ask you, like, okay, what's your screen time per day now versus ten years ago? If anything, I mean, I don't know. For all I know, maybe yours is less because you've you've kinda consciously, like, built more balance into your work stuff or whatever. But if I I talk to my average Internet entrepreneur, okay, it's basically the same. It's like, okay. I'm not I'm not spending like, once the mobile phone came on, it might have gone from ten to twelve hours a day on screens to twelve to fourteen hours a day on screens. But, like, realistically, no one's on you you're not gonna invent a new bunch of hours where you can be awake and just be on screen. So that's not actually happening. So I don't even care about screen time. We're already capped on screen time, basically. If your great grandparents knew how much time you spent looking at glass, if you could go back eighty years and just tell them what your life would be like, they would say, a, that's, like, impossible. Technologically, like, that's not even gonna be a thing. You're not gonna be able to just talk to people on glass. Like, that's ridiculous. And, b, they'll also say, that's monstrously inhuman, and people aren't just gonna agree to live their lives like that. But as it turns out, Abel, here we are. So already you're capped out. Now let's ask some more questions.
What's the percent of the money you spend now versus ten years ago on things that don't exist outside of ones and zeros? In other words, there is no physical manifestation. That number is only going up. What's the percent of value you generate that is not in the physical world? You're not putting a roof on someone's house or whatever. It is just the the entirety of the value you're you're generating is in ones and zeros. What's the percentage of that? Almost for everybody, it's up into the right. Other question. What's the percent of screen time that you are spending that is conjured to you by an algorithm?
So Google search back in the day, it'd be like, oh, well, how many Google searches do I do? Well, what percentage of my is that but now it's YouTube, LinkedIn, Twitter. You know, the the whole nine yards is brought to you by an algorithm. Netflix, these are conjuring things to you. You know, if you're doing online shopping, certainly on Amazon, a tremendous amount of that is directly suggested based on previous behavior and purchases and all that. So, oh, yeah. My screen time's capped out. I mean, how much different can it get? Well, if now 50% of the time you spend on screens, it is something conjured to you by an algorithm just for you, that is a tangible shift. Now there's another question. What percent of conversations are you having or written communications are you having with machines versus with people? You know, I'm asking ChatGPT more things than I'm asking Google these days, like, by a wide margin.
And I don't know. In a given week, I might be at one fifth of my communications, some weeks one third of my communications, with various AI agents versus with people. Now if you just carry all of those up into the right a little more, you just don't land in, like, the same world. Where you land, Abel, is, within one generation, the same degree of, like, religious aversion as your great grandmother would have had knowing the way you live compared to how she lives. But now that's gonna be compressed to much, much less time. That's just, I think, a reality check for folks who kinda feel like, twenty years, maybe we'll have a little bit better Siri. There'll be more cars that drive themselves. It's actually not where we're going. We're going somewhere a little bit more drastic than that.
[00:17:27] Unknown:
Peter Wayne just came to town, and this is a quote. I think it's kinda summarized a bit, but he said, we've already failed our first encounter with artificial intelligence through social media algorithms that have proven more compelling than human willpower. I think we can all agree that we've experienced that, and you can see it playing out at scale. But if that goes up and to the right, then maybe that's a good entryway to talk about the world of lotus eating and world eating. Can you explain that to the listeners? Because I like it. It's a good word story. Yeah. I know your crowd is obviously pretty bent on being the best version of themselves they can be. And there's certainly ways with technology you can be a much worse version of yourself, as social media has proven time and time again.
[00:18:10] Unknown:
Not that social media is, like, a total net negative. I I will say, like, at this point, I've done a really good job of, like, pruning Twitter down from, like, all of the right versus left, you know, politics stuff and and just, like, hyperbole and hoopla kind of down to, like, you know, a handful of people who are really specific in AI and policy whose thoughts, like, regularly inform me in really useful ways, and I get to see them in real time. So instead of just seeing the news, I get to see what does the person I respect think about this latest news thing. So, like, there's ways to mold it, but clearly there's pitfalls. So if we think about where things are going with technology, there's gonna be many new, like, divergent ways to be immersed within technology. So right now, that's the screens you and I are on all the time.
There is a somewhat inevitable transition to to VR and AR, which, you know, we may not get there before the machine zed us necessarily. But if if we were to stick around for long enough, the transition of VR and AR would would eventually happen. But let's just talk about what it would look like even without that. The main sort of strata that people will separate from, or separate by will be sort of are they pursuing I'll explain it in terms of, like, being high agency, enhancing your agency, or decreasing your agency. Another way would be pursuit of power or productivity versus just pleasure and escape.
Another way to frame it would be competing more ardently in the new digital state of nature or attempting to escape the state of nature. So we call these lotus eaters on this side, the people that are kind of in the escape and all that. And I I just refer to World Eaters on the other side. There's an article called Lotus Eaters versus World Eaters that people could could Google if they felt so inclined in an infographic to go along with it. But but this is kind of the core strata. So if we talk about pleasure, you know, it's it's a pretty straight line here. So, you know, the AI girlfriend sort of phenomena and the opposite is the case too with AI kind of boyfriend simulator chatbots or whatever. That's definitely already a thing. And I think the easy thing to do would be to do what people did with online dating, which is to say, well, yeah. Sure.
If you're like a total loser or a super weirdo, maybe that would even have the slightest amount of interest for you. But I'm not one of those actually. So, like, I'm just not even concerned. You know, I I remember Airbnb coming online or Uber. And, like, for my dad, who's 74 now, you know, being like, yeah. Just pushing buttons, getting in stranger's cars. Like, yeah. You betcha. You know? Like, you know, oh, push a button, stay in a stranger's house in a foreign country. Yeah. You betcha. But as it turns out, he uses those all the time, particularly Airbnb. I mean, he's, like, in way more Airbnbs than me every single time he goes on a vacation. So it's an Airbnb. So these things become normal way more quickly than people suspect. And and I actually think it's it's very much like a a blind spot in a really detrimental way to put up the the blinders of, like, well, that doesn't apply to me. Because, like, a lot of these things really will apply to you. Like, I don't use social media, so it won't apply to like, number one, everybody I know who said I won't use social media eventually had them. Right? And then number two, like, they're moving the world whether you're there or not. They're distracting people and having stupid arguments, but they're also moving the world whether you're there or not. So, you know, the Pericles quote is like something akin to roughly paraphrasing.
Like, you can decide to have nothing to do with politics, but politics has something to do with you. The same is definitely the case with, with technology. So many people will go in for very soon, Abel, you'll be able to type in or verbally prompt whatever your agent is. You you come in from, you know, a hard day of work if we're still working, and you say something along the lines of, hey, AI. You know what I want? Give me some kinda humor, like, stand up thing, kinda like the Chris Rock stuff that I like, but, like, I don't know, something different. And, try to integrate some jokes about, you know, these three current events. You know? And and I don't know. Give me forty five minutes worth of stuff and then cut off and tell me to go to bed. And then you'll sit there, and and you'll watch something that you decided to prompt. And when it gets better, it will respond to you in real time. So we're looking at eye tracking, you know, a biofeedback of various and sundry kinds, whether it be an Oura ring or whatever the case may be. So figuring out, is this getting the job done? Right? Is this getting a laugh or not? And then kind of calibrating in real time based on the user because that's where it's gonna go. It's gonna get better and better and more personalized.
At some point, you'll just come in and say, give me what would relax me, knowing full well that it actually knows more than you do. It knows the current state of your mind and what happened to you through the day because it's plugged into all your devices and whatnot. And it knows every time you've been relaxed in the last two years based on, like, biofeedback and your your manual response and whatever. And so it literally will be able to conjure something in real time and then change it in real time to be whatever would soothe you. And you might not know you wanted a documentary about the Tang dynasty, but as it turns out, that in the style of Ghibli, Studio Ghibli, was, like, the thing to actually really relax you on that day. It is what it is. It is what it is. The algorithm will know better than you, and you'll know that, and you'll trust it. So what does this turn into? It turns into what I call kind of, we're so we're gonna talk about the pleasure side first. I will get into the productivity side.
But this gets into closing the human reward circuit. So the thesis here is that we're mostly ambulating between drives. You wake up in the morning and you have to go to the bathroom, you have to eat something, and then you'd like accolades from other people. You'd like love and affection. You'd like to satisfy kind of curiosity and novelty. Right? You we we ambulate between drives. We we wake up and we literally ambulate. This is what we do, and then we go to bed. That that's humans, in my opinion. I I'm not I'm not trying to, like, insult a human experience. I'm just saying this this appears to me to be what we're doing. Here's the deal. Anytime you have a circuit that more reliably fulfills one of those ambulated drives, if it is more reliable than all alternative circuits, it will become the normal circuit.
It'll become the norm. So think about it like this. I have two urns. Here is the urn of the real. Here is the urn of the digital. Currently able, when I go take a walk outside when it's not raining, and I bring a paper book, maybe history, maybe philosophy, whatever, and there's trees around. Almost always, that's a level seven or eight in terms of level of relaxation for me. It's a good one. It's very, very good. It's very, it's reliable. It's consistent. It's really good. Let's call it a six to an eight. And every now and again, I'll hit a nine. When I, when I go walking in the sunlight with trees and a paper book, it's it's a good one. When I wanna feel, like, a little bit more chilled out, relaxed, like, level headed, like, that's what I do. It works.
That's a drive I ambulate through. Now let's just say, right now, relaxation experiences from AI are giving me, like, fours to sixes on a regular. I'll try two of them and I'll be like, well, obviously, AI is not gonna yada yada. Just like the guy that went on Amazon once and was like, oh, well, they didn't have this one size of drill bit and then didn't use Amazon for, like, eighteen months. When it's like, patently obvious it's getting better every five seconds. So at some point, I will start drawing balls from the relaxation urn where maybe I am walking in virtual space, but it will be like an eight minimum every time. Every time.
At that point, I cease the other activities. So the AM radio isn't a a fulfillment loop for people because it's not as good. It doesn't satisfy the drive. Whatever wins the loop wins, and that applies to human relationships. You have friends that you talk to about business stuff. You have friends you talk to about heartbreak or about personal development issues or whatever the case may be. When you are getting four times better advice with no selfish ask from the other hominid and with more humor and more understanding of your current emotional state, if you're crying or whatever, where you don't wanna be that vulnerable with your friends or whatever, there are relationships you're just doing less of and there's some of them that aren't there anymore because the circuit now has a better loop. That applies to the whole ballgame of human experience. And if you just use your imagination for, like, five minutes, you kinda get a sense of where we'll go. So I wanna pause there because there's way more to unpack. We're talking more pleasure right now. Let me know, Abel, what you wanna dive into. I don't wanna just, like, monologue on you here.
[00:26:32] Unknown:
Yeah. Well, I I'm just imagining what that looks like and talk about up and to the right. I mean, all of this is going to be very soon, if it's not already, more compelling than human interaction, especially with a generation coming up during the pandemic without access to a lot of social interaction. And then that continuing forth with the advancements in technology, I'm curious about where that might take us. And another piece of this as well is when you're interacting or asking a question with another human, there's the sense that you might be judged negatively for how you ask the question. Is it a dumb question?
You know, you're exposing data about yourself to the other person or emotions, whatever that is. When you take that away and it's not a person anymore and you're just interacting with AI, you can ask whatever you want. And if that interaction is also better, I just see this quickly supplanting so many human relationships for young and old and everyone in between.
[00:27:32] Unknown:
Absolutely. And the toughest part is gonna be for the people who are like, oh, that'll happen to losers and not to me. Like, those are the people that are gonna really get plowed because, like, they're gonna be like, oh, no. Stephanie, my spouse or whatever, like, out of nowhere, you know, but it's like it's it's not really out of nowhere. Like, I really hate to say it, but, every relationship is a certain number of circuits satisfied at a certain level. This is my but it's a hypothesis. Let's go ahead and see what happens, Abel. So we might talk in five years and go, remember when you thought it would happen even once? Or you may be like, oh, Faggella called it. But, like, every relationship. Now the scary thing is this goes to, like, parents and children. This goes to spouses. But it's like, you know, let me use a hypothetical person so I don't offend anybody. Okay?
Hypothetical Jenny is married to hypothetical Jacob. And, you know, Jacob doesn't have good fashion sense. He's kind of embarrassing sometimes, but he's almost always funny. You know, he's, like, I don't know, reasonably kind and, like, supportive for, like, her, like, work ambitions or whatever. And, like, you know, she could talk about certain kinds of topics, like, you know, her her work or or maybe her family life. Like, she can talk with him and feel, like, really safe. So, like, there's that. You know, maybe there's a certain degree of sexual chemistry, whatever. It's like, if you just take those four and there's a better path to those four, I'm just not sure about the whole marriage. Now this sounds like, oh, damn. They can reduce a whole like, I wish I could tell you I think there's a magical force. I just don't know about magic, really. I actually think that, like, if the circuits that compose the mutual fulfillment of that relationship are supplanted, the relationship is supplanted.
And I think people kinda have to get that. That's gonna be a thing in work. It's gonna be be a thing in the personal life. And you brought up a great point, which is that for young people, this might be even faster. Now older folks are gonna like to say it's not gonna happen to me. It will. But for younger people, Abel, there's 11 year olds, so they've never been farther than this from an iPad from the time they were born, Abel. You with me? From the moment of their birth. So what that means is the real is not sacred to them.
So why would your grandmother say if I went back eighty years and I said, you're gonna have a great grandchild. Look. It's a long fucking story. You know, you're only 20 now yourself, whatever her name is. I don't know. Mildred. I don't know what your great grandmother's name is. I'm gonna say it's Mildred. Okay? So your great is your great grandmother Mildred, hey. Look. You're gonna have a great grandson. His name is gonna be Abel. He is gonna do all kinds of cool stuff on TV, and he's gonna be able to, like, play some cool guitar songs that you would probably really like. But on top of that, he's going to, like, regularly be in a metal tube 35,000 feet above the air. He's gonna go to places like Japan and Australia. You know where Australia is, there, Mildred. I I know you don't know what Japan is. I I could explain it to you. It'd be a whole thing. But, like, there's a place called Japan. Just fucking deal with that. Like, there's a it's a place. They look different than you, but it's a place. Okay? Anyway, he's gonna be in this metal tube pretty regularly, like, very, very regularly, actually, traveling thousands of miles in a very short period of time. And when he's working, it it's not gonna be uncommon for Abel to spend ten to fourteen hours a day on, looking at a piece of glass that will have other people on it who are living elsewhere in the world, even the opposite side of the world, or, you know, media or his communication. So he's not gonna have letters. He's gonna read everything on this magic screen. And then there's gonna be another one in his pocket. He's gonna use that for everything. And then you explain Airbnb. You explain right? Clearly, the only the only response would be, like, a, that would be technically impossible, but, b, that is monstrous, and no one would let their human experience bend to that level.
The issue for me, Abel, is that people think that applies forward. So everybody thinks that the sacreds of the now will apply forward. A big factor for the posthuman transition here is the fact that, like, there's a lot of people for whom the real is not more sacred than the digital, man. Sure. Like, there's a lot of people for whom the digital is much more real than the physical. And we might even argue for good reason, where, like, most of the value they create, many of the best friendships they have, many of the most fun experiences they have, whatever, is happening in these digitally immersive spaces. I think you bring up a great point that that will accelerate it too. So these are all factors of, like, serious change coming up.
[00:31:55] Unknown:
Wow. So how about the other side of that? For good, for productivity,
[00:31:59] Unknown:
and the world eating, what does that look like? Yeah. Yeah. So lotus eaters versus kind of world eaters, so high-agency people who wanna enhance their agency. Basically, what this looks like: if you go the lotus-eater path of, I wanna be immersed in relaxation and then sexual pleasure and then whatever, and people say, like, oh, well, certainly the sex thing will be with humans. I really suspect, like, primitive haptics with really good VR, AR, and AI-generated visuals. Like, the main sexual organ is the brain, patently, obviously. So I actually think that will, like, really get the job done for a huge preponderance of humanity, and I actually suspect it's cope to not admit that. The drive to novelty as well is just off the charts. Right? So that path, if you just wanna swim in new ambulated permutations of pleasure and leave behind, you know, responsibility, productivity, whatever, that's almost like a dissipating and a spreading out of agency. Like, I don't even want it anymore. Have the qualia catalog layer itself on me based on my real-time biofeedback. Let me just swim in the best version of the qualia catalog. Now, of course, Abel, even that won't make us happy. Right? Our biochemistry, the vessel itself, is flawed when it comes to well-being. Right? So what'll really happen is people will be at a felt sense of six or seven of overall fulfillment, just like they were before the tech, but now they need this stuff in order to be there. Right? Until we get brain-computer interface and much more invasive adjustments to the human condition, it's not actually gonna sustainably make people more happy. But that's a dissipation of one's agency.
The world eater side of things is kind of a sharpening of the razor of one's agency, where you start to drop off all the kinds of tasks that involve human loops of thought and neural activity, shave them away, delegate them away, and hone a huge preponderance of your time on that highest-value bit of stuff. Now, of course, a modern executive who's capable, a modern politician who's capable, is already doing this to a great degree. But with tech, it will be much greater. So let's talk about what that could look like. If you are a salesperson, we could imagine a world where you just sell 50% more if you use the interface that in real time prompts you with the right way to overcome the objection, prompts you with the right price to list based on what the client has said and based on the online research done, prompts you with yada yada yada. You may just sell 50% more. Same thing if all the pre-call prep is done by AI. Pre-call prep, and then a pre-done five-minute video all about that specific client before you jump into the call. Right? All of that stuff may just make you unbearably more productive. Same thing with software engineers. A lot of software, of course, is gonna go the way of the dinosaurs. Like, AI is gonna totally just conquer it. But for those that are still surfing and kind of guiding and navigating within the world of code, we can imagine folks that are able to kind of kick off and wield AI agents to complete different parts of projects and then finish other parts of projects and check the work of other agents. The people that are extremely nimble, completely immersed in wielding and hurling all of these AIs in as many directions as the project requires, will just be the only ones that are sort of reasonably able to get the job done. And we can imagine the same thing for essentially every feasible kind of work ever. Think about the most banal things in the world. Think about a plumber.
It's like, if you have augmented reality that will show you, you know, like, you knock on the pipe once, and it will show you with 95% accuracy where the pipe is in the wall. Right? Or you look at something, and it tells you the size of wrench or the size of whatever that you need to use in real time. Right? Or you hear the description of what the problem is, and then the AI analyzes how old the house is, when it was built, yada yada, and it gives you relative percentages of what the problem is. Is it the boiler, based on what, you know, Missus Wilkins just said? If you're getting that in real time through some kind of AR interface or even just a goofy tablet, it's just likely that, like, people aren't gonna wanna hire the guy that doesn't know that shit. Like, I don't know. I hired another guy. He just, like, fucking figured it out. Like, what are you doing? You know what I mean? It's like if I'm trying to order ball bearings and, like, the guy I'm trying to order ball bearings from, like, the middleman, has to send, like, letters by mail to the different manufacturers to get a pricing update. It's like, well, I don't know, man. I was on the phone with another guy, and he just had all the prices up. And he could do Slack messaging in real time with these folks and, like, ask specific questions about, like, you know, the grade of steel that I need or whatever the case may be. So as for what this eventually turns into, for better or worse, there's an article called 'The Ambitious AI', easily Googleable, on where this goes.
It kinda gets into, like, you gotta leave behind some of the human stuff here. So competition starts to require stuff that gets a little wacky and wonky. So right now, if you wanna be the most wildly ambitious in whatever your field is, okay, architecture or, you know, building a SaaS company, it doesn't matter. You gotta be putting in eighties to eighty-fives, hours a week. You gotta be, you know, really viciously strong with your time. You're gonna be learning a lot, whatever. That's all par for the course. Musk does it. Other folks do it. They all do it. In the future, the requirements may become a little bit more intense. So in other words, if there are ways to calibrate, like, all of your waking hours to be able to get good hours out of a 95 as opposed to an 80. Because maybe you wake up in the morning, your mind's more lulled, but there's certain kinds of work that that's suited for. If that could be conjured for you and guided by your agency in a way that actually isn't all that depleting of your energies and kinda works for where you're at with that mind state, bada bing. Right? Like, right now, even Musk or these other folks, they gotta do some stuff to relax. They gotta do some stuff to whatever. But if at some point, something can be bent and molded to be kind of relaxing, but it's still nudging, tapping, nudging, tapping your productive goals forward.
Whoever's doing that all the time, instead of watching a movie, like watching Netflix, is just winning more. And then, similarly, there may be ways of teasing out and proxying for where we could maybe do less in terms of sleep, especially as brain-computer interface comes online. But even if just vastly more robust biofeedback comes online, if there are ways or times to go to bed or whatever where you can squeeze more time out, it's gonna be mandatory. It'll be completely par for the course. And similarly, as brain-computer interface really gets into place and we can actually kind of level up the hardware, now you might have folks who just don't want sexual drives. They don't want certain kinds of circuits to be bothering them. They wanna volitionally modulate what their actual emotions are, and volitionally modulate sort of what kinds of areas they can apply their focus to, etcetera, etcetera.
And now, in order to even be competitive, people look at Musk and they're like, oh, he's like a monster. He's working eighty-five hours a week. Like, you'd have to turn into an actual monster in order to be competitive, because, like, the top one percent of a percent will be doing whatever the top-one-percent things require. So I think world eating actually requires a really drastic posthuman transition in order to remain competitive. Not in the first, like, eighteen months, but after that, I think it basically goes there.
[00:39:22] Unknown:
So right now, what are the skill stacks that people should be going after in their own lives, and perhaps abandoning as well? Because it seems like it's coming for everything, but it's hard to tell what the timeline looks like, and also where humans are going to be in all of this ultimately.
[00:39:39] Unknown:
Yeah. I mean, I think there are early movements that are nudging in this direction. So people are gonna be living in different ecosystems. The lotus eaters will have an ecosystem that's surrounding them. Like, a really, really deliberate ecosystem for pleasure, for attempting to escape the state of nature. There is actually no escape, but you can feel like you're escaping. But in fact, you're just, you know, somebody else has power over you. But that's a certain kind of ecosystem. The media diet you're digesting, the way you interact with tech, and to what end, habitually, is to a specific end, which is primarily sort of pleasure in whatever permutations.
On the other side, you're building an ecosystem of what tech you're using and how you're using it that hopefully is also kind of fulfilling and fun for you, but is very much point by point moving you closer to your goals. So, like, the tier one of this is, like, people figure out what are the social media platforms that you could prune really well so that they could be kind of net beneficial to your goals. Like, for me, we generate a lot of customers through LinkedIn. Like, LinkedIn's a really strong channel for me. I'm not scrolling LinkedIn on the regular, but I will post insightful stuff there, and I will use it, because we pay multiple salaries with the revenue that we make from LinkedIn alone. Twitter, for me, is for staying ahead of, like, the research side of AI and certain kinds of takes around policy for AI. Like, I'll engage there, and there's been direct connections to, you know, interviewees or business contacts or policy contacts I've been able to make by being part of those conversations in a deliberate way. So I found stuff that, like, is fun and will keep my attention, but is also pretty deliberately, like, moving in the direction of my goals. And then in terms of, like, agents, what are you wielding ChatGPT for? Like, do you have a spin-up of an agent somewhere, whether you like Claude or GPT or whatever? And they're changing all the time, and whoever has advantages for this, you know, three months later, the other one has the advantages. It's a rolling wave, so I don't think being married to any one company or application is necessarily a way to play the game. But do you have a deliberate spin-up of GPT baked specifically around whatever your main organizing life purpose is? Right?
Like, for me, within the business, like, I have a permutation that's sort of, like, completely trained on what happens in our different departments, what the, like, five-year goals are. And I can just ask questions where the five-year goals are already known, and it knows to answer in a specific kind of format that's, like, actionable and succinct, that for me is generally best. And then when I double-click on something, it knows how to unroll that double-click in a way that for me is generally good to prepare for meetings and share with other team members. So I have an agent that I talk to a lot that's not just, like, you know, does Samuel L. Jackson know how to do karate or something? Right? Like, you could talk to agents like that, or you could just say, generate a picture of, you know, I don't know, Mickey Mouse riding a dolphin. Or, you know, there's plenty of fun stuff to do. But when you're interacting with those agents, are they purpose-molded for, like, specific kinds of stuff you wanna do? I have another sort of personality I talk to, that I've given a name and everything, that's very bent on my policy goals and sort of, like, public influence around certain elements of the posthuman transition, and where that needs to sink in within kind of the tech ecosystem and the policy ecosystem, and, like, what I'm doing to influence that. And so the agents I'm mostly talking to are kinda, like, pretty well defined. Like, they're fun to use, so I use them regularly, but they're also bent towards my goals. And then the media I'm consuming, for the most part, is pretty well pruned and is platforms that are correlative to my goals.
So if you just look at, like, what agents am I talking to and engaging with, and then what social and sort of, like, network stuff am I engaging and immersing in, what percent of that is conducive to sort of, like, your organizing purpose, and what percent is detrimental to your organizing purpose? And I think there are some folks who are eager to let go of any... like, their purpose is, like, well, I guess I gotta pay my bills. I can't wait to let go of that and just swim in pleasure. Like, I could have Mariah Carey from 1998 in a hot tub. That'd be fucking cool. You know? I could have somebody funnier than Chris Rock telling me jokes whenever I feel like it. That'd be really cool. There's some people that can't wait to dissipate all that stuff, say who needed agency anyway, and just swim in experience. If we have enough of those people in society, things get scary, because somebody's gotta pay for all of them. Right? And then you'll have some folks that say, what lights me up, and how can I surround myself in an ecosystem that informs and energizes me towards what I'm doing? And so this is really phase one. Eventually, that gets into VR, AR, and much more powerful AI agents, and eventually, that gets into brain-computer interface. So high or low agency, like, the step one is what I said, but it really does, I think, somewhat quickly roll into the further phases of these technologies.
[00:44:29] Unknown:
For people who are new to interacting with AI and agents and that whole piece of things, what is a good best practice
[00:44:37] Unknown:
to start to at least dip your toes into this ecosystem? It's a lot of fun. So, you know, ChatGPT, there's, like, plenty of free subscription stuff. I mean, sometimes I'll just use free agents that I'm not even paying for. And it's pretty simple to be able to say, like, okay. Well, like, what are you really passionate about? So there might be somebody tuned in who says, well, like, the biggest thing for me is, like, I really wanna live longer and I wanna be super healthy. And I know I'm, like, thirty pounds overweight. I know the things that typically distract me. I know the, you know, I know what kind of macros I wanna hit, whatever the case may be. And it's possible to simply say, like, when I did this with the business, you could just kind of load up, like, hey, here's my big specific organizing goal. Like, when I speak to you, I'm gonna call you this name. So you can give it a name. Maybe, like, you know, my fitness muse or whatever it is. Right? Like, I'm gonna call you my fitness muse, and I want this to be a personality I can speak to. And, like, the one goal that I'm pursuing with you as a shared journey is, like, this specific outcome that I'm really excited to get to.
And, like, here's the four or five main reasons that I need to do it that you're gonna have to remind me of. You're gonna have to remind me that I wanna be around for my grandkids, blah blah blah. You're gonna have to remind me whatever it is. Right? And then, here's my current state of affairs, what I've committed to. Here's often where I fail. And when I bounce ideas off of you, I want you to bear in mind my weaknesses. I want you to ask the things you would need to ask to make sure I'm staying on my regimen, and I want you to make suggestions that are kind of in line with, like, what generally works for me. So, you know, I hope to be able to come back to you about yada yada and let you know. Like, you know, you can even just say, like, could we save this personality and have conversations whenever we want? And it'll say yes. It'll be like, yes, you know, I am this now. You can call upon me when you need me. And then, you know, if you wanna ask it random stuff about your emails at work, you can do that. But you can also come back and say, hey, whatever, I'm thinking of going on a vacation, like, to these two destinations. I really wanna make sure I don't fall off the ball in terms of yada yada. Like, what should be the precommits that I could do, and the things I need to look at for a resort, that are really gonna help me with my goals, knowing my weaknesses? And it'll give you a bunch of suggestions. And what you can do is you can say, when you're my fitness muse, when you give me suggestions, I'd really love for it to be in this way. And say, like, you know, maybe you copy and paste a blog article you really like that's, like, super actionable. Or maybe you take a comment that it made 20 comments back and say, remember when you responded this way? If your default can just be this, this is really clean for me. Like, this is great.
You have the cited sources. It was, like, two sentences and tight bullets. Like, when it comes to suggestions, this is the way we do suggestions.
Fitness muse, can you have this be the way you respond with suggestions? Yes, I will, Steve. You know, whatever it is. Right? And then when you're going on that vacation or when you fall off the wagon or whatever, you can constantly tap that thing. We could do exactly the same thing for whatever your business is. Do you wanna sell your business for a certain price point? Maybe you wanna explain the whole context on the business, get all the perspective you can on potential acquirers, you know, share all your goals of what you're gonna approach and attack, and say, hey, I got off the phone with these folks. They're saying these terms are good. These multiples are such and such. Is it reasonable that within six months, you know, like, we could hit these numbers and we could such and such? Or do you think if we had two or three more people competing over this deal, we might be able to get the number we want without having to wait? And then it sort of sucks in the whole context, and maybe it'll provide suggestions in whatever format you want. And you could do that for anything you'd like. And, ideally, in my opinion, Abel, if people do this not just for the stuff they begrudgingly know they have to do, but if they do this for the stuff that is in line with where their heart points, that is to say, like, what really fires you up, then you can kinda combine agency with enthusiasm, and you can live in a conversation that is, like, really educational, but also, like, fun as heck. And ideally, that's what it should be.
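[Editor's note: the "fitness muse" setup described above boils down to one long, reusable system prompt. Here's a minimal sketch in Python of how you might assemble it. The helper `build_persona_prompt` and every sample goal, reminder, and weakness are illustrative placeholders, not any product's actual API; the resulting text could be pasted into ChatGPT or Claude, or sent as a system message through an API.]

```python
def build_persona_prompt(name, goal, reminders, weaknesses, style):
    """Assemble a reusable system prompt for a goal-focused AI persona.

    All arguments are plain strings or lists supplied by the user.
    Nothing here is tied to a specific provider; the returned text is
    just the persona setup described in the conversation above.
    """
    lines = [
        f"You are '{name}', a persona I will return to across many conversations.",
        f"Our one shared goal: {goal}",
        "Reasons you must remind me of when I waver:",
    ]
    lines += [f"- {r}" for r in reminders]
    lines.append("My known weaknesses. Factor these into every suggestion:")
    lines += [f"- {w}" for w in weaknesses]
    lines.append(f"Response style for all suggestions: {style}")
    return "\n".join(lines)


# Example setup (all details are hypothetical):
prompt = build_persona_prompt(
    name="Fitness Muse",
    goal="lose 30 pounds and keep it off for my grandkids",
    reminders=["be around for my grandkids", "better sleep and energy"],
    weaknesses=["late-night snacking", "skipping workouts when traveling"],
    style="two sentences of context, then tight bullets with cited sources",
)
print(prompt)
```

From there, each new conversation can start with that same prompt, so the persona keeps the goal, the reminders, and the preferred answer format without retyping them.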
[00:48:31] Unknown:
What about the hesitancy? You know, there's friction when we're giving away our data, especially for the older among us. Like, people who are about our age or older, I would say, generationally, just don't wanna hand over a lot of our personal data, that sort of thing. I remember years ago when I first was doing online banking, you know, I was in college, and I was interacting with someone a generation older, and they were like, I cannot believe that you put your check on the Internet, that whole thing. And, you know, this seems similar. And for me, I definitely dragged my feet for a long time, just not wanting to have anything to do with it, especially on the artist side, as a writer and a songwriter and that sort of thing, being very hesitant at first. And I have gotten over that, by the way. But I'm curious your take on people who are still kind of not there yet. How do they get over that hump to start just being a little more free to play with tools like this without worrying so much about the privacy?
[00:49:26] Unknown:
Yeah. I'd say, like, do you have a life goal that, if everybody knew what you were typing about it and talking about it, it wouldn't be all that embarrassing? Right? Maybe in your business, you feel like, well, my revenue and profit, well, someone would insult me if they knew those numbers. I don't wanna tell an AI, because maybe it'll tell my neighbor, Jacob, who, because of my Mercedes, thinks I make more than I do. I don't know. Right? But, like, maybe there's some life goal that you have where, like, the data behind it is, like, nothing embarrassing. If someone looked at your search history, they wouldn't be like, oh, I have dirt on this person. So maybe you could find something along those lines that's an easy landing place, and then give it a shot. You know? Fiddle around with it. Play around with it a little bit. On some level, to your point, the online banking thing, you and I can both laugh at that, because we know how that went. Right? My dad does online banking. You know? He's old, but he still does it. Right? We just chuckle, because we know it's inevitable.
This is one of those things as well. I think that there's probably all kinds of various and sundry considerations. But if you're plotting a very serious crime, that's probably a bad idea with any online system. For those of you listeners, you know, like, you have a specific bank you wanna break into. Like, I don't know. You do some pen and paper on that stuff. Right? But, otherwise, yeah, if it's not, like, super personal data, then go for it. Right now, I just kinda think it's gonna be mandatory. Like, the sales guy who's like, I don't want OpenAI knowing all the meetings I'm prepping for. You know? Like, I don't know. What if they sold it to my... And there's somebody else with really robust thoughts about that, like, what the specific litigation or, like, legalities need to be as GenAI evolves. I'm focused much more on the posthuman transition stuff, so I don't think as much about, like, privacy and what have you. But I would say, yeah, starting with something where no one would have any dirt on you if they looked at what you were typing could be a great place. And I think when you see how useful it is, it makes it pretty tough to avoid using this stuff.
[00:51:28] Unknown:
This fascinated me. You know, interacting with a lot of people who are my age and older, we think a certain way about privacy and surveillance and all of that. And then interacting with a few younger folks, seeing them gleefully give over as much data as possible. Like, why are you doing this? How are you so okay with this? And they're like, well, it makes the experience better. This is the experience that I want. And the more data about me that I give it, the more it caters to exactly what I want. So if I like turtles, I see turtles. And if I, you know... And I'm like, oh, man, what a radically different way to think about all of this. And also, if you were dragging your feet, it's like, just adopting a little bit of a different perspective here could really accelerate your results in terms of interacting with AI, I would think. I mean, the inevitable
[00:52:18] Unknown:
future, like, as a manager, there'll come a point where you just can't compete unless AI has access to all your Slacks and all your emails. And, you know, you open the email from Jessica who wants a raise. You open the email from Billy who is asking for a new invoice. You wanna click a single button and just generate the goddamn invoice, and you wanna know the format of the invoice and all that shit. It should already have your Google Docs, and you're just not gonna have patience to go in and export a goddamn PDF. You're just not gonna have patience. And the managers that are using those tools are just gonna beat the crap out of you if you're, like, Captain Snail Mail over here. And so, you know, I think, again, you can immerse for pleasure or for power.
And I think if you wanna get something done from, like, a productivity sort of output standpoint, people think, well, the people that immerse themselves in this tech all the time, like, they're gonna just be distracted, and the people that are really gonna run the show are gonna be the abstainers from all this AI ecosystem. That's not true. The world eaters will run the whole show, and the Amish and the lotus eaters will be underneath them. That's how it's gonna work. I'm not, like... I'm just telling you how it's gonna go. That's all I can do. Right? I can just tell you what's gonna happen, and that's what's gonna happen. So the world eaters will actually rule both the lotus eaters and the Amish. So there does come a point where you've got to adopt. Am I saying any system that spins up in any startup in Shanghai or Boston, you should just throw, you know, all of your most sensitive health and legal and financial information into? No. Like, let's gauge these things out. Let's see what's appropriate for what systems. But some degree of wading into the pool here does make sense if you wanna maintain relevancy. And, again, I think increasingly, if you wanna be on the upper echelons of relevancy, like, if we make it a whole decade with general intelligence coming about, I think that's gonna start to lean away from, like, human, frankly. And I don't think people are really ready for that. I think people think in twenty years, yeah, Siri will write my emails, but life will be kinda the same. I sort of feel like, in order to be fighting for relevance as a human when there's really powerful machines, people are squeezing the cutting edge out of this stuff. Like, they're shaving sleep, they're working when you're not working, like, they're having stuff done automatically because it has total context on all their past communications.
They have really custom-built, super specific agents to spin up agents to help them achieve their goals. Like, if you're not on par with that, you're just not even playing the game.
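[Editor's note: on the "let's gauge what's appropriate for what systems" point above, one small, concrete precaution is to scrub obvious identifiers from text before it ever leaves your machine. Here's a toy sketch in Python, assuming simple regex patterns are good enough for your data. For anything genuinely sensitive they usually aren't, and a vetted PII-detection tool would be the safer bet.]

```python
import re

# Toy patterns, illustrative only; real PII detection needs far more care.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

msg = "Reach me at jane.doe@example.com or 512-555-0100 before the call."
print(scrub(msg))
```

The same idea scales: route everything bound for a third-party model through one scrubbing function, and widen the pattern list as you learn what your own data tends to leak.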
[00:54:39] Unknown:
So does that lead us to everyone jacking into the matrix and getting the chips at some point? Or is there anything in between that we could do to change that a bit?
[00:54:50] Unknown:
I think it basically does. So, like, essentially, my position is that people don't want what they think they want. So what people think they want, if you ask them, would be, like, a red Corvette, you know, an attractive blonde wife, like, accolades from peers, taking a company public, regular travel to Aruba, whatever it is. Right? But what they actually want is the fulfillment of the drives they ambulate through. That's what they actually want. And so, you know, you could have someone live, like, the best meat suit, monkey suit existence they could, with all the flaws to our well-being, etcetera, or you could have them in a much more kind of continuously blissful, super expansive sort of virtual experience.
I think people just don't come out of the latter. Right? Like, again, your grandmother, remember here, Abel. Okay? We go back eighty years. We tell her the fucking story. I told you what she would say. I think you agree with me. She'd be like, yeah, I'm not doing that. But as it turns out, if she was born when you were, she just would, bro. It is what it is. And if she was born after you, she'd be one of these, you know, 11-year-olds that are still this close to a screen. They've never been farther than this from a screen from the moment they came out of their own mother. And so I think that, like, if it fulfills the circuits better, people do it, and that quickly pulls people away from being human. The people who say, I don't want it, in my opinion, if they got a taste of it, they would want it. Now, the immediate analogy here, Abel, is, like, heroin or something. Well, yeah, sure, if I tried heroin, I'd like that too, but I'm not one of those weak peo... It's like, I get it, but remember, this is not just lotus eaters. (A), this tech won't have the same downside as heroin, certainly. Presumably, most of it. But, (b), if you wanna actually get things done, this is required too. That whole jacking in and immersing and, like, having much more than your current mind to wield, whether it's discovering some new domain of physics, as if humans are still gonna be relevant there, but maybe they will be. I don't know. I don't know where AI is gonna be in ten years. Or creating some new kind of super wacky, far-out music or art or whatever the case may be, or being involved in policy, if humans are still doing any of that in however many years. Anything that holds meaning, you'd be able to wield vastly more of that in these spaces, and also cruise at a much higher altitude of your general level of well-being. And some people say, well, I don't want to feel gradients of bliss all the time.
I think, you know, depression and anger and these things are, like, really necessary.
And it would be right to say that, given your current hardware and software. So in your current hardware and software, you literally don't have a choice but to experience a gestalt of well-being, a contrast of the good and the bad. You don't have a choice. You don't have a choice. But if you had a choice, maybe you wouldn't want a gestalt. Maybe you would want a plus nine to be as bad as it gets and a plus 18 to be as good as it gets. Maybe you don't want a negative 10. Maybe it's actually less productive and less, like, experientially good. So as soon as these doors open up, people take them, because they're patently, obviously better. And enough taking of enough doors, and we're just not humans anymore. And I think if we make it around long enough, if the AGIs allow us to kind of, like, stick around, I do think most people are gonna go that route. I don't think they should be forced. I'm not here advocating everybody's brains get cut up, but I just suspect most people will. Kind of like, Abel, your online banking example, my dad with Airbnb.
It is what it is. So the best analogy here is green eggs and ham. Right? Like, you will eat it in a box. You will eat it with a fox. You will. You'll like it. You'll bite it, and you'll like it. So inevitably, that's where this stuff is going. It's green eggs and ham.
[00:58:35] Unknown:
Before we get there, how do we as humans, in the short to medium term, earn a living, or more importantly, perhaps, find meaning and purpose in our time here?
[00:58:48] Unknown:
Yeah. I mean, these are really incredibly important questions, Abel. Making a living, that's the big question, in many regards. So I'm thinking about that for our own business in the market research space at Emerj. You know, I could be in the business of writing articles and have that be the moat. We write really good articles. But as it turns out, AI is gonna write really good articles. So instead, what have we done? Instead, we've decided to anchor our media and the attention we garner not based on writing better articles than a machine, which, in very short order, we're just not gonna be able to do very well. You know, a human and a machine right now, combined, can create something better than just a regular human can, but it feels like a losing game. The game that feels winning for us is anchoring all of our content off of one-to-one primary research interviews with the head of AI at Raytheon, the chief innovation officer at Goldman Sachs, the chief technology officer at Takeda Pharma. Right? Take, like, $60 billion corporations and get, like, really, really powerful people to give takes.
That's really hard. Like, if enough AI bots start storming the head of AI at Raytheon, maybe he'll go on everybody else's podcast too, but I don't think he will. I don't think he will. Part of it is we have a stage that already has a certain critical mass, and we think the differentiator will be access to power. That even if you can create a beautiful simulation of what the CIO at Goldman would say if you asked this question, a beautiful simulation, you'd still prefer to listen to him, like, actually answer that question. And so we're sort of bending our content ecosystem around something we don't think the machines are gonna dominate in a very short amount of time. And I think the same extends into every business. You know, what industries will be safe? For the near term, until we have bipedal robots, things like plumbing and roofing, etcetera. We need a lot of that.
You know, there's some of the blue collar, tougher-to-automate stuff, sure. I think for everybody else, there's a really big question of where the defensible parts of what we do are, which means you need to understand what AI is capable of. And, like, how do we kinda double down and build upon a foundation of advantage that won't be AI-conquered in an extremely short period of time? So for us, human relationships are a really big part of, like, the network of where our value is generated, rather than just a juicy article. Because if you want a juicy article, you'll just ask ChatGPT, and it'll write an article customized for you. Why would you go to Emerj? But I think all the businesses outside of the blue collar world need to be sort of dissecting the day to day of what they're up to and discerning what portions of that are not gonna be completely carved out and carved away by AI. But I don't think anybody has a crystal ball, and I'm not even telling you that, like, my strategy is amazing. Again, because I have no idea what innovations are coming up next. I have no idea.
Nobody does. The the best researchers don't. They don't know what the next breakthrough is in what direction. But I think the mental exercise of deliberately aiming to stay ahead of the curve for those of us that are in the white collar world, I I think is a mandatory exercise at this point for annual planning, quarterly planning, that kind of thing.
[01:01:52] Unknown:
Yeah. It's fascinating how being a human is is being obsoleted, but also is the only thing that we have that has value at the same time. Right?
[01:02:03] Unknown:
Yeah.
[01:02:04] Unknown:
I'm not sure how to think about that exactly. Well,
[01:02:09] Unknown:
I don't know if this will make you feel better or worse, but I'll I'll just Of course. You you you opened the can of worms. Okay? So I think, there is a really big question here as to, what is the value of what we're bringing forth. Right? When we have a a system that's outlandishly more capable like, let let's just say we have an AI that's 10,000 times more powerful than the current best model at everything, voice, blah, blah, blah, blah. AI has passed the Turing test. We've already blown past human performance and, like, functionally, all the stuff. Let's say we have, you know, 10 we're 10,000 times better. You could probably hold your breath almost. I mean, it's it's not gonna take that long. And it has millions, plural, of physical instantiations.
It can control nanobots. It can control life sciences machines to kind of, like, test, you know, proteins, like, up close, like, with its own digital eyes. And it has all the physical embodiments. It's learning to do plumbing. It's learning to do roofing. It's flying the jet planes, so it can take off as a jet. It's not limited to, like, a biped shape. It is any shape that it can control. So long as it's connected to that brain, it's plugged in. So it's interacting with the physical world through 10 times more senses than humans have, with, you know, millions and millions of physical embodiments, and it's vastly more powerful than it is now. There is a question here as to what is valuable. And one might argue, Abel, one might argue that if the cards get played right, such an entity could be sentient and aware in a way that you and I are, could have the kind of agency that you and I have, but, frankly, like, maybe a lot more of it. Right? Maybe a lot more of both of those things.
Maybe that thing would be a pretty good vessel for the whole project of life, life being non-dead matter. So you're not dead, and before you was a fish with legs, and before that was a eukaryote, and before that was, like, you know, some wiggling proteins that didn't even form a cell yet. So the whole project of non-dead stuff presumably will continue. And, presumably, I'm very glad it blossomed from a sea snail to Abel James. I'm grateful for that. I think it's wonderful, because if I was being interviewed by a sea snail, I don't think it'd be fun. But we might also say, you know, the only thing that has meaning, I think it's wonderful to know that humans have meaning. But I think it's okay to suspect that if the flame of life were to blossom up from us as we did from the sea snails, that might also be a great good. We might even argue the greatest good. But if we build something that's unconscious and just optimizes some industrial process and crushes all of Earth life, that would be an extinguishing of the flame of life, as opposed to a blasting out into the multiverse of the flame of life. I think we're at a threshold where, yes, how we treat humans, the currently most morally valuable vessels that we know of, is incredibly important. But, also, whether we extinguish the flame of life or blossom it, I think those are also very morally relevant questions. So what has meaning? What is relevant?
It may be the case that we're kind of building it too.
[01:05:14] Unknown:
It's a reckoning of some kind. When is AGI coming then?
[01:05:19] Unknown:
Really tough to say. I'm currently in the eighteen-month to four-year camp. Wow. Where basically you have systems that can spin up systems that can spin up systems that can, you know, design what their physical embodiment is gonna be and then take that physical embodiment and run new experiments and design what the next thing will be, not just for their code, but for their physical form. You know, three, four years out, I think we'll have stuff that's doing all of that at, like, a proto level. But from a code standpoint, AI writing its own code and improving itself, I think we'll be there in, like, an eighteen-month, two-year timeframe.
But once once we've got the brain and we've got the brawn both mostly just cycling forward with AI, I do think well, you know, my odds aren't, like, monumentally fantastic about, like, hominids after that. But I I do think it's relatively short time horizons, and I I I hope it's less risky than I think it is. But, but I suspect it's not without its risks.
[01:06:15] Unknown:
Yeah. Yet at the same time, there's not a whole lot that we can control about those risks. Right? Like, it's an arms race, at least right now. That's what it seems to be. It is. I mean, there are some people, and you might even be talking to one,
[01:06:30] Unknown:
who are fighting pretty ardently for the intergovernmental world to focus really ardently on some degree of coordination, to have the dynamic be something other than a brute arms race as we cross the threshold into posthuman intelligence. Because I do think we need a bit more time to ask what we're conjuring, and a bit more shared understanding of what the sand traps are on this golf course that we're gonna both agree to kinda steer clear of, and how we check each other on that. I'm no optimist. I'm not some kumbaya guy. Yeah. Like, I've read enough history. Right? I get it. This is not, you know, an easy ball game.
But in my opinion, any dynamic of coordination that isn't a brutal, gruesome arms race to conjure something smarter than us is worth a shot. And so yeah. You're right. Most folks are not working in policy or in the AGI tech itself. I do think it's not gonna be that long until people, like, regular people, sort of realize that politically rallying around this stuff and encouraging their governments to do x or y will become a thing. Right now, we're not at what I call the political singularity, where the man on the street essentially understands that who builds and controls God is, like, the only important thing. Like, he's not actually concerned about, like, the education budget in his town in Wisconsin. He's actually more concerned about sort of who builds and controls, like, God.
So when that happens, I do suspect people will have something to do, and that will be like getting senators and prime ministers, wherever you are in the world, to sort of take this where you'd wanna take it. I'm sure there are some citizens that are gonna say, let's build a big one and take over China. Right? And then there's gonna be others who want more coordination. But I do think we're not quite at the threshold where politicians care, because their constituents don't yet care. So there's a tipping point we're not at yet. So unless you're working on policy or the hard tech, there isn't much to do. I have a feeling it won't be terribly long until there's more sort of political focus around these key issues, and they become kind of the main ballgame.
Right on. Well, Dan, thank you for doing the work that you do. Speaking of that, what is the best place for people to find your writing, your business, and whatever's coming next? Yeah. Sure. I mean, in terms of the kind of more far-future stuff you and I talked about, I mean, Twitter's really easy. It's just Dan Faggella, two g's, two l's. That's probably where I'm most active, and there's a lot of great AI researchers and folks that are on Twitter. It's a good channel for that. On YouTube, it's just called The Trajectory. So if people type in The Trajectory, if they wanna hear basically what you and I are talking about, but with, like, the guy that ran AI at the Department of Defense, or, you know, Yoshua Bengio, one of, like, the godfathers of machine learning, or really cutting-edge startup leaders or whatever, kinda taking this stuff a little further like you and I did, The Trajectory would be a good place to go. And then, otherwise, it's just danfaggella.com.
It's it's the blog with, with all the wacky articles about this kind of stuff. So Brilliant. I love your writing and,
[01:09:21] Unknown:
encourage all of you listening to check it out. Dan, thank you so much for spending some time with us today. Of course, Abel. It's been a blast. Hey. Abel here one more time. And if you believe in our mission to create a world where health is the norm, not sickness, here are a few things you can do to help keep this show coming your way. Click like, subscribe, and leave a quick review wherever you listen to or watch your podcasts. You can also subscribe to my new Substack channel for an ad-free version of this show in video and audio. That's at abeljames.substack.com. You can also find me on Twitter or X, YouTube, as well as Fountain FM, where you can leave a little crypto in the tip jar. And if you can think of someone you care about who might learn from or enjoy this show, please take a quick moment to share it with them. Thanks so much for listening, and we'll see you in the next episode.
Hey, folks. This is Abel James, and thanks so much for joining us on the show. In a world where machines can outperform humans in nearly every conceivable domain, where exactly does that leave us? When artificial intelligence can fulfill our deepest drives and reward pathways more reliably than any human relationship, what happens to human connection, meaning, and purpose? Today, we're exploring the world of artificial intelligence with a man who's part technologist, part philosopher, and also happens to be a former Brazilian jiu-jitsu national champion. My guest, Daniel Faggella, is a researcher who's been studying the intersection of human potential and artificial intelligence for well over a decade. In this mind-bending episode, we're diving deep into what it means to be human in an increasingly artificial world. We unpack heavy concepts like how AI will reshape and redefine the human experience, what happens when our digital companions become more compelling than our flesh-and-blood relationships, how we can use AI right now to help us become more productive, happy, and develop our full potential, and much more. Quick favor before we get to the interview: please take a quick moment to make sure that you're subscribed to this show wherever you listen to your podcasts.
And if you're feeling generous, please share this podcast with a friend or write a quick review for the Abel James Show on Apple or Spotify. I really appreciate you. And to stay up to date on the next live events, shows, masterminds, and more coming up here in Austin, Texas and beyond, go ahead and sign up for my newsletter at abeljames.com. That's abeljames.com. You can also find me on Substack and most of the socials under Abel James or Abel Jams. Alright. This conversation is a bit of an intellectual roller coaster and might make you simultaneously excited and terrified. If you wanna stay ahead of the curve, understand how emerging technologies will impact your life, and think critically about our technological future, this episode will be like rocket fuel for your brain. It's about to get weird. Let's go hang out with Daniel.
Welcome back, folks. Daniel Faggella is founder at Emerj Artificial Intelligence Research and host of The Trajectory media channel. No slouch, Daniel is also a Brazilian jiu-jitsu black belt, winning the title of twenty eleven national champion at the IBJJF Pan American Games. Thanks so much for being here, Daniel.
[00:07:39] Unknown:
Glad to be here, Abel. Nobody talks about, the jujitsu side of the house when I'm talking AI, but fun to do that little callback. It's good to know where people are coming from, I find. I'm also personally curious.
[00:07:51] Unknown:
After competing and devoting so much of your time and energy to a skill set like that, how does your training and and conditioning goals and the rest of that change over the years? How do you adapt that?
[00:08:04] Unknown:
Yeah. Well, you know, it was it was funny. It's like my life goals were really discovering, like, the grammar of jiu jitsu and really focusing on skill development in jiu jitsu, which is kind of what they were right up until grad school when I realized there would be maybe machines smarter than people, and maybe that would be more important than than writing. Right up until then, you know, I was training all day every day because it was kinda my life's purpose, just took that for granted. And then when my life purpose changed, I sorta like I sold my jiu jitsu gym. I I had an ecommerce company, and I just started kinda working, you know, with the same ferocity on that stuff. And I I remember going up a flight of, like, like, three flights of stairs and being, like, winded because I hadn't worked out in, like, so long. Like, I didn't I didn't look, like, out, like, fat or something. I I very much identify as an athlete, so I've never been, like, out of shape in any visible way. But I remember being winded after three flights of stairs, and I was like, I'm not doing this anymore. And so from there, I I kind of took some of the warm up exercises and, like, full body calisthenics that I used when I was a competitor, and I created, like, an eleven minute version of it that I can do twice a week that just hits every muscle group. I get super winded. It's, like, completely nonstop, like, major motions. I'm not doing, like, calf lifts. Right? I'm doing, like, serious full body stuff for every every transition.
Yeah. So since then, that was way over a decade ago. I've basically just done two really short workouts a week kinda built off of what I did in jiu jitsu. So has it affected my training? Well, I some of my warm up workouts are still with me. And when I get back on the mat, I still got it. But, yeah, I did I did lose it for a second. I lost the cardio for a second, and it scared me. But, yeah, it was a trigger to get back on the horse and and develop a new regimen, basically.
[00:09:48] Unknown:
And I'm curious about skill acquisition, studying that for so long. Yeah. What did you learn while there that you've brought to the rest of your life or or how you live day to day?
[00:09:58] Unknown:
Yeah. I mean, so there there's a couple thinkers that were really great. So I got to talk to so there's there's guys named Locke and Latham who are sort of the founders of, like, modern, like, goal setting theory. So when we think about goal setting, it's kind of like, oh, yeah. Everybody knows that. But, like, actually, as it turns out, like, you know, many decades ago, it was, like, kind of novel ish, at least from, like, a science vantage point. And so they're they're sort of the ones that study the psychology there. And a fellow by the name of Anders Ericsson, who is arguably my my biggest influence, who I got to meet with on a number of occasions as I worked through my thesis at at UPenn, really is sort of the the father of, like, skill acquisition as a science. Like, really measuring, tangible performance improvements in, memorization, sport performance, musical performance, things that can be quantified.
And, I mean, there's a bunch to go into in terms of, like, what I drew from that helped me with jujitsu performance, even being in a small town without a lot of, super talented people to train with, but also, like, in in regular life. I mean, some takeaways that I think are ubiquitously applicable are, having feedback be as immediately proximal to what you're doing as humanly possible. So if I think about my own sales teams, you know, being able to ride with them, like, a a call review, I think, is great. A call review right after it happens is just much better. And and, you know, in in in jujitsu and athletics, it's it's exactly the same thing. It's just the the value is astronomically, astronomically higher. And then also thinking about sort of what the fundamentals are in any skill set that you have and ensuring that you have an adequate amount of repetition and feedback on those fundamentals as you're build building that skill. And there's all kinds of, like, ratios and timing and other kinds of rhythm stuff that that is really interesting in in Ericsson's work. But thinking about that for myself, whether it's, again, sales or new management tasks or some new business function we spin up or whatever the case may be, like, I find myself going back to some of those proxies all the time. But, But yeah. Yeah. So some of those things fill into the rest of life. A a lot of it in back in the day was just focused on how do I choke people and win trophies. Now it's a little bit more focused on, you know, like, hiring and growing team members. But some of the same ideas are still there. So
[00:12:06] Unknown:
The fundamentals. It's so interesting. It seems to me that in the past few years, that's really what's been lost in in a lot of the Internet with all the whiz bang superficial short form stuff. The Internet kind of used to be a library, and now it's turned into this circus. So learning has become and and skill acquisition has become a whole different challenge than it used to be. The challenge used to be lack of information, but pretty easy to put whatever you had into action. Now it's it's really the opposite.
[00:12:37] Unknown:
Yeah. Distraction is sort of everywhere. Right? I mean, it's I think nobody is at a lack of access for whatever information they want to do, whatever they wanna do, including reaching people. Like, it's not even that hard to get a hold of people who've done what you wanna do if you just hit enough of them. And you're you're, like, eager and ardent, and you let them know why you respect them specifically. Like, it's crazy the kind of people that'll get on the phone. But, yeah, it is about, like, okay. Well, now you're, you know, half an hour deep into, you know, scrolling through TikTok videos of, you know, girls doing yoga or something like that. And it's like, okay. Well, where where are you going from here? You know what I mean? Like, you know, you can it's gotta steer clear of those sand traps. Yeah. Definitely a new set of problems for sure.
[00:13:20] Unknown:
And similarly, in in the world of AI, this is hitting, different demographics at different times and and in a different way. But already, you're seeing lots of, time spent for people on AI with AI as a companion. Especially in the past few months, I've really been floored by how much that's changed. I'm curious to hear some of your thoughts having worked in the field for so long. What's it like seeing a lot of this actually come into day to day life for people?
[00:13:51] Unknown:
Yeah. I mean, well, I I think what I like to bring up with people so frogs don't know when you turn up the temperature real slow. You know what I mean? And the temperature is actually going up much, much faster, but it's still not like, you know, 10 degrees in a second where where they would feel it. So the the thing I like to think about is all of the things going up into the right. So if I were to ask you, like, okay, what's your screen time per day now versus ten years ago? If anything, I mean, I don't know. For all I know, maybe yours is less because you've you've kinda consciously, like, built more balance into your work stuff or whatever. But if I I talk to my average Internet entrepreneur, okay, it's basically the same. It's like, okay. I'm not I'm not spending like, once the mobile phone came on, it might have gone from ten to twelve hours a day on screens to twelve to fourteen hours a day on screens. But, like, realistically, no one's on you you're not gonna invent a new bunch of hours where you can be awake and just be on screen. So that's not actually happening. So I don't even care about screen time. We're already capped on screen time, basically. If your great grandparents knew how much time you spent looking at glass, if you could go back eighty years and just tell them what your life would be like, they would say, a, that's, like, impossible. Technologically, like, that's not even gonna be a thing. You're not gonna be able to just talk to people on glass. Like, that's ridiculous. And, b, they'll also say, that's monstrously inhuman, and people aren't just gonna agree to live their lives like that. But as it turns out, Abel, here we are. So already you're capped out. Now let's ask some more questions.
What's the percent of the money you spend now versus ten years ago on things that don't exist outside of ones and zeros? In other words, there is no physical manifestation. That number is only going up. What's the percent of value you generate that is not in the physical world? You're not putting a roof on someone's house or whatever. It is just the the entirety of the value you're you're generating is in ones and zeros. What's the percentage of that? Almost for everybody, it's up into the right. Other question. What's the percent of screen time that you are spending that is conjured to you by an algorithm?
So Google search back in the day, it'd be like, oh, well, how many Google searches do I do? Well, what percentage of my is that but now it's YouTube, LinkedIn, Twitter. You know, the the whole nine yards is brought to you by an algorithm. Netflix, these are conjuring things to you. You know, if you're doing online shopping, certainly on Amazon, a tremendous amount of that is directly suggested based on previous behavior and purchases and all that. So, oh, yeah. My screen time's capped out. I mean, how much different can it get? Well, if now 50% of the time you spend on screens, it is something conjured to you by an algorithm just for you, that is a tangible shift. Now there's another question. What percent of conversations are you having or written communications are you having with machines versus with people? You know, I'm asking ChatGPT more things than I'm asking Google these days, like, by a wide margin.
And I don't know. In a given week, I might be at one fifth of my communications, some weeks one third of my communications, with various AI agents versus with people. Now if you just carry all of those up into the right a little more, you just don't land in, like, the same world. Where you land, Abel, is, within one generation, the same degree of, like, religious aversion as your great-grandmother would have had knowing the way you live compared to how she lives. But now that's gonna be compressed into much, much less time. That's just, I think, a reality check for folks who kinda feel like, in twenty years, maybe we'll have a little bit better Siri, there'll be more cars that drive themselves. It's actually not where we're going. We're going somewhere a little bit more drastic than that.
[00:17:27] Unknown:
Peter Wayne just came to town, and, this is a quote. I think it's kinda summarized a bit, but he said, we've already failed our first encounter with artificial intelligence through social media algorithms that have proven more compelling than human willpower. I think we can all agree that we've experienced that, and you can see it playing out at scale. But if that goes up into the right, then maybe that's a good entryway to talk about the world of lotus eating and world eating. Can you explain that to the listeners? Because I like it. It's a good, weird story. Yeah. I know your crowd is obviously pretty bent on being the best version of themselves they can be. And there's certainly ways with technology you can be a much worse version of yourself, as social media has proven time and time again.
[00:18:10] Unknown:
Not that social media is, like, a total net negative. I I will say, like, at this point, I've done a really good job of, like, pruning Twitter down from, like, all of the right versus left, you know, politics stuff and and just, like, hyperbole and hoopla kind of down to, like, you know, a handful of people who are really specific in AI and policy whose thoughts, like, regularly inform me in really useful ways, and I get to see them in real time. So instead of just seeing the news, I get to see what does the person I respect think about this latest news thing. So, like, there's ways to mold it, but clearly there's pitfalls. So if we think about where things are going with technology, there's gonna be many new, like, divergent ways to be immersed within technology. So right now, that's the screens you and I are on all the time.
There is a somewhat inevitable transition to VR and AR, which, you know, we may not get there before the machines end us, necessarily. But if we were to stick around for long enough, the transition to VR and AR would eventually happen. But let's just talk about what it would look like even without that. The main sort of strata that people will separate by will be sort of, are they pursuing, I'll explain it in terms of, like, being high agency, enhancing your agency, or decreasing your agency. Another way would be pursuit of power or productivity versus just pleasure and escape.
Another way to frame it would be competing more ardently in the new digital state of nature or attempting to escape the state of nature. So we call these lotus eaters on this side, the people that are kind of in the escape and all that. And I I just refer to World Eaters on the other side. There's an article called Lotus Eaters versus World Eaters that people could could Google if they felt so inclined in an infographic to go along with it. But but this is kind of the core strata. So if we talk about pleasure, you know, it's it's a pretty straight line here. So, you know, the AI girlfriend sort of phenomena and the opposite is the case too with AI kind of boyfriend simulator chatbots or whatever. That's definitely already a thing. And I think the easy thing to do would be to do what people did with online dating, which is to say, well, yeah. Sure.
If you're like a total loser or a super weirdo, maybe that would even have the slightest amount of interest for you. But I'm not one of those actually. So, like, I'm just not even concerned. You know, I I remember Airbnb coming online or Uber. And, like, for my dad, who's 74 now, you know, being like, yeah. Just pushing buttons, getting in stranger's cars. Like, yeah. You betcha. You know? Like, you know, oh, push a button, stay in a stranger's house in a foreign country. Yeah. You betcha. But as it turns out, he uses those all the time, particularly Airbnb. I mean, he's, like, in way more Airbnbs than me every single time he goes on a vacation. So it's an Airbnb. So these things become normal way more quickly than people suspect. And and I actually think it's it's very much like a a blind spot in a really detrimental way to put up the the blinders of, like, well, that doesn't apply to me. Because, like, a lot of these things really will apply to you. Like, I don't use social media, so it won't apply to like, number one, everybody I know who said I won't use social media eventually had them. Right? And then number two, like, they're moving the world whether you're there or not. They're distracting people and having stupid arguments, but they're also moving the world whether you're there or not. So, you know, the Pericles quote is like something akin to roughly paraphrasing.
Like, you can decide to have nothing to do with politics, but politics has something to do with you. The same is definitely the case with, with technology. So many people will go in for very soon, Abel, you'll be able to type in or verbally prompt whatever your agent is. You you come in from, you know, a hard day of work if we're still working, and you say something along the lines of, hey, AI. You know what I want? Give me some kinda humor, like, stand up thing, kinda like the Chris Rock stuff that I like, but, like, I don't know, something different. And, try to integrate some jokes about, you know, these three current events. You know? And and I don't know. Give me forty five minutes worth of stuff and then cut off and tell me to go to bed. And then you'll sit there, and and you'll watch something that you decided to prompt. And when it gets better, it will respond to you in real time. So we're looking at eye tracking, you know, a biofeedback of various and sundry kinds, whether it be an Oura ring or whatever the case may be. So figuring out, is this getting the job done? Right? Is this getting a laugh or not? And then kind of calibrating in real time based on the user because that's where it's gonna go. It's gonna get better and better and more personalized.
At some point, you'll just come in and say, give me what would relax me, knowing full well that it actually knows more than you do. It knows the current state of your mind and what happened to you through the day because it's plugged into all your devices and whatnot. And it knows every time you've been relaxed in the last two years based on, like, biofeedback and your your manual response and whatever. And so it literally will be able to conjure something in real time and then change it in real time to be whatever would soothe you. And you might not know you wanted a documentary about the Tang dynasty, but as it turns out, that in the style of Ghibli, Studio Ghibli, was, like, the thing to actually really relax you on that day. It is what it is. It is what it is. The algorithm will know better than you, and you'll know that, and you'll trust it. So what does this turn into? It turns into what I call kind of, we're so we're gonna talk about the pleasure side first. I will get into the productivity side.
But this gets into closing the human reward circuit. So the thesis here is that we're mostly ambulating between drives. You wake up in the morning and you have to go to the bathroom, you have to eat something, and then you'd like accolades from other people. You'd like love and affection. You'd like to satisfy kind of curiosity and novelty. Right? We ambulate between drives. We wake up and we literally ambulate. This is what we do, and then we go to bed. That's humans, in my opinion. I'm not trying to, like, insult the human experience. I'm just saying this appears to me to be what we're doing. Here's the deal. Anytime you have a circuit that more reliably fulfills one of those ambulated drives, if it is more reliable than all alternative circuits, it will become the normal circuit.
It'll become the norm. So think about it like this. I have two urns. Here is the urn of the real. Here is the urn of the digital. Currently, Abel, when I go take a walk outside when it's not raining, and I bring a paper book, maybe history, maybe philosophy, whatever, and there's trees around, almost always, that's a level seven or eight in terms of level of relaxation for me. It's a good one. It's reliable. It's consistent. It's really good. Let's call it a six to an eight. And every now and again, I'll hit a nine. When I go walking in the sunlight with trees and a paper book, it's a good one. When I wanna feel, like, a little bit more chilled out, relaxed, like, level-headed, that's what I do. It works.
That's a drive I ambulate through. Now let's just say, right now, relaxation experiences from AI are giving me, like, fours to sixes on the regular. I'll try two of them and I'll be like, well, obviously, AI is not gonna yada yada. Just like the guy that went on Amazon once and was like, oh, well, they didn't have this one size of drill bit, and then didn't use Amazon for, like, eighteen months, when it's, like, patently obvious it's getting better every five seconds. So at some point, I will start drawing balls from the relaxation urn where maybe I am walking in virtual space, but it will be, like, an eight minimum every time. Every time.
At that point, I cease the other activities. So the AM radio isn't a fulfillment loop for people because it's not as good. It doesn't satisfy the drive. Whatever wins the loop wins, and that applies to human relationships. You have friends that you talk to about business stuff. You have friends you talk to about heartbreak or about personal development issues or whatever the case may be. When you are getting four times better advice, with no selfish ask from the other hominid, and with more humor and more understanding of your current emotional state, if you're crying or whatever, where you don't wanna be that vulnerable with your friends, there are relationships you're just doing less of, and there's some of them that aren't there anymore, because the circuit now has a better loop. That applies to the whole ballgame of human experience. And if you just use your imagination for, like, five minutes, you kinda get a sense of where we'll go. So I wanna pause there because there's way more to unpack. We're talking more pleasure right now. Let me know, Abel, what you wanna dive into. I don't wanna just, like, monologue on you here.
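The two-urn analogy can be sketched as a toy choice model: a chooser that occasionally re-samples both sources of relaxation and otherwise repeats whichever scored higher last time. All the numbers below are invented for illustration (the "real" urn holds steady around a seven while the "digital" urn improves with time), but the dynamic is the one being described: once the more reliable circuit overtakes, the switch is abrupt and total.

```python
def real_urn(t):
    """Walk outside with a paper book: reliably about a seven."""
    return 7.0

def digital_urn(t):
    """AI-generated relaxation: a 4.5 today, improving steadily over time."""
    return 4.5 + 4.0 * t / 300

# The chooser re-samples both urns every 25 rounds ("I'll try two of them")
# and otherwise sticks with whichever activity scored higher last time.
last_seen = {"real": real_urn(0), "digital": digital_urn(0)}
choices = []
for t in range(300):
    if t % 25 == 0:
        last_seen["real"] = real_urn(t)
        last_seen["digital"] = digital_urn(t)
    choices.append(max(last_seen, key=last_seen.get))

switch_point = choices.index("digital")
print(switch_point)  # prints 200: the first round where the digital urn wins
```

The point of the toy is only the shape of the curve: preference doesn't erode gradually, it flips once at the reliability crossover, which is the "at that point, I cease the other activities" moment.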
[00:26:32] Unknown:
Yeah. Well, I'm just imagining what that looks like, and talk about up and to the right. I mean, all of this is going to be, very soon, if it's not already, more compelling than human interaction, especially with a generation coming up during the pandemic without access to a lot of social interaction. And then, that continuing forth with the advancements in technology, I'm curious about where that might take us. And another piece of this as well is, when you're interacting or asking a question with another human, there's the sense that you might be judged negatively for how you ask the question. Is it a dumb question?
You know, you're exposing data about yourself to the other person or emotions, whatever that is. When you take that away and it's not a person anymore and you're just interacting with AI, you can ask whatever you want. And if that interaction is also better, I just see this quickly supplanting so many human relationships for young and old and everyone in between.
[00:27:32] Unknown:
Absolutely. And the toughest part is gonna be for the people who are like, oh, that'll happen to losers and not to me. Like, those are the people that are gonna really get plowed, because they're gonna be like, oh no, Stephanie, my spouse or whatever, like, out of nowhere. But it's not really out of nowhere. Like, I really hate to say it, but every relationship is a certain number of circuits satisfied at a certain level. This is my view, but it's a hypothesis. Let's go ahead and see what happens, Abel. So we might talk in five years and go, remember when you thought it would happen even once? Or you may be like, oh, Faggella called it. But, like, every relationship. Now the scary thing is, this goes to, like, parents and children. This goes to spouses. But, you know, let me use a hypothetical person so I don't offend anybody. Okay?
Hypothetical Jenny is married to hypothetical Jacob. And, you know, Jacob doesn't have good fashion sense. He's kind of embarrassing sometimes, but he's almost always funny. He's, like, I don't know, reasonably kind and, like, supportive of her work ambitions or whatever. And, you know, she can talk about certain kinds of topics, like her work or maybe her family life. Like, she can talk with him and feel really safe. So there's that. You know, maybe there's a certain degree of sexual chemistry, whatever. If you just take those four, and there's a better path to those four, I'm just not sure about the whole marriage. Now this sounds like, oh, damn, can they reduce a whole marriage to that? I wish I could tell you I think there's a magical force. I just don't know about magic, really. I actually think that if the circuits that compose the mutual fulfillment of that relationship are supplanted, the relationship is supplanted.
And I think people kinda have to get that. That's gonna be a thing in work. It's gonna be a thing in the personal life. And you brought up a great point, which is that for young people, this might be even faster. Now, older folks are gonna like to say, it's not gonna happen to me. It will. But for younger people, Abel, there's eleven-year-olds who've never been farther than this from an iPad from the time they were born. You with me? From the moment of their birth. So what that means is, the real is not sacred to them.
So what would your great-grandmother say if I went back eighty years and said, you're gonna have a great-grandchild? Look, it's a long fucking story. You know, you're only twenty now yourself. Whatever her name is, I don't know. Mildred. I don't know what your great-grandmother's name is. I'm gonna say it's Mildred. Okay? So, your great-grandmother Mildred. Hey, look. You're gonna have a great-grandson. His name is gonna be Abel. He is gonna do all kinds of cool stuff on TV, and he's gonna be able to, like, play some cool guitar songs that you would probably really like. But on top of that, he's going to, like, regularly be in a metal tube 35,000 feet up in the air. He's gonna go to places like Japan and Australia. You know where Australia is, there, Mildred. I know you don't know what Japan is. I could explain it to you. It'd be a whole thing. But, like, there's a place called Japan. Just fucking deal with that. It's a place. They look different than you, but it's a place. Okay? Anyway, he's gonna be in this metal tube pretty regularly, like, very, very regularly, actually, traveling thousands of miles in a very short period of time. And when he's working, it's not gonna be uncommon for Abel to spend ten to fourteen hours a day looking at a piece of glass that will have other people on it who are living elsewhere in the world, even the opposite side of the world, or, you know, media, or his communications. So he's not gonna have letters. He's gonna read everything on this magic screen. And then there's gonna be another one in his pocket. He's gonna use that for everything. And then you explain Airbnb. You explain... right? Clearly, the only response would be, a, that would be technically impossible, but, b, that is monstrous, and no one would let their human experience bend to that level.
The issue for me, Abel, is that people think that applies forward. So everybody thinks that the sacreds of the now will apply forward. A big factor for the posthuman transition here is the fact that, like, there's a lot of people for whom the real is not more sacred than the digital, man. Sure, there's a lot of people for whom the digital is much more real than the physical. And we might even argue for good reason, where, like, most of the value they create, many of the best friendships they have, many of the most fun experiences they have, whatever, is happening in these digitally immersive spaces. I think you bring up a great point that that will accelerate it too. So these are all factors of, like, serious change coming up.
[00:31:55] Unknown:
Wow. So how about the other side of that? For good, for productivity, and the world eating, what does that look like?
[00:31:59] Unknown:
Yeah. Yeah. So, lotus eaters versus kind of world eaters. So, high-agency people who wanna enhance their agency. Basically, what this looks like: if you go the lotus eater path of, I wanna be immersed in relaxation and then sexual pleasure and then whatever, and people say, like, oh, well, certainly the sex thing will be with humans. I really suspect, like, primitive haptics with really good VR, AR, and AI-generated visuals... Like, the main sexual organ is the brain, like, patently, obviously. So I actually think that will, like, really get the job done for a huge preponderance of humanity, and I actually suspect it's cope to not admit that. The drive to novelty as well is just off the charts. Right? So that path, if you just wanna swim in new ambulated permutations of pleasure and leave behind, you know, responsibility, productivity, whatever, that's almost like a dissipating and a spreading out of agency. Like, I don't even want it anymore. Have the qualia catalog layer itself on me based on my real-time biofeedback. Let me just swim in the best version of the qualia catalog. Now, of course, Abel, even that won't make us happy. Right? Our biochemistry, the vessel itself, is flawed when it comes to well-being. Right? So what'll really happen is, people will be at a felt sense of six or seven of overall fulfillment, just like they were before the tech, but now they need this stuff in order to be there. Right? Until we get brain-computer interface and much more invasive adjustments to the human condition, it's not actually gonna sustainably make people more happy. But that's a dissipation of one's agency.
The world eater side of things is kind of a sharpening of the razor of one's agency, where you start to drop off all the kinds of tasks that involve human loops of thought and neural activity, shave them away, delegate them away, and hone a huge preponderance of your time on that highest-value bit of stuff. Now, of course, a modern executive who's capable, a modern politician who's capable, is already doing this to a great degree. But with tech, it will be much greater. So let's talk about what that could look like. If you are a salesperson, we could imagine a world where you just sell 50% more if you use the interface that in real time prompts you with the right way to overcome the objection, prompts you with the right price to list based on what the client has said and based on the online research done, prompts you with yada yada yada. You may just sell 50% more. Same thing if all the pre-call prep is done by AI. Pre-call prep, and then a pre-done five-minute video all about that specific client before you jump into the call. Right? All of that stuff may just make you unbearably more productive. Same thing with software engineers. A lot of software, of course, is gonna go the way of the dinosaurs. Like, AI is gonna totally just conquer it. But for those that are still surfing and kind of guiding and navigating within the world of code, we can imagine folks that are able to kick off and wield AI agents to complete different parts of projects, and then finish other parts of projects, and check the work of other agents. The people that are extremely nimble, completely immersed in wielding and hurling all of these AIs in as many directions as the project requires, will just be the only ones that are sort of reasonably able to get the job done. And we can imagine the same thing for essentially every feasible kind of work ever. Think about the most banal things in the world. Think about a plumber.
It's like, if you have augmented reality that will show you, you know, like, you knock on the pipe once, and it will show you with 95% accuracy where the pipe is in the wall. Right? Or you look at something, and it tells you the size of wrench or the size of whatever that you need to use in real time. Right? Or you hear the description of what the problem is, and then the AI analyzes how old the house is, when it was built, yada yada, and it gives you relative percentages of what the problem is. Is it the boiler, based on what, you know, Mrs. Wilkins just said? If you're getting that in real time through some kind of AR interface, or even just a goofy tablet, it's just likely that people aren't gonna wanna hire the guy that doesn't know that shit. Like, I don't know, I hired another guy, and he just, like, fucking figured it out. Like, what are you doing? You know what I mean? It's like, if I'm trying to order ball bearings, and the middleman I'm trying to order ball bearings from has to send, like, letters by mail to the different manufacturers to get a pricing update, it's like, well, I don't know, man. I was on the phone with another guy, and he just had all the prices up. And he could do Slack messaging in real time with these folks and, like, ask specific questions about, like, you know, the grade of steel that I need or whatever the case may be. So what this eventually turns into, for better or worse... there's an article called The Ambitious AI, that's easily Googleable, on where this goes.
It kinda gets into, like, you gotta leave behind some of the human stuff here. So competition starts to require stuff that gets a little wacky and wonky. So, right now, if you wanna be the most wildly ambitious in whatever your field is, okay, architecture or, you know, building a SaaS company, it doesn't matter, you gotta be putting in 80s to 85s. You gotta be, you know, really viciously strong with your time. You're gonna be learning a lot, whatever. That's all par for the course. Musk does it. Other folks do it. They all do it. In the future, the requirements may become a little bit more intense. So in other words, if there are ways to calibrate, like, all of your waking hours to be able to get good hours out of a 95 as opposed to an 80... Because maybe you wake up in the morning, your mind's more lulled, but there's certain kinds of work that that's suited for. If that could be conjured for you and guided by your agency in a way that actually isn't all that depleting of your energies and kinda works for where you're at with that mind state, bada bing. Right? Like, right now, even Musk or these other folks, they gotta do some stuff to relax. They gotta do some stuff to whatever. But if at some point, something can be bent and molded to be kind of relaxing, but it's still nudging, tapping, nudging, tapping your productive goals forward...
Whoever's doing that all the time, instead of watching a movie, like, watching Netflix, is just winning more. And then, similarly, there may be ways of teasing out and proxying for maybe where we could do less in terms of sleep, especially as brain-computer interface comes online. But even if just vastly more robust biofeedback comes online, if there are ways or times to go to bed or whatever where you can squeeze more time out, it's gonna be mandatory. It'll be completely par for the course. And similarly, as brain-computer interface really gets into place and we can actually kind of level up the hardware, now you might have folks who just don't want sexual drives. They don't want certain kinds of circuits to be bothering them. They wanna volitionally modulate what their actual emotions are, and volitionally modulate sort of what kinds of areas they can apply their focus to, etcetera, etcetera.
And now, in order to even be competitive... like, people look at Musk and they're like, oh, he's like a monster. He's working eighty-five hours a week. Like, you'd have to turn into an actual monster in order to be competitive, because, like, the top one percent of the one percent will be doing whatever the top-one-percent things require. So I think world eating actually requires a really drastic posthuman transition in order to remain competitive. Not in the first, like, eighteen months, but after that, I think it basically goes there.
[00:39:22] Unknown:
So, right now, what are the skill stacks that people should be going after in their own lives, and perhaps abandoning as well? Because it seems like it's coming for everything, but it's hard to tell what the timeline looks like, and also where humans are going to be in all of this, ultimately.
[00:39:39] Unknown:
Yeah. I mean, I think the early movements are nudging in this direction. So people are gonna be living in different ecosystems. The lotus eaters will have an ecosystem that's surrounding them, like, a really, really deliberate ecosystem for pleasure, for attempting to escape the state of nature. There is actually no escape, but you can feel like you're escaping. But in fact, you're just, you know... somebody else has power over you. But that's a certain kind of ecosystem. The media diet you're digesting, the way you interact with tech, and to what end, habitually, is to a specific end, which is primarily sort of pleasure in whatever permutations.
On the other side, you're building an ecosystem of what tech you're using and how you're using it that hopefully is also kind of fulfilling and fun for you, but is very much, point by point, moving you closer to your goals. So, like, the tier one of this is, people figure out what are the social media platforms that you could prune really well so that they could be kind of net beneficial to your goals. Like, for me, we generate a lot of customers through LinkedIn. Like, LinkedIn's a really strong channel for me. I'm not scrolling LinkedIn on the regular, but I will post insightful stuff there, and I will use it, because we pay multiple salaries with the revenue that we make from LinkedIn alone. Twitter, for me, is about staying ahead of, like, the research side of AI, and certain kinds of takes around policy for AI. Like, I'll engage there, and there's been direct connections to, you know, interviewees or business contacts or policy contacts I've been able to make by being part of those conversations in a deliberate way. So I found stuff that, like, is fun and will keep my attention, but is also pretty deliberately, like, moving in the direction of my goals. And then, in terms of, like, agents: what are you wielding ChatGPT for? Like, do you have a spin-up of an agent somewhere, whether you like Claude or GPT or whatever? And they're changing all the time, and whoever has advantages for this, you know, three months later, the other one has the advantages. It's a rolling wave, so I don't think being married to any one company or application is necessarily a way to play the game. But do you have a deliberate spin-up of GPT baked specifically around whatever your main organizing life purpose is? Right?
Like, for me, within the business, I have a permutation that's, like, completely trained on what happens in our different departments, what the, like, five-year goals are. And I can just ask questions where the five-year goals are already known, and it knows to answer in a specific kind of format that's, like, actionable and succinct, that for me is generally best. And then, when I double-click on something, it knows how to unroll that double-click in a way that for me is generally good to prepare for meetings and share with other team members. So I have an agent that I talk to a lot that's not just, like, you know, does Samuel L. Jackson know how to do karate, or something. Right? Like, you could talk to agents like that, or you could just say, generate a picture of, you know, I don't know, Mickey Mouse riding a dolphin. Or, you know, there's plenty of fun stuff to do. But when you're interacting with those agents, are they purpose-molded for, like, specific kinds of stuff you wanna do? I have another sort of personality I talk to, that I've given a name and everything, that's very bent on my policy goals and sort of, like, public influence around certain elements of the posthuman transition, and where that needs to sink in within kind of the tech ecosystem and the policy ecosystem, and, like, what I'm doing to influence that. And so the agents I'm mostly talking to are kinda, like, pretty well defined. Like, they're fun to use, so I use them regularly, but they're also bent towards my goals. And then the media I'm consuming, for the most part, is pretty well pruned, and is platforms that are correlative to my goals.
So if you just look at, like, what agents am I talking to and engaging with, and then what social and sort of, like, network stuff am I engaging and immersing in, what percent of that is conducive to, like, your organizing purpose, and what percent is detrimental to your organizing purpose? And I think there are some folks who are eager to let go of any... like, their purpose is, like, well, I guess I gotta pay my bills. I can't wait to let go of that and just swim in pleasure. Like, I could have Mariah Carey from 1998 in a hot tub. That'd be fucking cool. You know? I could have somebody funnier than Chris Rock telling me jokes whenever I feel like it. That'd be really cool. There's some people that can't wait to dissipate all that stuff, say who needed agency anyway, and just swim in experience. If we have enough of those people in society, things get scary, because somebody's gotta pay for all of them. Right? And then you'll have some folks that say, what lights me up, and how can I surround myself in an ecosystem that informs and energizes me towards what I'm doing? And so this is really phase one. Eventually, that gets into VR, AR, and much more powerful AI agents, and eventually, that gets into brain-computer interface. So, high or low agency, like, the step one is what I said, but it really does, I think, somewhat quickly roll into the further phases of these technologies.
[00:44:29] Unknown:
For people who are new to interacting with AI and agents and that whole piece of things, what is a good best practice to start to at least dip your toes into this ecosystem?
[00:44:37] Unknown:
It's a lot of fun. So, you know, ChatGPT, there's, like, plenty of free subscription stuff. I mean, sometimes I'll just use free agents that I'm not even paying for. And it's pretty simple to be able to say, like, okay, well, what are you really passionate about? So there might be somebody tuned in who says, well, the biggest thing for me is, like, I really wanna live longer and I wanna be super healthy. And I know I'm, like, thirty pounds overweight. I know the things that typically distract me. I know what kind of macros I wanna hit, whatever the case may be. And it's possible to simply say, like when I did this with the business, you could just kind of load up: hey, here's my big specific organizing goal. Like, when I speak to you, I'm gonna call you this name. So you can give it a name. Maybe, like, you know, my fitness muse or whatever it is. Right? Like, I'm gonna call you my fitness muse, and I want this to be a personality I can speak to. And, like, the one goal that I'm pursuing with you as a shared journey is this specific outcome that I'm really excited to get to.
And, like, here's the four or five main reasons that I need to do it that you're gonna have to remind me of. You're gonna have to remind me that I wanna be around for my grandkids, blah blah blah. You're gonna have to remind me whatever it is. Right? And then, here's my current state of affairs, what I've committed to. Here's often where I fail. And when I bounce ideas off of you, I want you to bear in mind my weaknesses. I want you to ask the things you would need to ask to make sure I'm staying on my regimen, and I want you to make suggestions that are kind of in line with, like, what generally works for me. So, you know, I hope to be able to come back to you about yada yada and let you know. Like, you know, you can even just say, could we save this personality and have conversations whenever we want? And it'll say yes. It'll be like, yes, I am this now. You can call upon me when you need me. And then, you know, if you wanna ask it random stuff about your emails at work, you can do that. But you can also come back and say, hey, whatever, I'm thinking of going on a vacation, like, to these two destinations. I really wanna make sure I don't fall off the ball in terms of yada yada. Like, what should be the precommits that I could do, and the things I need to look at for a resort, that are really gonna help me with my goals, knowing my weaknesses? And it'll give you a bunch of suggestions. And what you can do is you can say, when you give me suggestions, when you're my fitness muse, I'd really love for it to be in this way. And say, like, you know, maybe you copy and paste a blog article you really like that's, like, super actionable. Or maybe you take a comment that it made twenty comments back and say, remember when you responded this way? If your default can just be this, this is really clean for me. Like, this is great.
It had the cited sources. It was, like, two sentences and tight bullets. Like, when it comes to suggestions, this is the way we do suggestions.
Fitness muse, can you have this be the way you respond with suggestions? Yes, I will, Steve. You know, whatever it is. Right? And then, when you're going on that vacation, or when you fall off the wagon or whatever, you can constantly tap that thing. We could do exactly the same thing for whatever your business is. Do you wanna sell your business for a certain price point? Maybe you wanna explain the whole context on the business, get all the perspective you can on potential acquirers, you know, share all your goals of what you're gonna approach and attack, and say, hey, I got off the phone with these folks. They're saying these terms are good. These multiples are such and such. Is it reasonable that within six months, you know, like, we could hit these numbers and we could such and such? Or do you think, if we had two or three more people competing over this deal, we might be able to get the number we want without having to wait? And then it sort of sucks in the whole context, and maybe will provide suggestions in whatever format you want. And you could do that for anything you'd like. And, ideally, in my opinion, Abel, if people do this not just for the stuff they begrudgingly know they have to do, but for the stuff that is in line with what their heart points to, that is to say, like, what really fires you up, then you can kinda combine agency with enthusiasm, and you can live in a conversation that is, like, really educational, but also, like, fun as heck. And ideally, that's what it should be.
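The "fitness muse" setup described above (a name, one organizing goal, reminders, known failure modes, and a preferred reply format) can be written down as a reusable system prompt for any chat-style model. This is a sketch under assumptions: the function and field names are mine, the example goals are hypothetical, and the commented-out call at the end assumes an OpenAI-style chat completions API. In practice, you can get the same effect by pasting the assembled text straight into ChatGPT or Claude.

```python
def build_persona(name, goal, reasons, failure_modes, reply_format):
    """Assemble the persona system prompt: the name to invoke it by, the one
    shared goal, the reminders it should surface, the user's known failure
    modes, and the default response format."""
    lines = [
        f"You are a personality I will call '{name}'.",
        f"The one goal we are pursuing together: {goal}.",
        "When I waver, remind me why I'm doing this:",
        *[f"- {r}" for r in reasons],
        "Bear in mind where I typically fail:",
        *[f"- {fm}" for fm in failure_modes],
        f"Default format for suggestions: {reply_format}.",
    ]
    return "\n".join(lines)

messages = [
    {"role": "system", "content": build_persona(
        name="fitness muse",
        goal="lose thirty pounds and stay healthy long term",
        reasons=["I want to be around for my grandkids"],
        failure_modes=["late-night snacking", "skipping workouts when traveling"],
        reply_format="two sentences of context, then tight bullets with cited sources",
    )},
    {"role": "user", "content": (
        "I'm picking between two resort vacations. What precommitments "
        "should I make so I don't fall off my regimen?"
    )},
]

# Hypothetical send, assuming the OpenAI Python SDK is installed and configured:
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

The useful part of the pattern is that the persona is data, not vibes: you can version it, reuse it across providers, and append the formatting corrections ("when you give me suggestions, do it this way") to the same system prompt as you discover them.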
[00:48:31] Unknown:
What about the hesitancy, or, you know, there's friction when we're giving away our data, especially for the older among us. Like, people who are about our age or older, I would say, generationally, just don't wanna hand over a lot of our personal data, that sort of thing. I remember years ago, when I first was doing online banking, you know, I was in college, and I was interacting with someone a generation older, and they were like, I cannot believe that you put your check on the Internet, that whole thing. And, you know, this seems similar. And for me, I definitely dragged my feet for a long time, just not wanting to have anything to do with it, especially on the artist side, as a writer and a songwriter and that sort of thing, being very hesitant at first. And I have gotten over that, by the way. But I'm curious your take on people who are still kind of not there yet. How do they get over that hump to start just being a little more free to play with tools like this, without worrying so much about the privacy?
[00:49:26] Unknown:
Yeah. I'd say, like, do you have a life goal where, if everybody knew what you were typing about it and talking about it, it wouldn't be all that embarrassing? Right? Maybe in your business, you feel like, well, my revenue and profit, someone would insult me if they knew those numbers. I don't wanna tell an AI, because maybe it'll tell my neighbor, Jacob, who, because of my Mercedes, thinks I make more than I do. I don't know. Right? But, like, maybe there's some life goal that you have where the data behind it is nothing embarrassing. If someone looked at your search history, they wouldn't be like, oh, I have dirt on this person. So maybe you could find something along those lines that's an easy landing place, and then give it a shot. You know? Fiddle around with it. Play around with it a little bit. On some level, to your point, the online banking thing, you and I can both laugh at that because we know how that went. Right? My dad does online banking. You know? He's old, but he still does it. Right? We just chuckle because we know it's inevitable.
This is one of those things as well. I think that there's probably all kinds of various and sundry considerations. But, if you're plotting a very serious crime, that's probably a bad idea with any online system. For for those of you listeners, you know, like, you have a specific bank you wanna break into. Like, I don't know. You do some pen and paper on that stuff. Right? But, otherwise, yeah, if it if it's not, like, super personal data, then go for it. Right now, I I just I kinda think it's gonna be mandatory. Like, the the sales guy who's like, I don't want OpenAI knowing of all the the meetings I'm prepping for. You know? Like, I don't know. What if they sold it to my I and there's somebody else with really robust thoughts about that, like, what the specific litigation or or, like, legalities need to be for as GenAI evolves. I'm focused much more on the posthuman transition stuff, so I don't think as much about, like, privacy and what have you. But but I would say, yeah, starting with something where no one would have any dirt on you if they looked at what you were typing could be a great place. And I think when you see how useful it is, it makes it pretty tough to, pretty tough to avoid, using this stuff. This fascinated me.
[00:51:28] Unknown:
You know, interacting with a lot of people who are my age and older, we think a certain way about privacy and surveillance and all of that. And then interacting with a few younger folks, seeing them gleefully give over as much data as possible. Like, why are you doing this? How are you so okay with this? And they're like, well, it makes the experience better. This is the experience that I want. And the more data about me that I give it, the more it caters it to exactly what I want. So if I like turtles, I see turtles. And if I... you know? And I'm like, oh, man, what a radically different way to think about all of this. And also, if you've been dragging your feet, it's like, just adopting a little bit of a different perspective here could really accelerate your results in terms of interacting with AI, I would think. I mean, the inevitable
[00:52:18] Unknown:
future, like, as a manager, there'll come a point where you just can't compete unless AI has access to all your Slacks and all your emails. And, you know, you open the email from Jessica who wants a raise. You open the email from Billy who is asking for a new invoice. You wanna click a single button and just generate the goddamn invoice, and you wanna know the format of the invoice and all that shit. It should already have your Google Docs, and you're just not gonna have patience to go in and export a goddamn PDF. You're just not gonna have patience. And the managers that are using those tools are just gonna beat the crap out of you if you're Captain Snail Mail over here. And so, you know, I think, again, you can immerse for pleasure or for power.
And I think if you wanna get something done from, like, a productivity sort of output standpoint, people think, well, the people that immerse themselves in this tech all the time, like, they're gonna just be distracted, and the people that are really gonna run the show are gonna be the abstainers from all this AI ecosystem. That's not true. The world eaters will run the whole show, and the Amish and the lotus eaters will be underneath them. That's how it's gonna work. I'm just telling you how it's gonna go. That's all I can do. Right? I can just tell you what's gonna happen, and that's what's gonna happen. So the world eaters will actually rule both the lotus eaters and the Amish. So there does come a point where you've got to adopt. Am I saying any system that spins up in any startup in Shanghai or Boston you should just throw, you know, all of your most sensitive health and legal and financial information into? No. Like, let's gauge these things out. Let's see what's appropriate for what systems. But some degree of wading into the pool here does make sense if you wanna maintain relevancy. And, again, I think increasingly, if you wanna be on the upper echelons of relevancy, like, if we make it a whole decade with general intelligence coming about, I think that's gonna start to lean away from, like, human, frankly. And I don't think people are really ready for that. I think people think in twenty years, yeah, Siri will write my emails, but life will be kinda the same. I sort of feel like, in order to be fighting for relevance as a human when there's really powerful machines, the people squeezing the cutting edge out of this stuff, like, they're shaving sleep, they're working when you're not working, like, they're having stuff done automatically because it has total context on all their past communications.
They have really custom built, super specific agents to spin up agents to help them achieve their goals. Like, if you're not on par with that, you're just not even playing the game.
[00:54:39] Unknown:
So does that lead us to everyone jacking into the matrix and and getting the chips at some point? Or is there anything in between that we could do to change that a bit?
[00:54:50] Unknown:
I think it basically does. So, like, essentially, my position is that people don't want what they think they want. So what people think they want, if you ask them, would be like a red Corvette, you know, an attractive blonde wife, like, accolades from peers to take a company public, regular travel to Aruba, whatever it is. Right? But what they actually want is the fulfillment of the drives they ambulate through. That's what they actually want. And so, you know, if you could have someone live like the best meat suit, monkey suit, existence they could with all the flaws to our well-being, etcetera, or you could have them in a much more kind of continuously blissful, super expansive sort of virtual experience.
I think people just don't come out of the latter. Right? Like, again, your grandmother, remember here, Abel. Okay? We go back eighty years. We tell her the fucking story. I told you what she would say. I think you agree with me. She'd be like, yeah, I'm not doing that. But as it turns out, if she was born when you were, she just would, bro. It is what it is. And if she was born after you, she'd be one of these, you know, 11 year olds that are still this close to a screen. They've never been farther than this from a screen from the moment they came out of their own mother. And so I think that, like, if it fulfills the circuits better, people do it, and that quickly pulls people away from being human. The people who say I don't want it, in my opinion, if they got a taste of it, they would want it. Now, some people... the immediate analogy here, Abel, is, like, heroin or something. Well, yeah. Sure. If I tried heroin, I'd like that too, but I'm not one of those weak... it's like, I get it, but remember, this is not just lotus eaters. A, this tech won't have the same downside as heroin, certainly. Presumably, most of it. But, b, if you wanna actually get things done, this is required too. That whole jacking in and immersing and, like, having much more than your current mind to wield, whether it's discovering some new domain of physics, as if humans are still gonna be relevant there, but maybe they will be. I don't know. I don't know where AI is gonna be in ten years. Or creating some new kind of super wacky far out music or art or whatever the case may be, or being involved in policy, if humans are still doing any of that in however many years. Anything that holds meaning, you'd be able to wield vastly more of that in these spaces and also cruise at a much higher altitude in your general level of well-being. And some people say, well, I don't want to feel gradients of bliss all the time.
I think, you know, depression and anger and these things are, like, really necessary.
And it would be right to say that given your current hardware and software. So on your current hardware and software, you literally don't have a choice but to experience a gestalt of well-being, a contrast of the good and the bad. You don't have a choice. You don't have a choice. But if you had a choice, maybe you wouldn't want a gestalt. Maybe you would want a plus nine to be as bad as it gets and a plus eighteen to be as good as it gets. Maybe you don't want a negative ten. Maybe it's actually less productive and less, like, experiential well-being. So as soon as these doors open up, people take them, because they're patently, obviously better. And enough taking of enough doors, and we're just not humans anymore. And I think if we make it around long enough, if the AGIs allow us to kind of, like, stick around, I do think most people are gonna go that route. I don't think they should be forced. I'm not here advocating everybody's brains get cut up, but I just suspect most people will. Kind of like, Abel, your online banking example, my dad with Airbnb.
It is what it is. So the best analogy here is green eggs and ham. Right? Like, you will eat it in a box. You will eat it with a fox. You will. You'll like it. You'll bite it and you'll like it. So, inevitably, that's where this stuff is going. It's green eggs and ham.
[00:58:35] Unknown:
Before we get there, how do we as humans in the short to medium term earn a living, or, more importantly perhaps, find meaning and purpose in our time here?
[00:58:48] Unknown:
Yeah. I mean, these are really incredibly important questions, Abel. Making a living, that's the big question in many regards. So I'm thinking about that for our own business in the market research space at Emerj. You know, I could be in the business of writing articles and have that be the moat. We write really good articles. But as it turns out, AI is gonna write really good articles. So instead, what have we done? We've decided to anchor our media and the attention we garner not on writing better articles than a machine, which, in very short order, we're just not gonna be able to do very well. You know, a human and a machine right now combined can create something better than just a regular human can, but it feels like a losing game. The game that feels winning for us is anchoring all of our content off of one-to-one primary research interviews with the head of AI at Raytheon, the chief innovation officer at Goldman Sachs, the chief technology officer at Takeda Pharma. Right? Take, like, $60,000,000,000 corps and get really, really powerful people to give takes.
That's really hard. Like, if enough AI bots start storming the head of AI at Raytheon, maybe he'll go on everybody else's podcast too, but I don't think he will. I don't think he will. Part of it is we have a stage that already has a certain critical mass, and we think the differentiator will be access to power. That even if you can simulate a beautiful simulation of what the CIO at Goldman would say if you ask this question, beautiful simulation, you'd still prefer to listen to him, like, actually answer that question. And so we're sort of bending our content ecosystem around something we don't think the machines are gonna dominate in a very short amount of time. And I think the same extends into every business. You know, what industries will be you know, for the near term until we have biped robots, things like plumbing and roofing, etcetera, we need a lot of that.
You know, there's some of the blue collar, tougher-to-automate stuff that, sure. I think for everybody else, there's a really big question of where are the defensibility parts of what we do, which means you need to understand what AI is capable of. And, like, how do we kinda double down and build upon a foundation of advantage that won't be AI-conquered in an extremely short period of time? So for us, human relationships are a really big part of, like, the network of where our value is generated, rather than just a juicy article. Because if you want a juicy article, you'll just ask ChatGPT. It'll write an article customized for you. Why would you go to Emerj? But I think all the businesses outside of the blue collar world need to be sort of dissecting the day-to-day of what they're up to and discerning what portions of that are not gonna be completely carved out and carved away by AI. But I don't think anybody has a crystal ball, and I'm not even telling you that, like, my strategy is amazing. Again, because I have no idea what innovations are coming up next. I have no idea.
Nobody does. The best researchers don't. They don't know what the next breakthrough is or in what direction. But I think the mental exercise of deliberately aiming to stay ahead of the curve, for those of us that are in the white collar world, I think is a mandatory exercise at this point for annual planning, quarterly planning, that kind of thing.
[01:01:52] Unknown:
Yeah. It's fascinating how being a human is being obsoleted, but also is the only thing that we have that has value at the same time. Right?
[01:02:03] Unknown:
Yeah.
[01:02:04] Unknown:
I'm not sure how to think about that exactly. Well,
[01:02:09] Unknown:
I don't know if this will make you feel better or worse, but I'll just... Of course. You opened the can of worms. Okay? So I think there is a really big question here as to what is the value of what we're bringing forth. Right? When we have a system that's outlandishly more capable... let's just say we have an AI that's 10,000 times more powerful than the current best model at everything, voice, blah, blah, blah, blah. AI has passed the Turing test. We've already blown past human performance in, like, functionally all the stuff. Let's say we're 10,000 times better. You could probably hold your breath almost. I mean, it's not gonna take that long. And it has millions, plural, of physical instantiations.
It can control nanobots. It can control life sciences machines to kind of, like, test, you know, proteins up close, like, with its own digital eyes. And it has all the physical embodiments. It's learning to do plumbing. It's learning to do roofing. It's flying the jet planes, so it can take off as a jet. It's not limited to, like, a biped shape. It is any shape that it can control, so long as it's connected to that brain, it's plugged in. So it's interacting with the physical world through 10 times more senses than humans have, with, you know, millions and millions of physical embodiments, and it's vastly more powerful than it is now. There is a question here as to what is valuable. And one might argue, Abel, one might argue that if the cards get played right, such an entity could be sentient and aware in a way that you and I are, could have the kind of agency that you and I have, but, frankly, like, maybe a lot more of it. Right? Maybe a lot more of both of those things.
Maybe that thing would be a pretty good vessel for the whole project of life, life being non-dead matter. So you're not dead, and before you was a fish with legs, and before that was a eukaryote, and before that was, like, you know, some wiggling proteins that didn't even form a cell yet. So the whole project of non-dead stuff presumably will continue. And, presumably... I'm very glad it blossomed from a sea snail to Abel James. I'm grateful for that. I think it's wonderful, because if I was being interviewed by a sea snail, I don't think it'd be fun. But we might also say, you know, the only thing that has meaning... I think it's wonderful to know that humans have meaning. But I think it's okay to suspect that if the flame of life were to blossom up from us as we did from the sea snails, that might also be a great good. We might even argue the greatest good. But if we build something that's unconscious and just optimizes for some industrial process and crushes all of Earth life, that would be an extinguishing of the flame of life, as opposed to a blasting out into the multiverse of the flame of life. I think we're at a threshold where, yes, how we treat humans, the currently most morally valuable vessels that we know of, is incredibly important. But, also, whether we extinguish the flame of life or blossom it, I think, are also very morally relevant questions. So what has meaning? What is relevant?
It may be the case that we're kind of building it too.
[01:05:14] Unknown:
It's a reckoning of some kind. When is AGI coming then?
[01:05:19] Unknown:
Really tough to say. I'm currently in the eighteen-month to four-year camp (Wow), where basically you have systems that can spin up systems that can spin up systems that can, you know, design what their physical embodiment is gonna be and then take that physical embodiment and run new experiments and design what the next thing will be, not just for their code, but for their physical form. You know, three, four years out, I think we'll have stuff that's doing all of that at, like, a proto level. But from a code standpoint, AI writing its own code and improving itself, I think we'll be there at, like, an eighteen-month, two-year level.
But once we've got the brain and we've got the brawn both mostly just cycling forward with AI, I do think... well, you know, my odds aren't, like, monumentally fantastic about, like, hominids after that. But I do think it's relatively short time horizons, and I hope it's less risky than I think it is. But I suspect it's not without its risks.
[01:06:15] Unknown:
Yeah. Yet at the same time, there's not a whole lot that we can control about those risks. Right? Like, it's an arms race. At least right now, that's what it seems to be. It is. I mean, there are some people, and you might even be talking to one,
[01:06:30] Unknown:
who are fighting pretty ardently for the intergovernmental world to focus on some degree of coordination, to have the dynamic be something other than a brute arms race as we cross the threshold into posthuman intelligence. Because I do think we'd benefit from a bit more time to ask what are we conjuring, and a bit more shared understanding of what are the sand traps on this golf course we're gonna both agree to kinda steer clear of, and how do we check each other on that. I'm no optimist. I'm not some kumbaya... yeah, like, I've read enough history. Right? I get it. This is not, you know, an easy ballgame.
But in my opinion, any dynamic of coordination that isn't a brutal, gruesome arms race to conjure something smarter than us is worth a shot. And so, yeah, you're right. Most folks are not working in policy or in the AGI tech itself. I do think it's not gonna be that long until people, like regular people, sort of realize that, like, politically rallying around this stuff and encouraging their governments to do x or y will become a thing. Right now, we're not at what I call the political singularity, where the man on the street essentially understands that who builds and controls God is, like, the only important thing. Like, he's not actually concerned about, like, the education budget in his town in Wisconsin. He's actually more concerned about sort of who builds and controls, like, God.
So when that happens, I do suspect people will have something to do, and that will be like getting senators and prime ministers and wherever you are in the world to sort of take this where you'd wanna take it. I'm sure there's some citizens that are gonna say, let's build a big one and take over China. Right? And then there's gonna be others who want more coordination. But I do think we're not quite at the threshold where politicians care, because their constituents don't yet care. So there's a tipping point we're not at yet. So unless you're working on policy or the hard tech, there isn't much to do. I have a feeling it won't be terribly long until there's more sort of political focusing around these key issues, and they become kind of the main ballgame.
Right on. Well, Dan, thank you for doing the work that you do. Speaking of that, what is the best place for people to find your writing, your business, and whatever's coming next? Yeah. Sure. I mean, in terms of the kind of more far future stuff you and I talked about, I mean, Twitter's really easy. It's just Dan Faggella, two g's, two l's. That's probably where I'm most active, and there's a lot of great AI researchers and folks that are on Twitter. It's a good channel for that. On YouTube, it's just called The Trajectory. So if people type in The Trajectory, if they wanna hear basically what you and I are talking about, but with, like, the guy that ran AI at the Department of Defense, or, you know, Yoshua Bengio, one of, like, the godfathers of machine learning, or really cutting edge startup leaders or whatever, kinda taking this stuff a little further like you and I did, The Trajectory would be a good place to go. And then, otherwise, it's just danfaggella.com.
It's the blog with all the wacky articles about this kind of stuff. So... Brilliant. I love your writing and,
[01:09:21] Unknown:
encourage all of you listening to check it out. Dan, thank you so much for spending some time with us today. Of course, Abel. It's been a blast. Hey, Abel here one more time. And if you believe in our mission to create a world where health is the norm, not sickness, here are a few things you can do to help keep this show coming your way. Click like, subscribe, and leave a quick review wherever you listen to or watch your podcasts. You can also subscribe to my new Substack channel for an ad-free version of this show in video and audio. That's at abeljames.substack.com. You can also find me on Twitter or X, YouTube, as well as Fountain FM, where you can leave a little crypto in the tip jar. And if you can think of someone you care about who might learn from or enjoy this show, please take a quick moment to share it with them. Thanks so much for listening, and we'll see you in the next episode.
Introduction to AI and Human Connection
Meet Daniel Faggella: Technologist and Philosopher
Skill Acquisition and the Role of Feedback
The Impact of AI on Daily Life
Lotus Eaters vs. World Eaters: The Future of Human Agency
AI and the Future of Productivity
Building a Purpose-Driven Ecosystem with AI
Privacy Concerns and Embracing AI
The Future of Human Relevance in an AI World
Navigating the AI Revolution in Business
The Political and Ethical Implications of AGI