Call it what you want (Malthusian trap, prisoner's dilemma, race to the bottom, coordination failure, tragedy of the commons), the problems are real!
In Episode #488 of 'Musings', Juan & I discuss: the concept of the Moloch problem (a philosophical idea that explores the unintended consequences of competitive behaviour), everything from over-scheduling children's activities to the race for social media status, the silly idea of a dollar auction, how rational decisions can lead to suboptimal outcomes, ineffective philosophy & whether any of this is actually a problem.
Very sad puppy this week, send in a boost to show some support!
Timeline:
(00:00:00) Intro
(00:01:49) Defining the Moloch Trap
(00:06:18) Examples of Moloch Problems
(00:12:36) Origins and Philosophy of Moloch
(00:22:00) AI & the Race to the Bottom
(00:27:38) Boostagram Lounge
(00:29:27) Critique of Moloch and Theoretical Philosophy
(00:40:32) Effective Philosophy and Real-World Application
(00:50:56) Conclusion and Final Thoughts
(00:58:40) V4V
Connect with Mere Mortals:
Website: https://www.meremortalspodcasts.com/
Discord: https://discord.gg/jjfq9eGReU
Twitter/X: https://twitter.com/meremortalspods
Instagram: https://www.instagram.com/meremortalspodcasts/
TikTok: https://www.tiktok.com/@meremortalspodcasts
Value 4 Value Support:
Boostagram: https://www.meremortalspodcasts.com/support
PayPal: https://www.paypal.com/paypalme/meremortalspodcast
[00:00:07]
Juan Granados:
Welcome back, Mere Mortalites. You've got some musings from your two favorite mere mortals right here: Juan, and Kyrin too. It's August 3 and look, Mere Mortalites, today we're going to be talking on a concept that I don't want to get very detailed and definitive about, because again, some of the learnings I've been doing over the past few weeks is that the more I try to prepare, over-prepare or get into really specific details, the less I enjoy it or do as well, compared to when it's more of a conversation. So what we try to do with Musings is deep conversations with a lighthearted touch. So it is gonna be more of a deep conversation, but again, with a lighthearted touch. So we're talking the philosophies, we're talking the ideas with a light touch, not anything too deep or dastardly that's hard to understand. So the topic for today is Moloch.
Yep. Now you might be wondering, what the hell is Moloch? Well, I'll tell you what it's not. I'll tell you what it's not. It is not a harmless spiny lizard of grotesque appearance which feeds chiefly on ants, found in arid inland Australia. I did not know that there was a spiny lizard by that name living in Australia. So we're not talking about that. And the reason I actually came across this, and I hadn't heard of it before, was over the past month, one of the things I've just randomly started doing is, like, a one-minute training of just interesting, cool, different ideas I hopefully haven't heard before, that I get AI to generate and I'll just read through them. I'll be, oh, yeah. Cool. Interesting. Interesting. Interesting. And the Moloch, however you wanna call it — sometimes I read it as the Moloch theory, sometimes the Moloch problem.
And as I read it, I was like, oh, that would be cool to talk about. I'll define it quickly. And then we can just get into it. Yeah, please do. I'll call it as, what Google is saying here, a Moloch trap. Simple terms, it's a zero sum game. Really? Zero sum game? It describes a situation where participants compete for object or outcome X, but make something else worse in the process. So everyone competes for X, but in doing so, everyone ends up actually being worse off in the process. So that's kind of the Moloch problem. You know, you go after something and you think that you're doing the right thing and, lo and behold, you end up doing worse for the entire civilization, humanity, whoever your folks are. Do you wanna give us a couple of examples? Yeah, yeah. So a couple of examples. So here's a few, at least, that pertain to me. And I'll try to find ones that aren't, like, the common tropes, I guess, that exist around this. So the real examples that I was putting down, or at least finding for myself: kids' schedules.
So, actually overbooking their kids, which I have had conversations with people around, and I've fought to not do this, and it gets into the conversation about Moloch problems. Being like, people will book their kids for things to do, not so much because they want to but because they're scared the kid will fall behind. So — so that would be more along the lines of, I guess, you're trying to
[00:02:57] Kyrin Down:
what, make the kid healthier, more social overall? But you're declining their health because
[00:03:05] Juan Granados:
you're doing too much. Yeah. Well, it'd be more like — the example here would be: parent A says, with their kid, mine's in A, B, and C. And then the other parent goes or thinks, oh, damn, like, mine must be falling behind. I need to do A, B, C, and D. And, like, they're gonna be doing this and the tutor for that. And so in a localized way, you do think, well, I am trying to give the best head start or the most advancement for my kid to do those sorts of things or activities. But in the end, in the outcome, you might actually be producing just an unwantedness in them. Like, they're just like, oh, I just wanna spend more time with a parent, let's just say, as opposed to doing all these activities for the activity's sake.
So what happens is that you just get this general trend towards, oh, people just doing activities for the sake of the activity, not for the outcome that was meant to be generated from the original activities you wanted to do in the first place. So that was an example, a good one for me. Another one was just, like, social media and statuses. That's another example of a Moloch problem. So this is just chasing likes, trends, things like that, as opposed to, yeah, doing it for our own reasons. It comes from — we want to avoid looking like we're not participating in the race to get more likes and more posts. Now this doesn't happen for everyone. This is probably a general example given around social media and status, and maybe it's an easier Moloch problem to jump out of. The reason, as I'm talking through this, that I like the word Moloch at the very least — and I can't be bothered finding whether there are keyword linkages to it. Okay, cool.
I like that. Just, like, the naming of it is cool. I like the naming aspect of when you name something, when you, like, personalize it, when you give it, like, an identity, basically. It helps you to assign it something and step away from it, as opposed to it being just like, oh, what is this? What's going on here? Why am I in this issue? And compromising. That's what it is. I think that's more related to adding human qualities to animals. So I don't know if there's one — what's it called, giving human qualities to, like, ideas? Something like that? Yeah. So anyways, that's what that is. The health and wellness aspect of it as well. So this is when you become competitive, you lose, like, the kindness. So this is, again — we all sort of, you know, get into a race, which maybe is what I've started to see: a race into being the healthiest, the happiest, the fittest.
And again, in a localized way, this can all be good. But then when you start competing and challenging and seeing what people do — well, this person did a marathon, this person's doing an ultramarathon, and this person's doing this, this, this, and this. And again, then you're all chasing this aspect. You then trend towards this — well, everyone's gonna explode, because you don't have enough time, and maybe then you're giving up all this energy that you should be spending, or could be spending, with family and other aspects, because you believe that that's what you should be pursuing if you really like health and fitness. And I can see myself for sure in that being a Moloch problem that I've been drawn into — or the problem, the theory, whatever you wanna call it —
[00:06:12] Kyrin Down:
it's like, okay, that's another one that I would say I've failed at sometimes and am trying to step away from. Just like — okay, this is kind of different from what I had taken as the Moloch problem, in that what you're describing there is more people doing something and it then having a very obvious rebound effect onto themselves. And it seems the problem is more that they're not realizing it's happening. So for, like, the kids thing — like, if you're overbooking them, the rational thing would be, oh, okay, I'm overbooking them.
And if I'm overbooking them, it's having this deleterious effect on other parts of their lives or my life, for example, how I'm then starting to treat them. And the fix for that is very easy, in that you stop overbooking them. Whereas the Moloch problems that I had heard of were more along the lines of prisoner's dilemma, game theory, things like this, where, you know, you can see the problem, you can see what you're doing, you can see that rationally, I have to do this. And you can see, oh, it's going to have a bad effect on me. But I have to rationally do this. That was my kind of understanding of what a Moloch problem is. And so to highlight the differences between these, the classic is the prisoner's dilemma.
Two prisoners: one can rat out the other one and get a reduced sentence. But if they both agree to not rat out each other, then they both get the very lightest sentence. So, like, let's say zero years if they don't squeal on each other — no, sorry, they get one year each if they don't squeal on each other. If one squeals on the other one, while the other doesn't squeal, the one who is squealing gets zero years and the other one gets punished really heavily, let's say ten years in jail. And if they both squeal on each other, then they both get seven years in jail. So, for example, if you then tally it up and ask what the minimal amount of time in jail is, the best outcome for them as a whole:
It's for both of them to not squeal — they get two years total. If one of them rats out the other one, it's zero years and ten years, so ten years total. And if they both squeal, then it's fourteen years total, because it's seven plus seven. And that's the worst outcome. Rationally, they look at that, and they go, okay, it would be best if we both don't squeal. And this is where there's variations, because this one has, like, a trust component to it. So it's like, I need to trust the other person to not squeal. And if there's no communication between them, rationally, the thing to do is — no, wait — rationally, the thing to do is squeal and say, like, hey, this other person did this.
And the other person is going to do the same thing. And so they get trapped in this game where they're both screwing each other over and they can't get out of it. Which is this kind of ideal where you make the most rational decision.
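For anyone who wants to see the numbers at a glance, here's a minimal sketch in Python of the payoff matrix Kyrin walks through above, using the episode's figures (years in jail, so lower is better). The dictionary layout and the best_response helper are our illustrative names, not from any particular source:

```python
# Payoffs as (years for A, years for B); actions are "silent" or "squeal".
PAYOFF = {
    ("silent", "silent"): (1, 1),    # neither squeals: 2 years total
    ("squeal", "silent"): (0, 10),   # A squeals, B stays silent: 10 total
    ("silent", "squeal"): (10, 0),
    ("squeal", "squeal"): (7, 7),    # both squeal: 14 years total, the worst
}

def best_response(other_action: str) -> str:
    """A's best move given B's action: pick whichever yields fewer years."""
    return min(("silent", "squeal"),
               key=lambda action: PAYOFF[(action, other_action)][0])

for b_action in ("silent", "squeal"):
    print(f"If B plays {b_action}, A's best response is to {best_response(b_action)}")
# Prints "squeal" both times (0 < 1 and 7 < 10), so two rational prisoners
# both squeal and land on the collectively worst outcome: the trap in question.
```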
[00:09:26] Juan Granados:
But it's the worst decision for yourself, and you can't help but make it — that was kind of what I took from this. Yeah, I guess I'll extrapolate it — so I might have just extrapolated too hard into some of the more personal views. But yeah, I guess it's a zero sum game, right? Where you're competing for something, like when you're competing for an outcome, and then make something else worse in the process. That's basically it — you're competing for something, but something else gets worse. I guess I took it in that lens. And so, like, for the kids one — it was very real. For those folks who've got kids, you'll understand this, if you've got the ability to send them to things. The problem there is that you think that sending them to more activities — because more people are sending their kids to more activities — is a good thing. And then in the end, it becomes a bit of an effect. That's a trap. That's the Moloch trap, where you go, well, if my kids are doing three activities but everyone else's kids are doing four activities, mine have got to do four activities. And the choice of that is kind of an assimilation, a race with everyone else that's doing it, but it's for the sake of the activities, not for the sake of the outcome that's being achieved from them. So it's a trap because, I guess, it's a shift in focus. I think this is the underlying piece of what I was thinking with a Moloch trap — or at least in my mind it was a little different. It was just like: where is it a case that you just start focusing elsewhere, because you've either joined in the race or you're participating in the way that it's supposed to be participated in, and you lose sight of what the real outcome is that you're after? Because then you start getting some other random type of outcome that you think is the right thing to be getting, but in the end it's not what you actually wanted to begin with. Yeah.
[00:11:07] Kyrin Down:
I'll jump in because I did a bit of research and like what is a Moloch problem as well? Where did this come from? Why
[00:11:14] Juan Granados:
why is this a term? Because — where did you first hear of it as well? I just thought it was, like, a one-minute little learning piece that I do on a daily basis for things, and it was, like, the Moloch problem. And I was like, oh, cool. Where did you hear about it, to begin with? Okay.
[00:11:28] Kyrin Down:
But what was there? Like, what resource? What? Oh, ChatGPT —
[00:11:31] Juan Granados:
An AI one-minute view. I did link to that original one that you sent through in the Discord
[00:11:36] Kyrin Down:
Yeah. As well. Yep. So that's where the term comes from, the Moloch problem. Moloch is, like, an ancient god in some sort of religion, I can't remember. But the term Moloch problem — where it was being used, this was, like, a kind of creative slash destructive god. He was immortalized, I guess, in a poem by Allen Ginsberg. So this is, like, the Beat era. So think of Jack Kerouac — On the Road, we did a book review on that — which is this kind of anti-establishment, hippie, anarchic type of views in The United States. And I think, like, I'm gonna say the 1950s-60s period, after the wars, things like this.
And it was then a blog, which was called Slate Star Codex, in 2014 — it might have a different name now, because it was from Scott Alexander and he changed the name of his blog. So it was linked to other concepts I've heard about for a while, and I listed some of these in there: tragedy of the commons, prisoner's dilemma, coordination failure, race to the bottom, multipolar traps — all of these sorts of things are kind of in that same genre. And I tried reading the blog. It's fucking difficult, man. And it's difficult because it's not — it's one of those, I guess, like, poem slash fun idea slash someone-musing-on-the-world pieces.
And it's in a kind of critique type way — it's like a critique without offering much in the way of solutions. You know, there's people in this world who are really good at bashing the shit out of, like, the current system, but when it comes to offering solutions, they're not so great at it. It's along those lines. And it's long, it's long, and I found it, to be honest, rather intellectual, hoity-toity, holier-than-thou sort of deal. And he was largely talking about what I thought was the Moloch problem. So this is AI safety, politics, climate change, capitalism critiques, which, to be honest, I find generally to be rather unproductive conversations. And I can talk about that in a little bit, perhaps.
But these ones were really saying, like, you know, an environmental Moloch problem would be: we're all trying to produce the best that we can, or, you know, the most efficiencies, and that leads to dumping of toxic waste in the water, which then has detrimental environmental effects. Think of books like Silent Spring — I've got to cover that at some point in the future — which is what started, I guess, almost like the environmental movement, things like this. So for these ones, it really was coming from — not individual, personalized things like you're talking about, but the big, grander, large-scale "humans are gonna kill each other over time, because we're all too selfish" sort of deal. Yeah, yeah. It had this element of humans are bad.
An individual might be good, but humans are bad overall. That was the kind of
[00:15:11] Juan Granados:
feeling behind the surface of what they were trying to get to. Yeah. Yeah. For sure. Yeah. I mean, like, for sure — I think the idea or the problem of Moloch — again, because this is just a concept and it's got a name, I thought that's kinda cool. But that whole idea of — and it was the AI one, of those, where I was like, oh yeah, I can clearly see it. Again, what you define as bad and good obviously differs, and so it changes the narrative of it. But if you take AI usage, for instance, we are in a clear race-to-the-bottom type scenario where everyone's spending as much money as possible. I'm talking across, like, governments and everything else — there's companies, governments buying and spending ridiculous amounts of money to get ridiculous amounts of processing power and more chips and more this. And if you don't do it, you can't stop, because then China's gonna do it or another place is gonna do it. So you have to just go as hard as possible. And then you can get into a conversation — well, what about security, or what if you just launch into AGI, general intelligence or superintelligence, and something crazy happens and you break something? And it's like, yes, but if you don't do it, the other person's gonna do it, so you might as well just continue. And so that race to the bottom, or race to succeed, whatever it is in its intended purpose there — that's the Moloch problem on maybe AI security, in that there's going to be unintended consequences to that and nobody wants to stop,
[00:16:31] Kyrin Down:
to pause. And you're saying that's at the level of, what, countries? Or companies
[00:16:37] Juan Granados:
and their decisions too? Well, no, no, I think you can — so the concept of AI, let's just say AI, that's the example I was going to bring up, where you could use it at the level of, like, humanity. Humanity here, like, globally. You're talking at the country level — like, the USA trying to develop as much as possible. There's that. There's company versus company trying to develop or use the technology, and it's a race of how fast can you bring this in so you don't get obliterated by whoever — all the way down to the individual, where you can have that too. And this is one of the things I started to see in my own tooling that I use in my day-to-day life.
I've for years now been trying to use more AI for a lot of things. AI in the sense of summarize this, analyze that, improve this workflow, processes, all of those things. And I can see that people would get — see, this is the dicey thing. You could get into a Moloch problem if you don't keep a really clear line of sight — and I'm talking individual here — line of sight into what the hell it is that you're trying to do. So the example I was going to give was for annual goals, in that now, you know, month by month, day by day, you have more improved AI tools, and I've played around with a variety of them. And I thought, this particular year coming around: hey, I'm going to just try and do the annual goals mostly through the AI tool. Here's my entire daily journal notes from the past year. Here's all my lifts. Here's all my training. Here's what I've been doing for the last couple of years of my annual goals. Here's my, like, one-to-five eminent human being qualities — I'm at level two here and level three here and level two here; I wanna get to level three, level four, level three here. Cool.
Process all that. Tell me how I might be able to do that and achieve it through annual goals and whatnot. And what it gave me, look, in part was good, good information, fine. I leveraged it — I'm maybe leveraging 20% of it, but I had to shift it somewhat in my mind. But what I saw, at least as an individual looking at it, was: if everyone else was using it, they would all trend to this same general answer, perhaps, because it felt generalized, not too specific to me, even though I'd given it so much information. And so that's when you start getting a bit of a race-to-the-bottom type scenario with the use of AI — because is it going to kind of all trend together? It's leveraging all the information in the world. And so when I say I'm doing running and I'm training and I'm doing fitness — I did a half marathon last year, hey, I kinda wanna improve my running and do more running — what's the first thing that it said right after I gave it all this? I wasn't specific that I wanna do a marathon, but it said, oh, you should do a marathon, and a half marathon at this time. And so part of me goes, like, yes, I get it. But then what's the evolution of that if I continue using this tool? It's a very general trend towards the other Moloch problem of society: oh well, it becomes do an ultra, or do it faster, or do it again. Where the concept — and I guess this is the bit that I had to, like, play around with for the last month — is going, okay: do I wanna do a marathon because I wanna do a marathon, or do I want to do a marathon because I'm falling into the Moloch problem, the problem at hand, of it's a race to do more, be better, do it faster? And you can get really caught in that trap and lose sight of maybe the benefit that I wanted in the first place, which was just to be healthy and do things, versus doing it because you're trying to amass a ranking or a popularity or more people to follow you for a certain reason. So I kind of came eyeball to eyeball with AI usage, seeing how it could lead me down that path, and then kind of compared it to those other ones around, like, social status, or doing something because society generally trends that way, as in, oh yeah, that's a thing to do. And long story short for that one: so I do have in my annual goals a marathon and a half marathon. Hey, I got you. But weirdly enough, I went, yes, I do wanna do that. But when I really, like, thought about it myself, I was like — but I don't care about doing it in a race.
And that's the important thing, in that I actually don't give a flying fuck about going and doing it at the Sydney Marathon or anything like that. No. No way. But I'm happy to do it on my own, just on, like, a random Sunday, and plan for it. And it is the achievement of achieving that, not the fact of doing it at a location. And I think that popped me out a little bit from that sort of problem. It's like, oh yeah, I don't have to do it and plan it just like everyone else is doing it, for twelve weeks at this particular location. I can't say right at this moment that I've ever had the passion to do a Sydney Marathon or the London Marathon or something to that effect. But the achievement of doing a particular distance — that would be pretty cool. I could see myself doing that. So I think the AI piece could lead, at the local level, to you chasing down things for the wrong purposes. And then, yeah, the application of it to, like, a global, country-wide thing —
Sure, I probably have less say or less input into that, but I could see how that becomes the Moloch problem. There's not so much you and me can do about it, but it just screams out as something that we're careening headfirst into and no one's really stopping. I've listened to plenty of podcasts about this, and it's like, no one's stopping. Everyone's just going for it as hard as they can, and let's just see what happens at the end of it. Yeah, sure. So I didn't think that much about applying it at the individual level, because I just
[00:22:25] Kyrin Down:
thought that it necessitated a group. I thought that was the whole point of the Moloch problem — it's meant to apply to a group setting. I guess, like I'm saying, AI safety policy. At the individual level,
[00:22:38] Juan Granados:
it applies in the sense that if I'm doing that with the usage of something — AI as the example — if I'm doing that, I can see that there would be a gigantic amount of people who would also similarly be doing it or applying it in that way. And then you start trending towards the same commonality, or similar commonalities,
[00:22:54] Kyrin Down:
where that problem might apply in slightly different ways. But did you — do you think — did you solve the problem yourself, so you're not part of the larger group?
[00:23:04] Juan Granados:
Well, that — I get that. From when I was reading about it, the whole concept of calling it a Moloch problem, of turning the idea into a name in that particular way, is so you can jump out of a Moloch problem. It probably still exists and is being used and applied in a whole variety of ways by different people, but you can choose not to be participating in it. So I guess in this example, I'd say I think I've been able to spot it. Have I necessarily been able to step away from it? I don't know if I have in its full entirety, but at least I've spotted it, to know, like, oh, I'm probably going to see this and still continue to use it in this way. So I'm going to spot it more than beforehand, at least, when I'd just be like, cool, that sounds good, go on, just do it — kind of without even really thinking about it. Sure. Yeah. I don't know if that's a
[00:23:54] Kyrin Down:
— like, calling it a Moloch problem for the individual, for me, seems kind of pointless, because what would be a better way of describing it? It's just unintended consequences. You're just not realizing that you're doing something and it's going to have a detrimental effect on what it is that you're trying to do as a whole. Yeah. Well, just lack of foresight, perhaps, or lack of understanding of how reality works.
[00:24:26] Juan Granados:
Well, but that's what I mean, I guess. The content that I liked — when I looked at it, I was like, I liked just the one singular name that you can apply to a concept. Because when you're trying to describe a concept to someone, sometimes, you know, you alternate it, you shift it — oh, what about this? What about that? Whereas, at least for yourself, you say, oh, a Moloch problem. And yeah, there's the applicability to everyone, from a humanity perspective. Yep. You can use it in that particular instance as well, for, like, a Moloch problem. But what I didn't find as interesting — and why I wanted to apply it at an individual level — is, for some of those ones, what the fuck are we gonna do? Like, us, I mean, mere mortals, you know? Are you gonna be doing anything about AI safety at the global level? Mhmm. Maybe. Maybe some of you do, and maybe there's pushing from — let's just say you go and vote, that could be one way. Go and protest. But say AI security, or AI whatever you want to deem it — like, I don't think anyone's stopping that. That's just, like, a Moloch problem that maybe we can identify, and it's like, we're all just going headfirst into it. Yeah, I definitely disagree with all of those things
[00:25:33] Kyrin Down:
in terms of — I'll get onto that shortly. It's just, like — so for example, let's say I want to clean my teeth. I want my teeth to look better. So I start brushing them, and I over-brush them, and I start wearing away all the enamel, and my teeth end up worse. Yep. Even though I might get, like, a slight boost initially, because I'm brushing them so good — now all nice and clean. And that was just an unintended consequence. I didn't think about it. I didn't know about it. Perhaps you wouldn't call that a Moloch problem, right? You'd just go, that's just a problem. But then could I say, oh, this is a Moloch problem? Because the reason I want my teeth to look good is I'm in the dating scene. I'm in the world with everyone else.
My teeth looking better will give me a competitive advantage. So therefore, that's the reason why I'm doing this behavior. And then it has this unintended effect. And — no, that wouldn't be a Moloch problem. Okay, why not? Because you'd have to have everyone trending towards doing that, for that reason. Okay, but so then, like, the kids one, for example —
[00:26:45] Juan Granados:
is that everyone's doing that with their kids? Yes. If you're a parent — if you're a parent, comment in, because I'm telling you right now, in the conversations about kids, at least the ones that I've had — how are they going with their talking, how are they going with this, whatever — there is an astounding amount of times a topic will come up around: oh, are they doing soccer? Are they doing something too? Are they playing football now? Are they doing this one? Oh, my kids are in this. Oh, but my kids — it happens, like — I reckon every conversation I've had with every parent, that's happened. If it doesn't — like, if you don't experience that, yeah, tell me, tell me. Is it just one group? But there's a lot — there's, like, heaps of blogs, there's Reddit articles about it. There's whole TV shows done about this.
[00:27:31] Kyrin Down:
So it happens a lot of the time. Okay. Yeah. Could be — could be different in that regard. All right. My section takes a little bit of time, so I recommend we might do the boosts. Okay. And then get into that. So, the comments that are on there — has anyone said anything? Yeah, sure. So I see Cole McCormack coming in. Thanks, Cole. He was just letting us know that the audio on YouTube is better than in the podcasting apps.
[00:27:56] Juan Granados:
Could be.
[00:27:57] Kyrin Down:
What are they doing? No, it's probably — it could be my fault. Depending on the setting that I chose, I might have chosen the multichannel instead of the podcast stereo option. It should be the same, but obviously it isn't. So thanks, Cole, for letting me know. I will try and fix it up. I see Patricia also joining us in the chat. Thank you, everyone. We are live here at 9 a.m. Eastern Standard Time on Sundays. This should be a pretty rock solid time for at least the next six months to a year type period. And yeah, this is a value for value podcast. We just ask that you send in some support. There's a section where we highlight financial support. Did we get any this week? Sad puppy. Sad puppy. Sad puppy. And no streams either, at least not that I saw
from True Fans. I'm sure other people were streaming it; we just don't see those appearing in our Discord. Here's a question. In our Discord, we've just made these channels hidden, but I could make them available and just non-postable, as in, you can see boostagrams coming in beforehand. So let's say someone was joining our Discord and wanted to just see, oh, did they get any boostagrams this week? I could make that available so that they could see it but not post in there. Yeah, I can only answer from my — so, like, if someone can comment whether they would like that, that'd be good to see. Yeah. Sure. All the details. Yep. Cool. Alright. Thank you. Thank you, everyone, for joining in and letting us know things like this. Very, very helpful and important. So, I had some more thoughts, just on the kind of variation that I had. So, in the blog and in the poem, Moloch was this entity — I guess you'd call it, like, an ephemeral fog that blinds us to our problems. Or even — that's probably not the right metaphor, because we can see the detrimental effects, but we can't do anything about it. So I don't know what you'd call that. Maybe an ephemeral wasp.
It's coming — it's coming towards you. It's ephemeral, you can't, like, bat it away or anything. But you know, you know it's coming to get you. Which chains us to suboptimal outcomes and makes us behave irrationally, if you take it as a large whole. It's rather poetic, creative, and intriguing. It captures you. It's like, the Moloch problem — ooh, what is this? It sounds good. It's a nice name for it. And, you know, I think that's all reflected in the origination of the poem and then the subsequent blog, which put this Moloch problem as a concept out there in the world. That's kind of the origination point of where it came from. And I think the problem with that is: it's not effective philosophy. It's ineffective philosophy. If you read that blog, that is ineffective philosophy at its core, because it lacks grounding, and it almost always goes too theoretical. So for example, in the blog, he talks about a dollar auction. Have you heard of this before? The dollar auction? Yeah. And it's essentially an auction for a dollar. You can buy a dollar, and there's only two rules to it: the bill goes to the winner of the auction, but second place loses their bid as well. So let's say it's just the two of us in a room, there's $1 over here, I bid first — I bid 10¢, because, like, oh, I bid 10¢ and win $1? That's a good deal.
You then go, alright, well, I'm gonna bid 20¢. And if that happens, I'm losing my 10¢, and you win at 20¢. So obviously, we're gonna go right up close to $1, right? Yep. But then I get to 99¢ — 99.99, whatever — and there's no higher to go without it being a stupid decision on your part, because your next option is to go, like, $1 or $1.01. Yeah, more than what it's worth. Yeah. So it makes no sense for you. But you're losing 98¢, because that was your bid in second place. So rationally, you should go above it, just to make sure you win the auction and don't lose the 98¢. Yeah. And so now you go $1.01, and it's like, yeah, I'm losing money, but I'm only going to lose 1¢ now instead of losing 99.
Well, then of course I go higher. And so, theoretically, there's no end to this — we're just going to keep outbidding each other. And it's, you know, the rational thing to do, but it's also irrational, because now we're losing money, when the whole point of the auction was to actually make some money. So that was an example he uses in the blog of the Moloch problem. And I find this much like Zeno's Achilles-and-the-tortoise paradox, which is, you know, a philosophical problem. He said, you know, Achilles is racing the tortoise — or the tortoise and the hare from Aesop's fables. The tortoise is ahead — or, sorry, the hare is behind; I can't remember which way — and the tortoise is always going to move a slight little bit in the time the hare takes to catch up, even as the hare gets ever closer in that time period.
But then in that time period where it catches up, the tortoise is going to move a tiny little bit more forward. And even though the hare can catch up even closer, it gets to the point where, in every little segment of time, the hare is unable to catch up, because it's always got to move that extra little bit of distance that the tortoise has moved. And you go, oh my God, this is, like, groundbreaking. This changes the fabric of reality. Tortoises are always going to be faster than hares. But then if you just think logically — and same with this dollar auction — if we were to actually do it right now, are we going to bid each other to infinity? Is the hare never going to be able to catch up to the tortoise?
No, that's not how reality works. There's actual things in reality which stop these from being real problems.
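To make that point concrete, here's a quick Python sketch of both toys. It's a minimal illustration assuming a two-bidder auction with a fixed 1¢ increment — the function names and parameters are ours, not from the Slate Star Codex post. The "rational" overbidding rule never terminates on its own, while Zeno's infinitely many catch-up steps sum to a finite distance:

```python
def dollar_auction(prize=1.00, step=0.01, max_rounds=500):
    """Two bidders alternate. Second place forfeits their bid, so topping
    the rival by one step always looks locally cheaper than walking away."""
    bids = [0.0, 0.0]
    for round_no in range(max_rounds):
        bidder = round_no % 2
        new_bid = bids[1 - bidder] + step
        # Walking away costs your standing bid; overbidding costs at most
        # (new_bid - prize). Overbidding is only the worse option when
        # prize < 2 * step, which never holds here: escalation forever.
        if bids[bidder] < new_bid - prize:
            break
        bids[bidder] = new_bid
    return bids

print(dollar_auction())  # runs out the rounds near $5 bids, far above the $1 prize

def zeno_chase(head_start=10.0, speed_ratio=0.1, steps=60):
    """Hare runs to where the tortoise was; tortoise adds speed_ratio as much.
    The catch-up slivers form a geometric series with a finite sum."""
    gap, total = head_start, 0.0
    for _ in range(steps):
        total += gap       # hare covers the current gap
        gap *= speed_ratio # tortoise opens a new, smaller gap
    return total

print(zeno_chase())  # ~11.11 = 10 / (1 - 0.1): infinitely many steps, finite distance
```

So the infinite outbidding only exists while both bidders follow the toy rule blindly, and the hare passes the tortoise a little past the 11-unit mark — which is the point being made here about theory versus what actually happens.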
[00:34:16] Juan Granados:
And that also rationally in theory, in theory,
[00:34:20] Kyrin Down:
the ideas, like, stand. Yeah. But in reality, it doesn't work. Correct. And that is what I found with most of the topics it was talking about — problems of AI safety, politics, climate change, capitalism critiques, which were largely what he talks about in the blog, and which I figured were the Moloch problems: individuals doing competitive things and it ending up deleterious for everyone as a whole. There's instances where you can point at it, but it always seems to go too far into these theoretical assumptions.
And then they become so stupid with the implications of what they're saying. And you could then say, okay — but does this mean all theoretical things are stupid? Einstein, for example, let's take his theory of relativity. He was claiming things which were absurd, impossible to believe — you know, humans going at the speed of light, we can't do that, or what happens at the speed of light, etc, etc. And you could then say, alright, well, all of that is wasted. And this is getting a little bit beyond Moloch and becoming, like, what is effective philosophy, in a sense.
And the difference I find between, say, someone creating things like the dollar auction, and Einstein, was: he wasn't making claims about human behavior. For him, it was all: this is what could happen at the speed of light. If this is what is happening here, gravity is going to bend — or spacetime is going to bend — and, you know, light will bend around a star or a black hole, and it will be changed by that. Whereas, you know, we all thought light was just straight photon rays coming and hitting us. And this was where I was just going, okay: if you're making claims upon human behavior, they almost always need to be tested, with a real life example. Because you can make all of these things that sound nice — sounds like a real bad problem, shit, we're gonna kill ourselves to death with climate change, we're gonna, you know, nuke ourselves out of existence.
But we haven't — we didn't do it. And you can see it really obviously with the atomic bomb example, which is — well, no, we actually got kind of close to the brink. And then game theory broke down, in a sense, and we ended up dismantling a lot of those bombs. They didn't squeal on each other. They didn't bomb each other, the US and the Soviet Union. So this is where I go, with things like the Moloch problem —
[00:37:10] Juan Granados:
oh, this doesn't make sense in the real world. Well, see, I think it still does. But the problem lies in the assumption that all Moloch problems, or all problems like that, have a bad result to them. So again, the definition of it is: when you're chasing after something with the unintended consequence of something bad happening. And so, yes, there's going to be a bad thing that comes from it. But that's not to say that good things don't come out of it at the same time. So the AI piece — again, anyone can challenge me on this: if you think someone, an individual right now, can stop how we're progressing in AI, someone please tell me how you're going to do it, because I don't think you will. There is no one developing an AI to destroy all other AI. There is just no feasible way I can ever see that we're not just going to race to as much power usage as possible to create these, and continue on our merry way with it. I just can't see how that stops. But it in itself is, like, an example of technology — humans. Yep. Developing more technology, more technology. Better, better, better, greater, greater, bigger, bigger, bigger. And yes, it will have unintended bad consequences.
We could come up and dream up many of them. So in the distinction of a Moloch problem — it is a Moloch problem. You're doing something, you're chasing something you kind of know is going to give you unintended consequences, but fuck it, we've got to do it, because otherwise someone else is going to do it. However, it does have some good consequences that come out of it as well, right? So, what's the example — Sam Altman has floated an idea, like a universal basic income, UBI, perspective: you know, imagine AI becomes a superintelligence that's doing basically all the work, and then you can democratize it into sort of token utilization. And so, whatever the program is, there's, like, 12 trillion tokens, and every person in the world gets, you know, an even share amount of the tokens. And then you can choose to utilize those tokens — that turn into, you know, economic or power usage — however you want. And that's basically everyone getting the same amount of money. And if you want to go and apply them or group them or create things, that's your own choosing. So you can think of these other unintended things that would maybe be good — that you could see being good, depending obviously on what you see as good. The other problem as well is — and you hit it smack on — yes, I like the Moloch problem as an idea, but it's a human concept. Again, a fucking animal doesn't give a flying fuck about this concept. It is a human concept, like a lot of things from humanity are. And the problem that I see in effective philosophy is when you are talking about the fabric of reality, physics — and if you get all the way down to it, have you heard the concept or the idea that at the very, very base layer, it's just information? Like, that is everything. At the base of physics and maths, it's just information. You know, think about it that way, whatever.
The concepts of humanity don't really apply there, because they're just concepts that we as humans leverage. Again, a donkey does not care about a Moloch problem, doesn't understand a Moloch problem, but things probably occur that all donkeys do that have unintended consequences. Probably. But they don't think about it that way. It's not a concept that applies. So when it applies — and that's why I was like, even as we're talking about being mere mortals, I go: there is a power to seeing where playing games is gonna have unintended consequences, and choosing when an activity is worth the Moloch problem. So, kids' activities — I go back to this. In kids' activities, it's not worth it for me to participate in that problem, in the unintended consequences slash potential results that I might get. There's a book I read ages ago about, kind of, the falsity of the head start, where you think, well, if my kid's doing this plus that plus this, or just doing tennis from, you know, three years old, or doing soccer from four years old, they get a really big head start and they'll turn out to be, like, the best. And the reality is, there are some outliers, absolutely, where the repetition and mastery do make them really good. But on the average, actually, that's not what the data says. Now this book was from, I don't know, the early twenty-twenties.
And so perhaps there's some new data around it, but for the most part, on average, that's not correct. In fact, the head start is more so a myth, apart from a certain few people who really do well and then get held up — Tiger Woods, the Williams sisters, all these individuals, people who've just taught their kid chess. So that doesn't really apply when you look at the data specifically. And I go: think of the unintended consequence of forcing someone, let's say a kid, to do something that they don't like. Take, for instance, us with fucking reading when we were in school. Right? If there was, like, this race of, well, my kid's gonna read 10 books, and my kid's gonna read 12 books, and 15 books and whatnot — the unintended consequence might have been this kid going, fuck you, I'm not reading books anymore. I hate this. Like, I don't like this.
Whereas what you want the outcome to be — perhaps that your kid likes to read, or participates, or picks up their own skills — you can get that without jumping into this problem. So I liked it for that concept alone. But I did see that theorizing it and then calling it, like, well, it's just bad, was dumb. Because in one of the ones I called out earlier, around doing things for the likes and the posts and the comments — again, that is a Moloch problem, in that most people for the most part do it. There's a select few humans out there who just by default will not do it. But for the most part, if you are creating content, if you're posting things, if you're sharing things with the world, in part you probably want people to engage with it in some way, shape, or form. Otherwise you're just posting it for your own goodness — I guess maybe there's some humans out there like that. But for the most part, you want someone to engage with it. If you get one comment, you want two comments. If you get four comments, you want eight comments. And so if everyone's doing that, yes, we know that there's unintended consequences around that. There's examples of social media being bad for kids, because then you're chasing likes and comments as opposed to the reality.
But it also comes with some good consequences as well. If, you know, you do something successfully and it becomes a profession or a passion, or you make money from it, it has the right effects to it. So my, like, underlying thing from all of this is: I like the naming, for knowing it. But the effective philosophy is not fucking just going, oh, I've got to jump out of it completely and not do it. It's being aware. Like, I think it's more just being aware that a problem like that exists, humanity-wide or for an individual, and then realizing what the fuck it is that you actually want the outcome to be. Because sometimes then you will do it. Right? From our perspective, you know, if we have 5,000 subscribers or 10,000 subscribers or a million subscribers, is it getting us to the outcome that we want, even if it's gonna have unintended consequences? What's the unintended consequence gonna be? We're probably gonna get people talking shit about us, about a whole host of things. Right? And the more some of our clips go, you know, viral, and the more people see them, you get all these comments like, what a fucking dummy, dude. Like, whatever.
That's the unintended consequence. I'm happy to have that if it means more individuals engaging with our stuff who then do get a positive result from it. So that's a Moloch problem that I choose to go and participate in, even if everyone's doing it, because it's got some good consequences and maybe it's helping some people out. So where I kind of landed on effective philosophy is: be aware of them. Be aware when you're playing them, and of the potential consequences — hopefully they're not too risky — and judging that and going, okay, well then, do I choose to participate in that? Doing it fully in the knowledge of that, I think, is fine, without — I didn't read into it too deeply, but without this, like, chaotic, like, fuck, if you play it, that's it, we're all gonna, like, blow up and whatnot. Because, again, it goes to the same point as, like, nuclear weapons, where because the US had some, this other country had to have some, and so this other country had to have some. But look, we got to a point where it's like, okay, actually pause here, everybody, because we've now reached the point where we're just going to have mutually assured destruction if anyone launches something. We're spending all our resources on atomic weapons and not on, you know, our economies. Yeah. So, like, humanity does not act in the theory of the world. We act in the reality of us as a species. And so that concept — I kinda landed with it. Cool to know, to be aware of, and to apply the thought process to it, so that you don't go down the path of doing activities that are being done by everyone else and suffering unintended consequences that you're not willing to live with.
Some you will and then fuck it. Good. Yeah.
[00:46:05] Kyrin Down:
I find that the general grouping of these problems, and even just these topics of discussion, tend to have this kind of meta understanding or assumption that humans as individuals are good, but as a species we're bad. And that seems to be the trend I noticed, especially reading that blog, and even in some of the associated things going around this. So, like, AI, for example — no, I don't think there's anyone calling out any individual, like, you're bad for using AI.
Sure, there is someone somewhere always doing something. But as a large sweep, it's not, you know, this person because you used
[00:46:58] Juan Granados:
ChatGPT, you're a bad person now. But because we all use ChatGPT, we're bad as a whole. Well, I challenge that, because we heard it from our good friend Joseph in the comedic arts world. Yeah. How they were talking about — you're using AI to do your thing, in your images? Bad human, bad person. You know, they told us that. Now, I don't know if this is, like, everyone saying that in the comedic artist world, but I could see that. I could see that, where people are like, no, fuck you for using this. But if you're, you know, making people laugh, then again, it's an unintended consequence you're willing to participate in, because your end outcome was to make people laugh and to have a good time. So it's like, yes, I'm going to participate in it, because the unintended consequence is not too bad next to the fact that what I'm getting is what I wanted in the first place. Okay, sure. Yeah. Yeah, that's a good example. It might even highlight the
[00:47:54] Kyrin Down:
opposite of what I was going to say, which is — and perhaps I'm more aligned with them than I thought, because I, for example, believe that there are actually more, like, bad individual humans, and there's tons of behaviors which are terrible, but humanity as a whole — the terms good and evil, or bad, are tricky when you're applying them to concepts like this. But I tend to incline more to perhaps the optimistic side of things. So for example, you'll hear: okay, but now we can prove humanity's bad as a whole, or people are bad as a whole, with, like, the Stanford prison experiment, the Stanley Milgram shock experiments, where this was showing, oh, if you have, you know, a human overlooking another human, you can get them to shock them to death. Or, like, the Stanford prison one, where the captors treated the prisoners terribly — even after only six days, they had to shut down the experiment because it was so bad. And it's just like, okay, but if you actually look into them, those experiments weren't unbiased in their setup, and there were all sorts of things which made them incentives for people to do the bad thing in a non-normal way, I guess — in a non-realistic sense. Correct. Yeah. So it's hard, because there's undoubtedly times where there's kind of Moloch-problem type things. Look at the Soviet Union, the gulags.
There's tons of things where it's like, yeah, you had to compete in being more psychopathic to prove your loyalty to the Soviet Union. And in doing this, you create this whole fucked-up system, put tons of people in gulags, kill millions of people, etc, etc, etc. So there are things like that where I go, alright, yeah, you know what, that perhaps is a good example of a Moloch problem that went terribly. Was it sustainable over time, and did it end up killing all of humanity? No. But it killed a lot of people — terrible, for sure. So there's ones like that where I go, okay, that's a good example of, like, yeah, humanity as a whole is bad. But as a general rule, I think this is where, like, a Steven Pinker type book — Enlightenment Now — just shows over time, like, there's less wars, less rapes, less deaths in childbirth of either the child or the mother — mortality rates continue to decline, so people are getting older, in general.
Things like happiness and stuff like that, it's hard to measure. You could argue differently for that. So I don't know. How do you solve a Moloch problem?
[00:50:57] Juan Granados:
That — that's why I was like, you don't solve it?
[00:51:00] Kyrin Down:
Yeah, and I actually even question the validity of it being a legit, actual problem.
[00:51:09] Juan Granados:
I would say it is a legit, actual problem. It's legit in the sense that it's an activity that you do, that everyone's doing, with unintended consequences, and you'll land at those unintended consequences. But where I balk at the idea is it being this catastrophic level of unintended consequences, such that you can't play them when you choose to play them. It's much more: be aware of a Moloch problem — like, oh, it's Moloch, that's cool, it's a name, as opposed to this full sentence of what it is — and then just being aware of whether you're playing it or not, for the right reasons. Cause then, if you're in it for the activities and the unintended consequences are, like, whatever — fuck it, do it.
If it's a real terrible unintended consequence, then, okay, maybe you actually realize, okay, potentially I won't do this. In real-world practice, I don't know when this would really occur, but it's like, you know, you can win a million dollars, but an unintended consequence of whatever you're doing is you might die. Like, I would go, okay, well, I'm not doing that. That's definitely a game I do not want to play. Right. But if the risk was lesser — an unintended consequence might be, like, oh, you don't get to talk to your seventeenth best friend — I might be like, yeah, okay. Let's do it. Fucking bitch.
My seventeenth best friend is fucking boring anyway. So, you know what I mean? Like, it very much differs on what that is. Whoever — if you're a philosopher, you're probably hating us. But you know what I actually landed on, now that I'm talking about this — maybe I thought about this prior. One of the reasons I guess I'm doing much more effective-philosophy conversations and stuff is: I like philosophy. But the more and more I think about it, I'm like, a lot of philosophy sucks. Like, a lot of philosophy is shit, in that it might be true in the most real sense in these, like, theoretical things — I'm not saying it's wrong in that sense. But when it comes to not just applicability, but just humanity at large — like, you, an individual walking around — fucking shit, what kind of fucking application is this? In what world would this have to exist? It would have to exist in a very, very specific one, because even in any of this game theory that we're just talking about, you could just change such little tiny things of reality — like, yeah, but what if this happened, or that happened, and you're angry or you're hungry or whatever? Like, it just changes fucking dramatically.
It is a hyperobject, people. It's too complex. And so I kind of go, just a load of shit. The more you have to the more I want to, I reckon for the next few years at least on the podcast and more, it's effective philosophy stuff. So when I see shit like that, like, calling it out and be like, nah, this is bullshit unless it applies. And for me, it applied in like, oh, I can see where using AI leads me personally. I could see myself going off the beaten path into generality. I don't really want that, but I'm aware of it. I'll leverage it in a specific way. So good. That's that's my table. You're almost
[00:54:03] Kyrin Down:
we're very much on the same page. I've read so many philosophy books now, and do you know what my favorite one of all time that I've read was? Batman and Philosophy. Batman and Philosophy was great because it highlighted the, I guess, larger concepts that you'll sometimes face, or that are kind of intriguing or interesting. Or like the trolley problem, for example: things like this where you can provide examples which go back and forth on the intuition, like, oh, okay, yeah, I would push the switch. Oh, wait, no, I wouldn't push a fat man off a bridge to save the five people. So therefore, what was the difference between the two? Doing those kinds of highlighting experiments, and then showcasing how Batman did it, I found super useful.
All of the other books talking about, you know, just these general things related to time, or related to AI safety and stuff like this, I will dismiss basically offhand: a discussion of capitalism, of AI safety. But if you want to talk about a niche idea, like, oh yeah, there's a gun attachment that utilizes AI somehow, or attaching it to drones so it can distinguish between, you know, the dark skinned people and the light skinned people fighting in Afghanistan, I would be happy with that. But even those ones, if you're specific about
[00:55:37] Juan Granados:
the chips being made in Taiwan, and the race to the bottom resulting in a war in Taiwan over the manufacturing of these chips, then it kind of makes sense. Well, I'll actually be able to answer that shortly, because I'm reading a book called Chip War at the moment, which is going over the whole
[00:55:51] Kyrin Down:
aspect of it. What year was that? It's relatively recent. I think it was like four
[00:55:57] Juan Granados:
or five years ago. Yep. Yeah. So right now there are some companies building chips outside of Taiwan, attempting certain things like that to avoid it, but a shit ton is still being manufactured there. A lot of limits are trying to be imposed on China, but they just get around them, right? Again, this is more the politics piece, but if you want to talk about that from a race to the bottom and its problems, okay, there's some reality there. But again, I'm not saying all philosophies are shit. I'm just saying the ones that are not grounded in reality are the shit ones. Take stoicism: there's lots to it, but stoicism, I would say, is largely grounded in reality, because it's a lot of individuals who lived through that life. Some of it might be a little bit airy fairy, but a lot of it is grounded, especially the stuff that remains today, where they give examples. Buddhism, on the other hand, has a lot more that's not grounded, weirdly enough, but it does actually apply really well to everyday life. It's just that there's fewer examples, at least that I've seen, that are like, oh, this human did this very thing. It's very much story based and everything else.
Fuck, it's really applicable. To the point that I remember, maybe two months ago, looking into a couple of Buddhist things and thinking, man, I could see myself being a Buddhist. Like, that's actually, like, really, really related to the best things. And the best Buddhist books I've read actually have
[00:57:18] Kyrin Down:
specific examples of: here was a problem I was dealing with, and this is the Buddhist concept, or Taoist concept, yeah, Eastern philosophy in general, that helped me solve this problem in my own mind or approach it differently compared to how I was approaching it. If you read something like Nietzsche, Heidegger... Kant, I mean, Kant gets a lot of hate... Kierkegaard... I'm going to chuck this out. We're talking about popular clips: one of our most popular on the book reviews channel at the moment is, like, "there's only three good Nietzsche books." And once again, it's not even me saying it. It's Dave Jones.
[00:57:58] Juan Granados:
That one gets such comments talking about,
[00:58:02] Kyrin Down:
actually, maybe that's not on the book reviews channel, because it was a Mere Mortals conversation. So it's probably on our main channel. And yeah, the amount of criticism thrown Dave's way is off the charts. Dave, you're in trouble, man. The internet doesn't like you. And I'm going to say it, I'm going to back them up: Nietzsche is shit. I'm gonna clip that for sure. Beyond Good and Evil is shit. Reading fucking Thus Spoke Zarathustra? Shit. Will I try any of his others? Probably, because I'm gonna need it, and I keep getting drawn back to these things. But they're shit.
[00:58:42] Juan Granados:
That's enough of a summary, I think. That's it, Mere Mortalites. Again, we're live on Sundays, nine a.m. If you want to join us live, of course, feel free to do so. And if you want to support us, feel free to do it in various ways: you can comment, you can like, you can join live, you can share. Yes. You can join our Discord.
[00:59:01] Kyrin Down:
Send us a boostagram: meremortalspodcasts.com/support. Correct. That's where you can learn more about how to do things like that. And, yeah, we very much appreciate it and will shout you out. We do have a leaderboard that we keep in contact with as well. And, yeah, we've still got the offer of,
[00:59:19] Juan Granados:
if you send in 100,000 sats, we will send you a shirt. That's right. It's a 200 Australian dollar shirt by this point. Just getting more expensive, people. That's what happens. Alright. Meanwhile, I will leave you there. Be well wherever you are in the world. Thank you very much. Juan out. Kyrin out. Bye. Bye.
[00:59:34] Kyrin Down:
Bye. Bye.
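[Editor's note: a quick sketch of the sat-to-dollar arithmetic behind the shirt offer above. A sat is a fixed fraction of a bitcoin, so the fiat value of a 100,000-sat boost rises with the bitcoin price; the A$200,000-per-BTC figure below is a placeholder for illustration, not a quote from the episode.]

```python
# Why the 100,000-sat shirt "keeps getting more expensive": sats are a
# fixed fraction (1/100,000,000) of a bitcoin, so their fiat value tracks
# the bitcoin price.
SATS_PER_BTC = 100_000_000

def sats_to_aud(sats: int, btc_price_aud: float) -> float:
    """Convert an amount in sats to Australian dollars at a given BTC price."""
    return sats / SATS_PER_BTC * btc_price_aud

# At a placeholder price of A$200,000 per bitcoin, 100,000 sats comes to
# A$200, roughly the shirt value quoted in the sign-off.
print(sats_to_aud(100_000, btc_price_aud=200_000.0))  # 200.0
```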
Welcome back, Mere Mortalites. You've got some musings from your two favorite mere mortals right here. You're Juan. and Kyrin yew. August 3 and look, Mere Mortalites, today we're going to be talking on a concept that I don't want to get very detailed and definitive about because again, some of the learnings I've been doing over the past few weeks is the the more that I try to prepare, over prepare or get into like really specific details, I don't necessarily enjoy or do as well as it is more of a conversation. So what we try to do with musings is deep conversations with a lighthearted touch. So it is gonna be more of a deep conversation, but again, with a lighthearted touch. So we're talking the philosophies, we're talking the ideas with a light touch, not anything too deep or dastardly that's hard to understand. So the topic for today is MOC.
Yep. Now you might be wondering what the hell is mork? Well, I'll tell you what it's not. I'll tell you what it's not. It is not a harmless spiny lizard of grotesque appearance which feeds chiefly on ants found in arid inland Australia. I did not know that it was a spiny lizard as well that lived in Australia. So we're not talking about that. And the reason I actually came across this, and I hadn't heard of it before was over the past month, one of the things I just I randomness I've started doing is through, like, a one minute training of just interesting, cool, different ideas I just hopefully haven't heard before that I just, like, get AI to generate and I'll just read through them. I'll be, oh, yeah. Cool. Interesting. Interesting. Interesting. And the the Moloch, however you wanna call it, there's a sometimes if I read it as a Moloch theory, there's Moloch problem.
And as I read it, I was like, oh, that would be cool to talk about. I'll define it quickly. And then we can just get into it. Yeah, please do. I'll call it as a, what Google is saying here, a Moloch trap. Simple terms, it's a zero sum game. Really? Zero sum game? It explains a situation where participants compete for object or outcome x, but make something else worse in the process. So everyone competes for x, but in doing so, everyone ends up actually being worse off in the process. So that's kind of the the moral problem. You know, you go after something and you think that you're doing the right thing and lo and behold, end up doing worse for the entire civilization, humanity, whoever your folks are. Do you wanna give us a couple of examples? Yeah, yeah. So a couple of examples. So here's a few, at least that, pertain to me. And I'll, I'll, I'll try to find ones that weren't like the common tropes, I guess, that exist around this. So the, the real examples that I was putting or at least finding for myself, kids schedules.
So, actually overbooking their kids, which I have had conversations with people around and I've fought to not do this and gets into the conversation about Moloch problems. Being like people will book their kids for things to do, not so much because they want to but because they're scared the kid will find so so that would be more along the lines of I guess you're you're trying to
[00:02:57] Kyrin Down:
what make the kid healthier, more social overall, but you're declining their health because
[00:03:05] Juan Granados:
you're doing too much. Yeah. Well, it'd be more like, the the example here would be, parent a says with a kid, I am a kid. So in a, b, and c. And then the other parent goes or thinks, oh, damn, like, mine must be falling behind. I need to do a, b, c, and d. And and, like, they're gonna be doing this and the tutor for that. And so in the localized way, you do think, well, I am trying to give the best head start or the most advancement for your kid to do sort of things or activities, but in the end, in the outcome, you probably, or you might be actually pricing just an unwantedness that they want. Like, they're just like, oh, I just wanna spend more time with like a parent. Let's just say as opposed to doing all these activities for the activity's sake.
It it it becomes more a so the what happens off is that you just get this general trend towards, oh, people just doing activities for the sake of the activity, not for the outcome that is generated from the original activities that you wanted to do in the first place. So that was an example, that's good for me. Another one was just like social media and statuses. That's another example from a malloc problem. So this is just chasing likes trends, things like that, as opposed to, yeah, doing it for our own reasons. It's from, we don't want, we want to avoid, we want to avoid looking like we're not participating in the race to, get them more likes and the more posts. Now this doesn't happen for everyone. This is probably one that I like is a general example given around social media and status that I see maybe it's an easier Moloch problem to jump out of it. That the reason as I'm talking through this that I like the word at the very least of Moloch and I can't bothered finding that there is a, keyword linkages of it. Okay, cool.
I like that. Just like the naming of it is cool. I like the naming aspect of when you name something or where you when you like personalize it, when you make give it like identity, basically, it helps you to assign it something and step away from it as opposed to it being just like, oh, like what is this? What's going on here? Why am I and throwing this issue? And compromising. That's what it is. I think that's more related to adding things to animals, human qualities to animals. So I don't know if there's one. What's it called like human qualities to like ideas? Something like that? Yeah. So anyways, that's what that is. The health and wellness aspect of it as well. So this is, when you become competitive, you have to like kindness. So this is again, we're all sort of, you know, you get into a race, which maybe this is what I've started to see, race into like the healthiest, the happiest, the fittest.
And again, in a localized way, this can all be good. But then when you start competing and challenging and seeing people do, well, this person did a math one, this person's doing ultra math, and this person's doing this, this, this, and this. And again, then you're all chasing this aspect. You then trend towards this, well, everyone's gonna explode because you don't have enough time and maybe then you're giving up all this energy that you should be spending or could be spending with family and other aspects, either because you believe that that's what you should pursuing if you really like health and fitness. And I I can see myself for sure in that being a Moloch problem that I've been drawn into, or the problem, whatever you wanna call it, theory that
[00:06:12] Kyrin Down:
it's like, okay, that's another one that I would say I've failed that I've failed at sometimes and trying to step away from just like, Okay, this is kind of different from what I had taken as what the the Moloch problem is, in that what you're describing there is more people doing something and it's then having a very obvious rebound effect onto themselves. And it seems the problem is more that that they're not realizing it's happening. So for like the kids thing, like, if you're overbooking them, the rational thing would be, oh, okay, I'm overbooking.
And if I'm overbooking them, it's having this deleterious effect on other parts of their lives or my life, for example, how I'm then starting to treat them. And the fix for that is very easy in that you stop overbooking them. Whereas the Moloch problems that I had heard of were more along the lines of prisoner's dilemma, game theory, things like this, which are, you'd know, you can see the problem, you can see what you're doing, you can see that rationally, I have to do this. And you can see, oh, it's, it's going to have a bad effect on me. But I have to rationally do this, that that was my kind of understanding of what a Moloch problem is. And so to highlight the differences between this, the classic is the prisoner's dilemma.
Two prisoners, one can rat out the other one and get a reduced sentence. But if they both agree to not rat out each other, then they both they get the very lightest sentence. So like, let's say zero years, they don't squeal on each other. If one of them No, sorry, they get one year, if they don't squeal on each other each. If one squeals on the other one, while the other doesn't squeal, the one who is squealing gets zero years and the other one gets punished really heavily, let's say ten years in jail. And then if they both squeal on each other, and vice versa, and if they both squeal on each other, then they both get seven years in jail. So so for example, if you then tally it up and say like, what's the minimal amount of time in jail is the best outcome for them as a whole.
It's for both of them to not squeal, they get two years total, if one of them rats out and the other one, it's a zero years and ten years, so ten years total. And if they both squeal, then it's fourteen years total, because it's seven plus seven. And that's the worst outcome. Rationally, they look at that, and they go, Okay, it would be best if we both didn't agree. And this is where there's variations, because this one has like a trust component to it. So it's like, I need to trust the other person and not squeal. And if there's no communication between them rationally, the thing to do is go don't squeal. No, the yet no rationally, the thing is to squeal and say like, Hey, this other person did this.
And the other person is going to do the same thing. And so they get trapped in this, this game where they're both screwing each other over and they can't get out of it. Which is this kind of ideal that you make the most rational decision.
[00:09:26] Juan Granados:
But it it's the worst decision for yourself, but you can't help but make it that was kind of what I took from this. Yeah, I guess I guess I'll extrapolate it so I might have just extrapolated too hard to some of the more personal views. But yeah, I guess it's a zero sum game, right? Where when you're competing for something, like when you're competing for an outcome and then make something else worse in the process, that's basically you're competing for something, but something else gets worse. I guess I took it in, in that lens. And so like for the kids, kids one, it was very real for those folks who've got kids, you'll understand this. If you've got the ability to send them to things is that the the problem there is is that you think that sending them to more activities because then more people are going sending the kids some more activity is a good thing. And then in the end, it becomes a bit of an effect. That's a, that's a trap. That's the model trap where you go, well, if my kids are doing three activities, but everyone else's kids is doing four activities, getting to do four activities and the choice of that is kind of an assimilation or race with everyone else that's doing it, but it's for the sake of activities, not for the sake of the outcome. So that's being achieved from it. So it's the trap because then you it's, I guess, a simple focus. I think this is the underlying piece with what I was thinking from a Moloch trap or at least in my mind, all different. It was just like, where is it a case where you just start focusing elsewhere because you, because you either join in the race or you're participating in the way that it's supposed to be participated And you don't lose sight of what the real outcome is that you're doing because then you start getting some other random type of outcome that you didn't you think is a is the right thing to be getting. But in the end, there's not what you actually wanted in the to begin with. Yeah.
[00:11:07] Kyrin Down:
I'll jump in because I did a bit of research and like what is a Moloch problem as well? Where did this come from? Why
[00:11:14] Juan Granados:
why is this a term? Because where did you first hear of it as well? I just thought it was like a one minute little like learning piece that I do on a daily basis for things and it was like the Moloch problem. And I was like, oh, cool. Where did you hear about it? And to begin with. Okay.
[00:11:28] Kyrin Down:
What but was there? Like what resource? What? Oh, TTPT
[00:11:31] Juan Granados:
AI one minute view. I did link to the that original one that you sent through in the discord
[00:11:36] Kyrin Down:
Yeah. As well. Yep. So so that's where the term comes from the Moloch problem. Moloch is a like an ancient God in some sort of religion. I can't remember. But the the term Moloch problem he was being used and this was like a kind of creative slash destructive God. He was immortalized, I guess, in a poem by Allen Ginsberg. So this is like the beat era. So think of Jack Kerouac on the road did a book review on that, which is this kind of anti establishment, hippie, anarchic type views in The United States. And I think like, I'm gonna say like the 1960s 50s period after the wars, things like this.
And it was then a blog, which was the S was called the slate star codex in 2014 might have had a different name because it was, from Scott Alexander and he changed the name of his blog. So it was linked to other concepts I've heard about for a while and I listed some of these in there, which is tragedy of the commons prisoner's dilemma, coordination failure, race to the bottom, multipolar traps, all of these sorts of things are kind of in that same genre. And I tried reading the blog. It's fucking difficult, man. And it's difficult because it's not. It's one of those, I guess, like, it's it's a poem slash fun idea slash someone musing on the world.
And in a kind of critique type way, it's like a critique without offering much solutions. You know, there's people who are really good in this world are bashing the shit out of like, the current system. But when they come to offering solutions are not so great at it. It's along those lines. And it's very, it's long, it's long, and I found to be honest, rather intellectual, Heidi, holier than now sort of deal. And he was largely talking about what I thought was the Moloch problem. So this is AI safety politics, climate change, capitalism critiques, which to be honest, I find generally to be rather unproductive conversations. And I can talk about that in a little bit, perhaps.
But these ones were really saying like, you know, environmental Moloch problem would be, you know, we're all trying to produce the best that we can or, you know, most efficiencies and that leads to dumping of toxic waste in the water, which then has tetra detrimental environmental effects. Think of books like The Silent Spring, I've got to got to cover that at some point in the future, which is what started I guess, almost like the environmental movement and things like this. So for these ones, it really was coming from this Not individual personalized things like you're talking about, but the big grander large scale, humans are gonna kill each other over time, because we're all too selfish sort of deal. Yeah, yeah. I had this element of humans are bad.
And the it's not that like an individual might be good, but humans are bad overall. That was good kind of
[00:15:11] Juan Granados:
feeling behind the surface of of Of what they were trying to get to. Yeah. Yeah. For sure. Yeah. I mean, like, for sure, the the I think the the idea or the the problem of all lock and again, because this is just a concept of it's got a name. So I thought that's kinda cool. But that, that whole idea of, and it was AI of those that I was like, oh yeah, I can clearly see again, this is what you define as bad and good obviously differs and so it changes that narrative of it. But if you take AI usage, for instance, we are in a clear race to the bottom type scenario where everyone's spending as much money as possible. I'm talking across like governments and everything else that there's companies, governments buying and spending ridiculous amounts of money to get ridiculous amounts of processing power and more chips and more this. And if you don't do it, like you got, you can't stop because then China's gonna do it or another place is gonna do it. So you have to just go as hard as possible and then you can get into a conversation. Well, what about security or what if you just launch into AGI, general intelligence or super intelligence and something crazy happens and you break something and it's like, yes, but if you don't do it, the other person's gonna do it, so you might as well just continue. And so that race to the bottom or race to succeed, whatever it is in this intended purpose there, that's the the Moloch problem on maybe AI security in that there's going to be unintended consequences to that and nobody wants to stop
[00:16:31] Kyrin Down:
to pause. And you're saying that's in the level of what countries or of companies
[00:16:37] Juan Granados:
that's they? Well, their their decisions to? Well, no, no, I think I think I think you can you can so the concept of AI, let's just say AI, that's the one that I was like, that was an example I was going to bring up where you could use it at the level of like humanity. Humanity is here, like globally, you're talking, like, at the country level, like USA versus trying to trying to develop as much as possible. There's that. There's company. There's company versus company trying to develop or use the technology, and it's a race of the how fast can you bring this in? So you don't get obliterated by whatever all the way down to the individual where you can have that. And this is a, one of the things I started to see with in my own tooling that I do in my day to day life.
I've for years now been trying to use more AI, a lot of things. AI in the sense of summarize this, analyze that, improve this workflow processes, all of those things. And I can see that people would get, see this, this is the dicey thing. You could get into a model problem if you don't, keep a really clear line of sight and talking individual here, line of sight into what the hell it is that you're trying to do. So the example I was going to give was for annual goals in that now we have, you know, month, day, day goes by, you have more improved AI tools and I've played around with a variety of them. And I thought in this particular year coming around, Hey, I'm going to just try and do the annual goals mostly through the AI tool. Here's my entire daily journal notes from the past year. Here's all my lifts. Here's all my training. Here's what I've been doing for the last couple of years of my annual goals. Here's my, like, one to five eminent human being qualities. I'm at level two and level three and level two here. I wanna get to level three, level four, level three here. Cool.
Process all that. Tell me through how I might be able to do that and achieve it through annual goals and whatnot. And what it gave me, look, in part was good, good information, fine. I leverage it. I'm almost having to leverage 20% of it, but I had to shift it somewhat in my mind. But what I saw is that at least as an individual looking at it, it was the if I could see that everyone else was using it, they would get a general trend to this same answer, perhaps because it felt generalized, not too specific to me, even though I'd given it so much information. And so there's when you start getting a bit of a race to the bottom type scenario with the use of AI because is it going to kind of all trend together? And now it's leveraging all the information on the world. And so when I say I'm doing running and I'm training and I'm doing fitness, I did a half marathon last year. Hey, I kinda wanna improve my running and do more running. What's the first thing that it said right after I gave it all this? And I wasn't specific about I wanna do a marathon, but it said, oh, you should do a marathon and and and a half marathon at this time. And so part of it goes like, yes, I get it, but then what's the evolution of that if I continue using this tool or just thinking about it in a very general trend to the other model problem of just society and doing it for like, oh, well, it becomes do an ultra or do it faster or do it again, where the concept of, and I guess is the the bid that I had to, like, play around with for the last month going like, okay. But, yeah, do I wanna do a marathon because I wanna do a marathon or do I want to do a marathon because I'm falling into the Moloch problem or the problem at hand of it's a race to do more, be better, do it faster and you can get really caught into that trap and lose a side of maybe the benefit that I wanted to do in the first phase was just to be healthy and do things versus do it because you're trying to amass a ranking or a popularity or more people to follow you for a certain reason. So I kind of came eyeball to eyeball with AI usage and seeing how it could lead me down that path and then kind of compare it to those other ones around like social status or doing because something because society generally trends as in, oh, yeah, that's that's a thing to do. And I kind of say long story short for that one. So I do I do have Emmanuel Girls marathon and a half marathon, but Hey, I got you. But weirdly enough, I went, yes. It I do wanna do that. But then if I really wanna really, like, thought about it myself, I was like, but I don't care of doing it in a race.
And that's the important thing in that I actually don't give a flying fuck about going and doing it at the Sydney marathon or doing it that. No. No way. But I'm happy to do it on my own just on, like, a random Sunday and plan for it. And it is that achievement of achieving that not for the fact of doing it at a location. And I think that popped me out a little bit from that sort of problem. And it's like, oh, yeah. I don't have to do it and plan it just like everyone else is doing it for twelve weeks in this particular location. I'm not I can't say right at this moment that I've ever had the the passion to do a Sydney marathon and do the London marathon or something to that effect. But the achievement of doing a particular distance of feet ago, that that would be pretty cool. I I could see myself doing that. So, I think the AI piece could lead at the local level. You could see that you just chase down things then for the wrong purposes. And then, yeah, applicate you know, applicating it, the application of it to, like, a global country wide thing.
Sure. I probably have less say or less input into that, but I could see how that become the more like problem that we'd and so much you and me can do about it, but just screams out that's something that we're just careening headfirst and no one's really stopping. I've listened to plenty of podcasts about this, and it's like no one's stopping. Everyone's just going for it as hard as they can. And let's just see what happens at the end of it. Yeah, sure. So I didn't think that much about the applying it at the individual level because I just
[00:22:25] Kyrin Down:
thought that it necessitated a group. I I thought that was the whole point of the Moloch problem. It's meant to apply to a group setting. I guess I'm saying AI safety policy. At the individual level,
[00:22:38] Juan Granados:
it applies in the sense that if I'm doing that at the usage of something in AI as example, if I'm doing that, I can see that there would be a gigantic amount of people who would also similarly be doing it or applying it in that way. And then you start trending towards this, the same commonality or similar commonalities
[00:22:54] Kyrin Down:
where that problem might apply in slightly different ways. But did you do it? Do you think the did you solve the problem yourself and you're not the part of the larger group?
[00:23:04] Juan Granados:
Well, that that I get that's that from like when I was reading with it, it's like the whole concept of calling it a moral problem or turning it into a name for an idea in that particular way is so you can jump out of a Moloch problem in that it probably still exists and is being used and applied in a whole variety of ways by different people, but you can choose not to be participating in that. So I guess in this example, I'd say I think I've I think I've been able to spot it. Have I necessarily been able to step away from it? I don't know if it's in its full entirety I have it on, but at least I've spotted it it to know like, oh, I'm probably going to see this and still continue to use it in this way. So I'm going to spot it more than at least beforehand just being like, cool, that sounds good. Go on. Just do it. Like kind of without even really thinking about it. Sure. Yeah. I don't know if that's a
[00:23:54] Kyrin Down:
like calling it a Moloch problem for the individual. For me, it seems kind of pointless because it's just you'd, how would what would be a better way of describing it is just unintended consequences. You're just not realizing that you're doing something and it's going to have a detrimental effect on what it is that you're trying to do as a whole. Yeah. Well, just lack of lack of foresight, perhaps, or lack of understanding of how the how reality works.
[00:24:26] Juan Granados:
Well, that's, but that's what I mean, I guess the the content that I liked, and while I looked at it, I was like, I liked just the one singular name that you can apply a concept because when you said it was a concept, right? So if you're trying to describe to someone a concept, sometimes, you know, you can alternate it, you shift it, oh, what about this? What about that? If you, at least for yourself, you say, oh, a mollic problem and yeah, like the applicability of everyone, everyone in from a humanity perspective. Yep. You can use it in that particular instance as well for like a Moloch problem. But what what I didn't find it as interesting and I wanted to apply it from an individual level is for some of those ones, what the fuck are we gonna do? Like, as I mean, more light, you know? Are you gonna be doing anything about AI safety with at the global level? Mhmm. Maybe. Maybe some of you do, and maybe there's, pushing from, let's just say, you go on and up. We're gonna vote. It could be one way. Go and protest. But say AI security or AI whatever you want to deem it. Like, I don't think anyone's stopping that. That's just like a moral problem that maybe we can identify and it's like we're all just going headfirst into it. Yeah, I definitely disagree with all of those things
[00:25:33] Kyrin Down:
in terms of I'll I'll get onto that shortly. The is just like, so for example, let's say I want to clean my teeth, I want I want my teeth to look better. So I start brushing them and I over brush them and I start wearing away all the enamel and my teeth end up worse. Yep. Even though I might get like a slight boost initially, because I'm brushing them brushing them so good. Now all nice and clean. And that was just an unintended consequence. I didn't think about it. I didn't know about it. Perhaps you wouldn't call that a Moloch problem. Right? You just go, that's just a problem. But then could I then say, Oh, this is a Moloch problem. Because the reason I want my teeth to look good, is I'm in the dating scene. I'm in the world with all of everyone else.
My teeth looking better will give me a competitive advantage. So therefore, that's the reason why I'm doing this behavior. And then it has this unintended effect. And no, that's it wouldn't be a model problem. Okay, why not? Because you'd have to have everyone trending towards doing that because of that reason. Okay, but so then like the kids one, for example, is,
[00:26:45] Juan Granados:
is that a everyone's doing that with their kids? Yes. If you're a parent, if you're a parent comment in it because I'm telling you right now, the amount of, so conversations with kids, at least, at least around the ones that I've had, there is an astounding amount of times that the conversation around like how are they going with their talking, how are they going with this, whatever. There will always be a tug that comes up around, oh, oh, are they doing soccer? Are they doing something too? Are they are they playing, football now? Are they doing this one? Oh, my my kids are in this. Oh, but my kids all happens all like every I reckon every conversation I've had with every kid, like parent, that's happened. If it doesn't, like if you don't experience that. Yeah. Tell me, tell me. Is it just one group? But there's a lot of there's like heaps of blogs, there's Reddit articles about it. This happened. There's whole TV shows done for this particular
[00:27:31] Kyrin Down:
show. So it happens a lot of the time. Okay. Yeah. Could could be could be different on that regard. All right. My section takes a little bit of time, so I recommend might do the boost. Okay. And then get in into into that. So the comments that are on there just have said anything. Yeah, sure. So so see Cole McCormack coming in. Thanks, Cole. He was just letting us know that the audio on YouTube is better than the podcasting apps.
[00:27:56] Juan Granados:
Could be.
[00:27:57] Kyrin Down:
What are they doing? No, it's probably it could be my fault. Just depending on the setting that I chose, I might have chose the multichannel instead of the podcast stereo option. It should be the same, but obviously it doesn't. So thanks, Cole, for letting me know. I will try and fix it up. I see Patricia also joining us in on the chat. Thank you, everyone. We are live here on nine a. M. Eastern Standard Time on Sundays. This should be a pretty rock solid time for at least the next six months to year type period. And yeah, a this value for value podcast. We just ask that you send it some port. There's a section where we highlight financial support. Did we get any this week? Sad puppy. Sad puppy. Sad puppy. I did no streams either from at least on the.
From True Fans. I'm sure other people were streaming it. We don't see those appearing on our Discord. Here's a question. Would you like to see our Discord, we've just made them hidden these channels. But I could make them available and just non publishable as in you can see boostograms coming in pre hand. So let's say someone was joining our discord and wanted to just see, oh, did they get any boostograms this week? I could make that available so that they could see it but not post in there. Yeah, I can I can only answer from my so like, I wanna be if someone can comment if they would like that, that'd be good to see? Yeah. Sure. All the details. Yep. Cool. Alright. Thank you. Thank you, everyone. And for joining in and letting us know things like this. Very, very helpful and important. So I had some more thoughts just on the the kind of variation that I had. So, in the in the blog and in the poem, Moloch was this entity, I guess you'd call it like an ephemeral fog that blinds us to, our problems or even that's probably not the right metaphor because it's we can see the detrimental effects, but we can't do anything about it. So I don't know what you'd call that maybe a femoral wasp.
It's coming it's coming towards you. It's femoral, you can't like bat it away or anything. But you know, you know, it's coming to get you. Which chains us to suboptimal outcomes that make us behave irrationally. If you take it as a large whole. It's rather poetic, creative, and intriguing. It captured you. It's like the Moloch problem. Oh, what is this? It sounds good. It's like a nice name for it. And the, you know, I think that's all reflected in the origination of the poem and then subsequent blog, which then put this Moloch problem as a as a as a concept out there in the world or, you know, that's, that's kind of the origination point of where it came from. And I think the problem with that is it's it's not effective philosophy. It's an effective philosophy. If you read that blog, that is an effective philosophy at its at its core, because it lacks this grounding, and it almost always goes to theoretical. So for example, in the blog, he talks about a dollar auction. Have you heard of this before? The dollar auction? Yeah. And it's essentially an auction for a dollar. You can you can buy a dollar and there's only two rules to it. The bill goes to the winner of the auction, But second place loses their bid as well. So let's say it's just the two of us in a room, there's $1 over here, I bid first I bid 10¢, because like, oh, I bid 10¢ and win $1 It does good happen.
You then go, Alright, well, I'm gonna bid 20¢. And if that happens, I'm losing my 10¢, and you win 20. So obviously, we're gonna go right up close to $1 Right? Yep. But then I get to 99¢ 99.99, whatever. And there's no higher to go without it being a stupid decision on your part, because your next option is to go like $1 or $1.01. Yeah, more than that was. Yeah. So it makes no sense for you. But you're losing 98¢ because that was your your second highest bid. So rationally, you should go above it just to make sure you win the option. And not to lose the 98¢. Yeah. And so now you go $1.01 But it's like, yeah, I'm losing money, but I'm only going to lose 1¢ now instead of losing 99.
Well, then I of course, go. And so the theoretically, there's no end to this. There's, we're just going to keep outbidding each other. And it's, you know, it's the rational thing to do. But it's also irrational, because now we're losing money, instead of the whole point of the auction was to actually make some money. So that was like an example he uses in this in this blog of the, the Moloch problem. And I find this much like Zeno's Achilles and the Tortoise paradox, which is, you know, a philosophical problem. He said, you know, if there's Achilles is raising the tortoise or the the tortoise and the hare from Aesop's fables, The tortoise is ahead or sorry that the hare is ahead but the tortoise you know catches up or vice versa I can't remember which way the the tortoise is always going to move a slightly little bit time and the hare will catch up even closer in that time period.
But then in that time period, it catches up, the tortoise is going to move a tiny little bit more forward. And even though the hare can catch up even closer, it's going to get to the point where every little segment of time the hare is unable to catch up because it's always got to move that extra little bit bit of distance that the tortoise has moved. And you go, Oh my God, this is like groundbreaking. This changes the fabric of reality. Tortoises are always going to be faster than hares. But then if you just think logically, and same with this dollar auction, if we were to actually do it right now, are we going to bid each other to infinity? Is the hare never going to be able to catch up to the tortoise?
No, that's not that's not how reality works. There's actual things which make rational sense as a problem.
[00:34:16] Juan Granados:
And that also rationally in theory, in theory,
[00:34:20] Kyrin Down:
the the ideas, like, stay. Yeah. But in reality, it it doesn't work. Correct. And that is what I've kind of found with most of the topics it was talking about. Problems of AI safety, politics, climate change, capitalism critiques, which were largely what he talks about in the book and which I figured was the the Moloch problems, which is individuals doing competitive things and it ending up deleterious for everyone as a whole. There's instances where you can point out at it, but it always seems to go too far in these theoretical assumptions.
And then they they become retarded, they become so stupid with the implications of what they're saying. And you could then say, okay, maybe something like, alright, but what about, you know, does this thing all theoretical things are stupid? And Einstein, for example, let's take his theory of relativity, he was claiming things which were absurd, impossible to believe, you know, humans go in the speed of light, we can't do that, or what happens at the speed of light, etc, etc. And you could then say, Alright, well, all of that is wasted. And this is getting a little bit beyond Moloch. And as becoming like, what is effective philosophy in a in a in a sense.
And the difference between I find, say someone creating things like the dollar auction, and Einstein was, he wasn't making claims about human behavior. For him, it was all this is what could happen at the speed of light. Like if this is what is happening here, gravity is going to bend or spacetime is going to bend and, you know, light will bend around a star or a black hole. And it will be changed by that. Whereas, you know, we all thought just light was straight photon rays coming and hitting us. And this was where I was just going, okay, like, if you're making claims upon human behavior, they almost always need to be tested. Because, and as a real life example, because you can make all of these things that sound nice, sounds like a real bad problem. Shit, we're gonna kill ourselves to death with climate change. We're gonna, you know, nuke ourselves into existence.
But we haven't we didn't do it. And you can see it really obviously with the atomic bomb example, which is well, no, we we actually got kind of close to the brink. And then game theory broke down in a sense, and we ended up just dismantling all that bombs. They didn't squeal on each other. They didn't bomb each other, The US and, and the Soviet Union. So this is where I go to like things like the problem.
[00:37:10] Juan Granados:
Oh, this doesn't doesn't make sense in the real world. Well, see, I think I think the it still does. But the problem lies in that it's the assumption that all Moloch problems or all problems like that have a bad result to it. So it's like, again, the definition of it is when you're chasing after something with the unintended consequence of something bad happening. And so, yes, there's going to have to be a bad thing that happens, that comes from it. But that's not to say that good things don't come out of it at the same time. So the AI piece is like, again, anyone can challenge me on this. If you think someone in the individual right now can stop how we're progressing in AI, someone please tell me how you're going to do it because I don't think you will. There is no one on a on a on a developing AI to destroy all other AI. There is just no there is no feasible way I can ever see that we're not just going race as much power usage as possible to create these and continue on our merry way with it. I just can't see how that stops, but it could in itself be it is like which is a euphemism or like an an example of technology, humans. Yep. Developing more technology, more technology. Better, better, better, greater, greater, bigger, bigger, bigger. And yes, it will have unintended bad consequences.
We could come up and dream up with many of them. So in the distinction of a Moloch problem, it is a Moloch problem. It's you're doing something, you're chasing whatever you kind of know is going to give you unintended consequences, but fuck it, we've got to do it because otherwise someone's going to do it. However, it does have some good consequences that come out of it as well, right? So, what's the example saying, Sam Altman said it's like an idea. Again, there's like a universal basic income UBI perspective is, you know, imagine you do it's AI becomes a superintelligence that's doing basically all the work and then you can democratize it into sort of tokens utilization. And so whatever program uses, there's, like, 12,000,000,000,000 tokens and every person in the world gets, you know, an even, share amount of the other tokens. And then you can choose to utilize those tokens that turn into, you know, economical power usage to however you want it. And that's basically everyone got the same amount of money. And if you want to go on, apply them or group them or create things and that's your own choosing. So you can think of these other unintended things would be maybe good that you could see it being good depending obviously on what you see as good. The other problem as well is, and you hit it smack on is, yes, I like the Moloch problem as an idea. It's but it's a human concept. Again, a fucking animal doesn't give a flying fuck about this concept. It is a human concept like a lot of things from humanity is. And the problem that I see it in effective philosophy is when you are talking about the fabric of reality, physics, and if you get all the way down to, have you had the concept or the idea like at the very, very, maybe at the very base layer, it's just information, like that is everything. So at the base of physics and maths, it's just information. You know, think about it that way, whatever.
The concept of humanity don't really apply there because it is just concepts as a human that we leverage. Again, a donkey does not care about a model problem, doesn't understand a model problem, but things probably occur that all donkeys do that have unintended consequences. Probably. But they don't think about it that way. This is not a concept that applies. So when it applies to and that's why I was like, as a as me, even we're talking about being models, I go, there is a power to seeing where playing games is gonna have unintended consequences and choosing when sometimes whatever activity is worth doing a Moloch problem. So kids' activities, and then so I go back to this. In kids' activities, it's not worth it for me to participate in that problem or the unintended consequences slash potential results that I might get. There's, a book I wrote read ages ago about, kinda like the the the falsity of the head start where you you think, well, if my kid's doing this plus that plus this or just doing tennis from, you know, three years old or doing soccer from four years old and they get a really head start, they'll have to be like the best. And the reality is there are some outliers absolutely that your repetition and mastery do become really good, but on the average, actually, that's not what the data says. Now this book was, I don't know, twenty twenties, early twenty twenties.
And so perhaps, maybe there's some new data around it, but for the most part, on average, that's not correct. In fact, the the head start is more so a myth apart from a certain few people who really do well and then they get called up. Mhmm. Tiger Woods, the William Sisters, all these individuals, people who've just taught the kid chess. So that doesn't really apply when you look at it at the data specifically. And I go, the unintended consequence of forcing someone, let's say kid, to do something that they don't like. Take for instance, us with fucking reading when we're in school. Right? If there was, like, this race and, like, well, my kid's gonna read 10 books and my kid's gonna read 12 books and 15 books and whatnot, that means something consequence might have been like, this kid's gonna be like, fuck you. I'm not reading books anymore. I hate this. Like, I I don't like this.
The what you want the outcome to be, perhaps this book, that your kid likes to read or participate or picks up his own skills, you can do that without jumping into this problem. So I liked it for that concept alone. But I I I did see that the theorizing it and then calling it, like, well, it's just bad was dumb because in this other what one of the ones I called out earlier around the doing things for the likes, the in the posts and the comments. And, again, that is a Moloch problem in that everyone, most people for the most part do it. There's a select few humans out there who just by default will not do it. But for the most part, if you are creating content, if you're posting things, if you're sharing things with the world, in part, you probably want people to engage with it in some way, shape, or form. Otherwise, if you're just posting it for your own goodness, I guess, maybe there's some some humans out there. But for the most part, you want someone to engage with it. You want if you want one comment, you may want two comments. If you want four comments, you want eight comments. And so if everyone's doing that, yes, we know that there's unintended consequences around that. There's displays of social media being bad for kids because then you're chasing likes and comments as opposed to the reality.
But it also comes with some good consequences as well. If, you know, you do something successfully and it becomes, you know, a profession or a passion or you make money from it, it has the right effects to it. So, my, like, underlying thing from all of this is I like the naming for knowing it, but the effective philosophy is not fucking just going like, oh, I've got to jump out of it completely and not do it. It's being aware. Like, I think it's more just being aware of what a problem like that exists, humanity wide or to an individual, and then realizing what the fuck it is that you actually want the outcome to be. Because sometimes then you will do it. Right? From our perspective, you know, if we have 5,000 subscribers or 10,000 subscribers or a million subscribers, is it getting us to the outcome that we want even if it's gonna have unintended consequence? What's gonna be unintended consequence? We're probably gonna get people talking shit about us, about all host of things. Right? And it can the more some of our clips go, you know, viral and more people think, you get all these comments like, what's fucking dummy, dude? Like, what is whatever?
That's the only turning consequence. I'm happy to have that if it means more individuals engaging with our stuff that then do get a positive result from it. So that's a more like problem that I choose to go and participate in. If everyone's doing that, because it got some good consequence and maybe it's helping some people out. So more so I kind of like landed in effective philosophy is be aware of them. Be aware when you're applying them and if the potential consequences, hopefully they're not too risky and judging that and going, okay, well then do I choose to participate in that and do it fully under the encounter of that, I think is fine without this like, I didn't read into that too deeply, but onto this, like, chaotic, like, fuck. If you play it, that's it. We're all gonna get, like, blow up and and whatnot. Because, again, it goes to the same point of, like, nuclear weapons where because US had some this other country had to have some and so this other country has to have some. But look, let us get to a point where it's like, okay, actually pause here, everybody, because we've now reached the point where we're just going to have mutually assured destruction if anyone launches something. We're spending all our resources on atomic weapons and not on, you know, our economies and Yeah. So like, humanity does not act in the theory of the world. We act in the reality of us as a species. And so that that concept, I I kinda landed with it. Cool to know, to be aware and apply where apply the thought process to it so that you don't go down the path of doing activities that are being done by everyone else and suffering unintended consequences that you're not willing to live with.
Some you will and then fuck it. Good. Yeah.
[00:46:05] Kyrin Down:
I find that the general grouping of this, these problems and even just topics of discussion tend to have the it's kind of like a meta understanding or assumption that either humans are almost always all good. And but as a species, like humans, as the individuals are good, that as a species were bad. And so that seems to be kind of the trend I especially reading that blog I noticed and even just some of the associated things that are going around this. So like AI, for example, no, I don't think there's anyone calling out any individual of like, you're bad for using AI.
Sure, there is someone somewhere always doing something. But as a large sweep, it's not, you know, this person because you used
[00:46:58] Juan Granados:
ChatGPT, you're a bad person now. But because we all use ChatGPT, we're bad as a as a whole. Well, and I challenge that because we heard it from from our good friend, Joseph in the in the comedian arts world. Yeah. How they were talking about how you're using AI to do your thing in your images? Bad human, bad person. You know, they told us that. Now, I don't know if this is like everyone saying that in the comedic artist world, but I could see that. I could see I could see that where people are like, no, fuck you for using this. But if you're, you know, making people laugh, then again, it's an unintended consequence, but willing to participate because your end outcome was to make people laugh and to have a good time. So it's like, yes, but I'm going to participate in it because even unintended consequence is not too bad for the fact that what I'm getting is what I wanted to in the first place. Okay, sure. Yeah. Yeah, that's that's a good example. I don't it might even highlight the
[00:47:54] Kyrin Down:
opposite of what I was going to say, which is the and perhaps I'm more aligned with them than I thought because I, for example, believe that there's actually more like, bad individual humans. And there's tons of behaviors, which are terrible. But humanity as a good as a whole, the terms good and evil or bad, are tricky when they are applying it to these concepts like this. But I tend to incline more inclined more to perhaps the optimistic side of things. So for example, you'll hear, okay, but now we can prove humanity's bad as a whole or people as as bad a whole with like, Stanford prison experiments, the Stanley Milgram shock experiments where this was showing, oh, if you have, you know, a human overlooking another human, you can get them to shock them to death or like the Stanford prison was like they treated these their captors, treated the prisoners terribly, even after only seven days, they had to shut down the experiment because it was so bad. And it's just like, okay, but if you actually look into them, those experiments weren't unbiased in their setup and that there was all sorts of things which were made them incentives for people to do the bad thing in a non normal way, I guess, like in a non normal in a non realistic sense. Correct. Yeah. So it's hard because there's undoubtedly times where there's kind of Moloch problem type things. Look at the Soviet Union, the gulags.
There are tons of things where it's like, yeah, you had to compete in being more psychopathic to prove your loyalty to the Soviet Union, and to do this, you create this whole fucked-up system, put tons of people in gulags, kill millions of people, etc, etc, etc. So there are things like that where I go, alright, yeah, you know what, that perhaps is a good example of a Moloch problem that went terribly. Was it sustainable over time, and did it end up killing all of humanity? No. But they killed a lot of people, terribly, for sure. So there are ones like that where I go, okay, that's a good example of, like, yeah, humanity as a whole is bad. But as a general rule, I think this is where a Steven Pinker type book like Enlightenment Now just shows that over time there are fewer wars, fewer rapes, fewer deaths in childbirth of either the child or the mother; mortality rates continue to decline, so people are getting older, in general.
Things like happiness and stuff like that are hard to measure; you could argue differently on that. So I don't know. How do you solve a Moloch problem?
[00:50:57] Juan Granados:
That's why I was saying: you don't solve it.
[00:51:00] Kyrin Down:
Yeah, and I actually even question the validity of it being a legit, actual problem.
[00:51:09] Juan Granados:
I would say it is a legit, actual problem. It's legit in the sense that it's an activity that you do with unintended consequences, that everyone's doing, and you'll land at those unintended consequences. But where I balk at the idea is that it's this catastrophic level of unintended consequences, such that you can't play them when you choose to play them. It's much more: be aware of a Moloch problem, like, oh, it's Moloch, that's cool, it's a name, as opposed to spelling out this full sentence of what it is. And then just be aware of whether you're playing it or not, for the right reasons. Because if you're up for the activities and the unintended consequences, like, whatever, fuck it, do it.
If it's a really terrible unintended consequence, then, okay, maybe you actually realize: potentially I won't do this. Now, in real-world practice, I don't know when this would really occur, but it's like, you know, you can win a million dollars, but an unintended consequence of whatever you're doing is that you might die. I would go, okay, well, I'm not doing that. That's definitely a game I do not want to play. Right. But if the risk was lesser, so an unintended consequence might be, like, oh, you don't get to talk to your seventeenth best friend, I might be like, yeah, okay. Let's do it. Fucking bitch.
My seventeenth best friend is fucking boring anyway. So you know what I mean? It very much differs on what that consequence is. Whoever you are, if you're a philosopher, you're probably hating us. But you know what I actually landed on, now that I'm talking about this (maybe I thought about it prior)? One of the reasons I guess I'm doing much more effective philosophy conversations and stuff is that I like philosophy. But the more and more I think about it, I'm like, a lot of philosophy sucks. A lot of philosophy is shit, in that it might be true in the most real sense, in these theoretical things; I'm not saying it's wrong in that sense. But when it comes to not just applicability but humanity at large, like you, an individual walking around: fucking shit, what kind of application is this? In what world would this have to exist? It would have to exist in a very, very specific one, because even in any of this game theory we're just talking about, you could change such little tiny things of reality, like, yeah, but what if this happened, or that happened, and you're angry, or you're hungry, or whatever? It just changes fucking dramatically.
It is a hyperobject, people. It's too complex. And so I kind of go: just a load of shit. The more I think about it, the more I reckon, for the next few years at least on the podcast and more, it's effective philosophy stuff. So when I see shit like that, I'm calling it out and being like, nah, this is bullshit unless it applies. And for me, it applied in, like, oh, I can see where using AI leads me personally. I could see myself going off the beaten path into generality. I don't really want that, but I'm aware of it, and I'll leverage it in a specific way. So, good. That's my take. You're almost
[00:54:03] Kyrin Down:
we're very much on the same page. I've read so many philosophy books now, and do you know what my favorite one of all time that I've read was? Batman and Philosophy. Batman and Philosophy was great because it highlighted the, I guess, larger concepts that you'll sometimes face, the ones that are kind of intriguing or interesting. Or, like, the trolley problem, for example: things like this where you can provide examples which go kind of back and forth against your intuition, like, oh, okay, yeah, I would flip the switch; oh, wait, no, I wouldn't push a fat man off a bridge to save the five people. So, therefore, what was the difference between the two? Doing those kinds of highlighting experiments and then showcasing how Batman did it, I found super useful.
All of the other books talking about, you know, just these general things related to time, related to AI safety and stuff like this: I will basically dismiss offhand a general discussion of capitalism or of AI safety. But if you want to talk about a niche idea of, oh yeah, there's a gun attachment that utilizes AI somehow, or attaching it to drones so it can distinguish between, you know, the dark-skinned people and the light-skinned people fighting in Afghanistan, I would be happy to talk about that. But even those ones, if you're specific about
[00:55:37] Juan Granados:
the chips being made in Taiwan and the race to the bottom resulting in a war in Taiwan over the manufacturing of these chips, then, like, it kind of makes sense. Well, I'll be able to actually answer that shortly, because I'm reading a book called Chip War at the moment, which is going over the whole
[00:55:51] Kyrin Down:
aspect of it. What year was that? It's relatively recent. I think it was, like, four
[00:55:57] Juan Granados:
or five years ago. Yep. Yeah. So right now there are some companies building them outside of Taiwan, attempting to do certain things like that to avoid it, but a shit ton is still being manufactured there. A lot of limits are trying to be imposed on China, but they just get around it, right? Again, this is more the politics piece, but if you want to talk about that from a race-to-the-bottom angle and the problems, okay, there's some reality there. But again, I'm not saying all philosophies are shit. I'm just saying the ones that are not grounded in reality, the niche ones, are the hard part. Take stoicism: stoicism, I would say, is largely grounded in reality, because a lot of individuals lived through that life. Some of it might be a little bit airy-fairy, but a lot of it is grounded, especially the stuff that remains today, where they give examples. Buddhism, on the other hand, has a lot more non-grounded stuff, weirdly enough, but actually applies really well to everyday life. There are fewer examples, at least that I've seen, that are like, oh, this human did this very thing; it's very story-driven and everything else.
Fuck, it's really applicable. To the point where I remember, maybe two months ago, looking into a couple of Buddhist things and going, man, I could see myself being a Buddhist. It's actually really, really related to the things that work best. The best Buddhist books I've read actually have
[00:57:18] Kyrin Down:
specific examples of: here was a problem I was dealing with, and this is a Buddhist concept, or Taoist concept, or Eastern philosophy in general, that helped me solve this problem in my own mind or approach it differently compared to how I was approaching it. If you read something like Nietzsche, Heidegger, Kant... I mean, Kant gets a lot of hate... Kierkegaard... I'm going to chuck this out: we're talking about popular clips. One of our most popular on the book reviews channel at the moment is, like, there's only three good Nietzsche books. And once again, it's not even me saying it. It's Dave Jones.
[00:57:58] Juan Granados:
That one gets so many comments talking about,
[00:58:02] Kyrin Down:
actually, maybe that's not on the book reviews channel, because it was a Mere Mortals conversation, so it's probably on our main channel. And yeah, the amount of criticism thrown Dave's way is off chops. Dave, you're in trouble, man. The internet doesn't like you. And I'm going to say it, I'm going to back them up: Nietzsche is shit. I'm gonna clip that for sure. Reading Beyond Good and Evil: shit. Reading fucking Thus Spoke Zarathustra: shit. Will I try any of his others? Probably, because I'm gonna need it, and I keep getting drawn back to these things. But it's shit.
[00:58:42] Juan Granados:
That's enough of a summary, I think. That's it, Mere Mortalites. Again, we're live on Sundays, 9 AM, if you want to join us live; of course, feel free to do so. And if you want to support us, feel free to do it in various ways. You can comment, you can like, you can join live, you can share. Yes. You can join our Discord.
[00:59:01] Kyrin Down:
Send us a boostagram: meremortalspodcasts.com/support. Correct. That's where you can learn more about how to do things like that. And yeah, we very much appreciate it; we'll shout you out. We do have a leaderboard that we keep in contact with as well. And yeah, we've still got the offer of,
[00:59:19] Juan Granados:
if you send in 100,000 sats, we will send you a shirt. That's right, a 200 Australian dollar shirt by this point. Just getting more expensive, people. That's what happens. Alright, Mere Mortals, I will leave you there. Be well wherever you are in the world. Thank you very much. Juan out. Kyrin out. Bye. Bye.
[00:59:34] Kyrin Down:
Bye. Bye.