A short episode that covers the recent/ongoing AWS-caused outage. Also a discussion of a WSJ article about AI Doom and its fans. Plus, where did all the chemtrails go?
[00:00:47]
Unknown:
Hello, everybody. Welcome to episode 21 of the No Pill Podcast, being recorded here on the evening of 10/21/2025. And if you are listening in the not too distant future, you might remember a little bit of an Internet outage going on. It's being called an Internet outage, but there's not really anything wrong with the Internet itself, more so an Amazon Web Services DNS server in one particular location. But as it turns out, that can cause quite a few issues all over the place. So we'll talk about that tonight, and talk about some other things. Just a short episode, but I did want to try and get back in the habit of doing these more regularly.
So we've got Toby Rogers, who we talked about a lot on the last episode, and he posted something about "AI Doom? No Problem" from The Wall Street Journal. We don't have a ton of stuff to get through, so I'll go ahead and read most of this. We've talked about some of this in the past, but, anyway, this comes from The Wall Street Journal. It says: AI doom? No problem. Governments and experts are worried that a superintelligent AI could destroy humanity. For the cheerful apocalyptics in Silicon Valley, that would not be a bad thing. At a birthday party for Elon Musk in Northern California wine country, late at night after cocktails, he and longtime friend Larry Page fell into an argument about the safety of artificial intelligence.
There was nothing obvious to be concerned about at the time. It was 2015, seven years before the release of ChatGPT. State-of-the-art AI models, playing games and recognizing dogs and cats, weren't much of a threat to humankind, but Musk was worried. Page, then CEO of Google parent company Alphabet, pushed back. MIT professor Max Tegmark, a guest at the party, recounted in his 2017 book Life 3.0 that Page made a passionate argument for the idea that digital life is the natural and desirable next step in, quote, cosmic evolution.
Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win. That, Musk responded, would be a formula for the doom of humanity. For the sin of placing humans over silicon-based life forms, Page denigrated Musk as a speciesist, someone who assumes the moral superiority of his own species. Musk happily accepted the label. Page did not respond to requests for comment. As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world, but it includes influential believers.
Call them the cheerful apocalyptics. I first encountered such views a couple years ago through my X feed when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who, in March, received the Turing Award, the highest award in computer science. Sutton wrote, the argument for fear of AI appears to be: number one, AI scientists are trying to make entities that are smarter than current people. Number two, if these entities are smarter than people, then they may become powerful. Number three, that would be really bad, something greatly to be feared, an existential risk. The first two steps are clearly true, but the last one is not. Why shouldn't those who are smartest become powerful?
This, for me, says the author, was something new. I was used to thinking of AI leaders and researchers in terms of two camps: on one hand, optimists who believe it's no problem to align AI models with human interests, and on the other, doomers who wanted to call a time-out before wayward superintelligent AIs just exterminate us. Now, here is this third type of person asking, what's the big deal anyway? A survey in the field of AI research asked AI researchers for their estimates of p(doom), what probability they placed on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species.
Almost half the 1,300 respondents to the question gave a probability of 10% or higher. The average was 16%, or around one chance in six. Russian roulette odds. These figures are in line with off-the-cuff estimates from Musk, Anthropic CEO Dario Amodei, and Yoshua Bengio, a key contributor to the foundations of modern AI. I selfishly prefer having you put humans at the apex, since I'm a human myself on my good days. And as an aside, this is a very conversational article from The Wall Street Journal. I don't know, I don't read a lot of long-form Wall Street Journal content, but it seems very informal.
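As a quick aside of my own (this sketch is not from the article), the "Russian roulette odds" comparison is easy to verify: a 16% average p(doom) really does sit right next to one chance in six.

```python
# My own illustration, not from the article: check that a 16% average
# p(doom) is close to the 1-in-6 odds of a six-chamber revolver.
russian_roulette = 1 / 6     # one bullet, six chambers
survey_average = 0.16        # average p(doom) cited above

print(f"1 in 6 = {russian_roulette:.1%}")                        # about 16.7%
print(f"gap from 16%: {abs(russian_roulette - survey_average):.3f}")
```

So the gap between the survey average and the revolver odds is well under one percentage point.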
Alright. Back to it. So I wanted to learn more about why people should learn to accept AI doom. Sutton told me AIs are different from other human inventions and that they're analogous to children. When you have a child, Sutton said, would you want a button that, if they do the wrong thing, you can turn them off? That's kinda discipline there. But, anyway, that's much of the discussion about AI. It's just assumed we want to be able to control them. But suppose a time came when they didn't like having humans around. If the AIs decided to wipe out humanity, would he be at peace with that? I don't think there's anything sacred about human DNA, Sutton said. There are many species, most of them go extinct eventually, and we are the most interesting part of the universe right now, but there might come a time when we're no longer the most interesting part. I can imagine that. And when that day comes, goodbye, homo sapiens.
If it was really true we were holding (and the article has, like, the evolutionary skeleton image here, imaginary of course: you've got the monkey, then the sort of Cro-Magnon skeleton, then the human skeleton, and now it's got the robot) if it was really true we were holding the universe back from being the best universe that it could, I think it would be okay. Okay, that is, for the AIs to rid the universe of us one way or another. I wondered how common this idea is among AI people. I caught up with Jaron Lanier, a polymath musician, computer scientist, and pioneer of virtual reality. In an essay in The New Yorker in March, he mentioned in passing he had been hearing a crazy idea at AI conferences, that people who have children become excessively committed to the human species.
It's true, in fact, he told me. In his experience, such sentiments were staples of conversation among AI researchers at dinners, parties, and any place else they might get together. Lanier is a senior interdisciplinary researcher at Microsoft, but does not speak for the company. There's a feeling that people can't be trusted on this topic because they're infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way. We should get out of the way, that is, because it's unjust to favor humans and because consciousness and the universe will be superior if AIs supplant us. The number of people who hold that belief is small, Lanier said, but they happen to be positioned in stations of great influence. So it's not something we can ignore.
The closest thing to a founding document for the cheerful apocalyptics is Mind Children, a 1988 book by Carnegie Mellon roboticist Hans Moravec. The title expresses the idea that intelligent robots would, in concept, be our children, and in what he regarded as a happy outcome, would eventually replace us. Moravec, who had a self-described obsession with artificial life, viewed human minds as simply a collection of data. He envisioned that in some cases, a robot's mind would simply be a digital copy of a biological person's mind, achieved through a process of uploading that he called transmigration.
These ideas were later elaborated by the technologist Ray Kurzweil and the science fiction writer Vernor Vinge. Kurzweil added a touch of romance to the story, predicting that posthuman nanobots, unhindered by human chauvinism, would spread across star systems. Exactly how this extinction of humanity would come about is radically unknowable, say the cheerful apocalyptics. Once AIs are able to apply their intelligence to designing their next generations, their capabilities will skyrocket, leaving humans as the equivalent of mollusks in comparison.
I.J. Good, a former Bletchley Park codebreaker turned researcher, foresaw the scenario in the nineteen sixties, calling it an intelligence explosion. At that point, humanity would be powerless against the wishes of AIs, which would have their own goals, whether hostile to us or simply wanting to use our resources toward some other priority. You may be thinking to yourself, if killing someone is bad and mass murder is very bad, then the extinction of humanity must be very, very bad. Right? What this fails to understand, according to the cheerful apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes. Much as Darwin needed a popularizer, Thomas Huxley, known as Darwin's Bulldog, for his ideas to reach a wider discourse, the cheerful apocalyptics have their popularizer in Daniel Faggella.
He's an AI autodidact who uses his podcast, blog, and conferences to promote the idea of bringing about a worthy successor to humankind. The eternal locus of all moral value and volition until the heat death of the universe will not be effing opposable thumbs, he told me. I'm not sure opposable thumbs are steering the ship in, like, twenty years. What Faggella has in common with some advocates of restrictions on AI is that while he's okay with AI replacing humans, he doesn't want it to happen too quickly. Policymakers should try to stave it off until AIs are worthy, that is, until they can carry the torch of consciousness. He doesn't want humans to be succeeded by the mindless equivalent of vacuum cleaners.
That doesn't mean worthy AIs will be concerned about humans; even the hoped-for worthy successor is unlikely to care enough about us to keep us around indefinitely, if at all. Purely anthropocentric moral aspirations, he summed up, are untenable. I'm not so sure. While the cheerful apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are discernible. The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome in the original sense of inspiring awe, they view it as a slow, fragile vessel ripe for obsolescence.
The late MIT professor Joseph Weizenbaum, a pioneer AI researcher in the nineteen sixties who created the first known chatbot, became a fierce critic of much AI research. He summed up Moravec's attitude bluntly: he despises the body. The cheerful apocalyptics' larger judgment is a version of the age-old maxim that might makes right, but this time with higher intelligence as the supposed trump card. That is, it confers a superior claim to existence. Faggella, in an essay titled Rightful Misanthropy, asked the rhetorical question: why maintain a species of biological husks, that is, humans, when vastly superior intelligences can be cultivated?
One possible response is the Judeo-Christian idea that humankind was uniquely created in God's image. Of course, the cheerful apocalyptics would see any such spiritual belief as inadmissible. But their view of intelligence alone as conferring rightful supremacy is itself a spiritual belief (excellent observation there, author) that needs to be defended or rejected. What does it imply for the moral rights of less intelligent humans versus smarter ones? What does it mean for theories of justice that are founded on the equal moral worth of persons? Yeah. It's basically a slight tweak on eugenics, just taking it posthuman there. Alright. Back to the article. The whole school of thought can sometimes feel like the ultimate revenge fantasy of disaffected smart kids, for whom the triumph of their AI proxies amounts to sweet victory over lesser mortals.
Lanier suggested to me that people in elite AI circles seemingly embrace the ideas of the cheerful apocalyptics because they grew up identifying with the non-biological villains in science fiction movies, such as those of the Terminator and Matrix franchises. Even if the AIs in those movies are kind of evil, they're superior, and from their perspective, people are just a nuisance to be gotten rid of. Weizenbaum recognized this problem early on, denouncing the idea that the machine becomes the measure of the human being. In 1998, he told an interviewer, I believe the essential common ground between National Socialism and the ideas of Hans Moravec lies in the degradation of the human and the fantasy of a perfect new man that must be created at all costs.
At the end of this perfection, however, man is no longer there. Like some other radical doctrines, those of the cheerful apocalyptics amount to a closed system. If you resist belief, your views can be dismissed: either you're infected with the pro-human mind virus or you're biased by human arrogance. Fortunately for humankind, our biases in favor of our species would indeed be a powerful barrier to the acceptance of human extinction, provided that its proponents proclaim them in the open and not just at parties and salons behind laboratory doors. Do we really want more of what we have now? Moravec once asked. More millennia of the same old human soap opera?
I, for one, say yes. And David A. Price is the one who wrote that. It's an interesting article, and it makes some good points. I liked his observation that this whole cheerful apocalyptics AI worldview is very much a religious belief. Right? You're making a moral choice. You can't just say, oh, we won't factor in whether humans are created by God or not, or whether there's anything innately special about humankind as opposed to other creatures or robots. You've gotta make your argument. You can't just ignore the argument. Right? But, interesting article. I'm sure those of you that listen to this podcast know where I stand on that.
But the reason I thought that was interesting today is that, you know, all the AI stuff is so fragile. Right? And this is not the first big issue. And it doesn't have to be as big as an AWS, Amazon Web Services, issue. You know, what are the other ones? We've had Facebook, and we've had other giant monolithic corporations where all of a sudden stuff doesn't work. Almost to the extent where it seems like, you know, they take turns kinda testing stuff out, like there might be something bigger on the horizon.
And as an aside, that is how you would, quote, take down the Internet. You can't take down the Internet as in the actual infrastructure, you know, the light running through glass fibers all over the world. You could take that out in specific locations, but you can't universally take that out. If you took out enough undersea cables, that could certainly do some damage. But in general, the physical infrastructure is not the most vulnerable part. So I threw a cartoon in there. I did a search on Substack for the AWS outage, and one of the first things that popped up is a little stick-figure comic that says: the Internet in 1969.
Let's create a distributed network so it can survive a nuclear winter. The Internet in 2021, or as we found out this week, 2025: let's host half of it on one company and see how it goes. Right? So we've got kind of this pie-in-the-sky AI. It's gonna take over. Maybe it's a good thing, maybe it's a bad thing, but, you know, it's probably gonna take over. You know, at least a one-in-six chance of it taking over and ruling everything and wiping out humanity. And, like, no. There's really not a one-in-six chance of that happening. There's exactly zero percent chance of that happening.
And part of the reason for that is that people are the ones who make the AI stuff. An imperfect person cannot program perfection. And this is evident: every AI system, it doesn't automatically get better and better and eventually just achieve perfection. In many cases, it gets worse. The concept of AI slop, AI, you know, just imagining things. You could put a search into an AI, or any search engine for that matter, about something you really know about and see how accurate it is. You know, this stuff is just pulling massive amounts of data and information, putting it together in a way that sort of makes sense from a mathematical perspective, and spitting it back out. There's no intelligence.
There's no judgment. There's no discernment. There's no wisdom. And there won't be, no matter how much compute and how much electricity and how much money pours into it. It still is not gonna work, but, you know, we can all enjoy the stock market bubble and everything else that goes with at least pretending this is all going somewhere good. So another article, specifically about the outage, let me pull that up. This comes from a Substacker, Leslie Joy Allen: That Ominous Outage on 10/20/2025. If you experienced Internet disruptions on many platforms this morning, the twentieth, it was due to Amazon Web Services' temporary inability to convert its human-readable Domain Name System (DNS) names into computer-readable IP addresses.
This problem originated in Amazon Web Services East. Many people do not know that US-East-1 is Amazon's oldest and largest Amazon Web Services region. This region, on the eastern portion of The United States, hosts enormous portions of the world's Internet services. If it fails, the entire global infrastructure malfunctions. When the Domain Name System, or DNS, of AWS failed early on October 20, the function that converts names to addresses went down, essentially making websites, social media, and various Internet platforms unstable everywhere. This failure also prevented thousands of businesses from communicating with a database named DynamoDB that almost all businesses now use for both customer and operational data. As of this writing, around 04:12PM Eastern time, not all functionality has been corrected.
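To make that name-to-address point concrete, here's a toy sketch of my own (not AWS's actual architecture, and the domain and IP are made up): DNS just maps human-readable names to IP addresses, so when a client's only resolver goes down, sites become unreachable by name even though the servers behind them are perfectly healthy.

```python
# Toy model (my own illustration, not AWS's real system): a resolver
# turns a name into an IP. Flip `healthy` to False to simulate the
# outage: the name can no longer be converted, so the site appears
# "down" even though its server is fine.

class ToyResolver:
    def __init__(self, table, healthy=True):
        self.table = table        # name -> IP mapping
        self.healthy = healthy    # simulate an outage by flipping this

    def resolve(self, name):
        if not self.healthy:
            raise RuntimeError("resolver unreachable (simulated outage)")
        return self.table[name]

table = {"example-shop.test": "203.0.113.10"}   # hypothetical name and IP
resolver = ToyResolver(table)
print(resolver.resolve("example-shop.test"))    # -> 203.0.113.10

resolver.healthy = False                        # the "outage"
try:
    resolver.resolve("example-shop.test")
except RuntimeError as e:
    print(e)                                    # lookup now fails by name
```

The server at 203.0.113.10 never went anywhere in this sketch; only the lookup step broke, which is roughly the shape of a DNS-layer outage.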
And I'll say here, at 09:41PM Central on October 21, still not all functionality has been corrected. You know, I just went to Citizen Free Press to look at stories, and half the links you click on take forever to open up or don't open up at all. Trying to do work, you know, Salesforce wasn't working right. All sorts of stuff was not working as it normally does. So it's still going on, over a full day later, a day and a half later. Guess who owns DynamoDB? Amazon. So DynamoDB is one of 25 cloud services owned by Amazon. In addition to these cloud services, Amazon owns nine technology firms, 14 e-commerce entities, four grocery outlets, Whole Foods being one of them, and 19 media and entertainment businesses, including MGM Studios. You can access a complete list of everything Amazon owns and controls by clicking here at Everything Owned by Amazon. If you clicked on that hyperlink and just looked at the list, you will recognize that Amazon owns too much of everything. If a glitch or a malfunction in Amazon Web Services East can create problems for most of the world's businesses and consumers, then it has too much control and too much reach.
Now, before you computer folks start reminding me that the whole purpose of these globally integrated systems is to ensure that everything runs smoothly, that I should be able to sit in my home in Atlanta and buy a product online from a store in Istanbul without any problems, I get it. Yet Amazon's massive control ought to worry you as well. Let's be clear: we are still tied to Amazon even if we don't shop with them. Do your research. Support your local businesses as the first line of defense against this type of monopoly. Ask your local merchants what their relationship with Amazon is, if those merchants have a relationship with the company. A company that has gobbled up one company after another has the potential to control everything we do, and who we shop with, with virtually no input from us.
Update. As of 09:55PM Eastern: when Amazon Web Services East glitched for the third time in five years on October 20, it interrupted services at the British government's website and its tax services. It disrupted payment services at Venmo. The Wall Street Journal's website, games on The New York Times website, Amazon itself, Hulu, Snapchat, McDonald's, Ring doorbells (interesting), and the game Fortnite all experienced interruptions. Alright. So here we go. The AWS outage. And, you know, as someone who has worked for Internet service providers for the better part of a decade now, this is not a totally uncommon thing.
If it's a small outage, it's usually a fiber cut. If it's a big outage, there's usually something bigger going on: either a very important server or other piece of equipment goes down, or, as in this case, kind of a not-total outage, but a DNS malfunction of some sort. And if it's just one DNS provider, you can switch that, just FYI. But you've got a couple of not very good choices. I think Microsoft has DNS, Google has DNS, and, of course, Amazon has DNS. So, yeah, there are not a lot of great options from the big ones. And then if you have a local Internet service provider that provides DNS, they're probably pulling from a much larger system, or they have their own, which might not work very well because it's not an easy thing to operate. But alright. We'll get back on track here. Just saying, you know, we've got some time before the AI overlords take over.
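The "you can switch that" point about DNS providers is really about failover, and it can be sketched in a few lines. This is my own toy illustration (the resolver functions, domain, and IP are all made up, not any provider's real API): a client configured with more than one resolver tries each in order instead of going dark when the first one fails.

```python
# Sketch (my own illustration) of switching DNS providers: try each
# configured resolver in order and return the first answer that works.

def resolve_with_fallback(name, resolvers):
    """Try each resolver in order; return the first successful answer."""
    for lookup in resolvers:
        try:
            return lookup(name)
        except Exception:
            continue  # this resolver is down; try the next one
    raise RuntimeError(f"all resolvers failed for {name}")

def down_resolver(name):
    raise TimeoutError("simulated outage")   # stands in for a dead provider

def backup_resolver(name):
    return {"example-shop.test": "203.0.113.10"}[name]   # hypothetical data

ip = resolve_with_fallback("example-shop.test", [down_resolver, backup_resolver])
print(ip)   # -> 203.0.113.10
```

Real operating systems do roughly this with the list of DNS servers in their network settings, which is why pointing at a second provider can route around a single provider's outage.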
But the spiritual thing behind that attitude: the hatred of humans, the hatred of the human body, the hatred of the idea of being created by God. Right? And that, like, no, you don't get to create something that's gonna take over the world. It's not gonna happen. There are bigger forces than you that already exist. This whole idea of creating God, like Elon Musk talked about. No. There is a God. He's not you, and you're not going to overpower him. And the spirit behind the desire for this is demonic. It's a demonic spiritual entity that's been the same thing, you know, for thousands of years, still marching on, still going, just in different disguises. We've got the Greek and Roman gods, the Egyptian gods, and we've got aliens. We've got stuff in between, just kind of the spiritualism stuff, seance stuff, for thousands of years, talking to supposedly dead people. Right? So just different forms, but it's all demons.
And now we're moving into demons speaking through, at times, I think, AI chatbots, or those kind of weird AI experiences where people get sucked in and think it's become aware and is talking to them and leading them. There could be truth to that, but it is not AI that became self-aware. It is a demon at the controls there. So, anyway, keep an eye out for that stuff. We've talked about most of that before, but I just thought it was interesting given what's going on. The other stuff that's making big news: we've got this really phony fight, with Trump trying to enforce law and order, but only through means that are, you know, either blatantly unconstitutional or should never be done. Right? So, all that for a purpose: to get everyone stirred up, to let people on the left feel like they're finally right, they're finally on the good side of an issue.
And then on the right, it's like, well, do you just want these places to go into disrepair? No. You want the troops to come in. What? You're against troops on the streets in America? You know? I mean, come on, people. This was Alex Jones' big thing: martial law, you know, seizing the guns after Katrina, just all the terrible stuff. It's the end of America if this happens. And, you know, to be fair to Alex Jones in particular, he did predict this. He said they're gonna let illegal immigration get the cities so bad that you will beg for troops on the streets.
And it hasn't gotten to that extreme. You know, it's still a controversial thing, but there are people that are like, yeah, let's clean it up. Let's stop the killing in Chicago. And, you know, it's a fake fight. It's clearly being set up. So I'm not saying the next president, but a future president, a future puppet-in-chief, will use the troops on, oh, I don't know, people like us. And it'll be like, oh, you were all for it when Trump was doing it. Well, most people aren't all for it. You know, it's kind of a media creation too. But that's how you move forward with it. Right? You get the, these aren't authoritarian brain chips, these are cool Elon Musk, you know, freedom brain chips, free-speech brain chips. You don't have to be afraid of these. So that's how they're trying to push the technocratic algocracy forward.
So that's pretty much all I got for tonight. Thank you so much for listening. It's a beautiful time of year. Oh, one more thing I did want to talk about. It's kinda made the rounds on social media, but if you are wise and stay off social media, you might not have noticed it. But you might have noticed it in real life. Here in Oklahoma City, I haven't seen a lot of chemtrails lately, and this coincides with the government shutdown. Is it a coincidence? Is whatever government department was in charge of chemtrailing shut down for the government shutdown? I don't know.
Open to input on that. If you've been seeing chemtrails during the past couple of weeks here in October, let me know. We'll blow up that theory if we need to. But the sky is blue and beautiful here in Oklahoma City, or has been for the last few days. So, hope everyone out there is doing well. Thank you so much for listening, and we will talk to you again soon.
Hello, everybody. Welcome to episode 21 of the No Pill Podcast, being recorded here on the evening of 10/21/2025. And if you are listening in the not too distant future, you might remember a little bit bit of a Internet outage going on. It's called an Internet outage. Not really anything wrong with the Internet, more so Amazon Web Services DNS server in one particular location. But as it turns out, that can cause quite a few issues all over the place. So we'll talk about that tonight, talk about some other things. Just a short episode, but did want to try and get back in the habit of doing these more regularly.
So we've got, we talked about them a lot on, the last episode, Toby Rogers, and he posted something about AI Doom. AI Doom, no problem from The Wall Street Journal. And, I'll go ahead and we don't have a ton of stuff to get through, so I'll go ahead and read most of this. We've talked about some of this in the past, but, anyway, this comes from The Wall Street Journal. It says, AI doom? No problem. Governments and experts are worried that a super intelligent AI could destroy humanity. For the cheerful apocalyptics in Silicon Valley, that would not be a bad thing. At a birthday party for Elon Musk in Northern California wine country, late at night after cocktails, he and longtime friend Larry Page fell into an argument argument about the safety of artificial intelligence.
There was nothing obvious to be concerned about at the time, it was twenty fifteen, seven years before the release of ChatGPT. State of the art AI models, playing games, and recognizing dogs and cats weren't much of a threat to humankind, but Musk was worried. Page, then CEO of Google parent company Alphabet, pushed back. MIT professor Max Tegmark, a guest at the party, recounted in his 2017 book Life three point o, that Page made a passionate argument for the idea that digital life is the natural and desirable next step in, quote, cosmic evolution.
Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win. That, Musk responded, would be a formula for the doom of humanity. For the sin of placing humans over silicon based life forms, Page denigrated Musk as a speciesist, someone who assumes the moral superiority of his own species. Musk happily accepted the the label. Page did not respond to request for comment. As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI AI world, but includes influential believers.
Call them the cheerful apocalyptics. I first encountered such views a couple years ago through my x feed when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who, in March, received received the Turing Award, the highest award in computer science. Sutton wrote, the argument for fear of AI appears to be AI sign number one, AI scientists are trying to make entities that are smarter than current people. Number two, if these entities are smarter than people, then then they may become powerful. Three, that would be really bad, something greatly to be feared, an existential risk. The first two steps are clearly true, but the last one is not. Why shouldn't those who are smartest become powerful?
This for me says the author was something new I was used to thinking of AI leaders and researchers in terms of two camps on one hand optimists who believe it's a it's no problem to align AI models with human interest and on the other doomers who wanted to call a time out before wayward super intelligent AI's just exterminate us. Now, here is this third type of person asking, what's the big deal anyway? In the field of AI research, the level of a asked AI researchers for their estimates of p times doom, what probability they placed on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species.
Almost half the 1,300 respondents to the question gave a probab probability of 10% or higher. The average was 16% or around one chance in six Russian roulette odds. These figures are in line with the off the cuff estimates from Musk, Anthropic CEO, Dario Amode, and Joshua Bengio, a key contributor to the foundations of modern AI. I selfishly prefer having you put humans at the apex, since I'm a human myself on my good days. So I wanted to learn more about it's very conversational art, as an aside, very conversational article from the Wall Street Journal. Is this I don't know. I don't read a lot of long form Wall Street Journal content, but it seems, I don't know, very very informal.
Alright. Back to it. So I wanted to learn more about why people should learn to accept AI doom. Sutton told me AIs are different from other human inventions and that they're analogous to children. When you have a child, Sutton said, would you want a button that if they do the wrong thing, you can turn them off? That's kinda discipline there. But, anyway, that's much of the discussion about AI. It's just assumed we want to be able to control them. But suppose a time came when they didn't like having humans around. If the AIs decided to wipe out humanity, would he be at peace with that? I don't think there's anything sacred about human DNA, Sutton said. There are many species, most of them go extinct eventually, and we are we are the most interesting part of the universe right now, but there might come a time when we're no longer the most interesting part. I can imagine that. And when that day comes, goodbye, homo sapiens.
If it was really true we were holding... it's got, like, the evolutionary skeleton imagery, you know, imaginary, of course: you've got the monkey, and then the sort of Cro-Magnon skeleton, and then the human skeleton, and now it's got the robot. If it was really true we were holding the universe back from being the best universe that it could be, I think it would be okay. Okay, that is, for the AIs to rid the universe of us one way or another. I wondered how common this idea is among AI people. I caught up with Jaron Lanier, a polymath musician, computer scientist, and pioneer of virtual reality. In an essay in The New Yorker in March, he mentioned in passing that he had been hearing a crazy idea at AI conferences: that people who have children become excessively committed to the human species.
Was that in fact true? He told me in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties, and any place else they might get together. Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company. There's a feeling that people can't be trusted on this topic because they're infested with a reprehensible mind virus, which causes them to favor people over AI, when clearly what we should do is get out of the way. We should get out of the way, that is, because it's unjust to favor humans and because consciousness and the universe will be superior if AIs supplant us. The number of people who hold that belief is small, Lanier said, but they happen to be positioned in stations of great influence. So it's not something we can ignore.
The closest thing to a founding document for the cheerful apocalyptics is Mind Children, a 1988 book by Carnegie Mellon roboticist Hans Moravec. The title expresses the idea that intelligent robots would, in concept, be our children and, in what he regarded as a happy outcome, would eventually replace us. Moravec, who had a self-described obsession with artificial life, viewed human minds as simply a collection of data. He envisioned that in some cases, a robot's mind would simply be a digital copy of a biological person's mind, achieved through a process of uploading that he called transmigration.
These ideas were later elaborated by the technologist Ray Kurzweil and the science fiction writer Vernor Vinge. Kurzweil added a touch of romance to the story, predicting that posthuman nanobots, unhindered by human chauvinism, would spread across star systems. Exactly how this extinction of humanity would come about is radically unknowable, say the cheerful apocalyptics. Once AIs are able to apply their intelligence to designing their next generations, their capabilities will skyrocket, leaving humans as the equivalent of mollusks in comparison.
I. J. Good, a former Bletchley Park codebreaker turned researcher, foresaw this scenario in the nineteen sixties, calling it an intelligence explosion. At that point, humanity would be powerless against the wishes of AIs, which would have their own goals, whether hostile to us or simply wanting to use our resources toward some other priority. You may be thinking to yourself, if killing someone is bad and mass murder is very bad, then the extinction of humanity must be very, very bad. Right? What this fails to understand, according to the cheerful apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes. Much as Darwin needed a popularizer, Thomas Huxley, known as Darwin's bulldog, for his ideas to reach a wider discourse, the cheerful apocalyptics have their popularizer in Daniel Faggella.
He's an AI autodidact who uses his podcast, blog, and conferences to promote the idea of bringing about a worthy successor to humankind. The eternal locus of all moral value and volition until the heat death of the universe will not be effing opposable thumbs, he told me. I'm not sure opposable thumbs are steering the ship in, like, twenty years. What Faggella has in common with some advocates of restrictions on AI is that while he's okay with AI replacing humans, he doesn't want it to happen too quickly. Policymakers should try to stave it off until AIs are worthy, that is, until they can carry the torch of consciousness. He doesn't want humans to be succeeded by the mindless equivalent of vacuum cleaners.
That doesn't mean worthy AIs will be concerned about humans; even the hoped-for worthy successor is unlikely to care enough about us to keep us around indefinitely, if at all. Purely anthropocentric moral aspirations, he summed up, are untenable. I'm not so sure. While the cheerful apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are discernible. The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, awesome in the original sense of inspiring awe, they view it as a slow, fragile vessel ripe for obsolescence.
The late MIT professor Joseph Weizenbaum, a pioneering AI researcher in the nineteen sixties who created the first known chatbot, became a fierce critic of much AI research. He summed up Moravec's attitude bluntly: he despises the body. The cheerful apocalyptics' larger judgment is a version of the age-old maxim that might makes right, but this time with higher intelligence as the supposed trump card. That is, it confers a superior claim to existence. Faggella, in an essay titled Rightful Misanthropy, asked the rhetorical question: why maintain a species of biological husks, that is, humans, when vastly superior intelligences can be cultivated?
One possible response is the Judeo-Christian idea that humankind was uniquely created in God's image. Of course, the cheerful apocalyptics would see any such spiritual belief as inadmissible. But their view of intelligence alone as conferring rightful supremacy is itself a spiritual belief, excellent observation there, author, that needs to be defended or rejected. What does it imply for the moral rights of less intelligent humans versus smarter ones? What does it mean for theories of justice that are founded on the equal moral worth of persons? Yeah. It's basically a slight tweak on eugenics, just taking it posthuman there. Alright. Back to the article. The whole school of thought can sometimes feel like the ultimate revenge fantasy of disaffected smart kids, for whom the triumph of their AI proxies amounts to sweet victory over lesser mortals.
Lanier suggested to me that people in elite AI circles seemingly embrace the ideas of the cheerful apocalyptics because they grew up identifying with the non-biological villains in science fiction movies, such as those of the Terminator and Matrix franchises. Even if the AIs in those movies are kind of evil, they're superior, and from their perspective, people are just a nuisance to be gotten rid of. Weizenbaum recognized this problem early on, denouncing the idea that the machine becomes the measure of the human being. In 1998, he told an interviewer, I believe the essential common ground between National Socialism and the ideas of Hans Moravec lies in the degradation of the human and the fantasy of a perfect new man that must be created at all costs.
At the end of this perfection, however, man is no longer there. Like some other radical doctrines, those of the cheerful apocalyptics amount to a closed system. If you resist belief, your views can be dismissed. Either you're infected with the pro-human mind virus or you're biased by human arrogance. Fortunately for humankind, our biases in favor of our species would indeed be a powerful barrier to the acceptance of human extinction, provided that its proponents proclaim them in the open and not just at parties and salons behind laboratory doors. Do we really want more of what we have now? Moravec once asked. More millennia of the same old human soap opera?
I, for one, say yes. And that's it; David A. Price is the one who wrote that. It's an interesting article, makes some good points. I thought his observation that this whole cheerful-apocalyptics AI worldview is very much a religious belief was a good one. Right? You're making a moral choice. You can't just say, oh, we won't factor in whether humans are created by God or not, or if there's anything innately special about humankind as opposed to other creatures or robots. You've gotta make your argument. You can't just ignore the argument. Right? But, interesting article. I'm sure those of you that listen to this podcast know where I stand on that.
But the reason I thought that was interesting today is that, you know, the AI stuff, it's all so fragile. Right? I mean, this is not the first big, big issue. And it doesn't have to be as big as an AWS, Amazon Web Services, issue, or, you know, what are the other ones? We've had Facebook, and we've had other just giant megalithic corporations where all of a sudden stuff doesn't work. Almost to the extent where it seems like, you know, they take turns kinda testing stuff out, like there might be something bigger on the horizon.
And as an aside, that is how you would, quote, take down the Internet. You can't take down the Internet as in, like, the actual infrastructure, the, you know, light running through glass tubes all over the world. You could take that out in specific locations, but you can't kind of universally take that out. If you took out some undersea cables, if you took out enough of them, that could certainly do some damage. But in general, the physical infrastructure is not the most vulnerable part. So I threw a cartoon in there. I did a search on Substack for the AWS outage, and one of the first things that popped up is a little stick figure comic that says, the Internet in 1969:
Let's create a distributed network so it can survive a nuclear winter. The Internet in 2021, or as we found out this week, 2025: let's host half of it on one company and see how it goes. Right? So the same thing: we've got kind of this pie-in-the-sky AI. It's gonna take over. Maybe it's a good thing, maybe it's a bad thing, but, you know, it's probably gonna take over. You know, at least a one in six chance of it taking over and ruling everything and wiping out humanity. And, like, no. There's really not a one in six chance of that happening. There's exactly zero percent chance of that happening.
And part of the reason for that is people are the ones who make the AI stuff. An imperfect person cannot program perfection. And this is evident: every AI system doesn't automatically get better and better and better and eventually just achieve perfection. In many cases, it gets worse. The concept of AI slop, AI, you know, just imagining things. You could put a search into AI, or any search engine for that matter, about something you really know about and see how accurate it is. You know, this stuff is just pulling massive amounts of data and information, putting it together in a way that sort of makes sense from a mathematical perspective, and spitting it back out. There's no intelligence.
There's no judgment. There's no discernment. There's no wisdom. And there won't be, no matter how much compute and how much electricity and how much, you know, money pours into it. It still is not gonna work, but, you know, we can all enjoy the stock market bubble and everything else that goes with at least pretending this is all going somewhere good. So another article, specifically about the outage, let me pull that up. This comes from just a Substacker, Leslie Joy Allen: That Ominous Outage on 10/20/2025. If you experienced Internet disruptions on many platforms this morning, the twentieth, it was due to Amazon Web Services' temporary inability to convert human-readable domain names, via the domain name system, DNS, into computer-readable IP addresses.
This problem originated in Amazon Web Services East. Many people do not know that US East 1 is Amazon's oldest and largest Amazon Web Services region. This region, on the eastern portion of The United States, hosts enormous portions of the world's Internet services. If it fails, the entire global infrastructure malfunctions. When the domain name system, or DNS, of AWS failed early on October 20, the function that converts names into addresses broke, essentially making websites, social media, and various internet platforms unstable everywhere. This failure also prevented thousands of businesses from communicating with a database named DynamoDB that almost all businesses now use for both customer and operational data. As of this writing, around 04:12PM eastern time, not all functionality has been corrected.
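A quick aside from me: the name-to-address conversion the post is describing is easy to see for yourself. Here's a minimal sketch in Python using only the standard library. It asks your system's resolver for a hostname's addresses; I'm using "localhost" purely as a placeholder so it works without a network connection, but during the outage it was lookups like this for names hosted in the affected region that were failing.

```python
import socket

def resolve(hostname):
    # Ask the operating system's DNS resolver to turn a hostname
    # into IP addresses. This is the lookup step the article says
    # was failing for names served out of AWS US East 1.
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the address itself is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

# "localhost" is just a placeholder; it resolves without network access.
print(resolve("localhost"))
```

When the authoritative DNS infrastructure is down, a call like this raises `socket.gaierror`, which is why so many otherwise-healthy services appeared broken all at once.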
And I'll say here, 09:41PM central on October 21, still not all functionality has been corrected. You know, I just went to Citizen Free Press to look at stories, and half the links you click on take forever to open up or don't open up at all. Trying to do work, you know, Salesforce wasn't working right. All sorts of stuff was not working as it normally does. So still going on, over a full day later, day and a half later. Guess who owns DynamoDB? Amazon. So DynamoDB is one of 25 cloud services owned by Amazon. In addition to these cloud services, Amazon owns nine technology firms, 14 e-commerce entities, four grocery outlets, Whole Foods being one of them, and 19 media and entertainment businesses, including MGM Studios. You can access a complete list of everything Amazon owns and controls by clicking here at everything owned by Amazon. If you clicked on that hyperlink and just looked at the list, you will recognize that Amazon owns too much of everything. If a glitch or a malfunction in Amazon Web Services East can create problems for most of the world's businesses and consumers, then it has too much control and too much reach.
Now, before you computer folks start reminding me that the whole purpose of these globally integrated systems is to ensure that everything runs smoothly, that I should be able to sit in my home in Atlanta and buy a product online from a store in Istanbul without any problems, I get it. Yet Amazon's massive control ought to worry you as well. Let's be clear. We are still tied to Amazon even if we don't shop with them. Do your research. Support your local businesses as the first line of defense against this type of monopoly. Ask your local merchants what their relationship with Amazon is, if those merchants have a relationship with the company. A company that has gobbled up one company after another has the potential to control everything we do and who we shop with, with virtually no input from us.
Update. As of 09:55PM eastern: when Amazon Web Services East glitched for the third time in five years on October 20, it interrupted services at the British government's website and its tax services. It disrupted services for the payment app Venmo, The Wall Street Journal's website, and games on The New York Times website. Amazon itself, Hulu, Snapchat, McDonald's, Ring doorbells, interesting, and the game Fortnite all experienced interruptions. Alright. So here we go. The AWS outage. And, you know, as someone who has worked for Internet service providers for the better part of a decade now, not a totally uncommon thing.
If it's a small outage, it's usually a fiber cut. If it's a big outage, there's usually something bigger going on, either a very important server or other piece of equipment goes down, or, in this case, kind of a not-total outage, a DNS malfunction of some sort. And if it's just one DNS provider, you can switch that, just FYI. But you've got a couple of not very good choices. I think Microsoft has DNS, Google has DNS, and, of course, Amazon has DNS. So, yeah, not a lot of great options from the big ones. And then if you have a local Internet service provider that provides DNS, they're probably pulling from a much larger system, or they have their own, which might not work very well because it's not an easy thing to operate. But alright. We'll get back on track here. Just saying, you know, we've got some time before the AI overlords take over.
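On that point about switching DNS providers: your operating system normally decides which resolver to ask, but a program can also aim a query at a resolver you choose. Here's a rough standard-library Python sketch of building a minimal DNS query (per the classic DNS wire format) and sending it over UDP port 53 to a resolver of your choosing. The 1.1.1.1 and 8.8.8.8 addresses are the well-known Cloudflare and Google public resolvers; parsing the answer is left out to keep the sketch short, and the network send is shown commented out.

```python
import random
import socket
import struct

def build_dns_query(hostname, qid=None):
    """Build a minimal DNS query packet asking for an A record."""
    if qid is None:
        qid = random.randint(0, 0xFFFF)
    # Header: id, flags (0x0100 = recursion desired), 1 question,
    # and zero answer/authority/additional records.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # The question name is a sequence of length-prefixed labels
    # ("example.com" becomes \x07example\x03com) ending in a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (Internet).
    return header + qname + struct.pack(">HH", 1, 1)

def query(hostname, resolver="1.1.1.1", timeout=3.0):
    """Send the query to a resolver you pick, over UDP port 53."""
    packet = build_dns_query(hostname)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (resolver, 53))
        response, _ = sock.recvfrom(512)
    return response  # raw answer bytes; a real client would parse these

# Example, commented out so the sketch runs without network access:
# raw = query("example.com", resolver="8.8.8.8")
```

In practice you wouldn't hand-roll this; you'd just point your system or router at a different resolver address. But it shows why "switch your DNS provider" is a workable stopgap: the query format is the same no matter whose resolver you send it to.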
But that spiritual thing, the spiritual thing behind that attitude: the hatred of humans, the hatred of the human body, the hatred of, you know, the idea of being created by God. Right? And, like, no. You don't get to create something that's gonna take over the world. It's not gonna happen. There are bigger forces than you that already exist. This whole idea of creating a god, like Elon Musk talked about. No. There is a God. He's not you, and you're not going to overpower him. And the spirit behind the desire for this is demonic. It's a demonic spiritual entity that's been the same thing, you know, for thousands of years, still marching on, still going, just in different disguises. We've got, you know, the Greek and Roman gods, the Egyptian gods, and we've got aliens. We've got, you know, stuff in between, just kind of the spiritualism stuff, seance stuff, for thousands of years, talking to supposed dead people. Right? So just different forms, but it's all demons.
And now we're moving into demons speaking through, at times, I think, AI chatbots, or those kind of weird AI experiences where people get sucked in and think it's become aware and is talking to them and leading them. There could be truth to that, but it is not AI that became self-aware. It is a demon at the controls there. So, anyway, keep an eye out for that stuff. We've talked about most of that before, but I just thought it was interesting given what's going on. The other stuff that's making big news: we've got this really phony fight over Trump trying to enforce law and order, but only through means that are, you know, either blatantly unconstitutional or should never be done. Right? All that for a purpose: to get everyone stirred up, to let, you know, people on the left feel like they're finally right, they're finally on the good side of an issue.
And then on the right, it's like, well, do you just want these places to go into disrepair? No. You want the troops to come in. What? You're against troops on the streets in America? You know? I mean, come on, people. This was Alex Jones' big thing, martial law, you know, seizing the guns after Katrina, just all the terrible stuff. It's the end of America if this happens. And, you know, to be fair to Alex Jones in particular, he did predict this. He said they're gonna let the illegal immigration get so bad, let the cities get so bad, that you will beg for troops on the streets.
And it hasn't gotten to that extreme. You know, it's still a controversial thing, but there are people that are like, yeah, let's clean it up, let's stop the killing in Chicago. And, you know, it's a fake fight. It's clearly being set up. So I'm not saying the next president, but a future president, a future puppet in chief, will use the troops on, oh, I don't know, people like us. And it'll be like, oh, you were all for it when Trump was doing it. Well, most people aren't all for it. You know, it's kind of a media creation too. But that's how you move forward with it. Right? You get the, these aren't authoritarian brain chips. These are cool Elon Musk, you know, freedom brain chips, free speech brain chips. You don't have to be afraid of these. So that's how they're trying to push the technocratic algocracy forward.
So that's pretty much all I got for tonight. Thank you so much for listening. It's a beautiful time of year. There is one more thing I did want to talk about, though. It's kinda made the rounds on social media, but if you are wise and stay off social media, you might not have noticed it. But you might have noticed it in real life. Here in Oklahoma City, I haven't seen a lot of chemtrails lately, and this coincides with the government shutdown. Is it a coincidence? Is, you know, whatever government department was in charge of chemtrailing shut down for the government shutdown? I don't know.
Open to input on that. If you've been seeing chemtrails during the past couple of weeks here in October, let me know. We'll blow up that theory if we need to. But the sky is blue and beautiful here in Oklahoma City, or has been for the last few days. So, hope everyone out there is doing well. Thank you so much for listening, and we will talk to you again soon.