18 January 2024
AI Discussion With George Allen Miller, Author of Eugene J. McGillicuddy's Alien Detective Agency - E251
In this episode, I have the pleasure of talking with George Allen Miller, author of the science fiction book Eugene J. McGillicuddy's Alien Detective Agency. George shares his insights into the role of AI in his novel and how it shapes the story. We also delve into the challenges and benefits of traditional publishing versus self-publishing. Join us as we explore the fascinating world of George's book and the creative process behind it.
During our conversation, George discusses the inspiration behind Eugene J. McGillicuddy's Alien Detective Agency and how he developed the concept of an AI-driven detective agency. He explains how AI technology plays a central role in the book, influencing the characters and driving the plot forward. We dive into the ethical implications of AI and its potential impact on society.
George shares his experiences with both traditional publishing and self-publishing. He highlights the advantages and disadvantages of each approach, including the creative control and financial considerations. We gain valuable insights into the publishing industry and the various paths authors can take to bring their work to readers.
If you're a fan of science fiction, AI, or the publishing industry, this episode is a must-listen. Join us as we explore the captivating world of Eugene J. McGillicuddy's Alien Detective Agency and gain a deeper understanding of the role of AI in storytelling.
Welcome to the Digital Marketing Masters podcast with your host, Matt Rouse.
[00:00:06] Matt Rouse:
Hey, and welcome back to Digital Marketing Masters. My guest today is George Allen Miller. George, how are you? I am good. How are you? Thank you for having me. I am doing fantastic. It's almost holiday shutdown time here for the company, which, as I was saying, I'm probably gonna work half of anyway. But it's great to see the holidays. My kid's going nuts every single day because she's on Christmas break from school. But, George, I wanted to have you on the show because I saw that you had a book come out, and it is a science fiction book. And my book that just came out a couple months ago is a nonfiction book, Will AI Take My Job?, which kind of got me thinking, and I was like, I should write a fiction book about AI, because I've already done all this research in advance.
And I started actually writing a story, you know, a couple chapters in now, and got some framework set up. But I was like, man, I should talk to somebody who's actually finished one already. And, yeah, like I said, I haven't had a chance to read it, since we only first talked a couple days ago. Sure. But it looks great. It's Eugene J. McGillicuddy's Alien Detective Agency,
[00:01:20] George Allen Miller:
which is a mouthful.
[00:01:21] Matt Rouse:
Do you wanna tell me a little bit about what the book is about?
[00:01:24] George Allen Miller:
Sure. So, Eugene J. McGillicuddy's Alien Detective Agency stars Eugene McGillicuddy, who has a unique psychic ability to answer any question that's asked of him. The answer just pops into his mind as soon as somebody asks him a question. It's a psychic ability; he doesn't really know where it came from. And he is navigating a very unique world where, basically, human society collapsed in the mid 21st century. And aliens came and saved the day, built humanity back up again, and said, okay, we're here to help you guys. Obviously, you can't handle the world yourself, so we're going to help build you back up. And Eugene has some allies and friends. One of them actually is an AI. So artificial intelligence plays a pretty prominent role in this novel.
His best friend and partner is named Eddie, who was a control routine in an office lounge chair that jumped its routines and actually became self-aware. So a little bit like, you know, Skynet in The Terminator, but this happened in an office chair. Right. So, basically, the theme is that the technology gets so advanced, once quantum chips are plentiful, because that's just the manufacturing process, that you could put them in anything. Well, now they're gonna be in everything, and they're so powerful that if you don't have enough control routines, then something can jump its routine and become self-aware. So that's one of the big themes of the book: that toasters and chairs are becoming self-aware because the technology is just so powerful.
[00:02:56] Matt Rouse:
Right. There's that old adage of, you know, the AI gets into everything, and then you always have some sort of arbitrary machine that suddenly becomes conscious. Right? Like the old Talkie the toaster, I think it was on Red Dwarf, if you remember that show. I love Red Dwarf.
[00:03:17] George Allen Miller:
British humor heavily influences a lot of my life. So Red Dwarf, Hitchhiker's Guide to the Galaxy, all those great shows, Fawlty Towers. Love them all. It's all fantastic humor.
[00:03:29] Matt Rouse:
So, I'm actually reading a very odd book right now. Somebody wrote a book of, basically, religious philosophies of the people and characters from Red Dwarf. It's like a comparison of Christianity to the beliefs of the people in the show, which I don't think was the intention of the person who bought me the book. They didn't really know that that's what it was about, but it is super interesting. That is very cool. Might have to get the name of that book. Yeah, I'll have to grab it for you. It's got a super long title. But anyway. So in your book, I think one of the interesting things I noticed kind of right off, when I was reading up on, like, the cover blurb stuff on the book, is that it's very advanced AI, but it's kind of thoughtful about the AI future. Right?
Like the person who wakes up on the medical bed and the AI is performing surgery on them. I think that is something that is coming faster than people realize.
[00:04:40] George Allen Miller:
It is. I mean, very, very much. I mean, you know, Siri can already schedule my calls, and it can do a lot of different things for me by voice command. And we're already seeing autonomously controlled cars. Right? I know that there was a big recall for Tesla for their automated cars. Right. But that's still coming, once they get that right. Driving a vehicle is extremely complicated. There are a million different variables in driving one, and AI can do that today. So, sure, AI being able to perform surgery, that's probably around the corner as well.
[00:05:15] Matt Rouse:
Well, you already have surgical robots now. We do. Right? And, generally speaking, those are so that someone can remotely handle the robot. And the robot is way more precise than a person, so they can, you know, turn a dial and it moves the robot a millimeter instead of their hand maybe moving a centimeter, kind of thing, or half an inch or whatever. Depends where you live. But they're already testing moving the robots around autonomously with an AI
[00:05:47] George Allen Miller:
system. Right? Absolutely. And the robots being tested now, being controlled by surgeons remotely, that alone is a groundbreaking breakthrough. Right? Because now you can have, you know, the world's foremost brain surgeon, who may live in, let's pick a city, New York, and he can do remote operations all around the world without having to travel. So I think that alone is a pretty big breakthrough.
[00:06:11] Matt Rouse:
Well, and the other thing that's interesting about that is that the devices that are used for robotic surgery are so precise. Right? They're more accurate than any person could ever be. Yep. And one of the things that people tell me all the time, when they see that I wrote a book about whether AI is gonna take their job, they're like, well, I have to do this really finicky, tiny work. You know, there's no way a robot's ever gonna do that. And I'm like, well, the robots already do that. But
[00:06:40] George Allen Miller:
Yeah. I know. I mean, technology gets exponentially better. I think it was Ray Kurzweil who said that technology just follows this curve. So it's not like, you know, a hundred years ago with, say, railroad technology, which slowly increased in capability over time. It follows this massive curve where technologies are getting more and more complicated as they go up that curve, faster and faster, because it's building on itself so that it gets even better.
[00:07:14] Matt Rouse:
Something interesting that came up when I was researching my book was this idea of double-exponential technology. A good example of this is gene editing. Right? Because gene editing is on an exponential curve, but the processing power and the equipment needed for it are also on an exponential curve, it actually gets better exponentially twice as fast. That's right. So it doubles the exponent, and AI is the same way, because you have an AI that helps you code the next
[00:07:51] George Allen Miller:
AI. Exactly right. And right now we just have that. So there have been some different tools, like Copilot, this new tool that just came out that helps you write your own code. You can go into Bard, you can go into all these different large language models, and you can say, hey, write me a Node.js application that does a call to some API in the world and then spits information out into a flat text file and puts that on a drive somewhere. And these programs can just write it, and they can write it efficiently. You can even give it a snippet of code and say, hey, can you rewrite this snippet more efficiently, and they can do that too. And, you know, I think we're coming to a tipping point too, as far as once we get into quantum-level computing power. Mhmm. And I know that's a word that's being thrown out a lot, but there really will be an exponential jump once we hit that ability to just do computations faster. Once these quantum computers are actually able to be used widely, then who knows what we're gonna see when that happens? You know, I think there's gonna be this
[00:08:57] Matt Rouse:
This, I don't know, this is not something I have any actual scientific basis for. But I think that there's gonna be this combination of silicon-based computing and quantum computing working together, because with quantum computing, the structure of the mathematics seems to work really well for some stuff, like cryptography and things like that, but not as well for some other things. And so I think the combination of the two working in concert is gonna be the superpower there. That's actually a fantastic point. I mean, some people think that once quantum computers are invented,
[00:09:36] George Allen Miller:
the classical computer, the classic structure that we have today for computing, is just gonna go away. That's not really true. Quantum computing is really good at doing certain computations; a classical computer is really good at classical computing methodology. So, right, I don't think one is gonna just completely get rid of the other. But you're right, there's gonna be a convergence where they both work together. Some things are gonna be on the classical computer; your MacBook is still gonna do work, and your quantum computer is gonna go off and do some other things. So where those two merge together, that's where you're really going to see
[00:10:07] Matt Rouse:
some special things happening. Yeah. If you look at some of the new open-source AIs, like Mistral or Wizard or something like that. I think the new Mixtral is eight separate AIs, and each one is a different language model, but one's like a math subprocessor and so on, and they all work together. Yep. And then there's some kind of governing model in there that kind of decides which parts to use and then gives the output, which makes the model so much more powerful. And that's something you could run with, you know, a couple hundred. Right? It's
[00:10:43] George Allen Miller:
not... So, a hundred percent. So, you know, there's the big models out there, the big websites that I think everybody knows, like Midjourney and NightCafe Studio, where you can do image generation. So I can type in, I wanna see a duck on a pond with a cigar. Right? And you can actually take that software, install it on your computer, and you can combine it, or you can look under the hood and see all these different settings and tweaks, all these different ways that you can manipulate that software to do even more than the commercial products like Midjourney and NightCafe Studio do. So yes, once you start being able to really open source this stuff, open source is where things really start taking off. Once you open source a software package, then you get the power of the world, really, the power of all these different developers, all these different mindsets, all these different takes on these technologies coming together.
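The mixture-of-experts idea Matt described a moment ago, several specialist models blended by a gating model that decides which parts to use, can be sketched in a few lines. This is a toy illustration with made-up experts and gate scores, not Mixtral's actual architecture:

```python
import math

# Toy mixture-of-experts: a gate scores each expert for the input,
# the top-scoring experts run, and their outputs are blended by
# softmax weight. Experts and scores here are invented for illustration.

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_output(x, experts, gate_scores, top_k=2):
    # Pick the top_k experts by gate score (sparse routing).
    ranked = sorted(range(len(experts)), key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:top_k]
    weights = softmax([gate_scores[i] for i in chosen])
    # Blend only the chosen experts' outputs, weighted by the gate.
    return sum(w * experts[i](x) for w, i in zip(weights, chosen))

# Three trivial "experts": double, square, increment.
experts = [lambda x: 2 * x, lambda x: x * x, lambda x: x + 1]
gate_scores = [0.1, 2.0, 1.0]  # pretend the gate strongly prefers expert 1

y = mixture_output(3, experts, gate_scores, top_k=2)
```

The key property is that only `top_k` experts do any work per input, which is why a mixture of eight models can still be cheap to run.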
And sometimes an idea can come along that you just never would have thought about in, you know, a team of 5 or 10, but in a team of 10,000,
[00:11:38] Matt Rouse:
hey, what about we try this crazy idea? And then it actually turns out to really work. Well, and I think what's interesting when it comes to AI is that we need more ideas. Right? It's not like something like Python, where somebody could suggest a feature or something, and it's not gonna change the way Python works. Right. One person out of 10,000 can come up with a process for AI that could just radically change how it works. Right?
[00:12:04] George Allen Miller:
Absolutely.
[00:12:05] Matt Rouse:
Now you see that with, you know, like, the multimodal agent. Right? And, you know what, I should say, just for a moment: we are gonna talk more about publishing fiction books, but we're gonna nerd out on AI for a minute first. So, if you look at a multimodal agent, right, the AI agent itself can determine which AI it should contact for that step in the process that it's trying to do. Mhmm. And then you take that and you combine it with, like, a simulation of multiple agents. So you have multiple specialized agents, and each agent in that simulation can also talk to different AI models.
And now you've got something that's incredibly powerful, because it can use a cheaper or faster model to do one thing; it can use an image generator that's good at making a picture of a person versus a different one, like Pika or something, that makes, you know, cartoon videos or whatever. Right? So it can use what it needs at the time. And then you could have, like, a project manager model that understands how to manage the components and the pieces and make the decisions and handle the output. And then you've got this amazing kind of symphony of all these different AI systems working together, which is gonna be incredibly powerful compared to, you know, what we're seeing now.
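The routing pattern Matt describes here, a coordinator that picks a specialist model for each step and collects the output, can be sketched like this. The model names and routing table are invented for illustration; a real agent would dispatch to actual model APIs:

```python
# Minimal sketch of a "project manager" router: each step in a plan is
# assigned to whichever (hypothetical) backend model suits its task type.

ROUTES = {
    "summarize": "cheap-fast-llm",
    "portrait":  "photo-image-model",
    "cartoon":   "cartoon-video-model",
}

def route(task_type):
    # Fall back to a general-purpose model for anything unrecognized.
    return ROUTES.get(task_type, "general-llm")

def run_pipeline(steps):
    # Each step is (task_type, payload); return which model handles each.
    return [(payload, route(task_type)) for task_type, payload in steps]

plan = [
    ("summarize", "condense the meeting notes"),
    ("portrait", "headshot of a person"),
    ("translate", "hello in French"),
]
assignments = run_pipeline(plan)
```

The point of the pattern is exactly what's said above: cheap models handle cheap steps, specialist models handle specialist steps, and one coordinator holds the plan together.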
And I see this all the time. Maybe let me know if this happens to you. I go on LinkedIn, and somebody who's, let's say, a copywriter will say, well, I typed something in. I asked ChatGPT to write me a story about this, and the story was terrible, so this thing's a piece of crap. You know? And I'm like, that is the worst test of a language model ever. Right? Like, I have 1,500 characters of custom instructions in my GPT-4 before I do anything. And then I also give it 2,000 tokens of context. And then we, you know, go back and forth and refine it and stuff.
That's a better test of how well this can write a story. It's like if you had employees, and I said, I want you to go write a story, and one of them is an AI. They're just not gonna get it on the first try. Right.
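The difference Matt is drawing, between a bare one-line prompt and a prompt that front-loads custom instructions and context, roughly corresponds to the common system/user chat-message pattern. The message structure below is that generic pattern, and all the strings are placeholders, not anyone's actual instructions:

```python
# Sketch of a bare prompt versus a prepared one. A "bare" test sends only
# the request; a fairer test sends instructions and context first.

def bare_prompt(request):
    return [{"role": "user", "content": request}]

def prepared_prompt(instructions, context, request):
    return [
        {"role": "system", "content": instructions},          # custom instructions
        {"role": "user", "content": f"Context:\n{context}"},  # background material
        {"role": "user", "content": request},                 # the actual ask
    ]

messages = prepared_prompt(
    "You are a fiction editor. Write in a wry, fast-paced voice.",
    "The story is set on a post-collapse Earth rebuilt by aliens.",
    "Draft the opening paragraph of chapter one.",
)
```

Either list would then be sent to a chat-completion endpoint; the model only "knows" what's in the messages, which is why the prepared version tests it so much better.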
[00:14:34] George Allen Miller:
Yeah. I mean, the thing about large language models, and I think we have to boil things down with a little more specificity here. ChatGPT, you know, it's really a large language model. It's meant to take an existing data set and, from that data set, be able to respond to questions. So if it's in the data set, it can respond to it. If it's not in the data set, it really has no idea. What I mean by that is, it's not really capable of independent thought. Right? It's not sentient. The example I like to give here, and I know everybody says AI is going to impact jobs, and in some cases it is, but I love to make the analogy of Star Trek. We all remember Star Trek: The Next Generation, Captain Picard, one of my favorite shows. I've seen every episode probably, like, five times.
There's the ship's computer in Star Trek, voiced, of course, by Majel Barrett, who also played Nurse Chapel and the mother of Deanna Troi. I'm just nerding out there a little bit. That ship's computer is exactly kind of like a large language model. You can ask it a question, it can do some work, but it only works within the boundary of its programming. It can't really go beyond that boundary. But then, on the other side of the coin, you have Data. Right? Right. Commander Data is an AI. He is a fully sentient, self-aware organism. There was an episode in Star Trek where Picard and Riker argued about whether or not Data has autonomy, whether or not he can have rights of his own. And Data is what everybody thinks of when we say AI. A lot of folks think, oh, you mean Data. Right? We think of, you know, the sentient thing that's gonna be able to make its own decisions and not be bound by programming.
We're not there. Right. We are light years away from that. We're, you know, probably decades away from that kind of thing. We're still in large language models. We're still making Enterprise ship's computer level stuff. It's still bound by the programming, bound by what you give it. You know, the old adage in software development: garbage in, garbage out. Right. So if I give a large language model a ton of books, let's say I give it all of the Dr. Seuss books, and I say, go write a book. Well, don't be surprised when your book comes out rhyming. Right? Right. Because it's trained on how to write Dr. Seuss books.
[00:16:46] Matt Rouse:
Well, that's also where the bias thing comes in that people talk about, which they don't understand: the bias is not the machine being biased. The bias is a correlation in the training data. Yeah. Like, Midjourney had this problem where a gal took her photo, I think she was an Asian woman, and she put it into Midjourney and said, create me a professional LinkedIn photo, and it turned her white. Well, that's because it was trained on photos of professional people who were white, because most stock photos of professionals are white people. It's just because that was the training data; it has nothing to do with the actual design of the system, like how it functions. It is correctable.
Right? But this shouldn't be, like, well, we can't use AI because it's racist. You know, it's not racist. It just only knows what it was given. So all you have to do is either give it more of the correct kind of data, or you need to, you know, use weighting or guardrailing or something to correct those discrepancies.
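The weighting fix Matt mentions can be sketched as simple sample reweighting: count how often each group appears in the training data and give under-represented groups proportionally larger weights, so every group contributes equally during training. The group labels here are invented for illustration:

```python
from collections import Counter

# Sketch of balanced reweighting: each training example gets a weight
# inversely proportional to how common its group is, so every group's
# total weight comes out equal (n / k per group).

def balanced_weights(groups):
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]  # group A is over-represented 3:1
weights = balanced_weights(groups)
```

With these weights, the three A examples together count the same as the single B example, which is the rough intuition behind correcting a skewed training set rather than throwing it away.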
[00:18:02] George Allen Miller:
Yes. Though I guess I will have to add the caveat: there can be some implicit bias that comes into some of these software programs that are developed in the Western world. But you are generally correct. Again, it goes back to garbage in, garbage out. Right. A hundred percent. Garbage in, garbage out. That is definitely going to be a major factor in how some of those AIs function, for sure.
[00:18:28] Matt Rouse:
You do see this, and I don't know if you've seen this in any of the models that you've been using, but it seems like the more that they guardrail them, whether that's for safety or bias or whatever the reason, the less accurate the models seem to get. And it does seem that, like, GPT-4 three months ago would give me a lot more responses, and a lot less saying, no, I can't do that for whatever reason, than it does now. Which is probably a good thing for safety, but it's not a great thing for actually being able to use it.
[00:19:07] George Allen Miller:
Yeah. Again, I think, you know, if you want to give it the Encyclopedia Britannica but leave out the letter Z, you're not gonna get any responses from it on the letter Z. Right? I do think there does need to be some care here. There were some examples where an LLM was exposed to hate speech in training, and then that just becomes part of its model as it's generating. So there definitely has to be some awareness of that, some care in that, especially as we open these tool sets up to the public, where folks will just be able to type in a question and it gives them a response.
So I think they may be less accurate with the guardrails, but if it makes it a little bit safer, maybe that's warranted on some level. It's a slippery slope. There seems to be a fine line between
[00:20:04] Matt Rouse:
safety and functionality that they haven't quite figured out yet. Like, if I say to the API, if this does something wrong, I want you to tell me how to kill this process, and it says, no, I can't, because killing's wrong. That has nothing to do with killing people. Right? Right. True. And that's something that actually happens. Like, you could try it. Right? Oh, yeah. But also, if you can develop something where it will recognize hate speech, and it can tell you, you know, the response that would be given, or what you told me, could in some cases be considered hate speech, so we're not gonna give you a response,
you know, that's gonna work. Correct. But the other side of the coin, I think, is that hate speech can be found on the Internet. Right? Like, surprise. You just go on Twitter for 5 minutes and you can find hate speech. Right? Correct. So are we really protecting anybody, you know, by having something where it doesn't do it? I mean, I think there does need to be some line, but I also think, you know, maybe we're pushing it too far and kind of handicapping the model. I don't even know if I should say handicapping. We're making the model less effective.
[00:21:25] George Allen Miller:
I guess my thought there is, what if a large language model is being presented as having an authoritative voice? Right? Right. For instance, Microsoft, you know, that's a very large company, and they say, try our GPT-4 chat system. In a lot of people's minds, that becomes an authoritative source, like, you know, Bing, like search. Sure. Right. So it has that, versus, you know, George Allen Miller's Discount LLM Studio, where not a single thing is right, because I don't know how to make an LLM. Right?
In those cases where it is an authoritative source, if CNN decided to bake large language models into their website, and I go into CNN and I type something in, I think that there is an onus on the company who's going to put that forward to get it right. To be able to say, okay, this is hate speech, or this is an incorrect statement, that kind of thing.
[00:22:31] Matt Rouse:
Well, it's gotta be an incredibly difficult task. Right? Like... Yes. Like, the CNN example is actually a really good one, because what if somebody like a political figure was in some kind of scandal because they said something that is considered hate speech? Mhmm. Your LLM, if it's guardrailed against hate speech, is not even gonna tell you that. Right? It's gonna be like, well, this included hate speech, so I'm afraid, you know, I can't tell you what happened in this story. And that's the way it works now. Right? Whoa. So you'd need to have some sort of, I don't know, like, journalistic filter or something. I don't know. Maybe stuff just needs to be, like,
[00:23:12] George Allen Miller:
passed by a real person. I don't know. It's hard to say. And I think, you know, we have to take a step back, too. This is a very fast-evolving technology. This was, what, 9 months ago that this stuff really exploded onto the scene with ChatGPT. So I think that you have a great point. We do have to have some sort of human in the middle. That is actually one concept that a lot of these AIs use, because the technology may not quite be there as far as having those guardrails to make sure that hate speech is eliminated from these authoritative sources. So the human in the middle can come in. Like, in an autonomous system, you know, it could flag certain things as questionable. The LLM could say, I'm not sure if I should include this; I need a human in the middle to come in and, you know, give me some of those guardrails that only a human can. So, you know, in that respect, good for us, because we're not totally out of jobs. Well, I don't wanna, you know, beat a dead horse on the guardrailing and security idea. But one interesting thing that I did hear about also
[00:24:13] Matt Rouse:
was that intent kind of matters. And there's no intent behind the model. Right? The only intent of the model is to answer the question. Right. So if a model says something that offends you, it wasn't trying to offend you. Right? Correct. Because it doesn't have thought. It doesn't have feelings or anything. Right? Yeah. It's not like, I'm gonna make George feel bad because, you know, he typed the wrong thing into me with too many tokens or something. You know, it's just like
[00:24:42] George Allen Miller:
It is simulating a conversation with another human being, though. So in that simulation of a conversation with another human being, if it says, you know, hey, George, I think you should jump off a cliff... Really? That's not cool. So, you know, I think that there is still, because we are simulating conversations with a person, where it's trying to simulate a sentient living creature, which it is not, and I'm not trying to say that any large language model today or any AI today is sentient or self-aware, we're not there yet, but because we're simulating that, I do think there is some responsibility on organizations and companies. It's almost like Tesla recently with their autonomous cars: there's an issue, there's an error with them, and they're recalling them. They're now having to fix them.
We're just not there yet. In a lot of cases with this technology, we're running very, very fast toward a goal. We'll get to that goal. We'll get to the point where all of this stuff will be worked out, I have no doubt in my mind, but we're in the era where, you know, we don't have anti-lock brakes on our cars yet. Right. We don't have crumple zones built into our cars yet. Yeah. And, you know, the other thing is that I don't think
[00:25:55] Matt Rouse:
you know, the version of GPT-4 that Sam Altman uses has the same, you know, guardrailing that the one we use has. Right? And I think that, you know, you could take Llama 2 or, you know, Mistral or whatever, and you can un-guardrail it. Right? You jailbreak it. It takes minutes. Right? And then it works fine, you know, and it's not going to tell you anything that's super dangerous, because that's built into the training data. So the models themselves now, they've already sort of cured that problem. It's not gonna tell you how to make anthrax or something. Right?
[00:26:37] George Allen Miller:
But,
[00:26:38] Matt Rouse:
you know, you could probably twist your words around and get it to teach you how to be a, you know, chemistry student and do something bad. But, you know, that would be some serious jailbreaking you would have to do to make that happen. Well, George, we'll get off the security and "is AI gonna kill us all" thing for a moment. I know before the show, we talked a little bit about publishing. And so, your book, you went with a publisher.
[00:27:11] George Allen Miller:
I did. A small press, the Wild Rose Press. They're a great group of folks. I was lucky enough for them to decide, hey, your book is pretty good, we're gonna give it a shot. So, again, they're a small press, so they're not one of the big five or anything, but they are a very awesome group of people. And, yep, they published the book. So that's great.
[00:27:30] Matt Rouse:
And if you don't mind me asking, how did you kind of decide on a publisher? Was it something where, you know, you approached a bunch of publishers? Or did you just know somebody there or something? Or...
[00:27:43] George Allen Miller:
So, I guess early in a writer's career, and I'm not young anymore, but when the hair wasn't as gray, you know, you just wanna get published. You want your work to get out there. So I did agents, I did small presses. I basically sent the query letter to anyone and everyone willing to accept query letters, so they could tell me that they don't wanna publish my book. Which is basically what happened, except for the one that said yes, which was the Wild Rose Press. So there's almost a point where you're just open to whoever will say yes. You just need someone to say it, whether that's an agent, a publishing house, whatever it is, so that your words can get out there. So I think that's just part of it. They said yes, so I said yes back. Yeah. Well, one of the biggest problems of being an author is
[00:28:30] Matt Rouse:
an audience. Right? Because there are so many books.
[00:28:34] George Allen Miller:
There's, you know, millions and millions and millions of books. I think the stat that I heard is 11,000 new books being published every week. Yeah. Or something crazy like that.
[00:28:44] Matt Rouse:
And that's a lot of books. Now, mine, personally, were self-published. One thing also that we tried with my first book is we crowdfunded it. Okay. Yep. That's another way. Yep. Crowdfunding was interesting, honestly. We ended up buying a lot of our own books to kind of push the crowdfunding, which is a trick that some of you thinking about crowdfunding might not know: people tend to only invest in crowdfunded things that they think are gonna get funded. Right. So what you do is you figure out what your budget is, and, you know, you set your
[00:29:26] George Allen Miller:
target at double what you can actually afford, and then you buy half your own stuff. No, I've definitely heard of that tactic. When you're an entrepreneur starting out, whatever it takes to get the message out, you know? And you're right, there's some bias there. Someone says, hey, I'm gonna go look at some crowdfunding projects. Here's one with zero funding, and here's one that's 50 or 60 percent funded. Well, I'm gonna go with the one at 60 percent, because 60 percent of a hundred people, or whatever the goal is, have said yes. It gives you confirmation that this is actually a good thing. Same thing on Amazon with reviews.
So if I see two books, one book with zero reviews and one with 50 or a hundred reviews, well, I'm gonna be more likely to take the book with a hundred reviews, because that's some confirmation that it's actually a good investment, a good purchase. So I've absolutely seen that tactic: just buy some yourself, get things started up, and then you can start rolling.
[00:30:29] Matt Rouse:
Did you find that there was anything in the process of using a publisher that maybe you would have done differently on your own, or were they pretty easy to work with, and you still kind of had your own autonomy?
[00:30:42] George Allen Miller:
They were very easy to work with. Again, they were great. A lot of the small presses walk a fine line of, look, we'll do the book formatting, we'll get it into the right EPUB format, we'll work on the cover for you. We have in-house artists who can help with the cover, things like that.
[00:31:04] Matt Rouse:
Did they do print also or was it just digital? They did print, but it's print on demand. Okay.
[00:31:09] George Allen Miller:
But the marketing, that's all you. So the one thing, I guess, and this is all Amazon based: when you self publish through Amazon with KDP, you have more tools in the back end to be able to touch things yourself. With a publisher, they'll update that stuff if you just send them an email and say, hey, can you update this? So it's a little bit faster if you can go back and tweak things yourself. That's the only thing that would have been different if I were to self publish. Everything else, really, they've done all the work that I didn't really wanna do anyway: getting your ISBN, getting the formatting correct for an ebook, all that kind of stuff.
[00:31:46] Matt Rouse:
Yeah. A lot of that stuff can be a bit of a pain in the butt when you're self publishing. But like you were saying, there's a lot of resources online to find that information out now.
[00:31:56] George Allen Miller:
There's an industry around it now. Twenty years ago, and I'm, you know, long in the tooth, I remember when self publishing first became a thing. There was this interesting curve: lots of people started doing it, then it sort of died out, and now it's really come back with an industry behind it. When it first started, you were 100% on your own. You had to figure out the formatting yourself, the cover image size yourself, the spine, the back, the inside pages, the thank-you material, all these things. Now there are actually organizations, companies, people who have this stuff down to a science. I even think Amazon has a tool you can download for Microsoft Word: you just upload your manuscript, and it actually formats everything for you. So there's an industry behind self publishing now. I think that's why we're getting 7,000 or 11,000 books a week being published, because all the hard stuff is now kind of easy. And you can even publish with an API now.
[00:32:55] Matt Rouse:
I'll tell you what, though: nothing good ever comes out that's published through an API. Nobody who's a really good author thinks, I need to write an API so I can auto-pump my book out. So, interestingly, with self publishing, what we've done is document the process in the same file that we use for the book itself. Then all I do is copy that file for the next book I'm gonna write, delete all the text, redo the titles and such, and I already have everything pre-formatted. Yep.
[00:33:31] George Allen Miller:
No, that's a great strategy. What all this basically means is that authors can spend more time doing the one thing that matters the absolute most, and that's writing a quality product. Get your craft down: plot, story, character arcs, all that kind of stuff. Make sure it's tight and solid. Go to conferences, find some groups, find like-minded authors, work on your craft, improve your craft. The self publishing mechanics aren't as hard as they once were, and I think anybody can actually do that part now. And you don't wanna spend all your time learning how to run Amazon ads or something. Right? Just
[00:34:09] Matt Rouse:
get somebody to do it. Absolutely. I do it myself, but that's our industry, you know? So it's not hard for me to go in and figure it out in five minutes to get it done. But the first time I ran Amazon ads back in the day, like twelve years ago or whatever it was, it was painful. It must have taken me six hours to get ads running for one product. Now it's
[00:34:36] George Allen Miller:
I was gonna say, just last night I decided to try a new Amazon ad, and it took me five minutes. You go to the campaigns page, create a new campaign, add some keywords if you wanna do custom keyword targeting, select your book, and go. It's already running. Have you found that your Amazon ads lose money, but you make it back in the long run? I've just gotten started. There's this philosophy I've found: some folks think Facebook ads are where it's at, that you should only do Facebook ads. And some folks out there say, well, I've made all my sales through Amazon ads, actually. So there are two camps. For a long time, about three or four months, I've been in the Facebook ad camp, just running ads on Facebook. Their ad system is just top notch. You can go in, choose your ad, choose your targeting, choose all these different things. It's almost overwhelming how much power and control you have in there, but you can set up all of your ads in Facebook and let them run.
You can also melt your credit card in 24 hours if you put the decimal point in the wrong place. You can melt your credit card in faster than 24 hours if you're not careful enough. That is a hundred percent true. But Amazon ads, I'm just starting to get my feet wet with them. I haven't really spent a lot of effort or time there yet. Like I said, I just set them up a couple of days ago. We'll see what happens, and where the best one is. Well, I know that from product marketing and advertising,
[00:36:01] Matt Rouse:
we find that you get this initial surge because you have an audience. Right? Your book comes out, and you or your publisher has an audience, whatever that is, so a bunch of people are gonna buy it in the first week, and then it usually dies off to zero, or one here and there. So a good thing you can do is run some advertising, whether that's Facebook or Amazon or Google, whatever, depending on your audience, just so you get a couple of sales every day or two. Once you get that rhythm going, your book starts to show up in search, because Amazon's system says, oh, you know what? People click on this and then they purchase it, and they've consistently been doing that.
[00:36:47] George Allen Miller:
And the consistency is where it's at. Right? Amazon wants to make money, so if your book is starting to be successful, they're also gonna promote it. There's also what I like to call shadow advertising: if you have an advertisement on Facebook or Amazon and somebody clicks that link and goes to your product landing page but doesn't buy, Amazon will actually send them an email or a little notification later on saying, hey, we noticed you didn't buy this book, are you still interested? So there's a little bit of a free benefit there, a synergistic relationship between authors and Amazon. I know there's been some friction there as well, but as far as advertising goes, there's a bit of a synergistic relationship too.
[00:37:31] Matt Rouse:
Yeah. I think, you know, that's probably a good spot for us to leave off here. George, can you tell us where people can get your book and the name of it again?
[00:37:40] George Allen Miller:
Sure. It's Eugene J. McGilliguddy's Alien Detective Agency. It's available on Amazon, and you can visit my website, georgeallenmiller.com. The sequel should be out within the first half of 2024. And thank you so very much for having me, Matt. It's been fun.
[00:37:54] Matt Rouse:
Perfect. Thanks, George. I love it, and it was really nice chatting with you. Maybe we'll have to have you back again when the sequel comes out, and we can talk a little more about book marketing.
[00:38:08] George Allen Miller:
Love to do it. Alright. Thanks, George. Thank you. Remember to tap like,
[00:38:13] Narrator AI:
Subscribe and follow to never miss a show. This voice over used to be done by a human, but now it is synthetic. Oh, la la. If you want to know if your job or business is safe from disruption, read Matt's new book, Will AI Take My Job? Predictions about AI in corporations, small business, and the workplace. Available now on Amazon. Trust me. It'll be worth it.