We're spending a lot more time playing around with models.
In Episode #469 of Musings, Juan and I discuss: why we think agentic AI will be extremely useful, whether open source models can compete with closed source, why I believe they are necessary for privacy/freedom of expression/anti-advertising reasons, the bitter lesson with regards to compute, why governments will regulate and try to control the industry, and the similarities to past cases with big tech.
No boostagrams again, very sad puppy :'(
Timeline:
(00:00:00) Intro
(00:04:12) Decentralised Vs Closed Source AI
(00:14:20) Open Source AI: Current Trends and Challenges
(00:24:01) The Future of AI: Predictions and Speculations
(00:31:00) Boostagram Lounge
(00:32:48) AGI: Company Or Lone Genius?
(00:39:04) Compute Power: Ethereum Beats Data Centres
(00:45:03) Government Regulations
(00:54:01) Conclusion And Future Discussions
Connect with Mere Mortals:
Website: https://www.meremortalspodcast.com/
Discord: https://discord.gg/jjfq9eGReU
Twitter/X: https://twitter.com/meremortalspod
Instagram: https://www.instagram.com/meremortalspodcast/
TikTok: https://www.tiktok.com/@meremortalspodcast
Value 4 Value Support:
Boostagram: https://www.meremortalspodcast.com/support
Paypal: https://www.paypal.com/paypalme/meremortalspodcast
[00:00:07]
Juan Granados:
Welcome back, Mere Mortalites. Slightly delayed from our usual; well, we planned it out to be 7 am. However, we are an hour and 40 minutes early from our usual time, so you can decide whether we're late or whether we're early. This is Musings, we are the Mere Mortals. You got Juan here. Yeah, Kyrin here on the other side. 29th of December and yeah, 7:15 am. Musings is a chance for us to have a bit of a conversation around a particular topic. We meet, yeah, deep topics with lighthearted touches, kind of the little tagline that we used to use quite a lot. Those passing in the park was another one. But today, something that kind of came up through the week was AI, and that's what I want to talk to you about. Now, the title that Kyrin's put together for this particular one is open source decentralized AI: can the compute compete?
Just to give you a note right at the onset, people: if you're listening to this, we will not be talking technically about AI. Neither of us really has the chops, nor probably the care factor. I don't know. At that level, I'm curious, but I'm not intelligent enough to go down that path. Well, not intelligent enough, no; not prepared to learn the amount that I would need to learn and then continually stay up to date with everything that's going on. Both the software side, which I enjoy, but also the hardware aspect of it. Oh, yeah. That's a little bit of a stretch too far. Yeah, that's my end. It's not uninteresting to me either. So I thought, in part because we are likely to talk a lot more about AI leading into 2025.
Kyrin is going to be doing a particular niche, I would say, of the concept and the philosophy and some of the ideas happening in the decentralized space. But I thought it would be good to do what we sometimes do with the baselining. I'll ask you: AI, just in its general sense, what do you hope to gain from it? Like you personally, what are you hoping to gain? Whether it's from a learning perspective, whether it's from usage, whether it's from an investment?
[00:02:03] Kyrin Down:
What would you kind of generalize for people at home? Oh, like all technology, I want it to make my life easier. I think that would be the thing. You know, Google search was amazing because it made people's lives much easier and was extremely helpful. So I see a similar sort of thing with the technology of especially agentic AI. So that's when you've got an agent. Very similar to how with Google, the search PageRank algorithm, it was just simple text you typed into it and things that you found useful would pop up, whether it be videos, whether it be websites, whether it be business contacts, you know, whatever it is. I'm hoping that I can get to a similar place where I type in, hey, I'd like to buy 5,000 shares of Woolworths.

And it just goes and does that for me, and this is where we've got to have like a whole stack of additional technologies built one on top of another. And it might not be shares, you know, might be, hey, buy me 50 grand worth of Ethereum. And it just goes and does that for me using wallets and things like that. Obviously, that's a financial aspect, but you could imagine this doing similar things like, hey, go buy me a domain for this website. Hey, message this person for me. Hey, do this for me. And so I really hope that is what the future of AI is when it comes to me personally. I just want the use cases of the technology. Learning about it and things like that, as you mentioned, I don't really... I've got a whole stack of YouTube videos about, you know, the different types of transformer models.

I've tried reading some of the papers, the Attention Is All You Need one, that sort of stuff. And it doesn't interest me. Yeah, I haven't gotten into it, whether it be because my IQ is too low or because there's just no interest there.
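As a rough illustration of the kind of agent flow Kyrin is describing, here's a minimal sketch; everything in it (the intent parsing, the broker and wallet helpers) is invented for the example rather than taken from any real product:

```python
# A toy sketch of an "agentic" flow: a natural-language request is mapped to a
# structured intent, which is then routed to a purpose-built tool. The broker
# and wallet functions here are placeholders, not real APIs.
from dataclasses import dataclass

@dataclass
class Intent:
    action: str      # e.g. "buy"
    asset: str       # e.g. "WOW.AX" shares or "ETH"
    amount: float    # share count or dollar amount, depending on the tool

def parse_request(text: str) -> Intent:
    # In practice an LLM would do this step; it's hard-coded here for the demo.
    if "Woolworths" in text:
        return Intent("buy", "WOW.AX", 5000)
    if "Ethereum" in text:
        return Intent("buy", "ETH", 50_000)
    raise ValueError("could not parse request")

def buy_shares(asset: str, quantity: float) -> str:
    return f"placed order: {quantity:,.0f} x {asset} via broker (placeholder)"

def buy_crypto(asset: str, dollars: float) -> str:
    return f"bought ${dollars:,.0f} of {asset} via wallet (placeholder)"

TOOLS = {"WOW.AX": buy_shares, "ETH": buy_crypto}

def run_agent(text: str) -> str:
    intent = parse_request(text)
    tool = TOOLS[intent.asset]   # route the intent to the specialised tool
    return tool(intent.asset, intent.amount)

print(run_agent("Hey, I'd like to buy 5,000 shares of Woolworths"))
print(run_agent("Buy me 50 grand worth of Ethereum"))
```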
[00:04:12] Juan Granados:
Yeah. So I will drop in the question of decentralized versus, I guess, closed-system AI. That objective or outcome that you want, which is to leverage AI so that actions that you want to take become easier, yes, there's less friction in it. Do you then care if it's decentralized or a closed system?
[00:04:39] Kyrin Down:
ultimately
[00:04:42] Juan Granados:
no. What might it be? It's possible. Like if, you know, one place is charging you $1,000
[00:04:46] Kyrin Down:
per month to use it and the other one's free. All these come into consideration. I'll throw in an analogy compared to social media. There's alternative decentralized social media out there. It just... Mastodon. Mastodon's an example. Nostr is another example. They haven't caught on, traction-wise. I haven't found them particularly interesting or useful. No. Mastodon has been useful for the Podcast Index community. And I don't use it personally. But I don't really use social media personally that much. So you can give us an example as well of closed ones. Instagram's Threads,
[00:05:27] Juan Granados:
people are using it, but very few. BeReal, which as a social media took off for a little bit and then kind of spat it out. There's a few. What was the one with the speaking tree? The one where people would just go into a lot of audio chat? Yeah. You can always tell that one's gone. That had its moment. Yeah. If you can remember Clubhouse. Clubhouse, Clubhouse. So I guess you could see it in closed versus decentralized sort of... Yeah, you know, there's
[00:05:56] Kyrin Down:
arguments as to why, and this is why I value the RSS audio that we produce for the podcast more than the YouTube videos. I value that much more. And there's a reason for that: you're always at the risk with YouTube of getting banned, demonetized, kicked out. The Hitler-loves... no, sorry, the Kanye-loves-Hitler video almost got us in trouble. It got brought back though; Kyrin challenged it and we've got
[00:06:23] Juan Granados:
Yeah. Google socket. Yeah, alphabet socket.
[00:06:28] Kyrin Down:
Yeah, look, so I've told you my end of it. What about for you, for AI? What would you
[00:06:34] Juan Granados:
Mine's careening, especially for 2025. This is careening into, like, oh wow. Already as part of my annual goals, if you go back to the annual goals, I had a goal there: leverage more AI. And even when I wrote that one down, I didn't realize how much more it was going to be impacting my life. So in a personal way, I am using it more, and to the same degree as yourself, I would like it to reduce the effort I'm using, or the friction. Here's a clear example: we're looking at potentially moving from the current home to another place. In the past, let's go back 5 or 6 years ago, to do my research around where next to buy and all that sort of stuff, I would have gone and perused documents, reports, sales histories, percentages. Maybe there were a couple of aggregators out there that could help me out with that. But it would have taken me a good 10, 15, 20 hours.
Now, in fact, yesterday I had this quick idea. And I was like, oh, you know, I wonder if I can leverage one of the AIs to help me do something like that. I think my sort of call-out was: I wanted to know, over the last 10 years, what were the best performing suburbs; over the last 2 years, the best performing suburbs; fast forward to the 2032 Olympics happening here in Brisbane, what could be the best; create an equation for me based on various factors of what that might look like; now apply the equation and give me the reasoning as to why that might be. And I'll give you the answer that it gave me for Brisbane: Woolloongabba. So Woolloongabba, that'll probably be destined to be... and I was like, you know, okay, I question that one. Yeah, there's a couple of places that we've got, like, listen, I'll just say that. But I went, you know, okay. For me, again, it's the reduction: it would have taken 20 hours, it probably took me 15 minutes to get to maybe 90% of the solution.
One aspect that we could talk about a little bit later maybe would be: what are you losing in constricting that time all the way down? Is there anything practical or valuable that you're losing now that you're getting to the outcome so much quicker? Yeah, maybe there's some of that, like the fun in the journey that you would take to get there, and the learnings that you would get that you don't get this way because you're just rushing towards the outcome. Perhaps. Perhaps I would say there's some foundational learnings, but again, you could probably leverage AI to do that too. Okay, let's park that to the side. Yep. The other one, though, from a business perspective is, again, talking about agents. And honestly, I guess I understood agents or agentic AI maybe in the wrong way. My attack on it all, when I was having conversations over the last couple of weeks, was: you don't need agents. Why do you need agents, when some of these LLMs are going to advance so far, whether decentralized or not, that they can just do anything? It's just like, go for it. Until I really kind of understood.
Yes, but you want to focus it on a particular action or proposal or integration so that it serves a purpose. And I went, oh, okay. Yeah, that makes sense. Because pick whichever LLM or AI out there that you want to use, it can practically go and do many, many things, lots of things. But like the example you just gave, hey, I want to buy $50,000 worth of Ethereum, you probably want a specific agent that has that integration, has the connection. Yeah, that maybe goes through the credential
[00:09:47] Kyrin Down:
that knows the difference between, okay, I want to buy this: well, you can just go bulk buy that on an exchange and get smashed with regards to fees, or this one's like, oh, if I use a DEX or multiple DEXes, I can break this up into smaller parcels. If I'm trying to buy $50,000,000,000 worth of Ethereum, it would know, hey, don't try and just do one big limit order of that. That's not going to be good. Kind of just like a human, you know: if you ask a general human to do something, they can do most things. It's just, if you want the guy to, you know, pave your lawn over with new tiles or something,

you want the dude who's practised that a lot, knows where to get the new tiles from, how to lay them, all this sort of stuff, rather than just anyone coming in and
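To make the DEX point concrete, here's a very rough sketch of why splitting a large buy into parcels can matter; the price-impact model and the pool-recovery assumption are made up purely for illustration, not taken from any real exchange:

```python
# A toy model of order splitting: one big market buy moves the price against
# you, while smaller parcels (with time between fills for the pool to recover)
# get more ETH for the same spend. The impact formula here is invented.
def eth_received(total_usd: float, parcels: int, base_price: float = 3000.0,
                 impact_per_usd: float = 0.000001) -> float:
    parcel_usd = total_usd / parcels
    total_eth = 0.0
    for _ in range(parcels):
        # Crude assumption: each parcel only suffers its own price impact,
        # because the pool recovers before the next parcel is sent.
        price = base_price * (1 + impact_per_usd * parcel_usd)
        total_eth += parcel_usd / price
    return total_eth

print(f"single order of $50k: {eth_received(50_000, parcels=1):.3f} ETH")
print(f"ten parcels of $5k:   {eth_received(50_000, parcels=10):.3f} ETH")
```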
[00:10:38] Juan Granados:
creating a mess. There's details underneath the sentence, which I guess gets to my last one as well. There's details underneath the sentence where you say, hey, I want you to pave over my driveway. Okay, does that mean you pave over the top of the concrete that's there? Do you remove the concrete and put it over soil? Do you put a brand new layer of something and then put the tiles over it? There's many ways you can do that, like you say, but there's more definitions to that, which I think is part of the agent experience, which I'm kind of seeing more and more.
And then the third thing, the big one: I first heard it from Tim Ferriss probably about 2 years ago. Again, Tim seems to always be well ahead of the curve on some of this stuff. But it was: whoever gets the incantation best, or suited the best, will win. Basically, whoever can write the better prompt, or write the question really, really well into whatever AI LLM you're using, the better it's going to go for you, the better it's going to give you whatever it is that you're actually seeking to get. And so a big part, I think, of AI, again, whether closed or decentralized... I'm actually gonna disagree with that. Okay. No, and the reason I'm saying that, here's a clear example.

This is OpenAI. If you use GPT-4o right now, it will give you a pretty good answer, depending on the question that you ask. However, up the chain of logic modes, so like the o1 and then o1 pro, which I've gotten my hands on to play around with, it's even more, let's just call it finicky. It's more finicky because you have to ask it even more appropriate questions. Otherwise, you do get an answer, but it's not best used until you ask it a really, really good query; put a good input into it and it will work better that way. The only place I haven't seen it doing that well is probably the video creation AIs like Sora, which, man, whatever I do it creates horseshit videos, and some of the featured ones over there look really cool, man. I can't replicate anything. So someone tell me if I'm just doing something wrong, because I've tried basic as balls, really complex bloody prompts, and everything in between. Not working out for me. Yeah, yeah. That my thing with the prompts
[00:12:52] Kyrin Down:
is, I think, kind of once again going back to Google search. You know, you've got the Google search where you can use all of the parameters, like: don't include this date, I'm looking for this specific keyword, exclude these keywords, etcetera, etcetera. There's all these things that you can actually do with that to refine your search and make it better and get the optimal result. But nobody really used that. And so this is where I go: it's not the prompts. At least when we're talking mainstream adoption of these things, the AI that wins, the model that wins or something like that, will be the one that can decipher what the average human just types in and make it work.
[00:13:44] Juan Granados:
Sorry, that's good. Cool. At the simplistic level, like general Googleable interactions, I agree: taking the minimal amount and using other info to kind of gather what you might need and present it. Maybe I was thinking about it in the agent usage, business context, harder, more complex problems where you want to put in some significant parameters, integrations, whatever. That's where I go, okay, prompts and the right prompt creation would be really beneficial. No, I agree, I agree. The more simply it can do those sorts of things, the more of a challenge it will be to, like, Google or something
[00:14:20] Kyrin Down:
similar. Yeah. Let's get on to the open source decentralized part of the equation. So I've gone down this rabbit hole for a bit recently, and you're getting 2 good portions here with the Mere Mortals, because Juan certainly used a lot more of the ChatGPTs
[00:14:38] Juan Granados:
of this world, of the... Oh, yeah. Just to add context: we're gonna say we're users, we're not creators of any of this shit. Like we're not coding, we're not independently coming up with it. But the ones that I've played around with: Grok, so xAI, is one; OpenAI's ChatGPT, all those various models; Claude from Anthropic. And some of the open source ones, I can't remember the names, but I play around with those very, like, gently, just to see what it creates and stuff like that, just to see its interaction. Yep, sure. And mine's almost the opposite. I've used
[00:15:12] Kyrin Down:
most of them through Venice.ai, and because I've got a pro account there, I've got access to all of the biggest, best open source models. So the Llamas, the
[00:15:25] Juan Granados:
what, if you can recall? Yeah, I was gonna say, does it give you like the very, very latest of, say, the OpenAI models?
[00:15:31] Kyrin Down:
No, no. So there's the open source models. They don't have any closed source stuff in there. Gotcha. This is
[00:15:38] Juan Granados:
open source. What? Llama open? Okay. Yeah.
[00:15:42] Kyrin Down:
So let me just see if I can go into settings here. So it's got Llama, Dolphin 72B, Qwen 2.5 Coder, Hermes 3 8B, etcetera, all sorts of things. And that was just for the text. For the images: Fluently XL, Flux Standard, Flux Uncensored, Pony Realism, Stable Diffusion. Yeah. So these are the ones I've been using personally, and then right when ChatGPT was first coming out, I tried that a couple of times. So you've got a good mix of the uses here. At this very current stage, I would say it's almost guaranteed that Juan has gotten way better results out of everything that he's used so far. And this is because it is closed source, not decentralized.
And it's had a, you know, renaissance... not the right word. It's had a big popularity explosion since, what was it, 2021? All this stuff? Yeah. And so this is actually what happens a lot of times with technologies: it's the closed source models that win. So why am I interested in the open source decentralized ones if they're not getting as good results as what yours is? Right now. Right now. And this is linked very heavily to other stuff that I care about: advertising, privacy, which I actually don't care about that much in general. We haven't talked about it much on here. Obviously, we're doing the podcast, so we're pretty forward-facing to the public and things like this. But when it comes to what I see being able to use with these models, and we talked about it previously, like it getting to know me, this is where privacy is becoming a bit more important to me, and freedom of expression, and even coercion. So I think the use cases that will set apart the decentralized stuff is it allows for all of these things. It allows you to have private conversations, which, you know, OpenAI has taken everything that Juan has typed into it, and everything that I've ever typed into it, and stored it somewhere. Which, what's it going to use that for?
I think probably they'll go down the advertising model eventually, which is that it's going to start feeding specific results into what you're typing and searching for. Whether that be a little sidebar, kind of like how Google does it, where it puts the top results as sponsored ads, or whether it's thrown in a little bit more sneakily. Who knows? I predict that's what's going to happen.
[00:18:27] Juan Granados:
Look, honestly, it might do. It might do. And I'm just throwing this out of complete left field, because I don't necessarily know the exact numbers, but we've talked about this with Spotify, where you pay the Spotify premium, you don't get ads. But again, Spotify itself is still not making a profit. They're still generating a loss. Well, they just got out of that problem. Yeah. Okay. Like really recently. Okay. Really, really recently. Congratulations, Spotify. Yeah. But enough shit talking. The inverse of what I was gonna say was, you know, with OpenAI's various models, I kind of go, I think there's enough people paying for the privilege slash the usage of it that they might not need to go down the path of an advertising role. How many people have subscribed to it, maybe 200,000,000 people? Maybe there's enough money there being generated just by subscription payments that that would be great.
[00:19:24] Kyrin Down:
I'm trying to think of an example of big tech not going down the advertising model at some point, of a company that's, you know, refrained from doing that.
[00:19:36] Juan Granados:
Well, think about it from a non social media perspective. I'm guessing there'd be plenty out there that don't leverage ads in that regard. Like, I would say Amazon leverages ads to market itself, but they don't themselves implement ads into the internal software that they sell.
[00:19:57] Kyrin Down:
Yeah. Yeah. That feels like... a whole thing of theirs is ads, though. Well, like Oracle. Oracle as
[00:20:05] Juan Granados:
a software-producing company, they themselves don't have to go down any path of ads, because the product that they're selling makes them enough revenue and profits. I would say, how many people use Oracle, like how many people buy that? Is that a...? More so enterprise, like mostly businesses, enterprises. I guess I would say it's weird because generally you'd see ads where the eyeballs are, and where the eyeballs are, usually the product is free, and when the product is free, then you are the product they're leveraging. Whereas with OpenAI or some of the other paid systems, it isn't. There are free models, and yeah, I could see the free models then having ads introduced to them. But it appears like there's a lot of people using paid models, you know, in the closed systems. Yeah. Just seems that way. Okay. Yeah. In any case,
[00:20:55] Kyrin Down:
I think that'll probably be what's going to happen. So I feel like this is one of those ones where there's a decent chance that the open source models will never get better than the closed source ones, I could imagine. And when it comes to prompting, when it comes to the agentic AI, I think that'll be a bit different. But I'm talking probably the next couple of years is what I'm imagining when I'm saying this. Closed source: if you just type in a general thing... we were experimenting with this recently. What was it, Elmo playing sports with someone? Yeah, it was playing, like, basketball with Michael Jordan. Yeah, to that effect. I didn't get very good results from the open source models that I was using. I couldn't even get a result from the closed, like the OpenAI model. Okay. That's because it restricts you: you cannot create an Elmo. Yeah, which is then getting into, once again, the kind of restricting of certain things, whether this be because of copyright or whatever.
You know, that's fine. I'm unsure as to whether the open source models will be better, but it's kind of like how I see RSS and its play with YouTube and video. I feel like YouTube is only good because there is RSS behind the scenes. It's not as popular. Adam Curry talks about when RSS and YouTube were first kind of coming out; the first time, I should say, this podcasting via RSS, and RSS has been around for much longer than podcasting. And so this was around the 2004 to 2010 period, and both were these kind of exciting new things. Oh shit, you could put audio up online, and it's called podcasting. And then, oh shit, you could put video up online.
And it's, you know, YouTube, or you could put video into podcasts. That was actually how I first found them in the iTunes Apple Store; I used to watch podcasts in there. And he was saying that they were kind of at an equal level of buzz. And then YouTube just skyrocketed up in terms of popularity, people using it, attention, news media writing about it, etc, etc. For me, it feels like it needed RSS to be able to almost take in a lot of the people who YouTube wouldn't take in. So this is the type of people who would do the COVID podcast or the conspiracy podcast. Obviously RSS is not just for that.
Podcasting RSS is not just for that, but it certainly still has a role. It's like a backup, almost; like you need the backup for this other thing to work really well. And I kind of see how that could potentially happen with this as well. So am I claiming that open source decentralized AI is going to be better? I think probably in the long run there will be, but certainly in, like, the short to medium...
[00:24:01] Juan Granados:
Questionable. Yeah. So I'll look at it the other way, between the decentralized versus closed, in that I think in the decentralized space, all of what you stated is going to have its use cases. But even in the medium term... again, I don't know, who knows what long term looks like. But in the medium term, I don't see it bettering some of the capabilities of closed systems. And that is more aligned to probably the value of the benefit that is being obtained and sold in the closed system, but also the level of talent that's being brought into the closed system. Years ago, and still today, Meta, Alphabet, Apple, Nvidia, you have these ones, right?
They would basically go out, and still go out, and they will pay whatever money it takes to get literally the best people to be working on this. I think there's an example of an xAI chief AI officer or something like that, or finance, something technology, I'm pretty sure it was like, xAI was trying to hire them. And they bought them from Google's DeepMind for millions of dollars of salary. And then Google paid that particular person even more money to come back. Like, just ridiculous money, to make sure that, hey, we've got the absolute best people working on this.
I don't know, with both that incentive and alignment to what they want to do in the closed systems, how the decentralized model will compete on very specific tasks that are going to be broadly used. I don't see how that's going to do it in the meantime, no way. But on all other fronts, because of the restrictions that they place in a closed system, I can see its usage becoming much more relevant. As you mentioned, the fact that you can kind of get unrestricted, kind of get around some of the copyright items to create the things that you want. That does seem valuable to a lot of people, I would say. Again, I have no idea from this perspective, but something that I've read in, again, Tim Ferriss's stuff, both in a document that he's written and also in a conversation: how copyright will be applied to this is quite the... I don't know, the same conversation happens with NFTs, right? How does copyright get applied to NFTs when you own it but someone duplicates its usage, and stuff like that? I don't think there's been enough legal cases on that. But say someone started using Venice AI, with whichever model, to create things and sell them, but it was breaching copyright law.
Who's at fault there? Is it the models themselves for allowing that, is it the individual generating and creating it, or is it the copyright law itself, and it's outdated and not useful? Exactly. Is it a case that you can no longer do copyright law? There's a little mindset I think Elon has on the rockets and stuff that they're building, or specifically the rocket boosters and whatnot. There's a video out there where someone asks him, oh, do you copyright or trademark these things? And the call-out is kind of: why would you bother? You would only do copyright and trademark to try to slow down other people from catching up to you. But if you're just looking for the betterment, and you're just far and away the best, who cares about copyrighting it, because they're never gonna get close, and you can just continue to do it. So maybe there are shifts and changes in, you know, the application of it, or it's outdated in this model, because now there's such an ability to generate those sorts of ideas and creations. So maybe it's that as well.
[00:27:43] Kyrin Down:
Check out my book review of Common as Air, because it really dives into it, and I think expresses my thoughts on why copyright and IP is
[00:27:53] Juan Granados:
outdated. So yeah, I just think both closed and open source AI are going to be useful, all in their own rights. It's definitely going to depend on what the application is that you want to use it for. I can certainly tell you the various reasons why Anthropic or OpenAI or some of the Microsoft AIs are way better for coding. Like, way better. You go and use an open source model, maybe it's okay, but the level of integration in some of these other closed systems, and the ease of use, it just blows them out of the water, right? So I can see there's easy use cases for why you'd use that. But some other generations, maybe more creative outputs, maybe things that are restricted, or freedom of speech and creation of information, might be better on that front from a use case perspective. Or maybe it'd be on par, just as good or a little bit better. So I think both will have their use cases. Just briefly on the talent side of things: one of the things Adam Curry says about the Podcasting 2.0 community, he's always like, I could
[00:29:00] Kyrin Down:
never hire... I could never get this group in a company. This would not work. Like, the amount of money it would require to hire all of these people to do all of this work, and getting them to work voluntarily or productively in a group environment, it just wouldn't work. And that's why open source is so great in many ways: you can attract a lot of talent, and they're doing it free, voluntarily. And so it's one of those ones where it's the work culture and team culture and stuff. If it breaks apart, it breaks apart in open source. And that does happen. You know, people have personalities and they'll fuck things up. But there's not necessarily a motivation and talent gap. I think you can get really motivated, talented people in open source as well.
How it measures up to these other companies is up for debate, but it's not as one-sided as... a futuristic
[00:30:01] Juan Granados:
tech that maybe comes to you and maybe doesn't. I'm just picturing one of these closed companies achieving what may be true AGI, right? Like real, true AGI. That's a question for us to have: what is real, true AGI? What if you get a couple of these that might as well be, or are even better than, 8,000,000,000 people working all at once? Then there is no match, because, you know, you have these internal technical agents or participants that are just outdoing the entire world's creation in whatever time frame you want, and then that's it. Then you've just outpaced it completely and utterly, and there's no catching it at all. So that's the other, like, injection idea that perhaps could happen.
Again, that was a very horrible pitch for what it is, all the intricacies and what that actually means. Yeah, I don't think that's gonna happen. We'll talk about that afterwards.
[00:30:59] Kyrin Down:
Boostagram Lounge. This section is where we like to thank the people who have supported the show. I don't think anyone supported this week, so I'm just going to talk about some changes coming up. Just quickly, there was a value for value boost. Yeah. So that was just someone following on from, on 2 things. Oh, gotcha. Okay. The Blight Bloomer actor, I can't remember that guy's name. In any case, a couple of changes coming up. So no financial support this week. Very sad puppy. You can do better, people, you can do better. I'm going to change up the value for value splits, probably this next week, into the easiest way possible, using Fountain and SatoshiStream.
So that's one thing. There are probably some things that, once again, get into agentic AI and stuff like this. When does the website come up for renewal?
[00:31:57] Juan Granados:
Good new time.
[00:31:59] Kyrin Down:
I don't think it's really needed for what we're doing at the moment. So I'm tempted to go for a less expensive, easier option if I can get some AI to... How good is ChatGPT at creating a website if you're like, yo, create a website
[00:32:18] Juan Granados:
and most of them, using WordPress or something, it doesn't matter, most of them, WordPress, Wix, all of them now have AI integrated into them. So they create the website for you pretty easily. But shit code, shit code, because that's one of the things people need to be aware of with a lot of the AIs: it makes a lot of redundant code, a lot of redundant items, unless you're really specific. The creation of the code itself, it's quite redundant, but it does the job pretty well. Sure. I might play around with that. We'll see about that. So
[00:32:46] Kyrin Down:
yeah, so that's that for this week. You talked just then about AGI. Will it come up? And this is kind of related to it: if that does happen, is it going to come from a company, you think, or will it come from a lone genius being able to do this on their own?
[00:33:12] Juan Granados:
I think it would come from a company. I would even go as far... so again, who the freak knows where they're actually at on this, we aren't working at any of these companies,
[00:33:21] Kyrin Down:
but what do you think will be the spark that creates it? Is it because a human or company has created the perfect way of synthesizing the knowledge, of creating the correct algorithms, to make the AGI come to life? Or is it going to be because they've got the most compute?
[00:33:43] Juan Granados:
And that's the thing that's driving it? I mean, it's good. I think part of it will be compute, and this is, I guess, why I kind of played with the idea of agents or agentic AI, or usage as application; I was a bit confused. But I was like, if you create something that is all-consuming and all-powerful, which could then proliferate whatever agent AI you want to then use, without input necessarily from a human in a continuous way, then maybe that's one definition of approaching something like AGI. And I can see that probably happening more so from a company than from a lone individual, both by just the level of compute that you need, and the level of interactions, security, whatever you want to call it. I would see that more so. So, yeah, as an example: imagine if, and I'm talking here about OpenAI because most people know about it, but ChatGPT o3 is going to be coming out at the end of January.
Internally, there's been a bit of conversation that that's getting now pretty close to what they internally think is more of an AGI. And they might even start changing naming conventions to more AGI usage. I don't know how they define that, right? But let's just say that o3 comes out, and it can proliferate more versions or even smarter versions, or go down the path of: okay, I have an intent, you want to do business or this or that, I'll just go, and on its own it can go and create all of its agents, all of its creations, and then just keep on spreading from there.
At that point I go, if it's doing that, it's going to be doing that at magnitudes of every human being on Earth being able to do it. Okay. Now you've reached some escape velocity. No person, no decentralized group of people is gonna be reaching it, because it's just going magnitudes faster. Then what? What is the blocker? What is the restriction? How does any other entity go and beat that level, that just kind of consumes? I don't know. Is it too futuristic? I don't know.
[00:35:45] Kyrin Down:
Yeah, that's so far out. It kind of reminds me of when people talk about Bitcoin in 100 years' time. They're like, the security model, it's not going to work, you know, when the block reward is only one satoshi
[00:36:02] Juan Granados:
per block and things like this. I'm just. Quantum computing will break Bitcoin before that even happens.
[00:36:08] Kyrin Down:
Yeah. Yeah. Well, even excluding that, it's just... to me it's so dumb to think that you could know what the world will look like in 120 years' time. And this one particular problem, yeah, you're arguing over this one thing, and it would be, you know, the equivalent of... so where are we now, 2024? So in 1900, what was the existential threat that they were thinking about?
[00:36:34] Juan Granados:
Pre electricity?
[00:36:36] Kyrin Down:
Yeah, yeah. Well, yeah, it'd essentially be like, is electricity going to,
[00:36:44] Juan Granados:
I don't
[00:36:45] Kyrin Down:
know, fry the brains of insects or something, or like, are the street lights gonna burn the world? Or do something
[00:36:53] Juan Granados:
so dumb? I believe at the start of electricity, and someone's gonna have to go verify this, so just listen to it with that in mind, but at the very, very beginning there was no off switch or on switch. It was just, you had electricity plugged in, it was on. Okay. Non-stop. So it kind of would have been like, at that time you were like, fuck, in 120 years' time imagine the amount of energy we're gonna be using on this electricity and we can't turn it off. Yeah. Yeah. But it's changed so much. That's not even a problem anymore. Yeah, I think similarly, that will not be a problem at all at that time. There'll be so many other things that either become bigger problems or get solved.
[00:37:27] Kyrin Down:
Yeah, yeah. Yeah. That's kind of how I view AGI when you get too far down that rabbit hole. You just come up with all these crazy scenarios where it's just like, I don't know if any of this is going to be... I'm gonna make a bold prediction, though.
[00:37:42] Juan Granados:
We sit right now at the end of 2024. I'd be willing to put, like, a $1,000 bet that by mid-2026, that's going to come to fruition in terms of the level of accessible models that can go and just generate multiple things. Maybe it's not this, like, panacea, fucking amazing-looking AI that's taken over everything. But to your call-out of, oh hey, I want to do this: I think it will get to a level of capability that you could just ask it, like, hey, go and create this business, go and create this, and it will do it. The restriction is gonna be, I think, in the integration layer.
And even then, I'm gonna go, fuck, I think the closed models are gonna blow that shit out of the water versus open source decentralized models, in the integration, network power, business deals that will get done to be able to integrate whatever layers you want. Again, I could see banking systems integrating with Microsoft and other layers to use AI to do those sorts of exchanges. I don't see them using an open source model to do that, for various security reasons.
[00:38:54] Kyrin Down:
Alright, I'll give you my little things here. So will a lone genius be able to create a model or LLM or something that's so useful that someone could become a billionaire on their own? I don't think so. The bitter lesson, if you haven't looked it up, is a little tiny article by Richard Sutton published in 2019, and when I say article, it's literally just 2 pages. And it sums up almost all of AI history, in the sense that humans really want to try and use their knowledge to create a better way of accessing the data. So for example, with chess, having heuristics or rules put into the program to go, okay, you know, generally you want to control the center of the board, that's like a general rule. And so they'd be trying to put that into the AI.
This was when they were trying to beat the current world champion. And they were trying to put this into the model, saying, like, you know, yes, you can do this stuff on the side, but remember, you've got to control the center. And this has occurred all throughout the history of AI research. And it's always been the wrong approach. The right approach has been: spend your time getting access to compute and using that compute better. And that's just because of Moore's Law: get more transistors, get more power. And that's the thing that's actually worked. So in terms of beating Kasparov at chess, or Lee Sedol at Go, it was just being able to better use the compute that was coming online.
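A toy way to see that point: below is a tiny tic-tac-toe sketch (not from the episode, and nothing to do with any real chess or Go engine) where a hand-crafted "take the centre" heuristic plays against plain brute-force search that encodes no knowledge and just spends compute. The search side never loses:

```python
# Hand-crafted heuristic vs. plain search on tic-tac-toe. The heuristic encodes
# human "knowledge" (centre first, then corners); the search encodes nothing
# and just burns compute. Illustrative only.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in WINS:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def empties(b):
    return [i for i, v in enumerate(b) if v is None]

def heuristic_move(b, player):
    # "Control the centre" style rule of thumb: centre, then corners, then edges.
    for i in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if b[i] is None:
            return i

def negamax(b, player):
    # Exhaustive search with no domain knowledge at all.
    if winner(b):
        return -1, None          # the previous player just won
    if not empties(b):
        return 0, None           # draw
    best_score, best_move = -2, None
    for m in empties(b):
        b[m] = player
        score, _ = negamax(b, 'O' if player == 'X' else 'X')
        b[m] = None
        if -score > best_score:
            best_score, best_move = -score, m
    return best_score, best_move

def search_move(b, player):
    return negamax(b, player)[1]

def play(x_policy, o_policy):
    b, player = [None] * 9, 'X'
    while winner(b) is None and empties(b):
        b[(x_policy if player == 'X' else o_policy)(b, player)] = player
        player = 'O' if player == 'X' else 'X'
    return winner(b)

# Search moves first; it should win or draw, and can never lose to the heuristic.
print(play(search_move, heuristic_move))
```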
So this is one of those ones where it's like: the lone genius being able to do the right things to create the right thing? No, that's almost certainly not the right way to go. It's being able to use compute better. So then the question is...
[00:40:49] Juan Granados:
I'll quickly challenge the statement of using compute better. And I only challenge it because I got challenged about it as well. Because I was like, man, all of the big top players out there, all I'm hearing is everyone's going more compute, more, more, more. Now, my dad was the one who actually pointed out there's a couple of places in Korea and Japan that are coming out, the name escapes me right now, I think I might send it to you though, but it was about using the actual systems in a more efficient, effective way. So you don't actually have to be battling for compute, and you can do the same things with much less compute. And they were on par, almost beating in some aspects some of the current models, which made me be like, oh shit, maybe it isn't all about just needing this compute, maybe it's more effective usage of it. But before that, I was more in the other camp.
[00:41:42] Kyrin Down:
Maybe. I think you'd still use more compute regardless. It reminds me of the book More from Less, which was saying a lot of the problems that we have with regards to trash and recycling, and, you know, we can't continually grow forever, which I do agree with, you can't, there are physical limitations to growth. But you can grow by using more from less. Like, you still need to have gotten too far up the range to then go, okay, we need to dial it back a little bit by becoming more efficient. I think with the AI stuff it's like, now we need to do more, and then maybe eventually pull it back. Yeah. So then the question is, I guess, if you need compute, if that's the big thing:

will the closed source companies be able to access more compute? Or will it be the decentralized ones? And this is where I don't know. What do you think on that?
[00:42:44] Juan Granados:
What do we define compute by? The sheer number of,
[00:42:48] Kyrin Down:
like, data centers? And yeah, I don't know what compute is measured in terms of, is it energy or is it like hash rate? Yeah, I'm not in that data space. All I know is, yeah, I can see the amount of
[00:43:01] Juan Granados:
ridiculously large plants, like system plants, that are being built by the likes of, you know, Elon through xAI, Microsoft through their own setup. I believe Microsoft has now bought, I think it's like 4 times more processing chips, or compute, than everyone else combined. And it's just a ridiculous number of chips that they actually own to be able to do this. So part of me goes, I think some of the closed companies out there are going to have more compute than even
[00:43:31] Kyrin Down:
aggregated across a decentralized system. I don't know. I don't really know. Yeah, because this is where you go, if you look at Bitcoin mining, for example: okay, well, they've harnessed a ridiculous amount of, and when I say they, I mean the Bitcoin protocol has harnessed a ridiculous amount of energy. That energy usage is 1% of the world's energy or whatever it is. It's crazy. I tried looking this up because I heard this claim, and I can't verify it particularly, and I don't know how to measure these things as well. But the amount of GPU compute, when Ethereum was at its peak of mining power, was 50 times the amount needed to train GPT-4.
[00:44:20] Juan Granados:
So what's the say that again? So
[00:44:23] Kyrin Down:
the amount of GPU compute that the Ethereum network was using, yep, was 50 times what was needed to train, I think they said, GPT-4. So once again, I tried looking this up, I don't really know how to measure these things, what kind of units, so I wasn't able to verify this. But it's one of those ones where apparently at the moment a lot of data centers are operating at about 70% efficiency. Yep. So there certainly is a lot of compute just out there, where if you had the decentralized model, and it's like, oh, we can't access it for this thing, but, you know, we need full power usage, or it needs to be for these hours of the day or this certain time, you can see where there are proven use cases where decentralized
[00:45:23] Juan Granados:
models or protocols work in being able to access and harness these sorts of things. So this is what I can give you on that. And again, let's take this with a grain of salt as to how close it is. But some of the notes that I gathered here are that Ethereum mining was using millions of GPUs worldwide, with a combined power of about 1 petahash a second, giving an annual energy consumption of about 80 to 100 terawatt hours. Whereas with a GPT, like GPT-3 and 4, the training cost was around 1,207 megawatt hours, or about 0.0012 terawatt hours. So, you know, magnitudes smaller. Sure. Correct. One of the things it does talk about, though, is that yes, the usage of GPUs for Ethereum mining dwarfs the requirements for the training of GPT models, but the GPT model creation was magnitudes more efficient and meaningful, because the Ethereum mining was just pure repetitive cryptographic puzzle solving.
But you're right. Yeah. So it would be magnitudes of power more for peak Ethereum processing than it would have been to train any of those models. Yep. Yep.
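Taking those quoted figures at face value (they're rough, disputed, and energy is only a loose proxy for compute), the back-of-the-envelope ratio looks like this:

```python
# Rough sanity check of the numbers quoted above; illustrative only.
eth_mining_twh_per_year = 80            # low end of the quoted 80-100 TWh/year
gpt_training_mwh = 1207                 # quoted training energy for a GPT model
gpt_training_twh = gpt_training_mwh / 1_000_000   # 1 TWh = 1,000,000 MWh

ratio = eth_mining_twh_per_year / gpt_training_twh
print(f"peak Ethereum mining used roughly {ratio:,.0f}x the training energy")
# ~66,000x: several orders of magnitude, consistent with "magnitudes smaller"
```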
[00:46:39] Kyrin Down:
Last one: governments and regulations. This is the only other real big thing that I think will play a part in whether decentralized is useful or not: they're gonna do it, man. This is it. It doesn't feel like it at the moment; there's a bit of a shift happening, you know, the US is becoming a bit more tech friendly at the moment. But that'll come around again in a couple of years' time, where governments will start to see, oh shit, this stuff's really powerful, it can do these things, much like it did with the big tech companies. And then they start calling you into Congress and, like, blah, blah, blah. I think that's going to happen. They're going to meddle with it. Yeah. This is where, with the decentralized, you at the very least want, I think need is probably the better word, to have an alternative option out there.
So once again, will it be better than the closed source? Who knows? Probably not, at least not for the immediate future. But I certainly want to have access to these things. And this is where it's getting towards... I've heard some arguments about, like, the AI alignment problem. What if it gets conscious and it's not aligned with humans and kills us all, and these sorts of things? The paperclip problem, if you want to go there. Yep, the paperclip maximizer. It gets the task in its head that it needs to maximize paperclips and turns the whole world into them, including humans. The thing that I've been coming around to more is that the models, or sorry, the agents, probably need to be individualistic; it needs to be trained on me. And this will be the biggest use case.
So the use case of, okay, I want it to create a website for me: people, you know, type it in and it does it. It's almost like... I have the feeling the future will be something like, you'll have a model that you're just talking to, and then it knows what kind of website I want, and it'll go and find another agent that will do this thing. So it's like agents interacting with agents. That's kind of how I view the world going. And I'm probably going to have one that knows me, and knows when I mistype this word I actually meant this, or I'm being sarcastic right now, so don't actually do that thing. Or I'm drunk, definitely don't do that thing.
And that's where I'd go, yeah, I'd feel much more comfortable interacting with one that I know is open source, has my privacy, you know, locked-in guarantees,
[00:49:30] Juan Granados:
I guess, if you want to call it like that. The one problem I see with that model, or with that sort of system, and I can only come into it with some of the experience I've had throughout some of the integration work, is I think people miss the fact that to do actions to such an extended degree, you have to integrate or interact with other systems, existing systems, right? So it's not like... maybe in the long term this will change, but in the short to medium term, you're still having to integrate with other systems, other companies, to get what you want. Take the example of: I want you to go and buy $50,000 of whatever, or I want you to go and create a website.
The AI itself that they're using isn't going to do that. It then has to go and plug into somewhere else. Even to the point of what you said: let's say you're using an open source system and AI, and you say, hey, I'm going to create this sort of website, it needs to be this thing, do that. Fantastic. It'll go find the best website creator for you that it needs to do that, it'll know which one the cheapest is, but then, to connect with an API or integration from that agent, there's still personal information that you have to supply to that other system for it to go and create something. That layer, I think, is going to be an interesting one to solve, at least if you're using a more decentralized model, because there are certain keys and security inputs or identifiers that those other systems need which will not be provided by a decentralized model, or a more open one that maintains your privacy.
Now, if that were to happen, you could go one of 2 ways. Either companies will just not do that, or the models themselves, like the actual open source models, will have to use something security-encrypted, a la Apple: like when you pay with your phone and use Apple Pay, as opposed to your credit card, it actually sends through an Apple token, not the credit card number. That's how it protects that. So unless they're building that, which then means they have to have some level of your information to do it, then I don't know how that's going to work out, at least in the short to medium term. I see that being abundantly easier for a closed system that does have that sort of information, does have the level of privacy that they can keep, with some level of security that meets all of the ISO 27001, ISO 14001 type certifications that a bank might need to even be able to do that. So again, with using an AI agent, I might go, hey, I need to send $2,500 to... no, not even that. You say, build me this website, and as part of that, one of the steps is, okay, we have to pay, you have to put in your credit card details, we have to do some sort of transaction to pay for something. I think the open source model is going to have a real hard time without identifiers.
Whereas a closed system, with identifiers, with some level of privacy integration, will be able to pass all that information through an API or some other form of ingestion for them to actually take that action. Currently, that's a humongous blocker, and I think a lot of people probably miss that. In a world where you're still integrating with a lot of things, until it somehow becomes completely devoid of that and you don't even need it anymore, you're still gonna have to integrate with existing systems. To do that, you have to provide the information that they need to go and create whatever it is you need to create. You're going to need something in that place. When you go to Wix.com right now, if you go to Wix.com and you want to create a website, and you type in all the details and use AI to get there, you have to put in your personal information, you have to put in credit card payments, you have to do all these various things. They need a certain level of identifiers to go and do what they need to do. Without that, the process isn't going to begin. Or maybe other open source systems will have to come in and take its place. I think a lot of that will go away.
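For reference, the Apple Pay style tokenisation Juan describes can be sketched roughly like this; the vault, the enroll/charge calls and the single-use token are all invented for the illustration, not any real payment API:

```python
# A minimal sketch of payment tokenisation: the merchant (or an agent acting
# for you) only ever sees a one-time token, never the underlying card number.
import secrets

class TokenVault:
    """Holds the real card details and hands out single-use tokens."""
    def __init__(self):
        self._cards = {}    # user_id -> card number, kept private to the vault
        self._tokens = {}   # token -> user_id

    def enroll(self, user_id, card_number):
        self._cards[user_id] = card_number

    def issue_token(self, user_id):
        token = secrets.token_hex(16)
        self._tokens[token] = user_id
        return token

    def charge(self, token, amount):
        user_id = self._tokens.pop(token, None)   # single use: token dies here
        if user_id is None:
            raise ValueError("unknown or already-used token")
        card = self._cards[user_id]
        return f"charged ${amount:.2f} to card ending {card[-4:]}"

vault = TokenVault()
vault.enroll("kyrin", "4111111111111111")
token = vault.issue_token("kyrin")       # the agent requests a token...
print(vault.charge(token, 49.00))        # ...and the merchant only sees the token
```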
[00:53:13] Kyrin Down:
Yeah. The amount of data you need to provide for the most minor of things, I think that will eventually, and probably pretty quickly actually, go away. Yeah.
[00:53:27] Juan Granados:
Yeah, I don't know. In all of my time, I've not seen a trend away from that in all the various integrations and connections. For a lot of places, their existing systems are built with a lot of that information coming through. So you would either have to have a major transformation of basically all companies and how they do that, or have new companies come in that disrupt all these systems and go: you don't have to provide that, it can just be purely other types of information that's being shared to create whatever you need. Could go either way. Yeah, we'll see. Again, I think there's gonna be a lot of interesting conversations that come up in 2025 on AI. Honestly, I'm excited for o3, like, the amount that they're talking up internally about what o3 from OpenAI looks like by the end of January.
I've got my hands on o1 Pro. I sort of said this to you before, it takes a long time to process things, and man, it's revolutionarily different to the simple free model that you can find on OpenAI. If you've never tried it, I guess there's no way for you to... maybe go watch some videos on how it's used. It's the logical reasoning, and not only the logical reasoning, but as I said, you know, I thought of a, hey, I want to create this sort of thing for a particular business. It'll not only do that, but it inquires with questions and more things, and prompts you without me prompting it. So there is a lot of that internal conversation going on of, oh, but have you thought about this, and what about this legal regulation, and that? I've not seen any other model do that.
It's doing that well. The next model coming online is multiples of that. I don't even understand what that looks like. I don't know, again, it's very different: information usage versus integration, connections and stuff like that. Again, it just seems interesting. I feel like you, on the podcast, are going to be talking very specifically about one concept of AI, which is Morpheus, which will be interesting, right? That's a very different way to think about some of the stuff that's going on. I just think there'll be... I'm sorry, Mere Mortalites, we talked about crypto and NFTs quite a lot back in 2021, 2022, 2023, less so in 2023.
I feel like 2025 is going to be a lot more AI conversations. Not in an on-demand news way, but I just think it's gonna consume a lot more of these things we do. That's interesting stuff. For sure. Yep. Yep. I think we'll leave it there. Thank you very much. Are there any questions at all from the live chat? I saw Dimalix in the chat. GM GM, Quantum Hoskie.
[00:55:58] Kyrin Down:
Quantum Hoskie. Maybe a little bit more of that coming up.
[00:56:01] Juan Granados:
Jesus. When's Hoskie coming back to a peak? Is it March? March is gonna be a tough one. Yeah, fuck. It's going to zero or going to a dollar, no in between. Alright, we'll leave you there. Thank you very much for tuning in. Mere Mortalites, hope you're well. Juan out. Bye now.