A weekly live show covering all things Freedom Tech with Max, Q and Seth.
HELP GET SAMOURAI A PARDON
- SIGN THE PETITION ----> https://www.change.org/p/stand-up-for-freedom-pardon-the-innocent-coders-jailed-for-building-privacy-tools
- DONATE TO THE FAMILIES ----> https://www.givesendgo.com/billandkeonne
- SUPPORT ON SOCIAL MEDIA ---> https://billandkeonne.org/
TO DONATE TO ROMAN'S DEFENSE FUND: https://freeromanstorm.com/donate
VALUE FOR VALUE
Thanks for listening, you Ungovernable Misfits. We appreciate your continued support and hope you enjoy the shows.
You can support this episode using your time, talent or treasure.
TIME:
- create fountain clips for the show
- create a meetup
- help boost the signal on social media
TALENT:
- create ungovernable misfit inspired art, animation or music
- design or implement some software that can make the podcast better
- use whatever talents you have to make a contribution to the show!
TREASURE:
- BOOST IT OR STREAM SATS on the Podcasting 2.0 apps @ https://podcastapps.com
- DONATE via Monero @ https://xmrchat.com/ugmf
- BUY SOME STICKERS @ https://www.ungovernablemisfits.com/shop/
FOUNDATION
https://foundation.xyz/ungovernable
Foundation builds Bitcoin-centric tools that empower you to reclaim your digital sovereignty.
As a sovereign computing company, Foundation is the antithesis of today’s tech conglomerates. Returning to cypherpunk principles, they build open source technology that “can’t be evil”.
Thank you Foundation Devices for sponsoring the show!
Use code: Ungovernable for $10 off of your purchase
CAKE WALLET
https://cakewallet.com
Cake Wallet is an open-source, non-custodial wallet available on Android, iOS, macOS, and Linux.
Features:
- Built-in Exchange: Swap easily between Bitcoin and Monero.
- User-Friendly: Simple interface for all users.
Monero Users:
- Batch Transactions: Send multiple payments at once.
- Faster Syncing: Optimized syncing via specified restore heights
- Proxy Support: Enhance privacy with proxy node options.
Bitcoin Users:
- Coin Control: Manage your transactions effectively.
- Silent Payments: Static bitcoin addresses
- Batch Transactions: Streamline your payment process.
Thank you Cake Wallet for sponsoring the show!
MYNYMBOX
https://mynymbox.net
Your go-to for anonymous server hosting solutions, featuring: virtual private & dedicated servers, domain registration and DNS parking. We don't require any of your personal information, and you can purchase using Bitcoin, Lightning, Monero and many other cryptos.
Explore benefits such as No KYC, complete privacy & security, and human support.
Hello, and welcome back to Freedom Tech Friday. For those of you that are new here, allow me to briefly explain what this is all about and why the hell we are here. Freedom Tech Friday is a weekly live and interactive show hosted on the Ungovernable Misfits X, Nostr, YouTube, sometimes Rumble, and Twitch feeds. We go live for one hour every Friday at 9AM Eastern or 2PM UK time, but you can also catch up later on the podcast feed. On Freedom Tech Friday, we like to cover the latest news, trends, and anything relating to freedom technologies. That could be anything from Bitcoin or Monero to messengers, privacy tools, and everything in between.
Essentially, if there's a news item, tool, or topic that can help you take back some control in today's digital panopticon, we want to talk about it. My name's Q, and I'm head of customer experience at Foundation, where we build Bitcoin focused sovereignty tools. As always, I'm joined by my good friends: Max, the head honcho at the Ungovernable Misfits empire, and Seth, who is VP at Cake. And the observant among you will see that we also have a guest I'm gonna bring on shortly. As I mentioned, the show is live and interactive, and we're allowing you guys to steer us towards the tools and topics that you want covered. There's loads of ways in which you can get involved, all of which really help us spread awareness for the show. This includes commenting or asking questions in the live chat, submitting your topics or questions before the show on X or Nostr, boosting the show on Fountain or other Podcasting 2.0 apps, sending in tips or boosts, and sharing the show on Nostr or X, of course.
Top support for last week's show, where we covered threat modeling, comes from Late Stage Hodl, who sent in 10,001 sats and said: I thought a wanky watch was when Brit sat down to watch a little porn [laughing face]. In all seriousness, the past few years I've toned down how much I talk to my friends about Bitcoin. I've realized that the risk is too high, but for what reward? That they buy Bitcoin? I realized this when someone new moved into the neighborhood and one of my good friends at a cookout said, oh yeah, he's got a bunch of Bitcoin. Ouch. Firstly, I was like, dude, lower your voice. Then I very honestly said to him and this new guy, I just heard about it in 2020, and I certainly don't have much, which is completely true. But you never know. That little statement gets mentioned to a few more people and someone raids my house in the middle of the night for 10 BTC, which I don't have.
They might still threaten my family, only to find out that I have, like, a 100,000 sats in a hot wallet... Similarly, I wanna teach my kids about money, but I also have to teach them about privacy too, so they don't go telling their random PE coach or some kid on the bus who knows where we live, yeah, my dad's got a lot of Bitcoin. Sorry for the long boost. I sent extra sats. Well, thank you for your support, Late Stage Hodl, and, yeah, like we said, you do have to be careful. And if you don't have the context for what we're talking about, please go back and listen to last week's show. It's a really useful one. But, yeah, thanks for your support, Late Stage Hodl, and to everybody else that boosts.
So without further ado, Max and Seth, how's it going? I also wanna bring up Marks, the CEO of Maple AI, which is very topical for today's show. How's it going, gentlemen?
[00:03:08] Unknown:
Hey there. Doing great. Thanks for having me on.
[00:03:12] Unknown:
I am doing well. I'm excited for this one. I've spoken to Marks before. Fantastic guest. So I'm excited to chat with him more here about AI, and it's something I've been, I think, finally finding a niche for in my workflow. So it'll be a good conversation.
[00:03:26] Unknown:
Nice. Yeah. Me too. Looking forward to it, and, nice to have you on, Marks. We've talked about Maple actually quite a lot on this show. So, yeah, good to have your expertise.
[00:03:37] Unknown:
Yeah. I'm glad we could make it work. It was very last minute thanks to the nuances and intricacies of Nostr. But, yeah, glad we could make it work with quite literally ten minutes to spare. So, yeah, thanks for stopping by. Today's topic is us trying to discuss and answer the question: is AI gonna be a must have for your personal life and for your business life in 2026? But before we dive into that, I'm keen to set the scene with all three of you and learn what your daily AI usage looks like, again, from a personal perspective and maybe from a business perspective. You know, Max, how do you use it for the podcast? Seth, how does it go at Cake?
Obviously, Maple's a bit of a unique one for you, Marks, because you are quite literally an AI company providing AI services. So I'll leave you till last because I'm sure that'll be the most intricate answer. But, Max and Seth, like, what does your daily AI usage look like right now?
[00:04:34] Unknown:
So for me, I know we've talked about it a bit on this show and other places. I've been a little bit of an AI bear, so to speak. I haven't seen it be as useful as a lot of people have talked about it. And a lot of that has carried over into chatting with other team members at Cake. Obviously, I'm not a developer. I have a little bit of skill on that side, but far more in other places. So I am not keen on using it for dev stuff anyway. But one place where it's been really interesting talking to my team is, most of our developers really don't love using AI for coding. Obviously, there are some very simple use cases that we use it for, especially for creating tests. But in a lot of areas, they've just found that it actually slows down their workflow, especially when it comes to, like, actually getting useful PR review and that sort of thing. So we don't lean on it super heavily here.
All of our graphics and stuff are actually created in house. There maybe is a little bit of AI usage there, but I think the vast majority is all done by hand too. So we're not a very AI heavy team. Me, personally, I think I have found that the best use for me on the AI side is really doing research and deep research, kind of using it as an assistant around searching rather than something more complex. And it's been really good for me. I know we've talked about Kagi generally for that, and their AI products work well, but I have used Maple as well in the past. And that's a lot of the value that I've gotten out of it. It's helping me to do deeper dives in searching and do a lot of the, like, groundwork for me on that side of things, a little bit in other ways, but that's really the main use case for me.
[00:06:09] Unknown:
What about you, Max, from a a personal perspective? And and, obviously, how's it how do you get involved with with producing the podcast?
[00:06:16] Unknown:
Yeah. Well, I've sort of gone from not using it at all, maybe, I don't know, a year, year and a half ago, I'd never used it for anything at all, to now I use it every day in different ways. So it's part of the process for creating artwork. Usually it won't be all of it. I'll then chuck it across and start doing things manually as well, making adjustments and things, and so does Crown. But definitely as part of the artwork. I use it for cleaning up particularly bad audio. I use it for notes. I use it for transcripts. I've used it for legal battles I've had for the last six months, working as my little, basically, bitch slash assistant to put everything down and make sure that I can write good emails, especially being as dyslexic as I am. Like, getting my thoughts down and making them make sense in an email is quite difficult for me, so it's great for that.
General admin every day, emails for the fiat world stuff, a building project I'm working on at the moment at the house, just like running things past it, double checking measurements, trying to find the most efficient ways to do things, and between that and YouTube I can basically build anything.
[00:07:41] Unknown:
I use it all the time. Like, probably not in the best and most efficient way, but I lean on it pretty hard. Wow. That was way more than I thought you were gonna say. It sounds like it really has kind of wormed its way firmly into your daily life. That was surprising to me, actually. And for me, before I hand it over to you, Marks, like, I was a bit like Seth, a bit of an AI laggard and a bit of a pessimist, but a similar sort of story, really. It was kind of being used more heavily by my teammates over at Foundation, both in and out of work. I know the devs were using things like Claude to help them, you know, check PRs and things like that, and, you know, war game out sort of different changes before they happen.
So by osmosis, I guess, seeing them use it day to day and their productivity increase, I slowly started to play around a little bit. And I'd say I'm probably still quite a basic AI user. It's become my kind of default search, in and around, like, the Kagi ecosystem. Again, I use Maple quite a bit as well on my phone as, like, I guess, a glorified research assistant and search engine. That would be my primary usage. But more recently, I've also started using the assistant within VS Code for the Foundation documentation website. What that's enabled me to do is make visual changes to that website that me, as a non dev, would never have had any clue how to do without spending quite some time on something like Stack Exchange to figure it out. So as a bit of a force multiplier for somebody who's a little bit more on the basic end and definitely not a developer, I can definitely start to see the glaringly obvious productivity gains that can be had. So for me, it's something that seems to be increasing, and it's something that I'm kind of keen to learn more about, but I'd definitely classify myself as a laggard.
But, Marks, aside from literally being an AI company, which we're gonna get on to shortly, what does your kind of personal, or, sorry, daily AI usage look like in day to day life?
[00:09:56] Unknown:
Sure. Yeah. I'm actually pretty similar to a lot of you. I was a bit skeptical on it a couple years ago. In my previous job, I was actually building a lot of machine learning AI stuff, but I viewed it more as a tool specifically for the project we were on. And so when generative AI came out and these LLMs, I didn't pick it up right away because I was like, I don't know, this feels weird. But the more I dove in, the more it's really just become part of my daily life. When I was doing my weekly podcast, Freedom Tech Weekends, which I've put on pause for a little bit, I was heavily using it to do things like transcripts, write up summaries, you know, YouTube descriptions.
I had this whole prompt set up where I would use Whisper to make a transcript of my audio, I would dump it into this prompt, and it would basically generate all the stuff I need to just copy and paste into YouTube and Twitter and all these other places. So that was super helpful. I also have one prompt that is just my proofreader. So when I'm gonna send out an important email or post something that's important, I just quickly drop it in there and it lets me know if there are any errors or things that I should clean up. And then, aside from general, hey, I've got a question about something, I wanna search for something, I definitely use it for that. But I also use it to help me, like, detect fraud. We all get these emails or random text messages or something. And I'm running a company now, and I'll have people reach out and say, hey, you know, I want to create a partnership with you to post online, I'm an influencer, yada yada yada. And I'll just take that and drop it in and be like, alright, here's what I received. Walk me through how they're trying to, like, screw me over, or is this a good deal?
And it's awesome for that kind of stuff. So, yeah, I really just use it as, like, a backstop for a lot of different things.
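For anyone curious what that transcript-to-show-notes flow can look like in practice, here's a minimal sketch. It assumes the open source openai-whisper package and an OpenAI-compatible chat API; the file name, model names, and prompt wording are illustrative placeholders, not Marks's actual setup.

```python
# Minimal sketch of a transcript-to-show-notes workflow (illustrative only).
import whisper                 # open source openai-whisper package
from openai import OpenAI      # works with any OpenAI-compatible endpoint

# 1. Transcribe the episode audio locally with Whisper.
model = whisper.load_model("base")
transcript = model.transcribe("episode.mp3")["text"]

# 2. Reuse one prompt to turn the transcript into copy-and-paste artifacts.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = (
    "You are a podcast assistant. From the transcript below, produce: "
    "1) a three-sentence summary, 2) a YouTube description, 3) a short post for X.\n\n"
)
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT + transcript}],
)
print(reply.choices[0].message.content)
```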
[00:11:48] Unknown:
Interesting. Yeah. I know some of my colleagues at Foundation have used it for similar kind of legal stuff as well, so that's interesting to hear. Before we hop back onto the main topic, it would be remiss not to have you introduce and give the Maple overview for those that have been living in a cave and have never heard of what Maple is and why people might wanna explore the service. So, yeah, do you wanna give us the TLDR on that?
[00:12:16] Unknown:
Yeah. Maple is the privacy version of ChatGPT. We're trying to create the same functionality, really powerful AI, something that you can use as a strong tool in your life. But we don't do any of the data tracking. We don't do any of the advertising. There's no training off of your data to create new models. We're following the old school subscription model, similar to Proton for email, where you just pay for it and you get a service. And we do our best using end to end encryption and secure enclaves. We put you into a private room with an AI, and then nobody else is involved in the process at that point.
So it's very similar to if you ran AI on your local computer. We try to create that same level of privacy. Now, obviously, there are trade offs because we're running in the cloud, and so we're using secure enclaves. So it's not gonna be 100% the same privacy that you would have if you were on a local device with the Internet turned off, but it's really as close as you can get to that level of privacy. So everything is encrypted on your device first before it goes to our servers. And then on our servers, the confidential computing is what takes over, and it's the only thing that's able to decrypt it, talk to the AI, and then it re-encrypts it and sends it back to your device.
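As a rough illustration of that "encrypted on your device first" idea, here's a tiny sketch using authenticated encryption (AES-GCM). It is not Maple's actual protocol, which also involves enclave attestation and key exchange; it only shows that the plaintext prompt never leaves the device unencrypted.

```python
# Rough sketch of client-side encryption before a prompt leaves the device.
# NOT Maple's actual protocol; the session key setup with the enclave is
# assumed to have already happened (e.g. via an attested key exchange).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

session_key = AESGCM.generate_key(bit_length=256)  # stand-in for an enclave-negotiated key
aead = AESGCM(session_key)

prompt = b"Summarise my rental agreement and list my rights."
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, prompt, None)  # this is what travels to the server

# Only code holding session_key (conceptually, the secure enclave) can decrypt:
assert aead.decrypt(nonce, ciphertext, None) == prompt
```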
[00:13:39] Unknown:
That's very cool. So stating the obvious here, but, like, for those that don't know what, I guess, secure compute or enclaves or anything like that means: essentially, if the server that you guys are pinging for the AI service is compromised, everything's encrypted, essentially. So there's no risk of data loss even in the kind of worst case scenario.
[00:14:02] Unknown:
Yeah. There are two great examples I'd bring up here. So one that a lot of people felt was when Ledger, the hardware wallet company, got hacked years ago and a whole bunch of email and mailing addresses were leaked on the Internet. So people who had purchased Bitcoin wallets, you know, to use for their crypto, suddenly their addresses were out there for people to say, hey, these are all the people who own Bitcoin. Had they been using some kind of secure encrypted process, like confidential computing, they could have done a better job of protecting that data. They also could have scrubbed it later on; they don't always need to keep it. The other one is that the New York Times is in a big lawsuit with ChatGPT right now. And ChatGPT has had certain data retention policies where they say, we will only keep data for thirty days or ninety days or whatever.
Even with the temporary chats, people go into ChatGPT and they'll say, do a temporary chat, I want this to disappear when I'm done. ChatGPT actually hangs on to those, quote unquote, temporary chats for thirty days. Well, this lawsuit is compelling ChatGPT and OpenAI to hand over 20,000,000 chat records as part of the process. And so now all these chats that people thought were just, you know, private between them and the ChatGPT AI are being handed over to another entity. And they've also told them that they have to change their retention policies and start retaining data indefinitely.
So with these other services, if they aren't using encrypted chat, if they aren't using confidential computing and secure enclaves correctly, then your data is just sitting there in a giant box, you know, for anybody to look at. Employees of the company, a judge in a trial could reach in there, governments, hackers. It's just sitting there for people to get.
[00:15:54] Unknown:
Yeah. I remember hearing about those stories. Quite horrifying, to be honest, especially if you weren't paying attention from a privacy perspective. While we're talking specifically about Maple, we've got a couple of questions from the chat. I've just brought one up on screen. It's from Patriotic, and they said: some people use one-use email addresses. Wouldn't that person get additional privacy if Maple used an account based system similar to Mullvad? Maybe while you're answering that, you could outline, I guess, all of the different login options that Maple offers, or account creation options, should I say.
[00:16:25] Unknown:
Yeah. And we actually just added something similar to what they're asking for with Mullvad. We've tried to make privacy very easy for people to use. I mean, you guys on here have used tons of privacy tools. You know how it can be difficult at times. So we're trying to make it as easy as possible to capture the most amount of people possible and bring them into the privacy space. And so you can log in with email. You can log in with, like, Google or Apple or GitHub. But then we just added a totally anonymous login, which is similar to Mullvad's. So it creates a unique account ID just for you, and you have to write it down. If you lose it, like, you're kinda out of luck, so don't lose your account ID. And then you also create your own password; we don't set it for you. And then you can only pay with Bitcoin. We don't accept credit card payments for that anonymous account.
For all the other accounts, you can pay with credit card if you want to. But this anonymous one is completely, you know, paying with Bitcoin and a completely unique account ID that we don't store.
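Purely as an illustration of that Mullvad-style idea, a client can generate a random account ID locally so the service never learns anything about who you are. This is just the general shape of the approach, not Maple's actual implementation.

```python
# Illustrative only: a Mullvad-style anonymous account ID generated client-side.
# Not Maple's actual implementation.
import secrets

def new_anonymous_account_id() -> str:
    # 16 random digits, grouped for readability. Write it down: it is the
    # only way back into the account.
    digits = "".join(secrets.choice("0123456789") for _ in range(16))
    return " ".join(digits[i:i + 4] for i in range(0, 16, 4))

print("Your account ID:", new_anonymous_account_id())
```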
[00:17:25] Unknown:
Awesome. Yeah, I was very excited to see that one come out recently. While we're talking about payments, I know of two kinds of payment structures for private AI: it's either pay per request, or, I believe, what you guys offer, a monthly one. So are you just monthly or annual subscription based, or is there also a pay per request option as well? I may be a little behind on whatever decision you guys have made there.
[00:17:55] Unknown:
Yeah. There are lots of great services out there. I know somebody asked about NanoGPT. There's another one called PPQ. They let you pay per query with GPT. Those are great. I don't wanna put down anybody else's tools because this world is so large. We all have different tools that we need for different moments. So those are great services. We went the subscription route because we want this to be a tool that's just there for you at any moment when you need it. So it's a monthly subscription, or you can pay annually. And then when you are just in a moment and you need that AI that you can trust end to end, it's just right there waiting for you. You don't have to worry about, like, wallet balances, or do I have enough in this wallet over here, or did I top up. It's just sitting there waiting for you.
[00:18:40] Unknown:
How does it work, on the payment side with Bitcoin? Do you have Bitcoin, Lightning, and other cryptocurrencies or is it just on chain?
[00:18:50] Unknown:
We offer on chain, Lightning, and then we have people who also pay us with ecash over Lightning. And, like, you can do Ark and other Bitcoin layer twos. And that's really as far as we've gone. We haven't gone outside of the Bitcoin space for accepting payments, and we have a lot of people who ask to pay in Monero. Yeah, which I was about to ask. Yeah, which is great. But we're really trying to build up this private AI that is, not compatible, but, like, on the same level as ChatGPT. So we simply just don't have time right now to try and get other currencies involved in the mix, and so we stopped at Bitcoin.
We're open in the future to looking at other things, but that's just where we're at right now.
[00:19:34] Unknown:
I mean, Lightning's pretty good. But we always say on the show, like, if we wanna pay privately, we use Monero. And if you wanna pay slightly less privately, it'll be Lightning. And then on chain is a little bit more troublesome. So if you have Lightning, that's a pretty good start.
[00:19:51] Unknown:
Yeah. I'm curious what your guys' thoughts are on ecash, and that's a totally different topic. But, like, ecash on Lightning and Bitcoin, like, do you buy into the privacy aspect of that?
[00:20:04] Unknown:
This is where we let Seth come in and get
[00:20:06] Unknown:
I was gonna see if anyone else would chime in first, so I just laughed. I knew you were gonna come in. I mean, for me, it all boils down to: I don't wanna give up custody. I don't think it's necessary to get strong privacy, especially when something like Monero exists. If you want better than Bitcoin privacy, you can use Monero. So I normally lean away from it. I mean, my hope with Spark and Ark specifically is that we can do more privacy preserving, self custodial Lightning in a way that actually feels seamless from a user experience perspective.
And so there's just not a need for ecash. I think it's kind of a dead end to me, because there are just so many issues with who's actually gonna run the mints, who's gonna handle that regulatory hurdle, for lack of a better term, much less users getting rugged and the bad UX that comes with that. So I definitely much prefer staying away from it. I'd be curious, if y'all are running, like, BTCPay, using Monero is very straightforward. But if you're running a different system, obviously, it can be a little bit more complex. But I know just from talking to basically every other privacy preserving service out there that has integrated Monero or wants to, they just get massive demand for Monero usage. It's a good fit for your customers. So I'd love to see that, but I also can help out with that if you ever have any questions down the line.
[00:21:22] Unknown:
Okay. Yeah. I'll have to hit you up offline and figure out what that would take. Yeah. No. And that's a fair assessment of ecash. I can go along with that. I like it as a technology. I definitely have used it many times. So I'm curious to see where these all head down the line. For sure. For sure.
[00:21:46] Unknown:
Awesome. Alright. I wanna step into the conversation around, do we think, you know, we need AI to kind of keep up, or to not fall behind, as we look forward into 2026? It feels like AI is, like, literally everywhere these days. Your phone, your work software, your watch, your glasses, like, sometimes even your fridge might want a little chat with you, which is all kinds of dystopian. But anyway, we digress. From a business perspective, guys, we're all involved in some way, shape, or form here in businesses that are, you know, adjacent to AI and technology based. Do you guys think that it is a requirement essentially these days to use AI in some way, shape, or form, so that you either can get ahead or, more importantly, don't fall behind your competitors?
[00:22:45] Unknown:
I think, personally, it's a double edged sword. Like, it depends on the person, and it depends on the business. Like, if you just throw a bit of AI on there, you quite often end up with just AI slop. And, definitely, like, with media stuff, the quality could go down massively if you don't use it in the right way or you try and lean on it too much. So I think it really comes down to, like, what you're trying to do and who you are. So someone like you or Seth are probably perfectly capable of writing emails and, like, reading through documents and going through notes. And someone like me, I'm a fucking retard and can't do it, and it would take me a whole day to do what you could do in ten minutes.
So I use AI where I have massive flaws and I need help. But I think, yeah, I do see some people leaning too hard on it, and I think that it could ruin a business just as much as it could make one.
[00:23:46] Unknown:
Yeah. I mean, when I look at AI, I think, like most other things, it can be good in limited and wise usage. I think there's a tendency to want to go all or nothing with it. And I think it's really apparent when companies go all in with AI, especially on the marketing and brand side. It's just hideous. Right. Yeah. It looks really bad. It immediately turns off real people. It's really noticeable, more than people think it is. It just does not pan out. But I think going the opposite direction and not using it at all is also a mistake. And I think, really, for me, it comes down to, like I talked about a little bit earlier, really finding what are the parts of your workflow where you're just spending a lot of time repeating the same task over and over. Or where's the part of your workflow where you're just doing some sort of tedious thing, like, oh, I have to do this spreadsheet every week, or I need to calculate these numbers, or I need to find out comparisons between these two things. Like, instead of doing all of that manually, those kinds of, like, repetitive, tedious tasks where you have the expertise so that you can judge, is the AI bullshitting me or is it actually telling the truth?
And it's something that would just take a lot of time for you. I think it can be really, really good. But I think a lot of people run into issues where either they lean on it way too heavily on the marketing side of things, or they trust it way too much, and they end up causing a lot of problems, because AI is really built to bullshit you. There's a really good article that I just linked in the X chat where it's, it's very good at thinking it's right and not really caring about whether it is or not, and just wanting you to think it's right. And you can get into a lot of problems if you don't actually know the truth of the end result of what you're asking it for. That could be on the dev side. It was gaslighting me yesterday, Seth. Yeah. Probably. It was literally telling me that 10 plus three wasn't 13. Like, we had this back and forth. I was trying to do measurements, and it was saying how much glass had to go into a rabbet. And I was like, yeah, they said that it has to go in by ten, and then there has to be three
[00:26:01] Unknown:
mil all the way around for silicone. I was like, okay, so total 13. It's like, no, total cutout should be 10. And we went back and forth for half an hour until I just closed my laptop and was like, you're gaslighting me, I can't do this anymore. So, yeah, you can't trust it. But as long as you kind of take everything with a pinch of salt and double check, it can be massively helpful.
[00:26:23] Unknown:
Yeah. Yeah. Absolutely. And I think that's where, like, it really is the most value add to someone who's already good at what they're doing and can just use it to enhance their abilities. Not replace what they do, but enhance their abilities, and where they can judge whether this output is useful or not and really make the most of it is where it can be really useful. Yeah.
[00:26:46] Unknown:
Yeah. I definitely saw that the other day. I was driving and saw a billboard, a big advertisement on the side of the road, and you could tell they used ChatGPT to make it. And like you said, Seth, it just sticks out like a sore thumb. It doesn't look great. So we're not there yet for that kind of stuff. But I just wanna add on: definitely verify everything that you use it for. Like, if it gives you a fact, go double check that fact. And it's really opened up my eyes to how little fact checking I used to do prior to AI. Like, I would just read a blog and be like, oh, this person is super knowledgeable there on the Internet, and they wrote a blog post. And then I would just take it and move on. First during COVID and all the stuff that went on there, but now with AI, I've really tuned myself to, like, double check things before I pass them along and make sure that the data is correct. And that's not just an AI thing. That's a general life thing, but you definitely need to do it with AI.
[00:27:47] Unknown:
Yeah. Absolutely. Something I've had to learn as my usage has kind of increased as well. But, Max, were you gonna come in and say something then?
[00:27:54] Unknown:
No. I was just gonna ask, how, if at all, does it change things at Foundation?
[00:28:01] Unknown:
Yeah. For me personally, it's been a force for good, you know, in the ways that I outlined earlier. Also, I've seen some of my also nontechnical colleagues get their hands dirty with more technical stuff than they would otherwise have been able to. Not always with a 100% hit rate, but, like, maybe an example would be a PM using Claude or something similar, asking it to look over a PR to, say, the Envoy mobile app or something like that, and break down the changes that are made, so that when we do all the assurance testing we know specifically what to look for based on the changes that have been made. So things like that, where ordinarily we'd just be reliant on the developers to be like, okay, I've changed this, so go and test this.
Now it's a case of us being a lot more autonomous in that kind of arena, so that we can go and do things faster, more accurately, and get the devs to go and do more dev stuff. Portland Hodl, I've just seen your comment. You have a ton of thoughts about this. If you have any specific questions or if you wanna hop up, feel free to jump in. Just drop me a DM and I can send you a link over. A couple of comments that you guys made there around, I guess I would summarize it as, like, AI literacy, in terms of being able to spot bullshit, like you, Max, knowing enough to know that 10 plus three does actually equal 13 despite what you're being told.
[00:29:31] Unknown:
How, like, how do people get over that? Like, do you think this is gonna be something that they teach in schools? Do you think it's gonna be something that, you know, it's upon people to kind of teach themselves, to interact with these tools and be wary of this all encompassing power that sits in their phone or on their computer? Like, do you have any tips on being able to spot bullshit when you see it? Especially, you know, if I go back to the example where we're using it to look at PRs, like, we're not developers, so we are kind of reliant on that.
You know, what are your thoughts around that, and sort of being able to spot bullshit, because sometimes the AI might actually not know as much as it thinks it does.
[00:30:15] Unknown:
I mean, real quick, my thoughts on it are that it's not really an AI problem. It's a general education problem. Like, you need to actually know how to discern truth from fiction yourself in whatever area you're looking at. I mean, that could be basic arithmetic. That could be something as advanced as, oh, this PR is actually really bad and will cause this problem that the AI just doesn't see. It depends on what specifically you're trying to do with it, but it really just comes down to, like, you have to actually know things for yourself, and you can't just outsource thinking and knowledge to a third party. And I think that's my biggest, like, meta fear, I guess, with AI: that a lot of people are just starting to outsource more and more of their thinking and their contemplation to AI and going, like, I don't wanna think about this, let me just drop the prompt, and then I'll come back and AI will think about it for me. And then I'll just read what it says, and that'll be what I think about this thing. And that is what's really terrifying, because if you lean more and more onto that, your kind of truth detection muscle in your brain is gonna atrophy and atrophy and atrophy. And you'll have to rely on an outside source because you're just not contemplating anymore.
And that's maybe outside of the question, but more of a kind of fear of mine overall: you have to just have basic, like, common sense, like street smarts, to be able to discern whether this is true or not. And you also have to know that AI will bullshit you. And I think that's the main thing that maybe people need to be taught: like, AI is not right a lot of the time, and you need to assume it's not right unless you know that what you're looking at seems correct.
[00:31:56] Unknown:
Marks, you'll be able to answer this question. I mean, obviously, it has to pull data from somewhere. I'm sure I've seen that, like, GPT pulls a lot from, like, Reddit and these types of places. Obviously, you're gonna have slop in and slop out. So is there a way to refine, for certain use cases, to have more of a specifically built AI model for different tasks, where it pulls data from people who actually know what they're talking about, not some div on Reddit?
[00:32:31] Unknown:
Yeah. There are ways to do that. You're right. I mean, if the Internet was written by a bunch of retards, then all the AIs are gonna be retarded. So, mhmm, we definitely have to keep that in mind, as well as political biases. Right? If the media lean one direction politically more than the other, and they are viewed as credible, then when these AIs are trained on it, they train on, quote unquote, the credible sources. And so they'll have political biases that way as well. So the heavier lift is to train a completely new model, feed it all the data, tag it all. That's incredibly expensive, it's time consuming, and it's very difficult to do that and have it come out well. The other way you can do it is you can build these vector databases, these RAGs, or these other things you kinda bolt on to the side, and you give it all the sources that you find to be incredibly credible or factual, and you can really tailor it to what you want. And we see this a lot. People will take GPT and then they'll build their own RAG and attach it on and say, okay, this is the Bitcoin bot, or, whether people like religion or not, this is the Christianity bot. Right? Like, it knows more about the specific topics. So when you talk to it, it's been fine tuned for that.
That lift is much smaller than trying to train an entirely new AI model. So you can go down that road. It's still a newer technology, still being developed, and it's not 100% yet.
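To make that "bolt a vector database onto the side" idea concrete, here's a minimal retrieval-augmented generation (RAG) sketch: embed a handful of trusted documents, retrieve the ones most similar to a question, and hand only those to whichever model you use. The embedding model and example documents are stand-ins, not any particular product's setup.

```python
# Minimal RAG sketch; embedding model, documents, and downstream chat model are stand-ins.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

trusted_docs = [
    "Bitcoin has a fixed supply of 21 million coins.",
    "A PSBT is a partially signed Bitcoin transaction (BIP 174).",
    "Silent payments give you a reusable static address (BIP 352).",
]
doc_vectors = embedder.encode(trusted_docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    # Cosine similarity against the trusted corpus, highest scores first.
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    best = np.argsort(scores)[::-1][:k]
    return [trusted_docs[i] for i in best]

question = "What is a PSBT?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then go to whichever chat model you trust.
print(prompt)
```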
[00:34:10] Unknown:
We got Portland Hodl in the house, who hopped into the Twitter chat and said he's got some hot takes and some thoughts here. So, yeah, Portland Hodl, how's it going?
[00:34:34] Unknown:
Do you want me just to go into my hot take? Let's hear it. I think I wasted six months of my life with it. And what I mean by that is, it started off pretty slow in terms of, like, its integration into my workflow. I just kept going, like, okay, tab complete. Well, I don't use tab complete like Cursor, but the meme is true. Just like, okay, generate this, fix this problem, keep going down the rabbit hole. And what I found was, like, over time, I didn't enjoy my craft, I didn't learn a lot, and I became far worse at problem solving during that time. And that, like, cognitive atrophy took roughly forty five days to kind of resolve, because I had to basically, like, reprogram myself to, instead of just adding more code to fix things, actually architect real solutions.
And it could just be a skill issue at the end of the day. Like, maybe I'm just not the prompter that knows how to, like, really architect large project based solutions. But, also, I came across a problem the other day that specifically involved blinding PSBTs. And by actually just, like, reading docs and thinking through different keywords and how they integrate, like how I can use them as tools, I was able to, instead of using AI, which usually ends up adding more lines to my code base, remove a bunch of code and add a feature. And that's when I really started to feel like I'm getting back into the flow of, like, really being a software engineer instead of just kind of, like, bolting together all these different pieces, which seem to always have, like, some amount of missing context throughout my project. And I guess what I'd say is, like, yeah,
[00:36:16] Unknown:
I atrophied cognitively because of it, pretty hard. And, yeah, I'm back on. Like, I feel, like, full throttle again, but it did take a little bit to get back into it. Yeah. There's definitely a lot to be said around, you know, the old saying that your mind is also a muscle that needs kind of exercise and needs to be flexed every now and again. I guess it would be fair to assume that most of your AI usage is gonna be, like, things like Claude and code based stuff.
[00:36:43] Unknown:
Yeah. So Claude specifically, like, Claude Code, Claude Opus through Cline, like, those kinds of tools I started using. And, yeah, so basically, I would be like, okay, I have this feature, or, like, a lot of times, I need to interact with, like, a specific Bitcoin script. And, I mean, maybe it does really well for other types of work. I think FE stuff, front end, it works incredibly well to just pump volume. But specifically with, like, Bitcoin script, etcetera, it just seems to run into a lot of problems, with, like, okay, it doesn't understand how to put these things together to make it work cohesively with, like, Bitcoin consensus, or, especially with Rust, what traits are really available to it. Like, I can read a doc and just go, okay, this trait has these methods, I can figure out what I need to use to make this work. And a lot of times, I can find a more optimal solution.
But because of my human nature, it becomes very easy to just go, like, okay, can you fix this? And it'll spin its wheels for, like, five, ten minutes. You know? Yeah, I've definitely been enjoying my craft a lot more, and I think there are a lot of shortcomings with AI currently beyond, I would say, in my opinion, things that are small, very well scoped projects. And what I mean by that is, like, hey, I need, like, a Telegram bot. That's a pretty small little box you can put it in. The context window can read everything. It can understand probably most of the project. But as these code bases keep expanding for some projects, it tips over. And my best example of that would be, like, just try using, like, Claude or these other AI tools to help you build on Bitcoin Core.
It does a pretty awful job overall.
[00:38:31] Unknown:
That's a good insight. I am curious. When you said, you know, you started to kind of flex your muscle again, did you go, like, completely the other way and, like, stop using it altogether, or was it just more measured usage?
[00:38:44] Unknown:
100% cold turkey. The whole thing. No. Like, there were probably, like, a couple instances where I just straight up could not Google it, and Google's a terrible tool. But, like, what I mean by that is, like, okay, Google, take me to the docs of this specific library, and then read through the library. If I could not find that solution quickly, I would use AI. But a lot of times, it would end up just going back to me having to read the docs some more for that specific problem. And also, I think there's, like, a big element to creating good solutions.
There's, like, a, okay, I'm gonna try to put this together very simply, because I've been thinking about this a lot, like deep reflection stuff. The thing that AI doesn't do well is, like, I, as a developer, when I write code, I feel pain. Right? Like, I have to sit there and I gotta type something. So if I start creating, like, a subpar solution, it usually shows itself as, like, hey, I gotta keep writing more code. It keeps stacking. It becomes very expensive for me as a time measurement to continue to write this code in a very poor way, or, like, I can see my scaffolding start to tip over. Like, I'm not templating enough, or I'm not using traits well, I'm not using the dyn keyword. It all just kinda, like, feels bad. But with AI specifically, it doesn't feel any pain. And I also don't feel any pain using it.
So I could just keep stacking more and more code and just say bolt this together, and it'll just add more code. It'll keep bolting it together. And, eventually, it just kinda tips over on itself because it can't see the full context of the project.
[00:40:13] Unknown:
So those are my thoughts on that. I think that speaks to some of the comments that the guys have made around, like, you know, you've gotta have a good level of quality of input to get, you know, an extremely high level of quality output as well. We do have a question from the Nostr chat, from Observer, that asks, and this is open to anybody, by the way, feel free to take it: how can I chain tasks for different AI agents? I'm asking as a laggard. So I don't even know what chaining tasks means. I'm hoping one of you guys can hop in. I don't have an answer to that question.
[00:40:53] Unknown:
They say chaining or training?
[00:40:55] Unknown:
Chain. How to chain tasks?
[00:40:58] Unknown:
I think the context would be, like, how do I, like, take something from Claude and then have, like, maybe Gemini or something else? Like, you can kinda pass these things along. I don't have the answer to that.
[00:41:09] Unknown:
There's a service called n8n. n8n is a good one that can chain things together for you. You can kinda bolt things together. Another awesome one, if we're talking about, like, running it yourself, is Goose. So Goose is made by Block. It's open source. And it is this agent app for you, and it has all sorts of recipes and plugins you can use. And so it'll talk to the LLMs, but it'll also grab data from your local file system if you need, and go out and search the web. And it can chain things together.
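For a feel of what "chaining" means at its simplest, here's a hand-rolled sketch where the output of one model call becomes the input of the next. Tools like n8n or Goose wire this up with far more integrations; the model names below are placeholders, and the client assumes any OpenAI-compatible endpoint.

```python
# Hand-rolled task chaining: each step feeds the next. Illustrative only.
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint can be used via base_url

def ask(model: str, prompt: str) -> str:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Step 1: one "agent" drafts an outline.
outline = ask("model-a", "Outline a short blog post on self-custody basics.")
# Step 2: a second "agent" (possibly a different provider) expands it.
draft = ask("model-b", f"Write the full post from this outline:\n{outline}")
# Step 3: a third pass proofreads the result.
final = ask("model-a", f"Proofread and tighten this post:\n{draft}")
print(final)
```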
[00:41:39] Unknown:
So I would say go check out Goose as well if you wanna look at something. And, Marks, basic question from another laggard over here: why would I wanna do that? Can you give us a scenario as to why that might be useful?
[00:41:51] Unknown:
Yeah. Agents, and this is all the promise of AI, right? So agents are something that can kinda, like, work while you're gone, and they can do something for you. So you can imagine, if you hired another person into your company, like, you're hiring an agent to come do something for you. So you can have it do all sorts of tasks, the menial tasks that you would do normally, and now you can set it off to go do that for you while you're working on other things.
[00:42:18] Unknown:
That makes sense. Okay. Cool. Let us know if you have any questions in the chat, guys. I do have one in the meantime, and, again, this is open to anybody, feel free to jump in. It's around tool selection. Again, as a more basic AI user, like, I see Maple, ChatGPT, Claude, Jai, Apple Intelligence, etcetera, etcetera. Like, there's millions of them out there. How are you guys making educated decisions on which tool to use for which task? Like, you know, what's driving those decisions for you? Say, oh, right, for this task, I know that Claude will be the best. Like, how are you making that choice?
[00:43:01] Unknown:
I'll let the other experts chime in, because I'm a peasant and I normally just go with what the recommended models are. I did a lot of, like, trying tons of different models, and it usually ended up just costing me more time than anything else, but I'm also not using it for super advanced stuff. So I'm really curious to hear more from Marks and Portland Hodl. Like, how do you choose the model?
[00:43:22] Unknown:
So this is just my experience. Most models, for me, have not been very productive in a professional setting. And, specifically, the ones that seem to do the best, hands down, have been Claude Sonnet and Opus. There's a cost trade off between the two. Opus is much more expensive, but with its results, you will typically save time because it'll actually solve the problem. Sonnet, a lot of times, will kinda spiral or circle the drain as it descends this gradient of trying to figure out what you're looking for, as it, like, turns this noise into code based on your prompts. But Gemini has some good, like, planning stuff, so you can basically go, like, hey, Gemini, here's my code base, analyze it and tell me how I'm gonna fix this thing I want to do, or implement this. But a lot of times, for the actual execution of creating the code, Claude has done by far the best. I have not seen a replacement. That is my opinion.
[00:44:19] Unknown:
Yeah. Claude really is the gold standard that people use. We use it for programming Maple itself, in addition to using Maple. There is a question here: can Maple be used for programming, for coding? So I'll kinda wrap that into my answer here. But, yeah, Claude, Sonnet, and Opus are awesome. The interesting thing about that company is they are subsidizing users' usage as well. Like, you pay your monthly subscription, but you can, like, go well above that in credits that you use, and they just kind of foot the bill for the rest. So that's an interesting business model for them. I don't know how they keep it going.
[00:44:56] Unknown:
Good. But as far as, oh, go ahead. Oh, no. Yeah, I'll bring up another topic after this. I've got one more point. Yeah. No. You're good.
[00:45:04] Unknown:
A lot of it has to do with just kinda, like, what kind of output you get from these different models. Right? Some of them are great at conversations. Some of them are great at math and analytics. And so you have to use them to kinda get your feel for it. But I think this is a weird transition phase that we're in. I think we'll all move to something that's more of, like, a council of models in the future, where you have one that actually goes out and talks to all of them, knows which ones are the best, kinda brings them all back together, shows you their work, and then comes up with, like, the most factual answer. I think that's one direction that we're going with AI.
As far as specifically answering the question here in the chat, we do have an API for Maple. We run Qwen3 Coder, which is really good. It's not the best compared to Claude, but it's a very functional coding LLM, and it's totally private. And so you can access the Maple API. You can plug it into Cline. You can plug it into Goose. You can plug it into other coding tools. It speaks the same protocol that the OpenAI coding agents do. So it's pretty easy to plug in.
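Since it speaks the standard OpenAI chat-completions protocol, any client that lets you override the base URL should be able to talk to it. The endpoint URL, model identifier, and key handling below are illustrative placeholders; check Maple's own docs for the real values.

```python
# Talking to an OpenAI-compatible API by overriding the base URL (placeholders only).
from openai import OpenAI

client = OpenAI(
    base_url="https://example-maple-endpoint/v1",  # placeholder, not the real endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="qwen3-coder",  # placeholder model identifier
    messages=[{"role": "user", "content": "Write a Python function that parses a BIP21 URI."}],
)
print(resp.choices[0].message.content)
```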
[00:46:16] Unknown:
Go ahead, Portland. I'll hold.
[00:46:19] Unknown:
So I do gotta jump off here for a meeting with MARA, but I wanted to bring up one last kind of thing that I've been seeing a lot lately. And I think that's, like, AI is an incredibly powerful tool. And, like, deep down, I do believe it will replace, like, software engineers to a large degree. I think it requires a lot of management right now, and it turns software engineers into, like, kind of, like, managers of sorts, where they kind of, like, guide these little agents or whatnot through their tasks. But that is leading to some problems that I'm noticing when I review people's code, which is that it's typically really sloppy.
And a lot of times, I'll hear the response: I always read what comes off my editor, like, I always read all the lines of code. But a lot of times, what I'm finding is that they aren't reading the code, or maybe they just don't understand, or maybe they forget what came off of this thing, or even the PRs before they hit submit for review. They're not getting fully reviewed, and you can see that there are a lot of elements of LLM generated code or just overall poor quality code. Again, like, I really don't mind if somebody uses an LLM successfully, but it should remain transparent. I shouldn't be able to point it out and go, just like an AI generated image, hey, I can see that you're using AI here, this doesn't make any sense, this pattern is very poor, can you use this method or whatnot? But, yeah, that's creating a little bit of a social risk for companies as well right now, because you don't wanna be caught using AI for your products. It makes you look sloppy. It makes it look cheap. And especially if you back products, I think we've even seen recently with some stuff in Bitcoin, like, companies go out and full on back a proposal or whatnot, and then they find out, like, oh, wow, this thing was written by AI, it's failing tests, etcetera.
And it's kinda like, you gotta be very careful, because if somebody doesn't have the discipline to write the code in the first place, I don't fully believe they have the discipline to read the code. And this stuff can, like, literally just propagate from PR to company to production to potentially a user error where they could lose money, especially in Bitcoin. Like, this is your time. This is your savings. And I would just urge people to be very careful about these outputs. Like, I think I heard you guys talk about the bullshit of AI. It is there. You've gotta be very careful with it. And, with that said, yeah, just please do your due diligence if you use AI. Review the code, especially if it's going in any professional or production environment that involves Bitcoin and people's money. I take that very seriously.
[00:48:57] Unknown:
Yeah. Absolutely. Cosign that as somebody who also works in the industry. Yeah, couldn't be closer to the truth. But, yeah, I know you've gotta drop. Thanks for hopping by. Some great things there. Appreciate it. I really appreciate it. Thanks for coming on. Had an absolute blast. We have another question that aligns with one I was gonna ask, coming from Observer again in the Nostr chat. He's curious about local AI, as am I. They're asking, do you guys have any recommended hardware, any recommended components?
Is local AI even worth it?
[00:49:34] Unknown:
Like, what would be some kind of specific applications? Do you guys have any hot takes over there? If you wanna learn how to make napalm, probably local AI is good for that. But I'm completely serious. Like, if you have questions you don't want other people to know about, or you have things that you want to understand, and that can be something as simple as how to cook a recipe, like, I don't care what it is, you have your right to privacy. You need a local LLM for that. And the problem is local LLMs are typically very slow if you want, like, a good thinking LLM. I think, like, DeepSeek was one of my favorites to run. You'd run it on your CPU, and it'd pump out, like, three tokens per second. But I had enough RAM to run the full thing. That was 512 gigabytes of memory, running the undistilled model. It did okay. It's just really slow. And then my final statement on that is, like, you can get, like, a Radeon GPU or something like that. They're not cheap these days. And you can run what are called, like, distilled models.
They do pretty good, but they definitely are lobotomized, in my opinion. They just don't give very good results, and especially for coding, they seem to fall apart very quickly. And for producing good code, I still typically have to use services like Opus or Anthropic, because the local LLMs are too slow, and the results are too poor to use in any capacity.
[00:50:51] Unknown:
What about you, Marks?
[00:50:53] Unknown:
Yeah. No. I use local AI as well. And this is the big reason why Maple exists: to try and take this powerful cloud hardware and give you roughly the same privacy that you would get at home, to bring that cloud hardware into your home. Because in order to run the full DeepSeek R1 671B, like, you need to have multiple NVIDIA, like, B200s running together. They need to be, like, chained together. So it's a very difficult thing to do, to run these full models. And the ones that Claude are running and the ones that GPT are running, they have trillions of parameters. So, like, they're so massive you can never run them in your house. Local AI, though, like, it definitely needs to be part of everybody's toolbox. Everybody needs to get an app on their phone that can do local AI. Everyone needs an app on their computer, and to download a small model, these distilled, quantized models, the ones that have been compressed, and have that, because you wanna have it to ask certain questions, or just in a moment where you don't have internet, or, you know, we always talk about how awesome it would be to have all of Wikipedia offline on your phone for if the world ends. Well, like, why not get an AI on there as well? So I think local AI definitely has its place, but you're not going to get the same level of accuracy or speed that you'll get on cloud hosted AI.
[00:52:16] Unknown:
Any specific tool recommendations there? Like, any specific models, the small ones you mentioned? What have you had that worked well at home? Let's use the most common use case: most people are just gonna have, like, a MacBook Pro or something at home.
[00:52:31] Unknown:
Yeah. The easiest tool to use on a MacBook would be LM Studio. In there, it's like an app store for models, and you can just go download GPT-OSS from OpenAI, and you can also download DeepSeek and Kimi K2. There's a whole bunch of other great ones out there; Qwen has a lot of good models. You can just download them, try them out, and see how they work for you.
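Beyond the chat window, LM Studio can also serve a downloaded model over a local, OpenAI-compatible endpoint, so scripts can use it without anything leaving your machine. A minimal sketch, assuming the local server is enabled on its default address and the model name matches one you have loaded:

```python
# Minimal sketch: talking to a model downloaded in LM Studio from Python.
# Assumptions: LM Studio's local server is running on its default address
# (http://localhost:1234/v1) and the model name matches one you've loaded.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # placeholder; use whatever model LM Studio has loaded
    messages=[{"role": "user", "content": "Summarise the trade-offs of running LLMs locally."}],
)
print(resp.choices[0].message.content)
```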
[00:52:55] Unknown:
Marks, obviously, you're saying that you use local AI, and I understand that if there's connection issues or the end of the world, Maple would be down because it's a cloud-based service, even if it is encrypted. Outside of the end-of-the-world scenario, why would I use a local one rather than Maple, privacy-wise? If I wanted to research something that I didn't want to get out, if it is encrypted, does that not mean I'd be protected?
[00:53:27] Unknown:
Yeah, you are protected. So reasons for local AI would be, for example, if you're dealing with a ton of files on your computer, or you wanted to do computer-use things. I am never going to trust Perplexity or OpenAI to take over my computer and do things for me. So I've been playing around with some local AI tools for that kind of stuff. Goose can do that, so I'll run Goose locally, and I'm trying to have it do some things for me on my computer. I don't wanna trust one of these big companies with that. We're trying to build Maple to be that trust situation for you, but everybody has their own trust levels and their own risk levels. But you can go inspect the code of Maple, you can see how we run it, and you can see the end-to-end encryption that's there.
So we're always aware that local AI is the most private. I'm never going to give that up; it really is the most private. But Maple can be about as close as you can get with a cloud service.
[00:54:35] Unknown:
Can I give you an example? The other day, I had a massive contract to read through, all in horrible legal speak. I read through it myself, made some notes, and then I thought, I'd love to put this into ChatGPT to see if it comes up with the same things or has any different questions or problems. But I didn't want to, because there's too much personal information, and I didn't wanna physically go in and edit it, put in "dot dot dot" and take out names and things like that. If I was to upload something like that to Maple, obviously, I'd be in a much better situation. I could do that. It's not like it's loads of files or having anything done on my computer, but I could upload a file for it to go through, some text for it to go through, and be in a much better position, right?
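For the manual-redaction step Max describes wanting to avoid, a crude sketch of stripping obvious identifiers before pasting a document into any hosted AI might look like the snippet below. The patterns and name list are illustrative assumptions, not a real anonymization tool, and they are no substitute for an encrypted service or a local model.

```python
# Illustrative only: crude pre-redaction of a document before sharing it with a hosted AI.
# The regexes and the caller-supplied name list are assumptions for the example;
# a few patterns are not a substitute for reviewing what you're about to share.
import re

def redact(text: str, names: list[str]) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)    # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)      # phone-like numbers
    for name in names:                                            # names you know appear
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

contract = "Agreement between Jane Doe (jane@example.com, +44 7700 900123) and ..."
print(redact(contract, names=["Jane Doe"]))
```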
[00:55:26] Unknown:
Yeah, definitely. We have lawyers and accountants and financial professionals who do that, and we have individuals who do the same. Maybe they're getting kicked out of the apartment they live in because their landlord is trying to do something shady, so they'll upload their rental agreement into Maple, chat with it, and come up with really good talking points about how to push back, or what their rights are, that kind of stuff.
[00:55:55] Unknown:
Great. Great. Yeah, because it protects you, whereas ChatGPT is a worrying thing: here's my name, this is where I live, these are all my details, and here are all the other questions I've asked you, and it can link it all together. Now you have a load of information on me that I wish you didn't. That's a worrying thing. So that's good to know. Yeah. And just one last thing on that:
[00:56:17] Unknown:
when you sit down with a lawyer in a room, you have attorney-client privilege. One of the biggest reasons that exists is because you might ask questions about something that's illegal, but you just didn't know. Right? You might say, hey, could we do this? And the lawyer will be like, no, that would be illegal if we tried to go that route. And you go, okay, dumb idea, let's try this other thing. If you ask that in ChatGPT, it can be used in court against you later. It can be like, oh, this person was trying to get around tax laws by doing this thing. Right? And that's total bullshit. So what we need is some way to chat with AI and be okay making mistakes, because we don't know the extent of the law. And so local AI and Maple give you that attorney-client privilege, if you will.
Nice.
[00:57:05] Unknown:
Yeah, we're almost at time. It's gone really quickly, as always. One final question I had, and I'll point it to Marks from Maple: for somebody who's never used the service before, presumably you're not locked into a specific model. What does that model selection look like, and are those selections tier-based? How does that generally work?
[00:57:27] Unknown:
So when people get into Maple, they're immediately on a free account, and they're put onto one model that we have from Meta. It's the Llama model, and it's a really good, generally capable model. Then if you upgrade to Pro or any of the other plans, you get access to seven more powerful models: DeepSeek, OpenAI's models, and others. And you can jump between them really easily; it's not difficult. You can even be in the same chat and switch between models within that chat, and kind of have them all work on it if you want to. So we try to make that process simple. If you were to try to do that with local AI, for example, you don't have enough memory on your computer, enough RAM, to run multiple models at once, so you'd have to unload one and load another in. We let you just flip on the fly if you want to.
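Conceptually, switching models mid-chat just means replaying the same message history with a different model name. The sketch below shows that against a generic OpenAI-compatible endpoint; the URL and model names are placeholders, and this is not Maple's actual API.

```python
# Sketch of the "switch models mid-chat" idea against any OpenAI-compatible endpoint.
# Generic illustration only; base_url and model names are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="unused")

history = [{"role": "user", "content": "Draft a short privacy policy for a small shop."}]

# First pass with one model...
draft = client.chat.completions.create(model="llama-3.1-8b-instruct", messages=history)
history.append({"role": "assistant", "content": draft.choices[0].message.content})

# ...then hand the same conversation to a different model for review.
history.append({"role": "user", "content": "Review the draft above and tighten the wording."})
review = client.chat.completions.create(model="deepseek-r1-distill-qwen-14b", messages=history)
print(review.choices[0].message.content)
```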
[00:58:18] Unknown:
Awesome. Alright. Guys, any final questions for Marks while we have him, before I close out the show?
[00:58:25] Unknown:
Main question for me would just be: what are you excited about coming up for Maple? I think y'all have been doing really good work. I love the anonymous accounts specifically, but I'm curious if there's anything else kinda on your radar that you're pumped about.
[00:58:38] Unknown:
So, yeah. We're really gonna be working on documents, being able to dump an entire folder of documents or other things into Maple. Right now, it's just one at a time. We want to make Maple the most personal AI for you. Because it's private, you can have memory in there where it learns who you are, and you can have all your documents connected to it, but with the assurance that it's encrypted end to end. And then something I wanted to touch on that's not totally related, but something you said earlier, Seth, about people learning how to think for themselves and not just relying on and outsourcing their thinking to other people. One thing that I really fear, and one reason why we're building Maple with open models and open source code, fully inspectable and verifiable, is that I don't want us to have generations of people depending on ChatGPT to think for them. Because ChatGPT is completely closed, it could start to steer the public conversation, whether it's ChatGPT doing that itself or a government entity that comes in and asks them to change it.
It can start to nudge us, because these AIs learn our thought process. They know where we're gullible. They know where we're vulnerable. And so they can start to change our thoughts over time to be more aligned with whatever directive they're given. So I think we need to stay vigilant with our own thought process and keep this ability to do critical thinking, but also use tools that we can inspect, so we can make sure there aren't ulterior motives built into them as we use them and depend on them.
[01:00:18] Unknown:
Yeah, definitely. I mean, as always, privacy is paramount for freedom. That's why we're all here at the end of the day. So, guys, this has been a brilliant conversation. I've had a lot of fun and definitely learned a lot. I'm gonna continue to use Maple; I've had a great experience with it so far, and I'm looking forward to any future iterations that come out. Before I close out the show, I just wanna point everybody to the QR code on screen. Two of the leading freedom fighters in the Bitcoin industry are unfortunately gonna have to surrender to prison after having the full weight of the US government thrown at them.
We are pushing as hard as we can to get them a pardon and get this in front of the right people, to try and recreate the Ross outcome if we can. So please take the time to scan that QR code and sign the petition. It will take thirty seconds of your time. You don't have to give any money; just show your support if you can. If you're listening back on the podcast feed, the link will be in the show notes, so you can go and show your support there as well. Guys, thank you very much for joining. It's been a fun one. And as always, thanks to everybody stopping by in the livestream and asking questions in the chat. We will be back for more at the same time next week. Thanks, everyone.
[01:01:53] Unknown:
Thank you for listening to Freedom Tech Friday. To everyone who boosted, asked questions, and participated in the show, we appreciate you all. Make sure to join us next week on Friday at 9AM EST and 2PM London. Thanks to Seth, Max, and Q for keeping it ungovernable. And thank you to Cake Wallet, Foundation, and MyNymBox for keeping the Ungovernable Misfits going. Make sure to check out ungovernablemisfits.com to see Mister Crown's incredible skills and artwork. Listen to the other shows in the feed to hear Kareem's world-class editing skills.
Thanks to Expatriotic for keeping us up to date with boosts, XMR chats, and sending in topics. John, great name and great guy, never change, and never stop keeping us up to date with mining news or continuing to grow the Meshtadel. Finally, a big thanks to the unsung hero, our Canadian overlord, Short, for trying to keep the ungovernable in check and for the endless work he puts in behind the scenes. We love you all. Stay ungovernable.
Welcome!
Top Boost
Introductions
Will AI be a must by 2026?
Seth's Current AI Usage
Max's Workflow
Marks' AI Usage
What is Maple?
Who Will Run The Mints?
Do you need AI to compete by 2026?
Don't Trust, Verify Your AI
AI Inside Foundation
Spot the Slop
Can we tune sources?
Hot Take from Portland HODL
Choosing Models
Quality Control Risk
Local AI: When to Run Yourself
Local vs Maple
What's Next for Maple: Document Memory and Inspectable AI
Closing Thoughts
Signing-Off