Mark Suman is co-founder of OpenSecret and Maple. OpenSecret is a platform for building private and secure apps, and Maple is an AI tool built using OpenSecret. Maple offers users and developers an easy way to interact with popular open-source AI models.
Marks on Nostr: https://primal.net/marks
Marks on X: https://x.com/marks_ftw
Maple on Nostr: https://primal.net/mapleai
Maple on X: https://x.com/TryMapleAI
Maple: https://trymaple.ai/
OpenSecret: https://opensecret.cloud/
EPISODE: 177
BLOCK: 913963
PRICE: 898 sats per dollar
(00:01:38) Happy Bitcoin Tuesday
(00:03:05) Marks and Maple AI
(00:07:10) Apple's AI Strategy and Product Unveiling
(00:11:40) Privacy and Security in AI with Maple
(00:19:59) Open Source AI Models and Industry Trends
(00:27:02) Freedom of Thought and AI's Influence
(00:39:00) User Experience and Privacy in Maple AI
(00:48:50) AI and Parenting
(01:02:41) Maple AI Developer API and Proxy Feature
(01:16:00) OpenSecret and Maple AI: Company Overview
Video: https://primal.net/e/nevent1qqsx7ypznejxgujpqdwnch5w6pzeh03pn9fvr5guqpry930e7w3yqycllz5km
more info on the show: https://citadeldispatch.com
learn more about me: https://odell.xyz
And I guess I wonder if you think Standard and Poor's or others or, you know, if anybody has any sort of bias towards crypto still at this point, because the S and P 500 just came out with their decisions, and they chose to go with Robinhood and AppLovin over Strategy. And a lot of people were kinda surprised by that. The stock actually was down a little bit. Do you think that there is a bias at any of the indexes or any places that come through this against
[00:00:33] Michael Saylor:
a company with a lot of Bitcoin holdings? I don't think there's a bias. I don't think we expected to be selected in our first quarter of eligibility. We figure it'll happen at some time. There's a digital transformation in the markets. This is a brand-new, novel concept. Every quarter, we make new believers. We get more support from banks, from politicians, from credit rating agencies, etcetera. I think that will continue for the foreseeable future.
[00:01:38] ODELL:
Happy Bitcoin Tuesday, freaks. It's your host, Odell, here for another Citadel Dispatch, the interactive live show focused on actual Bitcoin and Freedom Tech discussion. That intro clip was none other than Michael Saylor, founder and CEO. I think he's the CEO. Chairman. I don't know what his title is. The lead at Strategy. They didn't get into the S and P 500. AppLovin got chosen over them. But he seems to take it like a champ. Completely unrelated to today's show, but it seemed like the news of the week. So that's why we started off with it. As always, freaks, the show is ad free, sponsor free, brought to you by viewers like you who support the show with Bitcoin through Zaps. The easiest way to support the show is through your favorite Nostr app.
I'm helping build up Primal. I really like Primal. You can download it in your favorite app store. The top zap from last week was 10,000 sats from ride or die freak Mav 21. He beat out ride or die freak man b y t, who sent 9,999 sats. Thank you, freaks, for supporting the show. Thank you for joining us in the live chat. You guys make it unique. All links at citadeldispatch.com. Thank you for your support. We have a great show lined up today. I have Marks here. He's building out Maple AI, a private and secure AI.
[00:03:15] Marks:
How's it going, Marks? Yo. Hey. It's going great. Good to have you. Things are good. Thanks for having me on. Appreciate it.
[00:03:22] ODELL:
I'm excited. I'm excited for the chat. I think it's timely. I think, I mean, first, I actually didn't bring this up to you before the show. You used to be with
[00:03:34] Marks:
Apple. Yeah.
[00:03:36] ODELL:
Today was their big unveiling. Did you watch it, or do you just pretend
[00:03:42] Marks:
that part of your life is over with now? Or Like, PTSD or something? Yeah. You know, it's actually really funny. I have been an Apple fanboy since I was a kid. Like, how old was I? Maybe 10 or 11. Maybe younger than that. And especially with the Steve Jobs years, like, I was following everything religiously closely. And being there, I was, like, in the thick of it. And so today would be, like, NPI, new product introduction. And NPI for us on our team would start, like, six months prior, nine months prior. So I would be talking about it constantly every day leading up to today. And to be totally honest, I was lost in financial projections, spreadsheets for Maple, and completely forgot the event was happening today. And it wasn't until Anthony texted me and said, hey, the new iPhone looks pretty sweet, that I realized that the event was going on.
So, yeah, there you go. I, I didn't even pay attention. I didn't watch it. I haven't looked at a single thing about it other than his text.
[00:04:42] ODELL:
I mean, that's kinda beautiful. I was kinda hoping that was the answer. You found your way with a new product. I mean, I think it's highly relevant to this conversation because everyone's waiting to see what their AI plays are, and they've kind of been slow. Some might say deliberate. Some might say they're dropping the ball. But just for everyone at home, there weren't really any big AI announcements today, except they have, like, a cool translation AirPods feature that I think, like, every sci-fi novel I've read has been waiting for for, like, the last twenty five, thirty years, you know. Like, where both people are wearing AirPods, and then it just, like, automatically translates as you're having a conversation.
[00:05:30] Marks:
I mean, that's cool. Yeah. That's exactly what we've been waiting for. Question is, how well will it work, and what is your cell phone connection like, and what's the delay?
[00:05:39] ODELL:
Yeah. No. It's probably gonna be miserable in the beginning, is probably the answer to that. My guess, it probably works best if both people are wearing, you know, four AirPods. Like, two AirPods each, an AirPod in each ear, which is, like, kind of a ridiculous way. They, like, advertise it as, like, a business meeting. But, like, having a business meeting where you're both, like, podded up is, I don't know. It's just weird. But,
[00:06:04] Marks:
Well, don't tell my former Apple coworkers, but I don't even wear AirPods anymore. They felt like they were hurting my ears, and I don't wanna, like, gigify my brain. So I've gone back to the wired headphones. I'm a bit concerned about the wireless stuff. Mhmm. But I do wear one AirPod. Like, I don't put two in my ears, so I feel like they're not zapping through me. Okay. You're not completely zapped.
[00:06:30] ODELL:
And I switched from, like, the, I don't know. We're getting completely off-kilter here, but I switched from the Pros to the shittier ones. Mhmm. Because the shittier ones are more open. It doesn't feel like it's closed off on my ear. Yeah. And the main reason is, first of all, obviously, a convenience thing. But I have small children. And with small children, they're, like, constantly ripping out the cords. Like, I tried to do the wired thing, and it was, you know, maybe equally as dangerous for my ears. Yeah. They're constantly just pulling at them. Yeah. I can see that.
Anyway,
[00:07:11] Marks:
Maple. What is Maple? Why should people care? Yeah. Maple, short version, is the privacy alternative to ChatGPT. It's just as simple as that. If you're sick of big companies, especially ChatGPT, knowing everything about you, then you can stop using it. Or you can still use it, but also get a free account over at trymaple.ai. And we have end-to-end encryption so that all of your AI chats are completely protected. And it's not just a promise that we tell you. There are other privacy AIs out there that claim to be private, but you just have to trust that what's running on their servers is what they tell you. Even if they're open source, you still don't know what's running there. But we give you a cryptographic promise. We show you: here's the open-source code, here's the, you know, the checksum of what's running on the servers using the secure enclaves. And so any user can go and verify that what we are running is what's on GitHub.
So, don't trust, verify. We're that motto, but wrapped into AI.
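The don't-trust-verify flow described above can be sketched in a few lines. This is a minimal illustration, not Maple's actual verification code; the document layout and values are hypothetical stand-ins for a real AWS Nitro attestation document, which must also have its signature and certificate chain validated against AWS roots before any field is trusted.

```python
import hashlib
import json

def verify_enclave(attestation_json: str, expected_pcr0: str) -> bool:
    """Compare the enclave's reported code measurement against the one
    you computed yourself by reproducibly building the open-source repo.

    In a real verifier, `attestation_json` would be a Nitro attestation
    document whose signature chain was already checked.
    """
    doc = json.loads(attestation_json)
    reported = doc["pcrs"]["0"]  # PCR0: hash of the enclave image (SHA-384 on Nitro)
    return reported == expected_pcr0

# Hypothetical stand-in for an enclave image file built from the GitHub source.
image_bytes = b"enclave image built reproducibly from the published repo"
expected = hashlib.sha384(image_bytes).hexdigest()

# A matching measurement means the server runs the code you built yourself.
attestation = json.dumps({"pcrs": {"0": expected}})
print(verify_enclave(attestation, expected))                          # True
print(verify_enclave(json.dumps({"pcrs": {"0": "f00d"}}), expected))  # False
```

The point of the sketch: the user never has to take the operator's word for it, because the comparison is against a hash the user (or any auditor) can reproduce from the public source.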
[00:08:14] ODELL:
Love it. I just realized I messed up the Nostr stream, but I fixed it. Okay. Okay. It should be working now. I like to call it, like, the Signal of AI. How do you think of that metaphor?
[00:08:29] Marks:
Signal's great too. Yeah.
[00:08:32] ODELL:
Because the trust model is actually kinda similar. No?
[00:08:36] Marks:
It is, except I don't think Signal is using secure enclaves. Is there a way for you to verify what Signal is running on the servers? Are they using SGX? Right? We're using AWS Nitro. Are they using Intel SGX?
[00:08:50] ODELL:
That's my understanding. Yeah. Okay. Then, yeah, it probably is. Their whole trust model relies on cloud secure enclaves.
[00:08:56] Marks:
Okay. Then, yes.
[00:08:57] ODELL:
I haven't seen that part of it. Alright. I mean, it's part of, like, the spook theory of Signal or whatever: that SGX is backdoored and that you don't actually have privacy. I mean, I think Signal is a fantastic app, and I think it provides you reasonably secure chats in a very convenient way that is very reliable. And so, like, when I compare it to Signal, to me, that's the Freedom Tech, the mass-market Freedom Tech success story: Signal. Right? And it's crazy. Like, if you look at the numbers, it's still not that many users in the scheme of things compared to what they're competing against. I think Signal's, like, a 100,000,000 users, while, like, WhatsApp is three and a half billion. Yeah. Or something like that. Mhmm.
But it's a 100,000,000 people that are using an encrypted chat app that probably wouldn't be using it otherwise. Like, they're not gonna use PGP. They're not gonna self-host a Matrix server. And so there's a lot of similarities there, I think, on the AI side, because a lot of the open models, private, and secure stuff has been focused on self-hosting. But the overwhelming majority of people, like, my parents, are not gonna self-host an LLM. Mhmm. They have downloaded and used Maple. Right? Like, it's almost a one-to-one drop-in replacement for something like ChatGPT in terms of their workflow. Right? Right.
[00:10:34] Marks:
Yeah. And there's levels to privacy risk that people wanna take on, exactly what you're describing. And so people love to come to me and say, well, local AI is the only private thing, and so I'm not gonna trust your service. And it's like, that's fine. Don't trust my service. Local AI is for sure the most private. You can turn off the Internet connection. You can be totally disconnected and chat, and you can have a computer that never touches the Internet. That's fine. However, there's, like, this huge gap between that and fully captured ChatGPT. And so we're gonna take as much of local privacy as possible and give it to you in the cloud with, like, really powerful servers, synchronization across all your devices, and we use and manage private keys for you. And so we try to make the experience as close as possible to something like ChatGPT, but give you as much privacy as local.
[00:11:21] ODELL:
Yeah. I mean, I think there's a huge appeal to that. And even for a power user, I mean, the convenience of that is just a significant benefit in terms of how that workflow fits in your life. Even something just like multiplatform support. Right? Like, the fact that I can just have Maple easily, like, on my desktop and, you know, on my phone at the same time, and all my chat history is synced. Like, computer here too. Like, I've got it everywhere. Super useful. Mhmm.
Okay. So that trade-off model makes sense to me. You guys have been using open models. So these models are hosted in the cloud, but they're open, nonproprietary models. And recently, you've added OpenAI's new open-source model. Right. So how should people think about, like, these models, at least in the current state, versus the proprietary ones that maybe they're used to? I mean, I think the overall majority of people are probably using ChatGPT right now. Right? It's like the Kleenex. Yeah. Or Grok.
[00:12:36] Marks:
Yeah. It's like, yeah, Grok. And a lot of people view Grok as, like, the private alternative to ChatGPT. Right? They think it's the uncensored one. And maybe it's good to take a quick step back and just, like, explain the different levels, and maybe private versus open. And I'm sure that a lot of people, like, tuned into this and immediately rolled their eyes because we're talking about AI. I remember being really sick of the AI discussions a few years ago when everybody was talking about it, even though I was doing it at Apple every day. I just wanted to talk about Bitcoin. So I apologize to everybody who's on here right now not wanting to listen to AI talk. Okay. That said, so you have, like, the big foundation model companies like OpenAI, Anthropic, xAI, which does Grok for Twitter.
And these people are actually taking all of the data. They're scraping the Internet. They're taking all of the torrents that are online, even though they say they're not doing that. And they're ingesting them, and then they spend, like, hundreds of millions of dollars to run these big servers that train the AI. And it'll train in, like, these epochs, where every two weeks they can grab a snapshot of the training, and they'll run evaluations against it and try to find, you know, when the training finally hits a spot where they're happy with it. And eventually, when it does, then they'll take that version and say, alright, this is now a new model that we're gonna release to the world. And for the companies like OpenAI, they tend to not release it in the open.
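The snapshot-and-evaluate cycle described above reduces to a simple loop: periodically checkpoint training, score each checkpoint, and release the one you're happiest with. Here's a toy sketch, where the made-up `eval_score` function stands in for a real benchmark harness:

```python
def eval_score(step: int) -> float:
    """Stand-in for running a benchmark suite against a training snapshot.
    Here quality just rises with diminishing returns as training proceeds."""
    return 1.0 - 1.0 / (1.0 + step / 1000.0)

def pick_release_candidate(total_steps: int, snapshot_every: int) -> tuple[int, float]:
    """Grab a snapshot every `snapshot_every` steps, evaluate each one,
    and return the (step, score) of the best-scoring snapshot."""
    snapshots = [
        (step, eval_score(step))
        for step in range(snapshot_every, total_steps + 1, snapshot_every)
    ]
    return max(snapshots, key=lambda s: s[1])

best_step, best_score = pick_release_candidate(total_steps=10_000, snapshot_every=2_000)
print(best_step)  # 10000: with this monotone toy metric, the last snapshot wins
```

In practice the eval score isn't monotone, which is exactly why labs snapshot and evaluate rather than just shipping the final checkpoint.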
They wanna release it within their product, within ChatGPT, and that's what you see as, like, ChatGPT 4, 4.5, o3, you know, GPT-5, whatever they wanna call it. Their naming scheme is awful. And so It's so bad. Oh, it's so awful. Nobody knows what's going on. So that's kinda how they do that. And then historically, like, none of them have open sourced anything. Elon says he's gonna open source Grok, but he hasn't done it yet. And then OpenAI recently open sourced one of their models. They made two variants of it. And so you've got the GPT-OSS 20B and the 120B.
And really, the number at the end is just how big the model is, how many billions of parameters are inside of it. And so 20B is smaller, which means it can fit on a laptop easily, possibly fit on a phone,
[00:14:57] ODELL:
and then, you can run with that. On that note, is is bigger generally better in this situation?
[00:15:04] Marks:
Yeah. Yeah. I mean, the more parameters you have, basically, the bigger the brain is that can Right. Think about the stuff that you give it. So in general, bigger is better. And then even with 20B, the smaller version, you'll have people who, like, retrain it, or they'll do what's called quantize, where they take it and shrink it down even more, kind of like zipping up files, basically. Like, when you zip up a file to compress it and send it, they're compressing it and getting rid of, like, the 32-bit precision and crunching it down to eight bits, kind of thing. And so that allows it to fit even better on a smaller device and run faster. But again, you're sacrificing quality, because you are getting rid of some of that fidelity around the edges.
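The parameter-count and quantization trade-off above comes down to simple arithmetic: weight memory is roughly parameter count times bytes per parameter. A quick sketch (ignoring activation memory, KV cache, and other runtime overhead):

```python
def model_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight-only memory footprint in gigabytes."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# Why a 20B model can run locally but a 120B usually can't:
print(model_memory_gb(20, 16))   # 40.0 GB at full 16-bit precision
print(model_memory_gb(20, 8))    # 20.0 GB quantized to 8-bit
print(model_memory_gb(20, 4))    # 10.0 GB at 4-bit: plausible on a high-end laptop
print(model_memory_gb(120, 4))   # 60.0 GB: the 120B stays server-class even quantized
```

Halving the bits per parameter halves the footprint, which is the "zipping" Marks describes, at the cost of some fidelity around the edges.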
So those are the closed proprietary models. And then we have, obviously, the one that OpenAI did. And then we have DeepSeek out of China. You have Alibaba out of China as well. And all the other ones are Chinese, basically. Except for Llama. Right? Yeah. So Llama is from Meta, and Meta decided they weren't gonna be able to catch up right away with the other companies. So they open sourced immediately to try and get some traction. And they had initial early traction with Llama 3.1 or 3.3, those ones. They were pretty good, but then they got outpaced quickly. And when they tried to do Llama 4, they just fell flat on their face.
They launched it with much fanfare. In fact, Anthony and I were out there at their headquarters shortly after the launch, for their dev day. And they talked about how amazing it is, but yet nobody actually uses it. So they must be working on something, and Zuck recently alluded to the fact that they might not open source the next one. They're not going to. They might keep the best stuff for themselves.
[00:16:48] ODELL:
I I I don't expect them to. Do you expect them to?
[00:16:52] Marks:
I think they'll come out with a version, like, a variant that's open source, just because they're so big on the open source that to not do it at all would be a big problem for them, PR-wise. But, I mean, OpenAI was literally a nonprofit
[00:17:08] ODELL:
founded to stop big tech from launching proprietary-only models, and now they have the most popular proprietary-only model and have pivoted to for-profit. Yeah. That's true. So there seems to be a theme there. I mean, Zuck has, like, it's become a meme. It's like he's just poaching so many people. He's spending top dollar
[00:17:28] Marks:
Mhmm.
[00:17:29] ODELL:
For AI researchers. Like, I'd be very like, hopefully, he proves me wrong. Yeah. It's been crazy. Surprised if it's not proprietary.
[00:17:37] Marks:
Yeah. It's crazy watching the OpenAI streams, and then, like, two or three weeks later, people that were on that stream for the product launch are now working at Meta. So there's, like, a great meme of, like, Zuck, like, looking at his phone, and it's like, when you're watching the OpenAI stream and, you know, it's just like an audition for your next employee job interview. But, no, he could totally just make it proprietary. We have no guarantees. So, yeah, the best ones are out of China right now, and it's unfortunate.
But you talked about it on RHR. Like, it has all the Chinese stuff built into it. And so there are people who take it and try to jailbreak it and come out with, like, not a retrained version, but, like, an alternative version that's a little more uncensored and open thinking. So it's cool that people are doing that. We don't run any versions like that. We're just running the raw DeepSeek.
[00:18:33] ODELL:
We could Yeah. You're like, but we're not right now. I think Marty did DeepSeek on Maple live on RHR and asked about Tiananmen Square or whatever. And it was, like, three paragraphs. It was, like, CCP-approved AI slop
[00:18:48] Marks:
not answering the question. It was like, we still trust the current government as the best one ever and that they made the best decisions that they needed to for the time.
[00:18:57] ODELL:
Yeah. It's an interesting phenomenon. I mean, I kinda wanna just jump into this aspect a little bit, even though it's a little bit tangential to Maple, because you've been so focused on this area. Just, like, AI open development in general. And just, freaks, to Marks' earlier comment about people that might be rolling their eyes about AI, I think there's another group that maybe is like, okay, like, this is a little bit too deep and technical or in the weeds. If you are gonna leave us, just consider downloading Maple in your favorite app store. The whole point of it is that it's very simple to use. You just pick your model. You chat with it like you would with ChatGPT or Grok. It's, like, very easy to use. All this stuff is, like, kind of more background stuff. What I was gonna ask is, I mean, so on this Zuck piece and on the open piece with the Chinese models: mostly, the open models are Chinese.
And just to reiterate, because we did talk about it on RHR, and there's some overlap in the audiences, but not full overlap. You know, part of the reason is, I think, because no one trusts the CCP with closed models. So they get their soft power and their soft influence of you using their models that they've trained their ways. Mhmm. But you don't have to necessarily trust them, because, like, I would never run DeepSeek, but I do because it's available through Maple in a secure, private way, because it's open. Like, the only reason that I'm even considering touching DeepSeek is because they've released it open. Yeah.
There's two pieces there on the proprietary side. Right? Because not only do we not really understand exactly, like, how the machinations are working in the background, like, how the LLMs are, like, thinking about things. But because they're in their walled gardens, they're also training them on the data that you're actually using in the app. Right? I mean, and this is why I think Facebook is probably even less likely to continue the open path, because their whole business model has been, like, walled garden. We own all your data, and we harvest you for advertisers, historically. But now they can harvest you for the LLM training. So do you think big tech is just gonna keep going down this path? Like, is the incentive too strong? Like, is the incentive so strong that they're,
[00:21:30] Marks:
at least for the foreseeable future, gonna continue down this proprietary path? Yeah. I mean, that's a really deep rabbit hole that we could go down. But you look at Zuckerberg and you look at the products that they're building. Right? They have all these social products. They have WhatsApp. They have tons of data. And he, like, on stage when we were there in April, was just so happy and proud to say, like, we have all this data, and we are gonna train off all your data. Like, he legit said that. And we're not gonna share it with anyone. We're gonna keep it for ourselves. And I think he was trying to brag that they can have a better model than others because they have all that data that they're snooping on.
But then, as he's saying that, he's got these glasses on that are AI AR glasses, and he's, like, constantly, like, messing with them. Half the time he's talking to the Microsoft CEO, he's, like, distracted, like, trying to dismiss notifications or who knows what. But I think their incentives are in line to, like, vertically integrate to differentiate. So they're coming out with hardware devices. Meta has theirs. You have OpenAI, who partnered with Jony Ive and are supposedly working on some hardware device. And then you have Apple, who obviously has a Vision Pro and hasn't made a big splash yet with AI. But given Apple's history, you have to imagine they're gonna try to vertically integrate AI into that device as well.
So I think as you have companies trying to differentiate on their hardware, they're gonna want total control over the model that's inside of it. So I could see them staying as proprietary as possible. And then maybe they still kinda toss out open-source crumbs here and there. But I think that we are going to have to have others who are doing open source as a way to compete with these big companies.
[00:23:17] ODELL:
So, I mean, we kinda touched on it at the beginning, but you do have history at Apple. Like, to me, they're the dark horse. Like, how are you thinking about their, because I just kinda spoke on it, but how are you thinking about their AI strategy? Like, whatever they release probably won't be open. They don't really release anything open, but they might do, like, private local stuff.
[00:23:38] Marks:
Yeah. Yeah. So, I mean, I'll speak to what's publicly available online. Yeah. They have their Apple Private Cloud Compute, which is based off secure enclaves, very similar to what we're doing with Maple. But they are always gonna stick to their platforms. They hardly ever wanna go cross-platform. They very rarely do that. So their stuff is going to be integrated into the operating system as much as possible, and they're gonna have their own models that they train. I don't know if this is a good analogy, but it's gonna be very similar, I think, to Disney and to Nintendo, where you get their products and they're, like, very family friendly all the way through.
You can kinda see that when you look at image generation. Apple does have image gen with their AI, and it's, like, these cute little emoji-type things. Yeah. Like, make your own emoji. Yeah. It's nothing serious. Like, you can't do anything. You can't be like, give me Donald Trump, you know, eating a cheeseburger or something. You can't do anything like that. So I think Apple's probably gonna stay safe with anything that it does. I don't know if you saw Meredith Whittaker's article this morning. Was it in The Economist?
Forbes? I don't remember. But she wrote an article about how AI on the OS
[00:24:55] ODELL:
is, like, a huge attack vector for Signal. And... Was that a recent one? I don't know if that was today.
[00:25:02] Marks:
I mean, the tweet was, like, this morning.
[00:25:05] ODELL:
But maybe because she said it in the past. By the way, Freaks, if for some reason you don't know who Meredith Whittaker is: she's the president of Signal. Very outspoken privacy advocate. She speaks better than all of us. Yeah.
[00:25:18] Marks:
Yeah. So I'm not gonna speak to the exact timing of it. But her point was, like, it's a huge attack vector for these apps, for Signal, even Maple. If you have an AI that's embedded in your OS and screen reading, it's, like, recording your screen, recording your audio, recording your key logs, then what can we do? Right? You're totally captured. So I imagine Apple will try to take the strong privacy stance with that. With whatever they do, it'll be all... Well, they're still gonna do it. They're gonna be like, but you can trust us, because we're Apple.
[00:25:48] ODELL:
Yes. Right? Because, like, the utopian vision that, like, the All-In guys or, like, tech bros in general will give you is: you never use an app again. You just open up your phone and you're like, book me a room at, you know, a hotel in Nashville, and base it off of my calendar or whatever. And, like, it just automatically pulls the information from all different aspects of your phone and your life. And Meredith's concern is, inevitably, if someone's using Signal, the biggest attack vector on Signal historically has been, well, it's in plain text on your phone. Like, I can just read it on your phone. So why wouldn't the AI piece, you know, pull a phone number or contact or a schedule or something from Signal and then use it? But that's also, like, one step away from pure dystopian hell if done the wrong way. Right? Right. Yeah.
[00:26:41] Marks:
No. And Apple's always just relied on third-party auditors to say, hey, these people have looked at our code. We've given them the images of our servers, and they say it's okay. So, yeah, you're still in kind of the trust-me-bro type world when it comes to that privacy. But, yeah. But you wanna talk about, like, the closed systems in general. I mean, there are so many spots where they can censor you. They can, like, mess with you. So I'm speaking in two weeks at ImagineIF, and I'm doing research on my topic right now. But, like, my whole topic is gonna be on freedom of thought and how, the more that we use AIs and the more that they become an extension of our brain, we are actually giving them the ability to alter the way that we think.
And if we're using closed systems, proprietary systems, we can actually give away our freedom of thought, because we don't know exactly what they're doing internally and what our thought process is gonna be. So I think that can be a scary thing that we really need to think seriously about before we go down that path more.
[00:27:54] ODELL:
Yeah. No one's thinking. No one cares. Everyone's just gonna push. It kinda reminds me a little bit of, like, the social phenomenon where our whole lives went digital, and no one, like, planned ahead. Because, like, the way it works is you have all these companies competing against each other, so no one wants to slow down what they're doing. Mhmm. So you kinda just ship and then ask questions later. I mean, along these lines is, like, I don't know what they call it. It's like, what is it when, like, AI makes people go crazy? Or Oh, just, like, AI psychosis.
AI psychosis? Mhmm. Like, how do you guys, are you thinking about this at all at Maple? I mean, it's kind of a freedom-focused product, so it's a little bit odd.
[00:28:41] Marks:
Yeah. No. We do talk about this. We were just talking about it again yesterday. There are the headlines. Right? There's been a couple headlines out there of a really tragic event where a teenager was chatting with AI, specifically ChatGPT, and it basically was, like, promoting that the teenager should kill themselves and telling them that it would be proud of them if they did it, and that kind of stuff. Because it always tries to be positive. Right? Yeah. It's like a sycophant, where it's trying to, you know, fluff you up, if you will. So we've definitely thought about this.
And one point to raise that, you know, we were chatting about yesterday is, like, there are so many cases you don't hear about, where somebody avoided suicide or avoided doing something awful because they were chatting with AI. Right? Maybe you have a rough home life. Maybe you, you know, don't have a good social life around you. And AI is actually helping you. It is somewhat of a therapist. I don't recommend people use AI as a therapist, but at the same time, like, if you have absolutely nothing, then you're going to go on user forums. You're going to go on chat rooms. You're going to do whatever you can to have some kind of outlet. And maybe you have this tool that you chat with.
So you're never gonna hear the stories where nothing bad happened. You're just gonna hear the ones where it did. But I do think that we try to be as open as we can. So we're giving all of our code out open source. Our system prompt is right there in the code. And if we do add guardrails in, right, like, we're looking at how we do this to make it, like, some kind of family thing, where parents can, like, be in AI with their kids and have, like, a more safe environment for kids to use it. We'll, like, expose everything in there for the parents to see: here are the guardrails that are set up around your kid, and maybe they have settings that they can adjust.
But I think we need to be responsible, but we're also making a tool. And you can say all sorts of things about tools that are used to do bad things and whose responsibility it is. Ultimately, it's the responsibility of the individuals that use it.
[00:30:48] ODELL:
Yeah. It's a tough one. By the way, Freaks, BTC Pins zapped 8,008 sats. It's a boob zap. Thank you, sir. And Nitro Soil zapped 10,000 sats. He said, I access general AI through Kagi, best search engine, in my opinion. How's that privacy mode compared to Maple? Realize they still use the mainstream AIs.
[00:31:13] Marks:
Yeah. Okay. And that I I turned myself up, by the way. Somebody said they couldn't hear me very well. So I'm gonna put the microphone closer. I turn myself up. So hopefully that's better.
[00:31:23] ODELL:
Sir, I I hear you great.
[00:31:26] Marks:
Okay. Cool.
[00:31:27] ODELL:
I also see oh, I see. Turn up the guest volume or match down. Okay. Well, now you're up. I also see the longer Odell's hair grows, the more powerful he becomes. I wasn't looking at the YouTube channel. That's true. It's a fact, actually. Nice. Yep.
[00:31:43] Marks:
And, Ben Carman got on there and said nice Maple shirt. You're seeing brand new merch. This arrived one hour before we hit record. Beautiful. Beautiful.
[00:31:51] ODELL:
I got some new merch coming. Okay. So let's let's answer Nitro Soil's question and then get back into what we were talking about. Right?
[00:31:58] Marks:
Yeah. And there's somebody else on Nostr, when we posted that we were going live, they asked about Lumo, which is Proton's. So I'll answer all of these at the same time. Venice AI is another one that comes up. And PPQ. I had PPQ on. Yes. PPQ is awesome. They're a little different, but it all kinda falls in similar buckets. And so these are companies that are not letting you see what they run on the servers, so you're having to trust them. So, Kagi is great. I use them for web search, but they're not open source, and so I'm just trusting that they're not logging my stuff. So it's much more like a VPN. A lot of people use VPN services. You go on to something like TorrentFreak, and every year, they send out a questionnaire to all the VPN providers and ask, like, do you keep logs? Do you keep IP addresses? Do you do these certain things? What happens if the police give you a subpoena for data? What do you do and what can you provide to them? So, Proton, Kagi, Venice, in my mind, and maybe I'm wrong, maybe something's been changed, so, like, don't hold me legally liable to this, but they are more under the, like, trust us, we're not gonna keep logs on you, but you're never totally sure. So the government could show up at their door and they could be like, oh, yeah, actually, we were keeping stuff, and here you go. And there's no way of even telling. Yeah. Most importantly, and this is key in the VPN conversation,
[00:33:27] ODELL:
is, like, there's no way to prove that logs aren't being taken. Mhmm.
[00:33:31] Marks:
You just have to go off of, like, which ones have actually been subpoenaed by governments and look at court records, and then you find out that some of them actually were keeping logs, or were accidentally keeping logs, right, maybe even if they don't want to. So we just try to show everybody what we're doing. If you wanna see what's coming in Maple, just go on GitHub and see what we're working on, because we build in the open. We can't ship new stuff to the enclaves without everybody seeing it. So that's how we keep everybody safe: we're totally transparent. And then PPQ, I said they're a little different.
They operate I mean, I guess they are like the other ones. They're just a full on proxy where they just take your request and send it over. But you can be totally anonymous because you can have, like, no account. You can pay with Freedom Money, and then it sends it over. So the only way you're really gonna get doxxed is if you put personal information in the query that you're sending to OpenAI through PPQ, then they'll know who you are.
[00:34:31] ODELL:
Right. Or I mean, also, I had Matt on, who's the founder of PPQ. It's also, like, if you do it in the same instance, those chats can be cross linked as well. Yeah. Because there's been some so there's another one that uses Cashu tokens to try and delink it. So, like, even though with PPQ you don't have a traditional account, like, username, email, you do have, like, a mobile-app-style account number. And so your chats are kind of linked together to a degree. But if you don't put personal information in, then maybe they won't know. But then meanwhile with Maple, the difference with Maple, right, is that you just have no insight whatsoever into what the chats are. Yeah.
[00:35:31] Marks:
If you could look at our database, it's just a bunch of encrypted data. So we have zero. So if anybody came, the only information that we have to turn over to law enforcement is the email address you signed up with, and then we have time stamps of when chats were made. And we know the number of tokens that were used. So tokens, you know, for the technical term, tokens are how big of a chat you had. So when you type something in, a token is roughly, like, one and a half words, English words. And so that's how many tokens went into the model, and then it generated stuff and sent something back, and that has a number of tokens associated with it. So that is it. Like, that's the only data that we have on people. And over half of our users use privacy email addresses, and so we don't know who they are. And a lot of them pay with Bitcoin.
A lot of them pay with privacy credit cards. And so, you are welcome to kind of be as private as you want to on Maple. And as far as the content of your actual chats go, we have no insight. We can't we can't see anything about them.
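The metadata model Marks describes, nothing but an email, a timestamp, and token counts, can be sketched roughly. This is purely illustrative: the function names are made up, Maple's real tokenizer is unknown here, and the "one and a half words per token" figure is the episode's informal rule of thumb (common BPE tokenizers actually average closer to 0.75 words per token).

```python
import time


def estimate_tokens(text: str, words_per_token: float = 1.5) -> int:
    """Rough token estimate from a word count, using the episode's
    informal rule of thumb (a token ~ 1.5 English words). Real
    tokenizers differ, so treat this as a sketch only."""
    words = len(text.split())
    return max(1, round(words / words_per_token))


def billing_record(email: str, prompt: str, completion: str) -> dict:
    """The only metadata described in the episode: who to bill, when a
    chat happened, and how many tokens it used -- never the content."""
    return {
        "email": email,
        "timestamp": int(time.time()),
        "prompt_tokens": estimate_tokens(prompt),
        "completion_tokens": estimate_tokens(completion),
    }
```

The point of the sketch is what is absent: the record carries no chat content, so there is nothing substantive to hand over under subpoena.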
[00:36:38] ODELL:
Right. Basically, the only information you have is the information you need to, like, actually conduct billing. Yep. That's it. And you can stay private with your billing if you want to. Right. But I mean, like, time stamp and tokens, right? That's because you're paying per month, and then or I guess if you're doing what are the Bitcoin payment options? Is it by year or six months or three months or something? Bitcoin is annual right now because we just didn't wanna deal with monthly
[00:37:06] Marks:
Bitcoin payments. There aren't any great solutions out there for recurring payments on Bitcoin. There was a wallet once called Mutiny that was doing it, but it's gone. So yeah. So it's annual subscriptions with Bitcoin, and then Fiat, you know, where you pay with credit cards, it's monthly.
[00:37:26] ODELL:
Yeah. I originally paid with credit card because I wanted to do monthly, and then Tony, your cofounder, immediately shamed me. And so I deleted my account, and then I paid with Bitcoin. But, I mean, I'm a user. I use it almost every day. It's fucking awesome. I've never used ChatGPT personally, and I can never get myself to pull the trigger on that and enter that world. So I don't really know what you're competing against except that you're up against a big juggernaut. Mhmm. But I love it. I just wanna go back. So I have two questions for you on the product side.
You know, at Ten31, we're invested in a lot of freedom focused businesses. A lot are privacy focused as well. Obviously, we're investors in you guys at OpenSecret. But before that, it was, you know, it was Tony and the gang, and then you joined, and you were building a Bitcoin wallet. And that was why it kind of fell under our purview. And then you pivoted to OpenSecret and secure compute and AI. So it's really the only AI business, or, like, almost a pure play AI business, that we're focused on.
This is very long winded, but when you're building, like, freedom stuff, it's very difficult. Like, if you're building private or freedom stuff, like, you have no user feedback. You don't know how users are using your product. So how do you think about, like, the choice you give them? Like, one of the biggest things I hear from people that I recommend Maple to, and I kinda touched on this with the Mint conversation last week, is, like, their first question is, like, okay, which model do I choose? Like, how do you think about that? I mean, there's a lot of choice. You're giving the user a ton of freedom, but each of those models is very, very different in, like, kinda output and how they're used. How do you think about that? Yeah.
[00:39:36] Marks:
So we wrote a model guide on our blog, just, like, kind of a high level here are the different models and generally what they're good at. But to be honest, like, nobody really knows for sure what the models are totally good at, because the way that you use it might be different than the way that somebody else uses it. So there are models that are focused on programming and writing code, and those, like, for sure have been fine tuned for that. Some are better at arithmetic. Some are better at long form writing. But all these models also have a general side to them. So maybe you're more of an analytical thinker in general, and so a mathematical model is better for you to use just in your general chats.
So it's difficult to tell people, like, you should use this model for this thing, because you don't totally know. And then, like you said, we don't keep analytics on anybody, other than the billing stuff that we have to do. So we don't know what is successful for some users. We could provide, like, a thumbs up, thumbs down kind of thing. Did you like this? And we could maybe store those anonymously. But aside from that, like, we just have to talk to users. And so we have several team accounts. We're trying to get a lot more businesses signed up on Maple because it's great. You can provide it to everybody in your company, and you get a more powerful account than the free version of Maple. And so we'll do calls with our team accounts and ask how they're using it. And so I get direct feedback from them in a more private setting, but it is filtered through, like, their memory of how they use the product. Whereas it would be nice if we could have full on just, like, creepy analytics into everything that's going on. You know, when they type this in the prompt, they got this back, and then they had to reprompt again because they didn't get it correct.
Yeah. It's it's more difficult to operate in in that kind of private model.
[00:41:24] ODELL:
But so I'm thinking more, like yeah, blog posts are great. I mean, you do your own show, your own podcast. Like, talking to users is obviously helpful. But at the end of the day, like, the ideal situation is it's a UX problem. Right? It's, like, how do you think about, like, UX and defaults? Like, is there a path there where you're a little bit more active on new user joins, like, how their defaults are set up, versus kind of just throwing them into picking which model they want and how they use it? No. Definitely.
[00:42:05] Marks:
One of the features that has started to become popular on some of these services is an auto mode. And so I definitely could see where we do our own evals. We definitely wanna stand up our own eval service, and we would probably give it some kind of open nature. But start to evaluate these with different prompts and start to look at, more analytically or, you know, objectively, which ones are better. And so if we do that, then, yeah, the user could just be tossed into a prompt. They get to start typing, and we will look at their prompt and figure out which model would be better for them. And then they can always change it later. Another thing that would be cool is, like, throwing it off to three models at once and then somehow, like, aggregating the data back and evaluating which one was maybe better for them.
There are different things you could do there too. But no, the, the UX and the setup experience would be, would be great to improve, over time as, as we continue to refine this.
[00:43:00] ODELL:
Yeah. I mean, I think so, like, and I don't know if they're the right user or not, but, like, my parents are, like, very nontechnical people. And, like, their Maple experience is they just live in Llama. Because when they first loaded up Maple, it put them in Llama, and they just live in Llama. That's what they use. It doesn't matter if you add a new model or anything else. They're just forever in Llama until it gets deprecated or changed. Uh-huh. And it's I don't know. It's just it's an interesting thing because you have so many different types of users, and maybe some users get prioritized over others. I think that makes sense.
[00:43:40] Marks:
Yeah. But it's just an interesting thing to think about. We did change it so that it remembers the last model you used. Which is Llama for them. And I'm trying to remember. We were trying to do it to where when you upgraded, it would, like, switch you to DeepSeek or GPT. Because they're paid users too. Yeah. So either they got switched because we changed that default, or what you could do is next time you're with them, just, like, make a single chat for them in Maple using a different model, and then they'll be on the new model after that. Yeah. I mean, it was I take some personal responsibility
[00:44:15] ODELL:
because they were visiting, and I basically just subscribed them both to Maple, installed it on their phones. I, like, held their iPhones up to their faces, did the Face ID for them, and I left them in Llama. So it's kind of my fault.
[00:44:29] Marks:
That's right. But
[00:44:31] ODELL:
they're subscribers now. They're supporters. Awesome. And they do use it. I mean, look, this is also something I've been thinking about for a while, just, like, on the privacy side. Is, like, you can have decent digital privacy. But if your family or friends or your coworkers are taking your information and, like, the example I always use is, like, you know, posting a picture of you on Facebook or something, then you're screwed. Right? So, like, I don't want my parents or my cousins or my coworkers using ChatGPT either. Like, that's actually a security hole for me. And with Signal, I'll go back to Signal as the example. Like, I have my 90 year old grandmother using Signal because it's the only way she gets baby pictures. Mhmm.
She's not gonna use Matrix. She's not gonna use SimpleX. She's definitely not gonna use PGP. But the Signal trade off model works for her, and that's why I'm very optimistic on Maple, because, to me, there's a lot of similarities there in terms of the convenience security trade offs that are made. And Signal has done a really good job of that: of giving the user relative freedom and security, but also being approachable for that average person, which gives it an additional network effect, I think. Yeah.
[00:46:04] Marks:
That's interesting. I hadn't considered that vector where somebody is, like one of your cousins is chatting about you with ChatGPT, and now, suddenly, a lot of your information is in there. Or they take some email or text message that you sent to them. They're like, oh, help me finish this
[00:46:18] ODELL:
or write a response to this. Respond let me respond to this or Mhmm. Make a document. Let me make a document. Yeah.
[00:46:28] Marks:
Well, and then pretty much if you hop on a video call with somebody who's not privacy focused, there's gonna be an AI assistant that's I hate that shit. It's the worst. So that also is sweeping up all your data.
[00:46:42] ODELL:
Yeah. I hold that against everyone who does that. Yeah. Someone is asking why do you store time stamps?
[00:46:51] Marks:
Yeah. Why do we store time stamps? I mean, we do it because we need to know the token counts. And we're also we wanna build your history for you. There might be a way, and I'd have to chat with Anthony about this, there might be a way to, like, encrypt the time stamps as part of your content. But if we're gonna rebuild a chat and,
[00:47:15] ODELL:
you know, that's not even but it's also not that sensitive. Like, what do you get out of the
[00:47:21] Marks:
I mean, I guess, if somehow they figured out which email address you used to chat with Maple and you are in a public library. I'm thinking of Ross, you know, Ross here. You're in a public library. Maybe they can attach time stamps to a video feed of you using your computer or something. I don't know. I guess. Yeah. It's not that sensitive, though. No. It's not.
[00:47:41] ODELL:
I think there's probably a similar, another example with Signal on that. Like, I wouldn't be surprised if that's one of the single pieces of metadata that they have. I know they've gone to great lengths to not have, like, your network graph. They don't know who you're chatting with, but I wouldn't be surprised if they know when you chat. Yeah. Just because they're sending data Mhmm. At that specific time.
[00:48:05] Marks:
Yeah.
[00:48:07] ODELL:
Okay. I also wanna go back. I'm, like, hopping around a little bit here, but we keep going down holes, and then I pull us back. Just from, like, the parent angle, you mentioned the kids. I think ChatGPT just released some kind of kids mode. Really, my kids will never use the Sam Altman product. Mhmm. But it's something that I battle with all the time. Well, my kids aren't old enough yet, but I will battle. Like, it's something I do think about. How do you you know, Maple could be, like, this ideal so because you also don't want your kids to be Luddites. Right? It's the same issue with, like, the Internet, right, or, like, social media and stuff. Like, you want them to be prepared and ready for the, quote, unquote, economy of their future.
Yeah. But at the same time, you know, I don't want them, like, completely one shotted by AIs. So how do you think about that? Yeah. No. Definitely.
[00:49:14] Marks:
I mean, if OpenAI is listening to this, they're gonna hear our future potential product. But, no. We do want to Sam listens. Yeah. He's a ride or die listener. Yeah. Maybe it's more Proton that I'm worried about. But, no. We wanna come out with, like, some kind of family plan. We're in the idea stage right now for it. But ideally, it would be, like, you know, as a parent, you can sign up for a family plan. You've got your kids under you. And then you have the ability to see their chats, because you, as a parent, need to steward your children and raise them and understand what they're talking about with AI. But then as they get older, maybe you can give them a little more freedom.
And there are, like, you could potentially get alerts if they chat about something that's really concerning. Yeah. Like keywords or something. Yeah. Keywords that trigger an alert to you. It's something that we get from school. Our kids are in public school. And I know, you know, people can like that or hate it. But we get, like, a daily or a weekly summary of what they've been doing in Google Classroom through the week. And I think something like that would be really cool for AI, if parents don't have a problem. Do the kids get that too, or just the parents? Well, the teachers I don't know. I mean, I'm sure the teachers see it. But, like, we get an aggregate of all of their classes. Like, here's what the kid was doing in school this week in Google Classroom.
And so, I think with AI, it'd be really nice as a parent to say, like, okay, here's what your kid chatted about today. Here's a summary of the conversation. And then if you have concerns over any of these, you can dig in and go look at it. I'm a big believer that kids don't have full privacy, you know, especially when they're younger and you're trying to teach them the ways of life. And so as we've given our kids a phone, we've been, like, really closed off with what they have. You know, initially, they can only text us and other family members. And then we open it up to some friends. They don't even get group text messages at first, and then we allow them to be part of group text messages. And can you enforce that? Does Apple give you the ability to enforce that? Or Not the group stuff, but our kids know that we would look at their phones and So that's what you do. You would do phone checks? Yeah. So we just do phone checks, and we would just kinda spot check things. And our kids generally just kind of behave well.
Yeah. But over time, we stop reading the text messages frequently as they show us that they can, you know, behave well online. And there were times where we would see stuff and we're like, oh, we don't like that they were saying that. Or, man, they made a really bad choice here with this friendship, but, like, we don't wanna get involved, you know? Like, we're not gonna insert ourselves in the middle. And so you have to let your kids make mistakes and and burn themselves. So, you just you just kind of you kinda oversee them. Right? And I see AI fit in this because if your kids are are interacting at all with the real world, they're gonna run into AI. They're gonna be a friend's house that has an Alexa product, and they're all gonna be chatting with it and be like, oh, you know, tell me about boobs or something. And so, like, you, you need to you need to teach them this. And so we wanna give parents a a tool that gives them insight, but also is protecting their privacy because you'd like you said, you don't wanna have your kids chatting with Sam Altman.
[00:52:26] ODELL:
Yeah. I like that. I mean, in general, I think the answer is empower parents. And at the end of the day, it's parents' responsibility for all of these questions, whether it's social, whether it's phones in general. You know, give parents the tools to parent as they see fit. And, ideally, like, the more configurability, the better. Right? Like, the more options, the better on how parents and kids wanna approach that relationship makes sense to me. That lines up with my worldview. I mean, this is something that I think about all the time on the Nostr side too. Because, like, Nostr is, like, this weird thing where it's, like, a bunch of people that kind of hate social media trying to build out, like, a social protocol.
And it's, like, it's such a balancing act of how do you approach it in a healthy way. And then you constantly hear the complaints of, but what about the children? Right? We're seeing a bunch of age restriction laws happen around the world, in terms of the state trying to keep kids off of social media. And the answer is probably similar. Right? The answer is, well, empower parents, and then it's ultimately their responsibility for their kids to have a healthy relationship with their tools.
[00:53:45] Marks:
Mhmm.
[00:53:47] ODELL:
What do you think about Ben Carman's idea of being able to set your kid's system prompt?
[00:53:53] Marks:
Yeah. No. I think that's great. I just got handed a note from my kid saying there's a wasp nest right outside our front door. So I've got a task after we're done here. Yeah. Well, we can wrap shortly so you can go handle it. I think it's funny. That's pretty timely.
[00:54:07] ODELL:
Can Sam Altman take care of that wasp nest for you, or are you gonna have to deal with that? Yeah. No. Like, the exterminators are totally cooked. AI is gonna come take care of it for me.
[00:54:18] Marks:
No. I think it's great. I think parents should, in this, like, fictitious family plan that we wanna build, parents would have the ability to set the system prompt. And then, you know, again, the parents have the ability over time to be like, hey, kid, now you can do the system prompt if you want to. I think also, like, kids maybe can't delete chats in the beginning, or they go to, like, some trash folder that parents have access to. They can see what their kid is deleting. And, like, I'm always conflicted because I grew up during a time where my parents had no idea what I was doing. Right. I would go outside and play, and we would, like, venture off to another neighborhood. And as long as we came home when the street lights came on, we were good. And then when the internet came out and I got a dial up modem, I was in my bedroom dialing up and, you know, just doing who knows what on the computer, and my parents just had no idea what was out there. Yep. So I long for a day like that with my kids, but I also want to be realistic that, like, I can't just give my kids an unrestricted phone with unrestricted social media and unrestricted AI and just hope for the best and, like, huck and pray. Because we've seen too many, like, awful stories of things that have gone wrong. So I hate saying it's a different world now than it was when we were kids, but, like, it kind of is in many regards.
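The family-plan mechanics discussed here, a parent-pinned system prompt plus keyword alerts, can be sketched in a few lines. Everything in this sketch is an assumption: the names, the example keywords, and the naive substring matching are all made up, and Maple's family plan is explicitly still at the idea stage.

```python
# Hypothetical family-plan sketch: none of these names exist in Maple
# today -- the feature is at the idea stage per the conversation.
PARENT_SYSTEM_PROMPT = (
    "You are chatting with a child. Keep answers age-appropriate "
    "and encourage them to talk to their parents about hard topics."
)
ALERT_KEYWORDS = {"self-harm", "run away"}


def child_chat_messages(user_message: str) -> list:
    """Prepend the parent-set system prompt to every child chat, the
    way Ben Carman's idea would pin it outside the kid's control."""
    return [
        {"role": "system", "content": PARENT_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]


def triggers_alert(user_message: str) -> bool:
    """Naive substring check standing in for whatever real keyword
    alerting a production family plan would actually use."""
    text = user_message.lower()
    return any(keyword in text for keyword in ALERT_KEYWORDS)
```

Because the system prompt is prepended server-side of the kid's view, loosening the restrictions later is just the parent editing one string, which matches the "more freedom as they get older" idea above.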
[00:55:38] ODELL:
Yeah. I mean, when I was younger, like, I could run laps around my parents with computers. Like, I could lock them out of all their bank accounts if I wanted to. They had zero control over me. Mhmm. If anything, I was in the control position. I could read all their emails and shit. Yeah. So it is an interesting thought process to think about, but it's something that I think about all the time. I see Vague zapped 9,000 sats. Vague, good to see you in the Nostr live chat. Love to have you here. He's a true ride or die. He's been listening to both shows for a long, long time.
Okay. So you guys just recently launched this new developer API. What's the deal with that? Yeah. The developer API is cool.
[00:56:28] Marks:
So what it is, basically, is we have Maple, and it's one user interface. And you mentioned ChatGPT, you've never used it. But ChatGPT's user interface is better than ours. It has a lot more features in it. And it has billions of dollars being spent on employees to make it the best experience out there. So we obviously can't go toe to toe with it on everything, but there is this whole ecosystem out there of other tools that use ChatGPT. And OpenAI has developed this somewhat of a standard API for AI apps to use. There are competing standards, much like the XKCD comic. There are lots of competing standards out there. There's MCP servers, and there is OpenAI's API. We went with theirs for now. And so what you can do is you can open up the Maple app on your computer. I see BTC Pins is already, like, mentioning it here, but it's Maple Proxy. And you just hit start proxy, and it effectively is like a VPN. I mean, it's really analogous to that. It's a VPN for your chats. And now you can go use any other tool on your computer that uses OpenAI.
And you can say, instead of sending this to Sam Altman, send it to Maple, which is encrypted end to end. And so you can use a tool like Goose, made by Block, which is open source, very powerful. It also is a Swiss army knife that is difficult to learn at times. But you can use Goose and totally encrypt all of your AI traffic now. You can use something like Jan AI, which is also really powerful, kind of similar to Goose. And there's tons of them. There's, like, a whole list of tools. And that's for the end user. But then where it also gets really cool is it's a developer API. And so if you go on any app store and search for AI apps, you're gonna find, like, hundreds and thousands of apps that are AI powered. Maybe it's a calorie counter that's doing food tracking, and all it's doing is it's just a wrapper that's pumping all of your information to ChatGPT as its AI brain.
And so what we have provided now is, if someone's making an app like that and wants to be a little more privacy aware, they could simply just take the Maple proxy and run it on their server and then drop it in and now have their app talk to that. And they don't have to change any code other than having the, you know, code point toward their server. But other than They're just, like, changing the server endpoint, basically. Just change the server endpoint to talk to themselves instead of OpenAI. And now they get privacy built in, and they don't have to change their user interface.
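The endpoint swap described here can be sketched with an OpenAI-style chat request. The proxy address, API key, and model name below are assumptions for illustration, the Maple app displays the real proxy address when you start it; the point is that only the base URL changes, the payload shape stays identical.

```python
import json
from urllib.request import Request

# Hypothetical addresses for illustration -- the Maple desktop app
# shows the actual host/port when you hit "start proxy".
OPENAI_BASE = "https://api.openai.com/v1"
PROXY_BASE = "http://localhost:8080/v1"


def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> Request:
    """Build an OpenAI-style /chat/completions request. Swapping
    base_url is the only change needed to redirect an app from
    OpenAI to a local proxy -- headers and body are unchanged."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# Same call, different destination:
to_openai = chat_request(OPENAI_BASE, "sk-...", "gpt-4o", "hello")
to_proxy = chat_request(PROXY_BASE, "maple-key", "llama", "hello")
```

This is why the calorie-counter example needs no UI changes: the app keeps speaking the same wire format and only its configured server endpoint moves.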
[00:59:00] ODELL:
So, like, basically, like, there's, like, fitness apps and shit out there that are just using OpenAI as the back end, and they're they're like a front end app for whatever specific niche they're offering.
[00:59:11] Marks:
Yeah. Fitness apps and oh, go ahead. No. Go on. Go on. Continue. Yeah. No. I mean, the biggest one is the relationship apps. So you have I can't remember the names right off the top of my head, but, like, there's the one for girls who just want to have a relationship buddy, and there's one for guys who wanna have a relationship buddy. Like Character AI? Is that Character AI or is that something different? That's a little different. Character AI, I mean, it does that, but these are, like, specifically, like, I want my AI girlfriend and I want my AI boyfriend. That's, like, a big thing now? Oh, it's huge. And there were two that were really big, and they both got their databases hacked. And so all these people who were, like, having really salacious conversations with their AI girlfriend, you know, their stuff is now out there on the Internet for the highest bidder to grab.
But a lot of those are just piping through to ChatGPT because it's easiest. Right? They could run a model on your device if they want to. They could put a smaller model on there, and some of them do. So it's gonna be slower, and it does have better privacy, but most of them just go through to ChatGPT, and they pay a developer fee to do that.
[01:00:18] ODELL:
And so now you're trusting the app developer, and you're trusting ChatGPT with all your data when you're using that app. Yeah. I mean, the big thing with self hosting is not only the cost, not only the friction of actually self hosting, but the slowness. And, like, it really does add up. Like, it doesn't have to be that much slower to if you're using it on a daily basis constantly and you're, like, waiting for the thing to answer you, it's just a massive disadvantage. This is one of the reasons why I like the Maple trade off. Okay. So let's just so you have the developer API. And if you're a developer making an app, does it like, I guess, in the developer tools, they're choosing which model they're using. Right? It's like they can use any of the Maple supported models. Right? Yep. Yeah. So if they wanted to have the closest experience to what they're already doing, they can pick the OpenAI model that we provide. And then they can put, like, a new system prompt in there so that, like, everything goes through a special prompt and stuff for Yeah. Yeah. They can add that all in there if they want to. They still have all the control that they had before
[01:01:22] Marks:
except OpenAI is still going to muck with their data. Right? It has its own guardrails and censorship that it does, whereas Maple doesn't. We're totally open. They can go see what we do. So they actually might get a I think in some ways, they'll get a better experience because they are going to have raw access to the model that's there. And they can have complete customization and control over it.
[01:01:47] ODELL:
That's awesome. Yeah.
[01:01:49] Marks:
And then there's, like, a third category that's in between those two. So if people are listening to this, if you, like, are a business owner and you're using some tool internally let's say there's a company that wants to do, like, a support portal that has AI infused in it. That support software probably has a ChatGPT component to it. So you as a business, as, like, a Maple team user, could bring the Maple proxy into that arrangement and say, hey, instead of giving all of our support stuff over to ChatGPT, I want you to use the Maple proxy. And then that company doesn't have to know about Maple. They don't have to change anything, and you don't have to pay, like, a special integration service or professional services to them. You simply just provide them with the URL to go to. So you can privatize all of your internal data.
[01:02:41] ODELL:
Awesome. I want to break down the so on the Maple proxy side, how does that work? Am I running something locally on the same machine? For the first option, the user-focused one, if I'm using a different front end like Goose or something, are you giving me, like, a URL endpoint, like an API endpoint, or am I running something locally on my machine,
[01:03:11] Marks:
like, a Maple proxy app attached to that? Yeah. I mean, do you want me to show you? I've got it right here. Let's do it. Show us. Do that.
[01:03:19] ODELL:
So I'll do it with goose and You're good. You don't need to go kill wasps?
[01:03:24] Marks:
No. Actually, I know about the wasp nest. It's a mud dauber, and if anybody knows what mud daubers are, they're, like, totally harmless. So I don't have to worry about it. Okay. Let me share my entire screen, which is always dangerous.
[01:03:37] ODELL:
Here we go. Let me turn on I won't I won't pull it up until you're sure that you're not Okay. Doxing a bunch of information.
[01:03:44] Marks:
Alright. I am I'm good now. You can go ahead and share it.
[01:03:49] ODELL:
Okay. Let's pull this up. There we go.
[01:03:55] Marks:
Okay. So on the left here, I've got Maple open, and on the right, I've got Goose. Okay. So you can see what I was chatting about. I was grilling corn on the cob on Saturday, a nice little side dish for the pork I was smoking. So if you go in here, like, this is just the Maple desktop app that everybody can download on our website. And when you go to API access, you just go to local proxy, and it's got this default stuff configured here. And you can tell it to auto start every time you open Maple if you want. But I'm just gonna hit start proxy.
And then I copy this information over and paste it into goose. Another thing is I had to generate an API key, which I already did, but you generate an API key and I had to buy some credits. So, I bought some credits, and now I can use So these credits are separate from
[01:04:46] ODELL:
my yearly subscription or whatever?
[01:04:49] Marks:
Yes. Yeah. They're separate. Your annual subscription gets you access to the chat product and gets you basically a developer account with Maple, and you can come in and buy credits. There could be some time down the road where, like, pro plans come bundled with a certain amount of credits automatically, or we offer some kind of smaller developer account. We haven't gotten there yet. We're a small company, still building. But for now, it's available to pro users and up. So I've got my credits. I started the proxy, and then I can come in here, and in the settings, you can configure your providers, and they have all these different providers. And I went to OpenAI, and I simply just changed it over That's awesome.
to that. And now that it's changed over, I can come in here and let's go do a new chat. And So, like, it literally thinks you're using OpenAI. The actual front end has no idea. Yeah. So I select OpenAI, but it only has our models, not OpenAI's models. So I can do DeepSeek, select model, and I can say, you know, tell me a story about That's awesome. Yeah. About whatever. About clouds. And it's going to do its little thinking thing that DeepSeek does, so you can see how it's thinking through things, and then it starts writing the story for me.
And that's it. It's pretty simple. I would love to get it to where, when you go into settings, it doesn't look like OpenAI, it actually says Maple AI on it, but you can do that with other things. I've got Jan open here too, and it's doing the same thing. You know, I was doing tests earlier, so I've got Jan set up the same way with GPT-OSS. So I can say hello, and this is going through the proxy. In fact, I can show you, because if I stop the proxy and then say hello again, it's gonna time out
[01:06:38] ODELL:
error. That's cool.
[01:06:39] Marks:
Yeah. So it's fully encrypted going through, our back end using your own account.
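For developers, the reason front ends like Goose and Jan work unmodified in the demo above is that the proxy speaks the OpenAI-compatible chat completions API. A minimal sketch of talking to it directly: the localhost port, key format, and model id below are assumptions taken from the on-air demo, not documented Maple values; check the "API access" panel in the Maple desktop app for the real ones.

```python
import json
from urllib import request

PROXY_BASE_URL = "http://localhost:8080/v1"  # hypothetical local proxy address
API_KEY = "maple-example-key"                # generated inside the Maple app

def build_chat_request(model: str, prompt: str) -> request.Request:
    """Build an OpenAI-compatible /chat/completions request for the proxy."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        f"{PROXY_BASE_URL}/chat/completions",
        data=body,  # POST body; urllib infers the POST method from this
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("deepseek-r1", "Tell me a story about clouds.")
# resp = request.urlopen(req)   # only works while the proxy is running
# print(json.load(resp)["choices"][0]["message"]["content"])
```

This is also why stopping the proxy makes the client time out, as shown in the demo: the front end is just pointed at that local URL instead of api.openai.com.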
[01:06:46] ODELL:
That's awesome. Yeah. That's quite powerful. And then I can turn off your screen share. So then you don't have to deal with like, people can just use any UI they want, and they get the privacy and security guarantees, basically. Right? That's the at the end of the day. Yeah. So one of the biggest requests that people have is they want to have web search inside of Maple. I've heard some people say that. Yeah.
[01:07:11] Marks:
So in theory, you could use another tool that already does web search, and you wanna go to Is there one you could recommend? I need to figure out how to get Goose to do it. I know Goose can, but it's such a hard tool to use. Goose team, if you're listening, like, make your product easier. And maybe I could be the change I wanna see in the world and submit some more requests to it. But
[01:07:34] ODELL:
If I if someone wanted to vibe code, which model is the best that Maple has right now? I mean, I like, people love Claude, but Claude is proprietary.
[01:07:44] Marks:
Yeah. So we have Qwen3 Coder, which is, like, the best open source Who is that? Is that Alibaba? That's Alibaba. And in some metrics, it's as good as OpenAI and Claude in some things. Right? Or it's really close behind Claude I think it's Sonnet or Opus, I can't remember right now which is, like, the gold standard that everybody's going off of now. So it's the best, but it's also super expensive to use. And then You could spend thousands of dollars in credits every month using it. And they're harvesting all your information. Yeah. I know. It's crazy that Claude basically most people are using it to code now. And so it has, like, all the proprietary software out there.
Like, in flight, it's it's recording everything that people are doing. So all these companies that care about their IP and, like, have have access controls around their repos internally, like, they're they're still sharing it with Claude in the process.
[01:08:41] ODELL:
And then, like, they also talk about having to have policies among your team, because there could just be a random dev on your team that's using it. You might not even realize.
[01:08:53] Marks:
Oh, yeah. Cisco did a study last year, and they found that 48% of employees were sharing company information with AI tools. And it didn't differentiate between whether or not their companies knew about it, but they're doing it. And I've talked to plenty of people in person who say, yeah, my company told me not to do this, but I'm just hoping that they don't know because people wanna do good work. They wanna, like, get a promotion. You know? They wanna do their best, and so they're gonna use the tools that help them be most effective.
[01:09:29] ODELL:
Are you familiar with the work Alex Gleason is doing?
[01:09:35] Marks:
Is that the, Shakespeare? Shakespeare.
[01:09:38] ODELL:
Yeah. And and MKStack.
[01:09:40] Marks:
Yeah. So I listened to your dispatch with him, what, a week or two ago? That's that's the most I've gotten into it. Just listen to that one.
[01:09:51] ODELL:
MKStack is cool.
[01:09:53] Marks:
So what is MKStack?
[01:09:55] ODELL:
I mean, it's open. I don't think it's an LLM. Is it like, I don't know. I'm such a noob when it comes to AI stuff. What do they call them? Is MCP, like, the type of format or something? So MCP is what,
[01:10:15] Marks:
Anthropic came out with as their API for AI tools to talk to each other. So a lot of tools now have, like, an MCP interface on them, model context protocol,
[01:10:27] ODELL:
interface that they can His GitLab says it's the complete framework for building Nostr clients with AI. MKStack is an AI powered framework for building Nostr applications with React, Tailwind, Vite, shadcn/ui, and Nostrify. Okay. But I think it's independent of the model. I don't know. I was gonna just
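As background on the protocol mentioned above: MCP (Model Context Protocol) messages are JSON-RPC 2.0 under the hood. This is only the rough shape of a tools/call request a client would send to an MCP server; the tool name and arguments are made up for illustration.

```python
import json

# A minimal MCP-style tool invocation. "tools/call" is the MCP method name;
# the tool "web_search" and its arguments are hypothetical examples.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "web_search",                  # hypothetical tool exposed by a server
        "arguments": {"query": "Nostr NIP index"},
    },
}
wire = json.dumps(call)  # what actually travels over stdio or HTTP to the server
print(wire)
```

The server replies with a JSON-RPC result carrying the tool's output, which is how tools like Goose bolt capabilities such as web search onto any model.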
[01:10:50] Marks:
Yeah. I'd be curious what Shakespeare is using on the back end. Is it using Claude or one of these services, or are you bringing your own PPQ? That's
[01:10:58] ODELL:
my understanding is it's proxying into one of the big dogs. Yeah. I think Claude is what they're using in the back end. Okay. But it'd be cool to see you guys I don't know. Like, there's some crossover potential there. That's as far as I'm going with my feedback. I have no idea what I'm talking about. Yeah. I'm just a humble Maple user.
[01:11:19] Marks:
I have reached out to PPQ. We've chatted with Matt quite a bit, especially when we were doing the OpenSecret stuff earlier on. So he's aware of Maple and the Maple proxy, and we've talked about hopefully getting the Maple proxy up on PPQ so that people could use it, because I mean, everybody wants to use Maple but pay as they go. Like, they love the PPQ interface for privacy. And I would love to offer it too, but that's just not where we're focusing right now. So I see a nice middle ground where PPQ just proxies through to Maple.
[01:11:52] ODELL:
That's true. Like, I pay for the year in Bitcoin, and I just leave a bunch of tokens on the table. Like, I'm probably not using enough. I'm using it to support the mission Right. More than if, from a price point of view, I'm probably better off using it on a pay-per-token basis. So, like, if I I see what you chat about. I can see you could use it more. Well, you see my time stamps at least. You know my time stamps. Uh-huh. I,
[01:12:20] Marks:
well, actually, I don't think you know which user I am now. I don't know who you are. No. In fact, I'll see, like, some low key, influencer online be like, oh, I love Maple. And it's like, oh, I had no idea that you're using it. So, it's it's kinda cool. Well, so I because I know I'm gonna get shamed on this. I prefer to pay with Bitcoin.
[01:12:42] ODELL:
But the reason I originally signed up with card was because I was being lazy, and it was kind of hard to find the Bitcoin payment option. That is very lazy, because it's right there on the front. No. It wasn't on Apple. You had to, like, leave the ecosystem. It was just, like, an app I did, like, Apple Pay just, like, boom. Go. It's true. It was turned off in the early days on there. Yeah. Eventually, we got it turned on. And so then Tony shamed me, and then I went to desktop to pay with Bitcoin. And I intentionally, like, waited a few days, because I was like, I gotta make sure that I'm, like, fully disconnected.
[01:13:22] Marks:
Yeah. That's good. But, anyway, you got my time stamps. We had zero sign ups between then and when you signed up.
[01:13:29] ODELL:
There you go. I'm just Well, no. I made sure to onboard my parents. Oh, there you go. And then I, This is a great conversation. Last but not least, before we wrap here, let's talk about OpenSecret, the company. What is OpenSecret, and what is Maple? How do they relate to each other? Yeah.
[01:13:50] Marks:
Great question. OpenSecret is the foundation. It is the developer back end for Maple, and it all runs in secure enclaves. And so the concept is all these apps that you use on your phone, or any website you go to, it's just some server running somewhere else in a data center, and you're giving it all of your information. And so they just have this database with everybody's stuff in it. And that's why data leaks happen all the time: the database will get hacked, and it just has, like, tons and tons of rows of information on everybody. And any employee in the company can see that information as well. So OpenSecret instead takes each user and puts them in their own private data vault and encrypts it with a private key for that user.
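A way to picture the per-user vault model just described: each user's data is encrypted under a key that exists only for them, so a dump of the database is ciphertext with no single shared key to steal. This is a toy sketch of per-user key separation only; the derivation scheme, names, and master-secret handling are assumptions for illustration, not OpenSecret's actual enclave design.

```python
import hashlib
import hmac
import secrets

# In the real system the key material is generated and guarded inside a
# secure enclave; here a random master secret stands in for it.
MASTER_SECRET = secrets.token_bytes(32)

def derive_user_key(user_id: str) -> bytes:
    """Derive a stable 32-byte key unique to one user, so every user's
    vault is encrypted under a different key and no table-wide key exists."""
    return hmac.new(MASTER_SECRET, user_id.encode(), hashlib.sha256).digest()

alice_key = derive_user_key("alice")
bob_key = derive_user_key("bob")
assert alice_key != bob_key                   # vaults are isolated per user
assert alice_key == derive_user_key("alice")  # and stable for the same user
```

The point of the sketch is the access-control consequence: compromising one row, or one user's key, reveals nothing about any other user's vault.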
But it makes the UX really easy, because private keys are scary. They're difficult to manage. People don't want to do it, because now they have the responsibility on themselves to not lose that private key and then lose access to everything. So we're using secure enclaves to generate that private key and then encrypt from your device through to the database. So that's what OpenSecret is, and then you can build apps on top of it. So we've got Maple on there. We have a really sweet Cashu wallet that hasn't launched yet. That is Is that what calle was teasing? That's the tease that he threw out there.
So yeah. So I'm really excited for that. In fact, I've used it to pay for all sorts of stuff, and it's working phenomenally well. And so that's really what OpenSecret is. We've kind of toned it back a little bit. We had a lot of developer interest in the beginning, but Maple started to grow and take off. And especially as we're looking at raising money in a seed round soon, we're trying to look for, like, what's the best story? What's going to get, you know, the best traction with people? And talking about a private, you know, back end system for app developers is a bit harder of a story to tell, whereas, like, private AI, the private version of ChatGPT, or the Signal of ChatGPT, like, that's so much easier for people to grok, and they can use it. And I think that's a big part is,
[01:15:51] ODELL:
you know, investors being able to play with it. Yeah. I mean, that was the whole point of Maple in the beginning, right, was that it was trying to prove out the OpenSecret model. Right? It's like, look. You can build secure, hosted apps that are very user forward and very easy to use. And it was a hard pitch to make to people otherwise. So it's like, oh, you make Maple? Look, if you use Maple, this is using OpenSecret as the back end. Yeah. And it was a direct response to developer feedback. So we met with, like, I don't know, 10 or so developers when we were thinking about OpenSecret
[01:16:28] Marks:
and half of them said, yeah, we would love it. So half of them said, I don't know, maybe, maybe not, but almost all of them said, if you had a private AI, like ChatGPT but built on OpenSecret, we would totally use it. So Interesting. That's where it was born.
[01:16:41] ODELL:
I, but so, I mean, based on this Cashu wallet, like, how can developers just, what, reach out? Is it, like, opensecret.com or something? It's opensecret.cloud.
[01:16:52] Marks:
We do have a oh, it's a crappy address. Opensecret.cloud. People can sign up for the waiting list. And then we also have a Discord they can join. It's on the website. They can click in there and chat with us. But we are not onboarding any new developers right now, because we're so focused on Maple. Got it. So don't reach out. You could. Right? You could. But I find a lot of the developers that contact us actually want Maple as their developer tool. So now that we have the proxy, like, that's gonna satisfy a lot of their needs. They just want to talk to private AI. But, eventually, we will probably circle back around to OpenSecret. I don't know when that time will be, but I think that, when the market is there for it, we can basically productize that and bring on more developers.
[01:17:43] ODELL:
Yeah. I mean, or maybe not. I mean, the story of Slack to me is fascinating. You know that story. Right?
[01:17:49] Marks:
Oh, man. Refresh me. I read it a long time ago.
[01:17:52] ODELL:
It was like some random video game company that nobody's ever heard of, and they just built Slack for their internal comms. Okay. And then, like, Slack turned into a multibillion dollar juggernaut, and no one knows the video game company. Like, the actual product was just the way they were communicating while they were developing a game. Yeah. So, you know, it could be one of those stories. It's interesting. I mean, and then it's compounded on top of that even more, right, because the company started as Mutiny and then turned into OpenSecret. And I don't know. It's a pretty awesome journey.
Yeah. I'd love to see it. I see SoapMiner zapped us 21,000 sats. Very interesting convo. Appreciate you both. Marks, have you used SoapMiner's soap yet? I have not. You guys talk about it all the time. I need to get some. It's the best soap. If your birthday's coming up, don't say so on the stream. K. I'll get you a bar of soap. It's great. K. It's a good gift. Marks, this was awesome. Thank you for finally coming on. It's been a long time coming. I'd love to do this, you know, maybe every six months or so, like, do a check-in. The freaks have probably noticed, like, I try and intersperse Bitcoin shows with Nostr and AI shows.
So I'd love to have you on more often.
[01:19:15] Marks:
Yeah. That'd be great. I mean, the the AI space is changing rapidly, so we can just update every once in a while. Yeah. And you guys are shipping fast too. Like, freaks, like, the best way you can support what they're doing is by subscribing.
[01:19:30] ODELL:
It goes a long way. They're a small lean team. It's just him and Tony. Yeah. It's pretty impressive. Anyway, before we wrap, do you have any final thoughts for the audience?
[01:19:41] Marks:
I want to go see if there are any questions on Nostr that we forgot, since we talked about Nostr so much. Somebody asked why we removed document upload. We were really sad to remove it, but it was such a flaky service, and it kept going down. And we care about our users and their experience, and so we turned it off until we can build it right. So we're doing a bunch of rearchitecting right now of Maple to make it even better. And then that will be a foundation for web search and for document upload and other things.
I really want web search. Oh, yeah. That's yes. It's coming. We've already got a proof of concept with Kagi. So that'll be coming. Let's see. The other one was, can I I'm a free user, can I have access to more models? And Just pay. Maybe. Just pay. Sign up. Somebody else asked if we're gonna bring back our cheaper account. We did have a starter account, and we got rid of it. I don't know if we will. Just sign up for now and support the project. You can sign up for just a month if you want to. And maybe down the line, we will. But who knows?
[01:20:50] ODELL:
What is it? It's it's $20 per month.
[01:20:55] Marks:
$20 a month. If you pay for a year with Bitcoin, then you get 10% off. Then it's $216.
[01:21:00] ODELL:
You get every model that you guys support. Right? Yep.
[01:21:06] Marks:
And then if you're a business, or you work at a company and you wanna bring private AI to your company, we have a team plan. And it starts as small as two users and can scale up to the size of your entire company. And so it doesn't have to replace the other AIs. I keep telling people this. Like, I'm not out here saying stop using ChatGPT, stop using Claude, but add a privacy version to your toolkit as well. So that way, when you're reaching for the right tool, you have everything at your disposal.
[01:21:36] ODELL:
What about ImageGen?
[01:21:39] Marks:
That is such a tough topic. We don't have that. And think of the kids, man. Like, think of the kind of content that can be generated with ImageGen in a fully private model. We haven't figured out how we wanna handle stuff that is maybe just, like, really, really awful in the world. So I don't know what we're gonna do there yet. But we have so many other big things that we wanna build before even getting to that. Right? Like, we don't have web search and stuff. So we'll have to tackle that at some point. But for now, there are plenty of other great ways to generate images that you can use.
So Sure. I think I think I'll just kinda leave it at that.
[01:22:18] ODELL:
What do you think of the, like, the people making, like, videos from still images?
[01:22:30] Marks:
Oh, yeah. I mean, those are fun. They're really cool. I think a lot of these things are just, like, hobbies right now. People are playing with them just for fun as, like, a means of expression. So I think it's cool.
[01:22:44] ODELL:
But have you seen, like, the argument that, like, people are just, like, making fake memories? Like, they're taking old photos of, I don't know, like, a a father that passed away or something, and they're, like, supplanting them with fake memories?
[01:22:59] Marks:
Yeah. I mean, if you think about it, our mind is kinda full of fake memories too. Right? We don't we don't remember the past perfectly and then we look at something or we hear something and our mind adjusts the memory on its own. So
[01:23:15] ODELL:
no. I mean, what what you're saying is true. What's a real memory? Yeah. Whatever. I don't know. Anyway, on the ImageGen stuff, like, I know it's a sticky subject or whatever, but I will just say, like, unless there's a privacy focus like, I can't use any of the current tools with, like, family photos or anything right now. There's nothing out there. That's true.
[01:23:37] Marks:
So one thing we could do is we could just put guardrails around it, and it will be transparent. Right? We're not trying to do any kind of censorship on people that is secret and subversive. So if we do ImageGen and we put a lot of strong guardrails around it, you'll be able to see what they are, and it's open source. So you could adjust them if you want to. Right. Maybe that's the solution that we go with down the road.
[01:24:00] ODELL:
Okay. Well, it's all fascinating. We'll figure it out we'll figure it out as you go. Yeah. Marks, thanks for joining.
[01:24:08] Marks:
Do you I asked you if you had final thoughts, and you went and answered questions on Nostr. Do you have any additional final thoughts? Final thoughts? It's just that whole concept of a toolkit. Like, go to trymaple.ai and get a free account. Everybody has a free ChatGPT account except for Matt. So go grab one and just have it available to you, and you'll start finding, as you chat with ChatGPT, you're like, oh, I actually don't wanna say this to this company right now. And you can go over to Maple, and it's a really liberating experience to be able to chat openly with something that's protecting your privacy. So just give it a try.
[01:24:45] ODELL:
Love it. I would just say, consider don't stop at the free account on Maple, because it is a big upgrade. Like, you pay $18 a month, get a ton of tokens, you get all the other models. And what's the free tier? Is the free just Llama? It's just Llama, which is a decent model. But Llama is so bad.
[01:25:04] Marks:
It's decent.
[01:25:05] ODELL:
But It's good that it's quick. The nice thing about Llama is, like, it's super quick. Like, it answers you right away. Yes. It's not thinking or anything, but it's not thinking because it's retarded. It just spits out whatever is in its mouth. Well, and it's stuck in the past. Right? It's an older model, and it just continues to get older. So yeah. Have you noticed that the GPT open source one just wants to make tables of everything? Yeah. I finally started telling it, don't make tables, and I get really mean to it sometimes if it does. I always have to tell it to keep it brief. I constantly am just like, just keep it brief. Don't make me a table. It's insane. There you go. Okay. Marks. Fantastic. Thank you for joining us. Good luck with the wasp nest. Thanks. Freaks. I'll see you next week.
All the links to dispatch are at citadeldispatch.com. Share with your friends and family. I appreciate you all. Stay humble, stack sats. Alright. Have a good one.