A few great tips for ensuring your R package doesn't "talk too much" (within reason), shrinking down the size of your images with a new API directly available in a new package, and the first opportunity in 2024 for submitting your proposals for R Consortium projects is on the horizon.
Episode Links
- This week's curator: Jon Calder (@jonmcalder) (Twitter)
- Please Shut Up! Verbosity Control in Packages
- R Consortium Infrastructure Steering Committee (ISC) Grant Program Accepting Proposals starting March 1st!
- Optimize your images with R and reSmush.it
- Entire issue available at rweekly.org/2024-W07
Supplement Resources
- {lifecycle} Manage the life cycle of your exported functions and arguments https://lifecycle.r-lib.org/
- {logger} lightweight, modern and flexible, log4j and futile.logger inspired logging utility for R https://daroczig.github.io/logger/
Supporting the show
- Use the contact page at https://rweekly.fireside.fm/contact to send us your feedback
- R-Weekly Highlights on Podcastindex.org - You can send a boost into the show directly in the Podcast Index. First, top-up with Alby, and then head over to the R-Weekly Highlights podcast entry on the index.
- A new way to think about value: https://value4value.info
- Get in touch with us on social media
- Eric Nantz: @theRcast (Twitter) and @[email protected] (Mastodon)
- Mike Thomas: @mike_ketchbrook (Twitter) and @[email protected] (Mastodon)
Music credits powered by OCRemix
- Sunny Side Up - Yoshi's Island DS - ZackParrish - https://ocremix.org/remix/OCR04558
- Salut Voisin! - Final Fantasy IV - colorado weeks, Aeroprism - https://ocremix.org/remix/OCR04553
[00:00:03] Eric Nantz:
Hello, friends. We're back with episode 152 of the R Weekly Highlights podcast. We're happy you join us wherever you are around the world. And this is the weekly show where we highlight, no pun intended, the awesome highlights section of the R Weekly website, this particular issue at rweekly.org. My name is Eric Nantz. Yeah. We're almost midway through February. This has always been the time of year where I kinda wanna get to spring now, you know, just to cheer things up a bit. But, yes, we are back, and I'm not alone. I'm always joined at the virtual hip here by my awesome cohost, Mike Thomas. Mike, how are you doing today?
[00:00:40] Mike Thomas:
Doing well. Waiting on spring. Also, we had about 60 degree weather here in New England this past weekend. And today, we got a foot of snow. So
[00:00:50] Eric Nantz:
Them's the breaks. Them's the breaks, man. It just never ends, does it? Yep. But we'll get through February soon enough. But one way to speed things up is listening to us banter about this week's R Weekly issue. And if you're not familiar with the project, every week we have a new curator take the reins. And this week, it is Jon Calder, another longtime member of our curation team. And as always, he had tremendous help from our fellow R Weekly team members and contributors like you all around the world. So I wouldn't be shocked if, for some of you listening to this podcast, there might be a section or two where you hear yours truly kind of ramble, and you'd be like, yeah, yeah, we get it, Eric. Maybe tone it down a notch. Well, guess what? That can also happen with R packages as well, especially those that like to, through no fault of their own, give you a heads up through messages or warnings or other diagnostics as operations are running.
But there may be some cases where you want the user to be able to opt in or opt out of this kind of behavior. So our first highlight is gonna talk about, as a package author, how you can take advantage of some nice utilities both within R itself and within the R ecosystem to help give your package users a little more control over the verbosity, so to speak, of their package experience. And this comes from the awesome rOpenSci project once again. In particular, the blog post has been authored by Mark Padgham, who is an open source software developer at rOpenSci, as well as, returning once again, Maëlle Salmon. The streak continues with highlights that she's involved with. And this blog post starts off with, basically, if you've written any package or function, you've probably done this once or twice or, frankly, in my case, a lot more, where you have a function. You know it's going to do some complicated stuff. And especially for you as a developer, you want to see what's happening in your console as things are being processed.
And you might have an argument that says something like verbose or quiet as a simple true or false, just a little switch to turn that diagnostic output on and off. It's spread throughout the R ecosystem. You're going to find many, many functions and packages that take this approach. But as they say in the blog, it can introduce a bit of clutter and make the user customize this every time for each function. Well, maybe instead, the approach you might wanna take is having this configured at the package level. And that's where using an option could come in quite handy, where you, the package author, might have an option saying, you know, is my package's messaging gonna be quiet or not?
And then the user could set that option themselves and run the function without having to introduce another parameter in that function call. And then if they want the situation switched, they can just set that option again and rerun the function without any changes to the function code or the function call, and it will give them the behavior they want. So that is one approach, and we're starting to see more of that. But, you know, guilty as charged here on this very podcast. I have not done this nearly enough, but I'm definitely thinking about this approach.
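As a rough sketch of that pattern (the package name mypkg and the option name mypkg.quiet are hypothetical, not from the post):

```r
# A hypothetical package function that consults a package-level option
# instead of taking its own verbose/quiet argument.
crunch_numbers <- function(x) {
  quiet <- getOption("mypkg.quiet", default = FALSE)
  if (!quiet) {
    message("Crunching ", length(x), " values...")
  }
  sum(x)
}

# The user opts out once per session (or in their .Rprofile):
options(mypkg.quiet = TRUE)
crunch_numbers(1:10)   # runs silently

# ...and can switch back without touching the function call:
options(mypkg.quiet = FALSE)
crunch_numbers(1:10)   # prints the message again
```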
Now, as usual, the community comes to the rescue again, so to speak. If you want to tap into somebody having done this boilerplate for you, guess what? There are packages that we talk about on this show quite a bit, the cli package and, by proxy, the rlang package, which have built a lot of this functionality into their own equivalents of the typical message, warning, and stop functions that you find in base R. And so you can opt into using rlang and cli in your package, if you want to take a dependency on them, to get this kind of behavior at a package level.
They have functions called inform and cli_inform, and then there are options that you can tweak at the cli or rlang level that your package could tap into. And so that is another great approach if you just want to take advantage of other great work to control verbosity in your package. Now there are some things that you wanna consider with that, especially if you think about how deep you wanna go with this. One of those is simply how you wanna control the level of verbosity.
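A minimal sketch of that rlang/cli flavor of the same idea (the function and message text are made up for illustration; the global option is the one the post discusses):

```r
# cli::cli_inform() and rlang::inform() are classed alternatives to
# base message(), so their verbosity can be steered by options.
crunch_numbers <- function(x) {
  cli::cli_inform("Crunching {length(x)} value{?s}...")
  sum(x)
}

# Silence these messages for the whole session:
options(rlib_message_verbosity = "quiet")
crunch_numbers(1:10)
```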
[00:05:42] Mike Thomas:
Yeah. So one interesting thing that I've seen done in a lot of packages that I've used, but never really knew exactly how to implement it in some of the packages that we've authored is the idea of displaying a warning or a message only once per session. I think that's that's really interesting. I think it's really powerful because it sort of lets lets you know, okay, this is something that you should be aware of but we're not gonna continue to to throw it in your face over and over and over again. And, I think that's really useful, probably a nice feature of the the R ecosystem. I don't know if that really that concept exists in other ecosystems as well.
But you're able to do that by setting the .frequency parameter of the rlang messaging functions to the string "once", which I hadn't seen before. So there's a great little section here that talks about exactly how to do that, and I am looking forward to trying to implement that in some of the packages that we've developed, because I think that could be a much better user experience, to just be able to display some of these messages or warnings only one time per session. Then Maëlle also talks about, and, excuse me, not just Maëlle, but Mark as well, they also talk about regaining package-level control from your global options. And this is sort of the issue that takes place when you have dependencies, right, upon other packages that are using rlang or cli or withr to display messages, and you don't necessarily have as much control to turn that verbosity on and off because it's not your package. It's not your code. It's a dependency of your package. So there's a really nice couple of code snippets in here that employ the local_options function from rlang, which is really interesting. And that would allow you to essentially locally, within the function that you're authoring, turn verbosity on or off. So you could set rlib_message_verbosity as verbose or once or essentially whatever you want and control it that way. To me, it feels withr-ish, but it's a function called local_options from the rlang package that allows you, within that function, to handle the verbosity specifically.
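A compact sketch of those two ideas, once-per-session messages and a locally scoped verbosity override (the package context and the dependency call are hypothetical):

```r
# Emit a message at most once per session, keyed by an id:
notify_deprecation <- function() {
  rlang::inform(
    "The `old` argument is deprecated; please use `path` instead.",
    .frequency = "once",
    .frequency_id = "mypkg_old_arg"
  )
}

# Temporarily silence rlang/cli messages coming from dependencies,
# only for the duration of the calling function:
quiet_wrapper <- function(x) {
  rlang::local_options(rlib_message_verbosity = "quiet")
  some_chatty_dependency_function(x)   # hypothetical dependency call
}
```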
And I think that a lot of these concepts and code snippets can be especially helpful when you are developing your package or debugging your package because maybe you you know that this warning message is going to pop up, and you're you're trying to do a lot of little tweaks and you don't necessarily wanna see that over and over and over again as you develop. But, obviously, in in the production version of your package that you're going to release out to the world, you you do want those warnings to pop up to users who aren't going to be running the same exact function over and over and over again like you might be doing during development. So I thought that that was a really powerful concept too. There's some fantastic, R code snippets as well. And it's a really great overview and blog post around this topic of verbosity, which which, again, Eric, I think is pretty unique to the highlights and not one that we see too often. And it gets back to really usability, user experience, and, you know, how easy it is for other folks to be able to to use and be comfortable with your r package. And to me, the little things go a long way. I know you feel the same way. That's certainly a a shiny concept as well, but I think it it extends very much, into our package development to try to make the tools that we create as useful as possible to others.
[00:09:18] Eric Nantz:
Yeah. It was a really excellent summary here. And, also, I kind of laugh about this, but I stumbled into this by happy accident almost, as I'm updating an internal package at the day job where I'm deprecating a couple of function arguments in favor of a simpler third one. And I tapped into, you know, looking at the R Packages online book, talking about their deprecation section, and I saw a package called lifecycle, which is what the tidyverse often uses to give these messages. And, apparently, they default to once per session, as you said, with the rlang options of, hey, this function parameter for, like, say, tidyr's gather or spread is deprecated. Please use this instead. And lo and behold, I was able to tap into that with my internal package as well, which, you know, has been over 5 years in existence, if not longer. And it's only now that I'm introducing this, in essence, soft deprecation right now.
And then a version later, it's gonna be like it's gone after that. But I'm being nice right now and saying, hey, guess what? Use this new path argument, not these, like, two arguments instead. And this only displays once per session, so they don't get annoyed by it, but it's enough to get the hint. So it's nice that lifecycle is just another one of these packages where we wanna see how others wrap the use of rlang or cli. That's a great demonstration of it. And, yeah, Mike, it's something where I just haven't done this a lot in practice, but it's good seeing what options are available, whether it's just a simple Boolean to turn it on and off or the different levels of it. Another great complement to this as well, if you wanna do more systematic messages and maybe parse them by other systems, is that a lot of these concepts also apply with the logger package, which I've been using quite a bit in my more back-endy kind of package development and Shiny apps, where I need to send that session kind of operation pipeline diagnostic to not just the R console, so to speak, but also to, say, a database where I'm keeping track of all the activity, so I know where maybe some of the gotchas are or where time is being spent on my app or my package. So lots of the principles here can apply in many different ways. So it's really, really good to see here.
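For reference, the lifecycle pattern Eric describes looks roughly like this (the function and argument names are placeholders):

```r
my_fun <- function(path = NULL, dir = NULL, file = NULL) {
  # Soft-deprecate the old pair of arguments in favor of `path`;
  # lifecycle shows this warning once per session by default.
  if (!is.null(dir) || !is.null(file)) {
    lifecycle::deprecate_warn(
      when = "2.0.0",
      what = "my_fun(dir)",
      with = "my_fun(path)"
    )
    path <- file.path(dir, file)
  }
  readLines(path)
}
```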
[00:11:34] Mike Thomas:
I couldn't agree more, Eric. And those are two great shout outs to the lifecycle and logger packages. I think they complement these concepts nicely.
[00:11:52] Eric Nantz:
Speaking of complements, I don't have a great segue for this, but I'll go with it. Bear with me, Mike, here. But you remember the early days of the Internet, when you would load these pages that were, you know, put together on services like GeoCities, things like this? And there might be a page that you found that's kind of entertaining. But did you notice that it took a while to load? It has this big image, and it just scrolls slowly, slowly, slowly, slowly into focus until it's finally rendered. That's one of those cases where an image was probably uploaded in its most raw form possible, you know, straight from whatever software they used to produce it or the camera that took the picture. Who knows what else? Well, this can also happen if you're developing or writing a blog, maybe with R Markdown or another framework.
And you notice that, yeah, that image you put on there, that's a bit hefty. Well, there are ways that you can optimize that for web viewing or other documents where you want to minimize the footprint. That's where our second highlight comes in. This is a blog post called Optimize your images with R and reSmush.it. Fun name. This is authored by Diego H. I couldn't quite track down what he does, but, apparently, he is part of the rOpenSpain project, which is quite intriguing. But he talks about a recent need that he had in his work with one of his package vignettes, for his package called tidyterra, which has a lot of images.
And he likes to include precomputed images or precomputed results in his vignettes. And he noticed that that was producing larger file sizes than maybe the CRAN maintainers would like in vignettes and such, in PDF form or whatnot. So he decided to write this R package that ties into this service called reSmush.it, which is an API. And this is interesting. It's freely available. You don't need an API key for this, at least yet. For an image of, say, 5 megabytes or less, it'll give you a way to compress it with various optimization algorithms. Throw your PNGs or JPEGs or bitmaps or whatever have you at it, and it will give you back that compressed image.
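Based on the post, usage looks roughly like this; a sketch assuming the package exports a resmush_file() helper for local files (check the package docs for the exact interface):

```r
# install.packages("resmush")
library(resmush)

# Compress a local PNG through the free reSmush.it API (no API key
# needed); an optimized copy is written alongside the original.
resmush_file("figures/big-map.png")
```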
And with the demonstration in this blog post, it really looks like there's no discernible difference. Really top notch, easy to use. And it seems like this is a great use case for not just those static files you might produce as part of a standalone document, but it looks pretty friendly for online files as well. So, yeah, I'm intrigued by this. I never heard of this service before, but curious, Mike, if you've had to deal with image sizes and looked for a way to optimize those further like this? I have but,
[00:14:56] Mike Thomas:
you know, I haven't found a great tool for optimizing image sizes. So this blog post is very, very timely. It's pretty incredible to me that he was able to get this, you know, geospatial image down from 1.7 meg to, like, what is it? 762 kilobytes? I mean, he almost took an entire meg off of this image size and I can't tell the difference at all. And obviously, when we are submitting packages to CRAN, I think 10 meg is the top, the largest package size that you can submit to CRAN, if I'm not mistaken, Eric. Does that sound right? That sounds about right. Yeah. That sounds right. So I mean if you have detailed vignettes, especially ones that are containing images and things like that, you can hit that fairly quickly with some of these additional assets like images, and it makes it very difficult.
I will say this blog post is going to be super helpful for some folks who are trying to create detailed vignettes with images and are looking to ensure they stay under that CRAN threshold. It looks like reSmush allows a lot of different formats; some of the existing packages like xfun, tinieR, and optout maybe handle a couple of these cases like PNGs and JPEGs, but not necessarily GIFs or BMP files or TIFFs or PDFs. And reSmush handles all of these different formats, I guess, except PDF. It looks like that's the only format it doesn't handle. But, obviously, I think most of the time when you're working on image compression, you're typically working with probably a PNG or a JPEG or a TIFF file or something like that, in the case of some of these geospatial images. So this is super handy. One of the other things this just reminded me of that I will call out as well is, if you are trying to create very thorough documentation around your R package, particularly with a pkgdown site, you'll often see a section on the pkgdown site that has articles on it. And initially, I thought that that was just sort of a representation of the vignettes that you have in your R package.
And it is. But you can also do something called creating an article, which is not a vignette, but it lives on your pkgdown site within that same articles section and doesn't go to CRAN if you were to submit that package. It's ignored from the R build, and you can create one of these articles, if you will, using the usethis package's use_article() function. That's really handy, I think, when you are trying to add additional documentation around your package that isn't necessarily a vignette that folks using your R package in an interactive setting within an IDE need to have access to. You know, as long as they have an internet connection, they can go take a look at this articles section on your pkgdown site. And it's not something that's going to be built as part of your R package. So it'll help save that size if you are looking to keep your package size smaller. So I think these are all fantastic resources. I'm excited to check out the resmush package. It looks like it takes advantage, Eric, you may have mentioned this, of this reSmush.it free online API that does a lot of the heavy lifting here, which is pretty cool. So shout out to that service as well, and shout out to Diego for this awesome R package.
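For reference, that pkgdown-only article workflow looks roughly like this (the article name here is just an example):

```r
# Creates vignettes/articles/plotting-gallery.Rmd and adds the articles
# folder to .Rbuildignore, so the article appears on the pkgdown site
# but is never built into (or shipped with) the package itself.
usethis::use_article("plotting-gallery", title = "Plotting gallery")

# Later, pkgdown renders it under the "Articles" menu:
pkgdown::build_site()
```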
[00:18:33] Eric Nantz:
Absolutely. And, apparently, this reSmush service has been used heavily in content management frameworks like WordPress and Drupal. I mean, those names may sound familiar. So, apparently, it's battle tested, as they say. So I'm curious to give it a shot. And not to sound too meta here, but in R Weekly itself, whenever you see images in each issue, we actually do have some routines that go through the images that the user has linked to. We used to have an automated way of doing this; now it's a script way of doing it, where we compress those images using some other utilities. So I'll have to see if resmush can give us an even better take on that. But it is a nice segue to say that there's more than one way to do this in the R ecosystem. So Diego's blog post also talks about some functions and packages from Yihui, who, of course, we've mentioned many times on this very podcast.
His xfun package has two functions called tinify and optipng that you can use if you have those utilities on your system: one uses a service called TinyPNG, an API service, and the other uses the OptiPNG system library. So, yeah, you may find that there are other routines in the ecosystem that can help you just as much as the resmush one. But it's a good roundup at the end of the blog post if you wanna see just what is available in this space because, as usual, there's always more than one way to handle it. These will have their advantages and disadvantages. But, hey, an API service that's free to use almost sounds too good to be true, but, apparently, it's still there. So we'll keep leveraging it, I guess, after seeing this blog post.
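For comparison, the xfun alternatives mentioned here look roughly like this; tinify() needs a TinyPNG API key (the TINYPNG_KEY environment variable below is just one way to store it, see ?xfun::tinify), and optipng() needs the OptiPNG binary installed:

```r
# TinyPNG route: requires an API key from https://tinypng.com
xfun::tinify("figures/big-map.png", key = Sys.getenv("TINYPNG_KEY"))

# OptiPNG route: losslessly recompresses the PNGs under a directory,
# assuming the optipng command-line tool is on your PATH.
xfun::optipng(dir = "figures")
```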
[00:20:16] Mike Thomas:
Free to use and no API key. It's pretty incredible.
[00:20:33] Eric Nantz:
Yes. And wouldn't it be nice, Mike, if everything we could do just didn't really have a big cost to it or much time involved? It just magically worked. Well, sometimes you have this idea for a major project that can benefit perhaps the entire R ecosystem, whether it's a package or a suite of packages or something to help with community efforts as well. Well, this is the time of year to think about it if you wanted some backing for it, because our last highlight is more of my call to action or public service announcement here: the R Consortium, for another year, is about to open the opportunity for their Infrastructure Steering Committee to accept proposals for grants backed by the R Consortium itself. So if you're not familiar with the R Consortium, we've talked about it quite a bit on the podcast, but just to give a brief background: it is a joint collective effort of multiple companies within industry as well as the R Foundation to help bring support for infrastructure of the R project itself and community efforts that support the mission of R.
And so starting March 1st this year, they will have the open call for proposals. And you may wonder what these proposals look like. Well, guess what? On the R Consortium site, they do have a section on all of the already funded projects, if you wanna get familiar with what's been funded before, both on a technical level and a community-effort level. And this call for proposals is a bit on the technical side, but it can be used to help things with community efforts too. Actually, a listener of this very show, a regular listener, Jon Harmon, has been funded by the R Consortium for his API-to-R package development, which we've been keeping our eyes on in the suite of packages that Jon's been developing. That's been really fun to watch.
We've seen efforts like DBI go through the R Consortium, which is, of course, the translation of database APIs to R. That's been huge in my daily work, and seeing DBI really take their infrastructure up a few notches was great to see as well. And, like I said, there are technical efforts to bootstrap the community as well. So there's lots of opportunities here. So if you or a team are thinking of a way to help the R ecosystem, help the R community in general, and you want support from the R Consortium, this is the time to get those proposals ready to go. And I can speak from experience working with the R Consortium in my submissions working group. It's been a pleasure to work with them, and they've been instrumental in a lot of our life sciences efforts. So certainly, highly recommend checking that out.
[00:23:34] Mike Thomas:
No. And, again, this is one of those things that just makes me love the R ecosystem. You know, the fact that we have initiatives like the R Consortium who are trying to essentially, you know, further the science and improve the world using R software. And it's incredible. It warms my heart. So, you know, this is also another reason, if at year end you start thinking about donations and causes that you or your company may want to support, to take a look at rOpenSci, take a look at the R Consortium. I guess this is part of the Linux Foundation projects (Correct.), which is pretty cool. And consider, I would say, contributing to one of these causes, because they're phenomenal and they help everybody. And I do like the fact that this is both a technical and social sort of call.
So submit your proposals, check out the blog post. All the information that you could ever need is in here, including a link to learn more about how exactly you can submit that proposal. And those important dates around the grant cycle are that it opens March 1st and closes April 1st. So you have that one-month window. And then, I guess, there's a second grant cycle that'll be later in the fall of 2024. So keep your eye out. I'm sure we will rehash this as some of these dates get closer. And I'm looking forward to seeing what comes out of this next round of proposals.
[00:25:00] Eric Nantz:
Yeah. And I've got my eye on this in a few ways. So that'll be the teaser in this business to say stay tuned. But in any event, as you said, Mike, we're gonna continually keep an eye on this. And you may be wondering, well, how do you keep an eye on efforts like this in general? Well, guess what? It's all in R Weekly itself. Right? Each issue does have a section on upcoming events, maybe webinars or presentations where you might hear about these efforts, and there is a specific section on grants and funding proposals. So this is right in that very section. So that's why, if you haven't bookmarked R Weekly, you definitely should. Am I biased? Heck, yes, I am, but in a good way, of course. And speaking of the issue, there's always more than just the highlights that we touch on here. So we'll take a few minutes to talk about the additional finds that we had here. And for me, I found a quite entertaining post from Angela, who has the R Critique blog, about some of the discoveries she had about, quote, unquote, hidden objects.
It definitely had her scratch her head a little bit. At a high level, what she discovered is that in R, much like in your Linux or Unix system, you can have objects that have a period in front of their name. That's kind of the nomenclature in Linux and Unix to denote what we call a hidden object, which, by default, you're not gonna see right away, only if you use, like, a special option to show hidden files. Well, guess what? When she tried to wipe out objects in her RStudio session using, like, the erase button in the environment pane or whatever it is, she saw a dialogue appear that says, okay, I'll wipe out all your objects. And then there's a checkbox saying include hidden objects as well.
That was kind of bewildering, and it would be for somebody new to this. But apparently, you can take advantage of this in R. Maybe you want to make an object that's more, like, behind the scenes, and you don't want it to be front and center in any environment viewer. Putting the period prefix on that could help in that case. In any event, she does have some interesting commentary on how that might not be the best approach depending on your situation. But nonetheless, it's one of those things where I can sympathize with seeing something as odd as that and kind of going deep into just why that is. And, apparently, it's still a feature request to be able to view those more easily in the viewer. But, yeah, to each their own, I guess. But an entertaining read nonetheless.
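For anyone who wants to see this behavior for themselves, a quick illustration in a fresh R session:

```r
x <- 1          # a regular object
.secret <- 42   # dot-prefixed objects are "hidden"

ls()                      # only "x" is listed; .secret is hidden
ls(all.names = TRUE)      # now both ".secret" and "x" show up

# The checkbox in RStudio's erase dialog corresponds to this:
rm(list = ls())                      # removes only the visible objects
rm(list = ls(all.names = TRUE))      # removes hidden objects too
```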
[00:27:38] Mike Thomas:
No. That's a great callout. And sometimes, when I am removing some of these objects in my environment, and I think I'm always checking that checkbox to include the hidden objects as well, sometimes my memory doesn't really decrease as much as I was sort of expecting it to. So I gotta look into that, another hidden feature of RStudio that I have to dive into, maybe for another blog post for another day. But an awesome blog post that I am super excited about, because I've been watching this project, and it's been, I think, probably over a year in the making with some folks working really, really hard on it from ideation to now production, is that the censored package has arrived. I believe censored 0.3.0 is now on CRAN, and this is a package that allows you to work hand in hand within the tidymodels ecosystem for censored regression and survival analysis. So if you are in the life sciences, wink wink, to somebody I know, I think you may be very excited about this package as well. You may be surprised that even for some of us who are not in the life sciences, for credit risk problems, which are problems that we work on fairly often, we may want to know the time until a loan defaults (Absolutely.) or until some bad credit event takes place or some good credit event takes place. You can have the opposite, right? If somebody's gonna prepay their mortgage early, I kinda wanna know when that could potentially happen. But now I'm getting into the weeds, because survival analysis makes me very happy and gets me very excited. And I'm nerding out, but this censored package is all I've ever been looking for to be able to do survival regression analysis within the tidymodels ecosystem.
There are all different type arguments that allow you to predict survival probabilities, the quantiles of the event time, the event time itself, or the hazard, just based upon tweaking that one argument, along with an evaluation time, for the particular engine that you are setting. And if you use tidymodels, you'll be familiar with the syntax of how you construct these models. It's gonna feel very natural. So I'm very excited about this. Big thanks to Hannah Frick and Emil Hvitfeldt for, I think, doing a lot of the legwork to get this package onto CRAN and in a place where we can all use it.
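A rough sketch of what that workflow might look like, using the classic lung dataset from the survival package (the exact engines and prediction arguments, including the evaluation-time argument name, are documented in the censored package itself):

```r
library(censored)   # loads parsnip and survival alongside it
library(survival)

# Fit a proportional hazards model in the tidymodels/parsnip style.
ph_fit <- proportional_hazards() |>
  set_engine("survival") |>
  set_mode("censored regression") |>
  fit(Surv(time, status) ~ age + sex + ph.ecog, data = lung)

# Predicted survival probabilities at specific follow-up times,
# plus the expected event time itself.
predict(ph_fit, new_data = head(lung),
        type = "survival", eval_time = c(100, 365))
predict(ph_fit, new_data = head(lung), type = "time")
```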
[00:30:03] Eric Nantz:
Oh, this is awesome. Yeah. I remember when tidymodels first came on the scene. Survival analysis was requested quite heavily in the beginning, and it's great to see this factored in. Hey, man, I have a soft spot for survival techniques as well. My dissertation used them quite heavily, and, boy, we didn't have tidymodels around back then. So don't look at my R code from back then. My goodness. Shout out to those that dealt with the Sweave and LaTeX compile issues. I see you. I hear you. I've been there. Done that. But, yeah, censored looks absolutely awesome. In fact, we have some projects going on that are, you know, not quite the typical did-a-patient-survive-or-not question. It's certain events in, say, a trial, like the life cycle of enrollment prediction and things like that, and we could definitely use some time-to-event analysis on that. So I will definitely be keeping an eye on this.
[00:31:00] Mike Thomas:
Me too. And just to let folks know, it looks like there are probably 11 different engines or algorithms that you can use within this censored package, all the way from tree-based models to proportional hazards models to random forests and more of your traditional survival regression models. So lots of options here to leverage that tidymodels feel and approach to survival analysis problems.
[00:31:28] Eric Nantz:
Yeah. It fits really nicely with the whole ecosystem in general, and I definitely am expecting that as the community gets our hands on this, we're gonna see more engines supported. In fact, if I had the time, I would go back to said dissertation research and see if I could add what's called a competing risks engine to this, because that's exactly the type of survival technique I used. Hey, maybe someday. Who knows? But, nonetheless, the R community, you know, and the tidymodels team put in a terrific effort once again. And there's a whole boatload of terrific efforts you see in each R Weekly issue. If you haven't done it by now, please do this. Bookmark rweekly.org.
Just do it now. You'll never be disappointed. There's so much great content on here. Awesome packages that we didn't scratch the surface of, especially as I see this issue has a lot in the space of spatial visualization and other visualizations, which can definitely help an EDA, like an exploratory data analysis, to visualize these things quite quickly. Some awesome enhancements to data.table, which we covered last week, getting more details on that. And, of course, upcoming events that we're seeing in the R community. So definitely bookmark rweekly.org. And, of course, this is for the community, by the community, so to speak. So we value your contributions from wherever you are. If you found a great blog post, new package, or other resource that you want the world to know about, we're just a pull request away, folks. It's all Markdown all the time. If you can write a sentence in plain text, you know how to do this. And it's linked directly in each issue. We have a link directly to the upcoming draft. Very easy to send us a pull request on that front. And, also, we'd love to hear from all of you, not just through your pull requests, but on how we're doing on this very podcast. We have a little contact page linked directly in the episode show notes. We also have a fun little opportunity if you have a new modern podcast app like Podverse, Fountain, Castamatic, and many others.
I just responded to somebody's post on Mastodon asking for a better podcast app, and I sent them to a nice resource called podcastapps.com, which has a whole boatload of these to choose from. They can send us a little boost along the way if you wanna get in touch with us directly. And then, also, we are available on the social medias, mostly on Mastodon these days for me personally, where I'm @[email protected], sporadically on the weapon X thing with @theRcast, and also on LinkedIn from time to time with a little post about these very episodes. And I love hearing from you and the community. It's always great to hear from all of you. But, Mike, where can the listeners find you? Likewise. Probably the best place is LinkedIn. You can
[00:34:14] Mike Thomas:
see what I'm up to if you search for Ketchbrook Analytics, k e t c h b r o o k. Or you can get in touch with me on Mastodon at @[email protected].
[00:34:26] Eric Nantz:
Awesome stuff. And, yeah, keep up with Mike. He's never slowing down. He's had some great posts on LinkedIn recently, so you definitely wanna check those out. And, yeah, a nice and tidy episode this week. We're gonna close up shop here. And, again, thanks to all of you for listening. And we hope to see you back for another episode of R Weekly Highlights next week.
Hello, friends. We're back with episode 152 of the R Weekly Highlights podcast. We're happy to join us wherever you are around the world. And this is the weekly show where we highlight, no pun intended, the awesome, highlights section of the our weekly website, particular issue atrog.org. My name is Eric Nansen. Yeah. We're almost midway through February. This has always been the time of year where I kinda wanna get to spring now, you know, just to cheer things up a bit. But, yes, we are back, and I'm not alone. I'm always joined at the virtual hip here by my awesome cohost, Mike Thomas. Mike, how are you doing today?
[00:00:40] Mike Thomas:
Doing well. Waiting on spring. Also, we had about 60 degree weather here in New England this past weekend. And today, we got a foot of snow. So
[00:00:50] Eric Nantz:
Them's the brakes. Them's the brakes, man. It just never ends, does it? Yep. But we'll get through February soon enough. But, one way to speed things up is listening to those banter about this week's our weekly issue. And if you're not familiar with the project, every week, we have a new curator to to take the reins. And this week, it is John Calder, another longtime member of our curation team. And as always, he had tremendous help from our ROCE team members and contributors like you all around the world. So I wouldn't be shocked that if some of you listen to this podcast, there might be a section or 2 when you might hear yours truly kind of ramble, but you'd be like, yeah. Yeah. We get it, Eric. Maybe tone it down a notch. Well, guess what? That can also happen with our packages as well, especially those that like to, you know, through no fault of their own, give you a heads up through messages or warnings or other diagnostics as, you know, operations are commencing analytics.
But there may be some cases where you want the user to kind of be able to opt in or opt out of this kind of behavior. So our first is gonna talk about, as a package author, how you can take advantage advantage of some nice utilities both within r itself and within the r ecosystem to help give you a give your package users a little more control on the verbosity, so to speak, within their package experience. And this comes from the awesome rOpenSci project once again. In particular, the blog post has been authored by Mark Padgham, who is a open source software developer at rOpenSci as well as, returning once again, Myles Salmon. The street continues with highlights that she's involved with. And this blog post starts off with, basically, if you've written any package or function, you've probably have done this once or twice or, frankly, in my case, a lot more where you have a function. You know it's going to do some complicated stuff. And especially for you as a developer, you want to see what's happening in your console as things are being processed.
And you might have an argument that says something like verbose or quiet as a simple true or false just indicating, you know, a little switch to turn that diagnostic on and off. It's spread throughout the R ecosystem. You're going to find many, many functions and packages that have this approach. But as they say in the blog, it could introduce a bit of clutter and making the user have to customize this every time for each function. Well, maybe instead, the approach you might wanna take is having this configured at the package level. And that's where using an option statement could come in quite handy as you, the package author, where you might have an option talking about, you know, is my package message gonna be quiet or not?
And then the user could set that option themselves, run the function without having to introduce another parameter in that set function. And then maybe they want the situation switched, they can just run that option again, rerun the function without any changes to the function code or the function call, and it will give them the behavior they want. So that is one approach, and we're starting to see more of that. But, you know, guilty as charged here on this very podcast. I have not done this much enough, but I'm definitely thinking about, this approach.
Now there as usual, there are community, you know, you know, comes to the rescue again, so to speak. If you want to tap into somebody kind of doing this boilerplate for you, so to speak, guess what? There is, packages that we talk about on this show quite a bit, the CLI package and by proxy, the rlang package, which is actually, you know, building a lot of this functionality with their own equivalent of, like, the typical message, warning, and stop functions that you find in base r. And so you can opt into using rlang and CLI in your package if you want to take a dependency on that to kind of mimic this kind of behavior at a package level.
They have functions called, like, inform, COINFORM. And then there are options that you can tweak at the COI or rlang level that could your package could tap into. And so that is another great approach if you just want to take advantage of other great work in your package to control verbosity. Now there are some things that you wanna consider with that, especially if you think about, you know, how deep you wanna go with this. One of those is just simply how do you wanna control the level verbosity.
[00:05:42] Mike Thomas:
Yeah. So one interesting thing that I've seen done in a lot of packages that I've used, but never really knew exactly how to implement it in some of the packages that we've authored is the idea of displaying a warning or a message only once per session. I think that's that's really interesting. I think it's really powerful because it sort of lets lets you know, okay, this is something that you should be aware of but we're not gonna continue to to throw it in your face over and over and over again. And, I think that's really useful, probably a nice feature of the the R ecosystem. I don't know if that really that concept exists in other ecosystems as well.
But, you're able to do that by setting, the frequency parameter of the rlib message verbosity options to the string once, which I hadn't seen before. So there's a great little section here that talks about exactly how to do that and I am looking forward to trying to implement that in some of the packages that we've developed because I think that could be a much better user experience, to just be able to display some of these these messages or warnings, only one time per session. Then Mel also talks about and, excuse me, not just Mel, but Mark as well, also talk about, regaining package level control from your global options. And and this is sort of the issue that takes place when you have dependencies, right, upon other packages that are using our lang or CLI or with our to display messages, and you don't necessarily have as much control to turn that verbosity on and off because it's it's not your package. It's not your code. It's a dependency, of your package. So there's a a really nice couple of code snippets in here that employ the local options function from R link, which is really interesting. And that would allow you to essentially locally, within the function that you're authoring, turn on verbosity or or off. So you could set the r rlibmessage verbosity, as as verbose or wants or or essentially whatever you want and control that sort of in this to me, it feels with R ish, but it but it's a function called local options from the rlang package that allows, you know, within that function to to handle the verbosity specifically.
And I think that a lot of these concepts and code snippets can be especially helpful when you are developing your package or debugging your package because maybe you you know that this warning message is going to pop up, and you're you're trying to do a lot of little tweaks and you don't necessarily wanna see that over and over and over again as you develop. But, obviously, in in the production version of your package that you're going to release out to the world, you you do want those warnings to pop up to users who aren't going to be running the same exact function over and over and over again like you might be doing during development. So I thought that that was a really powerful concept too. There's some fantastic, R code snippets as well. And it's a really great overview and blog post around this topic of verbosity, which which, again, Eric, I think is pretty unique to the highlights and not one that we see too often. And it gets back to really usability, user experience, and, you know, how easy it is for other folks to be able to to use and be comfortable with your r package. And to me, the little things go a long way. I know you feel the same way. That's certainly a a shiny concept as well, but I think it it extends very much, into our package development to try to make the tools that we create as useful as possible to others.
[00:09:18] Eric Nantz:
Yeah. It was really excellent, summary here. And, also, I I kind of laugh about this, but I stumbled into this by happy accident almost as I'm updating an internal package at the day job where I'm deprecating a couple of function arguments in favor of a more simple third one. And I tapped into, you know, looking at the r packages online book, talking about their deprecation section, and I saw a package called life cycle, which is what the tidyverse often uses to give these messages. And, apparently, they default to once per session, as you said, with the Arlang options of, hey. This function parameter for, like, say, tidy r, gather, or spread is deprecated. Please use this instead. And lo and behold, I was able to tap into that with my internal package as well where, you know, for it's been over 5 years of existence, if not longer. And it's only now that I'm introducing this, in essence, soft deprecation right now.
And then a version later, it's gonna be like it's gone after that. But I'm being nice right now and saying, hey. Guess what? Use this new path argument, not these, like, 2 arguments instead. And this Boeing display once per session, so they don't get annoyed by it, but enough to get the hint. So it's nice that life cycle is just another one in these packages. We wanna see how others wrap the use of Arling or COI. That's a great demonstration of it. And, yeah, Mike, it's something where I just haven't done this a lot in practice, but seeing what options are available, whether it's just a simple Boolean to turn it on and off or the different levels of it. Another great complement to this as well, if you wanna do more systematic messages and maybe parse by other systems, a lot of these concepts also apply with the logger package, which I've been using quite a bit in my more back endy kind of package development in Shiny apps where I need to send that session kind of operation pipeline diagnostic to not just the r console, so to speak, but also to, say, a database where I'm keeping track of all the activity so I know where maybe some of the gotchas are that or where time is being spent on my app or my package. So there's lots of the principles here can apply in many different ways. So it's really, really good to see here.
[00:11:34] Mike Thomas:
I couldn't agree more, Eric. And those are 2 great shout outs to the life cycle and logger package. I think that complement these concepts, similarly.
[00:11:52] Eric Nantz:
Speaking of compliments, I I don't have a great segue for this, but I'll go with it. Bear with me, Mike, here. But, you remember the early days of the Internet when you would load these pages that were, you know, configured by frameworks like GeoCities, things like this? And there might be a page that you found that's kind of entertaining. But do you notice that it's taken a while to load? Has this big image, and it just scrolls slowly, slowly, slowly, slowly into focus until it's finally rendered. That's one of those cases where an image was probably uploaded in its most raw form possible, you know, straight from whatever software they use to produce it or or camera that they took a picture of. Who knows what else? Well, this can also happen if you're developing or writing a blog, maybe with R Markdown or another framework.
And you notice that, yeah, that image you put on there, that's that's a bit hefty. Well, there are ways that you can, optimize that for web viewing or other documents that you want to minimize the footprint of. That's where our second highlight comes in. This is a blog post called optimize your images with r and the resmush API, fun name. This is authored by Diego h. I couldn't quite track down, what he does, but, apparently, he is part of the rOpen Spain project, which is quite intriguing. But he talks about a recent need that he had in in his, in his work with one of his package vignettes, such as his package called TidyTerra, which has a lot of images.
And he likes to include precom precomputed images or precomputed results in his vignettes. And he noticed that that was producing higher size file size images that maybe the CRAN maintainers wouldn't necessarily like in big nets and such in PDF form or whatnot. So he decides, write this r package that ties into this, service called resmush where it's an API. And this is interesting. It's freely available. Don't need your API keys for this, at least yet, where it'll give you for an image of, say, 5 megabytes or less, it'll give you a way to compress that with various optimization algorithms. Throw your PNGs or JPEGs or bitmaps or whatever have you, and it will give you that compressed image.
And with the demonstration in this blog post, it really looks like no discernible difference. Really top notch, easy to use. And it seems like this is a great use case for not just those static files you might produce as part of a standalone document, but it looks pretty online as or friendly for online files as well. So I yeah. I'm intrigued by this. I never heard of this service before, but curious, Mike, if you had to deal with image sizes and looking for a way to optimize those further like this? I have but,
[00:14:56] Mike Thomas:
you know, I haven't found a great tool for optimizing image sizes. So this blog post is is very, very timely. It's it's pretty incredible to me that he was able to get this, you know, geospatial image down from from 1.7 meg to, like, what is it? 762 kilobytes? I mean, he almost took an entire meg off of this image size and I can't tell the difference at all. And obviously, when we are submitting packages to Kran, I think 10 meg is the the top, the largest package size that you can submit to to Crayon, if I'm not mistaken, Eric. Does that sound right? That sounds about right. Yeah. That sounds right. So I mean if if you have vignettes, detailed vignettes, especially that are containing images and things like that, that can you can hit that fairly quickly with some of these additional assets like images and it makes it very difficult.
I will say this blog post is going to be super helpful for some folks who are trying to create detailed vignettes, with images that are looking to to ensure they stay under that CRAN threshold. It looks like resmush, allows a lot of different formats that all of some of the existing packages like XFun, TinyR, OptOut, maybe handled a couple of these cases like PNG and JPEGs, but but not necessarily GIFs or bump files or or TIFFs or PDFs. And ReSmush handles all of these different formats, I guess, except PDF. It looks like it's the only format that it doesn't handle. But, obviously, I think most of the time when you're you're working on image compression, you're you're typically working with probably a PNG or a JPEG or a TIFF file or something like that, on the case of some of these geospatial images. So this is super handy. One of the other things this just reminded me of that I I will call out as well is if you are trying to create very thorough documentation around your R package, particularly with a package down site, you'll often see a section on the package down site that has articles on it. And initially, I thought that that was just sort of an, representation of the vignettes that you have in your R package.
And it is. But you can also do something called create an article, which is not a vignette, but it lives on your package down site within, you know, within that same article's section, but doesn't go to CRAN if you were to essentially submit that package. It's it's it's ignored from from the r build, and you can create one of these articles, if you will, using the use this package, use article functions. That's really handy, I think, when you are trying to, you know, add additional documentation around your package that isn't necessarily a vignette that that folks using your R package in an interactive setting within an r an IDE necessarily need to be able to have access to. You know, as long as they have an internet connection, they can go take a look at, this this article's section on your package down site. And it's not something that's going to be built as part of your r package. So it'll help save that size if you are looking to keep your package size, smaller. So I think these are all fantastic resources. I'm excited to check out the re smash package. It looks like it takes advantage. Eric, you you may have mentioned this, of this re smush dot it, free online API, that does a lot of the horse work here, which is is pretty cool. So shout out to that service as well, and and shout out to Diego for this awesome r package.
[00:18:33] Eric Nantz:
Absolutely. And, apparently, this ReSmush, service has been used heavily in content management frameworks like WordPress, Drupal. I mean, those things may sound familiar. So, apparently, it's got its battle tested as they say. So I'm curious to give it a shot. And and not to sound too meta here, but in our weekly itself, whenever you see images in each issue, we actually do have some routines that go through the images that the user has linked to. And we we used to have an automated way of doing this. Now it's a script way of doing it where we compress those images using some other utilities. So I'll have to see if Resmush can give us an even better take on that. But it is a nice segue to say that there are more than one way to do this in the our ecosystem. So, Diego's blog post below talks about some functions packages from EWAY, who, of course, we've mentioned many times on this very podcast.
His x fund package has two functions called Tinnify and OptiPNG that if you have those utilities on your system, well, one uses a service called tiny PNG, an API service. Another uses an OptiPNG system library. So, yeah, you may you may find that there are other routines in your ecosystem that can help you just as much as the resmush one. But it's a good roundup at the end of the blog post if you wanna see just what is available in this space because, as usual, there's always more than one one way to handle it. These will have their advantages and disadvantages. But, hey, an API service that's free to use, it almost sounds too good to be true, but, apparently, it's still there. So we'll we'll keep leveraging it, I guess, after seeing this blog post.
[00:20:16] Mike Thomas:
Free to use and no API key. It's pretty incredible.
[00:20:33] Eric Nantz:
Yes. And wouldn't it be nice, Mike, if everything we could do just didn't really have a big cost to it or much time involved? It just magically worked. Well, sometimes you have this idea for a major project that can benefit perhaps the entire our ecosystem, whether it's a package or a suite of packages or to help with community efforts as well. Well, this is a time of year. If you wanted some backing for it, this is the time to think about it because our last highlight is more of a, my call call to action or public service announcement here where the our consortium for another year is about to open their proposal, opportunity for their infrastructure steering committee to accept, you know, proposals for grants backed by the r consortium project itself. So if you're not familiar with the r consortium, as we talked about quite a bit on the podcast, but just to state a brief background of it, it is a joint collective effort of multiple industry multiple companies within industry as well as the R Foundation to help bring support for infrastructure of the R project itself and community efforts that support the mission of R itself.
And so starting March 1st this year, they will have the open, open call for proposals. And you may wonder what will these proposals look like. Well, guess what? On the our consortium site, they do have a section on all of the already funded projects. If you wanna get a familiar or you wanna get familiar of what's been funded before, both on a technical level and community effort. And this call for proposal is a bit on the technical side, but it can be used to help, you know, things with community efforts. Actually, a a listener of this very show, a regular listener, John Harmon, actually has been funded by the r consortium with this API to r package development, which is we've been keeping eyes on that in the suite of packages that John's been developing. That's been really fun to watch.
We've seen efforts like DBI go through the R Consortium, which is, of course, the translation of database APIs to R. That's been huge in my daily work and seeing DBI really take their infrastructure up, you know, a few notches. That was that was great to see as well. And like I said, technical efforts to bootstrap the community as well. So there's lots of opportunities here. So if you or a team are thinking of a way to help the R ecosystem, help the R community in general, and you want support from the R Consortium, this is the time to get those proposals ready to go. But I can speak from experience working in the ARC consortium and my, submissions working group. It's been a pleasure to work with them, and they've been instrumental in a lot of our life science efforts. So certainly, highly recommend to check that out.
[00:23:34] Mike Thomas:
No. And, again, this is one of those things that just makes me love the R ecosystem. You know, the fact that we have initiatives like the R Consortium that are trying to essentially further the science and improve the world using R. It's incredible; it warms my heart. So this is also another reason, if at year end you start thinking about donations and causes that you or your company may want to support, to take a look at rOpenSci and take a look at the R Consortium. I guess this is part of the Linux Foundation projects (correct), which is pretty cool. And consider contributing to one of these causes, because they're phenomenal and they help everybody. And I do like the fact that this is both a technical and a social sort of call.
So submit your proposals and check out the blog post. All the information you could ever need is in there, including a link to learn more about how exactly you can submit that proposal. The important dates around the grant cycle: it opens March 1st and closes April 1st, so you have that one-month window. And then there's a second grant cycle that'll come later in the fall of 2024. So keep your eye out. I'm sure we will rehash this as some of these dates get closer, and we're looking forward to seeing what comes out of this next round of proposals.
[00:25:00] Eric Nantz:
Yeah. And I've got my eye on this in a few ways, so that'll be the teaser in this business to say stay tuned. But in any event, as you said, Mike, we're gonna continually keep an eye on this. And you may be wondering, well, how do you keep an eye on efforts like this in general? Well, guess what? It's all in R Weekly itself. Each issue has a section on upcoming events, maybe webinars or presentations where you might hear about these efforts, and there is a specific section on grants and funding proposals. This is right in that very section. So if you haven't bookmarked R Weekly, you definitely should. Am I biased? Heck, yes, I am, but in a good way, of course. And speaking of the issue, there's always more than just the highlights that we touch on here, so we'll take a few minutes to talk about the additional finds. For me, I found a quite entertaining post from Angela, who has the R critique blog, about some of the discoveries she had about, quote, unquote, hidden objects.
It definitely had her scratching her head a little bit. At a high level, what she discovered is that in R, much like on a Linux or Unix system, you can have objects with a period in front of their name. That's the convention on Linux and Unix (and as a longtime Linux user, I know it well) to denote what we call a hidden file, which by default you're not gonna see right away, only if you use a special option to show hidden files. Well, guess what? When she tried to wipe out objects in her RStudio session using the erase button in the environment pane, she saw a dialog appear that says, okay, wipe out all your objects, and then there's a checkbox saying include hidden objects as well.
That was kind of bewildering, and it would be for somebody new to this. But apparently you can take advantage of this in R. Maybe you want to make an object that's more behind the scenes and you don't want it to be front and center in any environment viewer; putting the period prefix on its name could help in that case. In any event, she does have some interesting commentary on how that might not be the best approach depending on your situation. Nonetheless, it's one of those things where I can sympathize with seeing something as odd as that and digging into just why it is. And, apparently, it's still a feature request to be able to view those more easily in the viewer. But, yeah, to each their own, I guess. An entertaining read nonetheless.
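If you want to see that behavior in the console rather than in the RStudio dialog, here is a minimal sketch (the object names are made up for illustration): by default, ls() skips names that start with a period, and you need all.names = TRUE to list, and therefore remove, them.

```r
# Objects whose names start with a period are "hidden" from ls()
.secret_token <- "abc123"   # hypothetical hidden object
visible_value <- 42

ls()                    # only "visible_value"
ls(all.names = TRUE)    # ".secret_token" and "visible_value"

# Remove everything, including hidden objects, which is roughly what
# the "include hidden objects" checkbox in RStudio does
rm(list = ls(all.names = TRUE))
```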
[00:27:38] Mike Thomas:
No, that's a great callout. Sometimes when I am removing some of these objects in my environment, and I think I'm always checking that checkbox to include the hidden objects as well, my memory doesn't really decrease as much as I was expecting it to. So I gotta look into that, another hidden feature of RStudio I have to dive into, maybe for another blog post another day. But an awesome blog post that I am super excited about, because I've been watching this project, which has been probably over a year in the making with some folks working really hard on it from ideation to production, is that the censored package has arrived. I believe censored 0.3.0 is now on CRAN, and this is a package that allows you to work hand in hand with the tidymodels ecosystem for censored regression and survival analysis. So if you are in the life sciences, wink wink, to somebody I know, I think you may be very excited about this package as well. And you may be surprised that even for some of us who are not in the life sciences, for credit risk problems, which are problems we work on fairly often, we may want to know the time until a loan defaults (Absolutely.) or until some bad credit event takes place, or some good credit event; you can have the opposite, right? If somebody's gonna prepay their mortgage early, I kinda wanna know when that could potentially happen. But now I'm getting into the weeds, because survival analysis makes me very happy and gets me very excited. I'm nerding out, but this censored package is all I've ever been looking for to be able to do survival regression analysis within the tidymodels ecosystem.
There are different type arguments that allow you to predict survival probabilities, the quantiles of the event time itself, or the hazard, just by tweaking that one argument (along with an evaluation-time argument) in the predict call for the particular engine you are using. And if you use tidymodels, you'll be familiar with the syntax of how you construct these models; it's gonna feel very natural. So I'm very excited about this. Big thanks to Hannah Frick and Emil Hvitfeldt for, I think, doing a lot of the legwork to get this package onto CRAN and into a place where we can all use it.
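To give a flavor of that workflow, here is a minimal sketch (not taken from the blog post): it fits a proportional hazards model through the parsnip specification with censored loaded, using the classic lung dataset from the survival package, and predicts survival probabilities at a couple of follow-up times. The name of the evaluation-time argument has shifted across releases, so treat that detail as an assumption for recent versions.

```r
library(censored)   # loads parsnip and survival as well

# A Cox proportional hazards model in censored-regression mode
cox_spec <- proportional_hazards() |>
  set_engine("survival") |>
  set_mode("censored regression")

# Fit with the usual Surv() outcome from the survival package
cox_fit <- fit(cox_spec, Surv(time, status) ~ age + ph.ecog, data = lung)

# Predict survival probabilities at 100 and 365 days
# (older releases used `time` rather than `eval_time`)
predict(cox_fit, new_data = lung[1:3, ], type = "survival",
        eval_time = c(100, 365))
```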
[00:30:03] Eric Nantz:
Oh, this is awesome. Yeah, I remember when tidymodels first burst onto the scene; survival analysis was requested quite heavily in the beginning, and it's great to see it finally factored in. Hey, I have a soft spot for survival techniques as well. My dissertation used them quite heavily, and, boy, we didn't have tidymodels around back then, so don't look at my R code from those days. My goodness. Shout out to those that dealt with the Sweave and LaTeX compile issues. I see you. I hear you. I've been there, done that. But, yeah, censored looks absolutely awesome. In fact, we have some projects going on that are not quite the typical did-a-patient-survive-or-not question; it's certain events in, say, a trial, like the life cycle of enrollment prediction and things like that, and we could definitely use some time-to-event analysis on that. So I will definitely be keeping an eye on this.
[00:31:00] Mike Thomas:
Me too. And just to let folks know, it looks like there are probably 11 different engines or algorithms that you can use within this censored package, all the way from tree-based models to proportional hazards models to random forests and more of your traditional survival regression models. So there are lots of options here to leverage that tidymodels feel and approach to survival analysis problems.
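If you want to check which engines are registered for a given model type on your own machine, parsnip's show_engines() is a quick way to do it; the list you see depends on which extension packages, such as censored, you have installed and loaded.

```r
library(censored)

# Engines registered for the model specifications censored extends
show_engines("proportional_hazards")
show_engines("survival_reg")
show_engines("rand_forest")
show_engines("decision_tree")
```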
[00:31:28] Eric Nantz:
Yeah, it fits really nicely with the whole ecosystem in general, and I definitely expect that as the community gets its hands on this, we're gonna see more engines supported. In fact, if I had the time, I would go back to said dissertation research and see if I could contribute what's called a competing risks engine, because that's exactly the type of survival technique I used. Hey, maybe someday. Who knows? Nonetheless, the R community and the tidymodels team have put in a terrific effort once again. And there's a whole boatload of terrific efforts in each R Weekly issue. If you haven't done so by now, please do this: bookmark rweekly.org.
Just do it now. You'll never be disappointed. There's so much great content on there. Awesome packages that we didn't even scratch the surface of, especially as I see a lot in this issue in the space of spatial visualization and other visualizations, which can definitely help with EDA, exploratory data analysis, to visualize these things quite quickly. Some awesome enhancements to data.table, which we covered last week, with more details on that. And, of course, upcoming events that we're seeing in the R community. So definitely bookmark rweekly.org. And, of course, this is for the community, by the community, so to speak, so we value your contributions from wherever you are. If you found a great blog post, new package, or other resource that you want the world to know about, we're just a pull request away, folks. It's all Markdown all the time; if you can write a sentence in plain text, you know how to do this. And it's linked directly in each issue: we have a link straight to the upcoming draft, so it's very easy to send us a pull request on that front. Also, we'd love to hear from all of you, not just through your pull requests, but about how we're doing on this very podcast. We have a little contact page linked directly in the episode show notes. We also have a fun little opportunity if you have a new modern podcast app like Podverse, Fountain, Castamatic, and many others.
I just responded to somebody's post on Mastodon asking for a better podcast app, and I sent them to a nice resource called podcastapps.com, which has a whole boatload of these to choose from. You can send us a little boost along the way if you wanna get in touch with us directly. And then, also, we are available on the social medias, mostly on Mastodon these days for me personally, where I'm @rpodcast on podcastindex.social, sporadically on the weapon X thing as @theRcast, and also on LinkedIn from time to time with a little post about these very episodes. I love hearing from you and the community; it's always great to hear from all of you. But, Mike, where can the listeners find you? Likewise, probably the best place is LinkedIn. You can
[00:34:14] Mike Thomas:
see what I'm up to if you search for Ketchbrook Analytics, k-e-t-c-h-b-r-o-o-k. Or you can get in touch with me on Mastodon at [email protected].
[00:34:26] Eric Nantz:
Awesome stuff. And, yeah, keep up with Mike; he's never slowing down. He's had some great posts on LinkedIn recently, so you definitely wanna check those out. So that's a nice and tidy episode this week; we're gonna close up shop here. Again, thanks to all of you for listening, and we hope to see you back for another episode of R Weekly Highlights next week.