How the babeldown package enables low-friction updates to living documents, a tour of not-so-basic functions hiding in the base R installation, and supercharging a static Quarto dashboard with interactive tables and visualizations.
Episode Links
- This week's curator: Sam Parmar - @parmsam_ (Twitter) & @[email protected] (Mastodon)
- How to Update a Translation with Babeldown
- Six not-so-basic base R functions
- 3MW (Making dashboard interactive)
- Entire issue available at rweekly.org/2024-W04
Supplement Resources
- babeldown R package https://docs.ropensci.org/babeldown/
- DeepL API https://www.deepl.com/en/docs-api
- Albert Rapp's Quarto dashboard repository https://github.com/AlbertRapp/quarto_dashboard/tree/master
Supporting the show
- Use the contact page at https://rweekly.fireside.fm/contact to send us your feedback
- R-Weekly Highlights on the Podcastindex.org - You can send a boost into the show directly in the Podcast Index. First, top-up with Alby, and then head over to the R-Weekly Highlights podcast entry on the index.
- A new way to think about value: https://value4value.info
- Get in touch with us on social media
- Eric Nantz: @theRcast (Twitter) and @[email protected] (Mastodon)
- Mike Thomas: @mike_ketchbrook (Twitter) and @[email protected] (Mastodon)
Music credits powered by OCRemix
- Seven Pipes to Heaven - Super Mario Land - Nostalvania - https://ocremix.org/remix/OCR03256
- Smooth Mana - Secret of Mana - Gux - https://ocremix.org/remix/OCR00352
[00:00:03]
Eric Nantz:
Hello, friends. We're back with episode 149 of the R Weekly Highlights podcast. Oh, we're getting close to another fun little milestone, I guess. The episode numbers keep going up and so does each issue of R Weekly. We're here to talk about the current issue's latest resources, tutorials, and specifically the highlights from that particular issue. My name is Eric Nantz, and I'm delighted that you joined us wherever you are around the world. And, hopefully, you're staying warm, especially if you're in the winter season, and hopefully avoiding some ice apocalypses out there. But, nonetheless, we hope you enjoy this episode.
[00:00:36] Mike Thomas:
And staying warm in his humble abode is my awesome cohost, Mike Thomas. Mike, how are you doing today? Doing well, Eric. Yep. It's pretty chilly out here in Connecticut. We live fairly close to a lake that's frozen for the first time in a few years, which is kind of nice, going out on the ice and skating around. So trying to enjoy that as much as we can, and excited to have some consistency now in 2024. I think we're back to back-to-back weeks for a couple weeks now on R Weekly and hoping to keep it up. Yeah. The momentum is in our favor, so to speak. So we'll keep that rolling along here. And
[00:01:11] Eric Nantz:
and as always, the R Weekly project rolls along because we have an awesome team of curators that are helping every single week. And this week, our curator is Sam Parmar. And as always, he had tremendous help from our fellow R Weekly team members and contributors like all of you around the world with your awesome pull requests and other heads up to us about the latest resources that you found. So let's dive right into this. We know, Mike, we have a lot of advances in technology right at our disposal via the magical world of APIs to help automate a lot of the stuff that would take a long time to do. Well, there is a very interesting area that this first highlight exposes in terms of where these APIs can really help in a much needed domain: translating our documentation into different languages.
So our first highlight is a blog post from the esteemed rOpenSci blog by Maëlle Salmon, who, of course, is a former curator here at rweekly and now is a research software engineer supporting rOpenSci as well as other endeavors. And this blog post, in particular, talks about the use of the babeldown R package to update an existing translation after the original document changes. And this is, apparently, part of a broader initiative from rOpenSci for publishing their various pieces of documentation in multiple languages. And as part of that effort, this babeldown package has been developed to help translate markdown-based content by leveraging what's called the DeepL API, which, before this, I actually didn't know existed. But apparently, this is a full-fledged API built specifically for translation across many of the common languages in the world.
So in this blog post, Maëlle walks through a pretty simple example, but yet very relatable. You have an existing markdown document in English, and then, as things kick off, how would you go ahead and translate that to French, in this example? So we have some very simple markdown syntax, which has typical headings, subtitles, and narrative inside. The babeldown package has a function called deepl_translate: give it the path to the markdown file, the source language, and the target language for translation, and there you go. It's going to call the API under the hood, and you'll get that text right back, in this case, in the French language.
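For a sense of what that call looks like, here is a minimal sketch based on the description in the post and in the babeldown documentation. The file paths are placeholders, the language codes and argument names are assumptions to check against `?babeldown::deepl_translate`, and a DeepL API key has to be configured beforehand.

```r
# A minimal sketch, not a copy of the blog post's exact code.
# Requires a DeepL API key to be set up for babeldown beforehand.
library(babeldown)

deepl_translate(
  path = "post.md",          # the English source document (hypothetical name)
  out_path = "post.fr.md",   # where the translated Markdown should be written
  source_lang = "EN",
  target_lang = "FR",
  formality = "less"         # tone control handled by the DeepL API
)
```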
Looks good to me, although I'm not a French speaker, so I'll defer to others for the authenticity of it. But that's not all. That's great for, like, your initial document. But what happens if, like anything else, you're gonna update that document, you know, maybe through pull requests from your collaborators? Maybe you got a new feature you wanna document in that package or tool or whatever this is meant for. And so assuming that this document is in version control because, well, if you're not using version control, you should be, especially for larger efforts. Mike and I can attest to that. The babeldown package is doing some pretty clever things under the hood to detect the changes that are happening in this document, so that when you feed in this updated document, there's a function called deepl_update where it's going to take this newly changed file.
And again, with very similar function parameters as the initial translation, you will get the updated French version of the document with your changes reflected. Now this is using, apparently, a hybrid of git-style diffs under the hood, but not just that. It's actually translating the representation of that markdown syntax of the file into an XML representation. Because, again, on the web, even though markdown looks like we're just writing plain text, when you render it to HTML, you're putting it into another markup language, and XML and HTML are very much related in that space.
So, apparently, this XML representation is a big help to pinpoint exactly what has changed in that document, so that the user of the babeldown package doesn't have to send the entire document back for retranslation. It's only gonna send the bits that changed. And just like anything in the API world, there's no such thing as a free lunch. So if you were sending a voluminous, lengthy document over and over again, that could perhaps incur some costs if you're leveraging this API more regularly. So being able to only take what you need and translate it back, I think, is a really neat feature and pretty welcome, I'm sure, for those that are using this regularly.
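A hedged sketch of the update step described above: deepl_update() compares the revised source document with the existing translation and only sends the changed chunks to the API. The argument names here mirror the translate call and are an assumption; the file names are placeholders.

```r
# A minimal sketch of the update workflow, assuming arguments analogous
# to deepl_translate() -- check ?babeldown::deepl_update for the real signature.
library(babeldown)

deepl_update(
  path = "post.md",          # the revised English source
  out_path = "post.fr.md",   # the existing French translation to refresh
  source_lang = "EN",
  target_lang = "FR"
)
```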
So this is very much scratching the itch of a big need in the community as a whole in data science and other domains: making sure that we make our documentation for our tools, packages, or other analytical pipelines as accessible as possible to those around the world. So I'm really excited to see just all the nifty things going on under the hood of this babeldown package. Something I'm gonna keep in mind for my open source projects in the future. Yeah. I couldn't agree more, Eric. And I think this is a topic that we've talked about
[00:07:01] Mike Thomas:
on previous episodes, you know, trying to make R and the packages that we develop, and the documentation that we write, as accessible as possible to as many folks as possible. Right? Around the world, because R is an international programming language, and that means that we should try to do as much as we can to accommodate those folks. And the fact that the people working on babeldown, including Maëlle, have provided us with this tool to just make it much easier for us to do so is awesome. I think it's what open source is all about. And I really like sort of this walkthrough with the API.
This deepl_translate function is really cool. As you mentioned, it allows you to specify your input file, where you want your output file to be written to, and your source and your target language. And one interesting argument that I saw is an argument called formality. And in the blog post, Maëlle has specified this argument as the string "less". But I imagine that you could need to have your translation be more formal or less formal or maybe somewhere in the middle. I'd be interested to learn a little bit more about that. And I imagine that that's sort of a parameter that the API itself handles, which, you know, is really interesting. I think that there's probably a whole field of study here in terms of language and translation, and, you know, how different cultures represent formality versus informality.
But I just thought that it was pretty nifty that that argument exists. And I would be interested to see, sort of, how changing that argument would change the output. And I imagine that it fully depends on your input text and how specific that is. This deepl_update function is really impressive too, and also really impressive that it doesn't use the git diff at all. As you said, it's this XML representation, and it looks like there's a package called tinkr that sort of helps with that translation between your original document and its XML representation, and with finding the differences between those two XML structures. So that's really fascinating to me. Sort of reminds me of the waldo package, maybe, in some of its functionality, being able to compare two files against each other. We recently worked on a project using the waldo package, where we're just reading these .txt files, which have this really sort of strange structure, but we're able to really easily show our end users the differences between two different .txt files, which is super important to them. And it's just incredible sort of what the community has created in terms of these packages that allow us to identify these differences, and then take action based upon the differences that we find. So this is a really nice short and sweet introduction into
[00:09:56] Eric Nantz:
how these DeepL functions within the babeldown package work and may be able to help you out in your own work. Yeah. I'm even thinking, if I do, like, open source Shiny work in the future, having a toggle for changing the language of the interface elements. It does bring up a lot of possibilities for how this could be tied with something like babeldown. The wheels are turning on how this could make my apps way more accessible. So this is really neat. Moving on with our second highlight today: as we talk about a lot on the show as well, yes, the community of R packages can supercharge so many of your workflows in data science, tool development, and computing in general.
But you know what? Base R itself comes with a lot under the hood, and, honestly, sometimes it doesn't get enough of the spotlight. So that's where this blog post is gonna shed a little bit of much needed spotlight on some additional functions that may come from base R, but they're definitely not so basic and they're quite powerful. And this is authored by Isabella Velasquez, who is a senior product marketing manager at Posit. And she does really great work on her blog as usual. And this blog post in particular, we'll get to the meat of it shortly, but she's got some bells and whistles here that I think you're gonna really like as we talk about this. She opens with a list of six functions, actually, and an operator on top of that, that she's been using quite a bit and that deserve a little more love. And we'll hit each of these quickly one by one, though we probably won't do each of them enough justice. You may have seen, as you've perused maybe someone else's R package source code, that sometimes at the end, instead of an explicit return statement or returning an object with the return function, you might see a function called invisible put in at the end instead.
What this really means, and it's a cool name, by the way, is that you are, in essence, returning a temporarily invisible copy of the object. So the code still executes normally if you run it in the R console, but the result isn't automatically printed unless you save it to a variable and then print that variable explicitly. So you can still run it interactively by just calling the function itself, and nothing clutters the console when you don't need it. But here's the cool part about this blog post. You're going to see the snippets of code, but you notice that little run code button at the top there? Guess what, folks? You hit that button, it's going to run it in your web browser.
Two guesses what that's powered by. I smell a WebR implementation here. This looks really nifty. Just as an aside here: this is the potential we're starting to see, folks, that on top of sharing code to do something, WebR, and WebAssembly in general, is gonna let us try it out in the browser without you installing a single thing. So if you're a new user to R, man alive, this is a great time to get into the language with these kinds of resources available. But you can quickly see the examples that Isabella puts in here, run them yourself, and see what's happening. So it's really, really neat to see. So invisible definitely is something I'm starting to use more in my function authoring in the future.
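A minimal sketch of the invisible() behavior described above, with a made-up helper function just for illustration:

```r
# invisible() still returns the value; it just isn't auto-printed
# when the result of the call isn't assigned.
summarize_total <- function(x) {
  total <- sum(x)
  invisible(total)
}

summarize_total(1:5)          # runs, but prints nothing at the console
res <- summarize_total(1:5)   # the value is still returned...
res                           # ...so printing the variable shows 15
```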
Another one that I did not know existed, so, you know, mission accomplished for her blog post, is the noquote function, which basically means if you want to show the syntax of, like, a character string but don't want the quotes around it, you can simply feed it into the noquote function, and now it's going to print as if you don't have the quotes around it. I think this can be very helpful, especially as you're dealing with things like links or other text that you want to maybe copy into another program or a browser toolbar. Having that noquote wrapper for, like, the URL example that she highlights here would be very helpful to let you copy and paste without too much friction.
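A quick illustration of noquote(), using a URL as the example (the specific string is just a stand-in):

```r
# noquote() prints character strings without the surrounding quotes --
# handy when you want to copy a URL or a snippet of syntax cleanly.
link <- "https://rweekly.org"
link           # [1] "https://rweekly.org"
noquote(link)  # [1] https://rweekly.org
```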
So I could see other uses for that as well. Here's one that brings back memories for me from my very early days of R usage for visualization: the coplot function. This is a very handy function when you have a situation of analyzing multiple variables at once, where you can look at different pairs of variables, perhaps even conditioning on another one as well, and you can quickly get a read for how these variables interact with each other visually. Great for correlation analysis or other association analysis at a very high level. So that's all built right in. Very nice, straight to the point. You can even customize how the rows are constructed and everything like that. So really nice examples throughout.
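A minimal coplot() sketch; mtcars here is just a stand-in dataset, not necessarily the one used in the blog post:

```r
# Scatterplot of mpg versus horsepower, conditioned on number of cylinders:
# coplot() draws one panel per conditioning level.
coplot(mpg ~ hp | cyl, data = mtcars)
```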
This one, I have theories on why it's named this way, but we'll see what you think, Mike: nzchar, or n-z-char, depending on how you want to pronounce it. When I look at that name alone, I honestly have no idea what it does at first glance. But what this function really does is simply return true or false for each element of a character vector: true if it has characters in it, false if it's empty. Now, n z, at first I thought, does that mean, like, non zero? I don't know about that. But then I thought, well, here's maybe an Easter egg. This is speculation on my part.
We know, if you're a historian of the R language itself, that it was founded by Ross Ihaka and Robert Gentleman while they were teaching at the University of Auckland
[00:16:12] Mike Thomas:
in New Zealand. For it. New Zealand.
[00:16:16] Eric Nantz:
I cannot say. I don't know. I've never seen this in writing, but I wouldn't be shocked if there was a little Easter egg in there somewhere, because why is it called nzchar otherwise? I don't know. But in any event, I don't use this function enough. Have you used this function before, Mike? nzchar?
[00:16:31] Mike Thomas:
No, I haven't. I haven't. I've seen it, obviously, just come across in, you know, the sort of automated autocomplete within RStudio that pops up different functions for you as you start typing. But I don't think I've used it before. Let me see if I can take a look at what the documentation says about nzchar.
[00:16:57] Eric Nantz:
I did look at this before the show. I didn't see any references to that, you know. That's
[00:17:02] Mike Thomas:
my thinking there. No Easter eggs there that I could find, but I would have thought non zero. Well, nchar obviously means number of characters.
[00:17:13] Eric Nantz:
Yes. That one I get. Yeah. Yeah. Number of
[00:17:18] Mike Thomas:
I don't know. I don't know. Because it's not number of 0 length character vectors. It's the opposite.
[00:17:26] Eric Nantz:
Right. Right. So if you're listening, I'm gonna let you know how to get feedback to us. We love to hear theories from all of you in the audience on this particular one because I've I've wondered about this for years, but, admittedly, I have not used the function much in daily practice. But, hey, now if I have a need to check if they're empty or not, I will
[00:17:45] Mike Thomas:
leverage this for sure. Well, I wonder if, you know, sometimes I feel like functions like this, especially within base R, sort of inherit their names maybe from the C functions that underlie them. So I wonder if there's a relationship there.
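For reference, a quick demo of the behavior being discussed: nzchar() returns TRUE for each element that is non-empty ("non-zero characters") and FALSE for empty strings.

```r
x <- c("apple", "", "banana", "")
nzchar(x)        # TRUE FALSE  TRUE FALSE
nchar(x)         # 5 0 6 0 -- the count that nzchar() is effectively testing
nchar(x) > 0     # same logical result as nzchar(x)
```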
[00:17:59] Eric Nantz:
That could be, because R is standing on the shoulders of C, Fortran, and the like, and of course the S language before R. So there's a lot of legacy under the hood that, you know, you could go down lots of rabbit holes for the history of R on this. So, alright, we'll move along here. Another function that I have a checkered past with, and I'm curious about your take on this, Mike, is the with function. When I first used this, it was, in essence, a shortcut function for me where I wanted to feed a data frame into another function, in this case the example Isabella puts in here is the plot function, but specifically variables of that data frame. If you're lazy and don't want to type, in this case, mtcars$hp or mtcars$mpg, you can use the with function, supply the data frame and then the plot function, and just reference hp and mpg without the dollar sign syntax, not too dissimilar to what you might see in a tidyverse pipeline as you're doing the piping operations.
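A small sketch of that with() shortcut:

```r
# Evaluate an expression using a data frame's columns without repeating
# the data frame's name and $ each time.
plot(mtcars$hp, mtcars$mpg)    # the explicit dollar-sign version
with(mtcars, plot(hp, mpg))    # the with() shortcut described above
```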
Yes. It is helpful in this case, but I have tripped myself up more than I care to admit when I've used this in the past. So, admittedly, I moved away from it. But hey, you know what? It is a way to take that shortcut as long as you use it responsibly, I would guess. Yeah. It's another option. Not one that I admittedly use very often, but it is an option. Yes, it is. And now this next one, I knew about what I call the singular version of this, but I didn't know there was a plural version of it, and that is the lengths function, with the s at the end, because I use length all the time to check, you know, the length of a vector or string or whatnot.
But if you want to quickly check the length of each element in a list or vector, lengths is basically a shortcut for the more verbose sapply or lapply syntax for doing this. So this is great. This may be another shortcut that you can put in your toolbox instead of having to do a purrr map using length under the hood, or an sapply under the hood. So that was a new one to me for sure. And then here comes the operator that I admittedly should have used way long ago, and that is called the null coalescing operator. I'm probably not saying that right. But if you've seen, in various conditional logic, the percent sign, two vertical pipes, and another percent sign closing it, that is this null coalescing operator.
And this is a shortcut. If you've ever checked whether an object is null or not, you've got, like, an if statement: if is.null of something, then return one thing, else return something else. This is a shortcut for that. And, also, the esteemed Jenny Bryan herself highlighted this in one of her talks at useR! in 2018, which is linked to in the blog post, as part of her underlying theme of code smells and things that you can shore up in your day to day coding development in R. This operator, she shows some great examples of how it streamlines a whole lot of that if else syntax.
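A quick sketch covering those last two items: lengths() as a shortcut for sapply(x, length), and the null coalescing operator %||% as a shortcut for the is.null() if/else pattern. The %||% operator has long been available in rlang and is being added to base R itself (4.4.0); the hand-rolled definition below is just for illustration on older versions.

```r
x <- list(a = 1:3, b = letters, c = NULL)
lengths(x)                 # a = 3, b = 26, c = 0
sapply(x, length)          # the more verbose equivalent

`%||%` <- function(lhs, rhs) if (is.null(lhs)) rhs else lhs
config <- list(title = NULL)
config$title %||% "Untitled"   # returns "Untitled" because title is NULL
```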
So that's a whirlwind tour of all this, but there's a lot to choose from. And, of course, that just scratches the surface of what base R has. So Isabella concludes the post with some additional blog posts from others in the community and what they've seen in base R that's been useful to them. So I really like to see it. And, again, love, love the idea of being able to run the code directly in this blog post to try things out. So
[00:21:53] Mike Thomas:
awesome stuff all around. I think it is thanks to Isabella's implementation or integration here with WebR that I have finally wrapped my brain around what the invisible function does, because I have seen it in so much code on GitHub. I have never used it. I still don't know where I would really have much of a use case for it, maybe in some of our more object oriented programming, but, you know, really the idea again there is that your function is primarily called for its side effect. So I guess one of the examples that I read up on, I can't remember whether it's within this blog post or outside this blog post, but it's like the write_csv function from the readr package. So that's obviously going to write a CSV somewhere, to some destination path that you supplied, and won't return anything within your console when that happens. Right? Most of the time you're doing that, you're not assigning that write_csv call to a variable.
So you wouldn't know, right, that it would potentially ever return anything. But if you did assign that to a variable, it would actually return the data frame, I believe, to that variable, which is pretty interesting. You know, it makes me curious about how many different functions out there do have this invisible call at the bottom of them, such that if you did assign that function's result to an object, you know, that object would be populated with something. So I think it's interesting. I think it's really good to know. Maybe something nice to have in your back pocket, and I've finally wrapped my brain around that. And I'm not gonna walk through all the other functions that you walked through, Eric, but I will just note that this null coalescing operator, I think, is going to save me a lot of code writing once it's finally integrated in base R here shortly, as opposed to, you know, in just about every project I'm working on, I do have some test for if this is null, right? Then take this action within some particular if statement.
The coalesce function within SQL has been huge for Ketchbrook and a lot of the SQL work that we do for our clients on some particular address standardization projects where we're trying to test if, you know, the user has a valid address, too. Right? As a second line in their address, like a PO box or something like that, or not. So that's a function in SQL that we use really, really often. I believe that within dplyr, there is a similar coalesce function as well that can return sort of the first non-null value within a list of values that you pass it. So I find that pretty handy, and maybe some other folks will as well. But I couldn't agree more that the WebR implementation here that allows us to actually touch and feel these functions as we're reading about them is a game changer. So thanks to Isabella for taking the time to do that, and I am going to dive deep into the GitHub behind her pipe dreams blog here to see how she did that.
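For reference, a toy sketch of dplyr::coalesce() (not the actual address data described above): it returns the first non-missing value at each position, much like SQL's COALESCE.

```r
library(dplyr)

coalesce(c("12 Main St", NA, NA), c(NA, "PO Box 7", NA), "unknown")
#> [1] "12 Main St" "PO Box 7"   "unknown"
```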
[00:25:11] Eric Nantz:
Yeah. And the best part is, in fact, I remember when I first saw these kinds of posts, I would see the run code button, and I would hit it. I mean, this reminds me of the learnr package a little bit too. But then I realized, oh, wait, I can actually click in that code box and change it myself. Like, it's not just the pre-canned example, no less. You can experiment with all of this, which makes it even more fun. My goodness. So you can kinda see why this is gonna become huge in the realm of teaching, the realm of illustrating these concepts. I mean, think about this, Mike. I know we're not far off, I think, from a package's documentation site letting you run the package code itself as a way to try it before you, quote unquote, buy, so to speak. I can't wait until we can integrate that into our pkgdown sites. It's gonna be incredible, right? Within your examples or
[00:26:02] Mike Thomas:
or wherever, your your reference. That's gonna be huge. And, you know, from the first blog post that we ever saw with WebR chunks in it to where we're at now, in particular, this installation of the the tidy tab package and Isabella's blog that's coming from our universe. The installation is so fast, so much faster than it used to be. I think we're waiting, like, you know, 2 or 3 minutes previously. And now, you know, we might be waiting 10, 15 seconds.
[00:26:30] Eric Nantz:
Yeah. George Stagg, the engineer behind this at Posit, he's doing some divine work here, I must say, to make all this happen. And we are all super, super appreciative of it. So this is an area that I mentioned on this podcast I'm exploring very actively right now, and the possibilities are practically endless. And credit to Isabella for, again, putting a much needed spotlight on these gems inside the R language itself that you get every time you install the language, for free; just like everything in R, it is open source, all for you to leverage at your leisure.
And speaking of interactivity, Mike, as we saw in Isabella's awesome blog post from the last highlight, making things interactive with the WebR functionality. Well, in the Quarto ecosystem, we've now got tremendous ways to make interactive dashboards as well. And dashboards, if you're familiar with the flexdashboard package that many use in the R Markdown ecosystem, the Quarto syntax for dashboards is very similar. So you can get up and running pretty quickly. But just what are the easiest ways for you to make that into a more interactive display and not just a static display? Well, friend of the show, frequent contributor Albert Rapp is back once again, returning to the highlights with his latest 3 Minutes Wednesdays-style post on making a Quarto dashboard interactive.
And you might say, well, just how do we go about this? Well, this is a continuation of a previous post where he put in the syntax needed to make, in essence, a placeholder dashboard. We've got, you know, a column here, a column there, and a row below it, and then a sidebar, but nothing in it yet. So how do we replace all that with things that can be both static or interactive? Well, just like anything in Quarto, give yourself an R code chunk or even a Python code chunk, and you'll be able to add in things like really nice markdown syntax for your sidebar. You can leverage an htmltools hidden gem, I use this a lot in my Shiny apps, the includeMarkdown function, where instead of writing the markdown literally inside the source code of your app or your UI function, you can have it as an external markdown file. Just reference that markdown file, and it's going to compile it into the web markup and put it anywhere you want in your HTML report or Shiny app. So he does that in the sidebar with a little bit of narrative around the dashboard itself.
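A minimal sketch of that sidebar trick; the file name here is hypothetical, not the one from Albert's repo:

```r
# Inside a Quarto dashboard sidebar chunk, pull the narrative from an
# external Markdown file instead of hard-coding it in the document.
library(htmltools)

includeMarkdown("sidebar_notes.md")
```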
And then let's spruce it up with some nice tables, shall we? And that's where the first example of an element to put in this dashboard is a table of the Palmer Penguins dataset using the very incredible gt package by Rich Iannone over at Posit. Love this package. And it gives you a very attractive, static table, which, again, for many purposes would be very, very good for the majority of reports. Now, of course, no dashboard would be complete without some form of, you know, more traditional visualization. And that's where, of course, ggplot2 will feed in very nicely into a Quarto dashboard or any type of report for that matter.
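Before moving on to the plots, here's roughly what that gt table step looks like; a minimal sketch, since the exact columns and styling in Albert's dashboard will differ:

```r
library(gt)
library(palmerpenguins)

penguins |>
  head(10) |>
  gt() |>
  tab_header(title = "Palmer penguins")
```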
But out of the box, a ggplot is static, right? How do we make that interactive? Well, Albert highlights another package that didn't get a lot of love initially, but, boy, it sure took off, especially last year: the ggiraph package, or "g-giraffe", depending on how you want to pronounce it. I still don't know which one it is, but I'll go with either one. It's authored by David Gohel, and it's all about turning a static ggplot-produced visualization into an interactive one with tooltips and other great little interactive features as well. So you can quickly get that kind of hover functionality, or filtering functionality by clicking on different points. There are lots of cool things you can do with an interactive visualization like that.
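A hedged ggiraph sketch of that idea: swap geom_point() for its interactive counterpart, add a tooltip aesthetic, and render with girafe(). The details of Albert's actual chart will differ.

```r
library(ggplot2)
library(ggiraph)
library(palmerpenguins)

p <- ggplot(penguins, aes(x = flipper_length_mm, y = body_mass_g)) +
  geom_point_interactive(aes(tooltip = species, data_id = species))

girafe(ggobj = p)   # hover over points to see the species tooltip
```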
Now how about going back to that table? The gt table looks great, but it is a bit static in its presentation. Well, making a short little pivot to the reactable package, which, again, is one of my favorites for my Shiny apps and reports, you can have that sorting and filtering functionality inside. And then, lo and behold, you can even bake in, at the end of the post, a way to filter the table with some controls that are embedded in the sidebar of the Quarto dashboard itself. And you don't necessarily need Shiny for that. There are ways you can build that with Observable JS code chunks, or other ways with crosstalk, to make that HTML element linked to an input that you put in the sidebar or maybe above the table or visual.
And you can have that interactivity so that the user can customize what they want to display, in this case, the penguins' weight distribution in that table. Apparently, there's gonna be another video that he releases soon about making those reactable tables even more interactive. So we're gonna be staying tuned for that. That's a space I'm looking at quite closely in my exploits at the day job. So, again, Quarto dashboards are becoming very popular now, and you can make them very interactive very quickly. And we didn't get into the bits that you could do with a Shiny back end as well, but you can do a lot with Quarto dashboards. And I'm actively pursuing this as we speak for an open source project right now. So credit to Albert. Once again, terrific post, easily digestible with links to more detailed tutorials that he's done on these various topics, including the ebook that he's written on creating gt tables. So I don't know if he ever sleeps, man, but, boy, he is busy.
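As a rough sketch of that pivot, swapping the static gt table for reactable gives sorting and filtering for free; wiring it up to sidebar controls via OJS or crosstalk is the further step shown in Albert's repo.

```r
library(reactable)
library(palmerpenguins)

reactable(
  penguins,
  searchable = TRUE,   # built-in search box
  filterable = TRUE,   # per-column filters
  sortable   = TRUE    # click column headers to sort
)
```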
[00:32:51] Mike Thomas:
Reminds me of somebody else I know who I feel like never sleeps. Who would give you that idea? But this is an awesome blog post. I am already deep into Albert's code, which, it looks like, has a little bit of JavaScript going on to connect these filters, these checkboxes, to the reactable table. I did not know that was possible, but that is really, really cool that he's done that. Obviously, I've seen that sort of interactivity that you can deploy to a static site if you are using Observable JS, with the ability to have filters that drive your charts and things like that, but I did not know we could do that with reactable. So that is super cool. That is some code that I'm going to be taking a hard look at, and I'm really excited and grateful that Albert has put this out in the open. You know, one thing that is consistent with Albert is that his data visualization projects and products are always really aesthetically nice.
So I think that that's awesome. And you can see him leveraging sort of the new Quarto dashboard framework, where you have a card with a plot in it, and that card has a little icon in the bottom right hand corner to expand that card to the full page, which is another one of my favorite features. Our clients absolutely love that with the Quarto dashboards that we're creating these days, and, leveraging bslib, we essentially have the same functionality within the card function in our Shiny apps. And it looks like maybe the ggiraph package also allows you to download these interactive plots, maybe as a static image or an HTML file, with a little save icon in the top right hand corner, which is just a nice utility on top of that. Albert, phenomenal blog post, with videos, content, visuals, GitHub links, everything, you name it. So this is actually a pretty big repository, it looks like, behind this blog post. It's a repository under AlbertRapp/quarto_dashboard, which is where you're going to be able to find it on GitHub if you're interested. And it looks like there's just a ton of examples for interactive plots, tables, interactive selection, OJS, reactable plots, the whole nine yards. So this is a wealth of knowledge baked into a fairly concise blog post, which seems to be a theme with Albert, and I hope nothing changes in the future. So thank you for not sleeping, Albert. And thank you for putting this together.
[00:35:26] Eric Nantz:
Yeah. There's a lot to look at in this repo. We'll have a link to it in the show notes, but it does go through each of these different iterations of how he's built this dashboard, going from the completely static approach all the way to that fancy interactive reactable version and interactive selection. So, yeah, there's gonna be a lot to choose from here. Certainly, if you're a power user of things like CSS, there's a handy little SCSS snippet here too that styles things a bit more. So there's a lot to choose from in this space. As we mentioned a number of times while highlighting the Quarto dashboard functionality, there were many, many in the community asking for a flexdashboard-style setup in Quarto. And now that is here. And I'm very excited to see where that goes.
And just as exciting is the rest of the issue of R Weekly itself, because the highlights don't do enough justice to the great content in the rest of the issue, which, of course, is linked as always in the episode show notes. But Mike and I are gonna take a couple of minutes to talk about some additional finds from this issue. And going back to our first highlight, the author of that blog post, Maëlle Salmon, has also been hard at work on another what can be a very thorny topic for both learning and teaching as well. Recently, I've been following this a bit on her Mastodon account.
She has authored a new R package with a very unique name, which I'm probably gonna butcher right here, called saperlipopette. Yeah, send your feedback to me, I guess, for that one. But in any event, this is a package that is meant to help create, in your R session, the not so pleasant experiences that can happen with Git. Looking at things like, you know, maybe messed up committed files, maybe a merge gone completely wrong. This is inspired by a very famous site that you've probably bookmarked if you've had a problem with Git, called Oh [blank] Git.
You can fill in the blank on that. But in any event, in your R session, you can do some very fun things to learn how to resolve some of these issues in Git itself that you often will find yourself in, one way or another, whether willingly or not, in your version control escapades. So I'm gonna be looking at this because I'm likely gonna be teaching some form of Git training or workshop at the day job. Maybe I'll even do that in the open source world. Who knows? But having a way to illustrate, you know, what just happened when things go wrong and let you practice how to fix it, I think that's extremely helpful, because almost nothing in Git goes exactly as planned the first time around. So knowing how to handle these thorny issues, especially on those merge requests, I think is very, very helpful.
Now, Mike, what did you find?
[00:38:34] Mike Thomas:
No, that's a great find, Eric. I found that the withr package has a new major release, version 3.0.0. There's a nice blog post from Lionel Henry, who's on the Posit team, I believe, talking about, sort of, what the improvements are here. It looks like a lot of the improvements are around performance and compatibility with base R's on.exit function, which, if you are a Shiny developer, especially somebody who's authoring Shiny apps that maybe connect to a database that you want to disconnect from, which you should be doing at the end of the user's session, not the global session, you should be leveraging. So the withr package may be able to help you do that and help you test that functionality as well. We recently put out an open source R package for working with some agricultural finance data, and that downloads some data from the web. And within my testing suite, the unit tests test that. I leverage the withr package to download that data into a temp file, ingest that data into a data frame, and run my tests against that.
And everything sort of disappears at the end of those unit tests running, and all the checks pass with devtools. And I don't have to worry about actual locations within my own machine, or somebody else's machine, or CRAN's machines when we finally send this off to CRAN, about where those files are gonna be temporarily downloaded to. withr just takes care of all that for me, and I can't speak highly enough about that package, as well as the blog post that Maëlle Salmon has authored around withr functionality, which certainly helped me get set up there. So long story short, I am very much a new fan of withr. We'll be using it from here on out. And thanks to Lionel for letting us know what's new in withr 3.0.0.
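A generic sketch of the pattern Mike describes (not the actual package's test suite): withr::with_tempfile() gives the test a throwaway file that is cleaned up automatically when the block finishes. The URL is a placeholder.

```r
library(testthat)
library(withr)

test_that("downloaded data parses into a data frame", {
  with_tempfile("tf", {
    # tf is a temporary path that withr deletes once this block exits
    download.file("https://example.com/data.csv", destfile = tf, quiet = TRUE)
    dat <- read.csv(tf)
    expect_s3_class(dat, "data.frame")
  }, fileext = ".csv")
})
```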
[00:40:40] Eric Nantz:
Yeah. It's been a very helpful package in my exploits developing both apps and packages. And, frankly, your idea of building this into the testing of your package, I think, is extremely novel, especially as you have to deal with maybe other systems or resources from other systems, and you may have to download something, or temporarily write a config file to send to something. You don't want that left around because you're only doing it in a disposable way, maybe through a CI/CD pipeline. So withr is very valuable in that space, especially for other thorny issues, like even just having a temporary change of the directory that I'm working in, just for some esoteric reason because of some other pipeline. I have to be somewhere else just for that function and then get back to where I was.
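That temporary-directory trick looks roughly like this; the path here is purely illustrative.

```r
# withr::with_dir() switches the working directory for just this block of
# code, then drops you back where you started.
withr::with_dir("path/to/other/project", {
  # anything run here behaves as if launched from that directory
  list.files()
})
```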
Yeah. withr is very much appreciated in that kind of utility toolbox that I have for my day to day package and app development needs. So, yeah, really enjoyed reading about version 3 that just got released. And, of course, as we mentioned, we love hearing from you and the community, and we're gonna tell you about the various ways you can get in touch with us. Of course, first, everything you wanna learn about R Weekly is at rweekly.org. So if you haven't bookmarked that, please do. That's where you'll find every new issue and the entire back catalog, so to speak. Every previous issue is right there for the taking.
And if you wanna join our curator team, we definitely have open slots. Please get in touch with us. We have the details on the GitHub repository for R Weekly, which is linked directly in the R Weekly site itself. And, also, we love hearing from you directly for this very show, whether it's me butchering another package name or whatnot, or our speculation on R history, we'd love to hear it. So you can do that via the contact page that's linked in this episode's show notes. It's always there. And, also, if you wanna send us a fun little boost of your feedback using a podcast app such as Podverse, Fountain, Castamatic, or the Podcast Index itself, that's all right there. We have links to how to do that in the show notes as well. And thanks to all of our previous boosters for giving us some much needed encouragement, and we always welcome your feedback on that side too.
And, also, we are sporadically on the social media spheres. I'm on that Weapon X thing from time to time at @theRcast, but more frequently I'm on Mastodon with [email protected], and I will cross-post from time to time on LinkedIn. Just search for my name. You'll probably find me. And, Mike, where can the listeners find you? Likewise. Probably best on Mastodon at [email protected].
[00:43:27] Mike Thomas:
And if you wanna find me on LinkedIn, the best way is to search Ketchbrook Analytics, K-E-T-C-H-B-R-O-O-K, to find out what I'm up to lately.
[00:43:36] Eric Nantz:
Awesome stuff. I enjoy seeing your posts from time to time. Your hustle never stops either, so I hope you get some rest too when you can. But in any event, we're gonna stop hustling, so to speak, on this episode. We'll wrap things up here, and we'll be back with another edition of R Weekly Highlights next week.
Hello, friends. We're back with episode 149 of the R Weekly Highlights podcast. Oh, we're getting close to another fun little milestone, I guess. The episode numbers keep going up and so does each issue of Our Weekly. We're here to talk about the current issues, latest resources, tutorials, and specifically the highlights from the particular issue. My name is Eric Nantz, and I'm delighted that you joined us wherever you are around the world. And, hopefully, you're staying warm, especially if you're in the winter season and hopefully avoiding some ice apocalypses out there. But, nonetheless, we hope you enjoy this episode.
[00:00:36] Mike Thomas:
And staying warm in his humble abode is my awesome cohost, Mike Thomas. Mike, how are you doing today? Doing well, Eric. Yep. It's, it's pretty chilly out here in Connecticut. We live fairly close to a lake that's that's frozen for the first time in a few years, which is it's kind of nice, going out in the ice and and skating around. So trying to enjoy, as much as we can and excited to have some consistency now in 2024. I think we're we're back to back weeks for a couple weeks now on our weekly and hoping to keep it up. Yeah. The momentum is, in our favor, so to speak. So we'll keep that keep that rolling along here. And
[00:01:11] Eric Nantz:
and as always, the Rwicky project rolls along because we have an awesome team of curators that are helping every single week. And this week, our curator is Sam Parmer. And as always, he had tremendous help from our fellow Rwicky team members and contributors like all of you around the world with your awesome poll requests and other, heads up to us about the latest resources that you found. So let's dive right into this. We know, Mike, we have a lot of advances in technology right at our disposal via the magical world of APIs to help automate a lot of the stuff that would take a long time to do. Well, there is a very interesting area that this first highlight exposes in terms of where these APIs can really help in a much needed domain of translation of different languages for our documentation.
So our first highlight is a blog post from the esteemed rOpenSci blog by Mel Salmon, who, of course, is a former curator here at rweekly and now is a research software engineer supporting rOpenSci as well as other endeavors. And this blog post, in particular, talks about the use of the babble down R package to update an existing translation after its changes. And this is, apparently, part of a more broader initiative from rOpenSci for publishing in multiple languages, their various pieces of documentation. And as part of that effort, this babble down package has been developed to help translate markdown based content with leveraging what's called the DeepL API, which before this, I actually didn't know this exists. But apparently, this is a full fledged API built specifically for translation across many of the common languages in the world.
So in this blog post, my old blogs are a pretty simple example but yet very relatable. Having an existing markdown document in English language, and then as it's being kicked off, how would you go ahead and translate that to French in this example? So we have a very simple markdown syntax, which has got a typical headings, subtitles, and narrative inside. The babble down package has a function called deepltranslate, Give it the path to the markdown file, the source language, the target language for translation, and there you go. It's going to call the API under the hood, and you'll get that text right back, in this case, in the French language.
Looks good to me, although I'm not a French speaker, so I'll I'll defer to my own others for the authenticity of it. But that's not all. That's great for, like, your initial document. But what happens if, like anything else, you're gonna update that document, you know, through, you know, maybe pull requests from your collaborators. Maybe you got a new feature you wanna document in that package or tool or whatever this is meant for. And so assuming that this document is in version control because, well, if you're not using version control, you should, especially for larger efforts. Mike and I can attest to that. The babble down package is doing some pretty clever things under the hood to detect the changes that are happening in this document so that when you feed in this updated document to the the BabbleDown package, you can there's a function called deeplupdate where it's going to take this newly changed file.
And again, with very much a similar function parameters as the kind of the initial launch of the of the translation, you will get that new updated language of the document in French with your changes reflected. Now this is using, apparently, a hybrid of the get kind of diffs under the hood, but not just that. It's actually translating the representation of that markdown syntax of that file into XML representation. Because, again, in the web, even though markdown looks like we're just writing this in plain text, when you render it to HTML, you're putting it into another markup language, and XML and HTML are very much related in that space.
So, apparently, this XML representation is a bigger help to pinpoint exactly what is changed in that document so that she the the user of the BabbleDown package doesn't have to send the entire document back for retranslation. It's only gonna send the bits that change. And just like anything in the API world, there's no such thing as a free lunch sometimes. So if you were sending a volumous, you know, lengthy document over and over again, that could perhaps incur some costs if you're leveraging this API more regularly. So being able to only take what you need and translate it back, I think, is a really neat feature and pretty welcome, I'm sure, for those that are using this regularly.
So this is very much scratching the itch of a big need in the community as a whole in data science and other domains of making sure that we make our documentation for our tools, packages, or other, like, analytical pipelines as accessible as possible to those around the world. So I'm really excited to see just all the nifty things going on under the hope of this babble down package. Something I'm gonna keep in mind for my open source projects in the future. Yeah. I couldn't agree more, Eric. And I think this is the topic that we've talked about
[00:07:01] Mike Thomas:
on previous episodes, you know, trying to make R and the packages that we develop, and the documentation that we write as accessible as possible to as many folks as possible. Right? Around the world because R is an international programming language, and that means that we should try to do as much as we can to try to accommodate those folks. And the fact that the people working on on BabbleDown, including Ma'el, have provided us with this tool to just make it much easier for us to do so is is awesome. I think it's what open source is all about. And I really like sort of this walk through with the the API.
This deep l translate function is is really cool. As you mentioned, it allows you to specify your input file, where you want your output file to be written to, your source and your target language. And one interesting argument that I saw is an argument called formality. And in the blog post, Mal has specified this argument, as the string less. But I imagine that you could, need to have your translation be be more formal or less formal or or maybe somewhere in the middle, I'd be interested to learn a little bit more about that. And I imagine that that's sort of a parameter that the API, itself handles, which, you know, is really interesting. I think that there's probably a whole field of study here in terms of language and translation, and, you know, how different cultures represent formality versus informality.
But I I just thought that that was pretty nifty that that argument exists. And would be interested to see, sort of, how changing that argument would change the output. And I imagine that it it fully depends on on your input text and and how specific that is. This this DeepL update function is is really impressive too, and also really impressive that it doesn't use the git diff at all. As you said, it's this XML representation, and it looks like there's a package called Tinkar that sort of helps, with that translation between, your original document, it's XML representation, and finding the differences between those two XML structures. So that's that's really fascinating to me. Sort of reminds me of of the Waldo package, maybe, in some of its functionality, and being able to compare 2 files against each other. Recently, worked on a project using the Waldo package, where we're just reading, you know, these these dot TXT files, which have this this really sort of strange structure, but we're able to really easily show our end users sort of the differences between, 2 different dot TXT files, which is is super important to them. And it's it's just incredible sort of what the the community has created in terms of these packages that allow us to identify these differences, and then take action based upon the differences that we find. So this is a a really nice short and sweet introduction into,
[00:09:56] Eric Nantz:
how these DeepL functions within the BabbleDown package work and maybe able to help you out in your own work. Yeah. I'm even thinking if I do, like, open source Shiny work in the future, having, like, a toggle for, like, changing the language of, like, the interface elements. But then, yeah, it does bring up a lot of possibilities how this could be tied with something like BabbleDown. I'm the the wheels are turning or this could make my apps way more accessible. So this is really neat. Moving on with our second highlight today, we got, you know, as we talk about a lot on the show as well, yes, a community of our packages can supercharge so many of your workflows in data science, tool development, computing in general.
But you know what? Base R itself comes with a lot under the hood, and, honestly, sometimes it doesn't get enough of the spotlight. So that's where this blog post is gonna shed a little bit of much needed spotlight on some additional functions that may come from Bayesar, but they're definitely not so basic and they're quite powerful. And this is authored by Isabella Velasquez, who is a senior product marketing manager at Posit. And she does really great work of her blog as usual. And, this blog post in particular, we'll get to the meat of this shortly, but she's got some bells and whistles here that I think you're gonna really like as we we talk about this. But she opens with the list of 6 functions, actually, and an operator on top of that, that she's been using quite a bit and things that deserve a little more love. And we'll hit each of these quickly 1 by 1. But, we probably won't do each of them enough justice. But you may have seen as you've, like, perused maybe someone else's R package, you know, source code, that sometimes at the end, instead of an explicit return state or return of, like, a function parameter with a return function, you might see a function called invisible put in at the end instead.
What this really means, and a cool name, by the way, is that you are, in essence, returning a temporarily invisible copy of the object. So it's going to still execute normally if you run this like in the R console. But then if you want to save this to a variable, it's not going to print the result when you run that function after saving it to the variable. So you can kind of run it interactively with just calling the function itself and then also when it's not. But here's the cool part about this blog post. You're going to see the snippets of code, but you notice that little run code button at the top there? Guess what, folks? You hit that button, it's going to run it in your web browser.
Two guesses what that's powered by. I smell a WebR implementation here. This looks really nifty. So this is this is just as an aside here. This is the potential we're starting to see here, folks, is that on top of sharing code to do something, WebR, WebAssembly in general, is gonna let us try it out in the browser about you installing a single thing. So if you're a new user to R, man alive. This is a great time to get into the language of these kind of resources inside. But you can quickly see the examples that, that Isabella puts in here and run them yourself and see what's happening. So it's really, really neat to see. So invisible definitely is something I'm starting to use more in my function authoring, in the future.
Another one that I did not know existed, so, you know, mission accomplished for her blog post is the no quote function which basically means if you want to show the syntax of, like, a character string but don't want the quotes around it, you can simply feed it into the no quote function, and now it's going to print as if you don't have the quotes around it. I think this can be very helpful, especially as you're dealing with HTML language, like links or other things that you want to maybe copy into another program or a browser toolbar, then having that no quote function for, like, a URL type function that she highlights here would be very helpful to let you copy and paste without too much friction there.
So I could see other uses for that as well. Here's one that brings back memories for me in my very early days of my R usage of visualization is the co plot function. This is a very handy function when you have, you know, a situation of analyzing multiple variables at once where you could look at different pairs of variables, perhaps even conditioning on another one as well, and you can quickly, kind of, get a read for how these variables are going to interact with each other visually. Great for correlation analysis or other association analysis at a very high level. So that's all built right in. Very nice straight to the point. You can even customize how the rows are constructed and everything like that. So really nice examples throughout.
This one, I have theories on why it's named this way, but we'll see what you think, Mike. Nz char or nz car, depending on how you want to pronounce it. When I look at that name alone, I honestly have no idea what that does at first glance. But what this function really does is that it is a way to simply return true or false on whether the character vector that you supply to it is empty or not. Now n z, at first, I thought, does that mean like non zero? I don't know about that. But then I thought, well, here's maybe an egg. This is speculation on my part.
We know, if you're a historian of the R language itself, that it was founded by Ross Ihaka and Robert Gentleman while they were teaching at the University of Auckland
[00:16:12] Mike Thomas:
in New Zealand. There it is. New Zealand.
[00:16:16] Eric Nantz:
I can't say for sure. I don't know. I've never seen this in writing, but I wouldn't be shocked if there was a little Easter egg in there somewhere, because why is it called nzchar otherwise? I don't know. But in any event, I don't use this function enough. Have you used this function before, Mike? nzchar?
[00:16:31] Mike Thomas:
No, I haven't. I haven't. I've seen it, obviously, just come across in, you know, that sort of automated autocomplete within RStudio that pops up different functions for you as you start typing. But I don't think I've used it before. Let me see if I can take a look at what the documentation says about nzchar.
[00:16:57] Eric Nantz:
I did look at this before the show. I didn't see any references to my theory, you know.
[00:17:02] Mike Thomas:
I like your thinking there. There could be Easter eggs there, but I would have thought non-zero. Well, the nchar obviously means number of characters.
[00:17:13] Eric Nantz:
Yes. That one I get. Yeah. Yeah. Number of
[00:17:18] Mike Thomas:
I don't know. I don't know. Because it's not number of 0 length character vectors. It's the opposite.
[00:17:26] Eric Nantz:
Right. Right. So if you're listening, I'm gonna let you know how to get feedback to us. We love to hear theories from all of you in the audience on this particular one because I've I've wondered about this for years, but, admittedly, I have not used the function much in daily practice. But, hey, now if I have a need to check if they're empty or not, I will
[00:17:45] Mike Thomas:
leverage this for sure. Well, I wonder if, you know, sometimes I feel like functions like this, especially within base R, sort of inherit their names maybe from the C functions that underlie them. So I wonder if there's a relationship there.
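Whatever the name's origin, here's a minimal sketch of what nzchar() actually does:

```r
# nzchar() returns TRUE for elements that have at least one character
x <- c("apple", "", "banana", "")
nzchar(x)
#> [1]  TRUE FALSE  TRUE FALSE

# Handy for dropping empty strings from a vector
x[nzchar(x)]
#> [1] "apple"  "banana"
```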
[00:17:59] Eric Nantz:
That could be, because R is standing on the shoulders of C, Fortran, and the like, and of course the S language before R. So there's a lot of legacy under the hood, and you could go down lots of rabbit holes on the history of R here. Alright, we'll move along. Another function that I have a checkered past with, and I'm curious about your take on this, Mike, is the with() function. When I first used this, it was, in essence, a shortcut function for me: if I want to feed variables of a data frame into another function, in this case the example Isabella puts in here is the plot() function, and I'm lazy and don't want to type, say, mtcars$hp or mtcars$mpg, I can use with(), supply the data frame, and then call plot() referencing hp and mpg without the dollar sign syntax, not too dissimilar to what you might see in a tidyverse pipeline as you're doing the piping operations.
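A minimal sketch of that with() shortcut, using mtcars:

```r
# with() evaluates an expression using the data frame's columns directly,
# so you can drop the repeated mtcars$ prefix
with(mtcars, plot(hp, mpg))

# Roughly equivalent to the longer form:
plot(mtcars$hp, mtcars$mpg)
```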
Yes, it is helpful in this case, but I have tripped myself up more than I care to admit when I've used this in the past. So, admittedly, I moved away from it. But hey, you know what? It is a way to take that shortcut, as long as you use it responsibly, I would guess. Yeah. It's another option. Not one that I admittedly use very often, but it is an option. Yes, it is. And now this next one, I knew about what I call the singular version of it, but I didn't know there was a plural version: the lengths() function, with the s at the end, because I use length() all the time to check, you know, the length of a vector or string or whatnot.
But if you want to quickly check the length of each element in a list or vector, lengths() is basically a shortcut for the more verbose sapply() or lapply() syntax for doing this. So this is great. This may be another shortcut you can put in your toolbox instead of having to do a purrr map using length() under the hood, or an sapply() under the hood. So that was a new one to me for sure. And then here comes the operator that I admittedly should have used way long ago, and that is the null coalescing operator. I'm probably not saying that right. But if you've seen, in various conditional logic, the percent sign, two vertical pipes, and another percent sign closing it, that is this null coalescing operator.
And this is a shortcut. If you've ever done a check for whether an object is null or not and you've got, like, an if statement, if is.null(something), then return one thing, else return another, this is a shortcut for that. And, also, the esteemed Jenny Bryan herself highlighted this in one of her talks at useR! in 2018, which is linked in the blog post, as part of her underlying theme of code smells and things you can shore up in your day-to-day coding development in R. This operator, she shows some great examples of how it streamlines a whole lot of that if-else syntax.
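Two quick sketches of those last items: lengths() versus the sapply() approach, and the null coalescing operator. The operator is shown with a hand-rolled definition here, since it only ships with base R from version 4.4 onward; rlang has long provided the same operator.

```r
# lengths(): element-wise lengths of a list, without sapply()
x <- list(a = 1:3, b = letters, c = NULL)
lengths(x)
#>  a  b  c
#>  3 26  0
sapply(x, length)  # same result, more typing

# The null coalescing operator: use the left-hand side unless it is NULL
`%||%` <- function(lhs, rhs) if (is.null(lhs)) rhs else lhs
x$c %||% "fallback value"
#> [1] "fallback value"
```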
So that's a whirlwind tour of all this, but there's a lot to choose from. And, of course, that just scratches the surface of what base R has. So Isabella concludes the post with some additional blog posts from others in the community and what they've seen in base R that's been useful to them. So really great to see. And, again, I love the idea of being able to run the code directly in this blog post to try things out. So
[00:21:53] Mike Thomas:
awesome stuff all around. I think it is thanks to Isabella's implementation or integration here with WebR that I have finally wrapped my brain around what the invisible() function does, because I have seen it in so much code on GitHub. I have never used it. I still don't know where I would really have much of a use case for it, maybe in some of our more object-oriented programming, but, you know, really the idea there is that your function is primarily called for its side effect. So one of the examples that I read up on, I can't remember whether it's within this blog post or outside it, is the write_csv() function from the readr package. That's obviously going to write a CSV somewhere, to some destination path that you supplied, and it won't return anything within your console when that happens. Right? Most of the time you're doing that, you're not assigning that write_csv() call to a variable.
So you wouldn't know, right, that it would potentially ever return anything. But if you did assign it to a variable, it would actually return the data frame, I believe, to that variable, which is pretty interesting. You know, it makes me curious about how many different functions out there do have this invisible() call at the bottom of them, such that if you did assign that function's result to an object, that object would be populated with something. So I think it's interesting. I think it's really good to know, maybe something nice to have in your back pocket, and I've finally wrapped my brain around that. And I'm not gonna walk through all the other functions that you walked through, Eric, but I will just note that this null coalescing operator, I think, is going to save me a lot of code writing once it's finally integrated into base R here shortly, as opposed to, in just about every project I'm working on, I do have some test for if this is null, right? Then take this action within some particular if statement.
The coalesce function within SQL has been huge for Ketchbrook and a lot of the SQL work that we do for our clients on some particular address standardization projects, where we're trying to test if, you know, the user has a valid second line in their address, like a PO box or something like that, or not. So that's a function in SQL that we use really, really often. I believe that within dplyr there is a similar coalesce() function as well that can return the first non-missing value within a set of values that you pass it. So I find that pretty handy, and maybe some other folks will as well. But I couldn't agree more that the WebR implementation here, which allows us to actually touch and feel these functions as we're reading about them, is a game changer. So thanks to Isabella for taking the time to do that, and I am going to dive deep into the GitHub behind her pipe dreams blog here to see how she did that.
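On the coalesce() side Mike mentions, a minimal dplyr sketch; the address values here are made up:

```r
library(dplyr)

# coalesce() returns the first non-missing value across its arguments,
# element-wise -- much like SQL's COALESCE
addr_line2 <- c("PO Box 12", NA, NA)
fallback   <- c(NA, "Suite 400", NA)
coalesce(addr_line2, fallback, "no second address line")
#> [1] "PO Box 12"              "Suite 400"              "no second address line"
```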
[00:25:11] Eric Nantz:
Yeah. And the best part is, I remember when I first saw these kinds of posts, I would see the run code button and I would hit it. This reminds me of the learnr package a little bit too. But then I realized, oh, wait, I can actually click in that code box and change it myself. It's not just the pre-canned example, no less. You can experiment with all of this, which makes it even more fun. My goodness. So you can kinda see why this is gonna become huge in the realm of teaching and in the realm of illustrating these concepts. I mean, think about this, Mike. I don't think we're far off from a package's documentation site letting you run the package code itself as a way to try it before you, quote unquote, buy, so to speak. I can't wait until we can integrate that into our pkgdown sites. It's gonna be incredible, right? Within your examples or
[00:26:02] Mike Thomas:
or wherever, your reference pages. That's gonna be huge. And, you know, from the first blog post that we ever saw with WebR chunks in it to where we're at now, in particular the installation of the tidytab package in Isabella's blog, which is coming from R-universe, the installation is so fast, so much faster than it used to be. I think we were waiting, like, you know, two or three minutes previously, and now we might be waiting 10, 15 seconds.
[00:26:30] Eric Nantz:
Yeah. George Stagg, the engineer behind this at Posit, he's doing some divine work here, I must say, to make all this happen, and we are all super, super appreciative of it. So this is an area that I mentioned on this podcast I'm exploring very actively right now, and the possibilities are practically endless. And credit to Isabella for, again, putting a much needed spotlight on these gems inside the R language itself, the ones you get for free every time you install the language. Just like everything in R, it's open source, all for you to leverage at your leisure.
And speaking of interactivity, Mike, we saw in Isabella's awesome blog post from the last highlight how to make things interactive with the WebR functionality. Well, in the Quarto ecosystem, we've now got tremendous ways to make interactive dashboards as well. And if you're familiar with the flexdashboard package that many use in the R Markdown ecosystem, the Quarto syntax for dashboards is very similar, so you can get up and running pretty quickly. But what are the easiest ways to make that into a more interactive display and not just a static display? Well, friend of the show and frequent contributor Albert Rapp is back once again, returning to the highlights with his latest 3 Minutes Wednesdays style post on making a Quarto dashboard interactive.
And you might say, well, just how do we go about this? Well, this is a continuation of a previous post where he put in the syntax needed to make, in essence, a placeholder dashboard. We've got, you know, a column here, a column there, a row below it, and then a sidebar, but nothing in it yet. So how do we replace all that with things that can be both static and interactive? Well, just like anything in Quarto, give yourself an R code chunk, or even a Python code chunk, and you'll be able to add in things like really nice markdown syntax for your sidebar. You can leverage an htmltools hidden gem, one I use a lot in my Shiny apps: the includeMarkdown() function. Instead of writing the markdown literally inside the source code of your app or your UI function, you can keep it as an external markdown file, just reference that file, and it's going to compile it into the web markup and put it anywhere you want in your HTML report or Shiny app. So he does that in the sidebar with a little bit of narrative around the dashboard itself.
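That sidebar pattern boils down to something like this inside an R chunk of the dashboard; the file name here is a placeholder, not Albert's actual file:

```r
# Pull narrative text from a separate Markdown file into the sidebar,
# rather than writing the Markdown inline in the document
htmltools::includeMarkdown("sidebar-text.md")
```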
And then let's spruce it up with some nice tables, shall we? And that's where the first element to put in this dashboard is a table of the Palmer penguins dataset using the very incredible gt package by Rich Iannone over at Posit. Love this package. And it gives you a very attractive, static-looking table, which, again, for many purposes would be very, very good for the majority of reports. Now, of course, no dashboard would be complete without some form of, you know, more traditional visualization. And that's where, of course, ggplot2 will feed very nicely into a Quarto dashboard, or any type of report for that matter.
But out of the box, it's static, right? How do we make that interactive? Well, Albert highlights another package that didn't get a lot of love initially but, boy, it sure took off, especially last year: the ggiraph package, or "g-giraffe," however you wanna pronounce it. I still don't know which one it is, but I'll go with either one. It's authored by David Gohel, and it lets you turn a static ggplot-produced visualization into an interactive one with tooltips and other great little interactive features as well. So it can quickly give you that kind of hover functionality, or filtering functionality by clicking on different points. There are lots of cool things you can do with an interactive visualization there.
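A minimal ggiraph sketch of that static-to-interactive switch, not Albert's exact code:

```r
library(ggplot2)
library(ggiraph)

# Swap a regular geom for its *_interactive counterpart and map a tooltip
p <- ggplot(mtcars, aes(x = hp, y = mpg, tooltip = rownames(mtcars))) +
  geom_point_interactive()

# Render the plot as an interactive HTML widget with hover tooltips
girafe(ggobj = p)
```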
Now how about going back to that table? The gt table looks great, but it is a bit static in its presentation. Well, making a short little pivot to the reactable package, which, again, is one of my favorites for my Shiny apps and reports, you can have that sorting and filtering functionality built right in. And then, lo and behold, at the end of the post you can even bake in a way to filter the table with some controls that are embedded in the sidebar of the Quarto dashboard itself. And you don't necessarily need Shiny for that. There are ways you can build that with Observable JS code chunks, or with crosstalk as well, to make that HTML element linked to an input that you put in the sidebar or maybe above the table or visual.
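A minimal sketch of that table setup, assuming the palmerpenguins data; the crosstalk pairing is shown only in outline, since Albert's post has the full wiring:

```r
library(reactable)

# An interactive table with sorting, per-column filters, and a search box
reactable(
  palmerpenguins::penguins,
  searchable = TRUE,
  filterable = TRUE
)

# For sidebar controls without Shiny, wrap the data in a crosstalk
# SharedData object and pair it with a filter widget, e.g.:
# shared <- crosstalk::SharedData$new(palmerpenguins::penguins)
# crosstalk::filter_checkbox("species", "Species", shared, ~species)
# reactable(shared, filterable = TRUE)
```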
And you can have that interactivity so that the user can customize what they want to display, in this case the penguins' weight distribution in that table. Apparently there's gonna be another video that he releases soon about making those reactable tables even more interactive, so we're gonna be staying tuned for that. That's a space I'm looking at quite closely in my exploits at the day job. So, again, Quarto dashboards are becoming very popular now, and you can make them very interactive very quickly. And we didn't even get into the bits that you could do with a Shiny back end as well, but you can do a lot with Quarto dashboards, and I'm actively pursuing this as we speak for an open source project right now. So credit to Albert. Once again, a terrific post, easily digestible, with links to more detailed tutorials that he's done on these various topics, including the ebook he's written on creating gt tables. I don't know if he ever sleeps, man, but, boy, he is busy.
[00:32:51] Mike Thomas:
Reminds me of somebody else I know who I feel like never sleeps. Who would give you that idea? But this is an awesome blog post. I am already deep into Albert's code, and it looks like we've got a little bit of JavaScript going on to connect these filters, these checkboxes, to the reactable table. I did not know that that was possible, but it is really, really cool that he's done that. Obviously, I've seen that sort of interactivity that you can deploy to a static site if you are using Observable JS, with the ability to have filters that drive your charts and things like that, but I did not know we could do that with reactable. So that is super cool. That is some code that I'm going to be taking a hard look at, and I'm really excited and grateful that Albert has put this out in the open. You know, one thing that is consistent with Albert is that his data visualization projects and products are always really aesthetically nice.
So I think that's awesome. And you can see him leveraging the new Quarto dashboard framework, where you have a card with a plot in it, and that card has a little icon in the bottom right-hand corner to expand it to full page, which is another one of my favorite features. Our clients absolutely love that with the Quarto dashboards that we're creating these days, and with leveraging bslib within our Shiny apps, we have the same functionality within that card function. And it looks like the ggiraph package also allows you to download these interactive plots, maybe as a static image or an HTML file, with a little save icon in the top right-hand corner, which is just a nice utility on top of that. Albert, phenomenal blog post, with videos, content, visuals, GitHub links, everything, you name it. And this is actually a pretty big repository behind this blog post. It's a repository under AlbertRapp/quarto_dashboard, which is where you're going to be able to find it on GitHub if you're interested. And it looks like there's just a ton of examples for interactive plots, tables, interactive selection, OJS, reactable plots, the whole nine yards. So this is a wealth of knowledge baked into a fairly concise blog post, which seems to be a theme with Albert, and I hope nothing changes in the future. So thank you for not sleeping, Albert, and thank you for putting this together.
[00:35:26] Eric Nantz:
Yeah. There's a lot to look at in this repo. We'll have a link to it in the show notes, but it does go through each of the different iterations of how he's built this dashboard, going from the completely static approach all the way to that fancy interactive reactable version and interactive selection. So, yeah, there's gonna be a lot to choose from here. Certainly, if you're a power user of things like CSS, there's a handy little SCSS snippet here too that styles things a bit more. So there's a lot to choose from in this space. And as we've mentioned a number of times while highlighting the Quarto dashboard functionality, there were many, many in the community asking for the flexdashboard setup in Quarto, and now that is here. I'm very excited to see where that goes.
And just as exciting is the rest of the issue of R Weekly itself, because the highlights don't do enough justice to the great content in the rest of the issue, which, of course, is linked as always in the episode show notes. But Mike and I are gonna take a couple of minutes to talk about some additional finds from this issue. And going back to our first highlight, the author of that blog post, Maëlle Salmon, has also been hard at work on another topic that can be very thorny for both learning and teaching. Recently, I've been following this a bit on her Mastodon account.
She has authored a new R package with a very unique name, which I'm probably gonna butcher right here, called saperlipopette. Yeah, send your feedback to me, I guess, for that one. But in any event, this is a package that is meant to help recreate, in your R session, the not-so-pleasant experiences that can happen with Git, looking at things like, you know, maybe messed-up committed files, maybe a merge gone completely wrong. This is inspired by a very famous site that you've probably bookmarked if you've had a problem with Git, called Oh Blank Git.
You can fill in the blank on that. But in any event, in your R session, you can do some very fun things to learn how to resolve some of these issues in Git itself, issues that you often will find yourself in one way or another, whether willingly or not, in your version control escapades. So I'm gonna be looking at this because I'm likely gonna be teaching some form of Git training or workshop at the day job. Maybe I'll even do that in the open source world, who knows? But having a way to illustrate, you know, what just happened when things go wrong, and to let you practice how to fix it, I think that's extremely helpful, because almost nothing in Git goes exactly as planned the first time around. So knowing how to handle these thorny issues, especially on those merge requests, I think is very, very helpful.
Now, Mike, what did you find?
[00:38:34] Mike Thomas:
No, that's a great find, Eric. I found that the withr package has a new major release, version 3.0.0. There's a nice blog post from Lionel Henry, who's on the Posit team, I believe, talking about what the improvements are here. It looks like a lot of the improvements are around performance and compatibility with base R's on.exit() function. If you are a Shiny developer, especially somebody who's authoring Shiny apps that connect to a database that you want to disconnect from, which you should be doing at the end of the user's session, not the global session, the user's session, you should be leveraging that. So the withr package may be able to help you do that and help you test that functionality as well. We recently put out an open source R package for working with some agricultural finance data, and it downloads some data from the web. Within my testing suite, the unit tests cover that: I leverage the withr package to download that data into a temp file, ingest it into a data frame, and run my tests against that.
And everything sort of disappears at the end of those unit tests running, and all the checks pass with devtools. And I don't have to worry about actual locations, within my own machine or somebody else's machine or CRAN's machines when we finally send this off to CRAN, for where those files are gonna be temporarily downloaded to. withr just takes care of all that for me, and I can't speak highly enough about that package, as well as the blog post that Maëlle Salmon has authored around withr functionality, which certainly helped me get set up there. So long story short, I am very much a new fan of withr, we'll be using it from here on out, and thanks to Lionel for letting us know what's new in withr 3.0.0.
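A minimal sketch of the testing pattern Mike describes, with a hypothetical URL standing in for the real data source:

```r
library(testthat)
library(withr)

test_that("remote data can be ingested", {
  # local_tempfile() hands back a temp path that is deleted when the test exits
  tmp <- local_tempfile(fileext = ".csv")
  download.file("https://example.com/data.csv", tmp, quiet = TRUE)  # hypothetical URL
  dat <- read.csv(tmp)
  expect_gt(nrow(dat), 0)
})
```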
[00:40:40] Eric Nantz:
Yeah, it's been a very helpful package in my exploits developing both apps and packages. And, frankly, your idea of building this into the testing of your package, I think, is extremely novel, especially as you have to deal with resources from other systems: you may have to download something, or temporarily write a config file to send somewhere, and you don't want that left lying around because you're only doing it in a disposable way, maybe through a CI/CD pipeline. So withr is very valuable in that space, and for other thorny situations too, like even just having a temporary change of the directory I'm working in, for some esoteric reason because of some other pipeline. I have to be somewhere else just for that function and then get back to where I was.
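That temporary-directory trick looks roughly like this (the path is a placeholder):

```r
# Run a bit of code from another directory, then land back where you started
withr::with_dir("path/to/other/project", {
  list.files()  # anything here runs with that directory as the working dir
})
getwd()  # unchanged afterwards
```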
Yeah, withr is very much appreciated in that kind of utility toolbox that I have for my day-to-day package and app development needs. So, yeah, I really enjoyed reading about version 3 that just got released. And, of course, as we mentioned, we love hearing from you and the community, so we're gonna tell you about the various ways you can get in touch with us. First, everything you wanna learn about R Weekly is at rweekly.org. So if you haven't bookmarked that, please do. That's where you'll find every new issue and the whole back catalog, so to speak. Every previous issue is right there for the taking.
And if you wanna join our curator team, we definitely have open slots, so please get in touch with us. We have the details on the GitHub repository for R Weekly, which is linked directly from the R Weekly site itself. And, also, we love hearing from you directly for this very show. Whether it's about me butchering another package name or our speculation on R history, we'd love to hear it. You can do that via the contact page that's linked in this episode's show notes. It's always there. And, also, if you wanna send us a fun little boost of your feedback using a podcast app such as Podverse, Fountain, Castamatic, or the Podcast Index itself, that's all right there. We have links for how to do that in the show notes as well. And thanks to all of our previous boosters for giving us some much needed encouragement. We always welcome your feedback on that side too.
And, also, we are sporadically on the social media spheres. I'm on that weapon X thing from time to time as @theRcast, but more frequently I'm on Mastodon with [email protected], and I will cross-post from time to time on LinkedIn. Just search for my name, you'll probably find me. And, Mike, where can the listeners find you?
[00:43:27] Mike Thomas:
Likewise, probably best on Mastodon at [email protected]. And if you wanna find me on LinkedIn, the best way is to search Ketchbrook Analytics, k e t c h b r o o k, to find out what I'm up to lately.
[00:43:36] Eric Nantz:
Awesome stuff. I enjoy seeing your posts from time to time. Your hustle never stops either, so I hope you get some rest too when you can. But in any event, we're gonna stop hustling, so to speak, on this episode. We'll wrap things up here, and we'll be back with another edition of R Weekly Highlights next week.