Thinking Elixir Podcast — Episode 295: Is Your Type System Leaking?
News includes José Valim's deep dive into Elixir's type system cutting worst-case checks from 10s to 25ms, a new Dashbit post on type systems as leaky abstractions, Oban Pro teasing Workflow UI improvements, MDEx v0.11.6, Livebook Desktop on Linux, and more!
- Speakers
- Mark Ericksen, David Bernheisel
Transcript (3 segments)
Hello, and welcome to the Thinking Elixir podcast, where we cover the news of the community and learn from each other. My name is Mark Ericksen. And I'm David Bernheisel. Let's jump into the news. Oh, actually, before we get to the news, I got one little call out. Let's start with this. I'm going to put this into perspective a little bit for us. Companies that use Elixir benefit from the work of its community and ecosystem, in many cases for free. So a lot of us may personally sponsor some good work and folks in the ecosystem. Personally, I do that for the LSP folks. It's a simple principle, right? Put your money where your mouth is. If you need that tool to be better, put some funding toward it, right? Vote with your wallet, as they say. And so it wasn't much, but it was something. And at a personal level, it really is not as much as it should be, in my opinion. And I know that, and I know who's doing that work. And so I know what to do to go sponsor them, right? And they put out little calls of, you know, I need funding or whatever. And that takes work for them to even do. In a lot of other cases, a lot of us don't know who or what to sponsor either. We might share that feeling of, yeah, I appreciate the work here and I want to contribute somehow. I can't contribute my time, but I can contribute my money. That's what the Erlang Ecosystem Foundation is here to help with, right? That's not their sole purpose, but it is a major purpose of what they do. They collect funds and redistribute them to the community in various ways. It is, in my opinion, the primary place for goodwill to be distributed to the community. More so — maybe this is just the anti-American part of me, that's very American, by the way — but I think it's a very American attitude that all responsibility ends up on the individual and that there's, like, no community responsibility. You know what I mean?
Compared to Japanese culture — Japanese culture is like the opposite, right? You do everything for the community, nothing for yourself. But in America, it's like, no, you do nothing for your community, you do it all for yourself. I don't like that philosophy. Your most active and immediate Elixir community is most likely yours truly, because you listen to our podcast, and we thank you for that. That's just a joke. The real most immediate Elixir community, if you're listening to this, is most likely your workplace. And your workplace probably has other folks that at least know a little bit about Elixir or write Elixir or that kind of stuff — your Slack rooms, your Discords, whatever. So here's the ask; let me get down to it. If your company is not sponsoring the EEF, it probably should. I'll say that: yeah, it probably should. In my opinion, it probably should. Your company is most likely benefiting from a lot of the goodwill of the Elixir community and the folks that are in it for probably virtually free, you know, if they are not sponsoring the EEF in some way. And I know companies range from small to large, contractor to solo to enterprise, right? So don't even worry about amounts. The ask is all that we're looking for. So maybe today you would consider asking someone in your company that has access to the purse if they're able to go to erlef.org — E-R-L-E-F dot org, that's erlangecosystemfoundation.org — and they can find out some more information there about how to sponsor the ecosystem, and let them decide how much would be appropriate. You know, maybe they'll even ask you what would be appropriate, because, yeah, the community's in need, and there's lots of efforts that the foundation wants to fund, and they can only do that if they get funding themselves. All right, that's it for that.
But I wanted to make sure that was at the top, because I know a lot of folks will tune in for the news, you know, and then maybe their commute prevents them from hearing the tail end of the episode. So I want to make sure you hear that: the Erlang Ecosystem Foundation is looking for funding. Go to erlef.org and see if you can help. And first up for the news, José Valim published a new technical blog post on the elixir-lang.org website. It is actually titled Lazy BDDs with Eager Literal Intersections. So this is really deep, academic stuff — really going in on how this type system works. So for you deep language geeks out there — Ashton, I'm thinking of you — we will try and keep this brief and high level for everyone else. But the post is quite technical. It discusses how Elixir changed its set-theoretic type representations from DNFs, or disjunctive normal forms, to BDDs, which are binary decision diagrams, and how that solved one set of problems, then introduced another set of problems, and then how they solved that. It changed it to be a lazy system instead of having to pre-compute and flatten all of these types together. The end result, just to quote the blog post, is that they initially implemented eager literal intersections as part of the Elixir 1.20 release, which reduced the type-checking time of one of the worst cases from 10 seconds to 25 milliseconds. So in this worst-case situation, if you did it absolutely the weirdest way you could to create these type system issues, it could really take 10 seconds to resolve and compile some of this. And they were able to solve that and get down to 25 milliseconds. And in the blog post they continue: however, their initial implementation caused a performance regression, as it did not distinguish between open and closed maps, which they explain in the article.
And this regression was addressed by applying the optimization only to closed maps, as discussed in a previous article, and they link to a commit where you can see that in action. The TLDR — if you're like, wow, I'm glad that this kind of stuff is being published and written about — the TLDR is they did some really cool stuff deep in how the type system works, and it will work better and faster now. So that's cool. This episode is sponsored by paraxial.io, the only security platform with true Elixir support. I sat down with founder Michael Lubas to ask this week's security question. Lots of companies and teams are really focused on velocity and shipping features. Is that fundamentally incompatible with good security? No, I would say that companies that ship software regularly tend to have much better security, because if you're only doing a release every month or year or so in some cases, it usually means the security is very bad: packages are not being updated, you're running outdated software. When I do penetration tests — I've done them professionally, even before Elixir — when you get a modern application, just from the user interface you can kind of feel when it was coded. And if the user interface is old, it doesn't mean that the application is insecure, but you can tell that it's kind of been neglected; they maybe haven't been keeping up with best practices, or they're running an outdated library that you can exploit. So I would say shipping features regularly and having a very fast development pace — I associate that with better security in the majority of cases. Reach out to Michael at paraxial.io to get personalized security feedback for your projects and your business. That's at paraxial.io. Next up, just following up on the technical blog post, right, another little bit of technical stuff about type systems. José posted a blog post about how type systems are leaky abstractions.
And so I was like, hmm, I don't know what that means. I had to go read this. It's basically telling the story of a pull request to add a function, Map.take!. So we already have Map.take, but Map.take! is what the proposal was about. The bang part means that it would raise if the map didn't have the keys in there, right? So Map.take works this way: you pass in a map and a list of keys that you want to take out of it, and it doesn't split the map; it just, well, takes those keys and leaves you a new map with only those keys and their values. Map.take! would do the same thing, but if the map didn't have those keys, then you'd get an error. I won't rehash the whole article, but that's the origin story of this whole thing. And he intersects it with how type systems should work because, well, structs are maps with dressing on them, right? Additional guards around them. And so what would happen if you put a struct in there, you know, with the bang? What should happen? You should already have guarantees, right, that those keys should be there, knowing that it's a struct. And it talks about TypeScript's solution to this, which is keyof. I've actually had to deal with that. Casting via TypeScript is a very weird world. I never quite could understand what the heck was going on there. But we already know that TypeScript is a gradual type system, too. It is unsound at runtime, and José has repeatedly pointed to that as like, yes, this is good, and yes, it's popular, but it's incomplete. And we're trying to get a more complete theory here, hence the set-theoretic type theory that they've been espousing through this whole process. It's a very interesting read. I highly recommend that you check it out. This is not one of those blog posts that says, hey, we released something, or hey, the compiler is 10 times faster now, or, you know, that kind of stuff.
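To make the Map.take versus Map.take! distinction concrete, here's a quick sketch. Note that Map.take!/2 does not actually exist in Elixir — that's the whole point of the proposal — so the strict version below is a hypothetical hand-rolled stand-in:

```elixir
# Map.take/2 keeps only the listed keys; keys that aren't in the
# map are silently ignored.
Map.take(%{name: "Ada", age: 36}, [:name, :city])
#=> %{name: "Ada"}

# A hypothetical strict take — not Elixir's API — that raises
# KeyError when any requested key is missing, which is the behavior
# the Map.take! proposal described.
defmodule Strict do
  def take!(map, keys) do
    Map.new(keys, fn key -> {key, Map.fetch!(map, key)} end)
  end
end

Strict.take!(%{a: 1, b: 2}, [:a])
#=> %{a: 1}
# Strict.take!(%{a: 1}, [:a, :b]) would raise KeyError for :b
```

The struct question in the article follows directly from this: if the argument is known to be a struct, the type system already guarantees which keys exist, so what should a bang variant even mean there?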
This is just like a did-you-know, interesting to think about; there are no results on the other side. Yeah, it's a good learning opportunity if you want to learn a little bit more about the type system and the edge cases, the things that you don't always think about. You know, basically, what they're doing is hard, and here's a good blog post that describes one of those little hard points that you just really have to think on. Fascinating read, highly recommend it; we've got a link in the show notes. Yeah, I will point out that in the social media post where José was talking about it, he said, spoiler alert, the article is not about which is better, dynamic versus static types, but rather an attempt to shine a light on the underlying trade-offs, and he is under no illusion that he will change anyone's mind on the topic of whether they prefer static or dynamic. So it's not about that. It's just like, hey, understanding some of this — when you get into some of it, then you realize, when you're trying to do something that would otherwise make sense, like, I should just be able to refactor my code in this way, and then the type system says you can't: that's the leaky abstraction, where you have to understand the internals of how that type system works just to make your refactoring valid, when logically and runtime-wise it's all valid. So it's just interesting to kind of get some appreciation. And next up, Zach Daniel shared a new usage_rules feature. So usage_rules is the package that we've talked about recently that helps bring LLM rules from your dependencies into, like, a unified CLAUDE.md or AGENTS.md file, so your developer experience with the application and all of its dependencies is a smoother experience. The new feature is that usage_rules now supports copying skills from your package in addition to constructing skills from the usage rules files. And package authors can now provide entire skills for your project.
And when you update your deps, you can get the new version of those skills. So if you use something like Claude Code, where you can do, like, a slash and create custom skills, you can say, this is how I want to create my PR comments, and this is the structure and all the rules around it, so I can just easily have a shortcut for doing a common task. So I thought, oh, that's pretty cool. And I could actually even imagine someone creating a package that has no code, that is only a set of skills. Like, hey, this is the best set of Elixir skills, you know; you could do that. Or you could just do it for your own company and have it as a dev-only dependency. But it's a way of versioning, distributing, and upgrading a set of skills across a team or a group of people. So pretty cool idea. Yeah, it is nice. I wrote my first skill the other day. Oh, nice. Or is it a command or a skill? I don't know what it is anymore. I think they've just kind of all merged into skills. But I have a slash command now of ship. And so, yeah, it'll run Claude Code's own skill called simplify. Highly recommend that, by the way. I often end up with, like, duplicated code, you know, repeated things, or just bad practices generally — AKA slop. Simplify is their built-in skill to help with that. I think it's... I haven't tried that one. The creator of Claude Code uses it all the time, so he says. But then I also created two sub-agents: an Elixir reviewer — it says more than this, but it basically says, you are José Valim, review this code, among other things. And then lastly — I do a terrible job with this, of self-marketing sometimes — the other sub-agent is a technical product manager, I guess. It'll review the branch or the diff or whatever and tell me the impact of this and, like, what the good things are out of it.
So that way I can kind of copy and paste a little bit of that, like, into a Slack message or a PR description, you know, that kind of stuff. Very helpful, right? Essentially, anytime you find yourself repeating things — that's the whole purpose of these plugins and skills and all. Wrap it up in a skill. All right. Next up, this is just a tease. Sorentwo — that's the handle of Parker and Shannon Selbert of Oban fame — teased some upcoming, I presume, Oban Pro workflow improvements. Workflow plus web improvements, actually. So we've got a link to their tease. Just to describe where they currently are with this, right: if you installed Oban Pro and then Oban Web, Oban Pro gives you a whole module for tying and correlating multiple jobs together, called workflows. And so you can kind of create whatever workflow you want — this job and then that job and then that job, and then fan out, and then have dependencies and all that kind of stuff, right? Build yourself a little DAG, basically. One of the shortcomings, though, has been seeing that represented in one place in Oban Web. Oban Web is open source, but it's made, I think, primarily for just jobs, right? And so if your workflow has 10 jobs and they're all leading into each other — in a simple sense, if they're all just serial, one after another — you can't see that in one page in Oban Web. Now comes the tease: that's going to be a solved problem. I'm very excited about that. So the tease shows a new workflow search page. I'm assuming this is going to be Oban Web; who knows, we'll see. It shows a workflow progress bar. You get search filters and the various stats on it. When you click into the workflow, you get, like, a graph of the nodes — and by nodes, I mean the jobs, the individual jobs in there — but it's in, like, a canvas, you know, with the nodes and the connections and all. Seems like everybody's using that nowadays. Very helpful for just illustrating the whole workflow.
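For a sense of what the workflow DAG being discussed looks like in code, here's a rough sketch. Oban Pro is a commercial package and its workflow API has shifted between versions, so treat the module path and function names below as approximations of its workflow builder — and the MyApp worker modules are hypothetical — rather than exact current API:

```elixir
# Sketch of an Oban Pro workflow: three jobs where :process waits
# on :fetch, and :notify waits on :process — a tiny serial DAG.
alias Oban.Pro.Workflow

Workflow.new()
|> Workflow.add(:fetch, MyApp.FetchWorker.new(%{id: 123}))
|> Workflow.add(:process, MyApp.ProcessWorker.new(%{}), deps: [:fetch])
|> Workflow.add(:notify, MyApp.NotifyWorker.new(%{}), deps: [:process])
|> Oban.insert_all()
```

The `deps:` option is what builds the DAG edges — a job only runs once all of its named dependencies have completed — and it's exactly that graph structure the teased UI would render.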
So wonderful that it's there. And then of course, more in-depth stats on the workflow progress and all. Very happy to see that coming up. Can't wait to throw that into my mix deps, because I need that, like, yesterday. So looking forward to that. When it does come out, tune in here and I'm sure we'll announce it when we see it. And next up, MDEx version 0.11.6 was published. So MDEx is the markdown support for Elixir. It's a great library; we've talked about it multiple times. I just wanted to highlight a couple of cool things with this release. It adds a new code fence renderers option. And it also has some fixes in the syntax highlighter and the streaming parser to recognize more patterns and make it more reliable. So you'd want to update for that if nothing else — just the stability and the fixes. But these new renderers are interesting because they allow you to create custom code fences. You know, code fences are like the triple-backtick markdown, or a custom one like alert, where it will be styled uniquely, and you have a separate area where you say the code fences for alerts are handled this way. But it also supports some other code fences for things like pikchr — that's P-I-K-C-H-R — which is a format that turns into a rendered SVG. Then there's also chart, which takes just regular SVG; there's CSV, and others. So if you're looking for opportunities and ways to say, I would like to have a prettier representation of, say, CSV data in a markdown file — hey, that's already built in through these code fences, and you can create custom ones. So I just wanted to call that out. Thought it was really neat. I'm actually installing MDEx for a project of mine, so very cool. I've enjoyed it so far. I love all the different ways you can export the markdown, right?
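To picture what these custom fences look like on the authoring side, here's a small markdown sketch — the alert fence stands in for a custom renderer you'd wire up yourself, and pikchr is one of the built-in formats mentioned in the release; the exact option names for registering renderers are in MDEx's docs, so treat this as illustrative only:

````markdown
```alert
This block would be handled by your custom "alert" renderer
instead of the default syntax highlighter.
```

```pikchr
box "pikchr source"; arrow; box "rendered SVG"
```
````

The info string after the opening backticks ("alert", "pikchr") is what MDEx dispatches on to pick a renderer for the block body.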
And then you never have to worry about syntax highlighting anymore, like whether that's happening server side or client side. I remember that being a big gap a long time ago. All right, next up — oh, this is me. I released a Neovim plugin to help with Hex package completion. This has been bugging me for years; I've been trying to get around to doing it. Here's what this does. Okay. For anyone that's not using Neovim, ignore me for the next minute — or maybe not. Maybe you should consider adopting Neovim, because with your other editor overlords, you have to rely on them to give you these features. With Neovim, you can program it yourself, like I did here. All right. So what is this doing? This is definitely a Neovim plugin — and not just Neovim, but a Blink completion plugin, which is, as far as I know, the most popular completion plugin for Neovim. And it has a plugin system for allowing you to auto-complete from various sources, things like the files on my file system or LSP information. And it just gives you the typical auto-complete experience that most editors have. I added one here called hex-cmp — cmp is short for completion. It's only enabled within the mix.exs file, and it leverages Tree-sitter parsing to find that your cursor — as in you — is in the function called deps and you're inside of a tuple, right? So, convention, right? We know that mix.exs is totally a script. You can do whatever you want in there. It doesn't have to be called deps; you could call it fubar and it would still work, right? It's a script. But conventionally, I've never seen that. It's always deps, right? Every generator out there uses deps. So I think it's a safe bet. So given that safe bet, if you are in the deps function in the mix.exs file — only; it doesn't operate anywhere else — and you're in a tuple: the tuple is three positions, right? The first one is the package, the second is the fuzzy version matcher, and the third is all the options for that dependency.
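The three-position tuple just described is the standard shape in any mix.exs; here's a minimal example (package choices are arbitrary):

```elixir
# A typical deps/0: each tuple is {package, version requirement}
# or {package, version requirement, options}.
defmodule DepsExample do
  def deps do
    [
      {:mdex, "~> 0.11"},
      {:credo, "~> 1.7", only: [:dev, :test], runtime: false}
    ]
  end
end

# The ~> ("tilde-arrow") requirement is the fuzzy matcher: this one
# accepts 0.11.0 and anything newer below 1.0.0.
Version.match?("0.11.6", "~> 0.11")
#=> true
```

Those three positions are exactly what the plugin completes: package names in slot one, version requirements in slot two, and options like `only:` and `runtime:` in slot three.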
And so given that knowledge via the Tree-sitter parsing — which is also the popular way now, the modern way for most editors to perform syntax highlighting, but also just to know where you are, for more intelligence and for how to jump through things — hex-cmp will call Hex's API for listing and searching for packages. It does all the disk caching here, so we're not hammering Hex's servers all the time. Yeah, so it'll search for packages. It'll get metadata on those packages so you can see what that package even is, what its latest version is, how popular it is by way of downloads. And then the second position is that fuzzy matcher, so it'll also autocomplete, or list, those versions for you for that package. Those are exact versions, right? Conventionally, I think everyone does the little tilde-arrow one, ~>: at least this version, I don't care what patch or what minor, right? So the completion engine does its own sorting, but the plugin will provide it with at least the fuzzy ones, the popular ones, at top. So hopefully those show up first in the list — that's the idea. All right, and then of course the third position is all the options that the dependency can take, like runtime false, only in prod or dev or whatever. So it gives you a little bit of that intelligence as well. And it does that by acting like a lightweight little in-process LSP that provides the hover functionality. So it's both a completion engine and a little bit of an LSP — a very narrowly focused LSP — so that when you hover over the dependency (in Neovim you'd probably hit K), you hit K over it and you see the pop-up: the signature of the dependency row and also the information about the package itself. Incredible that it works. I love it. I feel so happy having that out there now. I've used it so many times. If you're using Neovim, you've got to go check it out. All right, there it is. It's a self-plug, sorry, but it's sort of life-changing. Nice.
And next up, Mike Binns announced that version 1.0 of Flame On — which is the flame charts in LiveDashboard — has been released. The release is mostly just version upgrades and some minor bug fixes. But since Flame On has been out for over four years and it's been stable, they figured version 1.0 is appropriate. You know, it's shown that it's stable. I just thought that was funny. It's like, yeah, we should probably call it 1.0. They also say stay tuned for some upcoming improvements and expansions of what Flame On can do. I know I've seen some situations where people may have been using Flame On, so it was accessible in LiveDashboard, but unloaded it or removed it from their project because of version dependency issues, because it wasn't being actively updated to keep up with the latest deps. So if that's been the case for you, now would be a good time to plug it back in and be able to play with it again. All right, next up, Gleam has a new static site generator. It is called Bligato. So here's how they describe themselves: it's a Gleam framework for building static blogs with Lustre and Markdown. Oh, nice. Yeah. Bligato generates your entire static site from a single configuration — blog posts from Markdown with front matter, static pages from Lustre views, RSS feeds, sitemaps, robots.txt files, all rendered via components. One thing that probably ought to be considered baseline static-site-generation stuff now is llms.txt. I don't see that listed here. Maybe they support it, maybe they don't, but that just came to mind because I know ExDoc recently started supporting that as well. And you see that when you generate your docs: you do mix docs and you see the outputs, and by default it's EPUB and HTML, and now I see the llms.txt file out there, which is pretty nifty. All right.
Well anyway, if you're in the Gleam ecosystem, maybe consider Bligato if you want to stay with your favorite language, Gleam, and still have a fully static HTML artifact of your site. Looks pretty cool. And next up, Lucas Samson shared that he released a new library called spark_ex — Spark underscore EX. And what this is, is a native Elixir Apache Spark Connect client with Livebook integration. If you're wondering what Apache Spark TM is, from the website: Apache Spark TM is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters. That's the niche it's filling; this is something that helps you hook Elixir into that. And I just thought it was funny because all over that website they put a little Apache Spark TM — every opportunity they could find is Apache Spark TM. So he should have called it, like, Spark TM EX or something, you know, just to... But
we've got a link to a Livebook demo notebook where you can quickly start to play with it and experiment, if you're using Apache Spark and that'll work into the flow you're wanting to play with. All right. Next up, a quickie: Livebook Desktop now supports Linux. Yay. We've got a link to the announcement from Hugo Baraúna. We reported on this last week, I think, about Tidewave, for what it's worth. So we won't repeat too much, but the short version is that they're using — I don't know what it's technically called, but it's called Tauri. And Tauri is just a way to package an app to be cross-platform. So Tidewave, I think, did that first, and now it's happening in Livebook Desktop. So if you're a Linux nerd, you can have Livebook Desktop now. Yay. And next up, Easel can now render to the terminal as well. So if you remember, Easel was the thing we talked about that Jason Steeves created, where you're creating graphics on the Phoenix LiveView side and then you're able to render that to WX — like wxWidgets — or to HTML. And we were talking about it in the context of being able to play with the boids — B-O-I-D-S — the little life simulator with basic little rules. Well, Jason wasn't done having fun yet, so he said, I'm going to see if I can make that render to the terminal. And that's exactly what he's done. It's just kind of interesting how he's accomplished this. So he's rasterizing off screen using Easel's WX rasterize, so it still requires the WX library as a runtime requirement to be able to do this. Once he's rasterized it, he extracts color silhouettes and fits printable ASCII glyph masks per cell. And then he writes frames through a library called Termite that handles the low-level terminal APIs. Pretty interesting. It also requires that you have an interactive TTY and that it's run from a real terminal session to actually have it render there. I haven't seen what this looks like.
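The glyph-fitting idea in that pipeline — picking a printable character per cell based on how much "ink" it needs — can be sketched in miniature. This toy maps a brightness value to a character from a density ramp; Easel's actual fitting against glyph masks and color silhouettes is far more involved:

```elixir
defmodule AsciiShade do
  # Characters ordered from "lightest" to "densest" ink coverage.
  @ramp ~c" .:-=+*#%@"

  # Map a 0..255 brightness to one glyph from the ramp.
  def glyph(brightness) when brightness in 0..255 do
    index = div(brightness * (length(@ramp) - 1), 255)
    Enum.at(@ramp, index)
  end

  # Render a row of brightness values as a printable string.
  def row(brightnesses), do: List.to_string(Enum.map(brightnesses, &glyph/1))
end

AsciiShade.row([0, 64, 128, 192, 255])
#=> " :=*@"
```

Do that per cell across a rasterized frame, redraw fast enough, and you get animation in a terminal — which is roughly what writing frames through a terminal library like Termite accomplishes.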
Have you seen this in action? What is this like? I haven't seen it in action, but they've got a screenshot of it, a little video. Yeah, it's pretty funny. It looks pretty cool. Oh my gosh. It's beautiful. It's quite a pipeline. So I would just look at this and think, it's like the boids simulator in ASCII characters that change and everything to find a best-fit blended map. It's crazy. You just look at this and you think, wow, what else could I do with this? That could be interesting. Yeah. I wonder what the performance is like there if it has to rasterize everything like that. Anyway. True. Very cool. All right. Last up, there is a new Elixir podcast on the block. It was started by Francesco Cesarini of Erlang Solutions and Allen Wyma. Allen Wyma, I believe, has run another podcast already, so they're just teaming up and creating a new one. So we've got some links to their show in case you might enjoy that. It's called Beam There, Done That. Very nice name. A little jealous. They launched their first episode, where they talked to Andrea Leopardi about concurrency, OTP, and the evolution of the BEAM. And we've got links to all of these — to Spotify and Apple Podcasts — in the show notes. So if you want to try a different pair of voices, go try theirs. Well, that's all the time we have for today. Thank you for listening. We hope you'll join us next time on Thinking Elixir.