Changelog & Friends — Episode 50

Kaizen! Just do it

Gerhard Lazu joins to discuss the Pipe Dream CDN project becoming reality, custom feeds for Plus Plus members, faster deploys, and the 'just' task runner.

Transcript (33 segments)
  1. SPEAKER_01

Welcome to Changelog & Friends, a weekly talk show about the perfect name. Thanks to our partners at Fly.io. Over three million apps have launched on Fly, including ours. You can too, in five minutes or less. Learn how at Fly.io. Okay, let's Kaizen. What's up, friends? I'm here with a new friend of ours over at AssemblyAI, founder and CEO, Dylan Fox. Dylan, tell me about Universal-1. This is the newest, most powerful speech AI model to date. You released this recently. Tell me more.

  2. SPEAKER_00

So Universal-1 is our flagship, industry-leading model for speech-to-text and various other speech understanding tasks. It's about a year-long effort that really is the culmination of the years that we've spent building infrastructure and tooling at Assembly to even train large-scale speech AI models. It was trained on about 12 and a half million hours of voice data: multilingual, super wide range of domains and sources of audio data. So it's a super robust model. We're seeing developers use it for extremely high accuracy, low cost, super fast speech-to-text and speech understanding tasks within their products, within automations, within workflows that they're building at their companies or within their products.

  3. SPEAKER_01

Very cool. So Dylan, one thing I love is this playground you have. You can go there, assemblyai.com/playground, and you can just play around with all the things that is Assembly. Is this the recommended path? Is this the try-before-you-buy experience? What can people do?

  4. SPEAKER_00

So our playground is a GUI experience for the API that's free. You can just go to it on our website, assemblyai.com/playground. You drop in an audio file, you can talk to the playground, and it's a way to, in a no-code environment, interact with our models, interact with our API, to see what our models and what our API can do without having to write any code. Then once you see what the models can do and you're ready to start building with the API, you can quickly transition to the API docs, start writing code, start integrating our SDKs into your code to start leveraging our models and all our tech via our SDKs instead.

  5. SPEAKER_01

Okay, constantly updated speech AI models at your fingertips, or at your API fingertips, rather. A good next step is to go to their playground. You can test out their models for free right there in the browser, or you can get started with a $50 credit at assemblyai.com/practicalai. Again, that's assemblyai.com/practicalai. Kaizen 16! Gerhard, what have you prepared for us? This Kaizen... I think every time I don't know what to expect, and this time I do know what to expect. So what changed? What's new? What's fresh? Well, the slide show you mentioned last episode: I have a slide show with my talking points, a couple of screenshots, things like that. This time I shared it ahead of time, and I prepared ahead of time as well. But also, I've been making small updates to the discussion, I think more regularly than I normally do, discussion #520 on GitHub. I mean, we always have one for every Kaizen, but this time I just went a little bit further with it. And I think it will work well. Let's see. All right. Well, take us on this wild ride. Adam's also here. Adam, what's up? Hey Adam, everything's up. Whenever someone asks me that: everything is up. That's the SRE answer. Everything's up. Everything is up. Otherwise, I'm not here. If something's down, I'm not here. You know it's up because Gerhard's here. Yep. So everything's up. I like that. Well, last Kaizen, we talked towards the end about the pipe dream. Oh yeah. That was the grand finale. So maybe this time around, we start with that. We start with the pipe dream. We start with what is new. Start where we left off. Exactly. Love it. So we mentioned that, or at least you mentioned, Gerhard, that... or was it Adam? Can't remember, anyways. We will clarify this after I mention what I have to say. Wouldn't it be nice if we had a repository for the pipe dream, self-contained, separate from the application? Whose idea was it? I think it was both of ours. Adam said, can this be its own product or something?
And I said, well, it could at least be its own repo. Something like that. That's right. So github.com/thechangelog/pipedream is a thing. It even has a first PR, which was adding dynamic backends. And we put in, close to the origin, a couple of things. So you can go and check it out. PR one. And what do you think about it? Is the repo what you thought it would be? Well, for those who didn't listen to Kaizen 15, can you tell us what the pipe dream is? Well, I think the person whose idea it was should... should do that. However, I can start. So the idea of the pipe dream was to try and build our own CDN, how we would do it: a single-purpose, single-tenant CDN running on Fly.io. It's running Varnish Cache, the open source variant. And we just needed, like, the simplest CDN, which is I think less than 10% of what our current CDN provides. And the rest is just, most of the time, in the way, and it complicates things and it makes things a bit more difficult for the simple tasks. How the idea started... I would only quote you again, Jerod. Would you like me to quote you again? That was Kaizen 15. So many quotes. Sure, let's hear it. I like hearing what I have to say. "I like the idea of having, like, this 20-line Varnish config that we deploy around the world. And it's like, look at our CDN, guys. It's so simple, and it can do exactly what we want it to do and nothing more. But understand that that's a pipe dream, right? That's where the name came from. Because the Varnish config will be slightly longer than 20 lines, and we'd run into all sorts of issues that we end up sinking all kinds of time into." Jerod Santo, March 29th, 2024. Changelog & Friends, episode 38.

  6. SPEAKER_00

    OK,

  7. SPEAKER_01

so there you go. What's funny is, you know how when you're shopping for a car and you look at a specific car, maybe you buy a specific car, and then you see that same car and color everywhere? After this, I have realized, not just hearing the words pipe dream (or maybe the word; we can debate, is it two words or one?), but I actually realized I say that a lot. I call lots of things pipe dreams, and I didn't realize it until you formalized it. And now I'm, like, self-conscious about calling stuff pipe dreams. I think I did it on a show just the other day. I was like, dang it, because now it's a proper noun and I feel like it's a reserved word. You know, it's almost a product. It's almost a product. If you can package up and sell 20 lines of Varnish, we would do it. But if you can't, we would at least open source it and let the world look at what we did. So it has its own repo and it has its own pull request. So, you know, it's going to be a real boy. Does it work? Does it do stuff? I mean, I know you demoed it last time and it was doing things, but does it do more than it did before, or is it the same? Yeah, I mean, the initial commit of the repo was basically me extracting what would have become a pull request to the changelog repo. That was the initial commit, and we ended up with 46 lines of Varnish config. Pull request one, which added dynamic backends, and it does something interesting with a cache status header; with that we ended up with 60 lines of Varnish config. Why dynamic backends? That was an important one, because whenever there's a new application deployment, you can't have static backends. The IP will change, therefore you need to use the DNS to resolve whatever, you know, the domain is pointing to. So that's what the first pull request was, and that's what we did in the second iteration. Now, I captured what I think is a roadmap. It's in the repo, and I was going to ask you: what do you think about the idea in terms of what's coming?
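For context, a dynamic backend in open-source Varnish is typically done with the vmod-dynamic module, which re-resolves DNS at request time instead of pinning an IP when the VCL is loaded, so a redeploy with a new IP is picked up automatically. A minimal sketch, assuming a recent Varnish with vmod-dynamic installed; the hostname and TTL here are illustrative, not the actual pipedream config:

```vcl
vcl 4.1;

import dynamic;

# No static backend; everything is resolved dynamically.
backend default none;

sub vcl_init {
    # Re-resolve the app's hostname roughly every 10 seconds,
    # so new deployments (new IPs) are picked up automatically.
    new app = dynamic.director(port = "80", ttl = 10s);
}

sub vcl_recv {
    # Hypothetical hostname; the real one would point at the app.
    set req.backend_hint = app.backend("app.internal");
}
```

The trade-off versus a static `backend` block is one extra DNS lookup per TTL window, which is cheap compared to routing requests at a dead IP after a deploy.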
So the next step would be to add the feeds backend. Why? Because feeds, we are publishing them to Cloudflare R2. So we would need to, you know, proxy to that, basically cache those. I think that would be, like, a good next step. Then I'm thinking we should figure out how to send the logs to Honeycomb exactly the same as we currently send them. So that, you know, same structure, same dashboard, same query, same SLOs, everything that we have configured in Honeycomb would work exactly the same with the new logs from this new CDN. Then we'd need to implement the purging across all instances. I think that's slightly harder, because as we deploy the CDN in, like, 16 regions, 16 locations, we would need to expire, right? Like, when there's an update. So that, I think, is slightly harder, but not crazy difficult. And then we would need to import all the current edge redirects from our current CDN into the pipe dream. And I think with that, we could try running it in production, I think. Good roadmap, I dig it. So our logs currently go to S3, not to Honeycomb, in terms of logs that we care about. And I know that I previously said we only care about our MP3 logs, not our feed logs, in the sense of statistics and whatnot, but that has since changed. I am now downloading, parsing and tracking feed requests like I am MP3 requests. And so we would either have to pull that back out of Honeycomb, which maybe that's the answer, or somehow have it also write to where S3 is currently writing to, in the current format, so that I don't have major rewriting on the app side. Thoughts on that? So we can still keep S3, whatever intercepts the logs, right? Because in our current CDN, obviously the CDN intercepts all the logs, and then some of those logs, they get sent to S3 indeed. But then all the logs, they get sent to Honeycomb. So you're right, I forgot about the S3 part. So on top of sending everything to Honeycomb, we would also need to send a subset to S3, exactly as the current config.
So yes, that's an extra item that's missing on that roadmap indeed. All right, cool. So we add that item to the roadmap and I think it's all hunky-dory. Do you know how you're going to implement purge across all app instances? Like, what's the strategy for that? No idea. No idea currently. I mean, based on our architecture and what we have running, so that we avoid introducing something new, a new component, a new service that does this, we could potentially do it as a job using Oban, I think. Because at the end of the day, it's just hitting some endpoints, HTTP endpoints, and it just needs to present a key, right? Because if we don't use one, anyone can expire our cache, which is the default in some CDNs, say.
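The "present a key" idea maps to a small guard in VCL: accept `PURGE` requests only when a shared-secret header matches. A sketch of what each instance could run; the header name and the idea of a literal secret in the config are illustrative only:

```vcl
sub vcl_recv {
    if (req.method == "PURGE") {
        # Hypothetical shared-secret header; without a check like this,
        # anyone who can reach the instance could expire the cache.
        if (req.http.X-Purge-Key != "replace-with-a-real-secret") {
            return (synth(403, "Forbidden"));
        }
        # Invalidate the cached object for req.url on this instance.
        return (purge);
    }
}
```

The distributed part would then be the job (Oban, in this idea) looping over every instance and sending each one a `PURGE` for the changed URL with that header set.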

  8. SPEAKER_00

    Yeah,

  9. SPEAKER_01

we found that out the hard way. Exactly. So that's something that we need. I think an Oban job would make most sense. It's actually pretty straightforward. We already have a Fastly purge function in our app that goes and does a thing. And then we just change this to go and background-job a reset on all these different instances. Now, there has to be some sort of orchestration, like, the instances have to be known. Maybe that's just, like, a call to Fly or something, or I don't know how. Okay, DNS-based. Yeah. We can get that information by doing a DNS query; it tells us all the instances, and then we can get all the URLs. Yeah, that sounds like a straightforward way of doing it. Where's the, uh, where's the data being stored? We upload currently, uh, in pipe dream... Pipe dream is just a cache. So you mean, where's the cache data being stored? Okay, so pipe dream is just... what exactly does pipe dream do? So pipe dream is our own CDN, which caches requests going to backends. So imagine that there's a request that needs to hit the app, and then the app needs to respond. So the first time, like, let's say the homepage, right? Once the app does that, subsequent requests no longer need to go to the app. Pipe dream can just serve them, because it already has that request cached. And then, because pipe dream is distributed across the whole world, it can, you know, serve from the closest location to the user. Exactly. And the same would be true, for example, for feeds: even though they are stored in Cloudflare R2, the pipe dream instance now goes to Cloudflare R2, gets the feed, and then serves the feed. Gotcha. And so Varnish is storing that cache locally on each instance, in its local disk storage, or however Varnish does what it does. So by default we're using memory, but using a different storage backend, like a disk backend, would be possible, yes. I was just thinking about expiring, because we just did this yesterday, where we had to correct a deployed slash published episode.
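On the storage point: by default Varnish keeps cached objects in memory (malloc storage), and how long an object is served from cache is governed by the TTL set on the backend response. An illustrative fragment; the URL pattern and TTL values are made up for the sketch, not pipedream's real settings:

```vcl
sub vcl_backend_response {
    # Feeds fetched from the R2 origin can tolerate a longer TTL
    # than app pages; both values here are illustrative only.
    if (bereq.url ~ "\.xml$") {
        set beresp.ttl = 1h;
    } else {
        set beresp.ttl = 60s;
    }
}
```

With purge-on-publish in place (as discussed above), the TTLs mostly act as a safety net rather than the primary freshness mechanism.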
And we ran into a scenario where Fastly was caching, obviously, because it's the CDN. And then I went into the Fastly service and purged that URL, and then it wasn't doing what we expected. And I bailed on it and handed it to Jerod, and Jerod checked into R2, and R2 was also caching. And so we essentially had this scenario where our application was not telling the CDN that this content is new: expire the old, purge, etc. And I just wonder, in most cases, aside from the application generating new feeds, which happens usually at the action of a user... so me, Jerod, somebody else publishes an episode or republishes... couldn't the expiry command, so to speak, come from that action and inform the CDN? Yeah, exactly, which is how it works right now with Fastly. Like, after you edit an episode, we tell Fastly to purge that episode. The problem we had yesterday is that Fastly purged it, but then Cloudflare also had a small cache on it. And so Fastly would go get the old version again and say, okay, now I'm fresh. And so we had two layers of cache that we didn't realize. And so that's probably fixed now. But yes, it would be basically everywhere in our app that we call fastly.purge, we would just replace that with pipedream.purge or whatever, which would be an Oban process that goes out to all the app instances. I see. So the question was mechanically how to actually purge the cache, not so much when. Yeah, because we already have the when pretty much figured out. Gotcha. Which is pretty straightforward, really, because when we publish and we edit or delete, those are the times that you purge the cache. Otherwise, what's the point? Yeah, like, any other time it doesn't make any sense. A change hasn't happened, so don't purge. So, okay, how plausible is this pipe dream? Like, should we rename it to, like, something else, because it's not a pipe dream anymore? Or more or less of a pipe dream? Obviously, I'm not suggesting that, naturally, but, like, it becomes real.
Does it become an oxymoron when it becomes real? I don't know. I quite like the name, to be honest. I think it has a great story behind it, you know, so it just goes back to the origin. And the CDN is a pipe, right? I mean, it is exactly so. Yeah. Yeah, I like that pipe idea. That was, like, one of the follow-up questions. Do we keep a space, or introduce a space, or no space? That's a really important decision. Space or no space. What about a tab? Should we put a tab in there? We can. Camel case, no space, space. What do the listeners think? I mean, you've been hearing this story for a while and you've heard us think. I think we should have a poll. And that's how, you know... I know, that's how we end up with names like Boaty McBoatface. Very aware of that. This is not that. We're just asking, like, what would be the way to spell it that would make most sense? Pipe dream one word, pipe space dream, pipe tab dream. I'm not sure about that. I think we can do one like that for fun, or camel case indeed. I'm leaning towards one word. The Merriam-Webster dictionary and the Cambridge dictionary both say that it's two words. I'm seeing it two words everywhere. Yeah. Except for old English. Yeah. Where it's pipedream, all one word. I'm leaning towards one word, though. Just, like, just pipedream, one word. Okay. And I'm leaning in the other direction. So we need a poll. Great. Well, the repo name is already, like, lowercase pipedream. No spaces, no nothing, no dashes, nothing like that. So, you know, I think it would make sense. So, yeah. All right. We'll run a poll. See what people think. See what people want. Give the people what they want. Correct. And when it comes to when we do switch it into production, whenever we know that that happens, I think we could maybe discuss again whether we rename it, when it stops being a pipe dream for real. For now, it's still, like, a repo. It's still a config that runs.
I mean, if you go to pipedream.changelog.com... Yeah. You know, it does its thing. But it's not fully hooked up with everything else that we need. I have a new name. Pipe reality. Pipe reality. Just let it marinate. Not now. Not yet. Pipe media. I don't know. Pipe log. Pipe log. Ooh. Oh, here's a better one. Change pipe. Pipely. Pipely. Oh. I think that's the winner. I think that's the winner. Oh, quick, buy the domain before someone else buys it. Pipe dot ly. Oh, yes. That one's almost too good. Almost. Yeah. Is this really where we're marching towards? I know this began as literally a pipe dream, and it's becoming more real. You've had some sessions. According to the... maybe I'm jumping the gun a little bit on your presentation here, but you've podcasted about this slash live-demoed this. We've been talking about the name. We've been talking about the roadmap. Like, is this really a true possibility, to do this successfully? Well, based on the journey so far, I would say yes. I mean, it would definitely put us in control of the CDN, too. A CDN is really important for us. So it's even more important than a database, because we're not heavy database users, and we'll get to that in this episode, I'm sure. So a CDN really is the bread and butter. Now, we need something really simple. We need something that we understand inside out. We need something that, I would say, is part of our DNA, because we're tech-focused and we have some great partnerships and we've been on this journey for a while. You know, it's not something that one day we woke up and we said, let's do this. So this has been in the making for a while. We were almost forced. In a way, yes. I would say encouraged, you know, in a way. Like, we were pushed in this direction by our other options. Yeah. But I think there is, like, this natural progression towards this, and it doesn't mean that we'll see it all the way through.
But I would say that we are well on our way, to the point that I can almost see the finish line. I mean, even the roadmap, right? Putting the roadmap down on paper made me realize actually the steps aren't that big, and we could take them comfortably between Kaizens. And I don't want to say by Christmas, but wouldn't it be a nice gift, a Christmas gift? What do you think? I mean, I think that's a bold roadmap. Let me add this to the roadmap, or maybe I'm just not seeing it in the repo. A test harness. Is there a test harness? No, there isn't a test harness now. I would love to be able to develop against this with confidence, especially once we start adding those edge redirects and different things. I would love to have that as part of the roadmap, so that I can fire it up. I'll create an issue. I would love that. Yeah, go for it. Cool. Open source for the win. Cool. So I'm going to open source the issue, and then you open source the code. Amazing. I love that. Just making sure you didn't say PRs welcome and move on. Cool. Yeah. Can we revisit the idea of this being a product? Single-tenant, single-purpose, simple... seems like a replicated problem set. Honestly, I think so. Honestly, I can definitely see this being part of Fly.io. Well, there's this name which we cannot name. In regards to Fly, it's more of a... a class of people. I would say it's probably that, and I'll be even more vague. Sorry, listeners. That's so vague that I don't even know what you're talking about. There is some information. I'm not sure how much we can share,

  10. SPEAKER_00

    but

  11. SPEAKER_01

then there's, like, Tigris, that has led the way in a lot of ways. And I just talked to Ovais, because, by the way, they may even be sponsoring this episode. Fly is not only a partner, but also a sponsor of our content. And I had a conversation with Ovais, who is one of the co-founders of Tigris. And he shared with me that if it weren't for Fly, it would have taken them years to build out all of the literal machines across the world with the NVMe drives necessary to be as fast, to be what Tigris has promised. And I don't want to spoil it for everybody, but Tigris basically is an up-and-coming S3. And because of the way that Fly networks, and because of the way that Fly handles machines across the world, and the entire platform that Fly is, very developer-focused, Tigris was able to, I think within nine months, stand up Tigris. And so you can deploy Tigris via a single command in the Fly CLI, and then you can also have all of your billing handled inside there. This is not an ad, I'm just describing it. But, you know, when I said that back in the day, I was thinking about Tigris, because I had first learned about them and knew about the story, and I knew they were built on Fly. I knew their story was only possible because of what Fly has done. And I think that this pipe dream is realized, or capable of being realized, because of Fly being what Fly is. And I feel like, if we have this simple nature... sort of the... I said really simple CDN, but I'm not tied to that, because RSS has, you know, kind of won the "really simple" part of it. But I think that's kind of what it is. It's like, I feel like other people will have this, and it can certainly live in this world of Fly. I don't know. There's a possibility there. I think we build it for ourselves, and then we'll know more. Are you thinking make it private? The repo? It's still not too late. You're going to rug pull these people before it's even... there's a rug down? Well, yeah, no one's using it.
So yeah, private rug pull, and... it's 60 lines of Varnish. I think we're getting ahead of ourselves, right? I think so. But once we start adding the test harness, once we start adding the purging, which by the way is specific to our app... but maybe that would need to be generic, by the way. So if this was to be a product, we would need to have a generic way of purging; doesn't matter what your app is. So there's a couple of things that we need to implement to make this a product. And in that case, it would be in this repo, I think, but it could also be, like, a hosted service, like Tigris is, maybe, especially if we get the cool domain. Why not? I can see that. And this can be our playground. Like, the pipe dream can be our playground, but then the real thing, with all the bells and whistles, could be private. Yeah, I think we build pipe dream in the open, and then, if we decide that there's a possibility there, then you genericize it in a separate effort. The one thing which I do want to mention is that there's a few people that helped contribute. So I would like to... this is also time for shout-outs: to Matt Johnson, one of our listeners, and also James A. Rosen. He was there from the beginning. So the first recording that we did, that's already live; the second one as well, that we recorded, I haven't published yet. I still have to edit it, but that was, like, basically the second pull request that we got together. And even though a bunch of work, you know, obviously went on in the background before we got together, when we did get together, it was basically putting all the pieces together, you know. So we did it in this very open source group spirit. And, yeah, so there's that. So I think keeping that true to open source would be important. And if not, then we would need to, you know, make the decision soon enough, so we know which direction to take. But you're right: rug pulls, not a fan at all. We should never do that.
And even the fact that we're discussing so openly about this, I welcome that. I think it's amazing, this transparency, so that we're always straight from the beginning about what we're thinking, so that no one feels that they were misled in any way. Agreed. Agreed. I like it. Well, the last thing I would like to mention on this topic, before I'll be ready to move on, is that we live-streamed the CDN journey at Changelog with Peter van Nougle. There'll be a link in the show notes. We got together and we talked about where we started, you know, how we got to the idea of the pipe dream, and where we think of going. So if you haven't watched that yet, maybe it's worth it. There was a slideshow. Not as good as the last one, the last Kaizen, but it was... I'm happy with it. Let me put it that way. Awesome. Cool. We'll link that up. Okay, friends, here are the top 10 launches from Supabase Launch Week number 12. Read all the details about this launch at supabase.com/launchweek. Okay, here we go. Number 10: Snaplet is now open source. The company Snaplet is shutting down, but their source code is open. They're releasing three tools under the MIT license for copying data, seeding databases, and taking database snapshots. Number nine: you can use pg_replicate to copy data (full table copies and CDC) from Postgres to any other data system today. It supports BigQuery, DuckDB and MotherDuck, with more sinks to be added in the future. Number eight: vec2pg, a new CLI utility for migrating data from vector databases to Supabase, or any Postgres instance with pgvector. You can use it today with Pinecone and Qdrant; more will be added in the future. Number seven: the official Supabase extension for VS Code and GitHub Copilot is here, and it's here to make your development with Supabase and VS Code even more delightful. Number six: official Python support is here. As Supabase has grown, the AI and ML community have just blown up Supabase, and many of these folks are Pythonistas.
So Python support expands. Number five: they released Log Drains, so you can export logs generated by your Supabase products to external destinations like Datadog or custom endpoints. Number four: authorization for Realtime's Broadcast and Presence is now in public beta. You can now convert a Realtime channel into an authorized channel using RLS policies in two steps. Number three: bring your own Auth0, Cognito or Firebase. This is actually a few different announcements: support for third-party auth providers, phone-based multi-factor authentication (that's SMS and WhatsApp), and new auth hooks for SMS and email. Number two: build Postgres wrappers with Wasm. They released support for Wasm (WebAssembly) foreign data wrappers. With this feature, anyone can create an FDW and share it with the Supabase community. You can build Postgres interfaces to anything on the Internet. And number one: postgres.new. Yes, postgres.new is an in-browser Postgres with an AI interface. With postgres.new, you can instantly spin up an unlimited number of Postgres databases that run directly in your browser, and soon deploy them to S3. Okay, one more thing. There is now an entire book written about Supabase. David Lorenz spent a year working on this book, and it's awesome. Level up your Supabase skills and support David: purchase the book. Links are in the show notes. That's it. Supabase Launch Week number 12 was massive. So much to cover. I hope you enjoyed it. Go to supabase.com/launchweek to get all the details on this launch, or go to supabase.com/changelogpod for one month of Supabase Pro for free. That's S-U-P-A-B-A-S-E dot com slash changelogpod. What's next? Custom feeds. That's one of your topics, Jerod. Custom feeds. So tell me about it. I don't know what it is. I know what it is, but I don't know what exactly about custom feeds

  12. SPEAKER_00

    you

  13. SPEAKER_01

want to dig into. So custom feeds is a feature of changelog.com that we wanted to build for a long time. Probably not quite as long as we waited on chapters, but we've been waiting, mostly because I had a false assumption, or maybe a more complicated idea in mind. We wanted to allow our Changelog++ members to build their own feeds for a long time. The main reason we want to allow this is because we advertise Changelog++ as being better, don't we, Adam? Yeah, it is better. It's supposed to be better. However, people that sign up and maybe only listen to one or two of our shows, whereas they previously would subscribe publicly to JS Party, for instance, and maybe Ship It... they now have to get the ++ feed, which was, because of Supercast, all of our episodes in one ad-free master feed. And so for some people that was a downgrade, because they're like, wait a second, I want the ++ versions, but I also don't want all your other shows. To which we were quite offended, but we understand. And that's been the number one request. I would call it a complaint, but actually our supporters have been very gracious with us. They ask for it, but they say it's not a big deal, but it would be nice. In fact, some people sign up for ++ and continue to consume the public feeds, because that's what they want to do. But we wanted to provide a solution for that for a very long time. And because it was ++ only, I had it in terms of, like, blockers... I had this big blocker in front of it, which was: we need to get off Supercast first. Because the reason why it's a problem is because Supercast works this way. Supercast is our membership system, built all for podcasters, and it's served us very well, but it has some technical limitations, such as this one. So moving off Supercast is a big lift, and not one that I have made the jump on yet, because there's just other things to do, and it works pretty well, and lots of reasons.
And so I didn't do custom feeds for a long time, thinking, well, we've got to get off of Supercast first. And then one day it hit me: why do we have to get off of Supercast? Can't we limp into this somehow? Can't we just find a way of doing it without getting off of Supercast? And the answer is actually pretty simple. It's like, well, all we need to know is, are you a ++ member or not, locally to our system... and that lives in Supercast. And then I remembered, well, Supercast is just using Stripe on the backend, and it's our Stripe account. And that's awesome, by the way; they give us direct access to our people. No lock-in and stuff. And so kudos to them for that. And so I was like, all we actually have to know is, do you have a membership, and all the membership data is over in Stripe. And so it's simply a Stripe integration away from having membership information here in changelog.com. So I built that; worked just fine. And then I realized, okay, now I can just build custom feeds and just allow it for people who are members. And so we built out custom feeds, and it's pretty cool. Have you used them, Gerhard? Have you built a custom feed? No, I still consume the master feed, the master ++ feed, with everything. Master ++ feed. Yeah. Okay. That's fair. But do you know what? I would love to build one now. Oh, you would. Yeah. Let's see. Let's see what happens if we do that. So, changelog.com. How do I do that? Like, run me through that, Gerhard. I sign in. Are you signed in? Are you a... are you a ++ member? Yes, of course you are, because you have the ++ feed. Yeah. Okay. So sign in, changelog.com. Yep. And go to your home directory, the tilde. Yes, I'm there. And there you should see a section that says custom feeds. I do see it. Okay. Click on that sucker. Get started. Okay. New feed. All right. There you go. Add a feed. Now you're going to give it a name; that's required, you know. Call it Gerhard's feed.
    Yes, sure. Gerhard's feed. You can write your own tagline, and that'll show up in your podcast app. Okay. You can be like, it's better. Hang on. I'm setting a tagline: Gerhard made me do this. Okay. Okay. Moving on. Then you get to pick your own cover art, because, hey, you may be making a single-show feed, maybe you're making all the shows. You can pick the Plus Plus one, you can pick a single show, pick your cover art, or you can upload your own file. You get to pick a title format. So this is how the actual episode titles come in to your podcast app. Maybe you want to say, like, the podcast name, colon, the title of the episode. Maybe you just want episode titles, you know; put a format in there. And then you can limit your feed to start on a specific date. Some people want, like, fresh cuts between the old days and the new days, and so they want to start it on this date, because it doesn't mess up their marked-as-read or whatever. Okay. September 13th, start today. It'll start today. Okay. It's going to be empty. And then pick the podcast you want. Okay. So hang on. I used... Oh, I see. Okay. Okay. I see. I see. So, upload cover art. That's the thing which was messing with me, because I wanted to add mine, but then it said "or use ours". And when I say "or use ours", I'm basically changing the cover art which I uploaded with one of yours. Interesting. Ours as in a Changelog cover art that previously exists. Got it. So you can, like, use JS Party's, or upload your own file, and you'll have your own cover art for your own feed. Okay. So I've made a few changes. First of all, the name is Gerhard and Friends. Okay. The description is Kaizen 16, this episode. Okay. The cover art, I uploaded one, but then I changed it to the Changelog & Friends one. Okay. Starts today, 13th of September. Title format, I will leave it empty, and for the podcast I'll choose Changelog & Friends. Okay. Yes.
    And this feed should contain Changelog Plus Plus ad-free extended audio. Yes. Bam. And automatically add new podcasts we launch. I'm going to deselect that, because I only want Changelog & Friends. Save. Perfect. Boom. It's there. There you go. You built a custom feed. You can grab that URL, pop it into your podcast app, subscribe to it. Got it. And I found the first bug. No, you didn't. So the bug is, if I upload my cover art and then I select another cover art from one of yours, it uses my cover art, but not in the admin. In the admin, it shows me that it's using yours, but when I create the feed, it's using my cover art. Okay. So you did both. I did both. Yes. And then submitted the form. Correct. Yes. Okay. You are the first person that's done that, I think. Of course. Of course, people usually pick one or the other, so, okay. Open an issue; I will get that fixed. I will. Let me take a screenshot so that I remember. Boom, there. Awesome. Looks great, actually. Hang on, the picture which I want for this cover art is us three recording right now. So if Adam looks up... I'll take a screenshot. There you go. That will be my cover art. Okay. Got it. Too good, too good. So custom, so cool. You know, one thing I was going to do, which I haven't done yet, this is a reminder, is I want to put the Changelog legacy cover art in the list. Don't you think so, Adam? Like, you can have the old Changelog legacy logo if you want. That would be cool, actually. Yeah. Actually, one idea we had is to expand these, you know, to maybe have custom artists come in and create new cover art you can select from. That might be cool. Very cool. But yeah, it's been kind of a screaming success, honestly. We currently have 320 Changelog Plus Plus members, and those 320 people have created 144 custom feeds so far. Including mine. Including yours. I see yours right there, and the cover is your face. Yes. Cool. So cool. So that's the feature. That's amazing. It worked very well, I have to say. I just still have to load it in my podcast player, but once I do that, amazing. Well, let's stop there, then, because that's where I'm at, and that's where I'm stuck. Jared, you're also stuck. Yes. So Gerhard's next step is to do what I've done, and I think he may have the same outcome. I don't know. My outcome was: I loaded the URL into my clipboard on my iPhone, opened up Overcast, add podcast via URL, did that, clicked add URL, and it says not a valid URL. Does yours have a start date? No. Okay. I don't think so. So yours has a bunch. The URL only contains the path: it's forward slash feeds, forward slash a SHA, so it doesn't have the full... Oh, I might have changed...
    I might have screwed that up yesterday, when I was fixing something else, when I was giving you your copy. This has been weeks for me. I just haven't reported it to you yet. And you've been waiting for this, to do it live? Why would you wait this long? Were you waiting for this? Yes. Public embarrassment. Okay. No, just the fact that I just haven't done it yet. I'm sorry. Okay. No, that's all right. I think that that copy URL button should have copied the entire URL. Did it just give you the path, Gerhard? It did. Yes. No wonder it's not a valid feed. So I literally fussed with that yesterday, because I was giving Adam a different copy-paste button, and I might've broken it yesterday. Now, interestingly, if I hover over it, I can see the correct link. Yeah. But when I click on it, I only get the path. Yeah. The href is correct, but the data-copy value is incorrect. Right. And I'm pretty sure I broke that yesterday. So that used to work, because all these other people are happy, but you're sad, because I broke it yesterday. So I have a quick fix: you right-click the get URL. Yeah. And you say copy URL, rather than relying on the click action. Right. And then you get the proper URL. Try that, Adam. Let's see if that works. Let's see here. Copy link. Did it solve my problem? Let me enter it. Boom goes the dynamite. It's at least not yelling at me. It is taking its time, though. So, well, the other reason why that was happening probably a few weeks ago is because if you load a feed that has all of our episodes, for instance, a thousand plus, a 12-megabyte XML file, we would serve it slow enough that Overcast would time out, and it wouldn't think it was a valid feed. But then I fixed that by pushing everything through the CDN, because when I first rolled it out, it was just loading directly off the app servers. It was just a little bit too slow for Overcast. Right. Okay. Next question, then. This is a UX question.
    I am not a Plus Plus subscriber, but I can click the option, and I assume it does nothing, to say this feed should contain Plus Plus ad-free extended audio. I haven't clicked play, because I just literally loaded it for the first time now, but I'm assuming that I won't have Plus Plus content, because I'm not a Plus Plus subscriber. Is that true? No, I do have Plus Plus content. I'm thinking you are an admin, and so it doesn't matter. Gotcha. Yep. So does this check then only show up for people who can check it? The entire UI for building custom feeds only shows up if you are an active Plus Plus member or an admin, which is literally the three of us. Okay. That makes more sense, then. Like, you can't even build custom feeds. Now, I did consider custom feeds for all, you know, let the people have the custom feeds, but Plus Plus people obviously would be the only ones who get the checkbox. That's something that I'd be open to if lots of people want it, but for now I was like, well, let's let our Plus Plus people be special for a while. Is there a cost center with these custom feeds? Like, is there an additive to the cost, if we were having to deal with costs? Marginal. Okay. Every custom feed has to be updated every time an episode is updated. And so if we had a hundred thousand of them, there'd be some processing, and maybe we'd hit R2 with too many PUT actions. You know, it's free egress, but it's not free for all operations. There are Class A operations and Class B operations, and the more you edit those files and change them, I think eventually those operations add up to costing you money, but

    it's marginal, on the margins. If it got to be a huge feature where... I mean, if we had a hundred thousand people doing custom feeds, we'd find a way of paying for that, you know; it's a different problem. But yeah, it's a marginal cost; not worth considering. Gotcha. Okay. So the copy can be updated pretty easily. There's probably a fix going on already for that, because it's so simple to fix and ship. It'll be out there.
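The "marginal" claim above checks out on paper. A rough sketch of the R2 operations math: the $4.50-per-million Class A rate (PUT-style calls) is an assumption taken from Cloudflare's published pricing at the time of writing, so verify current numbers before relying on it.

```python
# Every episode update rewrites every custom feed, and R2 bills per
# operation even though egress is free. Class A rate is an assumption
# from Cloudflare's published R2 pricing; check current numbers.
CLASS_A_COST_PER_MILLION = 4.50

def monthly_feed_write_cost(feeds: int, updates_per_month: int) -> float:
    """Dollars per month of Class A ops if every update rewrites every feed."""
    writes = feeds * updates_per_month
    return writes * CLASS_A_COST_PER_MILLION / 1_000_000

# Today's scale: 144 feeds, ~40 episode updates a month -> a few cents.
print(round(monthly_feed_write_cost(144, 40), 4))  # → 0.0259
# The hypothetical 100,000-feed scale: a real (but manageable) bill.
print(monthly_feed_write_cost(100_000, 40))        # → 18.0
```

Even at the hundred-thousand-feed scale Jared mentions, the write side stays in "different problem, still affordable" territory, which matches his take.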

  19. SPEAKER_01

    Good. Well, 'cause I mean, I was like, well, how do I get this URL to my iPhone? I guess I can, like, copy it and AirDrop it to my iPhone. Maybe it'll open in the browser, and I'll say, well, let me just go on the web and, you know, get the URL, essentially. Yes. Our user experience assumes that our users are nerds, and so far, before I broke that copy button yesterday, there have been zero people who are like, now, how do I get this into my podcast app? Like, no one's asked us that, because all of our Plus Plus members completely understand how to copy and get it into their whatever, you know. They are smarter than me, most of them. Now, if it was for a broader audience, if this was a baking show and we were going to provide custom feeds for bakers or aspiring bakers, then I probably would have to have more of a hand-holding bit. And Supercast actually does a really good job of hand-holding you into your private feed, because it's not a straightforward mental process for most people, just for nerds. Yeah. Yeah, I agree. It kind of requires some workaround. There's really nothing you could do about that, right? I mean, you're adding literally a custom feed via URL that no index knows about. So it's obvious you have to do some sort of workaround to get there, to get your feed into your... Yeah. I mean, a better UX would be: after the custom feed is created, we send you an email; that email contains a bunch of buttons, each button like add to Overcast, add to Pocket Casts, add to Apple Podcasts, and so on. You know, I like that idea a lot. That's how Supercast works. Yeah. I like that idea a lot. Email them every time it changes. Upon creation it's immutable... well, theoretically mutable until they edit it again, and then it's mutated, you know. So it's set in stone until it's mutated. We could certainly add a button that says, email this to me, you know, next to the get URL; maybe like, email me the URL. It's a good idea.
    And that's, like, a fast way to get it into your phone without having to do phone copy-paste, or AirDrop like Gerhard did. 'Cause you don't know about the email happening. So that's a good feature, even for nerds, 'cause it's just easier that way. Well, that would have solved the problem of me having to get the data onto my iPhone. Totally. Which my email is. Exactly. I think that we should add that as a feature. That's a good idea. Hey, Jared here in post: that email-it-to-me feature just shipped today, and that copy-paste bug is fixed. Kaizen. Custom feeds are here, y'all, if you're a Plus Plus subscriber. By the way, changelog.com slash plus plus; it's better. If you are not a Plus Plus subscriber and you desperately want this feature, let us know, 'cause, you know, squeaky wheels and oil. Must be in Zulip is the other catch, right? Anyways, we're not even on Zulip yet, so let's not get ahead of ourselves. No, but what's the URL? Because I would like to join. Uh, changelog.zulipchat.com. Okay. But can you just get on from there? I don't know. It's new to us. Zulipchat.com, I'm doing it now. Let's see. Log in. Okay. Log in with Google, and go. There you go. Yes. Continue. Oh, okay. Sign up. You need an invitation to join this organization. All right. Go to our Slack, go to main, scroll up a little bit. You'll see there's an invite link to get into Zulip. You have to go to Slack, the Trojan horse. That's how you do it. That's right. You install one through the other. Listeners, you could do this too. You can follow the same instructions. It is in main. I think it's Friday, September 6th. Okay. Jared posted it as a reply after that conversation. Now we're trying out Zulip in earnest, and there's a link that says "join us" that will appear. And it's a long link that I could read on the air, but no one would ever hand-type that in. No, but I can put it in the show notes, though. So it might be there. So there you go. Yeah.
    We've shared our thoughts already elsewhere on Friends with this, but you know, I'd be curious... we'll be so many Kaizens away. Well, at least one more Kaizen away, multiple months, before we get Gerhard's take. By the next Kaizen, we may be, like, transitioned over to Zulip. We might be self-hosting it, but I don't think we should do

    that. No way. There's a Kaizen channel. This makes me so happy. And it's for all ideas about making things better and stuff. I even put one in there. You can read it. Oh, wow. Okay. I'm definitely going to check this out. This is nice. This is a very nice surprise. It was worth joining just for this. Oh, wow. So cool. This is nice. Isn't that cool? I thought a Kaizen channel would be on point.

    So cool. So I was kind of thinking, like, well, how do we replicate our dev channel over here? And it's like, well, dev is just one thing. Let's have a Kaizen channel, and then different topics can be based on... Thumbs up. Yeah. Big thumbs up. So amazing. All right. Awesome. Custom feeds, Zulip chat, Kaizen. What's next on the docket? Well, I'm going to talk about one very quick improvement... actually two, which I've noticed. The news. Yes. I love The Latest. Oh, you like that? That graphic is so cool. I really like the small tweaks. Also the separators, the dividers between the various news items. I really like them; they just stand out more. I really, really like it. And it feels more... The play button is amazing, by the way. I love it. I can definitely see it. I mean, the play buttons stand out. Yeah, it feels so polished. Thank you. Really does. But The Latest is so amazing, and the news archive, it's there and it works. Yes, it is amazing. So I appreciate your enthusiasm to tell everybody what The Latest is. I literally put an arrow and the words "the latest" on our homepage that points to the issue, because it could be discombobulating. You look at it on a desktop (on mobile, at least, it goes vertical), and on the left is kind of the information about the news and the signups and stuff, and on the right is the latest issue. But you may not know what you're looking at when you land on the page; like, what's the thing on the right-hand side? And so I just put this little arrow, hand-crafted with SVG, by the way, and the words "the latest", like someone's just scratched them on the page, pointing to that issue. So it's just kind of giving you a little bit of context, and Gerhard loves it. So I appreciate it. It's another dimension. It's playful. You know, like, there is some fun to be had here. It's not just all serious. It's not like another news channel, but it's really, really nice.
    Like, the whole thing, it feels so much more polished compared to last time. I can definitely see the tiny, tiny improvements. Yeah. Very cool. So much Kaizen. Indeed. Polishing. Cool. Well, the next big item on my list is to talk about 2x faster time to deploy. This is something we just spent a bit of time on. I was surprised, by the way, that the latest deploy was slower than 2x, but we can get there. Okay. The first thing which I would like to ask is: how do you feel about our application deploys in general? Like, does it feel slow? Does it feel fast? Does it feel okay? Do you feel surprised by something? How do application deploys, when you push a change to our repo, feel to you? Historically, or after this change? Historically. Historically, I would say too slow. Too slow. Okay. Yeah. Historically too slow. Okay. Okay. So what would make them not too slow? Like, is there, um, maybe, like, a target duration? That's so leading, though. I didn't mean it like that; I literally meant, like, how many minutes or seconds, like we talked about, would make it feel that it's good enough? There's this threshold... I'm not sure exactly. It's probably fuzzy, but it's the point where you're waiting so long that you forget that you're waiting, and you go do something else. And I think that's measured in single-digit minutes, but not necessarily seconds. Like, I can wait 60 seconds. Well, that's 60 seconds. I can wait one minute, and maybe I'm just hanging out in chat, waiting for that thing to show me that it's live. Yeah. But as soon as it's longer than that, I'm thinking, well, I should come back in five. Then I forget what I was doing, I don't come back, and I've lost flow, basically. So I would say around a minute, you know; 30 seconds would be spectacular. It doesn't have to be instant, but at two, three, four, five minutes, it's getting to be where you're like, yeah, it's kind of like friction to deploy, because you deploy.
    Then you're like, now I've got to wait five or ten minutes. Okay. That's my very fuzzy answer. Okay. That's a good one. So, what used to happen before this change: we used to run a Dagger Engine on Fly, so that it would cache previous operations, so that subsequent runs would be much quicker, especially when nothing changes, or very little changes. The problem with that approach was that from GitHub Actions you had to open a WireGuard tunnel into Fly, so that you'd have that connectivity to the engine. And what would happen quite often is that the tunnel, for whatever reason, would maybe be established, but you couldn't connect to the instance correctly. And you would only find that out a minute or two into the run. And then what used to happen: you would fall back to GitHub, which is much slower, because there's no caching, there's no previous state, and the runners themselves, because they're free, are slower. Two CPUs and seven gigs, which means that when you have to recompile the application from scratch, it can easily take seven, eight, ten minutes. And that's what would lead to those really slow deploys. So what we did between the Kaizens, since the last Kaizen... let me see which pull request that was. It was pull request 522. So you can go and check it out to see what that looks like. So, when everything would work perfectly, when the operations would be cached, you could get a new deploy within four minutes; between four and five minutes, thereabouts. And with this change, what I was aiming for is two minutes or less. And when I ran the initial tests and so on and so forth, we could see that while the first deploy would be slightly slower, because, you know, there was nothing cached, subsequent deploys would take about two minutes. Two minutes and 15 seconds, the one which I have right here, which is a screenshot on that pull request 522. So how did we accomplish this?
    We're using Namespace.so; they provide faster GitHub Actions runners, basically faster builds. And we run the engine there. And when a run starts, we basically restore everything from cache, the Namespace cache, which is much, much faster. And we can see up there, basically per run, how much CPU is being used; we can see how much memory. Again, these are all screenshots on that pull request. And while in the first run you obviously use quite a bit of CPU, because you have to compile all the Elixir into bytecode and all of that, subsequent runs are much, much quicker. And the other thing which I did: I split the... let's see, is it here? It's not actually here. We need to go to Honeycomb to see that. So I'm going to Honeycomb just to look at that. I've split the build time, basically the build, test, and publish, from the deploy time, because something really interesting is happening there. So let's take, for example, before this change, let's take Dagger on Fly, one of the blue ones, and have a look at the trace. So we have this previous run, which actually took four minutes and 21 seconds. And all of it is, like, all together: it took some time to start the engine, to start the machine, whatever. All in all, four minutes and 20 seconds. Your run, for example, this one, was fairly fast; it was two minutes and a half, if we look at the trace. We can see that, with Dagger on Namespace, the build, test, and publish was 54 seconds. So in 54 seconds we went from just getting the code to getting the final artifact, which is a container image that we ship into production. In this case, we basically publish it to ghcr.io. And then the deploy starts. And the deploy took one minute and 16 seconds. So we can see that, you know, with this split it is very clear where the time is spent.
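The point of splitting the trace is that the percentages become obvious. A toy illustration using the numbers quoted from Honeycomb (54 seconds for build/test/publish, 76 seconds for deploy); the stage names are illustrative, not the real span names.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    seconds: float

def summarize(stages: list) -> dict:
    """Percentage of total pipeline time spent in each stage."""
    total = sum(s.seconds for s in stages)
    return {s.name: round(100 * s.seconds / total) for s in stages}

# Numbers quoted from the Honeycomb trace: 54 s build/test/publish, 76 s deploy.
pipeline = [Stage("build-test-publish", 54), Stage("deploy", 76)]
print(summarize(pipeline))  # → {'build-test-publish': 42, 'deploy': 58}
```

Seen this way, the deploy side is the bigger slice, which is exactly the conclusion Gerhard draws next about where to optimize.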
    And while the build time and the publish time are fairly fast, I mean, less than a minute in this case, the deploy takes a while, because we do blue-green: new machines are being promoted, the application has to start, it has to do the health checks. So there are quite a few things which happen behind the scenes that, if you look at it all as one unit, are difficult to understand. So this was the ideal case. This is what I thought would happen. Of course, the last deploys... I'm just going to filter these to Dagger on Namespace. By the way, we are in Honeycomb. We send all the traces, all the build traces, from GitHub Actions to Honeycomb, and you can see how we do that integration in our repo. You can see that we had this one, 2.77 minutes, right, which is roughly 2:46. But the next one was surprising; it took nearly five minutes. And if I look at this trace, this was, again, nothing changed, but stuff had to be recompiled. And in this case, the build, test, and publish took nearly three minutes. Which tells me there's some variability in the various runs when it builds. I don't know why this happens, but I would like to follow up on that. As a TLDR, this change meant that we have fewer moving parts. And when Namespace works... and this is something, again, that we need to understand: why did this run take longer? Within two minutes, we should be out. Like, a change should be out in production; half the time is spent in build, and half the time is spent on deploys. So when it comes to optimizing something, now we get to choose which side we optimize. And I think build, test, and publish is fairly fast. The slower part is the actual deployment. So how can we maybe halve that? How can we get those changes, once they're finished and everything is bundled, out quicker? I love it. I think... do you have ideas on that? Well, I think the application boot time could be improved, because it takes a while for the app to boot.
    When I say it takes a while: it may take 20-30 seconds for it to be healthy, for all the connections to be established. Now, I'm not sure exactly which parts of those would be the easiest ones to optimize. But I think the application going from the deploy starting to the deploy finishing taking a minute and a half is a bit long. So I'll need to dig deeper. Is it when it comes to connecting to the database? Is it just the application itself becoming healthy? Which part needs to be optimized? But again, we're talking... this is like a minute and a half. We're optimizing a minute and a half, just to put this into perspective. And that's why I started with the question: how fast is fast enough? Yeah, I mean, I think if you're at 90 seconds, you're probably right about there. I would still go in and spend an hour thinking, like, is there low-hanging fruit that we haven't looked at yet, where we could squeeze ten more seconds off? And then I would stop squeezing the radish after that. I see. That'd be my take on it, Adam. Well, the flow, it seems, is: every time new code is pushed to our primary branch on the repository, a new deploy is queued up. And this process happens for each new commit to the primary branch. A new application is spun up; it's promoted. So if I deploy, slash, push new code, and then a minute later Jared does the same thing: my push does this process, my application is promoted; Jared's commit does the same thing, his application is then promoted. And that's via networking. And then these old machines are just, you know, thrown off; the new machines are promoted, and the old ones fall by the wayside. Correct. Which totally makes sense. I think you have things happening that we want to happen. I agree with you on the low-hanging fruit, but in the app boot process we've got even things like 1Password secrets being injected from their CLI. I'd imagine that API call is not strenuous, but it's probably seconds, right?
    So there's probably, in each thing we're booting up as part of the app boot process for every commit, at least one to several seconds per thing we're instantiating upon boot. Well, that's just me hypothesizing how things work. No, that's a good one. That's exactly it; we're trying to hash it out so that we share the understanding that each of us holds, so that we can talk about it. Because we talked about this in the past, and I really liked Jared's question. He was asking... we're talking about, like, kaizening, and, you know, we're talking about all this change, but are we actually improving? And that's why, when I tried to think about this, I was thinking, okay, what would the improvement look like? And can we measure it, and can we check: have we delivered on this? And until the last deploy that went out, I was fairly happy with the duration that these deploys were taking. But based on the one which I have right in front of me, the build going from one minute and a bit to almost three minutes, I think that variability is something that I would like to understand first, before optimizing the boot time. Is it the CPUs, then, that's impacting it, you think? Like, the CPUs and the horsepower behind the build and test? Well, let's open up Namespace. Let's go to instances. We can see the last build, which you can see here; like, all the builds. This is inside Dagger, is that right? This is Namespace. All this is Namespace, by the way. So we're using Namespace for the runners. And I would like... This is a third-party service that's... It is a third-party service, yes. That you just found, or someone told you about, or...? Exactly, yes. I am paying attention to various build services, you know: Depot.dev. I love it. Namespace.so. Namespace.so, yes. Our trial is almost over. Exactly, yes. Now, how much will it cost us, by the way? Every minute... Three days left on your trial. Three days left on our trial, yes.
    Oh, I'm getting nervous here. So, hang on. Per minute, we're paying $0.0015, which means that for 40 minutes... like, okay, for an hour, we're paying less than ten cents for an hour of build time. So, you know, pay as you go; it's really not expensive. So it's okay if I have to put my card in, because we're talking cents per month for our builds. That makes sense. What does a single build cost us, then? So, when it's five minutes... let's see, I'll do the math now really quick. Hang on. Thank you. Hang on. It's less than a cent. A build which takes five minutes is less than a cent. Is that right? Yeah, less than a cent. Like, 75... What is less than a cent? Zero cents. No, there was, like, another unit in the past; I forget what it's called. Whatever. Satoshi? No, that's a different person. 75 percent of a cent. Okay, so it's like... So, reasonable. It's reasonable. Yeah. Very, very reasonable, I would say. And if we get done faster, it's even less. Exactly. So... What exactly does Namespace do, though? I mean, are they just... is it just a machine that has proprietary code on it that we send something to, to do a build process? So, Namespace basically runs custom GitHub Actions runners much quicker, because they have better hardware and better networking than GitHub Actions itself. So you can literally use Namespace to replace GitHub Actions runners. So they just use the Actions API, but you're running on their infra. Correct. Exactly. Smart. Or you can use, like, faster Docker builds, you know. Right. They also have preview environments, which I haven't tried, and code sandboxes. That's something new. That's what I'm thinking, because I have a shout-out here. And hang on, let me just get the names straight. To be clear, they are not a sponsor, but we're saying they should be. I think they should be. I think Hugo... I just know his first name, and I'm trying to find... Yeah, because our credit card is expiring. Right. We need those six cents, don't we?
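Gerhard's mental math holds up. A quick check, using the $0.0015-per-build-minute rate quoted on air (the rate at the time of this recording; check current pricing):

```python
# Back-of-envelope check of the Namespace pricing quoted above:
# $0.0015 per build minute (rate as quoted on air).
RATE_PER_MINUTE = 0.0015

hour_cost = 60 * RATE_PER_MINUTE   # one hour of build time
build_cost = 5 * RATE_PER_MINUTE   # a typical five-minute build

print(f"one hour: ${hour_cost:.2f}")                       # → one hour: $0.09
print(f"five-minute build: {build_cost * 100:.2f} cents")  # → five-minute build: 0.75 cents
```

So "75 percent of a cent" per five-minute build is right, and an hour of build time is indeed under ten cents.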
    We need those six cents. That's on Gerhard's credit card for now. Exactly. You can use mine. It's okay. Hugo Santos. No relation. No relation. Yeah, no relation. No relation, but I think if there is someone that you should talk to at Namespace, it would be him. As I was setting all this stuff up, he was very responsive, even on the weekend, you know, to emails. And I think he's one of the founders, by the way. So I thought that was a very nice touch. And he really helped go through all the various questions which I had, and the various "does this look right?" questions. He even looked at the pull request to see how we implemented it. And all in all, the promise is there; we can see that it does work well. When it works, it's two minutes. We get those two minutes, but sometimes it takes more. And then the question is, well, why does it take more? So that's something which I'm going to follow up on. Cool. Cool. Well, I am excited for the follow-up and for this progress. Indeed. Cool. API testing, docs, and more. And they just released a new version of their Python SDK generation. It's optimized for anyone building an AI API. Every Python SDK comes with Pydantic models for request and response objects, an HTTPX client for async and synchronous method calls, and support for server-sent events as well. Speakeasy is everything you need to give your Python users an amazing experience integrating with your API. Learn more at speakeasy.com/python. Again, speakeasy.com/python. And I'm also here with Todd Kaufman, CEO of Test Double, testdouble.com. You may know Test Double from our good friend Justin Searls. So, Todd, on your home page I see an awesome quote from Eileen Uchitelle. She says, quote, hot take: just have Test

    Double build all your stuff, end quote. We did not pay Eileen for that quote, to be clear, but we do very much appreciate her sharing it.

  25. SPEAKER_00

    Yeah, we had the great fortune to work with Eileen and Aaron Patterson on the upgrade of GitHub's Ruby on Rails framework. And that's a relatively complex problem. It's a very large system. There were a lot of engineers actively working on it at the same time that we were performing that upgrade. So being able to collaborate with them and achieve the outcome of getting them upgraded to the latest and greatest Ruby on Rails, which has all of the security patches and everything that you would expect of the more modern versions of the framework, while still not holding their business back from delivering features, we felt was a pretty significant accomplishment. And it's great to work with someone like Eileen and Aaron, because we obviously learned a lot. We were able to collaborate effectively with them, but to hear that they were delighted by the outcome as well is very humbling, for sure.

  26. SPEAKER_01

Take me one layer deeper on this engagement. How many folks did you have on this engagement? What was the objective? What did you do, et cetera?

  27. SPEAKER_00

Yeah, I think we had between two and four people at any phase of the engagement. So we tend to run with relatively small teams. We do believe smaller teams tend to be more efficient and more productive, so wherever possible, we try to get by with as few people as we can. With this project, we were working directly with members from GitHub as well. So there were full-time staff at GitHub who were collaborating with us day in and day out on the project. This was a fairly clear set of expectations. We wanted to get to Rails, I believe, 5.2 at the time, and Ruby, like, 2.5. Don't hold me to those numbers, but we had clear expectations at the outset. So from there, it was just a matter of figuring out the process that we were going to pursue to get these upgrades done without having a sizable impact on their team. A lot of the consultants on the project had some experience doing Rails upgrades, maybe not at that scale at that point. But it was really exciting, because we were able to develop a process that we think is very consistent in allowing Rails upgrades to be done without introducing a lot of risk for the client. So there's not a fear that, hey, we've missed something, or this thing's going to fall over under scale. We do it very incrementally, so that the team can, like I said, keep working on feature delivery without being impacted, but also so that we are very certain that we've covered all the bases and really got the system to a state where it's functionally equivalent to the last version, just on a newer version of Rails and Ruby. Very

  28. SPEAKER_01

cool, Todd. I love it. Find out more about Test Double's software investment problem solvers at testdouble.com. That's testdouble.com, T-E-S-T-D-O-U-B-L-E dot com. So I would like to switch gears to one of Adam's questions. He was asking if Neon is working for us as expected, and the state of Neon. So, is Neon working for us as expected? Based on everything I've seen, it is. I was looking at, for example, the metrics. I was looking at how it behaves in the Neon console. This is us for the last 14 days. So what we see here in the Neon console, we see our main database. We can see that we have been using 0.04% of a CPU. So really no CPU. And in terms of memory, we have eight gigabytes allocated and we're using 1.3 gigabytes. 1.3 gigabytes used out of eight allocated. So we are overallocating both CPU and memory. Fairly little load, I would say, and things are just humming along. So no issues whatsoever. Do we need to push this harder somehow? Do we need to get vector search in our database or something? Weren't you going to set us up an AI agent, Gerhard? Yes, I was. I didn't get to that, but that would not use this database, by the way. That would be something different now. pgvector, man. pgvector. Get it in there. Right. I would, but not in this production database. So this is special. I mean, this is exactly what we want to see. If anything, we can, because we have the minimum compute setting set to two CPUs and eight gigs of memory, and I know that Neon does an excellent job of autoscaling us when we need it, we didn't need to get autoscaled, because we are below the minimum threshold. So we could maybe even lower the threshold and it would still be fine. So we're not using this to its fullest extent, is my point. No. So we need some arbitrary workloads in order to push it. Well, to see where it breaks. We wouldn't need it to break. I think, if anything, one thing that I would like us to do more is use Neon for development databases.
And I have something there I haven't finished, but I would like to move on to that as well, if everyone's fine. Adam, further thoughts or questions around Neon? This was your baby. I think the question I have is, you know, while the thresholds are low and we're below our allocation, what should we expect? And this is good news. This is good news that we're not. Yeah, I'm just saying it's hard for us to use it and see if it's good or bad, because we're not heavy database users. And I was just saying we need some more arbitrary workloads to actually flex this thing, but I was mostly just being facetious. Gotcha. I'm in the dashboard, too, and I'm looking at a different section of that same monitoring section, which is, like, rows. I believe rows being added, which is kind of cool, because over time you can kind of see your database updates, essentially. Deleted, updated, inserted. So there's definitely, obviously, activity. We're aware of that. I think the other things that we should pay attention to, in terms of is it working for us as expected, I would say some of that is potentially on you, Jared, and you too, Gerhard: we've got the idea of branching. Gerhard, I know that you're familiar with it, because you demonstrated some of this last time we talked, but being able to integrate some of those futuristic, let's just say, features into a database platform. This is managed. It's serverless. We don't have to manage it. We get a great dashboard. We get the opportunity for branches. Have you been using branches, Jared? Do you need to use branches? Does that workflow not matter to you? I think the DX and the performance are the two things I care about. So I have a custom branch, which I use not to develop against, but to sync from. I guess it's not mine. It's that dev-2024 branch. That's the one I use. Maybe Gerhard created that, but that's the one that I do use.
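The rows panel Adam is describing maps onto Postgres's own cumulative statistics, so the same inserted/updated/deleted counters can be pulled straight from the database. A minimal sketch (the connection string is a placeholder; `pg_stat_database` and its `tup_*` columns are standard Postgres views, not Neon-specific):

```shell
#!/usr/bin/env sh
# The same counters the Neon console charts: cumulative row operations
# for the current database since the last statistics reset.
SQL="SELECT tup_inserted, tup_updated, tup_deleted
     FROM pg_stat_database
     WHERE datname = current_database();"

# Run it against your database (placeholder connection string shown):
# psql "postgres://user:pass@host/dbname" -c "$SQL"
echo "$SQL"
```

The counters only ever go up, so plotting their deltas over time reproduces the activity graph from the dashboard.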
I pull from that, so I'm not pulling from our main branch, because there's just less load on our main branch that way. So I'm using that, but I synchronize it locally, manually, and then develop against my own Postgres, because I have a local Postgres. The one thing about it is, because it's a Neon branch, I will have to go in here and synchronize it back up to main and then pull from it. And I'm sure that's automatable, but I just haven't done that. I've been waiting for Gerhard's all-in-one solution. Yes, that's coming. That's my next topic to share. What exactly is that? Well, that would mean tooling that's local, to make all of this easy. Jared wouldn't need to go to the UI to click any buttons to do anything. He would just run a command locally and the thing would happen locally. He wouldn't need to even open this UI. Shouldn't that be a Neon-native thing, though? It is. It does have a CLI, but the problem is you need to, first of all, install the CLI, configure the CLI, add quite a few flags, connect the CLI to your local Postgres, all that glue. That's the stuff which I've been working on, and I can talk about that a bit more. And so the idea would be to just automate some of that, not have to go through all the steps. Still do the CLI installation like any normal user would. Correct. But maybe a Neon setup script that populates a file with credentials or something. Some command that you run locally that knows what the sequence of steps is and what to do. For example, maybe you don't have the CLI installed. Well, install the CLI. You need to have some secrets. Well, here's the 1Password CLI, and by the way, the secrets are here in this vault. So stuff like that, all of that. Speaking of 1Password, did you notice their new SDKs? Did you pay attention to the new SDKs they deployed? TypeScript, Go, a couple of others for native integrations. Obviously we're Elixir, so it doesn't really matter to us.
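The manual sync-then-pull loop Jared describes could be scripted roughly like this. This is a sketch, not the promised all-in-one tooling: the branch name, local database name, and the Neon CLI subcommands are assumptions to check against the Neon CLI docs. In the spirit of a dry run, it only collects and prints the plan rather than executing anything:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of refreshing a Neon dev branch and importing it into
# a local Postgres. Nothing executes against Neon; each step is only printed.
COUNT=0
plan() { COUNT=$((COUNT+1)); echo "would run: $*"; }

NEON_BRANCH="dev-2024"     # the branch mentioned in the episode
LOCAL_DB="changelog_dev"   # hypothetical local database name

# 1. Refresh the dev branch from its parent (subcommand name assumed)
plan neonctl branches reset "$NEON_BRANCH"
# 2. Dump the branch; the connection string placeholder would come from the CLI
plan pg_dump "postgres://<neon-branch-url>" --format=custom --file=dev.dump
# 3. Restore into the local Postgres that development runs against
plan pg_restore --clean --no-owner --dbname="$LOCAL_DB" dev.dump
```

Swapping `plan` for direct execution would turn this into the one-command refresh discussed below; the dry-run wrapper exists so the steps can be reviewed first.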
But maybe in some of the Go pipelining that you've probably done, would it make sense to skip op and go straight to Go with the SDK? Because op is their CLI, right? It's the same. It's not an SDK. The SDK lets you integrate natively into the language. So it's possible to use something else, but at the end of the day, the integration needs to work. And the implementation, whether you use the SDK or whether you use the CLI, is just an implementation detail. It just doesn't matter, yeah. What we care about is: is our implementation reliable? Do we have any issues with it? So far, no. Yeah. Are we using service accounts? And that's something that we've been waiting for, because without service accounts we'd need to set up a Connect server, which I didn't want to do. So that was a big deal for us. Whether you use the CLI or the SDK... we could, but it wouldn't make that much of a difference. Now, if the application itself was doing certain things while it runs, maybe that's interesting. Maybe we could change some of the boot phase so that we wouldn't inject the secrets from outside the application. The application itself could get them directly. But I really want to get Elixir releases going. And once we have those, things change a little bit. But it's all just maybe shuffling some code from here to here. Ultimately, it will still behave the same; you would just maybe bring it into the language. So I haven't seen their latest SDKs, but I would like to check them out. That's a good one for me to look into. So, the tooling that Jared was mentioning to make things simpler for him: I've been thinking about it from a couple of perspectives. And I realized that to do this right, it will be slightly harder. And the reason why it's slightly harder is because I would like to challenge a status quo. The status quo is: you need Dagger for all of this. Maybe you don't. So I'm trying a different approach. And the mindset that I have for this is Kent Beck, September 2012.
For each desired change, make the change easy (warning: this may be hard), then make the easy change. So what I've done for this Kaizen: I made that change easy, which was hard, so that I could make the easy change. How hard was it? So that's what happened. Well, let's have a look at it. So we're looking now at pull request 521. And 521 introduces some new tooling, but I promise it's just a CLI. And what's special about it is that everything runs locally. There are no containers. There's no Docker. There's no Dagger. Everything is local. And I can see Jared's eyebrows go up a bit, because that's exactly what he wanted all this time. So what pull request 521 introduces is Just, which is a command runner. It's written in Rust, but it's just a CLI. And if you were... for example, Jared or even Adam could try this. If you were to run just in our repository at the top level, you would see what is possible; Just calls them recipes. And the one which I think the audience will appreciate is just contribute. So remember how we had this manual step: install Postgres, you know, get Erlang, get Elixir, get this, get that. I mean, that's still valid, right? You can still use that manual approach. Or, if you run just contribute, it will do all those things for you, running local commands. It still uses Homebrew. It still uses asdf. But everything that runs, it runs locally. And the reason why this is cool is because, I mean, your local machine, whatever you have running, it remains king. There are no containers. Again, I keep mentioning this because that adds an extra layer. And what that means: stuff like, for example, importing a database into a local PostgreSQL is simpler, because that's what you already have running. Resolving the Neon CLI, again, it's just a brew install. It's there, and you wire things together. You don't have to deal with networking between containers.
You don't have to pass context inside of containers, which can be tricky, especially when it comes to sockets, especially when it comes to special files. So I'm wondering, how will this work out in practice? And the thing which I didn't have time to do... I didn't have time to implement just db-prod-import, which would be the only command that you'd run to connect to Neon, pull down whatever needs to be pulled down, maybe install the CLI if you don't have it, and then import the latest copy into your local Postgres. Same thing for just db-fork, which would be an equivalent of what we had before. The difference is that was all using Dagger and containers. Have you used it, Jared, apart from when we've done it? There you go. Adam, have you ever run Dagger? Never. In the three years that we've had it? Never. Not one time. There you go. How many times did you have to install things locally for you to be able to develop changelog in the last three years? Well, that's where my personal angst lies. It lives right there in that question. How many times? What is the pain level? It's high for me. So Adam might be more excited about this than I am. Pull request 521. I mean, even you, Jared, if you want to try it. It has a dry run option, by the way. It won't apply anything, but it will show you all the commands that would run if you were to run them yourself, for example. And there may be quite a lot of stuff, right, when you look at it that way. But it's a good way to understand: if you were to do this locally, and if you were to configure all these things, what would it do, without actually doing it? So I tried it on a brand new Mac, and I think that's the recording that I have on that pull request. I might need to get a brand new Mac so I can try this. Look at that. That's very, very... Been waiting for a good reason to upgrade, you know? There you go.
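For readers who haven't seen Just: a justfile is a plain list of named recipes, and the `#` comment above each recipe is what shows up when you list them. A minimal sketch of what recipes like the ones discussed here could look like (the recipe names and package list are illustrative, not the actual changelog.com justfile):

```just
# Set up everything needed to contribute: tools, Postgres, dependencies
contribute: install db-up deps

# Install local tooling via Homebrew; everything runs on your machine
install:
    brew install postgresql@16
    brew install asdf && asdf install

# Start the local Postgres service
db-up:
    brew services start postgresql@16

# Fetch Elixir dependencies
deps:
    mix deps.get
```

Recipes are invoked as `just contribute`; `just -n contribute` prints the commands without running them (flags go before the recipe name, since recipes can take arguments), and `just --list` shows every recipe alongside its comment.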
And honestly, within five minutes, depending on your internet connection, everything should be set up. Everything is local: the Postgres, everything. What we don't yet have, and I think this is what we're working towards, is how do we, first of all, cleanse the data so that, you know, contributors can load a type of data locally? But I think that's a follow-up. First of all, we want Jared to be able to do this with a single command: refresh his local data. And now that I have the bulk of the work done, this step is really simple. How simple? Maybe half an hour at most. That's what I'm thinking. So not much. Almost there. So it should be done before the day's over. Yeah, it should be done. Exactly. It should be done. The one thing I'm noticing is that you're switching back to brew install Postgres. I'm just curious about that change. So I mentioned it in one of the comments when I committed. Basically, when I was installing it via asdf, the problem was with ICU4C; I just couldn't compile Postgres from asdf correctly. And since then, in Homebrew, we can now install Postgres at 16 (postgresql@16). So you can specify which major version, which was not possible, I think, two years ago when I did this initially. So there is that. Now, let's see where this goes. I'm excited about this. If anyone tries it, let us know how it goes for you. If you want to contribute to changelog, like, how far does it get? And by the way, I tested this on Linux as well. The easiest way... there's something hidden there in the justfile. It's called ActionsRunner. What it does is exactly what you think it does: it runs a GitHub Actions runner locally. For this, you need Docker, by the way. And it loads all the context of the repository inside of that runner. So that's the beginning of what it would take to reuse this in the context of GitHub Actions. And what I'm wondering is, will it be faster than if we use Dagger? That's me challenging the status quo.
The answer is either: yes, it is, and maybe we should do that instead, it will shave off more time; or no, it's not, and then I get to finally upgrade Dagger, because we run a really old version. So you still work at Dagger, right? I do, yes. Very much so. I just want to know how much you want to challenge the status quo. No, no, no. That hasn't changed. I'm just kidding. Cool. So for our listener, if you want to try this: github.com/thechangelog/changelog.com. Clone the repo, brew install just, just contribute. That's it. Try those three steps. If you're on macOS, that is. If you're on Linux, it's not brew install just; it's apt-get install or yum install or... The installation instructions are there. Yeah. And just contribute. And what should we expect to see when we type in just contribute? Is it instructions, or... No, no. We do actually run the commands. It's going to do it for you, man. If you do just dash n... Now what if you have an existing repo like Adam does? Can he do it, and it should pick up where he... Yeah. Give that a shot there, Adam. I'm so scared. What you could do, if you want, is maybe start a new user. It shouldn't mess anything up, to be honest. It just installs. Maybe it does things differently, or does things twice. Yeah. I don't really know. But it should be safe. I like this. I mean, this is... I did run just in our repository. You get contribute, deps, dev, install... These are all the actions, or recipes. Correct. Install, Postgres down, Postgres up, tests. And each of those has a little hashtag next to it, which is a comment, essentially, of what the recipe does. So over time we can expect to see more of these just recipes, if this pans out to be, you know, long-term. These recipes will potentially grow in number, and these will be a reliable way to do things within the repository. And it's all local. That's the big difference, because before...
I mean, even now, right, because we still kept Dagger, we still have everything that we had so far; that would always run in containers, which means it won't change anything locally. And in some cases, that's exactly what you want, especially when you want to reduce the disparity between test and production, or staging and production. But in this case, it's local, right? So you want something to happen locally. And local is not Linux; it means it's a Mac. So then you have that thing to deal with, in which case Brew helps, and asdf helps, and a couple of tools help. But you still have to know what the commands are that you have to run, in what order, what needs to be present, when. And this basically

  29. SPEAKER_00

    captures

  30. SPEAKER_01

all those commands. It's a little bit like Make, which we had, right, and we removed. But this is a modern, I would say, version of that. Much simpler, much more streamlined, and a huge community around it. I was surprised to see how many people use Just. By the way, huge shout-out to Casey, the author of Just. I really like what he did with the tool: like 20,000 stars on GitHub. A lot of releases: 114. Fresh releases. 170 contributors. Yeah, it's a big ecosystem, I have to say. Mm-hmm. One more question on this, without me having to read the docs, thank you, if you can help me on this. Can I do just dash n install, so I can just see what it might... "just", so many times... can I just see what it might do? Exactly. Okay. And dash n, it basically stands for dry run. Right. The reason why you have to do it before the recipe is because some recipes can have arguments, so if you do the dash n at the end, it won't work. It has to be the command just, then the flags, and then the recipe or recipes, because you can run multiple at once. Very cool. But yes. I assume that, because any good hacker that writes a CLI that's worth his weight in gold would always include a dash n, right? A dry run, yeah. Good job. What was his name, the maintainer? Casey... let me see if I can pronounce his surname. He's casey on GitHub, by the way. Rodarmor... the blue planet, apparently. Casey Rodarmor. You can correct us, Casey. Shout-out to Casey. C-A-S-E-Y. github.com/casey. github.com slash... I'm just kidding. I was going to say it one more time. Thanks, Casey. Rodarmor. Rodarmor. Rodarmor. Rodarmor. Rodarmor, yes, I like that. That's how you're pronouncing it. Casey Rodarmor. Correct us if that's correct, or correct us if it's not correct. Or don't correct us. But go to github.com/casey, C-A-S-E-Y. Just do it. Just do it. Just do it. That's a good one. I like it. That's cool, man. Thank you for doing that. Not a problem.
I enjoyed it. It was fun. Okay, Homelab production... Homelab to production. So next week on Wednesday is TalosCon. And I'm calling it Justin's conference. It's the Garrison Con. The Garrison Con, exactly. I'll finally meet Justin in person. I'm giving a talk. It's called Homelab to Production. I think it's 5 p.m., so one of the last ones. We'll have a lot of fun. I'm bringing my homelab to this conference. So we will have fun. I almost commented on that. It's not quite a homelab. It's more of a mobile lab. It is a mobile lab. But I will have a router with me. So it will be both the actual device and the router. And yeah, we'll have some fun. Now, are you bringing two of them with you, or just one? The device, the homelab, plus the router. So two devices. Well, you want two of everything. Yes. Well, we are going into production. So we're going to take all the workloads from the homelab, and we're going to ship them into production

  31. SPEAKER_00

    during

  32. SPEAKER_01

the talk. And we're going to see how they work. We're going to use Talos, Kubernetes... Dagger is going to be there. So yeah, we'll have some fun. So this is a live demo then, basically? It's live, yes. Well, it's recorded, because I want to make sure that things will work. But I will have the devices there with me. You never know what the Wi-Fi is like, and that's the one thing which I don't want to risk. Yeah, you never can. Even 4G, 5G, even mobile networks are sometimes unreliable. But I'm looking forward to that. And it will be a recorded talk as well. So yeah. Well, that's good, because TalosCon is on-prem, free, and co-located with SRE Day. However, it's also over with. By the time this ships, it'll be two days in the past. And so happy to hear, Gerhard, that there'll be a video, because certainly our listener will want to see what you're up to. And

  33. SPEAKER_01

it's in the past tense. So there you go. And guess what? What? I'm going to be recording myself as well. What are you holding up there? I'm holding a Rode Pro. Do you know the Rode Pros? Like the mini recording microphones? Yeah, you can clip them to your shirt, something like that. Exactly. So I have two of those. Boom. And two cameras I'll take with me. They're 360s. So I'll be recording the whole talk and then editing and publishing it. So that's the plan. Cool. So whatever the conference does, great. But I also want to do my own. Yeah. So that's the plan. Full control. Indeed. Awesome. Well, great conversation. Good progress this session. What do you call it? This Kaizen. This Kaizen, yes. What do we want to accomplish for the next one? Are we on the right trajectory? Like, in terms of the things that we talked about, in terms of what we think is coming next, did we miss anything? It'll be Christmas, or just before Christmas. I think the Just stuff with the database and branching, with Jared being able to pull that down, would be a small but big win. Okay. I think, you know, continued progress, obviously, on the pipe dream. pipely.tech. pipely.tech. I like it. Did you buy the domain? No, but it's available. Not available? It is available, for 10 bucks. pipely.tech. I don't know. I think we've got to get pipe.ly. Otherwise... Yeah. pipe.ly. We're just posers. But I like pipely.tech as well. So we might have to raise some money for this. If we're going to have to buy pipe.ly, we might need 50 grand. The future's coming and we're going there. Kaizen. Kaizen. Kaizen. Bye, friends. What do you think about our pipe dream? Should we turn it into a pipe reality? A Pipely, if you will. Let us know in Zulip. Yes, we are hanging out in Zulip now. It's so cool how we have it set up. Each podcast gets a channel and each episode becomes a topic. This is great, because you no longer have to guess where to share your thoughts about a show.
Even if you listen to an episode way later than everybody else, just find its topic and strike the conversation back up. There's a link in our show notes to join Changelog's Zulip. What are you waiting for, an engraved invitation? Hey, it's still September, which means we're still trading free Changelog sticker packs for thoughtful five-star reviews and blog posts about our pods. Just send proof of your review to stickers@changelog.com along with your mailing address, and we'll ship the goods directly to your mailbox, anywhere in the world. Let's do this. Thanks once again to our partners at Fly.io, to our beat freak in residence, the GOAT, BMC, and to our longtime sponsors at Sentry. Use code CHANGELOG when you sign up for the team plan and save yourself a hundred bucks. That's almost four months free. Next week on The Changelog: news on Monday, Ryan Dahl talking Deno 2 on Wednesday, and a fresh episode of Changelog & Friends on Friday. Have a great weekend, leave us five-star reviews if you want some stickers, and let's talk again real soon.