Changelog & Friends — Episode 58

Kaizen! Slightly more instant

Gerhard discusses Kaizen 12 improvements: moving from S3 to Cloudflare R2 reduced AWS costs from $155 to under $17/month, enabling HTTP/3 and Brotli compression, fixing S3 caching, implementing WebSockets through Fastly, and migrating Changelog Nightly to Fly.io.

Transcript (24 segments)
  1. SPEAKER_00

    Welcome to Changelog & Friends, our weekly talk show about making continuous improvements. Kaizen! Thank you to our partners for helping us bring you world-class podcasts every single week: Fastly.com, Fly.io, and Typesense.org. Okay, let's talk Kaizen 12. We are here to iteratively get better. Welcome, Gerhard, to Kaizen. Hey, it's good to be back. It feels like I haven't gone anywhere. I don't know where the two and a half months went. It was just... a fast car. This one definitely snuck up on me. So, whereas last time around, you were celebrating all my wins, this time around, I don't feel like I did very much. So, not sure what we're gonna talk about, but... I think you're being modest. I think this is a cue for Adam. To compliment me? I celebrate you. Now it's Adam's turn to celebrate you, to celebrate your wins. I'm fishing for compliments. Well, I celebrate Jared often, and in public, on podcasts. I did ask our neural search engine on Changelog News the other day. For Changelog News, I asked it to answer me, in the voice of Adam Stacoviak, to talk about how cool his co-host is. And it said Adam has never done that on a show. Busted. Did you hear about this, Gerhard? We had a listener build a neural search engine, all using our open source transcripts. Pretty cool. Where is it? I want to check it out. Let me grab that URL. While he's digging it up, I want to defend myself here, in the fact that, in the back channel, he also asked the same neural search engine my opinions on Silicon Valley, and there were none, so. Yeah, that's a lie. It's broken. It's true. Which is incorrect. It's alpha. It lives at changelog.duarteocarmo.com. Let me just send you the link, Gerhard, put it in the show notes. Oh yeah. I was going to ask, is there a W or a U? Okay, it's a U. And you can use the search to just, like, search for what people have said, down in the transcript, or click over to the chat and you can ask a question to Adam, for instance.
    And I asked Adam, how cool is your co-host? And he responded, I have not mentioned my co-host's coolness in the given context. Now, as you said, Adam, Duarte has been furiously working on this. It's a side project of his, so it's getting better, but I think it did have some missing data, because, yeah, it also said Adam doesn't talk about Silicon Valley. So maybe you say I'm cool a lot, and it just didn't find it in your transcripts. I also asked the same neural search engine about our good friend, Adam Jacob. I just said, thoughts on Mongo, question mark. And it comes back with this, like, multi-paragraph argument. He says, I have some experience with MongoDB from building my business, Chef, and thinking about open source communities. I believe that the soul of why we create software should not solely be about monetization. It's also important to differentiate, and it goes on and on and on. But, like, it came back with a good response. And I was like, wow, that's so cool. As if Adam Jacob himself. It felt like I was talking to Adam Jacob. Right?

  2. SPEAKER_01

    Let me

  3. SPEAKER_00

    ask it again now that Duarte has fixed a few bugs. By the way, when you go to this URL, if you go there, switch from search to chat; it defaults to search. Yes. I think chat is the one that's a bit more interesting. And then you can ask a question too, and you have a dropdown. You can ask the question of a bunch of people. Yeah, everybody who's ever been in transcripts, basically. Very nice. So let me ask Adam, what is Kaizen? That's a good one. You're gonna be disappointed. Generating response. I'm not sure. Sounds about right. It's like live coding in a way, but not really. Yep. Well, we've definitely mentioned Kaizen. I've definitely mentioned Kaizen. I'm not sure I've elaborated on the meaning of Kaizen. So that could be accurate. Say it more in this sentence. We'll get it into the search engine now that you've said it specifically. Well, I'm a lot more narcissistic than you guys. So I just ask questions about myself. So I asked Gerhard Lazu, how cool is Jared? And it answered, I'm not sure, but Gerhard mentioned that he was impressed with Jared's knowledge and appreciated him creating something. Gerhard also mentioned that Jared did a lot of work and has lots of commits. All true. Based on this, it seems like Gerhard holds a positive opinion of Jared and thinks highly of his abilities. Approved. It's Gerhard-approved. So there we go. There's some reinforcement learning from human feedback. That's a good response. Gerhard approves this message. Pretty cool. He also open-sourced all of the bits. So if you're a Changelog News listener slash reader, you already know this and you have the link. If not, check out the links for that in our show notes so you can go play with it. And probably even more instructive, you can go check out how he built it. He used SuperDuperDB, which I was not previously aware of. And yeah, pretty sweet little side project for Duarte. Should we begin most podcasts now just gushing about each other, just to adjust the engine?
    At least the Kaizens. Yeah. Yeah, at least the Kaizens. Yeah, we have to improve our perception of ourselves and our impression of ourselves. Navel gazing was mentioned in the past. I think it's something different, but still. The sentiment is similar. Well, in good podcast practice, then, let's give them what they came for. What did they come for this time? Some Kaizen or some navel gazing?

  4. SPEAKER_01

    I

  5. SPEAKER_00

    think some Kaizen. I'm pretty sure everyone is here for the Kaizen. The one thing which I would like to do differently this time, or at least try and see what people think, is to talk about, or start talking about, the things that we didn't do. Oh, goodness. We always talk about the improvements. We did this, we did that. How about what we procrastinated on for so long that we didn't have to do it? I think that's part of the Kaizen. That's kind of one of my favorite things to do in life. That's your influence on all of us, Jared. Some problems do solve themselves over time or become obsolete. Well, turns out, we don't even need that, such as a caching solution that spans multiple Erlang nodes or clustered Elixir thingies, because, I don't know, we rolled out statically generated feeds to R2, put them behind Fastly, and just regenerate all the feeds whenever we need to, and object storage is a nice cache when you're caching things that don't change all that often, and that problem kind of solved itself. I mean, I'm happy. I'm grinning ear to ear over here. There's probably a time in the future where I will want that again. I'm starting to think of one, but for now, don't do it long enough, and you may never have to do it. The procrastinator's way. When you add a cache like this, if you added this new cache variant to the multi-node cluster, like the discussion... is it discussion 451, or is it a pull request? Yes, discussions. Yeah, there's still pull requests either way, right? If we did that, what would have been involved to produce, as in new code, new infrastructure, and then what would have been required to maintain it? Well, our good friend, Losh Beekman, definitely showed us the way to write the code, so it wasn't very much code. If you looked at his pull request, which we ultimately closed, much to his dismay. We did not receive that one.
    His solution was using Postgres PubSub in order to notify all the various app servers of a need to flush or refresh or whatever, delete this particular key from their caches, and that's cool, but Phoenix has PubSub itself built right in, so he mentioned you could do it with that, and that's probably how I would build it. So basically just a fair bit of spiking it out and then coding it out to be robust, and then maintenance would be minimal. It'd be minimal, but we would have to get releases involved with our deploy process, because that's how you can actually achieve the clustering, is via releases, which we don't currently use, so it would have required a little bit of infrastructure changes from Gerhard in order for those nodes to actually be able to cluster and talk to each other. Otherwise, I think you PubSub through Phoenix and just nobody else hears it. Is that right, Gerhard? That sounds about right, yeah. It wouldn't be much. Yeah, I think the nodes can still cluster, but again, it's a bit more involved. I remember doing this with RabbitMQ, so you can definitely do it without using releases. I've done it in the past, so I know it is possible. Maybe there are a few things different in Phoenix, but there shouldn't be; at the end of the day, it's still an Erlang cluster. What are Elixir releases? What are they? Yeah, do you understand? Is it some sort of a tarball or something? You're packaging everything in a way that's self-contained, and you have the option of doing hot upgrades, so like in-place updates, live updates, hot code reloads. It just opens up the world to a whole new set of possibilities. Also, you no longer need Erlang to be available. I mean, it's all packaged as part of the app, and when you release it, it's almost like a Go binary, but a bit more involved because you have a bunch of other pieces, but it's all self-contained, so that when you start it, it runs. Everything's there. You don't need extra dependencies.
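    [Editor's note: the PubSub cache-invalidation pattern described above can be sketched like this. The real implementation would use Phoenix.PubSub across clustered Erlang nodes; this Python stand-in (all names here are hypothetical) just shows the shape: every app instance subscribes to an invalidations topic and drops any key it hears about from its local cache.]

    ```python
    from collections import defaultdict

    class Bus:
        """Toy in-process pub/sub broker, standing in for Phoenix.PubSub."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def broadcast(self, topic, message):
            # Deliver the message to every subscribed handler on the topic
            for handler in self.subscribers[topic]:
                handler(message)

    class Node:
        """One app instance with a local cache, listening for invalidations."""
        def __init__(self, bus):
            self.cache = {}
            bus.subscribe("invalidations", self.invalidate)

        def invalidate(self, key):
            self.cache.pop(key, None)  # drop the stale entry if we have it

    bus = Bus()
    a, b = Node(bus), Node(bus)
    a.cache["feed:master"] = "<xml v1>"
    b.cache["feed:master"] = "<xml v1>"

    # One node regenerates the feed and tells everyone to drop the old copy
    bus.broadcast("invalidations", "feed:master")
    print(a.cache, b.cache)  # both caches are now empty
    ```

    [The clustering work discussed above is exactly the part this toy skips: in production, `broadcast` has to cross node boundaries, which is what releases and Erlang distribution would have provided.]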
    Right, versus what we're doing, which is effectively booting up a Docker image and then telling it to start its Phoenix server and go, right? Yeah, so we install Erlang and a bunch of other dependencies, so it's already in the image, and then we just add the app code on top, which gets compiled. So it just boots up, and it runs the code, while with this one, we wouldn't need Erlang separately. It would be all part of the release. Now, I did mention this, I think, last time, and I don't want to go too much into it, but I mostly solved it, like 90% solved it, but then the idea was let's just finish the migration to whichever version of Dagger it was at the time. I think we went from CUE, a CUE-based one, to the Go SDK, so a code-based approach, and the focus was let's get that done. Let's leave releases. Basically, it was de-scoping, so I would get things out the door. And if you look in the code, it's still there, like, commented: hey, we have releases, but Jared really needs this. I think it's actually in the to-do, so let's just get it out there, and then we'll figure out releases later. So I know that they will come, so it's just a matter of time before we add releases. But yeah, I think releases were linked to this clustering. The bigger issue was that now you have a cluster of instances, which is both a good thing and a not so good thing. How do you push out updates? You have to roll through every single instance, and it just complicates things, because you're no longer updating a single instance. And because we have a CDN in front, which is able to basically serve 95-plus percent of all the traffic, especially with all the recent changes, do we really need a cluster? I think it's like the whole thing of simplicity. We can keep things as they were, and they've been running reliably for a bunch of years. We don't really need to go to a cluster. We don't really need to run multiple machines, and then have all that communication between them.
    And what if something, I mean, the network is unreliable? Right now, we have a single instance. Everything is happening there. Again, Erlang is amazing at this, so I'm sure it would have handled everything really, really well. But it's a big change. From one to many, it's a big change. We didn't have to do it, ultimately. We didn't have to do it. That's the beauty of this. One thing we're going to have to change, though, is how we mention Fly. Because I have, I think, Jared, you've resisted. I thought we were going there, so I was preempting the going there. I've always said we put our app and our database close to our users with no ops, which is a true statement on their behalf, but that's not what we're doing now. And we were trying to go there, and now I guess we're not. So we're a single instance, not a geolocated application, although we could be, right? We're just not going to do that? Well, we have a CDN in front. So what that means is that we are close to our users. Most of the responses are cached. Just not via Fly, right? Exactly. Well, I'll have to stop saying that. I'll just tell them that they can do it. That's what I say. I say, put your app servers and database close to your users, right? We do have multiple nodes, though, right? We have two Fly nodes. I don't know what the particular term is. It's just one. The app instance is only one. We only deploy one instance. I thought we had two at one point. Oh, you're right. We did go to two. I forgot about that. You're right, but they're not clustered. Right, they're just not clustered. I forgot about that. We could go to as many as we want. We just haven't. Yeah, they're not talking to each other. That's correct, yes. Because all that matters is, if they don't have shared cache state, then they can just be, I mean, they're basically pass-through app servers, right? They're all talking to Postgres. They're all uploading to R2. And so that's where they have no local anything. So we can, and we have.
    We have two, because Gerhard always wants to have two, even when he doesn't know it. That's true. I remember that, yeah. Even when I forget it, I still have two. Where's the second one? Just keep looking in our bag, it's there. I think they're both in the same data center right now. They are, yes. But we could scale that horizontally at this point. I just figured, because we have Fastly, and because we're not, I mean, we're a mostly static web app, it's just kind of overkill. But we were going to do it because it's fun. If we didn't have Fastly, then we would leverage the built-in, no-ops geolocation inside of Fly. Well, that's the thing, is with Fly and with services like Fly that will do this, they say you don't really need a CDN, because you're running these app servers all around the world. But you've already got one. Yeah, there's an extra thing there, by the way. There's also the proxy, the Fly proxy, which is what intercepts the requests, and that is distributed. So even when I connect to the Fly app, which, our instance is running in Virginia, I'm not connecting directly to the app. I'm connected to a proxy, whichever is closest to me. That happens to be in London. And from there, it's within the Fly network. It talks to the app instance which is closest, because we have two, and they're both in the same region. It goes from Heathrow through the Fly network into Virginia. And then the app eventually serves the request. Now, the other thing, I mean, there's like two parts here. It's the app that needs to be clustered. But separately from that, if, let's say, the app instance is in Tokyo, let's give an example, but then the PostgreSQL, the database, is in Virginia, now you need to have an instance of the database close to where the app is. Yeah, latency would be too much, right? Yeah, and then you use that for the reads, and then the writes would still go through the master. So it just complicates things. And by the way, we're not doing that.
    We're using the master for everything, for both reads and writes; the primary. Mainly because I think we don't have a lot of people contributing content to the database, right? So it's pretty much located within the Midwest here in the States. Obviously, you're in London, but we don't have a lot of global users generating content, creating content. It's mostly reads. It's read heavy, like 95% is read. Which is why R2 made sense, which is why S3 made sense, which is why a CDN makes sense. Yep.
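    [Editor's note: the read/write split a geodistributed setup would need, as described above, can be sketched roughly like this (a hypothetical Python stand-in; Changelog deliberately skips this and sends everything to the primary, letting the CDN absorb the roughly 95% of traffic that is reads).]

    ```python
    def route(sql: str) -> str:
        """Pick a backend for a query: a nearby replica for reads, the primary for writes."""
        verb = sql.lstrip().split(None, 1)[0].upper()
        return "nearest-replica" if verb == "SELECT" else "primary"

    print(route("SELECT * FROM episodes"))        # nearest-replica
    print(route("INSERT INTO episodes VALUES 1")) # primary
    ```

    [Even this naive version hints at the complication Gerhard mentions: once reads and writes go to different places, replication lag and failover become your problem, which is the whole thing they avoided.]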

  6. SPEAKER_00

    Well, we did punt the need for all this extra stuff, but we also gained some dollars back in our pocket. In June of this year, our AWS bill was $155.21, and the most recent bill from AWS, S3, was $16.46. Nice. So moving to R2 really shaved off quite a bit. And how much is our R2 bill? I think it was like less than five bucks. Yeah, I didn't even pay attention, it was so small. I think it was sub $5, it was like three or four bucks for a month. Now, we were holding AWS S3 wrong. I have to mention that. Okay. So the integration that we had, when it comes to caching. When you say we, who's we? Me? Mostly Jared. Oh, you? No, no, me. I'll take this one, it's okay. I gotta get the... the neural engine's gotta know how we're feeling about Gerhard, by my friends. So we gotta put that clearly. I get things wrong a lot, but sometimes I fix them so fast, people don't even realize they're a problem. Unless I admit it, so I'm gonna admit it now. He's gonna admit it this time. So I wanna give a big shout out to James A. Rosen. Oh yes. He's James A. Rosen on GitHub. So he listened to the previous Kaizen, Kaizen 11. He reached out via Slack, and we had two amazing pairing sessions. If you go to the GitHub discussion for this episode, which is GitHub discussion 480, you will see a lot of screenshots, a lot of details about how we debugged this. And James was super, super useful. He worked at Fastly in the past, and he had specific insights, including some very nice diagrams and formulas. They're all there, go and check them out. And we went through a few debugging sessions. And what that meant is that I not only understood very well how Fastly works, how the shielding works, what the various things mean. Again, it's all captured in the discussion. The problem was that the headers that we were getting from S3 in the Fastly configuration, we were not processing them correctly, which means that the Surrogate-Control and the caching was not being respected.
    Therefore, Fastly was hitting AWS S3 more often than it needed to, rather than just serving it from the shield, because the shield still had that content cached. The shield is a series of very big, beefy servers, like, think vertically scaled, and they keep everything in memory, so they're super, super quick. But sometimes they need cycling. So depending on which server is cycled, the content which was cached maybe is no longer cached. Therefore, it has to go back to S3. So that configuration was not very good. And the shield would basically have to keep going to S3 to pull content which it had already seen, way more than it needed to. So we have three to four X improvements across the board now that we're caching things correctly with R2. Let me ask you this question. How long have we been holding it wrong? Like how many years? All of them. This is 2023, right? We established this relationship with Fastly back in 2016, I want to say, right, Terrence, 2016? All of the years. Well, we weren't always on S3. We had local files for a long time. I think it was somewhere six to 12 months, I think, roughly. But again, the problem wasn't that big that you would see it. I mean, as we kept driving improvements, we kept getting closer and closer to getting these things improved, the latency reduced, the cache hit ratio. Basically, we were trying to get it high. And there's also a lot of noise. So when you look at all these requests flying through, it's not immediately obvious what the problem is, because things are kind of working. And okay, the bill is slightly higher, but it's not as efficient as it could be. But it's not broken in that sense. We moved to S3 when we left Linode, right? We had all of the assets stored locally on disk with Linode, if I recall correctly. When we moved to Fly, we moved to S3. Is that correct? No, I don't think so. I think we were on S3 prior. So no, this is for the MP3s. Yeah, well, we had MP3s stored locally. We were uploading them locally.
    Remember, Gerhard? And you had a volume, a Linode volume, that always had issues with read latency and crap. And then we switched to S3 while we were still on Linode. And then later on, we moved to Fly. And it was simpler because we already had our assets on S3, so there was no moving of those, if I recall the order of operations. It makes me also think about these problems at scale. This is a small scale. And I think it's part of the beauty of Kaizen and continuous improvement at our scale. We've done things like Linode and Kubernetes; not Linode in particular being a bad choice, but Kubernetes is not necessarily a smart choice for us to do, because we were on a single node for a long time. It's obviously better at multi-node, and there's much bigger problems that Kubernetes solves. But we get to expose these very small problems, really. I mean, we're talking about 100 bucks, really, in cost that was incorrect. And we've hunted it down through, why is this bill growing? Paying attention, iterating, all collaborating. I just wonder, at larger scales, with larger teams who just have multiple teams with legit AWS billing issues, like hundreds of thousands of dollars, hundreds of machines even, hundreds of instances, how this problem permeates in a team at scale. I mean, how much money is being wasted, really, by holding it slightly wrong, or completely wrong? Yeah, every system has inefficiencies. Unless you look at them, they can be growing. They can be, worst case, not getting fixed. And that has always been the problem: you don't even realize it, and with the benefit of hindsight, of course, I should have caught that, it's a simple thing. But unless you're paying attention to these things and making a conscious decision, it's okay: we will be improving, and we'll be looking at this thing, we'll be trying to drive these small improvements. It took us a while to get here.
    And I think the details aren't exactly clear, because we went through so many of these cycles; in my mind, they're starting to blur at this point. I know we talked about clustering for so long, to the point that it stopped being relevant. You know, like, hey, we can solve this differently. I think that's the beauty of it. But at the same time, you should be driving improvements constantly. I think that's why I want to emphasize this, I want to share our story, in that, hey, even us, as amazing as we are, again, going back to how this episode started, we still get it wrong, and it's okay to admit it publicly, and have a laugh about it, because otherwise, you'll get miserable. You really will. Right, well, you get to laugh and learn, right? Laugh and learn. Pretty much. I'm looking at this chat history between you and James, and just seeing, like, equations of, you know, like, Fastly, Edge, Shield combined, and all this offloading to R2, and, like, this went deep, this collaboration, the learnings that came from it, it's pretty deep. Yeah, big shout out to James, I really enjoyed it, it's all there for you to enjoy, and see and learn from if you want, if you're using Fastly, or any CDN. I think that would really help. The SLO improved. I mean, if you open up Honeycomb, that's the last thing which I want to mention here, and we have a link there, so we're going to open up the dashboard. I have to log in, of course I have to log in. All right, give me a second. Do you have the same link open, Jared? Which one? The one, so this is an SLO: 96% of all feeds served within one second, last 60 days. And I was saying, notice the gain since August 31st. So this is us moving the feeds to R2, and us using caching correctly, and we're seeing this, like, almost-boomerang graph, where we were just under 95%, and then it shot up, literally shot up, at like a 45-degree angle.
    This is the SLO budget going upward, 96.8%, and we're looking good, we're looking really, really good. So this is a combination of feeds being now published to Cloudflare R2, and us consuming feeds from Cloudflare R2 with proper caching, which means Fastly delivers most of them really fast, and that just improves everything. Question, Jared: how do we expire the feeds, these various podcast feeds, in Fastly? How does that happen? We hit their purge API. Nice, that's the bit which I was missing. So there's like two parts to this: upload the feed to R2, and then hit the Fastly API to purge that specific endpoint. Exactly, nice, which is kind of cool. Here's a Fastly tip that I've learned through doing this stuff. You can actually do that with cURL as well, simply by setting the request method to purge, which, I think, is that an HTTP method, or did Fastly make that up? But it's not like a post with a special parameter or anything like that. If you just do cURL and then the endpoint, it'll obviously just get you the endpoint. If you do cURL dash capital X space purge, then the endpoint, you're just telling Fastly to purge that thing, which is kind of weird, because couldn't you, like, DDoS a CDN that way, by just continually purging somebody's URLs for them? I mean, don't do that, dear listener. Edit this out, beep, beep, beep, beep. That's like a Fastly feature, that's kind of cool. I mean, it makes it really easy when you're like, do I want to make sure that this request is going to be completely pristine? Before I actually cURL it, I'll just cURL dash X purge it, and then the next cURL is going to be fresh. Can anyone do that? I thought, like, you need some sort of a key to do that, really? Anyone can do that. That's why I said, please don't do that. It seems like a... Are the endpoints hard to guess though? No, man, it's just whatever URL you're getting. That doesn't sound right. I think we're still holding it wrong. It doesn't sound right. It works.
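    [Editor's note: the cURL tip above, sketched with Python's stdlib. Fastly invalidates a single cached object when you send the nonstandard PURGE method to the URL itself, the equivalent of `curl -X PURGE <url>`. This only builds the request offline; actually sending it would purge a real cache, so, as they say, don't do that.]

    ```python
    from urllib.request import Request

    def purge_request(url: str) -> Request:
        # Same effect as `curl -X PURGE <url>`: the method, not a body or
        # parameter, is what tells Fastly to drop the cached object.
        return Request(url, method="PURGE")

    req = purge_request("https://changelog.com/feed")
    print(req.get_method())  # PURGE
    ```

    [In the publish flow they describe, this would be step two: first upload the regenerated feed to R2, then send the PURGE so the next request fetches the fresh copy.]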

  7. SPEAKER_00

    Okay, well, these things are beasts. Just like everyone listening to this knows, these are such complicated systems that all come together, right? So there's a bit of appreciation for how complicated these things get. There's all sorts of edge cases. There's always edge cases. I don't know whether we are hitting one, but this doesn't sound right to me. No one should be able to purge our cache, our Fastly cache, except us, if we have the correct key. I agree, but you can just do that. So let's put a pin in that and follow up, rather than live debug it. What would happen if you purged the root of changelog.com? Well, it's just gonna regenerate it on the next request. Just the homepage, right? Yeah. Whatever assets are there for the homepage. No, just that URL. It's just a single URL purge. So you can't do, like, a wildcard. No, you can't do a wildcard. Well, what's in the cache for, let's just say, the root of changelog.com? It's just whatever that response is. Oh, I see. So maybe we talk to James next session and see if this is... did I find a bug in Fastly, a global purge bug that would allow very simple CDN DDoS by anybody against any Fastly endpoint? Oh, check the documentation first. So this is like a feature, like a known feature. I think it is, because I read about it. I didn't learn this by trying it. I read about it, but I don't remember. I know that... now I'm Googling it. Maybe we should just put a pin in it like you said. That's what I'm thinking. This rabbit hole is OD. Super handy, it's super handy. How many years have we been digging at Fastly, this specific rabbit hole? And that basically shows what it takes to achieve mastery in any one thing. In this case, it's Fastly. Kubernetes, I think, is like another special hole. And there's a few others. Fly.io, there's like so many things within Fly.io. Every single hole needs a shovel and some digging before you can call yourself, I know this hole. So anyways. I found the docs, URL purge.
    A single URL purge invalidates a single object identified by URL that was cached by the read-through cache. This can be done via the web interface, by using the Fastly API, or by sending a PURGE request to the URL to be purged. For example, curl dash capital X purge, HTTP, blah, blah, blah, example.com, path to object. There's no authentication or anything on that. So that's right there in their docs. Okay, so it must be a feature then. It must be a feature. I'll link to it. It must be a feature then. It's a cool feature. I appreciate it as someone who's developing against their system. How do we disable it? That's what I want to know. Is there a button to disable it? There has to be a reason why this isn't dangerous. There has to be a reason why this doesn't matter all that much. We can dig into that. Speaking of buttons in Fastly. He doesn't want to talk about it, Jared, he's done. That's all right. I'm pushing your buttons on Fastly, okay. That's great. I pushed a few as well. That's yours, Fastly's buttons. I appreciate that. What's up friends? There's so much going on in the data and machine learning space. It's just hard to keep up. Did you know that graph technology lets you connect the dots across your data and ground your LLM in actual knowledge? To learn about this new approach, don't miss Nodes on October 26th. At this free online conference, developers and data scientists from around the world will share how they use graph technology for everything from building intelligent apps and APIs to enhancing machine learning and improving data visualizations. There are 90 inspiring talks over 24 hours. So no matter where you're at in the world, you can attend live sessions. To register for this free conference, visit neo4j.com slash Nodes. That's N-E-O, the number four, J, dot com, slash Nodes. Finally, we have HTTP/3 enabled. So if you have a client that supports HTTP/3, the website should be quicker for you. 20 to 30%; it was just a button, literally.
    I was just like, oh, this should be enabled. I just enabled it. There's nothing in the VCL, by the way. I was a bit surprised. I was expecting some sort of a config to change; it didn't. Okay. So we have, I think, about 30%. I'm just going to click this link now to look at the beta Edge browser in Fastly. And again, I have to log in. I'm doing that now. Just to see how many requests in the last few days are now HTTP/3. 5,000. This was in the last one hour. So in the last one hour, 5,000 requests were HTTP/3. Do we know who's making those requests? Well, we can dig into them, but basically we're serving HTTP/3, which is much, much quicker. And that was just a button push. Just a button push. I'm going to say much quicker, 20 to 30% speedup. I mean, for me, it's mostly instant, but HTTP/3 made it, like, just slightly more instant. So I could appreciate that. Slightly more instant. Slightly more instant, exactly. That's what happened. Like four milliseconds to two, you know, or like six to five, something like that, or six to four, which is nearly instant. Also, we enabled Brotli compression. So they claim 10 to 15% reduction in traffic over GZIP. How do we measure that? I'm not sure. I was looking at different ways of measuring it, looking at the responses. Some don't seem to be smaller. I mean, even though they're Brotli, genuinely Brotli encoded, they're not smaller. So, not quite sure what's going on there. Maybe... I know that if it's already encoded with GZIP, it won't re-encode it as Brotli, but in our case, it's not. I can see there is a difference in bytes, in the size of bytes. So they do change, but they're not smaller. Also, we are redirecting RSS at the edge. So we have, like, an extra config in Fastly, which means that most users should get responses in one millisecond versus 400. Most users that do what? That go to RSS, forward slash RSS. Which is an old URL that we no longer serve from, right?
    Well, it redirects to feed, but we still get like thousands of requests going to it. So, you know, no need to hit our app, basically. And there's a bunch more improvements there in Fastly. So there are a few low-hanging fruit like that. I pushed a button in Fastly. What did it do? I pushed the enable WebSockets button. Ooh, tell us about that. Because it's a trial, by the way, and it's going to expire. It is. So we're waiting for a phone call. On October 29th. Yeah, we'll see what happens. So this is part of our Oban Web story, which is a bigger story, but includes this subplot, which is that when you rolled out Oban Web, which is a web interface for our background job processing library, Oban,

  8. SPEAKER_01

    it

  9. SPEAKER_00

uses Phoenix LiveView, which uses WebSockets to be super continuously automatically updated and all those cool things that modern web apps like to do. And it worked great until it didn't work, because you had to allow WebSockets through Fastly, because we're sending all changelog.com traffic through Fastly, and Fastly requires you to push a button for that to work. And then here's what was kind of funny. I went and pushed the button, activated it, went and tested it, didn't work. Dang it. I'm thinking, did this free trial actually activate? You know, maybe it takes somebody on their end to actually go do a thing for that to actually work out. So of course I immediately blame somebody else. And then, like, a half hour goes by, I'm thinking, you know, it's just going to take them a minute to actually activate. A half hour goes by, you know; I go do something else, come back. The thing still won't stinkin' work. So I go back to the web UI and it has WebSockets turned off, and I was like, well, maybe I didn't push the button. You know, so I do it again. Same process, reload the page. WebSockets are turned off. This, it turns out, is a bug in their web UI. Nice. Not the feature. This is the web bug. An actual bug, where WebSockets were on, but for some reason the actual configuration UI just wasn't recognizing that they were on. You didn't push it hard enough. That's what it was. Yeah, that's right. So then I go back to the docs and realize it was two steps. And I only followed the first step, which was turn it on. And the second step was don't turn it off on accident. No, the second step was you have to go update your VCL and put a snippet in to allow, basically, HTTP upgrade requests to immediately pass through versus hitting the rest of your config. And then that's why I went and did that. And it still didn't work. And then I was really at my wits' end, but I remembered a hack that you and I did years ago when it came to just manually setting the backend on those cookie requests.
Remember that? And I was like, you know what? This code looks a lot like that code. I'm going to copy the same hack. And now we have two hacks. And now we have two hacks and they both work. Okay, I was going to say, no wonder nothing works. Well, I mean, why only have one Gerhard when you can have two?

  10. SPEAKER_01

    So

  11. SPEAKER_00

not hacks, features. You like to have two of everything. So now we have two hacks. And they both work perfectly. And Oban Web works, and the WebSockets pass-through is a beautiful thing. And I'm now monitoring. I'm watching our emails get sent without connecting to the production instance database. Yay. So do you still feel like a boss? I got it. Because you connected to the production DB instance. That was the moment when you felt like a boss. You had all the power. I can rm -rf everything. I did. Well, you know, I can still do that whenever I want. So I still have the power. I just don't have to wield it if I don't want to. So that's cool. So I clicked the WebSockets button a bunch of times and I finally got it working. And now it's enjoyable. My favorite thing is to send off Changelog News, which has over 20,000 subscribers, and watch them get their emails, you know? 'Cause, like, the queue balloons up and then it's just like... flying. I'm just imagining, like, you know, emails just flying everywhere. That sounds amazing. You have to show me when that happens. That sounds really cool. It is kind of fun. Okay. I want to give a big shout-out to Parker Selbert, sorentwo on GitHub. You can check the debugging chat in our Slack dev channel. Remember, don't archive the dev channel. I don't know how to disable that, but still, like, don't do it. It's a bunch of spammy robots. We think we disabled it, but you never know. You never know which buttons stick and which ones don't. Right. As long as you don't press that one, we're good. And I also want to give a small shout-out to Lars. He's also helped a bunch. So go and check it out. The PR is 472. All the details, what went into it, the integration, everything. Thanks, guys. Thank you both. All right. I think we're getting closer to the best bit. Because just before we started recording this, I opened the new PR on Nightly. Remember what we talked about Nightly? I saw this pull request come through. Okay.
I hope you didn't look at it, because I wanted to see your reaction live. I saved it on purpose just before the show. Oh, is that why you waited? Yeah, exactly. I thought you were just procrastinating coding. No. No, I was waiting. It was, like, there for ages and I kept improving it. Not in a branch. The branch is called daggerize. So if you containerize, and you use Dagger to deploy, it's daggerized. Okay. Let's see if it sticks. It's in the Nightly repo, pull request 42. Okay. So this was my marching orders for you on the previous Kaizen, which were doing Kaizen-driven development, which is why I thought the PR got opened so late, like literally, like, 10 minutes before we hit record. Yep. And this was to take Changelog Nightly, which is an old Ruby-based, cron-job-based

  12. SPEAKER_01

    program.

  13. SPEAKER_00

Yeah. Monstrosity. I wouldn't call it a program at this point. Yeah. Oh, thanks. It's a script. It's a script. It's an app. We're calling it an app, the Changelog Nightly app. Yeah, the app Changelog Nightly, which sends out thousands of emails every night, just dutifully, you know, like a script should. For years, right? For years on an old DigitalOcean server, which is pre-Linode then, I guess, Adam, because this predates Linode. I'm seeing 2015. It does, yeah. I mean, it's probably been since 2015, so almost a decade Changelog Nightly has been sending me emails, and thousands of other humans. It's just amazing. We should add up the math on how many emails Nightly has sent out. Anyways, I said, please get this off of DigitalOcean. Let's get it on Fly. Daggerize it. And, you know, 10 minutes before we hit record, you finally got it done. That's it. I was waiting. I'm like, look at this, dog, sweating. Will it be finished? It will be finished. I am surprised. I just thought you weren't going to do it. No. I spent so much time with this and I took my time. I took my sweet, sweet time. And I even did, like, a Cloudflare Pages experiment. So it's using Wrangler, by the way. That's up and running. You can go and check it out. nightly.changelog.place. Nothing to purge. I also did, like, a Cloudflare R2 experiment. There were, like, some issues with index.html. And if you go to this pull request, you'll see, obviously, there's a Dagger pipeline, which is captured as code, as Go code. There is a GitHub Actions workflow that runs all of this. There's an nginx conf. So we are using nginx to serve this. There's something called supercronic, which is a modern way of running crontab. Go and check it out. It's there. Okay, I'm excited. Procfile support. Who remembers Foreman from the audience? I remember Foreman. Exactly. Hasn't been updated in three years, but doesn't matter. It's amazing. It's old tech. It's new to us, I guess. So we're using Foreman.
We tried, for example, to use multiple processes in Fly, but when you do that, you get multiple machines, which we didn't want. We want a single machine. This is really small, really simple. We have a supercronic process, which basically runs this cron. It's really nice. And the crontab, which now is in the repo. We are versioning our cron. It finally happened, and it works. You can debug it, a bunch of things. This is cool. And the last thing remaining is the 1Password service account integration, which is the secrets. So 99% done. Let's see how long this is going to take to merge. Too much, yeah. That last mile. So it's there. And this just gave me a bunch of ideas. I'm wondering, do you want to try it locally, Jared, to get some reactions out of you? Or do you think you're not feeling brave? I mean, you mean right now, here on the show? Yep, right now, right now. How hard do you think this is going to be? If it takes more than two minutes, we should skip. Okay, let's try it and then we'll just skip. All right. If we must. All right, walk me through this. Do you have the Dagger CLI? Oh goodness. No. Let's skip, let's skip. Okay, no, that's okay. There's brew. I actually just reinstalled Docker on this machine. Well, you've got to wait for two minutes for brew update. No, I just ran it, because I literally installed Docker the other day. Okay, good. Much to my chagrin, I got Docker back on this sucker. Docker? Okay, we need to talk about that, but not now. Three more minutes. Okay, brew install dagger CLI, or brew install dagger? Basically, it's brew install dagger/tap/dagger. That's the command. Boom. Oh, do you have Docker running locally? Yes. Good. Okay, you'll need that. Oh, good. We'll talk about that.

  14. SPEAKER_01

    I

  15. SPEAKER_00

thought you just said you should talk about getting me off Docker. Yeah, I should. But again, it's a bit more complicated than that. But for now. All right, I have Dagger and I have Docker. Excellent. So now check out the branch. Okay, check out the branch. And then in the repo, in the top level, I'm assuming you have Go installed. So many assumptions. What's the branch name? daggerize. git checkout daggerize. Okay, which Go? Doesn't matter. 1.20; if you have 1.20, you're good. I know, I just literally typed in which go. go version: 1.20. That's good. Okay, I'm there. Dagger run. Dagger run. Go run dot. Go run dot. dagger, space, run, space, go, space, run, space, dot. Correct. You got them all. I'm going. Oh, I like this little UI you have here. That's a little like... That's Alex, vito on GitHub. That's really cool. Alex built this. Episode 64, Ship It 64, I still remember it. How do you have Go installed, by the way? Yeah, I actually have it installed via asdf. Do you think I set it as a global or local or something? I think you need a global one, or you can commit it; both work. I forgot that you use asdf too. Right. So you can either do global or... I think global is the easiest one. Dagger run, go run dot. Dot. Yeah, that's a go run dot. We're just basically wrapping. Okay, it's downloading. We are, we're connected. We started the engine, we started the session, we're connecting. Ooh, success. Name: nightly, usage: nightly, global options, command, version, author: Gerhard Lazu. Gerhard, come on, give me some credit here. I will. The PR hasn't been merged yet. You don't have to approve it, by the way. I wrote some of this stuff. You did, you did. Actually not, okay, none of the Go code. Did you rewrite the whole thing? Yeah, but isn't this the Nightly app right here? No, no, it's not the... it's just the automation for the Nightly app. Oh, I see. So you get a CLI that does all the automation, and it's called nightly. Okay, so you authored the nightly CLI.
Correct, which basically is a Dagger pipeline. It basically wraps Dagger and has a bunch of commands. What are the commands that you see there? What does it list? Build, test, CICD, and help. Great, I think we should try build. So now you have to run dagger run, go run, build? Yeah, or you can press up. Yeah, I just pressed up. Space, build; you just append build to the previous command. After the dot, the current directory. Yeah, so you have the go run dot, which basically looks for the main... like, I guess, the main package, and, you know how Go works. And then you tell it... again, if we had the CLI bundled, it would be just a binary, so it would run the binary. But in this case, we haven't built it yet, and maybe we shouldn't. The idea is, all this code is there, and we're running it from there, and Go does its magic. It's doing its magic. For those who are listening, this UI looks like, in the terminal, but it looks like a git commit graph, you know, where the merges and the branches are. It's a lot like that, only it's multicolored. Right now, it's executing bundle install, which is why it's taking a little bit. And it's very shiny, and if everything goes correctly, then I'll be very happy. But if it goes incorrectly, I'm gonna look for this author, Gerhard Lazu. Yes, if you know him, we need him. I'm gonna blame him. So this is the beauty, or one of the advantages, of packaging pipelines this way. They work the same on my machine, they will work the same on your machine, and they will work the same in GitHub, or any CI that you use, for that matter: GitLab, whatever you use, Jenkins. Even Jenkins. Even Jenkins. It makes a couple of assumptions. So, for example, for the whole provisioning, for the engine to be automatically provisioned, it makes an assumption that Docker is available. And that's because it just basically needs to spin up a container where all of this runs.
So if you don't have that, then you get into issues around which platforms are supported. Anyway, so it just basically shortcuts a lot of things. In production, we run these engines in Kubernetes. In our case, at changelog, we run them on Fly. So we have a bunch of engines, Dagger engines, deployed on Fly; we spin them up on demand, they're just machines, we suspend them when we're done. When the pipelines start, we spin them up, we run the code against those engines, they're stateful, we have everything, they're super fast because everything is cached, and then we spin them down when the job is finished. So that's why we don't have to worry about this, and we're not using the built-in Docker that comes with GitHub Actions. We don't make use of that, because we run our own engines. And by the way, all that code is in changelog. In this case, it's slightly different, because the Nightly repo is different. So we do use the Docker in GitHub Actions, and it automatically provisions the Dagger engine, and then everything runs there. And by the way, you can look at the actions, because as part of this pull request, we added a new action, it's called ship it, and you can go and check it out. This is a GitHub Action. You can see how that runs, how long it takes, a few things. So go build worked. Now, I ran the tests while you were talking, because I got bored. No offense, Gerhard. No, it's all good. I talked for a while there, all good. And it ran. It was for you, it was for our listeners. Sorry, I had to pay it back. 50 examples, zero failures, okay. So all my tests are passing, I just want to point that out. But it took 20 seconds to run, and the test runner took 0.85 seconds to run. So is it doing a lot of setup every time you run, because it's inside of this whole pipeline, or is it not caching gems? I mean, it was faster than the build was, but it was still 20 seconds to run the tests. So I think it will show you what was cached and what isn't.
So if you're looking at the output, it tells you all the operations that were cached. Can you see that? So which parts weren't cached? It looks like exec bundle install. Was not cached? Wasn't cached; it ran it. It fetched three, used the rest, installed a few. Maybe it's because there were some that were test-only. So that's probably why. It had to run it again. Let me run it now a second time and see. Exactly, because by default, it says without test, if you look at the output. And then, when you want to run tests, it says with test. There you go. So there it is: 3.57 seconds. And those were all cached, so much more reasonable. Pretty cool. By the way, I ran the same thing. And for me, it was nine seconds. I ran exactly the same thing as you. The gems had to be installed. Sometimes the internet connection has something to do with it. We're talking seconds, it's not a lot, but still. This is rad. So what if I want to hack on this now, and I want to run the web server or the Rakefile, et cetera? That's the new stuff that's coming. It's not out yet. That is the cutting edge. That is the cutting edge. I always know where the edge is, don't I? I can always find the thing, like, ah, you can't do that. But you can smell it. Yeah, all this stuff is on the edge. So there's Dagger Shell coming. There's Dagger Surf coming. And by the way, these are all experimental features which may change. But exactly to your point, you want to be running it locally. You want to drop into that environment that's been created for you. You want to do a bunch of things in these contexts. Now, there is something special about the build, and this is, again, in the pull request, in that you can use a --debug flag. And what --debug does... you can open the code and see exactly what it does. But in a nutshell, it exports a local image.
It builds a local image, and this is a container image that you can then import in Docker, and then you can get into the context of that. This is a temporary workaround until we get Dagger Shell available. It makes a couple of assumptions. It asks you to have .env. Basically, it requires these files so that it works locally. And they just need to exist. I mean, they don't need to be valid or anything. You don't have to have production credentials, but they need to be set. And it also needs a github.db. That's the other thing that I wanted to talk about. How is this basically wired together? And what else do we need from Nightly to finish the migration? So I know we have the secrets that we need, but there's also github.db, which in my understanding is just SQLite. So as long as we stop it and move that across... actually, we don't even have to stop it, because there's nothing to stop. Just move it across. And that's what I did, just manually. This is the database, by the way, that we backed up thousands and thousands of times over to S3, and realized we had just gigabytes of backups of Nightly. So we have plenty of copies. If you need to get a github.db, I can find you one. So that's all good. The question is, as you know, in Fly, there's the dist directory, which is stored on a volume, but the database is currently stored in the context where the app code is, and that is not using a volume. That's just a container image. So I think we'll need to relocate this database to a place which would be persistent. So Fly has some SQLite features, don't they, where they can just use their SQLite? It does. LiteFS, yes, that is correct. We can use all of that, but again, we need to put this database on the volume, which is LiteFS, which basically has the LiteFS feature on. So I think we'll need to make a change around where we configure this database and where it's stored, because right now, I think it's hard-coded, the github.db part. I think it's just local to the app code.
So the app code will have to change to find where the database is, basically. Right, so we need to have some sort of flag which basically is able to configure where this database is stored. And then we put it on a LiteFS volume. The other thing is that, in addition to... so .env is easy to replace, obviously, on Fly, because you just declare environment variables. But specifically, this uses BigQuery, which requires a key. You can't do environment variables. You have to have a file. And so I know that that's been an issue in the past with read-only file systems, or things that are gonna get wiped away: how do you actually get that file into place? Is that gonna be a problem? So I was thinking, well, it needs to be a secret, right? So we could store it on a persistent volume, same one as the database, maybe. But I think it really should be an environment variable. I think it's a binary file, right? It's not like a text file. You can't read it. Yeah, I don't think we have that option, because it's an old Ruby gem that's reading this file. Like, it's not code that I wrote that loads that into memory. So if we can somehow put it into the secrets, and then, when the app boots, write that into a file, then the app would read that file, maybe. I mean, the other thing which we can do, and again, I know it's not ideal, is that the container image only gets pushed directly to the Fly registry. Sorry, yeah, exactly, to the Fly registry. And this is the app's Fly registry. So really, only Fly, or only someone with authentication, can read this image. So it's not like it's on GHCR or anything like that. And even there, it could be a private image, but this goes directly to Fly. So could we embed this secret in the image? Yeah, if the image is not gonna be distributed, then we could certainly do that. Yeah, it's just distributed for deployment purposes, but otherwise, it won't be public. It won't go anywhere. Now, if CI is doing that image creation, then it would have to have access to this.
So that would have to be private somehow. However, that's where 1Password comes in. With the 1Password integration, when the pipeline runs, it could read this file directly from 1Password. It wouldn't even need to be stored in the CI. That's the improvement which I've talked about for a while, to do for changelog as well. The idea is that we don't want to be storing all these secrets in GitHub Actions. We just want one secret, which is the 1Password service account key, or token. I think it's called a key. And then, with this, we get access to 1Password, to specific secrets which are read-only. And then, once you have this secret stored in GitHub Actions, the pipeline can get access to everything else it needs. That's how you do it, including files, right? Because you can store files, too. Is the key all that's necessary, or is there, like, a cert involved in that, or, like, similar to an SSH key kind of thing? Or is the key itself the key? It's more like a token. If you think about, like, an API key or a token, that's what this is. This is a new feature that 1Password introduced; they're called service accounts. And before, you had to have, like, a Connect server running, which then connects to vaults. It was a more complicated setup. You had this extra component. And I was very excited about the service accounts when they were, I think, announced in January of this year, and they're finally generally available. The idea is that as long as you have this token, API key, token, you can use op, the op CLI, the 1Password CLI, to talk to 1Password. What's up, friends? I'm here with Vijay Raji, CEO and founder of Statsig, where they help thousands of companies, from startups to Fortune 500s, to ship faster and smarter with a unified platform for feature flags, experimentation, and analytics. So Vijay, what's the inception story of Statsig? Why did you build this? Yeah, so Statsig started about two and a half years ago.
And before that, I was at Facebook for 10 years, where I saw firsthand the set of tools that people, or engineers, inside Facebook had access to. And it's the breadth and depth of those tools that actually led to the formation of the canonical engineering culture that Facebook is famous for. And that also got me thinking about, how do you distill all of that and bring it out to everyone, if every company wants to build that kind of an engineering culture of building and shipping things really fast, using data to make data-informed decisions, and then also to inform what you need to go invest in next? And all of that was fascinating, was really, really powerful. So much so that I decided to quit Facebook and start this company. Yeah, so in the last two and a half years, we've been building those tools that are helping engineers today to build and ship new features, and then roll them out. And as they're rolling them out, also understand the impact of those features. Does it have bugs? Does it impact your customers in the way that you expected? Or are there some unintended side effects? And knowing those things helps you make your product better. It's somewhat common now to hear this train of thought, where an engineer or developer was at one of the big companies, Facebook, Google, Airbnb, you name it, and they get used to certain tooling on the inside. They get used to certain workflows, certain developer culture, certain ways of doing things, tooling, of course, and then they leave, and they miss everything they had while at that company. And they go and they start their own company, like you did. What are your thoughts on that? What are your thoughts on that kind of tech being on the inside of the big companies, and those of us out here, not in those companies, without that tooling? In order to get the same level of sophistication of tools that companies like Facebook, Google, Airbnb, and Uber have, you need to invest quite a bit.
You need to take some of your best engineers and have them go build tools like this. And not every company has the luxury to go do that, right? Because it's a pretty large investment. And so the fact that the sophistication of those tools inside these companies has advanced so much, and that's left behind most of the other companies and the tooling that they get access to, that's exactly the opportunity I saw: okay, well, we need to bring that sophistication outside, so everybody can benefit from it. Okay, the next step is to go to statsig.com/changelog. They're offering our fans free white-glove onboarding, including migration support, in addition to 5 million free events per month. That's massive. Test-drive Statsig today at statsig.com/changelog. That's S-T-A-T-S-I-G dot com slash changelog. The link is in the show notes. op is such a weird one. Just now, when you said 1Password, I heard the word O-N-E, one password, which is... 1P. For the longest time I've been like, what does op stand for? Like, what does that... I mean, I've used it before, and I'm like, why is it op? I know. Finally, it makes sense. Why wasn't it, like, 1P? I don't know. No idea. Well, in certain contexts, can you even start commands with a number? I mean, like, in programming languages, certain variables cannot start with a number in their name. They have to start with an alphabetic character. And so op might just be more globally useful. And that's probably why, honestly. Up until now, though, like, literally when he said 1Password, I spelled out the word one in my brain. I thought, okay, that's why it's called op. I just thought their command line was overpowered this whole time. So I thought they were just calling it OP, you know. There you go. Now, there are a few binaries that start with a number. For example, 2to3: the digit two, the letters T-O, the digit three. Apparently it's a Python package.
I have that as well. Oh, it's the thing that upgrades from Python 2 to Python 3. You could just alias 1p to op if you wanted to, of course, but that's the easy button. I like that idea. That's what we're doing in our pipeline. 1p. 1p. This key that you have for 1Password, it's in GitHub Actions, and no one can see that, right? That's not something that's public. It's a secret, correct. It's a secret secret. That's cool. That's cool that we could do that on the fly via CI though, because

  16. SPEAKER_01

    that's

  17. SPEAKER_00

the way you want it. Exactly. And then the secrets, we can modify them in 1Password, and we no longer have to update them anywhere else, because whatever is connected to 1Password is able to retrieve the latest values. This will be so much nicer. That is the way to do it. I sure hope they win, okay? I sure hope they win, because there's just some... as a user, a daily active user of 1Password for a decade or more, there are some oddities with how it operates from a UX standpoint as a user. The application, great. Even the application I have some issues with, but whatever. It's a little strange. So I just hope they figure out how to win long-term, because that's a great feature. Well, we need another password manager. We need to go and establish that. Passbolt. Okay, tell us more. Well, what I know about them is they're open source. We did talk about that via DM, Gerhard, and I did reach out to the CEO, who I spoke with through our ad spots. I was impressed, and I didn't consider the license. So this is one of those moments where I was like, okay, you say you're open source. I'm gonna assume you're the best version of open source, but I think it's AGPL, which is frowned upon in some cases. It's not that it's not open source; it's just that some consider it user-hostile. It conforms to the open source definition, does it not? Right, it does conform, but some businesses have issues with it. And then I think the TLDR response from the CEO was: because this is used like an application, it doesn't have an issue. Whereas if you're trying to consume the software and repackage it, that's when AGPL actually has more of a compliance concern. And so, to his defense on that, I think that's pretty accurate. Although the premise of Passbolt seems quite awesome, where they actually have, if I understand correctly, different than 1Password, where you have one secret and you get access to the whole vault, which is encrypted.
It encrypts more like ACLs, where you have more fine-grained access controls to particular buckets, basically, or folders, similar to, like, a file directory. And it's really designed for teams who may have multiple projects with hundreds of passwords per project. Whereas, like, if I go into 1Password now, if I'm not in my particular Adam-only 1Password, it can get a little messy, with, like, all the extras in there. Like, I can search for something, and now I'm seeing our organizational secrets where I don't really care to see them. Like, I would love to have the same kind of access, but I don't really care, when I'm searching for Adam's favorite password dot com or whatever, to be finding, like, all of our GitHub secrets and, like, other things. Like, it's just messy. So I think there are some things with Passbolt that they're doing to sort of, like, compartmentalize where you keep your secrets, who you give them to, who has access to them, and different stuff. It's a bit more fine-grained, their approach towards cryptography and sharing and decrypting and encrypting. Well, I'm all for trying it as a second alternative. Always need two, right? What if the primary one fails? Need to figure out how to sync things between the two, but that sounds fun. I gotta have two, right? We have to have two, correct. You should have two. I'm not sure you have to; you should have two. You should have two, yes. Even when you don't know it. Yeah, exactly. Even when you forget about it. Oh, hang on, you're right, I do have two. Yes, I did forget about that. That was awesome. We could fix that problem in 1Password as well, by the way, Adam. We could create a new vault. Currently we have a shared one which is shared amongst all of us... say all of us, the three of us... but we could create another one which is infra-specific, or whatever-specific, and then just a few of us can be part of it. I mean, it's not that big of a deal.
It's more like, I suppose, if you were in a larger organization... like, again, we're a smaller team, so we have smaller-scale problems. I think they're more like warts in this scenario. It's not that big of a deal. I can operate around it. But I use 1Password personally, and in business, and then also in the secrets context. I've got three different contexts I use my 1Password in, and so I was just complaining a little bit that whenever I'm using it personally, like literally Adam in this context, not Adam as part of Changelog, like, it's just my own secrets in there, I've gotta, like, wade through SSH keys and just, like, different things that are part of our infrastructure stuff, which isn't that big of a deal, except it's just not relevant in that moment. Maybe it would make more sense to have an actual separate vault. Aren't there separate vaults? Yeah. Like, I have a private and a shared vault in mine. I don't use it personally, so I don't have this problem. So I'm not sure exactly the context, but it seems like you could just activate your private vault and not see any of our stuff. I think in 1Password you can disable shared vaults, I think, but I'm not sure. Because you just basically toggle visibility, say I don't want to see this vault in my 1Password. I have a private, Jared. We have changelog.com. We have a shared. And then you can have more sub-vaults. So everything that I'm talking about is in changelog.com. But whenever I search, this is all kind of, almost TMI in a way, but whenever I search in 1Password, it's... It's everything. Yeah, it's all vaults. That's on iOS and desktop. What was your master password tattoo? I forgot. TMI. Just making fun of his laugh over there. Is that TMI? That is TMI. Yeah, you almost got me. Just kidding. So close, so close. Mother's maiden name. Here you go. It's capital B. What was the name of your first pet? Yeah. So yeah, when you search in there, it's everything. But I know what you mean.
I mean, there's a couple of things in the UI which I wish they were improving just as much as we are. I really wish that 1Password did that. They should Kaizen things. They really should Kaizen things. They should consult us on Kaizen. They should hire us to come there and tell them the Kaizen stuff. Don't you guys know that you're supposed to be improving continuously? Here's the t-shirt to prove it. Okay, so I'm excited about Changelog Nightly. This is really cool. We're at the last mile. It'll probably take us to the next Kaizen before we get this over the finish line, it seems. Don't you think, Gerhard? Oh, well, I think there are just a few things which we need to figure out. I don't think it should take that long. I mean, I was sitting on it for a while, I have to say. But I think it will be more interesting to figure out whether we want to serve these assets from Fly.io, like the files, or whether we want to upload them to R2. I'd just serve them. Okay. I mean, these are as low-traffic as you can get. It literally just... we send one version of this to Campaign Monitor and say, email this to this many people. And then we serve the index file for anybody who hits open in web. Okay.

  18. SPEAKER_00

So I just don't feel like any more than that's necessary. What about buffer? Do we keep buffer or can we remove that part? No, we can remove buffer. Okay. We don't even use buffer anymore, yeah. Because I know the scripts, whatever it's called, script-something, if you get in the crontab, it does the generate, the deliver, and there's one more, buffer. So you want generate and deliver. Yeah, I think the buffer is probably a no-op at this point. I definitely disabled it. Okay. So it probably just doesn't do anything. Yeah, maybe there's a bunch of cleanup to do there as well, maybe. Or just leave it as is. It's all good. Okay. And then we want to deliver after we generate, and don't deliver if we don't generate. Correct. Okay. What if something fails? How would you know that something fails? I won't get the email. Okay, that's a good one. I've enabled a Sentry DSN. Okay, what's that? So Supercronic has this built-in integration with Sentry where you can see your crons that fail directly in Sentry. So I've set up the same Sentry DSN that we have for the app, for the changelog app. So you should be able to see failures in cron using the Sentry integration. That's cool. I'll take it. I thought so too. I'll take it. It's like, ooh, I can enable this variable. I enabled it. It's a public one, by the way. Don't use it, because we don't want to see your failures, basically. I mean, Sentry says it's okay for it to be public, and we've had it public for a while. No one has thought about this, so. Yeah.
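The setup described above can be sketched roughly like this. This is an assumption-heavy sketch, not the actual repo: the script names, paths, and schedule are made up for illustration, and the DSN is a placeholder, not the public one mentioned in the episode. It shows the two ideas discussed: chaining generate and deliver with `&&` so delivery is skipped when generation fails (with the disabled buffer step dropped entirely), and running the crontab under Supercronic with a Sentry DSN in the environment so failed cron runs are reported to Sentry alongside the app's errors.

```shell
# Hypothetical crontab for Changelog Nightly after the cleanup:
# deliver only runs if generate exits successfully; no more buffer step.
cat > nightly.crontab <<'EOF'
0 22 * * * ./scripts/generate && ./scripts/deliver
EOF

# Supercronic's Sentry integration, assuming the DSN is picked up from
# the environment as the episode suggests (placeholder DSN shown):
SENTRY_DSN="https://<key>@sentry.io/<project>" supercronic ./nightly.crontab
```

The `&&` chaining is what encodes "don't deliver if we don't generate" without any extra orchestration.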

  19. SPEAKER_00

Why are you telling everybody? Well, we don't want to see your failures. That's the point. I mean, why would you spam us with your failures? I mean, go and get yourself a key. That's a better idea, I think. Right. I mean, I don't even want to see my own failures, let alone somebody else's, right? Exactly. Maybe only if someone wants us to Kaizen their failures. That's the only reason why they would share them. That's right. So that's cool. We have to figure out what we're going to do here, though. For what? Because we're going to get back another $30 and 70 cents when this move is complete. I think maybe we go out at KubeCon and we treat ourselves, you know? To a drink. Hang on, how are we getting $30 back? We pay for DigitalOcean. You know, the DigitalOcean server, that can go away. Oh, I see, yes. Yeah, I don't know why it's 28 bucks a month, but it is. You beefed it up at some point, because that's not bottom of the barrel. That's like

  20. SPEAKER_01

    two

  21. SPEAKER_00

steps up. No, I don't know what's making it 28 bucks. You can usually get in on those things at five bucks, 10 bucks on every VPS provider; pretty much five bucks entry fee. So 28? That's a beefy machine. I don't think I created that server. It's been so long, I'm sure that whoever created it forgot. Yeah. It's been 10 years, almost. In fact, I'm almost certain I didn't do it, but... Well, I think that's the cleanup. I think that's something that we can follow up on. But the important thing is that we can go out for drinks at KubeCon when we see each other, November 6th through 9th in Chicago. Oh yeah, finally, after seven years, we finally meet one another. Oh my gosh, I'm not sure what I'll do. Well, you'll realize there's more than a head. Like, wow, there's more to you. Let me look at the rest of you. Hold still, Gerhard, let me look at the rest of you. There's your arms, oh my gosh, you've got arms. We've only been seeing each other's heads for seven years now; it'll be so weird. It will be weird. Yeah. So we're excited, we're gonna be there. We're gonna be doing some recordings at the... not at the venue, because reasons. Mostly reasons that I don't understand. But regardless of that, at the Marriott Marquis, which I assume is right next to the venue. It's connected via breezeway. Because people aren't gonna want to Uber or Lyft over to talk to us. And we'll also of course be at the main show and just enjoying conversations. Gerhard, you've been to this event before. Adam, you have too. I have not, so I'm not really sure what to expect. Maybe you guys should talk, because I don't even know what it's like. It's big, it's lots of people, it's a lot of energy. A lot of things are happening. Well, he's been there more recently; it's been bigger since I was there, which was just before the pandemic, which was probably the biggest it had been up to that point. I think the last year I was there was 2018, Seattle. That's been a while. Big conference, a lot of fun stuff.
Imagine nearly 20,000 people. Is it that big, 20,000? Close. Yeah, it's tiring too. It's a people overload. I could only imagine being a superstar there, having to talk to everybody about Kubernetes. Like Gerhard? The Cloud Native Computing Foundation, like this direction of everything. Like Brendan Burns, for example. When I talked to him there last time, I think he almost fell asleep during the conversation, and it's not because I was boring. No.

  22. SPEAKER_00

See how fast you had to qualify that? He knew it was coming. He preempted it. No, it wasn't because of that. It was the last day. Actually, it wasn't him. It was somebody else that almost fell asleep, but I can't remember the person's name. They're from Weaveworks. Point being, people were tired. Very tired, yeah. It is high-energy. You need to pace yourself, for sure. I've been to a few. You do. It's a marathon, like four days for some of these folks. The trainings, the workshops, the pre-meetings, the deals that might happen as a result of being there. If you run a company, for example, you're probably not just gonna be there talking to people on podcasts. You're probably gonna be selling your thing. I think you need to be very deliberate about the people that you want to talk to, and it's something which I've learned. And there's a couple that, if you really want to talk to them, make sure you talk to them before the conference and you set something up. I want to give a shout-out to Frederic from Polar Signals. He reached out. They launched Polar Signals Cloud today. And I'm so excited to be talking to him. Ship It 57 was the last episode when we talked. And we had a couple of conversations, starting with KubeCon 2019. That was our first one. You can find him on Changelog. And we'll be talking about Polar Signals Cloud, Parca, a bunch of things. We had Shipmas. We had a couple of things with Frederic. Solomon, I really want to record with him. Really, really want to record with Solomon. So that's another one. And Erik. Have you ever recorded with him? It's been a while, though, right? You had one with Ship It a while back. We had a few, but it's been a while. But never solo. It's always been with somebody else. Never solo, that is correct. I'm thinking one-on-one, maybe. Maybe. Well, you have to talk to the person who owns the room. I know, which is you, right? It's us, technically, but just saying. Could be fun. Putting it out there.
But again, I would really like to talk to him also. Erik, I mentioned, he's a BuildKit maintainer. He's been doing some really cool things with distributed caching. Think R2, S3 on steroids, B2. Basically, all the really cool stuff there. All the twos? All the twos, exactly. He's on all the twos. So yeah. And I'll be at booth N37. Dagger will have a booth, so you can come and say hi. I'll definitely be there. Or you can also record with us in the Changelog room. So we can talk in a bunch of ways. But I would say be deliberate. If you're hearing this and you want to talk to us, make sure that you reach out, because we may not meet. That's how crazy it gets. Not because we don't want to, but because we didn't know. I would say Twitter DMs or Slack. That's free and open. Or emails. Or email. That works as well. changelog.com/community. Mastodon. You can hit us up on Threads

  23. SPEAKER_01

    or

  24. SPEAKER_00

or on Instagram. Or just leave a comment on this episode. That also works, right? We have comments on the website. That's true, yeah. Does that cover all of our communication media? LinkedIn. Oh, yeah, we do have a LinkedIn. And you can get us there. Gerhard's on LinkedIn. I deleted it, and I had to recreate it. All the spam. Anyways. Well, we do have the phone number as well that they can call. You can call us as well if you want to. You can call us? Do we have a phone number? Yeah. Really? We have a phone number. Oh, yes. We do have a phone number. It's right there in Fastly's cache. I would say hello. Adam is looking it up. He doesn't know. Well, I know it. I just forgot the middle numbers there. So it's been and always has been 1-888-974-CHLG, or 2454. So 888-974-2454. Again. I'm just kidding. No worries. And if you call that number, Adam will answer it. And I will. It'll be awkward unless you have something to talk about. So if you're going to call it, think of it ahead of time. What would I want to talk to Adam about? That's right. Otherwise, it's just going to get super awkward quickly. And we've had some of those conversations turn out. It still might get awkward, but... You can give him some kudos. You can, you know, obviously, we haven't praised each other enough in this episode, so we can always do that. No. I think if we're going to get kudos, you should put them directly into our transcripts repo. Oh, yes. So we can train our neural search engine on them, you know? That's right. Well, KubeCon, we're coming for you, whether you like it or not. Yep. It's been many years since I've been there, and I'm excited to meet you face to face. Yeah, me too, Gerhard. Me too, me too. To see the rest of your body beyond your head. That's too funny. You said it. I know. You see, that's how it happens. Just plonk the idea and then see what happens. Oh, my gosh. Chicago. We've got 30 more bucks to spend because of DigitalOcean going away.
So we're coming there on fire with some money to spend. 30 bucks per month. Amortize that thing for a year. Yeah, we have to switch it off first. It doesn't count otherwise. Oh, OK. Well, you better wrap that PR up then. Come on. I know. The pressure's on us, which won't be as nice a Kaizen. Yeah, Kaizen. Will you be wearing your Kaizen t-shirt, do you think? Who is that question addressed to? Yeah, who are you asking? We all have Kaizen t-shirts. I don't have one. Both of you. What? No. You don't have one. We sent you one. No. How is this possible? Well, maybe you did, but I never received it. It never came? Well, you know, there was a couple of years there where shipments were not easy to land. But I'm pretty sure, didn't I at least give you a coupon code or something for the merch shop at some point? I don't... We talked about that. But everybody that we work with has our shirts. And the fact that you don't is a crying shame. I think we should just bring some to Chicago with us or something. That's a good idea. It's just a month away. Then we don't have to worry about all this shipping across the Atlantic. That's a good idea. Or the Pacific. Well, I'll go to merch.changelog.com, and I will order a Kaizen t-shirt and have it shipped to the Marriott Marquis, because that's where you'll be. That sounds dangerous. Do we have the size? Because, oh, we don't have the size. That was the problem. And yeah, that was the case, as far as I can recall. You think so? Yeah. We better get to work on that. That's what I said last time. Well, this is embarrassing, to end a show like this. I guess we'll have to iterate. Well, that's one thing to improve. And by the way, we can talk about all the other things in person, about what else you want to improve, unless there's something specific that you want to shout out now. What happens? Because I listen to these not only to realize which jokes didn't work out, but also, what should I do next?
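For what it's worth, the amortization quip works out roughly like this. A minimal sketch using the ~$30.70/month refund figure quoted earlier in the episode; the DigitalOcean bill itself was stated as $28/month, so treat the numbers as approximate.

```python
# Rough annual savings from switching off the DigitalOcean server,
# using the ~$30.70/month figure mentioned in the episode.
monthly = 30.70
annual = monthly * 12
print(f"${annual:.2f} per year")  # → $368.40 per year
```

In other words, one retired $30/month server buys a fair few KubeCon drinks over a year, but as noted, it only counts once the server is actually switched off.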
So if there's anything that we want to shout out, now is a good time. I think we just want to get Nightly shipped so that we can switch off DigitalOcean. Spend money at KubeCon. OK, nice. Anything else? I've got $30 burning a hole in my pocket. That's a quick one. I'm pretty sure we can do that this week, even. Before this goes out. Challenge accepted. I don't have any other marching orders at the moment, Gerhard. I think we'll think of some in the meantime for our next Kaizen, but right now I haven't put much thought into what we should do next. OK. Or what you should do next, in that regard. I know Passbolt came up. There's a bunch of things like that. Erlang releases. So there's always things. I think it's just a matter of priority. And if you can't think of anything, there's so much to do. It's just that nothing's obvious. That's good. Cool. Cool. Well, let's end it right there. If you're going to be at KubeCon in November, hit us up via any of those channels and let us know. We'd love to record. We'd love to just say hi. We'd love to see the rest of everybody's bodies in Chicago. That'll be amazing. Maybe too much. So many bodies. Everybody. We look forward to seeing everybody there. Yeah, that's it. All right. Kaizen. Cool. Kaizen. Bye, friends. So if your body is going to be, or wants to be, in Chicago for KubeCon 2023 in November, we want to meet you. And one of the hidden secrets here at Changelog is we have a free Slack community. Yes, you can go to changelog.com/community, and tons of folks are in there always talking about something, and it's a lot of fun. So if you want to be in there, just go to changelog.com/community and sign up. It is free, of course. And we want to see you in Slack. So, hey, if you're going to be at KubeCon, hop in Slack, say hello on Twitter, send us a DM, whatever it takes. Just let us know you want to meet up, and we will happily give you instructions.
We'll likely also put these instructions on social media, so you'll see them there. So that could work, too. But either way, we'd love to say hi. We'd love to meet you. If you're a fan of Kaizen like we are, we also have an awesome Kaizen t-shirt. It's actually one of our most popular t-shirts. You can find it at merch.changelog.com. And sadly, we learned on this episode that Gerhard does not have his shirt. And I think the reason why is we only have 2X and small shirts available. We have to do a restock on this. But this is our most popular, the Kaizen t-shirt, continuous improvement, merch.changelog.com, threads for fans of Ship It and continuously improving. Love that t-shirt. So check it out. But hey, friends, it's over. This show is done. Next week we'll be at All Things Open. Monday, news will happen. And I'm not sure of the rest of the schedule, but we're playing it by ear right here in post-production. If you're gonna be at All Things Open, say hello, say hi, DM us, hop in Slack, all the good things. We'd love to meet you. Well, that's it. This show's done. We'll see you again soon. Kaizen.