Changelog & Friends — Episode 55

Kaizen! Pipely is LIVE

The Changelog team gathered in Denver for their first in-person Kaizen episode, showcasing the launch of Pipely, an open-source CDN project built on Varnish.

Speakers
Jerod Santo, Adam Stacoviak
Transcript (127 segments)
  1. Jerod Santo

    Welcome to Changelog & Friends, a weekly talk show about doing big things. Thanks to our partners at Fly.io, the public cloud built for developers who ship. We love Fly; you might too. Learn more at fly.io. Okay, let's Kaizen live in Denver. Well, friends...

  2. Adam Stacoviak

    I'm here with Damian Schenkelman, VP of R&D at Auth0, where he leads the team exploring the future of AI and...

  3. Jerod Santo

    ...identity. So cool. So Damian, everyone is building in the direction of GenAI: artificial intelligence, agents, agentic. What is Auth0 doing to meet that possible future?

  4. Adam Stacoviak

    So everyone's building GenAI apps, GenAI agents. That's a fact; it's not something that might happen, it's going to happen. And when it does happen, when you are building these things and you need to get them into production, you need security, you need the right guardrails. And identity, essentially authentication and authorization, is a big part of those guardrails. What we're doing at Auth0 is using our 10-plus years of identity...

  5. Jerod Santo

    ...developer tooling to make it simple for developers, whether they're working at a Fortune 500 company or at a startup that just came out of Y Combinator, to build these things, with SDKs, great...

  6. Adam Stacoviak

    ...documentation, API-first types of products: our typical Auth0 DNA, friends.

  7. Jerod Santo

    It's not if, it's when. It's coming soon. If you're already building for this stuff, then, you know, go to auth0.com/ai. Get started and learn more about Auth0 for GenAI at auth0.com/ai. Again, that's auth0.com/ai.

  8. Adam Stacoviak

    recording

  9. Jerod Santo

    Cool. Well, we're here. How did Kaizen begin? It was a long time ago, in Japan... Yeah, I don't know. How did our Kaizen begin? 1300?

  10. Adam Stacoviak

    It began with this crazy idea: let's improve, but in a consistent way, so that every 10 Ship It episodes we'll talk about the improvements that we drive in the context of Changelog, and share them publicly. And that was our reminder that, hey, ten episodes, we'll be talking about everything that we've done. Some called it navel gazing. I don't think that's what it was. That was me, probably. Jerod. It was good fun. And yeah, it's been four years. It's been four years. In four years... It's been longer than four years.

  11. Jerod Santo

    Has it been a decade? We've seen a decade.

  12. Adam Stacoviak

    We've been calling it Kaizen, yes. So Kaizen 1 was about four years ago.

  13. Jerod Santo

    Materialized roughly four years ago, but the relationship... that's why I bring that up. So I want to say one thing: these guys are my ride-or-die people. Gerhard's amazing, you know, you're amazing, but the magic and the beauty that's come from this relationship has just been tremendous. And to be here and share it with you all, and to share Kaizen 20, this navel-gazing approach to our platform, this constant attention to detail, to improvement. And I think particularly what we'll talk about today is unique to us specifically, in that we built some infrastructure that's used by us...

  14. Adam Stacoviak

    ...specifically, where a behemoth, not negatively, is just maybe not the right fit. We've been holding it wrong to some degree, but this, what we're doing here, is just a wild ride.

  15. Jerod Santo

    I'm excited for this moment. Is there a way to hold it right? That's the question I continue to ask myself. It turns out if you gaze at your navel long enough, there's cool stuff in there, definitely. A proactive approach to a few of the treasures that we've found. Yeah, and something that we've built over the last...

  16. Adam Stacoviak

    When did we start Pipely? Pipely was, I think, 18 months ago, roughly? The idea was: shall we do this thing? I mean, we've been talking about it long enough; shall we actually try doing something about it? And the beginning was, as all beginnings are: can we do this? How long will it take? What will it take? Do we even know what needs to happen? That's how the conversation started, and many of you that have listened to those conversations remember how we were pondering: should we, should we not, are we crazy? Three wise men? The question mark was really the emphasis, like: are we wise doing this? We have no idea, right? And then, along the journey, the best part was the friends that joined us. So it turned out it was not such a crazy idea. And just the idea of improving something in public, so that others see how we do it, it's our approach, and maybe it will inspire others. I think that worked really well, and here we are today, with friends, with friends, and that's again the focus. Improving with friends makes me so happy. Thank you all for being here for that. Thank you. We appreciate you very much. Absolutely.

  17. Jerod Santo

    So it all started with a dream. Yeah, a pipe dream, indeed, on a Kaizen number that I can't remember. 13, I think; Kaizen 13. So we were lamenting our cache miss ratio on Fastly. They had this really nice Varnish-as-a-service that we'd been using for a long time; we just didn't like the way we had to use it through Fastly, which is through a web UI. Yeah. And a strange comment-based version control system that we invented on top of it. Mm-hmm. Yes: put the name of the person who does the thing and the thing you're doing as you update the config. Which would produce this gnarly, I don't know, inline Varnish config. Mm-hmm. That would work, mostly. Sometimes. Yeah, and it was great when it worked, but when it didn't work it was very difficult for us to have visibility and debuggability in order to fix it. And so I said, on Kaizen 13: wouldn't it be cool if we could just have this 20-line Varnish config in the sky, just deployed around the world, and it could be just for us, everything we need and nothing we don't? And Gerhard got a twinkle in his eye. Yeah. Challenge: 20 lines? I can give you 500 lines.

  18. Adam Stacoviak

    Well, I think it's close to a thousand now

  19. Jerod Santo

    But anyway, anyway. So that began our journey down this particular path, which we are at the end of, or at a milestone at least; you're never at the end of this kind of a path. Nope. Where we decided, at Kaizen 19: hey, we're ready to run this. We call it... Pipely is the open source project; Pipedream is our instance of it, because this has been our pipe dream. And we were almost there; we're there now. Yeah. At Kaizen 19 we were right at the precipice. We are there. And I said: what if we just get together and do it together? Yeah. And you guys said: yeah, how about on stage, in Denver? And they're like: sure. And so here we are. Yeah, there's your setup, Gerhard. The way Kaizens usually work is Gerhard works way harder than Adam and I do. We show up. Yes. And I say something like: Gerhard, take us on a ride. Tell us what we're gonna do. And so, Gerhard, take us on a ride. What are we gonna see today out of Pipely and Kaizen 20? Thank you very much. So...

  20. Adam Stacoviak

    We will start with a little bit of history, so that everyone is able to visualize a couple of very important milestones. Then we're going to ask you for your questions, so you'd better think of some good questions to get back at me, or at us. Let's see how that goes. And then we'll do something special, because everyone took their time; this took considerable effort on all your parts to be here, and we want to recognize that effort by doing something special on stage, live. So, let's see how we're doing. All right. First of all, let's start with the beginning. The beginning is July 26, 2025. This is an important moment; we're all here today. It's the first time that this is happening: Kaizen 20 is the moment. Thank you all for being here. I'm sure that you all know this. Now, it says 20 episodes; actually it's 19, as one was republished. But it's been 19 Kaizen episodes, and this started in July 2021. I had to look this up; I wasn't sure exactly how long it was, but that's when it started, this journey. So this journey officially, when we started having these conversations, all recorded, all remote, happened many years ago, and it took us this long to finally do this in person. Ten years, right, if you count exactly. Like, what?

  21. Jerod Santo

    Unofficially, it started... it's been a decade. Been a decade. So we're good friends. Well, 2016 was the year we launched what is now our platform. It's an open source platform, changelog.com; you all know that it's on GitHub. Some of you contributed; you're probably in the issues, or at least reading them. But it's been a journey. It's 2025; it's not quite ten years, but I'm rounding up. Oh yeah, here, give me a gimme. Yeah.

  22. Adam Stacoviak

    Next year, maybe we do something even greater, right? That's right. Maybe we're building up to that. Okay. So the reason why we're doing this is because it's about friends; this is the new context. It used to be Ship It. Before, it was unofficial; it used to happen just among a few people, and now all of you are here. We appreciate you, so thank you very much, because, as you know, it's so much better with friends. So Changelog & Friends is the context, including friends that want to be here but maybe can't; next year, hopefully, or the next time we meet, it's a good reason to make this crowd bigger. But this is how it began. This is the first moment we have met in person, and it's amazing.
    All right, now, you may not have noticed, but this is something that I pay attention to: the person that takes the pictures is never in the pictures, so we need to acknowledge the person taking pictures. Yes, and that is Aaron; he's over there. Hey Aaron! Hey, thank you very much. Yeah, thank you for being here and capturing this moment for all of us. That's so great.
    So, we are friends. I think we can call ourselves friends when we go for a beer, and we go for a hike, and we share a meal. That's, I think, what it means: the beginning of a nice friendship, friends that enjoy making things better. That's really what brings us together, and we do it in a public way. We share, and even when we get it wrong, that's fine, because it's always about the improving. It's not about being a hundred percent right, or knowing it all; it's figuring things out, and that makes me so happy.
So, you all know this: we post these as discussions on the changelog.com GitHub repository. This specific Kaizen is discussion 546, and if you want to do a bit of digging to see what went into making this Kaizen, that's where you can go for the technical stuff, for the pull requests, for the code; all of it is there. And I think the first step that we need to do is set the record straight. The reason why I remembered it is because of that very small thumbnail. There's three people there. There's PHK; PHK is Poul-Henning Kamp. I didn't know who he was until recently. He is the guy that had a huge contribution to FreeBSD: NTP timecounters, FreeBSD jails, md5crypt, and, you all know this, I'm gonna say it, yes: the bikeshed.

  23. Jerod Santo

    Bikeshed. Yeah, he invented the bikeshed.

  24. Adam Stacoviak

    1999, the guy that... so I think that's really important, and we didn't know that bit of history. This was an important moment, and I think I'm going to switch the screen now to this, because this is important. So, Andrew... he wanted to set the record straight... no, allgreen, sorry, allgreen. May 13th, he wanted to set the record straight about Varnish, which is the technology that we use: the relationship to Fastly, and the relationship to Varnish Enterprise, and it's all here. So: Fastly is not running Varnish Plus, which is a Varnish Software product; they have their own fork. That is important. It's similar, but not quite the same. What we use is nothing from Varnish Software, not Varnish Plus. We're using Varnish Cache, the open source project, as anyone can consume it. So everything that we built is on open source technologies, and that is important to us, because that is in our DNA. There is some history here that's important: we are building on Varnish Cache, open source. All right, back to where we were. Cool. So we did this thing, and this thing is important. But actually we did a few things over, I think, four years. We did quite a few things. Four years... ten years, ten years. Yes, ten years. Okay, okay, ten years.

  25. Jerod Santo

    Just keep that in mind; I'm not gonna let it go. It's in there. Nine, and then next year...

  26. Adam Stacoviak

    That's a good one. So they were all good... I think most of the things we've done were good, right? Most of the things were good, but not all things, right? Shall I fix this now? No, it's okay. Okay, sorry, my bad. So what are some of the things that you don't think went as well? Let's talk about that, out of your collective memory. Maybe even someone from the audience can tell us what they think didn't go so well.

  27. Jerod Santo

    S3 cost. S3, yeah. We spent more money than we should have on S3. We were ballooning. Yeah, and we didn't notice it, the egress, until that one Kaizen episode where I finally looked at it, and then I was like: oh, we should change this, we should address this. The bill went from like 15 bucks a month to maybe 180 at peak, I believe. Which is still small, but we're not massive; we're not a big operation. Yeah, yeah, yeah. That's not cool. But now we're on R2 and we're spending like eight bucks a month. Yeah, they basically pay us. So now they should pay us. Yeah, now it's good, but that wasn't good. Hmm. I deleted a few things.
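The S3-to-R2 move is driven by egress pricing. A back-of-the-envelope sketch of the comparison, with purely illustrative storage and traffic numbers (the real bill depends on request mix, region, and actual pricing tiers):

```python
# Illustrative cost model only: the per-GB rates and the traffic volume
# below are hypothetical, not Changelog's actual numbers.

def s3_monthly_cost(stored_gb, egress_gb,
                    storage_per_gb=0.023, egress_per_gb=0.09):
    """S3-style pricing: you pay for storage AND for every GB leaving."""
    return stored_gb * storage_per_gb + egress_gb * egress_per_gb

def r2_monthly_cost(stored_gb, egress_gb, storage_per_gb=0.015):
    """R2-style pricing: storage is billed, egress to the internet is free."""
    return stored_gb * storage_per_gb + egress_gb * 0.0

stored, egress = 500, 1800  # hypothetical: 500 GB stored, 1.8 TB served/month
print(f"S3-ish: ${s3_monthly_cost(stored, egress):.2f}")  # egress dominates
print(f"R2-ish: ${r2_monthly_cost(stored, egress):.2f}")  # storage only
```

With these made-up numbers the S3-style bill lands in the same ballpark as the "maybe 180 at peak" figure while the egress-free model sits near the "eight bucks a month" figure: for a podcast CDN, almost the entire bill is egress.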

  28. Adam Stacoviak

    As you do. Right, flying too close to the Sun. It's a good thing.

  29. Jerod Santo

    There was one time where Gerhard went into, I don't know what it was anymore, a config, or one of, you know, my pieces of code, and he changed something that to me was inscrutable. Like: why? What? Who? But I had so much respect for the guy, and so much imposter syndrome, that I thought surely he knows better than I do. I remember that. Yeah, yeah. And, I don't know about surely, but: it looks wrong, but I'm gonna go ahead and roll with it, because Gerhard knows what he's doing. Yeah. Later: you had no idea what you were doing.

  30. Adam Stacoviak

    So this is something really important, and it's at the heart of what we do, right? We are figuring stuff out, and we are okay to admit it publicly, right? Like, we mess things up, but there's no way you're going to learn if you don't make mistakes. It doesn't matter how much experience you have; it doesn't matter how many things you think you know. You never know. Let's be honest: you don't really know, you're mostly making stuff up. Some things help you, but it's all in the confidence that you will be able to figure it out, that you'll be able to push through if you just stick with it long enough. That's all it takes. All right, so today we did the biggest thing ever. What was that? A live show? A live show? Yes, yes, yes. That's one.

  31. Jerod Santo

    We showed up in a city we don't live in, right? People flew here with us. That's a big thing. Is that what you're referring to? Yeah, that's one of the big things. So what else? Well, I'm so curious what we did. What did we do? Tell us: what did Gerhard do? What did you do?

  32. Adam Stacoviak

    So I did something I will show you very soon. Trust me, it's coming. But it's the biggest thing ever. All right. So, the problem. The fact that it's green, don't let that mislead you; it's a bad thing. Okay, the psychology of color is very important. So what is this? The 17.93% is the cache hit ratio on our current production CDN. Which is low, right? That's really low. It means that less than 20% of the requests get served really fast; 80%-plus are slow. And while it doesn't really impact us, I mean, it's all good for us, it impacts you: when you load something, it takes a while to load, and it shouldn't be that way, right? Things should be instant; things should be very, very smooth. And when something goes wrong in the backend, for example: if 80% of the requests have to go back to the backend, it means that they can fail. So the chances of something failing are fairly high if there's something wrong in the backend, and that's the other thing which I keep thinking about a lot.
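The two costs of a low hit ratio described here, slower average responses and more failure exposure at the origin, can be sketched numerically. The 17.93% figure is from the show; the latency values are hypothetical placeholders:

```python
# Sketch: the impact of a low cache hit ratio. Only the 17.93% hit ratio
# comes from the episode; the 20 ms / 300 ms latencies are made up.

def expected_latency_ms(hit_ratio, hit_ms=20.0, miss_ms=300.0):
    """Average response time when misses must travel to a distant origin."""
    return hit_ratio * hit_ms + (1.0 - hit_ratio) * miss_ms

def origin_requests(total_requests, hit_ratio):
    """Every miss becomes load on (and a failure opportunity at) the origin."""
    return total_requests * (1.0 - hit_ratio)

low, high = 0.1793, 0.95
print(f"avg latency at {low:.0%} hits:  {expected_latency_ms(low):.0f} ms")
print(f"avg latency at {high:.0%} hits: {expected_latency_ms(high):.0f} ms")
print(f"origin load per 1M requests at {low:.0%} hits: "
      f"{origin_requests(1_000_000, low):,.0f}")
```

At a 17.93% hit ratio, roughly 821,000 of every million requests reach the origin; at 95%, only 50,000 do, so any origin hiccup touches an order of magnitude fewer users.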

  33. Jerod Santo

    Yeah, and it's worth noting that our data is unidirectional. I mean, we rarely change...

  34. Adam Stacoviak

    ...anything once we publish. Yeah, episode one is still episode one. Yeah.

  35. Jerod Santo

    You know, we might put the wrong audio in the wrong episode, or you might...

  36. Adam Stacoviak

    ...have to edit something, or you ship the entirely wrong audio. Yeah, like I've done recently.

  37. Jerod Santo

    Recently. So we make mistakes, and when you make a mistake you want to be able to quickly rectify it: purge everything and get back to where you were. But generally speaking, you put an MP3 up on a CDN and then you deliver that same MP3 in perpetuity. Yeah. And so this number is abysmal. We should never have misses. Yeah, and that's what I've been saying for ten years, right? We shouldn't have cache misses. Okay, and...
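For immutable content like an MP3, the escape hatch after a mistake is an explicit purge. A common Varnish convention is an HTTP PURGE request for the cached URL, but it only works if your VCL wires it up (typically behind an ACL); the host and path below are hypothetical, not Changelog's:

```python
# Hedged sketch: issue an HTTP PURGE against a Varnish node so the next
# request re-fetches the corrected file from the backend. Varnish only
# honors PURGE if your VCL defines it; this host/path is hypothetical.
import http.client

def purge(host: str, path: str, port: int = 80) -> int:
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("PURGE", path)
        return conn.getresponse().status  # 200 = purged, 405 = not allowed
    finally:
        conn.close()

# Example (hypothetical URL):
# status = purge("cdn.example.com", "/uploads/podcast/551/episode.mp3")
```

With a fleet of instances you would issue this against every node (or broadcast it), since each Varnish keeps its own cache.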

  38. Adam Stacoviak

    The problem is that in this case it's mostly the website. The changelog website appears slow in a lot of cases, and that's not great. So that was the thing that we were trying to improve; that's how this started. All right, episode 26; this is what Jerod was mentioning. When we began: should we build a CDN? That was the moment we were thinking: should we do this? I mean, are we really at that point? And it took a while, but the conclusion was yes; we should at least try and see how far we get. That was 18 months ago. pipely.tech: we've had this up for a while. We talked about it at Christmas: a new CDN is born. It hasn't been updated recently, but it will be. This is the home for the open source project that we would like others to use at some point. I think it's getting there. It's not quite there yet, but we have made many improvements to make it easier to consume.

  39. Jerod Santo

    Well, friends, I'm here with a new friend of mine, Harjot Gill, co-founder and CEO of CodeRabbit, where they're cutting code review time in half with their AI code review platform. So, Harjot, in this new world of AI-generated code, we're at the perils of code review: getting good code into our codebases, reviewed, and into production. Help me understand the state of code review in this new AI era.

  40. Adam Stacoviak

    The success of AI in code generation has been just mind-blowing, like how fast some of the companies like Cursor, and GitHub Copilot itself, have grown. Developers are picking up these tools and running with them, pretty much.

  41. Jerod Santo

    I mean, there's a lot more code being written, and in that world the bottleneck shifts: code review becomes even more important than it was in the past. Even in the past, companies that care about code quality had the pull request model for code reviews, and a lot of checks. But post-GenAI, we are looking at, first of all, a lot more code being written, and, interestingly, a lot of this code being written is not perfect, right? So the bottleneck, the importance of code review, is even greater than it was in the past. You have to really understand this code in order to ship it. You can't just vibe code and ship; you have to first understand what the AI did. That's where CodeRabbit comes in. Think of it as a second-order effect, where the first-order effect has been GenAI and code generation, rapid success there; now, as a second-order effect, there's a massive need in the market for tools like CodeRabbit to exist and solve that bottleneck. And a lot of the companies we know have been struggling to keep up, especially with the newer AI agents. If you look at code generation AI, the first generation of the tools was just tab completion, which you can review in real time: if you don't like it, don't accept it; if you like it, just press Tab, right? But those systems have now evolved into more agentic workflows, where now you're starting with a prompt and you get changes performed across multiple files and multiple functions in the code. And that's where the bottleneck has now become the code review bottleneck. Every developer is now evolving into a code reviewer for the code being written by AI. That's where the need for CodeRabbit started, and that's being seen in the market: CodeRabbit has been growing non-linearly.

  42. Adam Stacoviak

    I would say it's a relatively young company, but it's being trusted by...

  43. Jerod Santo

    ...100,000-plus developers around the world. Okay, friends, well, good. The next step is to go to coderabbit.ai. That's C-O-D-E-R-A-B-B-I-T dot AI. Use the most advanced AI platform for code reviews to cut code review time in half, bugs in half, all that stuff, instantly. You get a 14-day free trial, no credit card required, and they are free for open source. Learn more at coderabbit.ai.

  44. Adam Stacoviak

    This is something that has been bugging me for years. We run on fly.io, and fly.io has points of presence all over the world, but our application only runs in, well, Ashburn, Virginia, because it's closest to the database. Of course it's going to be close to the database, right, because data has gravity. But we wanted to distribute the application for a long, long time; it was just never the right model. With a CDN, that's exactly what we want to do, right? We want to get those instances all over the world. So finally we can say that, after all these years, we are holding fly.io right, and it's been working pretty well. Is that the big thing? I think it is a big thing. Is that the big thing? No, no, no, no. It's coming, it's coming. I'm building up. This is one of the things where we are holding it right. I want to say one thing too, real quick. Leave that there.

  45. Jerod Santo

    The next one's good. Next one? Yeah, the next one's fine. Let me show some things. But you see Fly here; they're not here. We didn't make this about sponsors. We wanted it to be about you all, us doing a normal live show together. It wasn't like: hey, let's charge for tickets or whatever. We just wanted to go somewhere, have some fun, get together, and just share this story. But I do want to recognize that Fly, and Kurt, and the team there have been extremely supportive of us. I'm not saying you should use them, but they love us, we love them, and what we're building is really on top of the best platform, we believe. So Fly is amazing.

  46. Adam Stacoviak

    Fully agree. Yeah, fully agree with that. Okay, I'm good. Yeah, right. So this happened about six or seven hours ago; this is 1 a.m. this morning. Okay, something went wrong. So things will continue going wrong; you never really get there. It's all about the mindset of: can we do it a little bit better? And again, we are figuring stuff out. So this was yesterday, last night. So this is a question for the audience: who would like to see us improve this specific crash, live? Yeah? Yeah? All right, let's do it. Cool. So, what do you think happened here? Let's get a very quick understanding of what the problem is. What do you think happened here?

  47. Jerod Santo

    Can you describe the architecture of Pipely, like, Pipedream, in terms of what it is? Yeah, I can. So it's...

  48. Adam Stacoviak

    ...Varnish instances. I mean, Varnish is really at the core, at the heart of it. There's a couple of other components around it, but Varnish is the heart of it. Varnish makes requests to backends. The backends in this case would be assets, for example: we store static assets, we were mentioning MP3 files, PNG files, JavaScript, CSS, that kind of stuff, which rarely changes. Then there's a feeds backend; feeds stores generated feeds for users, Changelog++ members, and shows, the various shows.

  49. Jerod Santo

    Every show has its own feed, and then you can also create your own custom feed. And so there's something on the order of 600 to 800 feeds, I would say. Yeah. Eight of which are way more important than the others, because they are publicly consumed by all the podcast indexes, and those feeds get the...

  50. Adam Stacoviak

    ...most requests from all the platforms, the podcasting platforms that consume the changelog episodes and distribute them to their audiences, or to your audiences, but through that platform. Our audiences? Our audiences, yes, our audiences, for sure. And what this is, basically: we get these instances distributed around the world so that the delivery of that content gets accelerated. The one thing which I haven't mentioned is the application: the changelog application, the website, which is an important one. That's where many users go to, for example, look at the homepage, look at news, things like that. And that is the one which is most sensitive to latency, because, as I mentioned, it's only in one location, close to the database. So we need to accelerate delivery of that website to users who are around the world: Australia, South Africa, South America, all over the world. We have a very diverse audience, and we want those users to have just as good an experience as anyone else...

  51. Jerod Santo

    ...that's maybe in North America. So in this case, one of our ten, yes, ten instances of the Pipely application, yes, crashed. It ran out of memory. Misty Bird? Yeah, Misty Bird, that's the one. Misty Bird crashed.

  52. Adam Stacoviak

    That's what happened, exactly. Okay. So now, to your question. Yes, the question to the audience was: why do we think the application crashed? Why do you think it crashed? And anyone can answer except... Matt. James? Yes, of course. Thank you. Someone's paid attention. But why did it run out of memory? Why do you think it ran out of memory? Because there wasn't enough memory. Right, I love some trolling. Seriously now. You're gonna make me say it, right? Okay. So, as more content gets cached in memory... the problem is there's a configuration, which I wish were easier to set, where you have to manually adjust how much memory you give to Varnish out of the total memory available. So there's a dance that you need to do so that you know how much is enough, so that when more memory gets allocated the thing doesn't fall over. I wish this particular thing were easier; maybe that's something we can improve. But for now you have to fine-tune it and figure out: if you have four gigabytes of memory total, how much should Varnish be allowed to use? And if you think it's four gigabytes, that's way too much. So that's what we're going to fix now. I'm going to switch to some live coding and show what happened.
    Actually, first: how's the font, can everybody see the font? I'll make it a little bit bigger. A little bit bigger. Okay. So, this is the change, and if I undo it, you'll see what it was before. So, let me take responsibility for this: I thought that 800 megabytes of headroom was going to be enough. That was the case when the instance had two gigabytes; with two gigabytes, 800 megabytes of headroom was enough, and the application wouldn't crash because of out-of-memory issues. Apparently, when you have four gigabytes, 800 megabytes is not enough. So what happened is... 33%. Leaving 33% as headroom should be enough for this to work, and again, you have to specify this explicitly.
I'm sure we will improve this at some point. This is what the change looks like. So we're going to push this change into production, live, right now. Here it is; here's the change. So we'll say "Increase Varnish memory"... or "limit", let's do that: "Limit Varnish memory to 66%". 66%, right: Varnish can only use 66%. Okay, I committed. What's going on? Of course, I need to connect, right? Let's do that. Let's connect. Yeah, I'm offline. Let's do this again. This is live; this is not recorded, it hasn't happened yet. Let's see. Let's see how this goes. I normally record these things, but let's see what's going to happen.
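The headroom math behind the fix can be sketched as follows. Varnish's malloc storage size must be given explicitly; the 66% cap and the 800 MB figure come from the episode, while the helper names are ours, not a Varnish API:

```python
# Sketch of the memory sizing described above: a fixed headroom that was
# fine at 2 GB starves the rest of the system at 4 GB, so the fix caps the
# cache at a fraction of total memory instead. Function names are ours.

def varnish_malloc_mb(total_mb: int, cache_fraction: float = 0.66) -> int:
    """New scheme: cache gets a fixed fraction, headroom scales with size."""
    return int(total_mb * cache_fraction)

def fixed_headroom_mb(total_mb: int, headroom_mb: int = 800) -> int:
    """Old scheme: give Varnish everything except a fixed headroom."""
    return total_mb - headroom_mb

print(fixed_headroom_mb(2048))   # 2 GB instance: 1248 MB cache, 800 MB free
print(fixed_headroom_mb(4096))   # 4 GB instance: 3296 MB cache, still 800 MB free
print(varnish_malloc_mb(4096))   # 4 GB instance: 2703 MB cache, ~1.4 GB free
```

The resulting number is what would be passed to Varnish as its malloc storage size (the `-s malloc,<size>` startup option), leaving the remaining third for the OS, the process itself, and transient allocations.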

  53. Jerod Santo

    A good time to go live, Gerhard. There you go. Let's just read this Bill Gates quote. Okay, you thought 800 megabytes ought to be enough. Yes. Bill Gates: 640K ought to be enough for anybody, right? You know, it's not an unreasonable thing that you thought. Nope. Okay.

  54. Adam Stacoviak

    All right. So what happened is: we committed, and we pushed that commit, this one commit. To deploy something into production, all that we have to do is tag the repository: tag and commit. So RC3 is the last one that went out; you can see when it went out, it was two days ago. We're going to do an RC4 now, and all we have to do is this: `just tag`. The j stands for just, so: the just tag recipe, because I don't want to remember the command, it's quite long. So I'm going to give it the SHA; the SHA in this case is going to be... sorry, the tag, so v1.0.0-rc4. There you go. Okay, nice. We'll just do HEAD, right? Of course it's going to be HEAD. And the discussion: this is the changelog discussion where you can basically read about this thing. This is us preparing Kaizen 20, remember that GitHub discussion I talked about. That's what this is going to do, and I'm going to push it now. No push? git push. What happened? Undefined? That should have been fine. Looks like the connection... the connection dropped. Blocking port 22? I know, the connection dropped.

  55. Jerod Santo

    Yeah, let me just go back. Let me read a Bill Gates quote. Yes, sure. Another one. Give me another Bill Gates.

  56. Adam Stacoviak

    Let's git push again. I mean, it's just gonna be slow. There you go, that pushed. Cool. So the tag went out, and what we're going to see now... actually, this one right here. We'll go to the Actions. So this is it, live, this one right here: 1.0.0-rc4, and it's going to push this change into production across all instances; it's going to roll them live. We are deploying this. So why is it significant? Sorry, let me set the record straight. Sure.

  57. Jerod Santo

    Bill Gates in 1996 said: "I've said some stupid things and some wrong things, but not that. No one involved in computers would ever say that a certain amount of memory is enough." Right. So I guess I take it back. Is that true? What is your source of information? Yes. So there you go; he may or may not have said that. So let's watch it roll out. I see.

  58. Adam Stacoviak

    So now we're seeing a live rollout, and we'll see... so "publish and deploy tag" is doing all the validation. Go on, go on. It's moving on; it's going through the changes. Yeah, it's sluggish, but it's going to resolve itself; it just takes a while. There you go. How long does it normally take, Gerhard? Well, last time I think it took two and a half minutes, so we're about there. Good idea? Of course, why not. Look, we're already creating the new machine; look at that, that's going. All right. So this is something that I think is important to get right early on, and that thing is: how long does it take you to push a change into production? How long does it take you to make a change and see that change roll out live? This is something that we've been working on for quite a few years on this thing. We do push to production; we own our production. We don't develop in production, that would be crazy, even for us. But it's this mentality of: if I'm going to make a change, how long will it take me to see the change happen? And if you can shorten that time, you're in a good place.

  59. Jerod Santo

    Ten years ago it took 20 minutes or so. It was a long time. It was long enough that you'd go do something else. And now it's two and a half minutes, maybe. And this is a global CDN.

  60. Adam Stacoviak

    So we are pushing this change to all the instances of the global CDN that we run. Where do we run them? Let's have a look. So this is it. So we are in Ashburn, Virginia; these are the instances from two days ago. If I refresh this, you'll see the new instances come live. That was two days ago, that was the last deploy. Let's refresh. Deploying 57 now, that's the commit. We have new instances coming up, and you can see that we do blue-green deploys to roll out the new ones. The old ones are still there. Two of everything, it's an important rule. And yeah, this works fairly well; we will not have any downtime, and we can see... look at that, in a minute. So: Ashburn, Virginia; Chicago; Dallas, Texas; Santiago, Chile; San Jose, California; Heathrow, of course, London; Frankfurt; Sydney, Australia; Singapore; and Johannesburg. So these are all the places where we deploy these instances, and they will accelerate all the content to our

  61. Jerod Santo

    Yeah, users. Does anybody out there have a deploy pipeline that runs faster than two and a half minutes?

  62. Adam Stacoviak

    Does anyone have a deploy pipeline? Let's start there

  63. Jerod Santo

    Two people, three people, four people... I think we're winning. Five people, six people. Cool. Two and a half, two and a half. Nice. It only feels long when you're on stage. Yeah.

  64. Adam Stacoviak

    Well, this is real. This is real, not edited. This is what it feels like. Now, I don't think two and a half minutes is long. I think we can improve it, but you need to think about all the things that need to happen behind the scenes, right? The allocation of resources, the health checking. How do you know what you've put out there is correct? You need to wait a while to make sure that the thing doesn't crash. That's why you need to wait at least 60 seconds before you can say, yep, this is good. You need to do a few health checks, because if the thing starts falling over, you don't want to leave it running in production. Of course. Cool. So I think we're in a good place. The one thing which I wanted to show is, if I come back here... Can you see that? Is that graph looking good to you? Yep. Cool. So do you see this yellow line? This was the instance that crashed. So this one... there we go, it's San Jose, California. For some reason there are a lot of requests hitting this instance, and those requests don't look like human requests. So I think we may have some sort of bot situation going on, some sort of, I don't know, LLM trying to learn. There are a lot of requests, and if I look at the CPU utilization, this is the same instance right here, the one in San Jose, California. I mean, look at just how abnormally this instance behaves. If I remove that, you can see all the others. All the other instances' CPU usage is fine, but this one is spiking up to 100% because it has a lot of requests. Obviously this is in front of the application. So, five-minute load average: we can also see it here, San Jose, California.

  65. Jerod Santo

    Yes. Can we see the endpoints that it's serving? The endpoints that it's serving.

  66. Adam Stacoviak

    We can, yes. So let's do a tiny review right now. Jerod's like, can we do X? Of course, in the moment. No, we can definitely do this. Okay, so I'm going to switch to this view, and this is the last seven days. I'm going to make this a little bit bigger for you to see. Can you see that? All right. Okay, so you can see that we're here, right? July 26th. So the application went into production a few days ago. Not all of it, not completely, but we started routing a bunch of traffic to the application to see how well it would handle

  67. Jerod Santo

    Our users. Is that the biggest thing we ever did? Almost.

  68. Adam Stacoviak

    It's getting close. Still... it's getting close. Okay. But this basically shows that we had many, many steps towards this moment, and now, together with you, we've shown you how we can update something that's running live, that is serving... I mean, this doesn't sound like a lot of requests, like a thousand, but the granularity is 30 minutes. We are sending a portion of the traffic, and it's handling it pretty well. And we can see that the most requests are coming from ORD, Chicago. Chicago, that's the one. Frankfurt next, London Heathrow, and it's not me running load tests in Singapore. And then San Jose, California right here. Yeah, so we are running live traffic, not all of it, a portion of it. But we wanted to see: will this thing continue working well? Did you know that?

  69. Jerod Santo

    No, I don't think so. Yeah, so technically we're not launching Pipely today, because we launched on Thursday. Just me and you. Exactly.

  70. Adam Stacoviak

    Sorry team

  71. Jerod Santo

    We barely launched it. We launched one out of five. One out of five, yeah. So roughly 20% of requests go through Pipely at this point. So whenever you go to changelog.com, that's it, one

  72. Adam Stacoviak

    one out of five requests hit the new instance, and we were able to see how it behaves with 20% of traffic. And it's working. No one complained. Adam didn't even notice. And that's one of the best things, right? You roll things out in such a way that... I mean, it is a big thing to us, and to people that understand and know what goes on behind it, but to everyone else: did anything change? Because if you do this type of job correctly, all people see is maybe little improvements, and if they're not paying attention, even those they will miss. They think you're always this badass. Well, I think we are a good team. The other thing which I want to mention is that while what you see here feels like it just happens, right, like a small thing, the work that went into it was months and months of preparation, months and months of discussion, people joining us, working with us. I want to thank James. He was the first one that joined. James, thank you very much. James A. Rosen. Yeah, thank you James. Thank you. Talking in the various issues, the first ones that we had, basically being with us for almost two years now, discussing problems that we thought were problems. At some point you were questioning, are we maybe too strict, are we too demanding of this? And no, no, we want things to be better, and this is why we want things to be better. So James, thank you very much for all the conversations, keeping us big-picture, that outside, objective perspective as well. That was so, so helpful. And then Matt, right, Matt Johnson. Thank you very much for that. Thank you. It was all about VCL, going a bit deeper, just to have another perspective on VCL. He has a lot of experience in VCL and Varnish in general, but also documentation, also a diligent approach to how we should do this
    so that it's easier for others. That was so great, to have that help. That was months and months and months of... I mean, again, all of us have full-time jobs, all of us do something else, but in our spare time we find a bit of time to help others, and this was that. So thank you very much for that.

  73. Jerod Santo

    Man, I had a good idea last night. Can I pitch it to you? Of course. Yes, please. So my desire was a 20-line VCL. Yes. You gave me a thousand lines. Mm-hmm. Matt says we can; you guys can just pull most of it out into an include.

  74. Adam Stacoviak

    Yeah, that's cheating. But yes. Yeah, of course. Great. So we can go to the repo. We can give you 20 lines, Jerod. We can give you less than 20 lines. Yes, I think that's a very good idea. Just look, number one, like step number one. Yeah, exactly. So I wasn't prepared to do this, but let's do it. We have a bunch of targets here. There's one that says "how many lines", right? So let's just run that live and see what happens. So, how many lines. I'm going to tap this. Let's see how many lines we have. Okay, 961? Hang on, let's see what's in varnish. Hang on a second. Oh no, we don't want that, this is not correct, because we just want ours. Okay. Shall we change this live? All right, let's see what's going to happen. Why not? So we want only... only VCL, actually. No, not varnish... we want inside vcl. So yeah, that's just one change. Let's go, justfile. Do you want to do it? All right, so let's see: how many lines. There you go. So varnish... let's do varnish. All right, let's see. That's varnish. Sorry, VCL. VCL. Okay, let's see if that works. There we go, how many lines. There you go, these are the actual lines, all the lines. So 364, that's the main one, and these are the includes: 24 in this one, and 106, and 15. Right, how many total? Less than a thousand. I think it's 500-ish, getting to 500, between 450 and 500. A lot of these are just static. Do the math. Yeah, so anyone quick out there, do the math. Look at that: 15... that's how I do it: 106 plus 24 plus 364, plus 15. Look at that. 509. 509 lines. So there we have our answer. Cool. So it's still good. It's still good. All right, so

  75. Jerod Santo

    Did you want to thank Nabil? Yes, of course.

  76. Adam Stacoviak

    Nabil! How could I forget Nabil? Of course. So one really important thing is that Varnish cannot terminate backends which have SSL in front. So if the backend is talking HTTPS, Varnish cannot use it out of the box. Varnish Enterprise and other products can, but the open-source Varnish cannot. We did a bit of digging, and that's how we learned about Poul-Henning Kamp, PHK for short. He was always against including SSL anywhere near Varnish because it would complicate things too much. SSL is really, really complicated. So with Nabil's help... he wrote 50-60 lines of Go code that intercepts all the requests going to those backends, terminates SSL for Varnish, and presents the request unencrypted. A really simple, elegant solution. Almost like a sidecar that sits next to Varnish and helps it terminate requests which need SSL. So I'm not sure that does it any justice.

  77. Jerod Santo

    Did that do it any justice, Nabil? Genius. Absolutely genius. Let's hear it for Nabil. Good job, Nabil, and his SSL termination. Thank you. TLS Exterminator. TLS Exterminator, that's the one. That's a good one. Cool. Well, friends, it's all about faster builds. Teams with faster builds ship faster and win over the competition. It's just science. And I'm here with Kyle Galbraith, co-founder and CEO of Depot. Okay, so Kyle, based on the premise that most teams want faster builds, that's probably a truth: if they're using a CI provider with their stock configuration, or GitHub Actions, are they wrong? Are they not getting the fastest builds possible?

  78. Adam Stacoviak

    I would take it a step further and say: if you're using any CI provider with just the basic things that they give you... If you think about a CI provider, it is in essence a lowest-common-denominator generic VM, and then you're left to your own devices to configure that VM and configure your build pipeline, effectively pushing down to you, the developer, the responsibility of optimizing those builds and making them fast. Making them fast, making them secure, making them cost-effective: all pushed down to you. The problem with modern-day CI providers is there's still a set of features, a set of capabilities, that a CI provider could give a developer that makes their builds more performant out of the box, more cost-effective out of the box, and more secure out of the box. I think a lot of folks adopt GitHub Actions for its ease of implementation and being close to where their source code already lives inside of GitHub, and they do care about build performance, and they do put in the work to optimize those builds. But fundamentally, CI providers today don't prioritize performance. Performance is not a top-level entity inside of generic CI providers.

  79. Jerod Santo

    Yes. Okay, friends, save your time, get faster builds with Depot: Docker builds, faster GitHub Actions runners, and distributed remote caching for Bazel, Go, Gradle, Turborepo, and more. Depot is on a mission to give you back your dev time and help you get faster build times with a one-line code change. Learn more at depot.dev. Get started with a seven-day free trial, no credit card required. Again, depot.dev.

  80. Adam Stacoviak

    We improved it. We're here. We did it. Kaizen 20... one more thing. Okay, this is too far out. This is too far out. Too far. Actually, no, we're good. Yeah, we're good. We're good. Okay, good. Anything else we want to cover before we do this? There's one more thing that we're going to do. Kaizen 20. Any thoughts, any comments from the... I mean, you're here. Do you have any questions? Do you have any comments? Any thoughts, anything? We got a question. We're among friends. Yes. "Can we talk about how we're using Dagger?" All right. Okay, I can show you that. Okay, cool. So there is one very tiny change that I have here, which is using --cloud. This is an experimental feature that exists in the Dagger CLI. The reason why we do this is because we need a remote engine, a remote Dagger engine, so that everything that runs in the context of Dagger is quicker. Over Wi-Fi, or over my tethered connection, it would be very, very slow. So, how we use Dagger: in this case, let's go here. We have a couple of commands; the one I think that's the most interesting, for example, is for running tests. So let me show you what that looks like. I'm just doing `just test`. You can see there it does the --cloud. Let's see how fast it goes. It starts an engine in the cloud and connects to it. So: we need to set up Varnish, we need to connect Varnish to all the backends, the TLS Exterminator, and you want to do it in a way that is as close to production as possible. It's almost like we want to run the system as it runs in production, but we want to do it locally. Okay, in this case it's a remote engine, but from the perspective of how it gets put together, it's literally just the same: a context of running containers. And because everything runs in containers, we get the exact reproducibility that we get on Fly.
    So we get the same configuration, the same Linux subsystem, the same kernel, or a very similar kernel. And what that means... in this case it's even using Firecracker behind the scenes, so it's very close to Fly. So we are able to test the system most accurately, and we're even able to run the system most accurately, as it would run in production. That part is hard, because whatever you do locally... if I was running, for example, on the Mac, everything would be different. Even if I had a VM, things would be different. So it's the containers, the container image, the interaction between all the components. And when we ship something to production, it's as close as possible to that image, even down to the Go version. So right now, for example, what's happening here: we're pulling down... I mean, some of it is cached... we're pulling down various dependencies, and you can see they're all Linux ones. So Linux, Linux. And again, a VM on a Mac gets you close, but it's not the same thing. You get subtle differences. I'm not sure if that answers your question. Yeah.

  81. Jerod Santo

    Good use of Dagger. Any other questions before we do the big thing? In the back. I think Nabil had one, was it?

  82. Adam Stacoviak

    Yeah

  83. Jerod Santo

    Does Varnish only cache to RAM, or does it also use disk? So, it can use disk as well.

  84. Adam Stacoviak

    But we don't have this configured. So the configuration is cache to RAM only; it is the fastest one. If we had disk, you know, the problem is the hosts can't move around, so it's no longer stateless. That has certain challenges around placement, but also disks tend to be a little bit slower. Not by much, but a little bit slower. As an optimization it would be worth exploring, especially with NVMe disks, which is what we would get on Fly. So that is something for a future improvement, but right now everything gets served from memory, which is exactly what happens in Fastly as well, because that's what gives you that highest performance.

  85. Jerod Santo

    So, follow-up to that: our crash last night was because it needed more RAM than we allocated to it? That's great. And in Varnish there's no way to say "just use all the RAM available"?

  86. Adam Stacoviak

    So if you tell it that, it uses more than it has available, because it tries to allocate more than you have. So you need to set an upper limit. It can't know how much the machine has... well, it knows, but it doesn't do the allocation correctly. Is that a bug? I don't know whether it's a bug. It's just not working well with the memory available to the host, maybe. I mean, this is how many years old, so...

  87. Jerod Santo

    Twenty-something. There's no way that bug is... okay, so it has to be a decision, right? So I

  88. Adam Stacoviak

    gained a lot of respect for memory, specifically memory problems, because they're very, very hard. I spent maybe three years optimizing memory allocations on the RabbitMQ team, for RabbitMQ in the context of Erlang, and I've learned that there are so many subtle differences between how memory gets allocated on different runtimes. In the case of Varnish, obviously, it's C, so it's as efficient as it gets. But it's very difficult to know how much you should allocate ahead of time, and what you should free, and when. Something has to give somewhere, and usually what you need to do is overallocate. Basically, you give it more than it needs, so that when you get these spikes, there's enough room for it to spike and it doesn't crash things over. Memory these days is cheap, so honestly, skimping on that is not worth it. And there's a couple of ways we can go around it. Disks is one of them. Limiting what Varnish can do, but also seeing if the Varnish memory allocation can improve, right? Because I know what it takes: that's exactly what it took in RabbitMQ. It took me years, and I was full time on that, so it took a while, including understanding what happens behind the scenes. You need to break it down in a way that you map and observe everything when it comes to memory allocation, and then you realize what needs to be tuned for your use case. That's the other thing, because every context tends to be different. The allocations are either generic, to make them configurable for you, or specific to you, so that they're optimized for your use case. It takes a bit of

  89. Jerod Santo

    understanding of what you actually need. Yeah, I'm probably reducing it down too much, but I feel like you could just say: this machine has four gigs, use three, and if you need more, you've got another gig on there, right? So don't crash. Yep, true. Yeah. But my follow-up, I guess, would be: assume we figure that out, that little dance. Yep. What happens when our instances become overwhelmed? Varnish might not be crashing, but maybe it's slowing down, maybe the machine's bogged down and can't do stuff. Yeah, is there an auto-scale? I mean, the answer is more RAM on the Fly instances, or more Fly instances in other regions, or in the same region.

  90. Adam Stacoviak

    That's right. Yeah, so there is a big difference between regions, by the way. And this is the sort of thing that you only realize once you start using it, okay? So these are the last 24 hours. I'm not going to refresh; it's a little bit outdated. You can see here it was 10:42 a.m., this morning before we started recording. And what you can see here is that this one, San Jose, California, SJC, is the one which shows the highest fluctuation. A lot of these, once they load up on memory, like Heathrow, tend to be fairly stable. But then you have a few... for example, this one, Santiago, Chile: this is 2.3 gigs, so this one is not using all the available memory. The next one, in Sydney, is 1.84, and you can see that the memory is decreasing because it doesn't need it. I mean, I'm not sure why it does this, for example. I wouldn't expect this line to go down, I would expect it to stay stable, so I'm not quite sure what's happening here. But this one, the lowest one, in Johannesburg, is using 816 megabytes of RAM. So this shows you there is a difference between the different regions and how many requests they need to serve. So what I'm thinking is we should optimize for the regions that we have, and the busy ones we should give bigger instances, maybe beefier instances. But others, like Johannesburg, don't need that much. So maybe we... and the problem is you can't use different scaling strategies for different nodes unless you have multiple deployments, so it complicates things a little bit. But this is a refinement. Yeah. I think we need more listeners in Johannesburg.

  91. Jerod Santo

    I mean, what's up with that, exactly? All right, there's one other question out there. I thought it was back here. Yeah, okay. Yes, I like it. Okay, so the paths look insane, right? So yeah, thank you. Thank you very much. So

  92. Adam Stacoviak

    This is... we're looking at Honeycomb, and one of the integrations that we have in Pipely is that we are sending every single request to Honeycomb, and some requests to S3, so we see what's happening with these instances. What that means is we are able to, for example... if I come back to the boards... if I come back to the boards again, let's hope the tethering works. It will be a little bit slow. Okay, so let's go: Pipely requests, and maybe Pipely content, but let's go requests. We can see the GETs, okay, and now we can slice them and dice them, for the 404s and the 500s. So maybe let's do GETs. Okay, so let's go to the method, and the only thing that we need to do... so this is one hour ago, these are all the requests in the last hour, and we can say: give me the URL, which is what we've been asking. So, group by URL, and what this is going to show us is the GETs, but also the number of requests. What you see here is that the most popular request is the podcast feed, which got 203 requests in the last hour, and this is global. So we can do one more. We can say, let me remove that, let me remove request, and let me do our data center, so we can see which data centers get hit the most. So let's do that, and we can see Frankfurt: 71 requests went to the podcast feed. Chicago next, 147. San Jose, California. So these are the most requested URLs. If we zoom out a little bit, let's look at the last seven days, so we get a bigger perspective. Set it to seven days. Okay, so you can see when it went live. And you can see that for some reason this one... I don't know who this person is... uploads, nine thousand five hundred times. Shall we go and check who this is? I don't think you'll know who this is. Avatars. People. Let's see. Okay, so let's go to... no, let's go... where is it? Where was it? Not this one. This one. Okay?
    Copy that. Copy-pasting is hard. Uploads, and I think this will be CDN... CDN, there you go. So let's see who this is. Oh. Someone... you have a stalker, or a few. Who knows. Who is that guy? Yeah? Do you recognize him? Who's this other guy, like, 6qd? So that's Z4, or Zed 4, as you say it over here. All right, cool. So let's see who's this next other person. Why not?

  93. Jerod Santo

    There you go

  94. Adam Stacoviak

    Shall we keep going? Do you want to find out who this person is? Yes, that's the third most popular one, so we might as well. Come on, be Adam. Yeah, Adam. Let's see. All right, let's try that. Uploads.

  95. Jerod Santo

    Nick. So something is loading maybe a JS Party page way too many times. But it would show up here if that was the case. True. Then we have this, like, random... is this function doing something with...

  96. Adam Stacoviak

    You know what we could do: we could do user agent. I think someone's hot-linking us, man. What's the user agent? And we'll see where these requests are coming from. It's gonna be a robot.

  97. Jerod Santo

    Empty string. You don't want to be known. Okay, shall we keep digging? Let's get to the San Jose URLs. This is what we're trying to get, right? San Jose URLs.

  98. Adam Stacoviak

    Okay, let's do that. Okay, so it's not even here in the list, if you look at that. And look, this one has an empty string, so that's really interesting. We have an issue somewhere. So this needs to be... or no, hang on, this was when we were doing the testing. So maybe that's one. Not so much Practical AI. All right. Okay, so let's do server data center equals SJC. Yeah, run query. Now let's see what we get. Most popular in San Jose. How you can do all this so easily... Yeah, that's really cool, isn't it? It's so awesome. Look at Pocket Casts. This MP3 was requested a lot, 428 times. Changelog... is this the last episode? One of those went out yesterday, right? There you go.

  99. Jerod Santo

    It's just a popular episode in San Jose, that's it. That's better than I thought it was gonna be, which is hackers.

  100. Adam Stacoviak

    Yeah, no, this is killer, and it's not a problem. Overcast, Pocket Casts, and Overcast, and ten of those downloading stuff. This is good. This is good traffic. We love it. Yeah.

  101. Jerod Santo

    Cool. Okay, any other technical or otherwise questions from the audience? Anything before Gerhard does his big reveal? All right, everybody wants to see what you got here. All right, let's do it. Cool. So

  102. Adam Stacoviak

    Coming back here. It's the one more thing. This is the important one. No, that wasn't it, that's something else. All right, I flashed something. Yes. So, what we're going to do now: we are going to shift all the traffic, all the traffic, to Pipely, right? So all of the app traffic is going to go to Pipely. Shall we do it? That's why we're here. All right, so this is... okay, this is the moment. All that we have to do... this is how simple it is. It's always DNS. No scripting, no automation. It is DNS. It all comes down to that. Yes, we are going to delete these one by one. These are the A records pointing to... Go for it. Okay, Kaizen 20 in Denver.

  103. Jerod Santo

    This is gonna be anti-climactic

  104. Adam Stacoviak

    Delete those, and they'll be... there, deleted. Now, if this one goes down... So let's see if the system can handle all the load, right? Like, what can happen? So after this, the only requirement is we walk away. Okay, we do this thing and we go for lunch or something like that. All right. So, boom. We have four more to go. Okay, three more to go. Sorry. Yeah, after this one, three more to go. It's not deleting. It's not deleting. It's slow, right? It's my... it's my tethering. Thank you very much. One more... one more... cool. And the DNS resolves one by one, so more and more requests are going to go to this one, right? The Pipely... Pipely, there we go. All right, so delete. So we're at two... one out of three.

  105. Jerod Santo

    One out of three. Yeah, half our traffic will be going to Pipely. Yep.

  106. Adam Stacoviak

    Look at that. And now a hundred percent... no, wait, we have one more. So we're fifty-fifty. What's this one? That was it. Let's see... CDN. Let's see it. And we have to do the same thing for CDN as well. The CDN ones too. There you go. All right, and so this is it. This is the farewell to Fastly.

  107. Jerod Santo

    This is a moment of mourning at the same time. If it works. Only if it works, here.

  108. Adam Stacoviak

    We may need to add these back. If this crashes and burns, we'll have to, for sure. They have no idea what we're doing. Yeah, exactly. And this is not a live live show, we're not streaming, that's right. So it's okay. That's right, it's in this room. Among friends, if it doesn't work out, we edit this part out. So that's okay. We always say that, but we never do. Okay, cool. We ship it all. There you go. One more to go. I think this is one, and one more to go.

  109. Jerod Santo

    We're at a 60-second TTL. 60-second TTL, so it should be fast. Yes. All right, if you can, get your phone out and hit our website, please. Yeah, let us know how it goes. But that's it, we only have these two taking traffic. Where's the real-time dashboard? The real-time dashboard, of course.

  110. Adam Stacoviak

    Let's do that. Let's do the last 15 minutes. Let's see everything crash and burn. Oh my gosh. No, what's the worst that could happen? Okay.

  111. Jerod Santo

    All our instances run out of memory. Well, there are no peaks yet. Let's go back to here.

  112. Adam Stacoviak

    Let's go to the memory. There we go. Every one minute. Let's maybe go to the last one hour. So let's see... we had a spike there, but that was... this is 12:20, so I think it's still good. I think we're going to see two things. Let's come back here to the Pipely requests. We're going to see a change: these requests will start going up. More and more traffic is going to start hitting... look at that. Oh my goodness. Boom. That was the spike right there. More requests getting resolved here. Status still 200, that's good. So that's all good. I'm going to our website. No 404s, no 500s. So things are looking good. Let's see what else we have. Let's load a few more boards, and the opposite is obviously in here, in Fastly. Nice, it kind of works. Still... it's so fast. Fastly service stats. So these are the Fastly service stats. The requests went up as well. I'm not sure what happened there. Why did they go up? Because I just told everybody to go to it, right? I don't think we have that many people here, but still. Cool. So

  113. Jerod Santo

    That's the requests. All good? Playing MP3s. Playing MP3s. Playing the show. Let's do the Pipely service.

  114. Adam Stacoviak

    So there you have a few. Again, the cache... we can see things going up here. So this is the last seven days. We'll need to zoom in a bit, so let's go 24 hours so that we see the spikes. There you go, that was the spike. Yeah. So what is this spike? This one is more hits. That's good, more hits.

  115. Jerod Santo

    We're getting more hits. That's good. More hits. So it will take time. The real question is: was it all worth it? And the answer to that is... well, did we fix that cache miss ratio... cache hit ratio? That's the actual answer to the question. What was it, 17%?

  116. Adam Stacoviak

    So let's go back to that. 17% cache hits. Last 24 hours. This is the home page, right? That's the one that we were focusing on. So we're looking at the home page, last one day. We had 3,715 hits. Yes. And we had 33 misses. I think that's better. That's better. It is better. We did it. No way. No way. Let's see: 3,715 plus 33... that's a 99.1 percent hit rate. That's a lot of nines. Yeah. Two.

  117. Jerod Santo

    To be exact, that's two nines. That's way more nines than we had.

  118. Adam Stacoviak

    More than we had. There are some nines in there now. Seventeen percent before. Seventeen percent. So this is much better. All right, so obviously this will take a while, right, for all the traffic to start shifting over. It is DNS, right? It's cached. But look at that, things are happening. Things are happening.

  119. Jerod Santo

    So theoretically, 60 seconds later those DNS records should expire. Exactly. And then require a new hit, which goes to a new route, which is through
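What's happening here is ordinary DNS TTL expiry: resolvers cache the answer for the record's TTL (60 seconds in this case), and only after it expires do they re-query and pick up the new IP. A toy model of that resolver behavior, assuming a hypothetical `TtlCache` and placeholder IPs (not the real addresses):

```python
import time

TTL_SECONDS = 60  # the 60-second TTL on the changelog.com record

class TtlCache:
    """Toy resolver cache: answers expire after TTL_SECONDS."""

    def __init__(self):
        self._entries = {}  # name -> (ip, stored_at)

    def store(self, name, ip, now=None):
        self._entries[name] = (ip, now if now is not None else time.time())

    def lookup(self, name, now=None):
        now = now if now is not None else time.time()
        entry = self._entries.get(name)
        if entry is None:
            return None  # never cached: must query upstream
        ip, stored_at = entry
        if now - stored_at > TTL_SECONDS:
            del self._entries[name]
            return None  # expired: must query upstream, picking up the new IP
        return ip

cache = TtlCache()
cache.store("changelog.com", "203.0.113.10", now=0)  # placeholder old IP
print(cache.lookup("changelog.com", now=30))  # within TTL: cached answer
print(cache.lookup("changelog.com", now=90))  # past 60s: None, re-resolve
```

The `None` after expiry is the moment traffic starts flowing to the new route.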

  120. Adam Stacoviak

    changelog.com. And changelog.com right now, for me, it returns a single IP address. That's it.

  121. Jerod Santo

    Bam. Because we're still in Ashburn.

  122. Adam Stacoviak

    No, this is the Pipe Dream IP address. This is it. So this is one, and if I do changelog.com... this is the DNS that updates the quickest, that is the Google DNS. That's very fast. So Google DNS knows we're up. There's also... I use DNS Checker. Let's see if it's still a thing; it's been a while since I've used DNS Checker. A service that tries to resolve the IP address from a couple of locations. More than a couple. Let's see. DNS Checker. Let's go changelog.com. Try CDN, we'll do CDN as well. changelog.com, this is the important one. So the DNS in San Francisco, there it is. I'm going to make this a little bit bigger for everyone to see. IP address. We can see all the locations. So all of it is the new IPs; we don't see any of the old ones. The world knows about what we did. Yeah, I think so. They took notice. I think it's been good. I think. Yes. Question. That is a fly.io IP address? That is a fly.io IP address. Let me show you that. So we have just IPs, right. We can see what IPs the CDN is using. And you will see that it's this IP address, which is a dedicated one. That's the IP that we're using. That's it.
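The DNS Checker step boils down to one question: does every vantage point now resolve to the new dedicated IP, with none of the old CDN IPs left over? A minimal sketch of that check, with placeholder addresses (the real ones live in the Fly.io dashboard) and a hypothetical `fully_propagated` helper:

```python
# Hypothetical address sets -- stand-ins for the real Fly.io and Fastly IPs.
NEW_IPS = {"203.0.113.10"}                   # dedicated Fly.io IP (placeholder)
OLD_IPS = {"198.51.100.1", "198.51.100.2"}   # old CDN IPs (placeholders)

def fully_propagated(observed: dict[str, str]) -> bool:
    """observed maps a checker location to the IP it resolved."""
    ips = set(observed.values())
    # Every answer must be a new IP, and no old IP may appear anywhere.
    return ips <= NEW_IPS and not (ips & OLD_IPS)

checks = {"San Francisco": "203.0.113.10", "Tokyo": "203.0.113.10"}
print(fully_propagated(checks))  # True: the world has the new answer
```

This is the "we don't see any of the old ones" moment, expressed as a set comparison.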

  123. Jerod Santo

    Should they clap now? Only if you want.

  124. Adam Stacoviak

    You did it. Thank you. Mind blowing. We did a thing. We did a thing. Thank you. Thank you. Thank you. Because you were a part of this. And we did it with you and for you and this was so good.

  125. Jerod Santo

    Thank you very much. Thank you. So, a couple of special thanks. We've already thanked our Pipely folks. Thanks to a couple of Denver liaisons: Dan Moore. Is Dan here? Hey Dan, how are you? Dan. Thank you, Dan. And Kendall Miller. Is he still sticking around? He left. He was here earlier. Thank you to you two for helping us find this theater, for helping us connect with Nora again after all those years. We don't live here, and so I had no idea what I was doing or who I was talking to. So it's always awesome to have locals and friends willing to help out and make it awesome. So that's our show. One person didn't get thanked. Yes. Jason. Oh yeah, Jason. Where is Jason? Come up here, Jason. Jason, come here. Jason's our editor. He's behind the scenes, but he's very much part of this team. He doesn't get seen. He gets mentioned a lot. But critical, critical behind the scenes here at Changelog. Thank you, Jason. Thank you

  126. Adam Stacoviak

    Jason. Thank you. All right anything else Gerhard? Thank you all. I really appreciate it. Thank you. Really appreciate it. Thank you all. Really appreciate you all coming. Thank you.

  127. Jerod Santo

    There you have it. Our first ever Kaizen Live. But I'm pretty sure it will not be the last. You know an idea has legs when you're already brainstorming version two before version one is even out there. And we certainly were. Stay tuned for more. This particular episode is better on YouTube and we have more videos from the Oriental Theater coming soon including BMC's Live Beats show. Subscribe there for clips, shorts, and more goodies at youtube.com slash changelog. And of course join our totally cool totally free hacker community zulip at changelog.com slash community. Have a great weekend. Share changelog with a friend or three who might dig it. And let's talk again real soon.