Changelog & Friends — Episode 57
Kaizen! Should we build a CDN?
Gerhard discusses 2024 goals, migrating PostgreSQL to Neon.tech, troubleshooting persistent CDN cache issues with Fastly, considering building a custom CDN using NGINX on Fly, and exploring a potential job board product.
Transcript (27 segments)
Welcome to Changelog and Friends, a weekly talk show about CDN shopping. Thank you to our partners at fly.io, the home of changelog.com. Launch your app close to your users. Learn how at fly.io. Okay, let's talk. Gerhard is here once again. We are kaizening in 2024. Great to be back. 2024, here we go. We made it, we're here. Yeah, the first Kaizen for this year. And it happened so soon. We all made it back from Chicago. No crazy stories on the way home. We already shared all of our crazy stories on the way there. So here we are. Did we actually share those stories though? I think it was like in a... I've learned how to say cheers, Adam and Jared style; it involves a single glass. Oh yes, that was so funny. That was one of my highlights. Say more. Say more. So apparently, the way they always say cheers is both of you hold the same glass, you hold it up. So it was like, you're always like holding your hands. And you say, cheers. I haven't seen that one before. That was so fun. That's funny. That is funny. It was so inconsequential to me. I don't even remember it. No offense. I think it was a picture moment. I think we have a picture of that somewhere. We're holding the same glass? Pretty much, yeah. We're pretty close. We get pretty close around here. We're not holding the exact same glass. We're holding our own version of the glass and we're clinking them, right? Is that what you're talking about? No, no, no. Gosh, maybe I am missing it. There was a picture of this. Pics or it didn't happen. Yeah, I don't remember that. Pics or it didn't happen. No, no, it didn't happen. That's okay. What happened in Chicago can stay in Chicago. Unless you have a picture, and then it can come out. I'd have no problem with it. I can look it up. It's there somewhere. Okay. I'll take your word for it. So the receipts are in the show notes. If Gerhard can come up with receipts, they will be in the show notes. If not, then we just know he's just fabricating evidence.
Since this is the new year, can I just say that I remember when Gerhard used to version our infrastructure by the year. And now it's sort of versioned, I guess, every two months, or kind of continuously in a way, really. That was crazy, right? We're a whole new era of continuous improvement. I mean, I do, it's almost like a generation. So for example, our Fly app, the one that is currently running production, is 2022-03-13. And guess how I remember it? I just remember, it just sticks with you. And the next one, the one that we're currently experimenting with, is 2023-12-17. So 17th of December. This is the new generation of the changelog app. But it's already old and busted, it's 23. We're on 24 now. Yeah, well, guess what? We can delete that one and set the new one up and that's okay. It's too easy, right? Yeah, it's too easy. I like your propensity to date stamp things because it's very nice for like remembering like, hey, when did I do that thing? What I don't like about it is it makes things feel old. For instance, one subdirectory in our code base that I do not appreciate, and I'm here to air my grievances, is 2022.fly. A, that's just an ugly folder name. B, it's forever ago. C, I have to like go in there to do stuff with Fly when I could just have all that in the root and just be chilling. So I'd just like you to defend the decision-making process there, Gerhard, explain it to me. How did that come to be? So it really was the generation of the app. It was 2022, we've set it up for 2022, and I just created the Fly folder because before, if you remember, we had the various Kubernetes clusters and then we had them versioned by year. Yeah, we were kind of straddling for a while, weren't we? Yeah, and exactly, this was like our migration to Fly, which happened in 2022. That's how long ago it's been. Wow. And since then, we really haven't changed the app. We've done a bunch of other things, but that app, in its implementation, stayed as is.
There's a new fly.io directory where we're starting to capture apps. Yeah, that's been there for a while. Is it year-stamped? It's just Fly, because the apps are time-stamped in the directory, and the one that we have there, you'll see, is the Dagger engines. Gotcha. Because our CI also runs on Fly, the workers themselves, and that's what that is. So the new app, which is basically part of a PR, 492, and we'll get to that in a minute. In flight, in progress? Exactly, it's in flight. It's also in that directory. So the app is time-stamped, and we have multiple apps because the idea is we have more than one. And like Changelog Social, for example, is another app that we run on Fly, but that's in a different repo. Maybe we consolidate, maybe we don't, I don't know. The point is it's a nice way to store all your apps, because we have more than one, and then you know which one you're targeting. It makes it very simple to not make mistakes when you want to work against a specific app. You can't basically be in the root, and the root has changed, and then maybe you're targeting a different app instance. This way it's very clear which app instance you're working against. Makes sense. Well said, good defense. Are those tied to the machine then, like you said, or does that make sense to tie it to the machine, or did I miss that part while I was trying to grok everything you're saying? No, it's just the app instance. So each app is backed by multiple machines. So that is like a subdivision of the app. And this Fly directory, the fly.io directory, is part of the 492 pull request, or is this predating that? It predates it. Okay, because I didn't see it in master. It's there. I got it in my code base. Is it there in master? Yep, it's just hidden. Oh, I see it, because it only has one directory. It's not year, there's no year sub. Yeah, that's why. Okay, cool. But more are coming. More apps like this one, for example, the second one.
Because we've been doing this for a while, right? We have two apps, like two changelogs running at the same time, and we don't want that to be part of a pull request for too long. This 492 is a special case, and again, we'll come back to that. But that's the idea. You can have multiple apps running at the same time, and you do like a long blue-green. Awesome, dig it. One thing which I would like to do now, because it is the beginning of a new year, is take a step back and take a bigger take on this. So what I'm thinking, and we have time, okay, this is edited, so it's okay. Okay, it's a big idea. When I answer this real quickly, everybody will know that there's like six minutes of silence that got edited out when I was thinking about my answer. What is the one thing that you want to achieve this year? With regard to changelog.com? You can make it as big or as small as you want. Okay. We have some big ideas. They're more like features, though, not infra. We can do, we can go there. Like, this is basically so we don't constrict the creativity and the space. Right, he wants, this is open-ended on purpose. He's setting us up here. Open-ended on purpose, yes. And mine is big. I can tell, like, mine is really, really big. Okay. Oh, wow. Why don't you go first? Okay. It's as if I'm prepared. Yeah. I am. So I'll go. No edit necessary here. Yeah, go ahead. My birthday doesn't happen every year. That's right, you're a leap year baby. And this one is special, because it also kicks off a new decade for me. So just to put it into perspective, the next time that my birthday coincides with a new decade, I'll be 60 years old. So this is like a, once a score, you're scoring. Yeah, pretty much. So after two decades of hands-on experience, which is well over 10,000 hours, I have this urge to produce something that I haven't done before. Something in the content space, something that combines audio and video and AI, and AI is a very important element. And 2024 is a combination of so many things for me that makes me really excited for it, because it doesn't come often. No, this is a lot of pressure. I think this is it. The next one will be 60, so it's big. I told you it's big. Do you have more than that, or is that all you're saying? That's all I'm saying, because - So big that you're not gonna put any sort of box around it yet. Remember last time when I did this? Let's see if this time it works better. You bigged something up, and then it disappointed you. So I'm not going to say anymore. Yeah, don't build it up too big. So content and AI.
And video and audio. Yep. Okay. Is there any more details? That's it. It's just content space. And it's gonna ship on your birthday? Before, but yes. I'm going to do something special for my birthday, for sure. Is it a one-time thing, or is it an episodic thing? I think it's going to be an episodic thing, but I have all these interests in hardware and software and combining things. And it is the long term that I'm thinking about. Not months, not even years, decades. Something that can be tracked over decades. Something that when I'm 60, I can look back and I can say, wow, the last 20 years have been amazing. So that's the timescale that I'm thinking at. Okay, so you're going to start something, but you're not going to finish it. It's going to be a new thing you're starting. Yeah, something like that. Okay. How do you start and not finish? I'll finish when I'm 60. Or when he's done. I can't say yet, or when I'm done, exactly. But this is enough. I had fun. How frequent are these episodes? Are they yearly, are they monthly, are they weekly? Let's see what happens. Okay, wow. He's not going to say, he's not going to build it up. That's a good goal. I mean, I feel like I shouldn't have any goals shared after that one. I mean, I'm going to sound like a mere piker no matter what I say. We can make a smaller one. I mean, this is big, right? There's like different timescales in the context of - This is like a project of a lifetime. Yeah, something like that. It feels that way, almost like a next evolution of something that I've been working on for a long, long time, and Ship It was part of it, by the way. That was just a small part. It was a stepping stone on your way to this other thing. Pretty much, and before that, it was the RabbitMQ videos, TGIR. Those were fun. That was like a whole year of videos. Well, I hope you achieve that goal. It's still live, tgi.rabbitmq.com. People can go and check it out. I was terrible, but I've learned so much.
So go and have some fun. See how not to do videos. I was learning. That's how you learn. All right, I like that one. Adam, what's your goal for the year? Big or small, doesn't have to be as big as Gerhard's. Probably won't be. I have two. Oh, he always does this. He'll end up with seven. Yeah. Well, I think we know what one goal is, which is to finally get Plus Plus in-house, meaning not on Supercast. And I think that there's some things that we'll all gain from that, both how we promote it, how listeners understand it, how it can grow, how it can be embedded in the application process, and just how all the workflows work. I think there's a lot of gain there, and I know we've been taking incremental steps towards that. And then the one, I think, that you and I were kind of passionate about, Gerhard, if you don't mind me talking about the one we just talked about at the tail end of last year, which has the word J-O-B in it, and I guess an S. Is that cool? Can I mention that? Well, we're goaling, so yeah, go ahead. Doesn't mean it's gonna happen, but it's a goal. Yeah, I think so. I think it's worth talking about. And it feels kind of even weird to put this as a goal, because it seems very simple, but I think to execute at the level we like to execute at, it is not very simple. And so we were talking with our friends at Go Time about just different ways to sustain that podcast. And during the conversation, the idea of a job board came back up as a way to alternatively sustain a podcast. So that show has not had the best track record of being well-sponsored, but it also is a really awesome podcast. And that's not its fault that it has trouble gaining and maintaining sponsors. I think it's just a challenging thing for the podcast industry. And I was like, well, what if we found a different way to give value back to that community? And so the conversation sort of stemmed towards maybe a Go-focused job board.
And then I think afterwards, Jared and I had a brief conversation, or it was in Slack or something like that. What if it was just changelog.jobs? So there is a .jobs TLD. And so if we had changelog.jobs and we made it a SaaS product where you can subscribe to it to have jobs there, frequently being the job promoter, not the job seeker, and we leveraged our podcast network and found a way to automatically or systematically pull those job promotions in and out of the podcast to make them dynamic, basically, then we have a real interesting way to have a job board that has an interesting economic footprint behind it, where it's SaaS-based or one-off based. And we really do a pretty good job of this job board, not just like put it up and go post a job kind of thing, but far more embedded into the network. I think if we can execute on that well, then we have a decent, I would just say moneymaker on our hands that helps us sustain whenever sponsorships slim, or as we prop up Plus Plus and that becomes more and more of a leg, which honestly, for those who support us on the Plus Plus side, we want it to be more of a leg to our chair, I suppose, in terms of stability, but we never really anticipated it being that. Traditionally, sponsorships have always trumped the amount of revenue we can gain from Plus Plus, but I think there's an untapped market on subscribers supporting you. And I think that's where bringing Plus Plus inside gives us that chance, but then also this opportunity for changelog.jobs being a great indie-dev-centered place to get and look for cool jobs. And I think one part of that is maybe the vetting process. So lots of interesting things on how to execute, not just throwing it up and there you go, post a job, but something that's a bit more well executed and really for the indie market, because most of the indie markets in the job space that have been like boards have been bought up by the big guys, the big folks. Or neglected. Yeah, or neglected.
I mean, GitHub jobs is pretty cool, but I mean, obviously GitHub is not jobs. And I think it went by the wayside last time I checked. Early, early on, that was one of our first sponsorships here on this podcast. And it's kind of cathartic, Jared, to say that my dream thing for this, I suppose goal is to promote jobs. I said it before, we will never promote jobs on the spot. I'm never saying never again, man. Why are you bringing, why are you telling people again? Why are you bringing it back up? It always bites me. I mean, I don't mind. That's cool. But I think if, I really don't mind. I like being wrong, honestly. I love being wrong when it's right to be right, I suppose. Because I think if we do this right, it could be a cool, fun thing for the community, and it could be a good revenue driver for us. And it'd be kind of cool to put that info behind that, like the front end, the back, and all the things that we've been building. It would just be kind of easy to extend what we're already doing well. So that's my. One thing which I would like to add to this, because it does connect, I was exchanging some DMs with someone from our community. Her name is Mary Hightower. And I'll just read the one sentence, which is very relevant to this. This was end of November. So a few months back. One thing, and I'm quoting Mary, one thing I've seen in the changelog crowd is the perspective of how to build software and teams well. I think that's something important, because it is in changelog's DNA to care about those things. And it's not me saying that it's just like someone that has been in our community. And I'm sure that others must feel similar to this, because there is a perspective on what does it mean to be a good team? What does it mean to have a successful community, a successful relationship? And coming back, changelog and friends. Look at us, what we're doing now. 
How open we are, how we're trying to support those that maybe are less fortunate than us when it comes to their work environment. Well said. I think that's on point and entirely relevant. And the reason why something like this, which to me has always seemed like potentially a bolt-on, could actually be very integral and valuable if we execute it right, which is always, for us, strengths and weaknesses. Our strength is our weakness. We know that perfection is the enemy of progress, and progress over perfection. And that's why we Kaizen, and that's why we do MVPs and all these kinds of things, because Adam and I both desire the perfection. And sometimes we just don't build the thing because we're like, well, we can't figure out how to do it perfect, or even well. And so we're not gonna do it right now. Hopefully we can eschew that and get a jobs thing going in order to provide that value and to sustain shows like Go Time, and really our entire network, when advertising wanes. So to me, it's feeling less and less like a bolt-on moneymaker and more and more like true value for all of us. So I'm into it. Whereas in the past, I've kind of poo-pooed it. So I've had a change of tone. Same, me too. That's why it was so cathartic that we would actually go back to it, or just that it would become an idea again, I suppose. Like, how in the world does that even make sense? And somehow it does make sense now that that's even something we're promoting or suggesting. And it has always seemed like a bolt-on that didn't really provide the value. It always seemed like, well, this would only be so that we can find one more way to sustain. Whereas now I feel like if we can embed them into the shows in the ways we think we can, and swap them out when necessary, dynamically, then I think that's a big win for us and a big win for the folks trying to find the right folks.
And I think if we could do a good job on vetting who comes into that pool and just some way to provide, like you were saying, Gerhard, quoting Mary Hightower. Hi, Mary, by the way. I think that's great. Building great software, building great teams, I think, has always been the fun part of the conversations. We just had that conversation with Dan Moore on Letters to Developers. Such a cool conversation, the first shot out of the gate this year. And I think it's gonna be a big hit for the year, and it's show number one for 2024. So Adam shared two, therefore I will share zero. Three. Three. Yeah, one, two, three. No, I'm thinking he stole mine. So Changelog++ 2.0 was exactly what I was gonna say. Sorry, Jared. No, it's all right. You're doubling down. It's definitely going to happen. We're on the same page. That's a good thing. I mean, we should have similar goals, shouldn't we? Yeah, bringing Changelog++ onsite, in our control, and making it way better. We have lots of ideas and we've kind of been inching towards that. We haven't gone all in on it because there's always been one more thing that pops up as more important. For instance, even our big conversation today about Postgres is a thing that is currently just more important than that. Although you're doing the bulk of the lifting on that. There's other things that are popping up, I'm sure we'll be talking about soon, which are more time sensitive than that. And so it kind of always gets pushed off. And I just want to stop pushing it off and actually get it done, because A, we've had more subscribers recently. So thank you to all of you who joined. Yes, big time. It's been very uplifting to see so many people joining on, even in its current state, which we know is not as good as it could be. And 90% of the people are there just to support us. And we love that. But we also want to quid pro quo that and provide value back and make it awesome.
So it's been a thing that I think I even wanted to do last year and just didn't do it. So it's like, enough is enough. Let's do this and let's do it well. And not perfect, because then it'll never ship. But ship something and do the bulk of the work and then refine from there. So that's my goal. That's a good one. That's a good one. What's up, friends? I'm here with one of our good friends, Feross Aboukhadijeh. Feross is the founder and CEO of Socket. You can find them at socket.dev. Secure your supply chain, ship with confidence. But Feross, I have a question for you. What's the problem? What security concerns do developers face when consuming open source dependencies? What does Socket do to solve these problems? So the problem that Socket solves is when a developer is choosing a package, there's so much potential information they could look at. I mean, at the end of the day, they're trying to get a job done. There's a feature they want to implement. They want to solve a problem. So they go and find a package that looks like it might be a promising solution. Maybe they check to see that it has an open source license, that it has good docs. Maybe they check the number of downloads or GitHub stars. But most developers don't really go beyond that. And if you think about what it means to use a good package, to find it, to use a good open source dependency, we care about a lot of other things too, right? We care about who is the maintainer? Is this thing well-maintained? From a security perspective, we care about does this thing have known vulnerabilities? Does it do weird things? Maybe it takes your environment variables and it sends them off to the network, you know, meaning it's going to take your API keys, your tokens. Like, that would be bad. The unfortunate thing is that today, most developers who are choosing packages and going about their day, they're not looking for that type of stuff.
It's not really reasonable to expect a developer to go and open up every single one of their dependencies and read every line of code. Not to mention that the average npm package has 79 additional dependencies that it brings in. So you're talking about just, you know, thousands and thousands of lines of code. And so we do that work for the developer. So we go out and we fully analyze every piece of their dependencies, you know, every one of those lines of code. And we look for strange things. We look for those risks that they're not going to have time to look for. So we'll find, you know, we detect all kinds of attacks and kinds of malware and vulnerabilities in those dependencies. And we bring them to the developer and help them when they're at that moment of choosing a package. Okay, that's good. So what's the install process? What's the getting started? Socket's super easy to get started with. So we're, you know, our whole team is made up of developers. And so it's super developer friendly. We got tired of using security tools that send a ton of alerts and were hard to configure and just kind of noisy. And so we built Socket to fix all those problems. So we have all the typical integrations you'd expect, a CLI, a GitHub app, an API, all that good stuff. But most of our users use Socket through the GitHub app, and it's a really fast install. A couple of clicks, you get it going, and it monitors all your pull requests, and you can get an accurate and kind of in-depth analysis of all your dependencies. Really high signal to noise. You know, it doesn't just cover vulnerabilities. It's actually about the full picture of dependency risk and quality, right? So we help you make better decisions about dependencies that you're using directly in the pull request workflow, directly where you're spending your time as a developer.
You know, whether you're managing a small project or a large application with thousands of dependencies, Socket has you covered, and it's pretty simple to use. It's really not a complicated tool. Very cool. The next step is to go to socket.dev, install the GitHub app, or book a demo. Either works for us. Again, socket.dev, that's S-O-C-K-E-T dot dev. Okay, well, my next goal is to encourage you to open up GitHub discussion 485. It's in our, the changelog, changelog.com repository, because the bulk of this conversation is going to happen around that. And if you're listening, you can go there. By then it should have been done, but you can see all the topics, all the links, everything's there. Mm-hmm, links and show notes too, by the way. I think the biggest thing for us, and we mentioned this a couple of times, is pull request 492, where we are migrating Postgres to neon.tech.
So that's the big thing. And it's the biggest change I think that we had since Kaizen 12: to set up neon.tech as a managed Postgres alternative to our current Postgres, which is running on fly.io. Let's open up this pull request and yeah, let's take a look at it. Just to load some of the context. Let's do it. So when I started this, the first thing which I did, and this is almost like the Boy Scout rule, was update dependencies. And I know that we should have bots that do this automatically, but sometimes, especially when it comes to the major versions, you would want to do that yourself. Like, for example, Erlang, that was an okay one, 25 to 26 with that upgrade. Postgres, that was like a bigger one, from 15 to 16. Nothing changed, so it was still good. But those types of upgrades, you would want to supervise. You wouldn't just want a bot to do it for you and then you figure out, ah, there's all these things which I missed. And to be honest, the ends of the years are really good for these big upgrades. So that's 490, which is a precursor to this pull request. We can come back to the Elixir upgrade, because by the way, that's the one thing which didn't work very smoothly, but we can come back to that later and just focus on this one. So we have a new app instance, as we discussed, the 2023-12-17, which is running on Fly. And that is configured to use Postgres. So Adam is the one that set up Postgres for us. How was that, Adam? How was like the whole initial setup of Postgres? You mean Neon? Yeah, on Neon. It was actually pretty easy. Barely an inconvenience, for my Screen Rant fans out there. It was pretty easy. I mean, I think I just went in there. The only confusing thing was there wasn't the idea of orgs. You know, you create a project and inside that project you invite people. So that was kind of, I guess, the only oddity. I mean, I did nothing besides, you're giving way too much credit, honestly. I just talked to the folks. The folks behind Neon are amazing. You started it. Without that, this would not have been possible, so. Well, if you want me to give you the real getting started story, I began it at All Things Open, really. And so their CTO was there and their team was there, Ralph and others were there, and I was like, Jared, we should just go over there and talk to them, because we want to have managed Postgres. Like, Gerhard's been pushing for this. And my first, you know, you might want things, Gerhard, but then I go and ask Jared, do you also want this? Do you bless this? Because Jared really is our CTO, really. And so I would never make a tech choice without conferring with both of you guys that that's what we should do. And so I asked Jared and he's like, yeah, that works. Let's do that. And so I went over and we ended up getting them in the pod and we talked further, and then we talked further afterwards. And I just laid it out like, hey, we love Fly, big love to Fly, but we want something that's future focused. And I think in my discussions with Kurt Mackey, who is the co-founder and CEO of fly.io, he was always like, you know, we have different ambitions, and databases are part of it, but we know we're not providing the state-of-the-art thing. It's good, it's good for everybody, but this isn't something that we're sort of leaning further into. Now that may have changed in the year and a half since that conversation. But I was always like, I know after our conversation, Jared and I's conversation with Nikita Shamgunov, the CEO of Neon, I think about a year back, right, Jared? About a year and some back now, that he really laid out a lot of good promise, and he had experience in databases before.
Like, he had been previously successful around databases with, like, MemSQL, I believe, or I forget what his previous startup was that was acquired, but he had had some success. And he impressed us in that podcast about where they're taking Postgres, in particular serverless managed Postgres, and then the idea of maybe getting to geo, which they're not quite there yet. And then I think what really impressed me recently talking to them was around the way that they plan to bolt in, bringing this dev mode to Neon and Postgres, really, where you're, and y'all can probably speak to this more than I can, but the way you interact with a database is one in production, but also in dev. And so to innovate and to experiment with the database at the dev level always requires some sort of like cloning of the production database and this weird flow. And they've made a way, because it's serverless and because it's sort of ephemeral, to allow you to just branch off the database. And this isn't a new concept necessarily for databases. I think, who is it out there? Gosh, their other name. It's the SQL one, the MySQL one. PlanetScale? PlanetScale, yes, thank you. I think PlanetScale really began a lot of this branching idea with Vitess and whatnot. So it's not a new concept, but it's a new concept to Postgres. They have upstream commits. They have a lot of promise. And so we're like really enjoying the process of where Neon can go. So that's sort of the precursor backstory. Well, then at All Things Open, I talked to them, talked to them about partnerships and stuff like that. And they're like, let's do it. And so they gave me the keys. I went in, I opened up the project and I invited Gerhard. That's what I did to kick off Neon, for the long story short. But it really began a year or so ago, really, the idea of Neon being something that we can use. And just knowing we like to play with cool things, managed serverless Postgres is something we should be playing with.
And now we are. Yeah, so I'm very curious to see what Gerhard thinks about connecting to a branch for his local development. Would you do that? Do you see yourself doing that? Absolutely. Is that weird for you? You expect me to say no? No, I mean, do you see yourself - Well, generally I'm a naysayer. You are. Also, it's not local, so it's going to be slower and you need like an internet connection and all of that. I agree, it will be. Not slower like Docker for Mac slower, which, for me, I was a long-time naysayer. Like, no, I'm not going to run my development environment through Docker. I already have it set up. I mean, that's a years-long thing with, like, how should people contribute? Let's set up Docker containers. Gerhard won't use it, so it's not going to be good. You know that whole deal? That's right. I'm way less concerned about some slower query times in development, because I have a recurring pain with development where I do like to have fresh data as I'm coding. It's just more realistic, it's more enjoyable. It's just, I prefer that. And so I am often doing a fly proxy, a pg_dump to my local and a pg_restore, or whatever the actual command is, in order to get fresh data. And I'll do that once a week. Every time I'm starting up a new coding session, sometimes I'll be like, oh, this is fine. It's last week's data, no big deal. Other times, especially like there's a bug. Well, the bug often has to do with data that's in production that's not in development, of course. And so I want freshness. And so I'm just constantly doing that. And it's just part of my workflow. You know, I go get a cup of coffee. It's not a very large database, but it's large enough that you're gonna wait for it. And that's a pain that I live with, but I do want that snapshot to be relatively recent. Being able to connect to a dev mode, which is just a branch of production that I'm assuming I can either resync or just do a new snapshot whenever I'd like to. And it just is somewhere else.
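For reference, the fly proxy / pg_dump / pg_restore loop Jared describes might look roughly like the sketch below. The app name, ports, and database names here are placeholders for illustration, not the actual changelog.com configuration:

```shell
# Tunnel the Fly-hosted Postgres to a local port (app name is a placeholder).
fly proxy 5433:5432 --app my-postgres-app &

# Dump production through the tunnel into a custom-format archive.
pg_dump --host localhost --port 5433 --username postgres \
  --format custom --file fresh.dump my_db

# Restore into the local development database, replacing what's there.
pg_restore --host localhost --port 5432 --username postgres \
  --clean --if-exists --no-owner --dbname my_db_dev fresh.dump
```

With a Neon branch, all three steps would collapse into pointing the app's connection string at the branch.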
And I just changed my connection string. I don't have to version Postgres locally. It's one less dependency on my local box. I'm here for it. I haven't tried that yet. I haven't used it. Obviously we are in flight with even doing this. So maybe I'll end up hating it and be like, nah, I'll just run my local Postgres and do a snapshot and everything will be fine. But I'm definitely not nay-saying it yet. Like, I'm excited to try it. And I think it's gonna be better than what I currently do. I think that's a really cool idea, because it helps me figure out what else is an important part of this pull request, the 4.9.2 one. And my most important take on this was like, okay, so if we do this, what will this unlock? What will this enable us to do differently or better than we're doing today? And what you're saying to me sounds like a great goal to work towards, because it will simplify things a lot. You don't need Postgres locally. One other thing — it's almost like a complication to this — what about contributors? What about people that don't have access to our production data? And we will not be able to give them access to production data, even if it's a branch. They're currently in the exact same box. Like, they're already there. They live there right now. And that's one of my pains: people are like, I'd love to contribute. Cool, go clone the repo, check the contributing guide. And they're like, awesome, can I have some data that's, like, real? Cause they don't even have, like, podcasts when they're starting out, you know? And we had seed data in the past, and it's just like, we are not an open source project like most open source projects, where there are dozens, if not hundreds, of strangers working together. It's like, we have a fly-by contributor once in a while and we want to enable them. But oftentimes that person who comes maybe once every few months is not worth maintaining seed information for. Or — I have a long to-do list, right?
At the bottom of it is, like: find a way of just taking production and sanitizing it and reducing it down to what they could use, and provide that for people. And I don't have it done. Like, there's no — I don't have answers for them. I'm like, yeah, you can just do it without data. It's no big deal, hopefully. And, like, one guy was working on the player, and he couldn't play an MP3, so he couldn't actually do — I can't remember what he was trying to do. And I'm like, well, it's going to take me hours to get you going. So that's where we already are. So we aren't losing anything. We're not solving that problem though. Sounds like, yeah. Maybe we are — maybe there's a way you can provide them a branch, a sanitized branch, you know? Yeah. I think this is where it would be a great conversation with Neon, to see, okay, so when we do create a branch, can we add some extra stuff that runs as part of that branch so it puts it in a state which is okay to share? And then we can automate that in some way, so that whenever someone wants to contribute, they basically connect to the latest one and they don't have to do anything, because the connection string doesn't change, and what we make available, you know? So that would be an interesting one. And I kind of got that far. I'd have to go back and find it, but I do have at least the start of, like, what is the series of SQL commands I would run to take production and sanitize it and reduce it to useful but not real? And I started writing some deletes and stuff. Like, I probably have that somewhere, but I never actually got to a place where I could then — it was all ad hoc. Like, okay, I'm gonna go get a snapshot. I'm gonna delete stuff. I'm gonna give you the SQL file via Dropbox or something lame, right? So this could be cool in that way, maybe. So that sounds almost like a step four. We're still at step zero, where we're still migrating towards it, in that the pull request is open.
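For readers curious what the dump-and-restore loop Jared describes might look like, here's a rough sketch using the Fly CLI and standard Postgres tools. The app name, database names, port, and user are placeholders, not the real Changelog setup:

```shell
# Hypothetical sketch -- app name, database names and port are placeholders.

# 1. Proxy the production Postgres port to localhost via the Fly CLI
fly proxy 15432:5432 --app changelog-db &

# 2. Dump production into a local file (custom format compresses well)
pg_dump --host localhost --port 15432 --username postgres \
        --format custom --file prod.dump changelog

# 3. Restore into the local dev database, dropping existing objects first
pg_restore --clean --if-exists --no-owner \
           --dbname changelog_dev prod.dump
```

With a Neon branch, the whole loop collapses to changing the connection string to point at a fresh branch of production.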
And one of the first observations was that the latency increased. And if you think about it, it makes sense, because with Fly, Postgres was local. So we got, like, sub-millisecond latency. In Neon's case, the Postgres is remote. It's running in AWS, still the same region, but it adds a couple of milliseconds. And when you have lots of queries, which we do on some pages, they add up. So for example, when we started this, the homepage latency just shot up by 3x. And Jared, you came in and did some Elixir foo and reduced the number of select statements. We had 70-plus. Now we have 15. So while it was 3x before, now it's like maybe 10%, which is 0.1x. So that's a huge, huge improvement. So how do we feel about knowing that the latency of all our database queries will increase? Are we okay with that? Yes, because we are leveraging cached information most of the time. And also, I can now be more diligent as well. A lot of the reason is, like, I never had a good enough reason to go optimize that particular page. And then I did, and I spent an hour or two, and now it went from 70 to 15 queries. And I could do that on other things as well. I know you posted that /feed is also super slow. 477 selects, I think, which is too many for anything. But that page is never live. It's always pre-computed. And so, I mean, when you hit it on Fly directly, of course it's gonna hit the database. But when you hit it through changelog.com, it's going to a pre-computed XML file that's on R2. So we've already kind of solved for that in other ways. And we can use Honeycomb and know when stuff gets slow. And then we go optimize it, just like developers do. So I'm not really concerned with that. I think it kind of sucks having network latency when you don't need it. Like, we could avoid it with this other thing, but I think the wins outweigh the drawbacks. What do you think? Is there a way to reduce it natively? Like you said, they're in the same region.
Is there a way that, you know, from an infrastructure standpoint, we can put them closer, even though they're different networks? Like, how can we get them, in quotes, "closer," to not have that much latency? So there's nothing that we can do — that this team can do — to improve that, because we are already in the Fly region which is closest to the Neon region. So we can't basically pick another region, either on Fly or on Neon. Maybe there are some improvements that Neon or Fly can do, but it's the speed of light. That's what we're working against here. So let's say we make it, like, a millisecond quicker. It will not have the same impact as, for example, optimizing some of the queries so we don't have to run 400-plus. If we could reduce those, that would help. I think those are the biggest wins — or the bigger wins — that we should be looking at, rather than physically getting these two things closer. But are you saying that our Fly machines — so we had multiple Fly instances that are running app servers, and we had one Fly instance that was running Postgres — and are you saying that those did not have network latency between them? Or are you saying now there's more network latency? They have a much lower network latency. They're still traversing the network stack though, right? Like, they're not co-located on the same machine. Correct, but it doesn't leave the Fly network. So it's all happening within the Fly network. And we have two Postgres instances — a primary and the replica — this is on Fly. And we have the same setup on Neon. We have the primary, it's called the read-write instance, and we have a read-only one, which is a replica. And the next point is, like, maybe we should look into that. Maybe we should configure it to use read replicas. But before we talk about that — again, same setup in Neon as we have in Fly. The difference is that the physical distance is greater and there's more network hops.
And when I say network hops, some of them are invisible, because you don't see all the network hops that happen. But anyways, we're basically adding one, maybe one and a half milliseconds of latency. And again, these aren't always the same — they're variable. But basically we're adding more latency to every single SQL statement — per query, exactly — and they just add up. The more you have, you're basically paying the network latency penalty for each of those queries, rather than having one query that does more and then comes back with all the results. It goes back and forth, back and forth. Right, and is there any sort of connection pooling or other things we could do in order to reduce that per-query cost? We have all that set up. We do have that. It's literally: you run one, you have to wait for the response. You run another one, and some of them do run in parallel, but eventually you've run all these things, and all the responses have to come back for you to be able to rebuild the page. While if you use fewer request-responses, it will be quicker. It's just — Just a lot of math and physics. Doing less costs less than doing more. Yeah, pretty much. But again, it's the speed of light. That's what we're dealing with here. Real physical distances, and we're doing many round trips back and forth. Well, let's work on that, Gerhard. What can we do about that? Let's Kaizen the speed of light. Can we slowly make that faster? You know, iteratively? I don't think in my lifetime — I don't want to say never, but I don't — What about by my 60s? You know, this could be your next 20-year project. Yeah, maybe, maybe. A shorter project would be, I think, to look at the read replicas. I think they would help. So having some read replicas and having some — I'm not sure whether they're in the same region, but distribute them a little bit, because Fly — I mean, we have this option of distributing our app and we haven't used it.
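The back-and-forth penalty described above is easy to put rough numbers on. A minimal sketch, assuming the worst case where queries run sequentially and using the illustrative ~1.5 ms per-query figure from the conversation:

```python
# Back-of-envelope: extra latency when every SQL query pays a network
# round trip. The 1.5 ms figure is illustrative, taken from the "one,
# maybe one and a half millisecond" estimate above; real latency varies.
ADDED_RTT_MS = 1.5

def added_latency_ms(num_queries: int, rtt_ms: float = ADDED_RTT_MS) -> float:
    """Worst case: queries run sequentially, each paying the full round trip."""
    return num_queries * rtt_ms

# Homepage before and after the optimization (70+ selects down to 15)
print(f"before: ~{added_latency_ms(70):.1f} ms extra")  # ~105.0 ms
print(f"after:  ~{added_latency_ms(15):.1f} ms extra")  # ~22.5 ms
```

Which is why cutting the query count helps far more than shaving fractions of a millisecond off the round trip itself.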
We're still in a single region, and we haven't used it because we haven't configured read replicas yet. If we had a read replica in every single location, this would be a lot more interesting. So what do you think about read replicas in the context of our Phoenix app, Gerhard? I think it's interesting. I wouldn't put it, like, high priority, just because of the obvious reason that most requests are never hitting our app, you know? You say that, you say that, but remember the issue with Fly — sorry, not Fly, Fastly? Oh my goodness me, I was leaving that towards later, because that's not a fun one, but we'll dig into it. That's bad again? Well, it's been bad since October, and we can't seem to get anywhere with Fastly support on that one. So our hit ratio — it's really tanked. It's way down. Exactly, and we've been trying to figure this out with Fastly, what is going on, and we can't get a clear answer. No changes on our end that we can identify. No changes on our end, no. Can you zoom out a bit and give a one-minute version of that problem and exactly what's happening, so there's context? Okay, so let's talk about that. Oh, he's excited, you can tell, to talk about this. Yeah, I'm prepared for this. I really — man, this burned a lot of my budget that I have for Changelog; that's why this hurt. This burned almost a whole month of work budget, this whole Fastly CDN thing. It was really that bad. And there's an issue — it's issue 486, and it's a long one. If you open it up, you'll see just how much we talked, and James A. Rosen was there, so thank you, James, for helping out. It's honestly — it'll take you at least 30 minutes to read it, so can you imagine how long it took? And this is only the public stuff; there's also something even longer, which is a whole Fastly support thread that I wouldn't even want to open. But anyways, October 8th — this is when it started. Our CDN cache misses increased by 7x.
So we had about 750,000 cache misses in a two-week period, and after October 8th, we had five million cache misses. That's a crazy number. Now, this has improved since — we didn't do anything — and as of December 28th, we are now at 900,000. Now, obviously requests go up and down, but we still have more than we should. Most of these requests are to the homepage. 80% of them are HTTP/1, 19% are HTTP/2 and only 1% are HTTP/3. 75% of all text/html requests are cache misses. So this is highly cacheable content — there shouldn't be any misses — and we get no explanation for why this just started happening. I got so frustrated that I want to build my own CDN.
And that's not the 20-year project. So yeah, three years ago, Kurt posted about this. He wrote "The 5-hour CDN" on the Fly.io blog. I recall talking about this, I think, on a Kaizen briefly. Yeah, and actually it wouldn't be that difficult. Honestly, that would be easier to do than dealing with all the Fastly issues. That's where I'm at now. And this has been years. This is not the first time, by the way. This is a long, long, long story. I'm using a similar approach — I have something like this configured in my Kubernetes clusters. I have quite a few Nginx caches, everything. I have origins configured, and it works, and you can serve stale content. It's not rocket science, but at least we would have full control over it. So what I'm thinking is: let's deploy some Nginx instances all over the world using Fly. Let's serve all requests from those. They'll have some local disks. We cache all requests there. Problem solved. We're done. That's it. Worth a try. And I'm thinking cdn.gerhard.io — I even have a name for it. No logo yet, but I can ask ChatGPT to create me one. What do you think about that? Well, I don't know what it takes to build a CDN. I think in the conversation, one part of it is streaming logs. That is what we have built around. And the question was whether or not Cloudflare had similar support. Cause the obvious answer here would be: okay, we're having challenges with Fastly, and they're aware of this stuff. Like, we've brought it to their attention that we have had challenges. Multiple times. And it's strange to me, because we obviously have such — it's not like we're here trying to bad-mouth anybody, but we do have a mouthpiece to the developer community. And we're using the technology to showcase the technology.
So it would make sense, in my opinion, if you had that kind of relationship with such a content — I guess media company is probably the better way to say it — that you would want to put some effort into ensuring that they get the right help, to ensure that these problems aren't there. And maybe it's just a Fastly thing. Maybe it's an us thing. I don't think it is us, cause we seem to have exhausted every single possible thing we could do around it. And so the obvious next thought would be: okay, maybe we're not holding it wrong. It's just, we can't hold it right, and we can't figure it out, because there's no support to hold it right. And so we go and talk to Cloudflare. Or we decide to build our own thing. And I think it really comes around to: what does it really take to build a CDN for the kind of company we have and the kind of content we have that we need to cache globally? Does it make sense to build something in-house? Does it make sense to move to the next key player in the industry, which is Cloudflare? They've shown desire to work with us. We're talking with them. It's not come to full fruition, but there's a lot of desire — but I don't like to bet on desire, necessarily. So I don't want to say there's something happening there, but it's definitely on the table to talk about. And they're talking with us. We just haven't landed the deal. And I think, for us, we look at infrastructure partners like this — like Honeycomb, like Fastly has been, like Linode has been in the past, like Fly is, like Typesense is — we want integrated, embedded partners. Not because that's what we necessarily want, but because we see that's where they get the best benefit. We get the best benefit because we get to have that deep relationship and that conversation back and forth to improve.
And I'm sure if Neon succeeds with us and we fully migrate our Postgres there and we're super happy with all the things we've been sort of talking about that there's going to be a deep embedded relationship. I've kind of come up with this idea over the holidays. This embedded sponsorship is different than just sort of flying by and throwing some money at content and hoping that you can talk to their audience. It's far more of a partnership and embedded. And so that's why I go that route. And I think Cloudflare has an opportunity to work with us if that works out. We've given Fastly years to work that out and they haven't done it. And that's just a shame. I really would love to have them figure that out. I've begged them in email, in conversations. And I don't mind saying that because I've worked it personally to the nth degree that I'm kind of sad and upset that that's where we're at. They are amazing, maybe not amazing for us, but we've just not gotten the kind of support we need to get past these challenges over and over and over. So I guess my question to you is, does it make sense for us to build our own CDN? What does it really take? Should a small operation like ours try to do that or does it make sense to go to the Goliaths and the Behemoths like Cloudflare and Fastly like we have done? Should we try something different? What should we do? One thing which I want to mention here, and this is really important, is that if we didn't have Fly and if we didn't have the partnership that we have with Fly, I wouldn't be suggesting this. So that's the first thing. The second thing is, as crazy as this idea was three years ago when Kurt laid it out, you know, having sat on it for years and understanding what we need, we're not that complicated from a technological perspective. Like our app isn't that complicated and it's not changing that much. We're not a big team. 
And what that means is that our needs are fairly simple and straightforward, which means that some of the big companies can't really meet them, because they're too big. There's too much there. There's a lot of complications, and 99% of the stuff we don't even care about. We don't care whether it's Varnish, we don't care whether it's Nginx — we just care about the experience, and the experience is too complicated. So I'm sure there's a way that we can make this work, but is it worth our time? And the answer is no. That's what I keep coming back to. What we need is something really simple, and we don't have that really simple thing. So even, like, our config — what we need in terms of streaming logs, it's such a simple feature that we require. And yes, sure, we can go and start the conversation with someone else, but back to Jared's point: it would take him a few hours to explain to someone, or to do something for someone, what he could do himself in, like, five or ten minutes. There's an equivalent there to what we would need. And it's really not that complicated, and we're leveraging someone like Fly, which has come light years in the last three years. Like, they're light years from where they were as an organization, in the services they offer. Can you gush a bit about that light-year change, just real quick? I mean, they are a partner. They're not sponsoring this message I'm asking you to give, but can you gush a little tiny bit about their improvements? Because that is the home of changelog.com. Almost an ad for changelog.com there, Jared, accidentally. You know, Fly is the home of changelog.com. Let me change this question. Since we went from Kubernetes to Fly.io, how many issues did we have because of Fly.io? Was Postgres a problem for us on Fly? Not really. I mean, we had some issues — minor issues — but nothing big, nothing on the scale of Fastly.
Whenever we — like, how many times did we reach out to support and they couldn't help us? I can count them on one hand. That's exactly Fly. There you go. From a technological perspective — the machines, the way they work, the deploys — I mean, they just work for us. They kind of meet our needs exactly where they are, and things are fairly fast. It's very easy to spin up new apps. I know that not everyone has this amazing experience with Fly, but we've served billions of requests in the last two years. We're still good. We don't have anything big or anything bad to say about them. I mean, I can talk, for example, about why our Dagger on Fly has been failing, and there's some problems with the WireGuard. I mean, it's not all great, and we can talk about that, but that's a very specific use of Fly in a very specific context. And it's not their core competency, necessarily. Their core competency is what they provide to us. It's the edges, where they're moving and innovating, that still need work — which is par for the course. So, I mean, this basically has to do with Fly. There's intermittent Fly.io WireGuard gateway issues when you're connecting, for example, from CI — from GitHub, in this case. Sometimes that whole setup — and it's very difficult to say whether it's Fly or whether it's GitHub or Microsoft Azure, where this runs — so it's difficult to say what exactly is happening. We just know that specific combination isn't working well. But because we have two of everything, it's okay, because we've been falling back to the GitHub runners. Builds have been a bit slower, but they worked. So, you know, deploys were taking ten minutes rather than... And I get the "GitHub Action run failed" emails when my deploy goes out successfully. So I just want to balance this out, in that we have had some issues with Fly, but not in the path that we really care about. Like, production hasn't been down because of them. And again — knock on wood — it doesn't happen.
But, you know, it's been good. Now, should we put all our eggs in one basket? You know, two of everything. If we run everything on Fly and Fly goes down... We're down. Let me ask a different question then. So if we did decide to build our own CDN — like, this is one more thing for a small team of guys to maintain uptime for, too. Like, what would we be taking on in terms of burden? It's one thing that we don't have the need for, you know, let's just say 99%, like you'd said, of a Cloudflare or Fastly feature set. We really only need the good 1%, because our needs are just limited and we don't have exhaustive needs. And if we did decide, okay, let's build our own CDN — again, eggs in one basket, we're going to build it on Fly if we decide to do that — what would it be in terms of, like, build time, burden to maintain? You know, if it's down, like, how do we — I mean, it seems like we'd probably have to have more of your time. I mean, I don't know, it just seems like we're taking on way more responsibility, because Fastly's in front of everything. And while there's some challenges there, and there's been some misses recently, we're relying on them to do their job, and they kind of do their job for the most part. You know, we've had some issues, obviously, but we would be taking all that on ourselves. Does that make sense? So let's break it down in terms of the big pieces that we need to get into place. We have one new application, which is our CDN application. All that is NGINX, exactly as described in Kurt's blog. We have an NGINX config that has all the rules that we currently define in Fastly. We distribute this app across all the Fly regions — maybe not all of them, but most of them. So a couple, like US West, US Central, US East, South America, a few in Europe. And all this is literally: run a few commands in Fly and you have all these app instances spun up. They're the same config, same everywhere.
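For flavor, the NGINX side of this is genuinely small. A minimal sketch of a pull-through cache in the spirit of Kurt's "5-hour CDN" post — the origin hostname, cache path, zone name, and sizes here are all assumptions, not Changelog's actual config:

```nginx
# Hypothetical sketch -- origin host, paths and sizes are placeholders.
proxy_cache_path /data/cache levels=1:2 keys_zone=cdn:100m
                 max_size=9g inactive=7d use_temp_path=off;

server {
    listen 8080;

    location / {
        proxy_pass https://origin.changelog.com;
        proxy_cache cdn;
        proxy_cache_valid 200 301 10m;
        # Serve stale content when the origin is down or slow
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        # Expose hit/miss status for debugging cache behavior
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

The `/data/cache` path would sit on the per-region volume discussed below, and `proxy_cache_use_stale` is what gives you the serve-stale behavior Gerhard mentions.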
We get one dedicated IP — it's an Anycast IP, again, a Fly feature. So regardless of where you are, you use the same IP; that would be cdn.changelog.com. It will hit one of those Fly instances. If the instance is down, I think the way it works — the Fly proxy, which you're basically hitting at any of the edges, again, wherever you are — it will redirect to a running instance. And then you have some very small rules, which basically tell you what you do. So let's say that you're serving an MP3. If you don't have the MP3, it will stream it from wherever it is and it will cache it locally. So you have some disks attached to every single NGINX instance, so that you have a local ephemeral cache of all the content that's requested in that region. It's just simple config — you just add a volume, boom, you're done. That's it. I mean, there's not much more. I suppose it's like the config for the NGINX, right? So that Jared gets the logs. That can't be it. What about logs and stuff like that? What about the things we need for stats in the application? Logs, exactly, yeah. So NGINX logs — we'll get them in the format we need. We'll write them to a disk. I mean, Fly has NATS. That's how they distribute all the logs. I know that's not always reliable. There's some small issues, and I know because I've been using this for another project for the past year — this is for Dagger, by the way — so I know exactly how NATS works, how log distribution works in Fly. And the challenge would be to get those logs reliably from the NGINX instances to S3. I think that's the one thing which is an unknown, in the sense that I know the limitations of NATS, which is internal to Fly, but maybe there's something more that we can do there. We cannot do this without a little bit of Fly's help. And what I mean by that: we wouldn't want our logs to get lost, right? Fastly has been very reliable, as far as I know, when it comes to delivering those logs.
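The "run a few commands in Fly" part might look roughly like this — a hedged sketch, where the app name, regions, volume sizes, and counts are placeholders, and `flyctl` flags do change between releases:

```shell
# Hypothetical sketch -- app name, regions and sizes are placeholders.
fly apps create changelog-cdn

# One cache volume per region the NGINX instances will run in
fly volumes create nginx_cache --app changelog-cdn --region iad --size 10
fly volumes create nginx_cache --app changelog-cdn --region ams --size 10

# Spread machines across regions
fly scale count 6 --app changelog-cdn --region iad,ord,sjc,gru,ams,fra

# One dedicated Anycast IPv4, so every region answers on the same address
fly ips allocate-v4 --app changelog-cdn
```

That Anycast IP is what cdn.changelog.com would point at; Fly's proxy routes each request to the nearest healthy instance.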
I know that we can get them in the right format, because NGINX is super configurable. What I don't know is how reliable it will be to get those logs from Fly into S3. One tool that I've used and I love is called vector.dev. It's an open source tool, very lightweight, written in Rust, that consumes what they call sources. So it can be anything from a log file to standard in, standard out, whatever. It has a couple of sources, and then it does transformations, and it has sinks. So we could co-locate some of those Vector instances right next to NGINX — they're super lightweight, think megabytes of memory usage, hardly any CPU usage — and they could distribute those logs reliably. They have back-off mechanisms, they have all sorts of things. So even that, I would have an idea of how to do. Time-wise, we're talking days of my time. Preach, I like it. So I think by the next Kaizen, if I set myself to do this, this would be done by the next Kaizen. What about costs? I mean, we would have to compare apples to apples: Fastly pricing versus Fly pricing. It looks like it's about $0.02 per gigabyte. Mostly, I'm worried about outbound data transfer. 100 gigabytes per month free — that's North America and Europe — and two cents per gigabyte outbound data transfer after that. So I think we would do some sort of analysis of what we are doing currently on Fastly and what that would cost with our own CDN on Fly. And that would be interesting to compare. So if we did — let's go five cents per gigabyte — we would still be within our sponsorship amount, because Fly sponsors our infra. We would not exceed our sponsorship limit. Oh, sorry. No, hang on, I may be wrong. Hang on, hang on, hang on. Let me check this — I have the exact numbers somewhere. Maybe it would be slightly over, slightly over, but then we have a bunch of redundant infra that we can shut off. Maybe we can increase the sponsorship a little bit, or Fly can increase the sponsorship a little bit.
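The Vector pipeline Gerhard describes — NGINX access log in, transform, object storage out — could be sketched roughly like this. File paths, the bucket, region, and names are all assumptions for illustration:

```toml
# Hypothetical vector.toml -- paths, bucket and region are placeholders.

[sources.nginx_access]
type = "file"
include = ["/var/log/nginx/access.log"]

[transforms.parse]
type = "remap"
inputs = ["nginx_access"]
# Parse the combined log format into structured fields
source = '. = parse_nginx_log!(.message, format: "combined")'

[sinks.log_archive]
type = "aws_s3"
inputs = ["parse"]
bucket = "changelog-cdn-logs"
region = "us-east-1"
compression = "gzip"

[sinks.log_archive.encoding]
codec = "json"
```

The sink handles batching and retries with back-off, which is the "distribute those logs reliably" part; swapping S3 for an S3-compatible store like R2 is mostly a matter of endpoint configuration.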
Right, we can always go back to them with a new cause and idea. I had an idea, I suppose — and this may not fly — but I was thinking: the idea of really simple syndication — what if it was a really simple CDN, like RS-CDN? Like, a repo we started up where you could do the same thing we're gonna do, if we decided to do this, and it became an open source template called RS-CDN. And it's meant to run on Fly, and you can spin up your own really simple CDN, essentially, and kind of follow our blueprint. And I think that that's promotion for Fly, that's obviously promotion for open source, dogfooding in a way, because that's what we're asking for — just a really simple CDN. Don't give us all the extras. I mean, if we think of it as an experiment to try out and see how far we can get, maybe we can invest a little bit of time and see: will this work? I mean, we have the blueprint, we have a couple of things which are out there. I think we're relying a lot on Nginx, and Nginx caching as a feature — that's one of the Nginx Plus features, especially managing the cache and visibility into the cache. Maybe there's other tools that are more CDN-focused and open source. I don't know — Traefik, I know, is popular in the cloud native world as an alternative to Nginx. I don't know the reasons, you probably do, but just as an example: maybe Nginx isn't necessarily the solution. Yeah, I'm thinking something battle-hardened that has been used for this purpose for many, many years — even decades at this point — and there's only really three options: there's Varnish, there's Nginx, or Apache. Apache I would discount, because — again, I don't want to go into that — so it's either Varnish or Nginx. Varnish is a beast. Why don't we just export our Varnish config and just import it into our new thing? We've already written the code. I mean, I've learned Varnish — I know VCL now. Go ahead, I know VCL. That might just work, really? I get lost in those thousands of lines of stuff.
That's what makes me think, like, is this a really simple CDN? Because when I look at our Varnish config on Fastly, I think it's actually doing more lifting than we think it's doing. But maybe some of that's — a lot of it's generated based on, we turn on a few features and they boilerplate out some stuff. But when I started thinking about replacing Fastly with anything, I go back to that Varnish config and I realize, okay, I do have — and I have more rules that I would like to deploy, as we take Changelog++ on-site and stuff; it's gonna get more. I'm happy to write an Nginx config — I'm already writing VCL, so I'm not against it. I just think, like — You've used both; which one do you prefer at this point? Well, it's tough, because I've only ever used Varnish through the Fastly admin, and so it's this weird thing that you're doing, and you kind of — you write it directly, but then it exports it to the right place, and you gotta set priorities in order to get the code where you want it to be, and that's never what I want, right? And I've written Nginx configs the way I wanna write them, in Vim or in Sublime Text. So I like Nginx better, just because I've never actually — I've never just gone and downloaded Varnish and run it. So it's tough for me to compare, but they're both fine. I mean, I used to know Nginx very well. I haven't run it personally for years, but for me, Nginx configs are pretty straightforward stuff.
You can still screw it up good. And I will say that ChatGPT led me astray a couple of times on Varnish stuff. It's gotten it right, but it also got it wrong a couple of times, where I was like, nope, that's not how you do it. I had to learn the hard way. One plus is that ChatGPT and all the GPTs know Nginx configs very, very well. So when you're lost, you can be found. You know, at this point, all I'm trying to say is that there's a lot of frustration that has built up over the years. It doesn't seem to be getting any better. And someone's like, I wanna do something about it. And maybe this is not it. I mean, it's close to me — the heart of a hacker; the hacker has to hack. The easy button — to me, I'd love to do that, some sort of hacking. I think I would love to investigate further what it would really take for us. Cause I mean, I love to tinker, just like you do, but do we wanna hold a CDN forever as our own responsibility? That's not really the business we're trying to be in. I think that we are in the business of partnering with great tech stacks and great infrastructure partners and helping them evolve to fit our needs, more so than us trying to, like, tinker. I mean, I would totally tinker with this RS-CDN kind of idea, but I think at the end of the day, I want a great partner as a business. I wanna promote a great partner to a great developer audience, one that makes sense for them to try out and use on their own. To me, Cloudflare seems like the winner of what we should try next — unless you investigate further and, in quotes, "sell us" on the idea that this makes sense for us to build and hold ourselves. Cause if there's legs there, then that's kind of cool. And maybe that's kind of fun. It would put us more in the Fly basket, which I'm not against, cause we can certainly circle back with Kurt and the team there and showcase our ideas. And they love that. They love the hacker spirit. So I can't imagine we would get turned away with this idea.
I think my primary concern would be going against the grain in terms of infrastructure partners, and then going into the grain of building out a service that we may not actually wanna manage ourselves. But I like the idea of the tinker. I think it'd almost be fun to do just for the fun of it, really. There's a way we could limp into this as well, which is that we could leave cdn.changelog.com completely alone. We have two domains on Fastly. And then we have changelog.com, which is fronting our app servers. And those are two different things inside of Fastly. And obviously one has the bulk of the traffic and the other one has way less traffic. The feeds is gonna be big, but it's not even... the logs we don't care about as much, right? The MP3 download logs are the ones that we want. That's the bulk of the traffic. We could leave that alone for now and tinker with changelog.com, which is really just fronting our app servers anyways and has a bunch of logic, like where the feed rewrites are and go to R2. And there's lots that you would get done there, but it's probably like 20% of the work that it would be if you took them both on at the same time and switched completely. So you could build a poor man's version of this as a tinker, which maybe takes one day for Gerhard versus three or something. And we could roll it out and leave the CDN alone. And then if it doesn't work, turn it off and go back to what we're doing. So I think that's a way we could do it with way less risk and probably more fun. What about Cloudflare, Jared? Have you looked at the logs? Just enough to know that I think that we need the enterprise plan before I can even play with the features, which is kind of weird to me. And they don't tease them where I would expect, like in the Cloudflare UI. You expect it to be like, here's a feature you can't use, hit a button here. But this feature just doesn't exist until you get to the docs.
And then you're like, oh, Logpush, which seems to be exactly what we need, is to just, like, write your logs out in real time to R2. Right. That's the feature we need for our analytics. And then I haven't looked at it for rewrite rules and all the other fancy stuff we're doing, how could I recreate the Varnish functionality over in Cloudflare? I haven't got that far yet, because I figured why do it if we're not sure yet. So I'm pretty sure we can get everything done there that we got done in Fastly. I just don't know exactly how. But Logpush is an enterprise feature, and we're just on a standard plan right now. And so I can't even... I'm sure we can get that blessed, like, hey, just turn that on for us. Yeah, I just can't even look at it. I haven't even looked at it yet because you just can't. And that's been the main hang-up, really. Because, I mean, to zoom way, way back, we wanted to actually run Cloudflare and Fastly side by side. And I think, Jared, I can't recall, remind me why we did or didn't do that, but we had the idea of doing it. And it came around that we were always unsure of how to do essentially what Logpush does, which is move those logs, streaming, to another service so that we can consume them and use them for the stats and whatnot. Or any blessed way that we could get the data that we need from Cloudflare. The first time we looked at it, which is probably five years ago now, they just didn't even have it. Like, they had your dashboard and they'll show you what you've done. And that was it. Like, you can't say, yeah, but how many requests to this endpoint did we serve? They just didn't have that kind of stuff back then. They seem to have that kind of stuff now. There's other stuff called Web Analytics, which is in beta, which has even more granular data. So I think they've been adding that over time. And then the Logpush service seems to be exactly what we would be after. Maybe there's an even easier way that they have.
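Once logs land in a bucket, the analytics piece is mostly counting. Here's a hedged sketch of that counting step, assuming logs arrive as newline-delimited JSON (the shape both Cloudflare's Logpush and Fastly's log streaming can produce); the field names here are illustrative stand-ins, since the real field set is configurable per log job:

```python
import json
from collections import Counter

def count_mp3_downloads(log_lines):
    """Count successful GET requests per .mp3 path in NDJSON CDN logs.

    Assumes each line is a JSON object with illustrative field names
    (ClientRequestPath, ClientRequestMethod, EdgeResponseStatus);
    real log jobs let you pick which fields get written out.
    """
    counts = Counter()
    for line in log_lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between batches
        entry = json.loads(line)
        path = entry.get("ClientRequestPath", "")
        # 200 (full) and 206 (range request) both represent served audio
        ok = entry.get("EdgeResponseStatus", 0) in (200, 206)
        if entry.get("ClientRequestMethod") == "GET" and ok and path.endswith(".mp3"):
            counts[path] += 1
    return counts

if __name__ == "__main__":
    sample = [
        '{"ClientRequestPath": "/uploads/ep.mp3", "ClientRequestMethod": "GET", "EdgeResponseStatus": 200}',
        '{"ClientRequestPath": "/uploads/ep.mp3", "ClientRequestMethod": "GET", "EdgeResponseStatus": 206}',
        '{"ClientRequestPath": "/feed", "ClientRequestMethod": "GET", "EdgeResponseStatus": 200}',
    ]
    print(count_mp3_downloads(sample)["/uploads/ep.mp3"])  # 2
```

The real pipeline would read these lines out of R2 on a schedule rather than from a list, but the aggregation logic is the same.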
That's like, this is the Cloudflare way. And I haven't asked... I can just ask them that. I haven't asked them, but the question's like, hey, if I wanted to count downloads to an MP3 endpoint, how would I get that done? I'm pretty sure most Cloudflare engineers are gonna be like, oh, here's how you do it. I just haven't asked. And maybe the answer is you do it with Logpush. Okay, well, we don't have that. So that's where that is. But I'll be down with tinkering with this personalized Fly CDN, even if it's just for changelog.com, which just fronts our app servers. We don't really care about the data. We don't need to stream the logs. We just need the rewrites to work so it gets the feeds from the right place on R2, and like the basics there. And if that works great, and nothing works out with Cloudflare or Fastly, and the costs make sense, then you just do the other part, which is gonna be harder. But once you've done the easy part, the hard part becomes less hard. I think it's worth trying a couple of things. I think if Cloudflare will work from a certain perspective, we should definitely try it out and see how far we can get. I think this Fly thing has some merit to it; at least try it out and see, again, how far we can get. Maybe we'll come across things that will be blockers, like real blockers. Or Kurt, after he hears this, he says, hey, you guys are crazy. Don't do it.
That was a joke. Actually, I wrote that three years ago and I do not believe it anymore. Please don't do that. Yeah, that's true. You guys are crazy. Or maybe he's like, you guys are crazy and I love it. Maybe. Yeah, let's do it. What's up, friends? This episode is brought to you by our friends at Neon. Serverless Postgres is exciting, and we're excited. And I'm here with Nikita Shamgunov, co-founder and CEO of Neon. So Nikita, one thing I'm a firm believer in is when you make a product, give them what they want. And one thing I know is developers want Postgres, they want it managed, and they want it serverless. So you're on the front lines. Tell me what you're hearing from developers. What are you hearing from developers about Postgres, managed, and being serverless?
So what we hear from developers is the first part resonates absolutely. They want Postgres, they want it managed. The serverless bit is 100% resonating with what people want. They sometimes are skeptical. Like, is my workload going to run well on your serverless offering? Are you going to charge me 10 times as much for serverless than what I'm getting for provisioned? Those are the kinds of skepticism that we're seeing, and then people try it, and they see the bill arriving at the end of the month, and like, well, this is strictly better. The other thing that is resonating incredibly well is participating in the software development life cycle. What that means is you use databases in two modes. One mode is you're running your app, and the other mode is you're building your app. And then you go and switch between the two all the time, because you're deploying all the time. And there is a specific part when you're just building out the application from zero to one, and then you push the application into production, and then you keep iterating on the application. What databases on Amazon, such as RDS and Aurora, and other hyperscalers are pretty good at is running the app. They've been at it for a while. They learned how to be reliable over time, and they run massive fleets right now; like, Aurora and RDS run massive fleets of databases. So they're pretty good at it. Now, they're not serverless, at least they're not serverless by default. Aurora has a serverless offering. It doesn't scale to zero, Neon does, but that's really the difference. But they have no say in the software development life cycle. So when you think about what a modern deploy to production looks like, it's typically some sort of tie-in into GitHub, right? You're creating a branch, and then you're developing your feature, and then you're sending a PR, and then that goes through a pipeline, and then you're on GitHub Actions, or you're running GitLab for CI/CD, and eventually this whole thing drops into a deploy into production. So databases are terrible at this today, and Neon is charging full speed into participating in the software development life cycle world. What that looks like is Neon supports branches. So that's the enabling feature. Git supports branches, Neon supports branches. Internally, because we built Neon, we built our own proprietary, and what I mean by proprietary is built in-house... the technology is actually open source, but it's built in-house to support copy-on-write branching for the Postgres database. And we run and manage that storage subsystem ourselves in the cloud. Anybody can read it. It's all on GitHub under the Neon database repo, and it's quite popular. There are, like, over 10,000 stars on it, and stuff like that. This is the enabling technology. It supports branches. The moment it supports branches, it's trivial to take your production environment and clone it, and now you have a developer environment. And because it's serverless, you're not cloning something that costs you a lot of money. And imagine for a second that every developer cloned something that costs you a lot of money, in a large team. That is unthinkable, right? Because you will have a hundred copies of a very expensive production database. But because it is copy-on-write and compute is scalable, now a hundred copies that you're not using, you're only using them for development, actually don't cost you that much. And so now you can arrive into the world where your database participates in the software development life cycle, and every developer can have a copy of your production environment for their testing, for their feature development. We're getting a lot of feature requests there, by the way. People want to merge this data, or at least the schema, back into production. People want to mask PII data. People wanna reset branches to a particular point in time of the parent branch or the production branch, or the current point in time, like against the head of that branch. And we're super excited about this. We're super excited, we're super optimistic. All our top customers use branches every day. I think it's what makes Neon modern. It turns a database into a URL, and it turns that URL into a URL similar to that of GitHub. And you can send this URL to a friend, you can branch it, you can create a preview environment, you can have dev, test, staging, and you live in this iterative mode of building applications. Okay, go to neon.tech to learn more and get started. Get on-demand scalability, bottomless storage, and data branching. One more time, that's neon.tech. I mean, I think, to be honest, I think Fly should have a CDN, because that's one of the first things that are fairly easy to run as distributed systems worldwide, because the state is decoupled. It's the simplest use case, right? Yeah, so if Fly invests in something next, I think a CDN should be it. The thing which we haven't talked about, and maybe we should, is Supabase on Fly. Oh yeah, because that popped up just recently, after we were already starting with Neon. I mean, we wanted managed Postgres for a while and they weren't doing anything about it. And so we're like, well, let's go talk to Neon. And then... tell them the rest, Gerhard. There is a Supabase Postgres on fly.io. It's in the Fly docs. I think this was December 13th or something like that. Yeah, it was fairly recent. Supabase partnered with fly.io to offer a fully managed Postgres database on the fly.io infrastructure, with low latency.
I mean, it's just like right there in the intro. I think that makes a lot of sense. So yeah, I think it was like bad timing, I suppose, or good timing, depending on how you look at it. The Neon thing, I really want to see that through, but it's interesting to see something like Postgres appearing on Fly as a managed service through partnership. So I'm wondering, maybe a CDN is next, and this is my wishful thinking. Yeah, maybe. It's definitely an obvious move. I mean, it's not obvious that they would partner with Supabase. I think that for me was kind of a pleasant surprise. It makes sense. Like, oh yeah, this is a great partnership. I think both companies are very impressive and aligned in that way, and it benefits both. So I thought it was a good idea. Obviously, I felt like it was late to the game, because we had been wanting managed Postgres for a long time on Fly. So much so that we made a different move, you know? That's right. And still interested in maybe, you know, trying and comparing the two. Obviously, depending on how tightly Supabase is integrated into Fly's infrastructure, I'd expect them to have that advantage in terms of performance. Yeah, maybe they go out and find a CDN-focused upstart that could integrate into Fly. I don't know, maybe. I mean, if I was to pick a CDN, and I haven't tried them, but I did a bit of research, KeyCDN looks interesting. And not because it's based in Switzerland, that has nothing to do with it, but there's that as well. So, KeyCDN. It'd be real fast for you, real close. Yeah. One of your favorite places. Yeah. I haven't shopped CDNs for a long time. I've just been happy for the most part, until October the 8th. It's almost like a yearly thing. Like, every year something like that happens. And then we spend a few days with support and we get nowhere. I end up going in circles and saying, you know what? Flip the table, I build my own. And then I calm down.
And then you're like, I don't really want to build my own. Yes, yes we should. But here we are, like, yes you should, Gerhard. You should build. This is like the third time this thing has happened over the last couple of years. So I think there's something there, and it will happen again, I'm sure. Just a matter of time. I guess just to layer one more on: thank you, James A. Rosen, for helping us out. But to have to reach out to an ex-Fastly person, or for them to actually reach out to us, probably with like, oh my gosh, you guys are feeling so much pain, I just need to step in and help you all... that is just not cool, really. That's not great. But did you know that Vercel Postgres is powered by Neon? No, is this an advertisement? No. It just sounded like that. Did you know that Vercel Postgres... is this a product placement? Where's the jingle? Well, the reason why I say that is because Supabase is available on Fly. And it just makes sense to say, well, maybe Neon at some point will also be available on Fly. Yeah, that might be to Fly's advantage to do that, right? Right, it makes sense. And you know, at the same time, I've had this back-of-the-head thought that maybe Neon will be acquired by Vercel. Yeah, are they the only database provider on Vercel now? Well, Vercel Postgres is Neon. So Postgres on Vercel is Neon. You don't need an account, I'm just reading from their docs, I'm not at all advertising. It is not SOC 2 Type 2 compliant, coming soon. I'm just reading from their docs. But it just makes me think, like, maybe Neon will be acquired at some point. I don't think so, but it just gave me this feeling, because when I talked to Nikita for these ads we did with them, which were sponsored, his perspective was really around the JavaScript developer, and you never bet against JavaScript, this idea that he had said. And you know, they're quite embedded.
I just wonder if there's, like, something happening there where eventually they might get acquired by them. I don't know, because Vercel is such an acquisition behemoth these days; like, they're acquiring a lot of different stuff. Just a thought there. But maybe at the same time we can expect to have a Neon Postgres inside of Fly, where we just basically have the same great features we love, that we're thinking we'll love, with dev mode and whatnot, and branching, and copy-on-write, and all the fun stuff they provide. Maybe it's just like, well, now network latency's gone, it's just not there anymore because it's within the Fly infra, and that's gonna be a good thing for us. The good thing, really, is that we have choice, right? We have so much choice as developers, and that really is the fun part of it, right? There is a lot of choice here. It's almost the paradox of choice. Yeah, paradox of choice in the grand scheme. We'll end up doing nothing again. Like, eh, we didn't do anything. Build your own. There's 14 choices. It's not the right one. There'll be 15 choices now, 15 standards. We'll release our open source CDN, and there'll be 15 of them, right? Yeah, so I kept one more thing as last. All right, one more thing. This is almost like an Easter egg. It's not Easter yet, and it won't be Easter the next time we record, I don't think, but still. As part of pull request 492, I snuck something in that I wanted to have for ages. Oh my goodness, did I notice it? I don't know. Let's have a look. This is a test. See if you can notice a feature which I snuck into pull request 492. Okay, now I've switched to the file changes tab. I'm gonna just scroll through the file. Is that where I'll find it, probably snuck in as just some sort of file change here? I think it's actually, if you look at the pull request, at the conversation, it's actually the second comment. Actually, it's the comment... the first comment which I've made after the description.
Don't give me all these hints, man. Yeah, too easy. That was for Adam. You keep looking at the code, Jared, it's okay. Let's see who gets there first. Is it this video? No, that's actually a surprise, how the auto-scaling slider works in Neon, which is very counterintuitive. So I left that gotcha there, and I gave feedback to their product team about how that could be improved. Is it 1Password? Yes. Oh, I'm glad you mentioned that, because I love 1Password. And you're doing more with this... what's happening here? What's this about? So in a nutshell, our application needs a single secret now. Don't tell them. OP underscore service underscore account underscore token. Single secret. And during boot, the application uses op, the 1Password CLI, to inject all the secrets that it needs at boot time. So it pulls them down from the 1Password vault when it boots. And is that hosted by, like, 1Password cloud, or where's the vault? Correct, that's all 1Password cloud, yes. Okay, and so we don't have any additional infrastructure for that? Nothing additional, no. Spell it out for us really detailed. Why is this cool? I mean, I think I understand why it's cool, but spell it out. We have a single secret that gives the app access to all the secrets that it needs, and there's a dedicated vault for that app. What that means is that that secret only allows the app to access just-in-time secrets when it boots. We don't write them anywhere. We could, but we don't. It's all in memory. When the app boots, it has access, boom, it pulls them down. The secrets never leave 1Password, apart from loading into the app's memory. We don't configure them in Fly, which is what was happening before, right? Every single secret the app needs, we configure it in Fly. Remember how we rotated secrets, Jared? That's a pain. So that process we no longer have to do anymore, because if you want to update a secret, you update it in 1Password, you restart the app, and boom.
At boot time, the app picks up the new secret. That's it. Does the 1Password vault have some sort of a webhook or something that they could trigger? Because then you just take step two out. You know, that's what I want. Yeah. Just let the app restart itself. Like, reboot my app when I add a secret kind of thing? We're on step one. So please continue being excited for step one before we talk about step two. Don't you love how I'm never satisfied by your... I'm like, no, not cool. This would be cooler. You and every other developer. That's why we keep kaizen-ing this. It never gets old. You know what would be cool? If we improved this. And before you know it... Gerhard's like, can you just appreciate this for a second before you ask for more? That's cool, Gerhard, I'm loving this. I'm loving this. And it hasn't even been merged yet, so again, let's merge it first. Let's start using it. Let's get it merged. Okay. That's a nice Easter egg. Well, he did ask if this covered all the secrets and you said looks correct. So I think that's all we needed to worry about in there. That is kind of cool. The cooler thing, I think, is that it's limited. Even if it could somehow leak, it's only the secrets that we store in 1Password for that vault, for the infra, right? So there's a barrier, there's a perimeter to its touch point of secrets. That's it. And if this was leaked, yeah... rotate the service token, basically rotate all the secrets in the vault, and we're good. Again, that would be like a step number three, where it could automatically rotate all the secrets that were leaked from 1Password. And that's almost like a 1Password feature request. Yeah. This is where I also say that we're working with 1Password behind the scenes to make this embedded partnership more apparent as well. We're using this tech, we're paying for this tech. We're not promoting it because they're paying us.
And we're not actually pursuing them to pay us so we can keep promoting it; we promote it because we love it so much, and we'd love to work with them to share more of this story on the inside, and maybe even have that relationship where we're like, hey, this is how we're using it. And Jared's response was, could there be a webhook? And maybe they're like, yes, there could be a webhook. Reminds me of this book I read with my kids. But anyways, that's cool. So hopefully we can get a 1Password sponsorship here soon, because of just how we keep using it and improving it in terms of our infrastructure. That's awesome, I love that. Been using 1Password since the dawn of time, basically. I just adore it, it's awesome. So those Fly secrets then go by the wayside? Pretty much, yeah. The only secret which we set is this 1Password token, the service token, and then the 1Password CLI loads all the secrets directly from 1Password. So when I wanna add a new secret, let's say I integrate a new service, I go add it to the 1Password vault and then I go restart the app. I push the code that references it, and by the time the thing boots up, it's gonna have access to it. That's pretty cool, man, I love it. Yeah, there's still a file there. There is the env.op file, where we put what secrets we want. That's part of the pull request. I add it to there. Exactly, because that's what gets it in the environment, in the app's environment, just in time when the app boots. Okay, what about dev? Are we still using direnv for dev? So yes. For example, part of this, I have an envrc.op, and basically that one I template just in time, which does exactly the same thing, but in this case, I write it locally to my file. I wouldn't need to. I could, for example, run op every single time, the 1Password CLI, to load them in the env, but I don't do that. But it's an option. Say that again in different words. Right now, if you wanted to use this in dev, you would need to run the command locally.
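The boot-time flow described above, a template of secret references resolved against a vault just in time, can be sketched conceptually. To be clear, this is not the real `op` CLI: the actual setup shells out to 1Password's CLI, which resolves `op://vault/item/field` references using the service account token. Here a plain dict stands in for that vault lookup, purely to illustrate the pattern:

```python
import os

def inject_secrets(template, vault):
    """Resolve 1Password-style secret references into the environment.

    Conceptual sketch only: `template` mimics an env.op-style mapping of
    env var names to op:// references, and `vault` is a plain dict
    standing in for the real 1Password cloud lookup.
    """
    for name, ref in template.items():
        if ref.startswith("op://"):
            # Strip the scheme and look up "vault/item/field" in our fake store
            value = vault[ref[len("op://"):]]
        else:
            value = ref  # plain values pass through untouched
        # Lands in process memory only; nothing is written to disk
        os.environ[name] = value

# Hypothetical template and stand-in vault
template = {"SECRET_KEY_BASE": "op://Prod/app/secret_key_base"}
vault = {"Prod/app/secret_key_base": "s3kr3t"}
inject_secrets(template, vault)
print(os.environ["SECRET_KEY_BASE"])  # s3kr3t
```

The payoff of the real version is the same as here: rotate the value in the vault, restart the process, and the new secret is picked up with no per-secret configuration in Fly.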
We need to read the... The op command. Yeah, exactly. So, like, to read the env.op file and maybe template it, like maybe write it to disk, or load it in your environment. You would need to run things through the op CLI. Can I continue to ignore that and just use my direnv as I have been? Because my secrets obviously in dev are going to be different than the secrets in prod anyways. You could, yes. What I really want to know is, when this gets merged, is my setup going to be hosed or not? Oh, you have no secrets. No, it shouldn't, because this just configures it for prod. So whatever you're doing in development... This is additional. Additional, yes. Gotcha. I could use it if I wanted to in dev, but I don't have to. Correct. Sweet, cool, awesome. That's awesome. Anything else? I feel like that was the coup de grâce, the Easter egg. That's why I left it last. That was it. Awesome. So my question is, do we build a CDN or not? That's what I want to know. It's always... that's like a title: let's build a CDN. That might be a show title right there. Yeah, that's the show title. Kaizen. Kaizen: build a CDN, question mark. Yeah, I like that. To be determined, I think. Let's tinker. I think that's the answer, let's tinker. I like it. And we'll talk about it again on the next Kaizen. Yeah, and we're merging the Neon stuff. We're going to take that into production. Okay, so we are all good with the latency, so all good. There are some issues with the Elixir configuration. I've left a couple of things for Neon support. I have a support case open, so we'll go back and forth on that. I have a workaround which works, but the official documentation doesn't work for us. It's the official neon.tech documentation for the Elixir configuration. Some issues with the SSL, with the SNI; it doesn't work as advertised. So we'll be on neon.tech as of the shipping of this podcast. So when people listen to this, we'll be on Neon? I think so, depends when we ship it. That's a week from today.
Yeah, a week from today is fine, yeah. So if you're listening to this, go to changelog.com and see if things are snappy, or if the latency upsets you. See if it loads. So Fastly's in front, so by the way, Fastly will be serving your request most likely. Sign in to the website and we'll give you a cookie. And if you have that cookie, Fastly just passes through to the apps, and you'll enjoy slower response times because you're going to be hitting Neon. But we hope you enjoy that cookie. An easy way to do that for free, right? Just go to changelog.com/community. That's right. And hey, while you're doing that, come and say hi in Slack, because I want to say hello to you. Lots of cool people in there, lots of good conversation. Home lab's been active, TV and movies has been active. And I think you've got your Wordle channel still yet, Jared? I'm tracking that. Oh, we picked up some wordlers. Thanks to State of the 'log, we got a few new wordlers. Still going strong. I'm still keeping my streak alive, so. A lot of fun. All right, y'all, bye friends. Bye friends. Kaizen. Kaizen. Kaizen. That's it, our 13th Kaizen episode. If you have a long road trip or a marathon to run, you could go back to the very first one and binge our entire journey along the way. Find them all at changelog.com/topic/kaizen. Oh, and you've probably heard that we're bringing Ship It back real soon, but not with Gerhard on the mic. Maybe you're wondering how he feels about that. So was Adam. So, for the Plus Plus folks: how do you feel about us relaunching Ship It? Changelog++ members, stick around for that bonus. And if you haven't signed up yet, now is a great time to directly support our work with a ++ membership. Ditch the ads, get free stickers and discounts on merch, and hear about Gerhard's feelings at changelog.com/++.
Changelog++, it's better.
Thanks once again to our partners at fly.io, to Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. Next week on the Changelog: news on Monday, Allan Jude talking FreeBSD on Wednesday, and Techno Tim joins Adam for the State of the Home Lab on Friday. Have a great weekend. Share the Changelog with your friends who might dig it, and we'll talk to you again next week.