Changelog & Friends — Episode 122
Where DOESN'T curl run
Daniel Stenberg, curl's BDFL, discusses his leadership principles, curl's ubiquity across devices, LLM-generated security reports, and financial independence through support services.
Transcript (69 segments)
Welcome to Changelog & Friends, a weekly talk show about the life of a BDFL. Thanks to our partners at Fly.io. Launch your app close to your users. Too easy. Learn how at Fly.io. Okay, let's talk. What's up, friends? I'm here with a good friend of mine, Feross Aboukhadijeh. Feross is the founder and CEO of Socket. Socket helps to protect some of the best engineering teams out there with their developer-first security platform. They protect your code from both vulnerable and malicious dependencies. So we've known each other for a while now, Feross. Let's imagine somehow I've landed myself at Vercel. And because I'm a big fan of you, I understand what Socket is, but I don't know how to explain it to anybody else there. I've brought you into a meeting, we're considering Socket because we want to secure our dependencies, we want to ship faster. We want everything that you promised from Socket. How do you explain Socket to my team at Vercel? Yeah, Socket is a developer-first security platform that stops vulnerable and malicious open-source dependencies from infiltrating your most critical apps. So we do that by focusing on real threats and keeping out all the types of risks that are out there in open-source dependencies. Everything from malicious dependencies, typo-squat attacks, backdoors, risky dependencies, dependencies with hidden behavior. There's all kinds of risks out there, a lot of reasons why a dependency might be bad news. And Socket can help you as a developer just keep all that out of your app, keep things nice and clean and pristine amongst your dependencies. I saw recently Dracula. I'm a fan of Dracula, I don't know about you, but I love that theme. Big fan of Zeno Rocha. And I saw there was like a misspelling there. And so because Dracula is installed on VS Code and lots of different places, I saw there was a typo-squat sitting there that had different intentions than obviously Dracula did. Is that an example of what you mean? Absolutely, yeah.
Dracula, that's a perfect example. It's super common these days to see that type of an attack, where you see a common dependency and you have an attacker just pretending to be that dependency, typoing the name of it by one letter and then trying to get unsuspecting developers to install it. Unfortunately, we're seeing more and more of these types of attacks in the community, and they're taking advantage of the trust in open source. As developers, we need to be more aware of the dependencies we're using and make sure that we're not pulling in anything that could risk the data of our users or cause a big breach at our companies. And so part of that is obviously being more careful and asking questions and looking more carefully at the dependencies we use. But also part of that is tooling. It's really a hard problem to solve just on your own as a single developer. And so bringing in a tool like Socket can really help automate a lot of that work for you. It just sort of sits there in the background. It's really, really quiet. It doesn't create a lot of noise. But if you were to pull in something that was backdoored or compromised in some way, we would jump into action right in the PR or right in your editor. Or even as early as you browse the web, we have a web extension that can actually give you information. If you're looking at a package that's dangerous, or if you're browsing, you know, Stack Overflow and you see somebody saying, hey, just install this dependency to solve your problems, a lot of times even that can be a way to get the attacker's code onto your machine. So Socket jumps in at all those different places and can tell you if something is dangerous and stop you from owning yourself. Yes, don't get yourself owned. Use Socket. Check them out. Socket.dev. Big fan of yours, Feross. Big fan of what you're doing with Socket. Proactive versus reactive to me is the ultimate shift left for developers. It is totally developer first. Check it out. Socket.dev.
Install the GitHub app. Too easy. Or book a demo. Once again, Socket.dev. That's S-O-C-K-E-T dot dev. So Daniel, this is your first time being on Changelog & Friends. Now we've been friends for many years, but mostly we just talk about curl. And I can't help but talk about curl when I talk to you, because it's such a big part of your life, for one. But then back in March, just a few days after my birthday, you turned twenty-six. So congrats. Daniel turned twenty-six? No, curl. Twenty-eight. I was going to say, dang, if curl's twenty-six, Daniel, I'm not going to ask. Good job, Daniel, being twenty-six. Yeah, happy. We call that post-dated belated. Happy belated birthday to curl. Everyone's favorite... I don't know what you'd call it. I was going to call it a command-line internet fetcher. But that's not what it's called. I mean, it's so much more than that. Daniel, what do you call curl these days? So many things. Yeah, I usually call it like Internet transfer tool or something. Internet transfer tool. Okay, but it's not very sticky, though, is it? No, it's... it's not.
I mean, I couldn't think of something sticky right off the bat. I was like, what is curl? I know it's like Internet plumbing tool, I don't know. An HTTP client, maybe. Well, it's more than just HTTP, of course. Yes, exactly. It does so much more than HTTP. That's why I go with Internet transfers. But that's also a little bit vague and hard to write. Internet transfer tool just doesn't sound so cool. I mean, everybody knows what curl is, though. It's kind of part of the substrate of the Internet at this point. I mean, it's twenty-six years old. Yes, it's been around for a while. Yeah, that's a good way to put it, the Internet substrate mechanism thing. I don't know, substrate, that's a good word. Yeah. So this is your fifth time on our shows. First time on Changelog & Friends. We usually do it around anniversaries, birthdays. 17 years of curl, I remember, was one of our episodes. And it's been three years since we've had you on. So why don't you catch us up? What's new in curl? Yeah, three years. Three years is a long time. In most aspects. And it certainly is in curl time as well, even though I often have that reaction. I probably mentioned it before that people say that, you know, they used curl 10 years ago and they use curl today. And, you know, there's no difference. Has anything happened at all? But I did a quick check the other day. And in the last three years, we've added 21 new command line options. Oh, my gosh. Do you know how many total command line options there are at this time? Yeah, I have a counter. So there's 263 of them. This is why it's hard to define what curl does, because, I mean, so many things it can do. And of course, then people like to say, is that a good thing, really? And no, it's not really a good thing to have a lot of command line options, because of course, you get lost among them. And very few people actually use more than, I don't know, 10, 15, 20. But of course, everyone uses their own subset of them.
So they all serve a purpose for some users, but sure. So it becomes a problem to document and for people to understand and find them and so on. But at the same time, you know, people come up with new things they want to do with this Internet transfer thing. And then we have to, how do you add the new thing? Yeah, you have to add a new option. Like we have stuff like, oh, you can set one of the, for example, there's a header field in the IPv4 header. And it's called type of service. You can set it now. It exists in IPv6, too. It's called traffic class. It's just a numeric field in the IP header. OK, now we can set it with curl, because some people like to do that. Most people won't. But of course, we have to add a command line option to be able to set it if you want to set it and stuff like that. So there's often these days when we add new options, they're all these niche things for some users. Right. Yeah, it's the eternal struggle, I guess, of somebody who's writing useful software is how to maintain and evolve it over time that continues to serve new needs, but doesn't trample down what people came for in the first place. And I think when it comes to command line options, like you said, it's the documentation, it's the man page, it's the help, where it really does get in the way. Other than that, I mean, it's invisible, you know, I use curl daily, and I probably use five. Exactly. I don't know any of the new ones, they don't get in my way, they don't bother me. Right. They're useful to somebody. So exactly when we do that properly, they're not in the way for those who won't care about them. So they'll just sit somewhere and one day, five years down the line, when you want to do one of those things, you figure it out, and then you
find out which option to use, and then you do it, and then you forget about it again and move on with your life. And that's how it's supposed to work, right? Right. So you've added a bunch of command line options. I know you've been very ahead of the game and continuing to work on HTTP/3 stuff. What's the state of H3 these days? Yeah, so just over these three years, HTTP/3 has really grown a lot, both in curl and in general on the web. So now we support much more H3 in curl. And well, it's still complicated to build curl with HTTP/3 support because of all the different components involved and the different maturity levels and the different APIs and so many different pieces that need to work together. So it's a juggle. If you install curl from a Linux distro today, for example, I don't think a single one actually enables HTTP/3 by default, just because of the weird mix of different dependencies that need to add up for it to work. But we support it now, as we say, non-experimentally, with a set of dependencies called the ngtcp2 QUIC library. That rolls right off the tongue. Yeah, it's a mouthful. The best part is that it then requires the nghttp3 library, too. So that's, yeah, say that fast three times. ngtcp2 and nghttp3. So yeah, and then when you enable HTTP/3, you can use the --http3 option, of course, with curl. So then you can actually try it, you can run it against any server, really, because it'll try H3 in parallel with H2 and H1. Oh, really? So it'll just race them all against each other, and the one that wins runs, pretty much. So you would assume that H3 would always win, but I guess not, huh? No, because it's such a problem with, since it's based on QUIC, which is done over UDP, there's a certain amount of blockage going on in the world. So in a lot of cases, it actually doesn't work, just because your company, organization, or something in between you and the server decides that UDP is bad, we shouldn't let it through.
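For context on the type-of-service option mentioned earlier: the ToS byte (IPv4) and traffic class (IPv6) are plain socket options, so what curl does for you under the hood can be sketched roughly like this in Python. This is not curl's actual code, and the value 0x28 is just an arbitrary example:

```python
import socket

# Linux's numeric value for IPV6_TCLASS, used as a fallback in case
# this Python build doesn't expose the constant.
IPV6_TCLASS = getattr(socket, "IPV6_TCLASS", 67)

def set_tos(sock: socket.socket, value: int) -> None:
    """Set the 8-bit type-of-service field (IPv4) or traffic class
    (IPv6) for packets sent on this socket."""
    if sock.family == socket.AF_INET6:
        sock.setsockopt(socket.IPPROTO_IPV6, IPV6_TCLASS, value)
    else:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, value)

# Example: mark a UDP socket's traffic with ToS 0x28.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
set_tos(s, 0x28)
assert s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == 0x28
s.close()
```

Most people will never touch this field, which is exactly Daniel's point: it exists for the few who do.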
Yeah, just firewalls or whatever it happens to be. Exactly, or just, you know, someone decided along the way that you shouldn't do this. So therefore, it's a lot of that, you know, trying out if it works, and then you have to have a fallback and use the older versions if that new one doesn't work. So you fire off all three at once? Not really. Well, because H2 and H1, you can negotiate on the same connection. So we do both at once. I see, you negotiate that one. You can't negotiate H3 because it's over UDP? Exactly. And to complicate matters, we also do happy eyeballs, so we do IPv4 and IPv6 old, and IPv4 and IPv6 new. So we'd actually do four attempts at once. That's what they call it, old and new? Yeah, not v1, v2? I mean, what in the world? No, well, I call it like that. Oh, okay, gotcha. That's not the true nomenclature. No. Well, IPv6 is already hard enough to get. I still understand it as a user, not an implementer. So if they called it old and new, that'd be even worse than v1, v2. Yeah, well, it's a complicated setup, and then we work with several other backends to enable libcurl with other backends too, a little bit depending, so that we can offer them on more platforms and with more different TLS backends. So it's a circus. We support four different QUIC backends. That's what keeps Daniel busy. Yeah, and we do that a little bit so we don't have to pick a winner. We can support all of them and see a little bit how they develop and become good or bad, because when we picked them originally, we didn't really know which was going to be the good one in the end. Even though I mentioned ngtcp2, that library is actually created by the same guy and team that created the nghttp2 library before, another mouthful. But that's a very successful and very reliable library for H2. Get that guy a marketing team, help him name some projects so we can pronounce them. They like their ng stuff, but it's really, really hard to say. Yeah,
very literal. Is IPv6 arriving? I mean, it's been like the slowest transition in history. Yeah, it's really slow still, and I don't know. Luckily, I don't really have to care. But from my point of view, that's why we do happy eyeballs, meaning that we do both pretty much. You say happy eyeballs? Yeah, it's an algorithm. They call it like that. Okay, I thought you said that the first time, but I wasn't sure if I misheard you. Yeah, it's an RFC that actually exists in several versions. But it pretty much means that we actually start the IPv6 attempt first, and then a few milliseconds later, you start the IPv4 attempt. And the one that wins, that succeeds first, that's the one you go with, and then you cancel the other one. And it's happy eyeballs because you're watching both and whichever one comes back faster, the eyeballs are happy? I don't know where the name comes from. It's just a weird name.
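The algorithm Daniel just described (standardized as RFC 6555, then RFC 8305) is easy to sketch: start the IPv6 attempt, give it a small head start, then start the IPv4 attempt, and keep whichever connects first. This is an illustrative Python toy, not curl's implementation:

```python
import socket
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def happy_eyeballs_connect(host, port, head_start=0.25, timeout=5.0):
    """Race an IPv6 and an IPv4 connection attempt; the first socket
    to connect wins and the loser is closed. A toy sketch of the
    "happy eyeballs" algorithm, not curl's real code."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    v6 = [ai for ai in infos if ai[0] == socket.AF_INET6]
    v4 = [ai for ai in infos if ai[0] == socket.AF_INET]

    def attempt(ai):
        family, socktype, proto, _, addr = ai
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return s
        except OSError:
            s.close()
            raise

    pool = ThreadPoolExecutor(max_workers=2)
    futures = []
    if v6:
        futures.append(pool.submit(attempt, v6[0]))
    if v4:
        if v6:
            time.sleep(head_start)  # IPv6 gets its head start
        futures.append(pool.submit(attempt, v4[0]))

    winner, pending = None, set(futures)
    while pending:
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            try:
                sock = fut.result()
            except OSError:
                continue  # this attempt failed; keep waiting for the other
            if winner is None:
                winner = sock   # first successful connect wins
            else:
                sock.close()    # both connected; close the loser
        if winner is not None:
            break
    # A real implementation would also cancel the straggling attempt here.
    pool.shutdown(wait=False)
    if winner is None:
        raise OSError(f"could not connect to {host}:{port}")
    return winner
```

curl does this with non-blocking sockets rather than threads, and (as Daniel notes) layers it on top of the old-versus-new HTTP racing, giving up to four attempts in flight at once.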
But I've just sort of used it for so long, so I'm just used to it. But yeah, I don't know why it's called happy eyeballs. We need to do that research. That's interesting. I've never heard that. Hopefully, there will be some happy eyeballs in the end, I guess. I don't know. That's what I assume. It's like you're kind of looking at both and then get happy when one returns a response or something. Could you give us a primer on IPv4 versus v6? I know that there was a terror that IP addresses would eventually run out. And that's why IPv6. How much do you know? Can you give us a primer on that and the state of it? Well, that's still the case. And I think there's been a lot of patching and gluing things. So yes, we are running out of IPv4 addresses, and I think they are getting more expensive. So in particular, in certain areas, they really have run out. So you have to be creative or you have to pay a lot of money to get IPv4 addresses. So I think that is the real case. But I think also during this time, people have come up with new ways to work around the problem with different kinds of NATs and carrier-grade NATs and everything so that we can
keep on doing this. We can extend our lifetime on IPv4 a little bit longer, because most people, it turns out, don't really need their own IPv4 addresses. So we can come up with new ways. But of course, that is then also kind of a blocker for doing new kinds of innovations on the Internet, because back in at least the 80s and 90s, people thought about doing things peer to peer. You knew you had an IP address in your client and you knew the IP address of the server, and you could communicate between those two IP addresses. Nowadays, you really can't, because nowadays there are so many different layers and translations. But that's just the reality now. I think that has made it so that the IPv4 problem hasn't become that big. So we managed to survive
on IPv4 pretty well anyway. Now we're layering NATs behind each other. So you have this one IPv4 address representing so many networks. Yeah. And I think also another thing that has happened, especially compared to the 80s and 90s, is that nowadays we build everything on top of HTTP instead. So we have so many protocol layers, pretty much taller protocol stacks now than I think we envisioned back in the... well, when did they come up with IPv6? I think mid-90s or something. Yeah, I mean, it definitely predates me. I remember I was in college, and so this would be around the turn of the century. '01 is when I graduated. Yeah, exactly. Yeah. And they already had IPv4, and they're like, but the new thing is IPv6, and everyone's going to be using this because we're running out of addresses. And that was 25 years ago, you know? Exactly. And the story is exactly the same. It's the exact same story.
Yeah.
Well, if I was a person issuing IPv4 addresses for a price, you know, I'd kind of like this scarcity, you know, it's helping me out. I can just keep raising my prices and issuing them. You know, it's not too bad if you're holding the keys, right? Right. Yeah, I guess it's more of a problem if you actually want to start something new today. And you really need to have that connectivity to people in the world and you really need IPv4 addresses. Maybe that is a problem. But
at the same time, I think the entire Internet, the infrastructure, has also changed. Like we're doing everything with CDNs nowadays that we really did not do in the 90s and stuff like that. So we have changed how we do Internet networking. Well, that was one of the pitches, was that every device would have its own address, you know, and not just your house or behind some sort of firewall, but you'd actually have your refrigerator with its own address, and it would be publicly exposed. And that would be good for certain reasons, but obviously also bad for other reasons. Exactly. The privacy angle certainly was never discussed back then. So wait a minute, are you going to know that that is my fridge talking to your server? Yeah, we were pretty naive about privacy for a long time. Right, exactly. Because at least when you're hiding behind the NAT, you don't know if it was my watch or my fridge or my printer that talked to you. Yeah, and especially if you're having multiple NATs. I mean, we have oftentimes an IP address that exposes like an area of a city, you know, but that's all you know. It's like, well, somewhere in Bennington, but we don't know who it is. And sometimes that's problematic if you have actual malicious actors out there who are trying to hide, but it's also really nice for normal people who just want their privacy and don't want to be tracked down to every single thing they're doing all the time. Yeah, they find other ways of tracking us. Oh, they do. Yeah. Okay, so there's IPv4, v6, H3, H2, tons of more command line flags. Anything else that's like super notable in curl that our listener would be like, oh, a cool new thing that I can do now that I couldn't do before? Well, I think we have only done minor things really when it comes to what happens in the end user layer. We support TLS 1.3 in more ways now, for example. You know, curl has shipped with Windows for a long time now, right? So it's built into Windows.
And then when Microsoft ships curl, they ship it built with Schannel, which is the native Windows TLS library. And nowadays, for example, we can do TLS 1.3 with that library, which is a fairly new thing. So stuff like that, but, I mean, most users won't notice. They'll just be happy that it'll actually survive longer and work better. But that's just one of those things that you don't really see or know about in the engine. One of the bigger things over the last few years: we've done quite drastic refactoring of our way of building protocol chains, as I would call it, internally in curl. So how you stack different layers of protocols, like if you do HTTP, proxies, TLS, or, you know, protocols and TLS and proxies in many different layers. So nowadays we can do that in a more flexible way, so that we can support more ways of creating protocols. Well, I call them protocol chains, pretty much, you know, setting up, for example, different kinds of proxying, different kinds of protocols over different kinds of proxies, in more combinations, because that is also where we're going in the future. Because nowadays we have so many different HTTP protocol versions and we have a lot of proxies, and people want to do, you know, HTTP/3 over HTTP/1 or HTTP/2 proxies, or HTTP/1 or HTTP/2 over an HTTP/3 proxy. And, you know, you get a little confused in your head just thinking about that, but pretty much to be able to offer all those different combinations of protocol versions, it becomes an explosion in how to handle that. So we had to change our internals so that we could build them dynamically in a better way, so that the code could manage all of these new ways of building protocol chains. Tricky stuff. It seems like not only has curl changed recently, but the world around curl has changed quite a bit since the last time we talked as well.
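The protocol-chain idea Daniel describes, layers stacked so each one only talks to the layer below it, can be sketched generically. Everything here is hypothetical and purely illustrative (curl's internal connection filters are C and not a public API); the toy `LengthPrefix` and `Scramble` layers stand in for real framing and TLS:

```python
class Transport:
    """Toy in-memory endpoint standing in for a TCP socket."""
    def __init__(self):
        self.buf = bytearray()
    def send(self, data: bytes) -> None:
        self.buf += data
    def recv(self) -> bytes:
        data, self.buf = bytes(self.buf), bytearray()
        return data

class Filter:
    """A protocol layer: transforms data on the way down (send)
    and undoes the transform on the way up (recv)."""
    def __init__(self, lower):
        self.lower = lower
    def send(self, data):
        self.lower.send(self.encode(data))
    def recv(self):
        return self.decode(self.lower.recv())
    def encode(self, data):
        return data
    def decode(self, data):
        return data

class LengthPrefix(Filter):
    """Frame each message with a 4-byte big-endian length."""
    def encode(self, data):
        return len(data).to_bytes(4, "big") + data
    def decode(self, data):
        n = int.from_bytes(data[:4], "big")
        return data[4:4 + n]

class Scramble(Filter):
    """Stand-in for a TLS layer: a reversible XOR (NOT real crypto)."""
    KEY = 0x5A
    def encode(self, data):
        return bytes(b ^ self.KEY for b in data)
    decode = encode  # XOR is its own inverse

def build_chain(transport, *filters):
    """Stack filters bottom-up: transport, then f1 over it, then f2..."""
    top = transport
    for f in filters:
        top = f(top)
    return top

chain = build_chain(Transport(), Scramble, LengthPrefix)
chain.send(b"GET /")
assert chain.recv() == b"GET /"
```

Swapping `Scramble` for a real TLS layer, or inserting a proxy-tunnel layer between the two, is just a different argument list to `build_chain`; composing layers dynamically like this, instead of hard-coding every combination, is the flexibility the refactoring was after.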
And I know you have written somewhat extensively on the effect of LLMs on curl development. I'm trying to find it back in your backlog. The gist being you're getting a lot more low quality PRs, issues opened by robots, security fixes that are like not useful. Remind me, because it's way back there now. Yeah, exactly. Yeah, I had this, I think the one I blogged most about was one particular security report from some user who basically just fed it into an LLM. He claimed that there was a buffer overflow, and I asked him for clarification, and he just came back over and over, very friendly, gradually changing the nature of the bug over time, and then pretty much me concluding that I'm talking to an LLM here. Yeah, I've had a few of those, and I think they're problematic in that, I mean, yes, they're crap, but they're pretty good crap, at least
from the start. Like they look less crappy. The worst crap you can identify immediately, right? This is crap. Close it, move on. And, you know, forget about it. But when it sounds correct and it seems legitimate, you have to actually investigate, spend time and research. What is this? What is it talking about? And as I mentioned, people then say, well, it's obvious that it's an AI I'm talking to. But I'm also used to talking to people who are using translation services, right? So maybe I'm talking to a guy who doesn't know a word of English, right? So he's feeding everything he's saying through a translation thing. So I can't really judge his language. Sure, it sounds a little bit machine-like, but that could just be him, you know, speaking Korean into a voice thing. I can't really just dismiss him just because his English is not perfect. Questionable, right? My English certainly is not perfect either. So that's pretty good, Daniel. That's pretty good. So therefore it becomes a challenge. So sure, they say something that seems, you know, the AI kind of hallucinating, but it seems reasonable. Yeah, it could be right, maybe, but it doesn't feel right. And then, you know, you ask a few follow-up questions and, oh, wait a minute. Yes, you're right. And then it provides more information, but the more information is also slightly off and not actually answering the questions. Yeah. So it's been a few of those that, yes, they certainly are time suckers, because they consume a lot of time and in the end, it's just worthless crap. One of those was at least good enough that they mentioned it in the first blurb. They said, I asked Google Bard, or whatever it was he said, and it found this security problem. So there was at least a very good hint that maybe, maybe that's not the best thing to ask for security problems. Yeah, I did find the post. It was back in January. I love the title. It made me laugh when I read it. And holy cow, this is on page four.
You blog a lot, Daniel. So much. It's "The I in LLM stands for intelligence," which I thought was a nice turn of phrase. And you go ahead and write in detail about how it's basically just causing you nothing but pain, right? I mean, it's just additional time that you have to spend, and you're now negotiating with a large language model and you're not sure if it is one or not. You have to like determine that before you write it off. Exactly. Because obviously there's a user somewhere that is copying and pasting this into an LLM, probably, but it takes a while until you're really sure about it. And also, maybe it is okay to use an LLM if it had been accurate. If it had been a genuine problem, then I wouldn't have minded it at all. I mean, that would be fine. But yeah, it was a bit of a struggle to get to that point. They're just slopping up the curl factory. You know, they're just throwing their slop into the curl factory. Yeah, it's certainly sort of gravel in the machinery when they do that. And also, since security problems tend to be top priority, right? That's when we drop all the other stuff and just focus on this. Because if it's a security problem, it's worth sort of dropping the other stuff for us. It certainly trumps a lot of other work. Plus there's money in the game now too, right? With a lot of the security stuff. So that makes it more of an attractive target for people to... Exactly, yeah. That's why they do it. Yeah, they certainly want the money. And of course, I ask for it a little bit by saying that we will give you money if
you report it. Well, you're incentivizing it because it's important. Yes, exactly. So I think in the end, it is a win for us. But yes, we also get a fair amount of rubbish to work through. People that just want to... They're basically fuzzing every bug bounty they can and using whatever tool they can to scale horizontally. And LLMs are a tool to scale horizontally. I can email with Daniel without having to really think about it. I can try to get this bounty. Exactly. I think it works like that too. So they can run their tools against a lot of projects and they just fire off those. So if they're lucky, they will get some bounty from X percent of those projects. So many negative uses, I suppose, right? That are out there. It's like you want to see the positive, but then you also see this kind of thing, which is good when used for good. But then it's like, well, you're just inundating folks with crap. Or versions of it. Pretty good-looking crap too. That's the worst part. Minutiae, I don't know. Like more to sift through. Got lipstick on the pig. Exactly. It's a little bit of a denial of service attack. At least if it would scale up. Now it's just been... I mean, it's been manageable because it's not gone to a level that is unmanageable. It's early days, Daniel. I mean, this is going to scale. Yeah, I mean, exactly. I can easily see how it could scale up and become a much bigger burden and nuisance than it is now. Have you considered fighting fire with fire? I mean, you have these tools at your disposal. Yeah, but I really don't think I can do that in an effective way. Well, that would go against some of the BDFL things you have in place. I'm also very wary about how I appear and what I respond to, because I know that if I'm being, I don't know, unpleasant or unfriendly, people hold that against me. And
whatever we say on the internet, it'll live forever. So I always make an effort to stay on the right side of how to behave, and be friendly, and just be accurate and direct. But then, of course, I also stop it as early as I can. Well, yeah, maybe just as a higher quality or more sophisticated filter. Before things reach you, you know? Yeah, exactly. So since we're using HackerOne as a service for this, they can actually enable more filtering. But in the past, I've also had problems with that, because then suddenly, when I've had people actually submit a bunch of legitimate reports in fairly high frequency, they got caught in that filter. So I've also had it the other way. But I think it's also one of these things we just have to learn to adapt to. So, yes, right now we get some problems with it. We will add more filtering, raise the bar a little bit to catch these a little bit better, earlier. So we'll see. I'm sure we will find a fairly good balance. I do think some of those things that you could do negatively in that scenario might go against these BDFL guiding principles you've laid out, which, as you mentioned there, you want to come off kind, open, and friendly. That's number one of your 10 guiding principles for curl as a BDFL. So does it make sense to kind of go down that list or kind of dig into some of that? Is it important enough to you to sort of put out there? Yeah, absolutely. No, but I think that is one of the things. I want to make sure that I'm not dismissive. I mean, again, because it might not be an AI. At least in the first post, I don't know that it is an AI. It could just be that guy who just got an AI to help him actually phrase it or produce a proper report. And that's fine. So I don't want to be dismissive. I actually never want to be dismissive. I try not to be. I mean, of course, it's a challenge when people are explicitly and deliberately very rude or just being very unfriendly. And it's hard to not just bite back immediately.
But I try not to, and just stick to the facts, answer the actual factual or technical question or details in whatever they're talking about, and then not drag it on and stop it there. So tell me more. What is this BDFL? For those who haven't heard: BDFL, Benevolent Dictator for Life. Daniel has been and continues to be benevolent, and a dictator, and spending his life on curl. Full time now for, man, half a decade, I think, something like that, full time on curl, which is awesome. Yes, exactly. A little bit over five years now. Yeah, a little over five years. And so as a BDFL, he has, like Adam said, created these guiding principles, which means he's thought deeply about it. What are some of the other ones? So Adam mentioned the first one, be open and friendly. What else do you aspire to as a BDFL? Yeah, what do I do? So, of course, people like to say that, sure, I'm a BDFL, which means, sure, I'm the dictator, so I could do whatever I want in the project, in theory at least. But the difference between being a dictator in a software project and in a country is probably that if people don't like the project, they will just go away and do something else instead, you know, maybe fork it, or at least not participate in the project. So there's every motivation to not be a bad dictator for the project. So no, I don't think I have anything that is strange. So sure, I think, for example, the quality of the products that we're shipping is one of the key things. And I know that that's one of the key things that people appreciate about curl and libcurl as a project, that we ship products that rarely cause people problems. I've had a lot of people mention to me that they never experienced bugs in curl, which of course I think is fun, because we fix bugs quite frequently. But still,
we still work hard on making sure that, you know, it's a good, solid product, and that we don't break behavior and we don't break user scripts. We don't break APIs at all. So stuff like that. So basically, I want curl and the products we ship to keep being, and working, like they actually work now, right? They should be those pillars. You should be able to build whatever you want on top, and they should just continue to work like they have been for a long time, and they should continue working like that. So that's what we really focus on in the project, and I want it to stay focused on too. And then, of course, we're old, we're everywhere, and that makes us get a lot of eyeballs on what we do and how we do things. So that's also why I want us to, for example, do things properly open source-wise. So I want us to make sure that we do open source following pretty much all the best practices you can imagine for doing open source. Being open, being transparent, doing things the way you should do things
open source. So if I would join a project or participate in a project, how would I want that project to be, and how would it be open source-y the way I like it? I want Curl to be that. So a lot of that, and also not only open source-wise but also code-wise and protocol-wise. Because I know that since we have been doing protocols and we are doing protocols, a lot of people will copy the way we do it, either by copying our code or just following the way we do protocols or implement things. So therefore it's also good and nice to be able to say that if you follow our way of doing it, if you do the communication like we do it, then it should be good. Yeah, yeah, yeah. I'd like to commend you on both of those fronts, especially I think on the open source leadership in terms of how you manage the project. Because Adam and I have known you for a long time now and we've been watching your work in the public space for many years. And you really, I think, are a guiding light for so many people, because you have been around a long time and you do think deeply about these things. And you have been able to dedicate so much of your personal time, and now your full time, to this project, and just maintaining it and sustaining it and community leadership. A lot of times I will just look to say, well, how does Daniel handle this? Because you've come across many of the problems that so many people are going to come across. It's like, well, here's how the curl folks handle it. Maybe it doesn't necessarily apply to you one to one, but it's a great place to start. And every project is unique, so maybe it doesn't apply to everything, right? But I think it's also sometimes a big benefit of being an old project and having been around for such a long time, because we have had time to adapt and adjust and do things the way we should, because obviously we didn't do everything right from the beginning. I mean, who does, right?
But if you just stick around for long enough, you get the time and the ability and chances to fix those things: that didn't work, let's do it this way instead, because this is the better way. So over time, a lot of things just fall into place and end up being decent ways to do things, decent approaches, concepts and policies and everything. And you're right about it. The other thing is, there are probably other people who are also making long-term decisions and managing a project for a long time. But like I said, you blog... is profusely the right word? Maybe that sounds like too much. What's the word I'm thinking of? Profusely? He refuses to stop. No, that's not the word I'm thinking of, but we'll just roll with it. Just documentation. I know I'm leading you to the next one, or down the field here. That's one of your other principles, the documentation, how much you write about these things. The fact that we know what your guiding principles are because you've taken the time to write them down. I mean, that's how you lead, right? You lead by example, but you also lead by stating why you do what you do. Exactly. Yeah, I like being able to do that, to sort of inform how I view things. And that's why I think this is so important. And so they sort of go hand in hand. And also, of course, this is not just me saying these things. I actually work on it pretty hard. So I hope that I'm sort of stating the obvious, really, because I've already delivered documentation for this to a pretty ridiculous level sometimes. So it should be obvious that I'm focusing pretty hard on making sure that everything is documented to a very detailed level, for example. Quick follow-up on profusely. I'm now doubling down. I think it's a great word. Pouring forth with fullness or exuberance. Bountiful. Exceedingly liberal. Giving without stint. I think you do blog profusely. I think I picked the perfect word there, even on accident.
I thought it had sort of a negative connotation, like almost too much, but I'm not seeing that, at least in the dictionary. Maybe if I go to Urban Dictionary I'll want to redact that, but let's stick with it. A profuse blogger. You also want to remain independent. I think this one might be, of all those, maybe the hardest, right? Because this isn't necessarily entirely your decision, is it? No, it's not entirely my decision. And I think I've been a little bit fortunate that it has been possible to do it this way. Navigating how to do open source, and how to actually do it for a living, is not easy. And it's not obvious how to do that. So I'm not judging anyone who's making decisions on how to do that and making tough choices on how to navigate forward to actually being able to sustain it somehow. Because you need food on the table at the same time as you want to produce something. So I think I'm in a fortunate position where I can do it like this. And I'm happy to continue this way. And since I've managed to do it for this long, I imagine that I should be able to continue doing it as well. And I think it's pretty good, because it means we don't have to obey anyone. We don't even have an umbrella organization or anything. No company, no one actually decides what we need to do. We just decide what we need to do depending on what we think our users want, or where the internet goes, or what an internet transfer really is. And we can just base it on that. And I think that is good, because we don't have to bend to artificial whims. Well, friends, I have something special for you, because I made a new friend: Tamar Ben Sharkar, senior engineering manager over at Retool. Okay. So our sponsor for this episode is Neon. And as you know, we use Neon, but we don't use Neon like Retool uses Neon. Retool needed to stand up a service called Retool DB. Tamar can explain it better in this conversation. But Retool DB is powered by Neon. Okay. They have a service called Fleets. It is a service that manages enterprise-level fleets of Postgres, serverless managed fleets of Postgres, and Retool DB by Retool is powered by Neon Fleets. Tamar, take us into how Retool is using Neon for Retool DB at large.
So one big problem we had with Retool: we wanted users to get value, production value, as soon as possible. And connecting to a prod DB in a new tool is not something that people will do lightly, but they're much more likely to just dump a CSV into Retool. And so because of that, we said, okay, well, what if we just, you know, host databases on behalf of users, and then they can get spun up really fast. And it really started to take off. The problem we had is we didn't have a big team. We couldn't set up a new team to support this feature. So what do we do? And so we were looking at what the options are out there. And, you know, we found Neon. Neon is a serverless platform that manages Postgres DBs. And so, like, okay, that's interesting. Let's look into it further. What's really unique about them is you really only pay for what you use, which is exactly the case that we have, right? Because we want to provide this to everybody, and not everyone uses it, and not everyone uses it all the time. And so if we had to, you know, manage a bunch of RDS instances ourselves, for example, we'd basically be all in for a team to support it, to figure out, okay, what are they on? How did we do? Try to have some kind of greedy algorithm to get all the data into the fewest instances possible, right? This is now a hard problem. And that's not a core value, right? The core value is providing that database. We're not an infra team. We don't want to get into that game. I think what's really great is that, okay, one big risk when you think of going with a third party is A, the cost. We're giving this free to all users. We have 300,000 databases right now, right? And especially as we were rolling this out to begin with, we didn't know for sure how people would respond, right? And, you know, we can't all of a sudden have, like, a couple million dollars at the bank for this, without seeing the activation that it has on our users. So it's kind of obvious, but what was the appeal of Neon? What was really appealing about Neon? It spins down to zero. And so because of that, it really reduces the cost. So really, it's exactly only what we spend. And there's actually not a way to spend less money, even if we were hosting it ourselves and you removed all the people cost, right? Because let's say we use something like RDS: we'd have to figure out ourselves, right, basically what Neon is doing, right? How to bucket all the instances together, how to bucket the usage to have as few instances as possible, right? To scale up and down, depending on what's going on. And now we don't have to worry about any of that part, but we still get the cost benefit. And so really, you know, it's a win-win. Okay. Win-win. Always a good thing. I like win-win-wins, but okay, fine. Win-win. If it were not for Neon and their offering of fleets of Postgres, and how they're essentially your serverless Postgres platform, where would Retool be at with Retool DB without Neon? Oh, we would have to have at least a fully staffed team, you know. On-call burden would be a challenge. And I think we'd have to spend a lot of time on making it sustainable. And that's a whole other set of concerns that we don't ever have to think about. First of all, it's a team of engineers, right? Which is not free. So add up everyone's salaries, right? So let's say a team of, you know, eight to ten people, easily, only focused on this. And then it's thinking, well, does the revenue of
RetoolDB offset that cost, even just the engineers? So, you know, that's step one. But I think even before then, you'd have to set up this team before you even had a product. You know, databases, and having them the way that Neon has them, right? Like suspend to zero, having warm spares that are ready instantaneously when you log on to Retool. Those things aren't free. And even if we tried to do, like, an MVP, there's a kind of basic functionality that needs to exist that we'd have to build from scratch. And that would be a huge commitment. And I think it would have come out like a year later, because we'd have to do a lot more validation to know that it would have been worth it before we started. Here we were able to quickly try it out, see that it was effective, and then grow it from there, because the cost was very low. And that really gave us a lot of flexibility for also testing out different features and different flavors of it. So RetoolDB is fully powered by, backed by, managed by Neon. Neon Fleets: neon.tech slash enterprise. Learn more. We love Neon here at Changelog. We use Neon for our Postgres database. We enjoy every single feature that Tamar mentioned for RetoolDB, but we use it at a small scale, a single database for our application. They use it at scale. One single engineer propped it up and manages it. That's insane. They would have never been able to do this without Neon. RetoolDB would have cost more and may not exist without Neon. Okay. Go to neon.tech to learn more, or neon.tech slash enterprise. I wonder what makes you do this, like really, at your core as a human being. Like, I get that you are probably in the best intellectual space to do it. You've done it for so long. But sometimes we have these, call them "I do this because" reasons, right? Maybe you do this because you want to see the substrate of the internet be something pliable and usable, but like,
really, why do you do this? Yeah. Oh yeah. To this level of detail and this level of quality. Now, I think the why has changed over time. And now I want to use this platform that I have sort of already reached, and I've already done it to this level. Now I want it to remain this, and I want these products and this project to keep on facilitating doing internet transfers, and making sure that we do internet transfers properly and that everyone can do them in an easy and good fashion, stable transfers done in a good way. So then it just becomes a personal thing, that I want it to continue to be that, and to continue to be a really good choice for all of these different services and platforms and tools and devices and languages, because it's really cool. And I want it to remain that good and efficient. So it's a little bit, it isn't harder than that. I just want it to be that good, and I want it to remain at that level of goodness. What do you do? I don't want to speak morbidly by any means, and your involvement in the project doesn't have to be a morbid scenario, but what do you do, and what are you doing, if that's your guiding principle personally, so that it remains? What are you doing so that you don't have to be the BDFL forever, in any scenario? First, I think what I do now is just leading how I want it to be done. So I'm leading by example, right? This is the way I think we should do the project, and how we should do protocols, and what I think we should do. And then I pretty much just make sure that if I would go on an extended holiday tomorrow, everything necessary is already available, as in documented, provided, written down. And I mean, I don't have any magic handshakes anywhere. There's no secret store that is necessary for anyone. I mean, sure, there are some credentials for logging onto servers and stuff like that, but there's no project secret anywhere. There's nothing hidden. Everything is out there.
Everything is documented, even down to the level of how I do releases, how we do things, how we do governance in the project. So everything is there. So that's how I want it to be done, right? Show by example how I want things to run. And if I don't run it, everything is there for someone else to do it the same way, or another way. But if I go away tomorrow, there's nothing preventing anyone else from taking over tomorrow. Right. What about the last mile of that contingency plan? Like the credentials, the server logins, the DNS, the password to log into the registrar? Yeah, I have those sorted out too, but at a more personal level. So I have more of a will-like situation, so that I have relatives, mostly my brother, who's into computers pretty much like I am. He's sort of my next of kin. So if I would actually die, he would have access to all of that tomorrow. And then is there anybody else on the Curl team or community where it's like they're an obvious, Daniel's gone now, your brother has the credentials, hey, Daniel's brother is going to pass those credentials on to this person who's going to take over leadership? Is there anything that's obvious like that, or is it not so clear? It's not so clear. We don't have any, there's not a dedicated heir or anything. So I wouldn't... No, I just mean like somebody who you'd have in mind, where maybe if you're on your deathbed and you're talking to your brother, like, hey, give the credentials to this person. Yeah, but my brother also has push rights in the project. Okay. So maybe he's the guy then. So he's already a maintainer in Curl as well. So he's sort of tangential. Well, he is only, you know, marginally involved, but still. Was I giving him too much credit? No, but I don't want to dismiss it either. But I mean, he has maybe like 50 commits. I have 18,000. So, right. But still, he's around and he knows the project. He knows a lot. He understands these things. Yes, exactly. Well, that's good to know.
So you definitely, you've done your legwork on, I'm calling it contingency planning, legacy planning, I don't know what you call it. I think so. I hope so. Yeah. Well, you never know, you know? And so these are things that I think people think about more as we get older. We start to think, well, you know? And actually, however silly it sounds, I get this question very often. Do you? Yes. So it's very good to actually have it sorted out, because then I have a good answer when people ask me about it. Right. And people ask me about it because, yes, I'm the BDFL, and people, I think actually to a slightly larger degree than I deserve, people call it a one-man project or something. I don't think it is, because we're quite a number of people who actually contribute. I do about half of all commits, but there's a significant amount of changes done by others. But anyway, people still have that idea. They think that about Curl, and they think it's me. So if I'm gone, then surely Curl will die, right? So that's sort of the connotation of the question. Can you explain the financial arrangement? How did you get to financial independence, and exactly how does it work for you? Sure. It's actually pretty easy. So the Curl project is completely separate and standalone. So I don't do business with the Curl project. I support Curl stuff, but I do that separately. So I sell Curl services via this American company called wolfSSL. And my primary Curl business is just support on Curl, pretty much a little bit of an insurance thing. I sell a number of issues per year, and I have a guaranteed response time. So my customers, mostly actually big American tech companies, pay me a yearly subscription, basically, and they file issues and I help them when they have issues. So I make sure that their Curl use is uninterrupted and works well in their products. Usually companies where Curl use is deemed important enough for them to do this.
Even though, of course, I usually try to tell a lot of my potential customers that it's much better if they pay me to do that, rather than spending their developers' time trying to figure out how to fix Curl, because I can probably do it much faster and much cheaper than them having to waste engineering time on figuring out how to do things, or even just finding and fixing bugs. So that's what I do. And then of course, in addition to just support, there's also feature development on contract, working more closely with the product development teams on how they use Curl, and a lot of debugging their applications that use Curl, because very often it's not a problem with Curl but in how they use Curl, or in the area in between; it's hard sometimes. Having an NDA and contract in place makes them feel safe to share their code with me. They sometimes wouldn't dream of submitting extensive explanations in a public bug report, because they're scared that their not-really-that-special source code would be something special for the rest of the world to see. So it works pretty well. Still a challenge to sell support on something that is free, because it's free. So why would I pay you when it's free? Well, we had a conversation in Twitter DMs about this. The thing is, actually, the first DM you and I had, Daniel, October 9th, 2021. And I said to you, I said, are you aware of how crucial Curl is to the Netflix architecture?
And you said, I'm not. I'm aware they use, or used, libcurl for a few years, but I'm not up to speed with their usage now. Thanks to a prolific influencer slash developer slash persona in our industry, the Primeagen, which I'll link up in the show notes, there's a video that says working at Netflix, what my job was like. And it essentially goes through the Netflix architecture and says the TV OS they ship submits requests to Amazon via HTTP, via da da da da, Curl. And so I'm like, well, this is obviously crucial. You know, I just responded like, hey, if you check out this video at this point, you'll see that it seems to be a core underpinning. I mean, if every time I push play on Netflix there's a Curl call to AWS, that's pretty crucial, wouldn't you say? Yes, yes, of course. But you know, I've come to the point where, sure, there are what, 200 million Netflix devices. That's just a drop in the ocean when it comes to Curl installations. I don't mean to sort of get on my high horse about that, but sure. But then there's also Roku, and there's Apple TV devices. They all use Curl
and YouTube has it bundled in the YouTube app on all your phones. So everything like that is using Curl. And nowadays every TV has it. Every car has it. Pretty much every printer has it. Fridges, dishwashers, washing machines and, you know, trains, motorcycles and keyboards, watches and robots. And computer games: it's really big in a lot of high-volume games, I guess because they want it to be portable. I don't know exactly why the games like it so much. So, yeah, that's why I say, I mean, I actually think I underestimate when I say 20 billion installations. It depends a little bit on how we count. And since Curl is not provided as an API in mobile operating systems, a lot of the mobile apps ship their own Curl installations. So on an ordinary mobile phone, it's installed like 5, 10, 15 times, because a lot of the high-volume apps have their own installations. All right. They just bundle it. Well, you know, we call that, Daniel, we call that total world domination. I mean, where isn't Curl? Yeah. Where? The bottom of the ocean? And we know it was on Mars, right? We knew that much. Yeah, exactly. Yeah. I actually have this discussion sometimes with the commercial or potential commercial customers. It's actually at the lower end, in the really, really tiny devices, because then they think Curl is too big. Are you going to have a Curl light? Are you going to have an embeddable version? Yeah, I actually have an effort I call tiny curl, which is pretty much a way to scale it down as much as possible. Pretty much disable, well, optionally disable as much as possible to make it as small as possible, to fit on devices with just a few megabytes. Well, that version hasn't found critical mass, or people don't know about it. How come these people aren't using it then? Do you know? Well, they are using it. I actually have customers using it. So it is happening. But then I end up in a fun situation, usually, because when you talk about people writing code for really, really tiny devices, they usually end up with an example HTTP client from some vendor, or off their development board or something. That's pretty much how Curl started 25 years ago: 200 lines of really, really stupid code. And sure, it might work for them occasionally, or mostly. And that's the competition then, to say, oh, look at this code, it's only 200 lines. Why is Curl 50K when I build it for my target, when I can use this 2K code? So that's the struggle I have then. And then I have to talk about API stability, protocols, blah, blah, blah. Yeah. 48K of security hardening in there, and all kinds of things that you're not taking care of. Yeah. And you know, which API do you get, and which API will you have in 10 years? Will you have the same API then, or won't you?
And stuff like that. But you don't have to sell it too hard, because you already have total world domination. So you've only got 20 billion devices. I mean, Adam just told you it's at the core of Netflix, and it's like, drop in the bucket, drop in the bucket. Sometimes it feels like, sure, if you want to argue about going with just a silly example code, or you want to go with Curl, sure, you go with the silly example code and call me again in two years when it's broken and you want some serious help. I mean, it ends up a silly discussion oftentimes. So no, I'm not losing sleep over that. But my tiny curl effort is basically just a way for me to offer something that at least has a much smaller footprint than the regular one. Well, the conversation we were having in the DMs, at least, it was one, to bring to your awareness, if you were not aware, the level of usage at Netflix. And I imagine that they're just one of many, but they're also fairly well-funded, fairly successful. They're the N in FAANG, whenever you say that term. We were talking about really how you sell support. And I was asking you, like, are you intimately involved in selling support? You know, wolfSSL has salespeople. You said, yes, you were. And I was like, well, you know, is selling support a priority? And I think we've talked about this a little bit the last time we talked. wolfSSL was early for you. Now it's sort of three years at least later. How does it work, selling support? Like, how challenging is it? Do you have a large pipeline? What do you optimize for? I don't know, answer any of these questions you like. There are things in motion that I should not talk about. But selling support for curl is, in general, actually hard. Because it's an old, established, very functional, and actually rock-solid product. So a lot of those people, sure. And I mean, you mentioned Netflix. And Netflix is a fun example.
They don't use, I don't think, at least they didn't use to use Curl for the actual film, the movie transfer. They just used it for the UI and things related to selecting whatever you're streaming. But there are other companies, I know at least two, that have a much, much higher volume of Curl use. Like this company with the blue logo. I know I talked to some people there like seven, eight years ago: they did one million Curl requests per second, on average. And that's a high-volume use of Curl. You would imagine that those guys would be willing to buy Curl support. But so far, I have not managed to sell support to them, because it just works for them. They think, why would we buy support? It's been working for, I don't know how many years, and they think it's good. So it is a challenge. But yes, I know it's used everywhere. And of course, I'm getting help, because I'm worthless at business myself. I realize my limitations here. I'm not the business guy. I'm not doing most of the sales conversations. I leave that to actual salespeople. But sure, we try to interface and talk to pretty much anyone we know that is using Curl in high volumes and in sort of mission-critical situations, where it is really crucial for them that Curl continues to work. And it goes up and down. Sure, I mean, I still work on Curl full-time, right? So it at least pays me so that I can continue doing this. But we also explore other ways to charge money for Curl-related things. So maybe just selling support is not the only way we should do this, and maybe it's not even the best way, I don't know. I alluded to it, I have some other things in motion. There will be some other things going forward, Curl business-wise. So I think it is a pretty good brand. It's a pretty good product. I think there should be ways to sustain the business around it.
So two more of your BDFL principles seem like two sides of the same coin. One is to stay on the bleeding edge, and the other one is to keep up with the world. And they seem to have different angles, but they sound very similar. Can you explain those two? Yeah, they're really similar. But what I really like, because I think it's a really fun position, is to be on the bleeding edge so that we can be early with the development of new protocols. Like, we supported HTTP/2 really early, actually, and HTTP/3 also very early. Because it helps, it becomes a good sort of spiral. We get it in early so that the people who actually implement and deploy servers and proxies can use Curl to exercise their code and try their servers, and they can then help us make our code better. Because I think that's fun. And it makes us polish and streamline our implementation early on, and sort of make better decisions earlier in the process on how to do protocols, really. And also, of course, it's just fun to have that position, so that we can help others make sure that they get their implementations done better. So that's what I mean with bleeding edge, because I think it's a fun place to be at. And listening to what the internet wants, that's what I mean there: I want to make sure that we offer the ways of doing internet transfers that the internet seems to be wanting. You know, the protocols Curl supports, I want them to be done in a secure way, in a modern way, the way we want to do it. So if you want to do internet transfers for your application, say you're implementing a car infotainment system and you want to transfer data for that, you want it done in a modern way, which means the current protocols, current security, the current mindset of how to do things. So that's what I mean with keeping up with how to do things. That means the cipher suites and TLS versions, and versions of HTTP, and how to do authentication, things like that.
Because I think it's important, because I think the internet is moving quite fast, almost all the time. So if we didn't keep up, we could easily just be left behind. Because I think we have a role, and that role should be providing a service and doing things in the proper way. And also, because a lot of users are just using Curl already, we make sure that a lot of things are doing correct and secure internet transfers by using Curl. So that's also a reason to make sure that Curl does things properly. We help secure the internet, really. What's up, friends? I'm here with a new friend of mine, Jasmine Casas, product manager at Sentry. She's been doing some amazing work with her teams over her many years at Sentry. And her latest thing is just awesome: user feedback. You can now enable a widget on the front end of your website, powered by Sentry, that captures user feedback. Jasmine, tell me about this feature.
Well, I'm Jasmine. I am a product manager at Sentry, and I'm approaching my three-year anniversary, so I've spent a lot of time here. I work on various different customer-facing products. More recently, I've been focused on this user feedback widget feature, but I've also worked on session replay and our dashboards product. With user feedback, I am particularly excited. We launched that a few weeks ago. Essentially, what it allows you to do is it makes it very easy to connect the developer to the end user, your customer. So you can immediately hear from basically who you're building for, your audience. And you can get a good understanding of a wide range of bugs. So Sentry automatically detects things like performance problems and exceptions, but there are other bugs that can happen on your website, such as broken links, or a typo, or a permission problem. And that is where the user feedback widget comes in. It captures that additional 20% of bugs that may not be automatically captured. I think that's why it's so special. And what takes it a step above these other feedback tools and support tools that you see is that when you get those feedback messages, they're connected to Sentry's rich debugging context and telemetry. Because often, I've seen it myself, I read a lot of user feedback: messages are cryptic. They're not descriptive enough to really understand the problem the user is facing. So what's great about user feedback is we connect it to our replay product, which basically shows what the user was doing at that moment in time, right before reporting that bug. And we also connect it to things such as screenshots. So we created the capability for a user to upload a screenshot, so they can highlight something specific on the page that they're referring to. So it removes the guesswork about what exactly this feedback submission or bug report is referring to.
Now, I don't know about you, but I have wanted something like this on the front end pretty much since forever. And the fact that it ties into session replay, ties into all your tracing, ties into all of the things that Sentry does to make you a better developer and to make your application more performant and amazing... it's just amazing. You can learn more by going to sentry.io. That's S-E-N-T-R-Y dot I-O. And when you get there, go to the product tab and click on user feedback. That will take you to the landing page for user feedback. Dive in, learn all you can. Use our code changelog to get 100 bucks off a team plan. Now, what she didn't mention was that user feedback is given to everyone. So if you have a Sentry account, you have user feedback. So go and use it. If you're already a user, go and get it on your front end. And if you're not a user, well then, hey, use the code changelog. Get 100 bucks off a team plan, that's three-ish months, almost four months. Once again, sentry.io. What's your take on the internet these days? The state of the internet. I mean, do you like it? Do you dislike it? Where are we? Because we've gone places and now here we are. Yeah. Like if you were going to give the Daniel Stenberg state of the internet, how is it looking out there? I think that's a very... Why did you have to ask me this question? All right. After an hour, you hit me with this. It's a tricky question, I think, to answer, because it goes up and down a little bit over time. And I mean, sure, sometimes it seems that we're going in the wrong direction, sometimes in the right direction. So it comes and goes, I think. I don't think we're necessarily going in a bad direction, but it's certainly not always in the good direction either. I mean, with encryption and all sorts of governments and China and firewalls and whatever happening everywhere.
So I think, yeah, there are certainly bad signs every now and then, but we usually seem to get around them and come out the better way. It seems like a struggle that just continues and keeps on. We just have to keep an eye on where we're going and stay vigilant, like with these attempts at backdooring encryption algorithms, or inventing crazy things because of thinking of the children or whatever. But I'm generally sort of an optimist. So in general, I think we're going in the right direction. One reason I ask that is because we had Paul Vixie on the show a couple of months back, who's been instrumental in the DNS area of the internet stack, if you will. And Paul seems more pessimistic. I mean, he described DNS as a series of patchworks stuck in the past. I've had my interactions with Paul. I know exactly where he stands. So yes, compared to him, I'm certainly the optimist. Okay, cool. All right. Well, he painted a pretty bleak picture. And Adam and I, we kind of get blown in the wind a little bit on these things. We have our vantage point as well, but we aren't down in the weeds of any particular thing. And you are clearly on the bleeding edge of the technical weeds of internet transfer, which is pretty much what the internet is for. So I'm happy to hear that you're slightly more optimistic than Paul is. But there's also the social side of it: the silos of the platforms, and now this revolt against social media. And then we have the rise of the LLMs. There's a lot of things happening at the application layer that are also the internet, but they're not down where you operate. No, exactly. And I think... I mean, that's why I kind of refrain from commenting on that, because sure, we can talk about that, but it's not really about internet transfers. That's really the human, social side of things. And I agree.
Even there as well, we seem to be going both in the right and the wrong direction over time. So sure, banning social media or not. Have you looked into ActivityPub? Are you interested in that protocol, or this whole idea of a federated social network? Well, I'm pretty much only on Mastodon and not on Twitter anymore, but I haven't really looked at the protocol. I have a feature request for support for the message signature algorithm they're using, because apparently there is an RFC 9421 or something, and apparently the Fediverse is not using exactly that. Okay, fun. That's as far as I've come into the details of the protocol. Okay. And we don't support that either. But can you post to your ActivityPub stream via curl directly? Of course, there has to be some sort of a server involved anyways, but... Yeah, you can do it with curl, you just have to do some massaging of some headers and stuff, because I've seen people do it with curl. So yes, it's HTTP in there. So at the social level, though, I mean, we respect your opinion at all levels, Daniel. I think I was kind of reading into that with keeping up with the world. I was kind of thinking, stay on the bleeding edge: there's your technical side. Keep up with the world: that's kind of more of the social side. Maybe that's not the way you were framing it. But in terms of the Fediverse idea, do you think that's a good idea? Are you a proponent? Or are you just a user who's like, well, I don't want to be on Twitter and so I'll use this, and I'm not really happy? No, I really think it's a good idea. I think when it comes to social media, it is certainly one way to manage the load, and to manage people and content in general, so that we don't have to have those central silos deciding what is right and wrong, what is true or false, what is allowed and not allowed. We can spread it out, just like we've done with email since the beginning of time.
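Since posting to the Fediverse with curl came up: as a rough sketch, the everyday way people do it is through a server's REST API rather than raw ActivityPub. The instance URL and token below are placeholders; the `/api/v1/statuses` endpoint is Mastodon's documented API, and the server handles the federation (and the message signatures Daniel mentions) on your behalf.

```shell
# Hypothetical values: substitute your own instance and an access token
# created under your account's application settings.
INSTANCE="https://mastodon.example"
TOKEN="your-access-token"

# Mastodon's documented endpoint for creating a status. --data-urlencode
# makes this an HTTP POST with a properly encoded form body.
curl -s "$INSTANCE/api/v1/statuses" \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode "status=Posted with nothing but curl"
```

Talking raw server-to-server ActivityPub is the harder case, since that is where the HTTP message signature requirement comes in.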
So I really believe in that, even if I understand that having things like this is also probably not the businessy side of things. So I figure it's going to be hard to do that as a business. That's why no business does it because they like the silos and building walls around it so that they can extract the maximum amount of money out of it. But so I guess that's also why it's going to be this struggle between the technical benefits of doing it like that and the financial benefits of doing it the other way. So which way is the best way? I mean, you also have to have money involved because it needs to be run, right? I think we're already seeing that because if you read at least between the lines, some of the bigger Mastodon instances now have a huge bill to pay every month because they get a lot of traffic. And how do you finance that? It was possibly easier when you had one silo called Twitter, but it's going to be harder when everything is federated and distributed. Yeah, that's why the Threads integration is interesting because it's such a big business. The thing that's interesting to me with Fediverse stuff right now is how a lot of the small publishing platforms are adopting. So you have Mozilla, you have Flipboard, for instance, Medium. Then you also now have Ghost. And these are like small businesses in relation to big tech. But then you have this gargantuan giant thing that is Threads. You know, glomming itself on. And you're like, you know, some people love it. Some people hate it. Maybe it's validation. Maybe it's evil empire. And we don't necessarily have to get our opinions on that aspect of it. But there is the business side kind of coming to the Fediverse. You're right. Yeah. So that is interesting. I guess the challenge is to see how that goes over time. Yeah, exactly. Threads versus Fediverse. I guess it's fine as long as they are the elephant and the Fediverse is the ant. 
I mean, they don't care about the ant here, because it's just an ant on an elephant. But if the growth of the Fediverse explodes and Threads' doesn't, maybe that changes the equation. Or maybe not. I don't know. You're right. It's an interesting balance there. They're at least trying it out. So I'm just thinking about your thoughts there on the business coming to the Fediverse. What does that actually look like, given that it's pretty much been disparate users there, really folks revolting against big tech social media, right? Is Threads welcomed back in there, while they may or may not support different protocols? How does business hop into this Fediverse world with open arms? Will it actually improve it? Is that what's required to get to the feature set that improves the Fediverse for mass adoption beyond where it's at? And how do you get business in there without sacrificing something, right? I mean, you're not going to see anyone wanting to add ads in there. But for the business, sure, you can get some kind of marketing value out of being there and everything, but as the network grows, it's going to be more and more expensive for everyone involved. So somewhere, someone has to get money in there. Well, businesses go where eyeballs are, too, right? That's where I think the precipice happens. Yeah, but you don't only need those eyeballs, you want those eyeballs to do something; you want to convert them somehow, exactly. Well, maybe you can do that by posting. I don't know. Gosh, they're just everywhere. I mean, whether you're selling podcasts or ads, whether you're selling socks, or a subscription to a journal or an author, you want people to know that you have things to offer. And the internet was a great way of doing that for a long time. And then social media came around, and Facebook said, actually, the eyeballs are over here. Come build your following. And people did that, right? And then they sold you access back to the following that you built.
And then we all got burned by that. And that happened at Instagram and that happened to Twitter and that happens at LinkedIn and all these other places. And people are just sick of that. They're just like, if I build up some reach so that I can tell people what I'm up to and then they can visit my website and buy some socks, I don't want someone intermediating that relationship and then just selling it back to me. We don't want to get burned again. We got burned once. And so let's not do that. And so I think that's attractive for businesses. The question is, can the Fediverse actually provide the reach to the people who are there to see what you're up to and to actually talk with you and those things? So far, part of it is it's not siloed, but it's federated. And so it doesn't work the way the silos work. That's why they have the better user experience. Exactly. So I guess in one way, it can't be prevented. Since you can just invent a way to just nag about your socks all day long.
If the other instances just think you're too annoying, they will just block you or just not federate with you. And that's fine. But so, yeah, I think it'll be interesting to see if the businesses and the brands will show up in a significant amount. Well, the gateway is always content, right? If you create compelling content that matters to people and you take it to the people, right, rather than saying, hey, I have a website, come check out my website, the content's here. You take the content in unique ways to the places where your future potential possible buyers would be. And you do it in a way that is not just, hey, I make socks, come buy. But it's more like, wow, we work directly with these cotton farmers in XYZ. And this is how we sustain that region. And this is how we've enabled an uplifting community there, or whatever it might be. And that's their brand story. That, I think, is how brands engage, obviously, in meaningful ways. I think you're right. But given what we've seen on the web, sure, that works for a subset of the content producers. They think like that. The other guys, they think, hey, SEO is a good thing. We should just pepper our website with weird keywords, because this is the way they will find us. Not by producing content, because just spewing out keywords is much cheaper and faster than actually doing that good content. Absolutely. And you could use an LLM to generate those, too. Hey, I have a business in XYZ sector. Can you give me keywords? And it'll give you keywords all day long. And then in the end, if you then use your LLM and sell more socks than the guy who did that nice content for his socks, what kind of incentive is that? Call that an SDoS. That's a sock denial of service. That's what that is. Yeah, so there we go. We're kind of good, kind of bad, right? Kind of promising, also fraught. I mean, none of it's straightforward.
And that's why I think we're at a very interesting point in internet history, because we have had a bit of a revolt. I think many people's eyes are open. You look at the backlashes on Reddit, the backlashes on Stack Overflow to these content deals; the users of the internet are not very happy with their platforms right now. And so people are willing to maybe suffer a little bit of reduced user experience, and maybe a fractured reach, in order to have more autonomy and a little bit more ownership. So maybe there's a chance. I don't know. Yeah, and at least my impression is that people have started to learn that maybe we don't need everyone to be in the same place either, right? So maybe it's better, because we might get better content by not trying to be where the entire population of the world is, because that's not going to scale. It's not going to work, really. Right, it didn't work. It hasn't worked for so many people. Right, every time someone tries, it fails eventually. All right, well, there's your state of the internet report. We dragged it out of you. We dragged it out, Daniel. You know, I'm somewhat optimistic as well. I think that there's obviously problems. Some seem insurmountable. Some are larger than others. I think when you have a worldwide network, and then you have individual countries operating on a worldwide network, you obviously have a lot of concerns that don't cross international borders, but they cross the network borders. And so that's always problematic. And it was from the start. It's just that we've matured into where the stakes have risen, to where everybody cares more. And so nation states and large tech companies are starting to try to establish themselves and make their rules fit everywhere. And it's messy out there. Yeah, and I think we see some troubling tendencies there sometimes, when you try to make your borders mean more on the internet.
Saying that suddenly, within our country, these are the rules for how we do things; it's going to be really, really messy if we try to enforce that. Sure, China is an example, but even more so when the EU comes up with things, or the US does. And if we're actually going to try to enforce having different rules for different countries, it's going to be really hard to do things. But at the same time, I think sometimes we see this as a danger, yet it hasn't happened as much as we maybe think it might. Well, a good example is the EU with Apple, and the way Apple approached their compliance with a lot of the EU regulations was localized to the EU. And of course, as an engineer, I'm in there with the Apple engineers thinking, how fractured is this code base now, where they have very specific and completely different rules based on region? How many if statements are in this code base that don't need to be there? Yeah, they're just there for regulatory purposes, right? Exactly. And the question then: sure, they did that for the EU, but are they going to do that for more areas as well? Or are they going to hold out on that? Because you would otherwise imagine that, like the GDPR, it would pretty much affect how they do business in the whole world instead. Yeah, because that's just more convenient. Or like they had to do with their hardware, right? With the USB-C port, they just put it in all the phones. But when it comes to software, they're like, no, we're not going to do it for everybody. We're going to do it just for the EU. I guess that tells us where there is a lot of money involved. Yeah, it costs less to put in all the if statements than it does to actually do it for the whole world. It's worth complicating the software to that significant a degree just to keep it. Yeah, good old-fashioned tech debt. Do you have time for maybe a can of worms, potentially? Hit me. Well, Curl has verified.
You know this because you wrote the post and you've done the work. But I think given the install base, and the fact that curl is pretty much everywhere curl could be at this point, people will question whether or not there's ever a possibility of curl being XZ'ed. I don't think so, given the things you've done. I think what's important to talk about is the ways, and maybe some of this is in your BDFL manifesto or Ten Commandments or however you want to frame it. I don't think so necessarily, but in what ways are you working to bolster curl's security so that you never suffer even a social engineering attack? How do you protect yourself? How do you protect curl and what it is? First, of course, the XZ attack was pretty amazing, both in the engineering that it took to do it, but also in how they selected the target: both a project that was ripe for their attack, but also technically the correct project, in that they had payloads in Git, for example, that they could infect with encrypted binary stuff. So they could hide a payload in Git. So they did their due diligence really well when they selected that target. Not only were they a skilled team, they selected the perfect target for that. I don't think curl is as perfect a target for such an attack. So if we start in that end: one way that XZ was good for them is that they could insert huge payloads that were just encrypted, because they wanted binary blobs to test their compression, or failed compressions, or whatever it was. So one way is we should make sure that we don't have anything hidden in Git. We don't have binary blobs in Git. Everything should be motivated and understandable, right? There should not be any big binary blobs that no one can understand. So you should be able to review everything, and ideally then someone actually reviews everything. I guess that's the old Linus's law: given enough eyeballs, all bugs are shallow. But how many eyeballs do we actually have? Probably not too many, because we all just think that someone else is doing the reviewing. But at least we've had a few security audits of curl. We actually had a few recently, even. Within the last few years we have had two security audits. So at least independent security professionals have actually been reading a lot of curl code and figuring out if we have any hidden issues or problems somewhere. So at least we have that. And then we just do the normal things, like making sure that every change is done as a pull request on GitHub. We review as much as possible. I think I review every pull request someone else provides. I have to admit that I merge code that I write that no one else actually says okay on. Because that's just me. Because I want to move faster than someone else is actually saying thumbs up. So I'm just reviewing my own code. And sure, that's a vulnerability in the process, really. But I trust myself. And in addition to that, we have a lot of test cases. And the test cases actually work, and they prove that curl works to this degree.
So you actually have to do a pull request that actually runs through all those test cases. And someone did the math: in different permutations, we run somewhere around 140,000 tests per pull request. So there's a lot of testing. When all of those tests come through green, we know that all the functionality that we test for is there. So it's really, really, really hard to plant a backdoor or land unintended functionality there. Bugs, sure; stuff that we don't test for could obviously land every now and then, because we fix bugs all the time. But actually landing a backdoor in this code, I insist, is really, really hard. And talking about what people ask me: people quite often ask me if I've ever detected anyone trying to land a backdoor in curl. And it might just be because I'm stupid, but I've never detected an attempt to land a backdoor in curl. And I think that is because it's so hard to actually do. So I don't think people in general try. I think they did it with XZ because that was a different project; there they could do it. I think it's much easier, if anyone actually wants to attack curl, for them to find a bug or exploit something that we already did unintentionally, like a security vulnerability. Find that and exploit that. I think that is hard too, but it's much less hard. And then in the end: if everything we land in Git is tested and it gets reviewed, there should be a very, very slim risk that we actually land something there that is bad. And then, in the XZ case, they added attack code when they produced the release tarball. They actually added code that wasn't in Git. And that's the final step. We do releases the same way: we generate files that we put into the release tarball, and those generated files are not present in Git. So we could actually do the same kind of attack with curl, if I was a malicious release manager.
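That malicious-release-manager risk is what reproducible release builds guard against: if anyone can rebuild the tarball bit-for-bit from the tagged source, a tampered release is detectable. In spirit, the verification is just a digest comparison. A sketch, with a hypothetical version number and the actual containerized rebuild step left abstract (the real recipe lives in curl's release documentation):

```shell
# Hypothetical release version, for illustration only.
VERSION=8.9.1

# Digest of the published tarball...
curl -sLO "https://curl.se/download/curl-$VERSION.tar.gz"
sha256sum "curl-$VERSION.tar.gz"

# ...versus the digest of a tarball rebuilt independently from the same
# git tag, using the project's documented container-based release process.
# A byte-identical rebuild yields an identical digest.
sha256sum "rebuilt/curl-$VERSION.tar.gz"
```

Releases are also GPG-signed, so the signature and the reproduced digest check independent things: who produced the tarball, and whether the tarball matches the source tree.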
And that's why we make sure that we have a reproducible process to produce those release tarballs. Pretty much, nowadays, I do them with a Docker image, so anyone else can produce a binary-identical copy of a release tarball. So ideally, hopefully, someone verifies my release by producing an identical copy of it, to make sure that the process actually works. And if every single step there holds, then there should be no room. And sure, every process, of course, has flaws. We can make mistakes. There are humans involved, and humans make mistakes every once in a while. But hopefully we have enough checkpoints, enough process and procedures, to make it really, really hard. And if one of those tiny things goes wrong, it won't be enough to actually land a backdoor. So I think we have a lot of checks, a lot of processes, and it should be hard to land a backdoor. I will never say never, but it's going to be a challenge. It would take more ingenuity and effort than it did for XZ to land one in curl. It's about putting those hurdles and break points and just frustration points in there, so that if there are enough of them in the chain, then it becomes less and less of a desired target, because it's just generally hard. So, we talked to Jacob DePriest, VP and Deputy Chief Security Officer at GitHub. Actually, if you're listening to this, it's in the feed already. We're releasing it tomorrow officially from our record date, but in terms of ship date of this podcast you're listening to, it's already in the feed. So go check that out. But one thing he talked about was this idea of attestations, artifact attestations. And I know that you play, I guess, in the GitHub sandbox to some degree. They may have even talked to you early on. Are you involved in any way with that? Are you familiar with that feature set they're bringing out? It's in beta, but are you planning to use it? What do you think about it? I have just sort of read up about it. So I don't know right now.
I haven't sort of been missing any feature in that area. That's why I haven't really kept up with exactly what it means and how we can use it. Because I think I have my T's crossed and my checkboxes checked pretty well. I do all this, and I sign my releases. I have had the same signature and my keys published for a long time. So I think I have a lot of these things already covered. I'm not sure exactly what else they offer in this regard, so I haven't really followed it, so I can't comment too much. Can you say the word attestation? Attestation. Oh, good. Because we found the hardest part of it was, you know, the lack of stickiness of the word attestation. Yeah, it's a hard word. I think the idea, though, is pretty straightforward. And the fact that they're leveraging GitHub Actions as a part of it: it's basically a way to generate and verify signed attestations for anything you make with GitHub Actions. So you're obviously deep into GitHub, and you're probably using GitHub Actions in ways I don't know for sure. But we don't do releases with GitHub Actions. We don't do releases with any CI; I do them myself. That's why I don't need it. In our case, all our CI jobs we run there, they're just throwaway machines. They just run tests and verify. They just show a green checkbox, then they're done and we throw them away. So we never use anything that they produce. Therefore, if you attack our CI services, it doesn't matter for us. Sure, you can make our CI jobs fail, or fake that they work. That could possibly be an attack vector. But you cannot affect or inject anything into our releases, because we don't build releases that way. I build them locally in a Docker image on my machine, but in a documented way. And then I sign them and I upload them to my server. That's why you're the best, Daniel. Personal hand-cut releases after all these years. He's still hand-crafting his software. Exactly, 259 of them or something. Wow, good stuff. Good stuff. Well, is there anything burgeoning, anything upcoming, anything that you're up to that you want to make sure our listener knows about before we let you go? I don't think so. When it comes to curl stuff, we're just, you know, chugging along, adding things. Since we do releases every eight weeks, we always just keep adding small things.
It feels like we never add big things, because we always do these iterative, smaller things, on and on and on, never ending. And since we don't break APIs, we don't break users' scripts, so we just continually add and polish things. And that's what we're going to do continuously. So yeah, that's what I'm planning on continuing to do going forward as well. I guess we'll talk to you in three years and see if you're still doing just that. Yeah. Or sooner or later. I don't know. We've got to have him back soon. Now that we have Changelog & Friends, we have another episode every week that we can just bring you on. We've got to get you on more regularly, Daniel, because every three years... I mean, you're better than that. Yeah, we can certainly do a different cadence. Exactly. Maybe annually, at least. Come on. Cool. Well, we always appreciate your perspectives on everything, man. All right. By the way, I forgot to mention something that also happened during these three years: I added a new command line tool to my sort of agenda. So nowadays I also maintain what I call trurl. Talk about hard to pronounce: t-r-u-r-l. Completely impossible to pronounce, but I pronounce it "true-rel". I have to add an e. Yeah, throw an e in there. It's just a small command line tool for doing URL manipulations on the command line. T-r-u-r-l. Just another command line tool to fiddle with URLs, because it's really hard when you write shell scripts and you want to do things with URLs, like, you know, change the query parts or change host names. Yeah, they are tricky to manage in shell scripts, so now I have a tool for that. Very cool. Very much fitting the Unix philosophy here. You can just use t-r-u-r-l.
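A quick sketch of the kind of URL fiddling trurl is for, using flags from trurl's documented interface (`--get` with component format strings, `--set` to rewrite one component):

```shell
# Extract one component, parsed by the same URL parser curl itself uses.
trurl --url "https://example.com/hello?name=alice" --get '{host}'
# prints: example.com

# Rewrite a single part of a URL instead of hand-rolling sed/awk;
# trurl prints the full resulting URL on stdout.
trurl --url "https://example.com/hello?name=alice" --set host=curl.se
```

Because both commands go through libcurl's parser, a script built on trurl sees the exact same host, path, and query that a later `curl` invocation will, which is the parser-mismatch problem described below.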
Yeah, and also another, more important underpinning to that. One of these things that happens over and over: there have actually been several times during the last decade when people have pointed out, written papers about, the problems with having different URL parsers in different layers of your system. Pretty much, if you write an application and you use a transfer library and a third thing, they all parse URLs. And since URLs are such a weird beast, there's no good spec for what a URL is or isn't, so all URL parsers have their own definition, or opinion, of what a URL is. Basically, that means if you're using two different ones, there's a big risk that one of them treats a URL slightly differently than the other. And that's one way to sometimes smuggle things through, because what's the host name in one might be an invalid host name in the other, and vice versa. So that's also one reason why I wanted this tool to manage URLs: it uses the same parser as curl does. So if you want to write a shell script that understands URLs, it understands URLs the same way that curl does. It sort of removes the friction of handling URLs differently in different parts of your setup, pretty much. Gotcha. Now, is trurl just as easy to get your hands on as curl, in terms of apt-get install? Yes, it's getting there. It's only a year old, so it's not completely everywhere, but it's getting deployed and installed. It's in most of those popular package managers now. Well, we'll link that up so folks can check that one out. Neat tool, terrible name.
Yeah, yeah, but it works out, because it ends with URL and it works with URLs. And it's also a little bit like the tr tool, you know, the shell command for transposing or whatever it's called, and it's similar in spirit to tr. So tr-URL. It works, sort of. Yeah, but maybe... I like the way it looks on github slash curl slash trurl. Yeah, if you pronounce it that way, tr-URL, it rhymes with curl, so I can see why you selected it. Right, but you know, naming is hard. It really is. Welcome back soon, we'll talk more. Yeah, thank you. That's all for now. Bye, friends. Bye, friends. Naming things is hard. Names we imagined up for this episode include BDFLing, Curl Everywhere Curl Could Be, and ultimately Where DOESN'T Curl Run, which we ran with. You know what else is hard? Finding your next favorite podcast. Help your friends and colleagues by sharing Changelog shows with them. They'll thank you later, and we'll thank you right now. Thanks! We appreciate you spreading the word. We also appreciate our partners at Fly.io, our beat-freaking residents Breakmaster Cylinder, and our friends at Sentry. Do yourself and us a favor and use code changelog when you sign up for a Sentry team plan: 100 bucks off. Next week on The Changelog: news on Monday, a deep dive on semantic versioning on Wednesday, and Kaizen 15 (it's a good one) right here on Changelog & Friends on Friday. Have a great weekend, leave us a five-star review if you dig it, and let's talk again real soon.