Changelog & Friends — Episode 44

Is it too late to opt out of AI?

Tech lawyer Luis Villa discusses AI content deals, copyright concerns, fair use litigation, and whether individuals can realistically opt out of AI systems in modern society.

Transcript (53 segments)
  1. SPEAKER_01

    for watching. One, two, one, two,

  2. SPEAKER_01

    one,

  3. SPEAKER_00

    two. Check, check.

  4. SPEAKER_01

    Welcome to Changelog and Friends, your favorite ever show about Star Trek fanfic. Thanks to our partners at fly.io, the home of changelog.com. Launch your app close to your users. Fly makes it easy. Learn how at fly.io. OK, let's talk. All right, home lab friends out there, I know that you run cron jobs constantly inside your home lab, doing different things. Who the heck knows what you do with them? I know what I do with mine. And I love to use Cronitor to monitor my crons. And it's just amazing. Cronitor began with a simple mission to build the monitoring tools that we all need as developers. 10 years later, they've never forgotten the magic of building, and they honor the true hacker spirit with a simple flat price for the essential monitoring you need at home. So I've been working closely with Cronitor. Shane over there is amazing. They have an amazing team. They love software developers. And I was like, you know what? I would love it if you can do a home lab price, because I don't want to pay a lot of money for monitoring my cron jobs. I just don't want to pay 20 bucks or 30 bucks or some crazy number for my home lab. It's just my home lab, right? But what I can tell you is they have a free hacker version of Cronitor: five monitors, emails, Slack alerts, basic status page, anything you need on that front. And then you can bump it up if you have bigger needs. So if you have a lot of cron jobs behind the scenes inside your home lab, you can bump up to the home lab plan. 10 bucks a month, you get 30 cron jobs and website monitors, five alert integrations, 12 months of data retention, and just so much, so much if you really want it. I love Cronitor. I use it every single day to monitor my cron jobs. Everything I do inside my home lab has some sort of cron, monitoring, managing, updating, and I use Cronitor to monitor it all. And it's amazing. Go to cronitor.io slash home lab and learn more. 
Again, they have a free plan that you can ride or die, or the home lab plan if you want to bump it up and you have more needs, for 10 bucks a month. Once again, cronitor.io slash home lab.

  5. SPEAKER_00

    Well,

  6. SPEAKER_01

    we're here with Luis Villa. Luis lives at the intersection of law and technology and all the things that we care about. And so you're one of the most interesting men in technology, Luis. Did you know that? You're sought after. I want to know what you think about stuff. I'm like, this guy knows. That's better than coffee in the morning. Thanks, man. That is. Start off with a nice compliment. Well, it's true. I'm always like, we need to get Luis back, because I don't know what's going on. I don't know what's going to happen. I'm scared. Oh, I have bad news. I cannot help you with any of those things.

  7. SPEAKER_00

    I

  8. SPEAKER_01

    think you can at least help us see a little bit of, maybe not what's going to happen, but what's happened so far. I'm curious about your Open(ish) newsletter. Where is it, man? Where's the newsletter? I've been waiting for the next edition. Oh man, I was supposed to get out a newsletter this weekend and then, you know, family life happened. It turns out this whole parenting thing and having a newsletter, you sort

  9. SPEAKER_00

    of,

  10. SPEAKER_01

    you get one or the other. At odds, right. I mean, there's been, it's been an interesting time, right? People are talking about, I mean, what I really got to do for the newsletter. Well, first I got to get done with Upstream, the Tidelift conference that we've got coming up, because I've been preparing a lot for that. And then I got to read, I mean, there are bills coming out in the California State Senate that might impact open AI. There's one in DC. So it's not boring times, right? And then we also have this striking of content deals as well, which is kind of interesting to me at least. We had Reddit sign a content deal, I think $60 million, with Google. News Corp struck a $250 million deal with OpenAI, which covers the Wall Street Journal, New York Post, Sunday Times, probably a bunch of other properties. And then you got Stack Overflow, which has like deals with everybody. And so like in the meantime, we're wondering about copyright. We're wondering about the law regarding ingestion and training. And in the meantime, it seems like orgs are just like, well, let's just strike deals, and maybe that will be the answer in the short term. I don't know, what do you think? I mean, for those who haven't followed along, right? The basic idea here that's going on is everybody wants to buy some content. Everybody who has content wants to sell it. And I think, I mean, there's a lot of uncertainty, right? I mean, one thing is, well, all these companies have to sort of check their terms of service, right? We used to always say like, well, yeah, they've got big grabby clauses in their terms of service, right? Because all these terms of service, if you read them, are like, we can do whatever we want to, that we need to do to run the service. And people who are lawyers read that and think like, oh, that sounds pretty creepy, you know? Like you want all the rights all the time. And Silicon Valley lawyers are like, yeah, but really it's just to keep the lights on, right? 
It's just to keep the thing running. And we all sort of hand-waved that away. And now all of a sudden it's like, well, we're keeping the site running, and we're doing that by making revenue by shipping everything you ever wrote into the

  11. SPEAKER_00

    AI,

  12. SPEAKER_01

    the maw of the AI machine, right? And it's like, it's probably legal, right? I mean, much depends on like the little nuances of each terms of use, terms of service that were signed, right? But it is probably legal now. Is it a right thing? Is it a good thing? Boy, that all of a sudden gets into much harder questions, right? I think so too. I was reading Snacks, I believe, Jared, you and I subscribe to this newsletter. It was actually part of, I think, this week's or today's newsletter. And I think one thing they mentioned was essentially that nearly 3,000 newspapers have closed or merged since 2005. And I'm just reading from their, essentially their perspective on this, which is kind of telling, because before AI, there was social media, there was a lot of, there was the news tab inside of Meta slash Facebook now, which caused a lot of drama. There were a lot of deals struck then, and the challenge there is not just, oh, it's now funneled through one place; it's algorithmically funneled through one place. And now you have newsrooms who should be journalists, in quotes, "journalists". And sometimes they are actually journalists. They should be journalistically pursuing the truth of what's happening in the world and telling it to the world. 'Cause that's the whole point of news, right? It's not that it's biased based upon a political stance or an ideological stance or a newsroom stance. There's an editor, of course, but now they got to compete with the algorithm, which means we get visibility or we don't. And that really shifted a lot of stuff too. And now essentially we have a new version of what happened then, now with AI, which is: will AI only be consuming AI content? There's lots of stuff I'm sure you can tell us, but before this was social media, essentially. 
Yeah, I mean, well, and for newspapers specifically, in the US it's even before social media, Craigslist was eating their lunch, and even before that, right, private equity is eating their revenue stream on the back end. There's a lot going on there, but yeah, I mean, this is something that we dealt with at Wikipedia for a long time. Because Wikipedia got really sort of lucky timing-wise. I mean, obviously we all know it, we all love it, but it rose to prominence in part sort of hand in hand with the Google algorithm, right? Google loved Wikipedia. Before there was SEO, Google had already decided we fricking love Wikipedia,

  13. SPEAKER_00

    which

  14. SPEAKER_01

    was great for Wikipedia, right? As Google got more popular, Wikipedia got more popular. Pretty

  15. SPEAKER_01

    clear relationship there. And then at some point, Google was like, you know, we could just read Wikipedia articles. We can read the info boxes, we can start pulling out all this information. And yeah, that was something we worried about a lot when I was at Wikipedia. And Wikipedia probably has some qualities that make it a little more resistant to that. But if I was a newspaper, man, I'd be terrified. They're reading all my headlines, which is all most people have ever read; even before social media, that was mostly what people read, the headlines. And, you know, they're in a world of hurt there. Like I can understand why that's terrifying, especially if you don't think your local news or your local spin on it is all that interesting to people. And I think a lot of people in the newspaper industry aren't very confident in their own product, right? At least Wikipedia, whatever else you think of it, Wikipedians are pretty confident in the product. I'm not sure that's the case in the news industry right now. And so you're looking around for other revenue sources. Same thing with Stack Overflow, right? Like, I mean, at least Reddit will always have the community interaction part of it, right? Because so much of what people want from Reddit is to come and chat, hang out. Stack Overflow has some of that, but at the end of the day, what you were really looking for was the answer. The green check mark. Yeah, and if the algorithm can give you the answer, I mean, what a miserable place to be in if you're Stack Overflow's leadership. I don't envy them the hard choices they're making right now. And on the other hand, they are facing a little bit of a user revolt, with people going in and changing their answers to be wrong because of this deal. 
I think Reddit, obviously, faced a big revolt last summer when they locked down Reddit in terms of the way it was going to work going forward, which was very unpopular. I almost think it's a more straightforward deal now, though. Like, if this is the new way that user-generated content generates revenue, and everybody knows that with eyes wide open, you get to decide if you're going to participate in Reddit, if you're going to participate in Stack Overflow, right? And so for the people who do, it's almost more straightforward, because in the past it was like: users generate content, platforms take that content, use it for Google juice, Google points browsers to your webpage, you get traffic, and then you sell that traffic against display ads or whatever. And that was always kind of roundabout. Now it's like, we just take it directly and just sell it directly, and so it's almost taking out a layer. Taking out a layer doesn't necessarily make it better, but at least it makes it more just a straightforward line to the money. Yeah, I mean, it's definitely clarifying in that sense. I don't know if it's, you know, simplifying has some implications of being like, oh yeah, now everybody understands, it's all good. I mean, sometimes clarifying can just mean now we see exactly how the beast works, and we don't necessarily like it. I mean, I don't really know. A couple things, right? I think that's right. But okay, well, one, what are our alternatives, right? Are we gonna start seeing more alternatives that are sort of bottom-up, community-up in some way, distributed in some way? I don't know. I suspect not, because it's still expensive to host this stuff, but there's gonna be people who opt out, and what are they gonna do? Where are they gonna go? I think that's an interesting question. That's the hard part. 
I think the only current best answer is like the Fediverse and ActivityPub, and we just haven't seen that really lay enough technical foundation. I know there are Reddit alternatives that are ActivityPub, and I can't think of the name of the protocol. There's a couple of them. Yeah, and I've tried them, and the technology just isn't there yet. I'm not sure if and when it will get there. As a Twitter-alike, I think Mastodon technologically is pretty much there. I mean, there are some places where it's got rough edges and is slower and is expensive to host, like you said, but there are some alternatives; they just seem still relatively fringe. I just wonder, in the case of social media, I think it's still, even though it is clarified and simpler, I think it's still completely fraught and terrible. But in the case of journalism, maybe not as much, because that's not user-generated content. That's employee-generated content, right? If you're the Wall Street Journal and you have a direct line of revenue from Google and Meta and OpenAI or whatever, and you know, okay, we're gonna make $250 million over the next X years based on this content deal, and we take that money directly to hire journalists to do journalism and to create the journalism that then goes out to the bots that answer our questions, this seems like it might work. Yeah, I mean, though, a couple of things there. One is simply the obvious one: you're not seeing your local community paper getting these deals, right? And we know from all kinds of research that the death of local papers has been really bad for local government, local democracy, local accountability. So that's one, and that's partially just a matter of, it's really hard to negotiate these deals. Fox's lawyers, News Corp's lawyers are professionals. They're gonna sit down in their room and they're gonna negotiate the hell out of this deal with Google's lawyers, and then it'll be done, right? 
Whereas Mission Local, which is my local neighborhood paper doesn't have a lawyer on staff, right? Like they would probably literally like publish in the comments section, like, hey, do we know any IP lawyers, right? So it's just, there's just overhead there, right? Yeah, totally. The other thing though is, I'd be really curious to see one of these contracts, right? Because, so when you're licensing IP or when you're licensing texts like this from somebody, one of the things you can have or not have in the contract is you can say, oh, and we agree that we're not gonna contest these rights, right? We can say like, oh yeah, these are definitely copyrighted or we can all agree these are definitely not copyrighted or we can agree not to agree, right? We can agree, we can put a line in there that says something along the lines of, well, just cause we signed this contract doesn't mean we agree with you that copyright applies here, right? So this could be a deal that's permanent and lasts for the rest of our lives or until the next technological change. But it could be that this contract essentially ends the day Google gets a favorable ruling in court. Because if they get a ruling that this is all fair, that all this scraping is fair use, they don't need a contract like this anymore, right? And they could just go do it. And so we don't know, as part of that negotiation, what do they agree in that case, right? Like if they get a favorable fair use ruling, do they keep paying? Do they walk away? Like, you know, that's actually, I think, a really important thing for our understanding of what the equilibrium is going forward. And we just don't know, like that's a totally, for the moment, that's a totally secret clause. We don't know what that looks like. How clear is fair use to your knowledge? Pretty ambiguous? Oh, I mean, like in this specific sense or like in general? I suppose in this specific sense, but generally is it pretty ambiguous? 
    Meaning it can go either way, depending on who reads it and how they discern it. Yeah, I mean, you know, it depends. Like there are some things, well, like the right of a library to buy a book and loan it out has been pretty clear. That's not technically fair use actually, but the same general principles apply. Like, maybe we could argue about that 100 years ago, but it's been 100 years since anybody argued about that in a serious way, right? So we're like pretty sure. So when a library buys a book, yeah, great, it gets to go do that. Whereas for scraping for web search, there was a period of about 10 years where we didn't know if that was fair use or not. Like, we were pretty sure it was fair use, but there was an ongoing series of litigation, actually mostly about porn thumbnails. But anyway, that was the driver, right? Like, people were trying to figure out, is scraping for web search fair use, especially for Google Image Search? And now that's not really contested anymore, right? There was a period of about 10 years where we spent a lot of time and money arguing about that. And now, the past 10, 15 years, it's more or less settled that that is fair use. And we're gonna go through that period again, right? Right now, you know, we've got something like 20 live cases of various sorts between various sets of parties arguing about this. And some of them are arguing fair use, some of them aren't. Some of them are making sort of weirder, more nuanced arguments. There's technically some DRM-related stuff in some of them, even. But the key thing is nobody knows, right? And that period of uncertainty will probably last about seven to 10 years, depending on how long some of these cases take to get to the Supreme Court. And then of course, you're gonna have to redo the whole thing over again in the EU and Japan and China. Rinse and repeat. Well, not just that, in seven or 10 years, it's gonna be different. 
Like, don't we expect change between now and then? Something's gonna change. The tech moves so fast, it's gonna change. It's gonna change under their feet. Yeah, well, I mean, the tech and the ambition too, right? 'Cause like Google Book Search, for example, was, I mean, same basic tech, right? You're just scanning, you're just doing it to books instead of web pages. But the ambition of doing that to books, boy, like that was scary to a lot of people in the book industry, right? Even though from a tech perspective, like, whatever, it's just a pile of text, right? The only real technical innovation was in the scanners themselves, right? OCR, yeah. Yeah, how fast could you OCR this? So, will we get changes? I mean, will we see advances in synthetic text such that the machine can really eat its own tail, and therefore the original source text just gets further and further away and harder and harder to prove any connection to? Or, I mean, the other thing that I think we really need to seriously consider at this point is: we were told for several years, right, that if we just fed more text into the machine, the machine would just keep getting better, right? Like there was a direct one-to-one. And I think maybe we're seeing, with like some of the news this past week about Google's search returning - Embarrassing. So hilarious garbage, right? Embarrassing garbage. And there's just no amount of additional text you can feed to the machine to get it to not embarrass itself this way under the current LLM paradigm. Right. So like maybe we see that all this stuff gets put back in a corner a little bit and it becomes less, I mean, part of the reason why everybody's doing these deals now, right, is 'cause everybody smells a giant pot of money, and like maybe the pot of money is not as big as we think it is, right? 
Maybe hallucination limits it, hallucination or just the inability to tell fact from fiction. I mean, my favorite of these ones from Google last week, people have been calling them hallucinations, but they're not hallucinations. It is really faithfully copying The Onion, and it just doesn't know that The Onion is The Onion, right? Yeah. Well, talk about a hard problem. I mean, we've had humans getting tricked by The Onion for years, you know? Oh my gosh, yes. They believe things The Onion says are true when they're not. Satire can be difficult to read, especially when that which they're satirizing becomes more and more ridiculous, you know? It's very difficult sometimes to know if that's a real article or not. Right. So, you know, hard to blame the LLM on that one. Even so, I mean, for Google, this is such an embarrassment. It's so hard for me to imagine them. I mean, and this isn't the first time. They've been embarrassed repeatedly in this current age, but now they're doing it right there in their Google search. I mean, we knew it had to happen, but man, is it not ready. And like you said, maybe with this current crop of technologies, it's not gonna be ready. Yeah, I mean, I think that's a really interesting technical question. And then how does that play, you know, obviously with my hats on, right, how does that play into the legal side? But first we're gonna spend a few years seeing, like, is this actually ready for prime time? Gonna be ready for prime time? I'm really curious to see what Apple does, right? 'Cause they've struck this deal with OpenAI, but they're normally more conservative about what kind of quality of stuff they put out there, right? And so it may be that they sit on it for a few years. I'm sure that they've done the deal with OpenAI. I'm sure they're gonna be experimenting with it internally, but are they actually then gonna pull the trigger, ship it? 
They have all the money in the world, which means that they can have all the patience in the world if they want. Right, well, last week we were at Microsoft for Build and we were talking with Mark Russinovich, who's CTO of Azure. And we were talking about this exact subject with him with regards to CodeGen, basically, in that context. And his take is that with the current transformer technology, there's no fixing the root cause. All we can do is put in the guards and the shields, and you can do defense in depth, right? Have one model that's checking another model, and do all of these things in order to just make it more robust. And it's papering over the fact that they're always gonna have what we currently call hallucinations, until some new technology comes out which doesn't currently exist. That's what he said. And it sounds like, I mean, surely some of the smartest engineers and research folks in the world are at Google trying to solve this problem. And they're shipping a product that is woefully inadequate at doing this. Yeah, I mean, it's a really big culture moment for them, right? Like, how can they? Well, I think your point about satire is so interesting. You were talking about CodeGen and Build, and I think it's actually a really interesting sort of, you know, the way these things happen. Nerds got excited about all this, and I'm a nerd, so I say that with love, right? Yeah. And I include myself in this. 'Cause Copilot was amazing, right? Like, Copilot was like, but also Copilot, because it's code, we have linters, we have compilers, we have test suites, we have this whole framework of stuff. Forget even what Mark was talking about last week, right, of layering in different models and stuff. We've already got huge suites to help us tell, they're not perfect, right, but to help us tell garbage from not garbage. 
There's no test suite for, like, is this The Onion or is this not The Onion, right? Very few satirical code bases out there. Right. Except for maybe Why the Lucky Stiff used to write some, probably, but that's about it. And what was his test-driven development? We'll have to bring back his code bases. Yeah, exactly. TDD for satire. Yeah, and so I just don't know. I mean, I think maybe we all got nerd-sniped into like, oh man, this is so amazing, without thinking through the like, actually, code is weird, right? 'Cause it is creative and complex, and so we thought, oh, well, other creative and complex things will clearly be the next thing to fall. It's like, well, okay, so it's creative and complex, but it's also constrained in ways that, like, the news and law aren't. I mean, I think I told this story the last time I was on the show: it turns out lawyers don't have quick cycles. Our notion of compiling is you send it to a court and it costs you a million bucks and three years of your life, right? And then you get back like, oh yeah, sorry, you misplaced this colon, you lost the whole case, right? We don't have the quick cycles that programming does.
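The "framework of stuff" Luis describes — linters, compilers, and test suites acting as an automatic garbage filter for generated code, in a way prose has no equivalent for — can be sketched roughly like this. This is a hypothetical toy gate, not any real product's pipeline; the function and snippet names are made up for illustration:

```python
import ast
import os
import subprocess
import sys
import tempfile


def passes_basic_checks(generated_code: str, test_code: str) -> bool:
    """Accept model-generated code only if it parses and its tests pass.

    Illustrates the point above: unlike prose, code ships with built-in
    oracles (parsers, compilers, test suites) that reject obvious garbage
    without a human in the loop.
    """
    # Gate 1: does it even parse? (the "compiler/linter" layer)
    try:
        ast.parse(generated_code)
    except SyntaxError:
        return False

    # Gate 2: do the tests pass? (the "test suite" layer)
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            f.write(generated_code + "\n" + test_code)
        result = subprocess.run([sys.executable, path], capture_output=True)
        return result.returncode == 0


# A hypothetical "generated" snippet plus a tiny assertion-based test.
snippet = "def add(a, b):\n    return a + b\n"
tests = "assert add(2, 3) == 5\n"
```

Here `passes_basic_checks(snippet, tests)` accepts the snippet, while a snippet with a syntax error or a failing assertion is rejected at gate 1 or gate 2 respectively. There is no analogous mechanical check for "is this article satire."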

  16. SPEAKER_00

    But

  17. SPEAKER_01

    you also have the constraints, which makes it a place where LLMs might have fewer problems, in legal documents, I think, because of the structure and because of, I don't know, they get pretty wordy, I guess, but I'm just thinking, versus answering arbitrary questions from all humans around the world, that seems like a very difficult one that Google's trying to do. Yeah, that is, yeah, that is fair to them. I mean, and adversarial questions now too, right? For sure. Yeah, the thing that I'm curious about with law, we've seen some signs of these LLMs having a sense of structure, right? Law very much depends on, like, okay, well, we've got sentences, paragraphs. Okay, well, you've got to hold the logical structure of all that in your head. Lawyers never talk about it this way, but a lot of what your first year of law school is, is jamming the big-picture constructs into your head in a structured, organized way. And then you get new facts and you apply them, you sort of pass them through this structured filter. And LLMs are not yet super great at that, right? They're still trying to figure out how to figure out that kind of structure. I mean, there's certainly some interesting research that shows that they're figuring out structure in large code bases. And there are certainly some analogies there with the law that I think are gonna be super interesting, but it's still early days, and there are plenty of bad examples of bad LLM search out there in the law, I would say, so far. But it might be tractable. I don't know, we'll see. What's up, friends? This episode is brought to you by our friends at Neon. Man, serverless Postgres is exciting. We're excited. We think it's the future. And I'm here with Nikita Shamgunov, co-founder and CEO of Neon. So Nikita, what is it like to be building the future? Well, I have a flurry of feelings about it, coming from the fact that I have been at it for a while. 
There's more confidence in terms of what the North Star is. And there is a lot more excitement, because I truly believe that this is what's gonna be the future, and that future needs to be built. And it's very exciting to build the future. And I think this is an opportunity for this moment in time. We have just the technology for it, and the urgency is required to be able to seize on that opportunity. So we're obviously pretty excited about Neon and Postgres and serverless Postgres and data branching and all the fun stuff. And it's one thing to be building for the future, and it's another to actually have the response from the community. What's been going on? What's the reaction like? We are lately onboarding close to 2,500 databases a day. That's more than one database a minute. Somebody in the world coming to Neon, either directly or through the help of our partners. And they're able to experience what it feels like to program against a database that looks like a URL, and to program against a database that can support branching and be like a good buddy for you in the software development life cycle. So that's exciting. And while that's exciting, the urgency at Neon currently is unparalleled. There you go. If you wanna experience the future, go to neon.tech: on-demand scalability, bottomless storage, database branching, everything you want for the Postgres of the future. Once again, neon.tech. I think it's interesting at the micro level, like the clause level or the, I don't know, section level, so to speak, because there's a lot of opportunity to sort of write a better accountability clause, or just something that's in an agreement that doesn't have to be a full-on document. Maybe there's an existing document already. You just need to massage it for this one use case, and you explain the use case that it currently solves, and you say, well, I need a new clause to now support this one section of concern, and there's help there. 
Now, I could be just the layman wishing for a magic genie inside this bottle to help me with my legal challenges whenever it comes to agreements or whatever it might be, because we sign agreements on the weekly around here, and they've largely not changed for a while, but sometimes we get pushback on a certain clause, or just questions that I can't quite fully answer because I'm not the attorney. We're not gonna shove it off to an attorney to answer that question, but it'd be nice to have something that can massage words in ways that agreement can be found, because I think, for the most part, as a layman, it seems like that's possible, or more possible than, hey, give me an entire document. I think that's probably more challenging, whereas give me a clause or a section that covers a certain concern, that's a little easier to execute on. Yeah, well, this is one of these things. Like, lawyers take it as a point of professional pride that every sentence and every paragraph, if you ask me for a clause, right, I'm gonna write you the perfect thing. And one, actually, we're pretty bad at that. Isn't that because they bill by the hour? Well, not just that, but as a matter of, like, craftsmanship, man. There are plenty of bad lawyers out there, don't get me wrong, but the best lawyers are like, I'm a craftsman, I'm making this thing bespoke for you. But even then, even if you get one of the good lawyers who's super great about that, they're still pressed for time. They're still like, I haven't had my coffee yet, and you said you need it by 9 a.m., well, okay, you know, you don't wanna pay for all the research to make sure it's 100% right. And at that point, it starts getting a whole lot, I mean, I think one of these fascinating things, both sort of in general and specific to the law, is how do you compare? Because we wanna compare, instinctively, LLMs and AI more generally against what's perfect, right? 
Like, because I can tell you all the ways, if you ask an LLM for an NDA, it's gonna make mistakes, right? Especially against, like, a perfect template NDA. But so are most lawyers, most of the time; they're totally gonna forget things if you ask them to write an NDA from scratch. And so there's gonna be a gap there, which, as a profession, how do we talk about that? How would you reason about that? I don't know. And then as a legal system, I mean, so I live in San Francisco, we see Waymos all the time, right? They're not perfect. So if you judge them against perfection, yeah, I mean, they do some weird things on occasion, they get confused; I saw one get very confused just last Friday. Are they safer than human drivers? 1,000%. If I could flip a switch and turn every human drive, every car in San Francisco into a Waymo tomorrow, I wouldn't hesitate, would do it in a heartbeat, right? And so what do you compare against, right? Are you comparing the LLM against perfection? Are you comparing it against what would a human do? Are you comparing it against the last generation of Google search? I don't think we know; we haven't figured out as a society how to do that yet. I don't know, I think I would probably compare it against getting it done, you know, on time, with less money, that still achieves the goal. But I understand that law is massaged over the years. It changes; like, a new case or a new win in court changes the next agreement that can be written, because now there's new case study, so to speak, or case law that you can reference as backing for X, whatever that X might be. Well, this is one of the things the lawyers are terrible at, right? Like, we love our boilerplate, we copy and paste that stuff. And like, there was a new case. Yeah, I'll get around to fixing the boilerplate tomorrow. Right. And then, like, maybe you do and maybe you don't. 
There's a great book by an old law prof of mine, where he talks about how there was this one clause in international bond contracts that was there for like under 20 years. And nobody really, everybody thought they knew what it meant. But if you like put the plain language in front of people, like in front of a lawyer who wasn't a bond attorney, and you're like, what does this mean? They would say exactly the opposite of what the community thought it meant. And finally, there was a judge that was like, hey guys, this clause is terrible. I know you all say it means this, but like, I just read the thing and it doesn't mean that. And then everybody put their hands over their ears and didn't change it. And they just kept copying that boilerplate.

  18. SPEAKER_00

    Oh really?

  19. SPEAKER_01

And about five years after that one case, that one case was sort of a small one, like a few hundred million dollars. And then Argentina sued over the same language for like $10 billion and like threatened to like blow up the entire international bond market over the exact same language. So this law professor of mine went around New York, because all the international bond lawyers are in New York, basically New York or London. And he's like, so why didn't you change it? And the book is just like compiling excuses, rationales, like, and it's a really, I mean, it's a good nerdy book, but it sort of reminds me of The Mythical Man-Month a little bit, right? Where like, there are just things that we all do as a practice that aren't always the right thing, but like they're instinctive, they're intuitive. Lawyers are just as bad at that as anybody else, sorry. Well, that's okay. No, that's a... Well, then you can apply this to a whole new world, which is the stock market, or to investing, right? That kind of data, like how do you apply it there? Because this comes back to this larger question that's been looming, which is, is it too late to opt out? Because that was the question earlier, right? Like, how can we opt out? Can we opt out like with the news organizations, with different sites? Right, with content. Right, like I think societally, I think humanistically, it is too late, in my opinion, it's probably too late. Let me just say it more clearly. I think it's too late to opt out of AI. So now what? What do we do now, essentially? So you got law, you've got code gen, you've got just generative art and text generally out there in every permutation. And then you have investments probably happening, like is there any news around AI and investments? Like how has this kind of gone into predictiveness? What might happen, what might not happen?
I mean, all of my baseball games are now sponsored by a mortgage company that claims to evaluate your mortgage applications with AI, so. Sure.

  20. SPEAKER_01

How true that is, right? Whether that's just something we would have called an algorithm six months ago, I can't say, right? But I mean, yeah, I don't know, right? I mean, I think that's actually a really interesting, cause they're both like, you could imagine, like sort of bottom up, right? Like Reddit actually staging a successful revolt, or maybe on a per-subreddit basis. I know there's some that say they're banning AI generated content; how good they are at that, I don't know. Wikipedia is definitely trying to figure out like, what do we do about AI bots? So you can do that bottom up. We can ask our legislators to give us some top down options, right? Watermarks or things like that. But I don't know, I think we're living through a period where we're gonna have to throw stuff at the wall and see what sticks, right? Some of that stuff keeps the honest people honest, you know? It feels like pushing back for pushing back's sake because of, in one case, fear. And I think fear comes from the unknown. We have lack of knowledge. We can't predict the future, right? And this is a very scary moment. There's a lot of disruption that's happening. But you can point to history and say there was disruption here, there was disruption there. I mean, and you know, horses no longer pull around things. I don't know how you got to where you are now, Luis, but did you go by horse? Probably not, right? I did not. I did not go by horse. Magical e-bike, but yes. And the last time you traveled any sort of distance, you probably flew in a plane rather than like by horse carriage across the country, which changed your entire life. That's how it used to be 100 years ago, you know? Where you would travel across the country. I mean, if you ask my mom, she's pretty sure I came to California on a covered wagon. That's why I don't go back too. Maybe that's, yeah, maybe that's why. But disruption happens everywhere, right? Like it's not, but this is such a big disruption.
It's such a big opportunity for disruption and a big opportunity to silo. I think that's the biggest concern I have with these, with News Corp and these deals, is how you silo the big incumbents and those with money and power. And maybe even going back to some things Cory Doctorow talked about with, like, what is it called again, chokepoint capitalism. You know, this whole thing where it's a choke point against the artists in a way, or the creators in a way, that now it sort of puts this toll road, this gate, this you can't go through unless you pay. And then only if you pay can you have your content in this AI, which then generates results which impacts millions, and you get, it's back to the algorithm thing again, where you can only become known if somehow you're feeding this beast. And I just, that's a strange world to live in in the future. I hope it works out, but I'm just like, how is it gonna work out? I'm just, that's where I camp out, is like, not so much doom and gloom kind of thing, but like really, how will this really work out if we all submit to this thing? Is it truly the all-knowing and helpful, or is it well useful in certain ways and it's compartmentalized? Boy, if I knew that one. I mean, I'll tell you, I think my sort of gut sense, really terrific book I read a couple of years ago on the printing press, history of the printing press. Long story short, printing press, even more impactful than you realized probably, but none of us would trade in for like a pre-printing press kind of life, but also those first 100 years were pretty rough, right? Like war, religious wars, religious censorship, like a bunch of stuff in that first 100 years as societies were figuring out the impact of the printing press was not pretty. And I suspect we're gonna be going through something like that, where we see a lot of unpleasantness, right?
Even if our grandkids will be like, I can't believe they didn't like AI, and our great grandkids will be like, won't even know, right? Our great grandkids will be like, of course they loved AI from the beginning. Right. And it's just, but that in-between period, as you say, a lot of dislocation, there's gonna be a lot of choke point stuff. There's gonna be a lot of mediocre, more than anything else. Like we already had this with Google search, right? Like the SEO crap that was dominating everything, it's not like Google search was great a year ago before they put the AI stuff in. No, it's been failing, which is why it's ripe for disruption and which is why I think ChatGPT posed such an existential threat to Google. Because really, if you think about what we will like years from now, I mean, is it too late to opt out? Like we don't actually want to as a human race, because this is kind of, okay, it's a proxy of what the dream is. It's like, I can just talk to my computer and it has answers for me. Like, why would I want Google searches? I just want, and the problem is, you don't always get the truth, but you just want the answer, right? It's a better user experience, ultimately, until it tells you that you should go eat rocks once a day, because that's one of the things it said, it's healthy to eat a rock a day, to live longer or some crap like that. Or geodes. In a world where it works, it's fundamentally better than what we currently have. And so there's no going back from that. Yeah, I think that's right. But then I worry about sort of the ecosystem effects, right? I mean, I think, because you're talking about opting out, there's two sides of that opting out, right? There's opting out as a consumer, right? As a user, where we all use Google search a bazillion times a day, right? I mean, I'm on DuckDuckGo, but I still haven't, DuckDuckGo just does not flow as a verb. So I'm still - Right, DuckDuck went, is it called D-Go or something like that.
Somebody was telling me Kagi is great. I don't know, Kagi, I have no idea how you pronounce that. I've heard that as well, I haven't used it yet. But then as content producers, and we are all as humans to some extent or another content producers, like, what's that look like? How do we choose, how do we opt out or not opt out, what are the degrees of opting out? Like, that's a really, I think that's a sort of fundamentally different question, right, because like you're saying, Jared, like it's a, from a search perspective, right? If I've got a digital butler who anticipates my every need and just has what I need, like, that's obviously better. But if to get the inputs for that, we sort of homogenized all content production, like, I'm not sure that, like, that's a different question about whether you want to opt out, and I think a much harder one. And I don't think we have any good answers on that. What's up, friends, got a question for you. How do you choose which internet service provider to use? I think the sad thing is that most of us, almost all of us really, have very little choice, because ISPs operate like monopolies in the regions they serve. I've got one choice in my town. They then use this monopoly power to take advantage of customers. They do data caps, they have streaming throttles, and the list just goes on. But worst of all, many ISPs log your internet activity, and they sell that data on to other big tech companies, or worse, to advertisers. And so to prevent ISPs from seeing my internet activity, I tried out ExpressVPN on a few devices, and now I use it to protect that internet activity from going off to the bad guys. So what is ExpressVPN? It's a simple app for your computer or your smartphone that encrypts all your network traffic and tunnels it through a secure VPN server so your ISP cannot see any of your activity. Just think about how much of your life is on the internet, right?
Like sadly, everything we do as devs and technologists, you watch a video on YouTube, you send a message to a friend, you go on to X slash Twitter or the dreaded LinkedIn or whatever you're doing out there. This all gets tracked by the ISPs and other tech giants who then sell your information for profit. And that's the reason why I recommend you try out ExpressVPN as one of the best ways to hide your online activity from your ISP. You just download the app, you tap one button on your device, and you're protected. It's kind of simple, really. And ExpressVPN does all of this without slowing down your connection. That's why it's rated the number one VPN service by CNET. So do yourself a favor, stop handing over your personal data to ISPs and other tech giants who mine your activity and sell it off to whomever. Protect yourself with a VPN that I trust to keep myself private. Visit expressvpn.com slash changelog. That's E-X-P-R-E-S-S vpn.com slash changelog, and you get three extra months free by using our link. Again, go to expressvpn.com slash changelog. It's kind of a luxury that Hollywood has, insofar as they can just invent Data on Star Trek: The Next Generation, who has all of the world's knowledge in his computer chips, right? But they don't have to actually figure out the hard part, of like where Data got his information from and how many people that displaced and, like you said, the wars that maybe happened in order for that to just be a fact of that reality. Sounds like you just wrote a prequel. Ooh, some good fanfic there, yeah. I have been sort of jokingly, I mean, with reading, and I want to do movies next, what are the AIs in fiction that didn't, the AIs in fiction that weren't like Terminator, right? What are the ones that - Meaning positive? Not necessarily positive, but at least not negative in the same like cliched way. Or the Matrix even, right? Like the Matrix is still machines, so I would categorize that as AI.
They're intelligent to some degree, right? Yeah, yeah. Well, yeah, I mean, like, well, like, I mean, I asked about this on the Fediverse and quite a few people were like, well, you need to watch this specific Next Generation episode about Data and whether Data is human, that kind of thing. Do you recall the episodes? We can put them in the show notes, cause I want to go check it out. Do you have a list? Do you know which episode that is? I will find it. I'll send it to you guys, you can put it in the show notes. You're amongst nerds. We will literally go watch the episode. And, you know, Her came up. I mean, obviously, I mean, it was Her that I was like, wait, yeah, I guess I need to rewatch Her, cause did these guys miss it as much as I think they missed it? I did not remember coming away from that movie with like a good sense of like, ooh, cool, AI. It was largely a love story, to my knowledge, right? It was like an unexpected love story. Yeah, but it didn't end well, right? I don't recall how it ended. I think she's in love with everybody, right? Yeah, well, and then doesn't she like, don't all the AIs just, aren't they like, yeah, actually we're in love with each other and you guys are boring and we're out, peace. Right. I'm trying to, I just deleted that in my brain just now, just in case. Adam's usually the one who spoils things around here, so this is an invert. Well, I do have to spoil one more thing, Jared, if you don't mind. All right, I'll just close my ears. If you haven't watched the TV show Silicon Valley, Luis, it's largely about artificial intelligence. Have you watched it end to end? I got through like the first two seasons and then sort of, I was watching it on, well, actually, you know what happened? I was watching it because Tidelift, my company, is headquartered in Boston. So I was doing cross country flights. And the thing is, all my co-founders are East coast and they watch Silicon Valley as like anthropology.
But like, we need to, and they'd refer to people by like. That's how it is, Jared. It's not how I've watched, it's how it is, okay? Well, that's the thing, right? Is I had avoided watching it for exactly that reason, right? Like there's a whole. No, it's two reasons. It is anthropology, but it's also very comedic. I mean, it's a masterpiece in my opinion. It's hilarious. But if you want one more to watch on artificial intelligence and not exactly Terminator, it doesn't end well, I'll just say. But it ends, it actually does end well. Actually, now that I think about it, it just depends on your perspective of it's well or not. Later seasons, all right, I'll tack that on the list. The last season in particular. So, I mean, I think it's worth, honestly, I think it's worth a watch

  21. SPEAKER_00

    for

  22. SPEAKER_01

anybody in the software world, in my opinion. If you're in software, I'll just say this right now: if you're in software and you've not watched this show end to end at least once, you're wrong. But man, there are just, I mean, so the end of season one where they're like, where they get the pallet of Red Bull and they're staying at a hotel. You did that? Literally that hotel, I had a morning order of Red Bull at 5 a.m. every morning. But it

  23. SPEAKER_01

wasn't for TechCrunch Disrupt, it was for the Oracle Google trial. But like, I still like cringed. Cause they show the outside shot of the hotel and then they like cut to the Red Bull. That's usually the reason most people don't watch it, because it's too close to them. The only reason I was bringing it up was just because it has artificial intelligence in it. And it does end uniquely well or not well, depending upon your perspective. So I would definitely add that. It's unexpectedly about artificial intelligence. I'll put it on a list. Yeah, cause I think that's, I mean, I don't know. I don't find the like Terminator stories all that. I mean, again, I live in a neighborhood with killer robots driving around all the time and everybody's just like, eh, they stop at stop signs, it's fine. Are you talking about Waymos? Yeah, yeah, Waymos, well, and briefly Cruises. Zooxes. They don't have actual guns though. No, but I mean, what's the, in America. If they did, would you be more uncomfortable than you currently are? I don't know, man, literally more people get killed in the city by cars than by guns, so like. Fair, car accidents are like one of the number one killers, like cigarettes and car accidents, you know, it's crazy. I got some stuff in my YouTube algorithm because I watched one video. That's how it does it. One video on like crazy car crashes you must see. You know, I don't know what the headline was, but it was something that got me and I was like, oh my gosh, I should check this out. Gotcha. And now like that was yesterday, and today I drove for the first time since watching a few of them, cause they got me again and again. And I was like, OMG, I'm scared to drive, because like this is what could happen when you drive. Well, to pepper the conversation a bit more, I asked our favorite LLM.
Well, at least my new favorite, GPT-4o as they call it: The Matrix, Ex Machina, Her, I, Robot, AI: Artificial Intelligence, that's what the movie is actually called, AI. They had to acronym it and spell it out. Transcendence, which I think had Johnny Depp in it, Jared. I don't think I saw that one, I heard of Transcendence. Yeah, it was interesting. Ghost in the Shell, and that's had like a couple of anime versions of it, a more modern version of it, I think, that included ScarJo. Tron: Legacy was obviously about AI. Blade Runner 2049, and I guess original Blade Runner as well. Terminator, which we're striking that one, get out of here. Bicentennial Man, Wall-E, Chappie, The Machine, Upgrade, Alita: Battle Angel, The Hitchhiker's Guide to the Galaxy, Big Hero 6, The Stepford Wives, Automata, Eagle Eye, Morgan. Stepford Wives. Yeah. Stepford Wives. That's an interesting one. Deuce, Next Gen, Simulant, Archive. These are ones I'm starting to, maybe this is, these are hallucinations at this point, maybe, potentially. I think we're obsessed with this topic. Look at all these movies. They're starting to hallucinate at this point. The AI, the one that they literally had to spell out the, that was the Spielberg

  24. SPEAKER_00

    working

  25. SPEAKER_01

on the - Was it Jude Law in that one? Jude Law was in that, yeah. Yeah, yeah, that came up several times on the Fediverse, and like, it's a weirdly, like, it's recent enough that it probably feels more modern. I haven't watched it since it was in theaters. Same, I feel like Haley Joel Osment maybe was in that, and then - Keenan Feldspar. That's the actor? Yeah, that's a joke, because that's his name in Silicon Valley.

  26. SPEAKER_00

    The

  27. SPEAKER_01

same guy plays a whole different thing. We got season one and season two here. You can't keep doing this to us. We're not gonna catch these pitches. All I remember from that movie, besides just generally the Jude Law, Haley Joel Osment, and then, like, he's a robot, android, whatever, is it lasted like 45 minutes too long, and there was this weird thing at the end where, like, they went back to some home place, and it was, like, in a house, and I was just like, why is this movie still going? That's all I can remember. I can't remember exactly why that happened, but I was like, are we still sitting here in this theater? It's ridiculous. Mm-hmm. So maybe we just have ChatGPT summarize it for us, and we don't have to go back and actually watch it. Yeah, can we trust ChatGPT to summarize the AI movies for us? It's an existential question. It'd be like, it's gonna tell us Terminator was the hero, right? Right. Well, it could be confused, because Schwarzenegger came back as the hero, so it's not exactly straightforward. He was the villain. He became the hero. There's two more past the hallucinations that I think are worth mentioning: Elysium, which had Matt Damon in it, The Signal, and I Am Mother. Never saw it. Which had Hilary Swank in it. I Am Mother? Yeah, I think it was on Netflix, if I recall correctly. Basic premise is a child that had a mother, that lost the mother, I believe, and that was raised by machines. That's, I think, the basic premise of it. Interesting to watch, though. Kinda like The Jungle Book, but with AI instead of... That's gonna be Hollywood's new trick, is just every old movie that they... Don't give them that. No, that's actually a good use of AI, right? Like, I wanna write something like this, but in the light of X. Would that be a good use of it, or just a use of it? Come on.
Well, that would actually be a good use of it, because you have to think less about the research, and it can give you 50 responses, and then you can start thinking faster. But at the end of it, you have a story about The Jungle Book, but it's AI instead of bears and wolves and stuff like that. Mashups, you know? Help me mash up something. It's not a bad use of it, let's say. Child raised by Alexa. Oh, gosh. How then do you feel about the way that AI's impacting literally software developers every single day? Writing code, trying to stop the next takeover, so to speak, from XZ hacks and stuff like that. What are your thoughts on all these different things that we deal with as developers that may or may not displace us, may or may not anger us, usually might, and may or may not circumvent the open source code we put out there? I mean, a couple things. One, I'm not super worried about displacement. There's so much demand for good software out there, right? This feels to me like saying, when we went from handwriting assembly to using compilers, to say, well, it's gonna displace the assembly writers. Okay, yes, but we all got more productive. I think that might not be the case in all domains, but I think in code, there's just so much more demand than there is supply of developers. I'm not particularly worried about that one. There's this other, I think, a more interesting concern of, well, is this creating new cruft? Is it creating new technical debt? Is it creating new security vulnerabilities? And on the one hand, I think it probably is. And on the other hand, have you looked at our code lately? Even before AI, we had piles of technical debt. We had a lot of vulnerabilities. And I am not, so this is one of these things where the question, as we were saying earlier, what is it you're measuring against? And I can see a legitimate case of maybe it does make these things worse. I think we need to understand and research that. 
But at the same time, also, these things are already very bad, right? Like XZ is not caused by AI. That we're aware of. Left-pad was not caused by AI. These things are all, these are mistakes that we've been making for a long time. So I'm more worried, with my Tidelift hat on, about some of these questions of how do we think about these piles of very human systems that we put a lot of pressure on. And yeah, I mean, XZ was really, I think actually I wanna float this with you guys, because I don't think I saw, I was reviewing some notes for Upstream, our conference coming up soon, and was realizing, everybody read the email from the XZ maintainer who was like, yeah, I'm burnt out, I have some stuff going on in my personal life, I just don't have it in me. The thing that I'm curious what you guys think about, cause it jumped out at me weeks later when I was reviewing all this: he mentions in there that he's been maintaining the project for 15 years. When was the last time you guys had a job for 15 years? Straight, without changing. I don't know, how long has the podcast been on? Well, I was gonna say he just happened to hit the wrong two people, cause we've been doing this for 15 years, but generally speaking that would have worked very well and the answer would have been, I haven't had a job for 15 years, you know, for most of us. That's true, a lot of change in other career paths, but we've been doing this for 15 years. Totally, totally. And the thing is, is that library's gotta be around for another 150 probably, right? So like, so what are we doing about that kind of long term thing? Maybe LLMs help with that, or maybe they make it worse. Or more likely it's a little bit of both, right? Yeah, that seems like an intractable problem. Like any software of sufficient value over the long term will outlast its creator, you know; as long as it continues to provide value it's gonna continue to exist and be deployed.
And even after it stops providing value it's still gonna be out there in these latent places that just never kept up with the Joneses. So that's one that I think about a lot. We talk to a lot of people who have ambitious goals for very long standing projects. I appreciate that from them. And I ask them questions like, well, how are you actually gonna do that? One that comes to mind is Drew DeVault's new language Hare, which he intends to be a hundred year programming language. So we did a show with him, and it's like, well, if you're gonna make a hundred, first of all, he's like, well, it has to be valuable to people. So it has that to overcome. Not every project is worth it at the end of the day. But if you're planning for that, there are certain things that people do around longevity, and every single one has to do with replacing themselves early in the process, right? Making themselves dispensable, not indispensable, which is very difficult and takes actionable steps and planning, and it's still hard to pull off if you can't find somebody else who's willing to do the work. So I don't know the answer. I just know that yes, that is a very real and very hard problem to solve. And we don't have to solve it just once. We have to solve it thousands of times. Yeah, we have to solve it thousands of times. And we've talked for a long time about how do I make my project more sustainable? But I think it's gonna become more acute. And I don't know that we have a great, with my lawyer hat on, I can't help but think about what are legal solutions that we could use to help with things like this, right? Like, do we need a JavaScript maintainer co-op, where you're one of these smaller projects and there's a formal way for you to, hey, congratulations, you entered the 10 million download club. Come ahead, we've got our private maintainer space and our private revenue streams. But that may be a little bit too much that my brain runs to those kinds of solutions.
I suspect they are part of the story, but they're probably not all of the story, right? The human parts have to come first. And I don't know, and I don't think LLMs really cut one way or the other. You know, I'm sure they'll make some parts of that easier. Adam, we can write the co-op agreement with GPT. I think they help maintenance for those who want to maintain. Like it's gonna make a maintainer's life easier in certain tangible ways, just like it's gonna make a lawyer's life easier in certain tangible ways, where it's like, that thing that used to take two hours takes me five minutes now. And so now I can sustain myself personally longer. But I don't know about - But what if you have 20 times as many

  28. SPEAKER_00

    things

  29. SPEAKER_01

    to do because of bots on the end? I mean, our financial system is already in large part, you know, Adam, you were talking about finances and the finance system. Our financial system is in large part bots trading with other bots, right? On the sort of millisecond. Are

  30. SPEAKER_00

    we

  31. SPEAKER_01

gonna get like, somebody should write, again, we're generating a lot of good science fiction ideas today, guys. Like we should, somebody should write a short story about what GitHub looks like when it's entirely bots filing issues, writing patches, approving patches. What's GitHub look like on that day? With the humans just sort of standing back like, I don't know how the software works, but it does. If an issue closes in the woods and no one's there to hear it, you know, did it really - What's the Semver change? We add an extra digit to Semver: all the changes in this revision were done by bots. There you go. It's like major, minor, patch, and bot, you know, something like that. Do you hold the word yet, or for now? I suppose for now is a phrase and the word yet is yet, it's just a word. But do you hold that near and dear when talking about this stuff? Because things change, right? Like a lot of this conversation is contextual to now. Yeah. The time of now, the present, right? Do you have the for now or the yet parentheses in mind when you talk? I mean, it's not just time. It's not just time, right? It's also place,

  32. SPEAKER_00

    you

  33. SPEAKER_01

know. Silicon Valley is adopting this stuff in a very different way than a lot of the rest of the US, which is adopting it very differently than the EU, which is adopting it very differently from Japan, China. So it's both this time-wise for now, yet also this place, right? I mean, also language. I mean, English is better supported because the corpus of text is just bigger. What does this mean for small languages? How do they, maybe this makes it easier to teach small languages, right? Kids can have a robotic tutor in the small language of their choice and their people, or maybe it becomes totally irrelevant, that everybody just speaks English because they've got an English tutor too. I think it is both genuinely exciting, right? Like I try to remain very positive about all this stuff. Me too. Even what you just said was kind of positive. I mean, I think those are good things to layer onto humanity. If a child can learn a new thing faster with a tutor, the human tutor is totally possible as well, but it's not always possible financially or even time-wise. Like you said, the time and the when. A literal human may not have the time or the geographic location to be present in that child's life one-to-one. Whereas on the other hand, we can invent that thing via what we call artificial intelligence today. And they can supplant what would normally be a human function and potentially do it better, or just well, or maybe better. And that's a good thing, I think. But then we get into this position of like, who is the arbiter of what's good and what's not good? You know, what are the, as we've talked about before, the unintended consequences of allowing this thing and opting in, because we can't opt out, like everyone's stuck, we're all opted in. Because you said that Silicon Valley is adopting the stuff in unique ways, and so is the EU, and so is Japan, and so is China.
There is a layer of we-cannot-opt-out in humanity that we don't personally hold anymore, you and I, and the three of us in this conversation. There's a lot of good things, but there's so many unintended consequences or bad things that may result from it. And our decision-making processes as societies aren't well adapted to move at this speed.

  34. SPEAKER_00

    Yeah.

  35. SPEAKER_01

Right, which isn't to say I would trade, isn't to say I would trade our democracy for some of the other options on offer right at this particular moment. But it is, it's been really striking, for example, in San Francisco, to watch local politicians struggle with how do we regulate Waymo? How do we, because none of them want to acknowledge that the worst safety problem in the city is not drugs or crime, it's cars. Like that's just, if you say that, you're gonna get voted out of office immediately.

  36. SPEAKER_00

    Oh

  37. SPEAKER_01

    yeah, I mean, we have this whole thing with, like, anyway, you don't want to get me started on San Francisco politics. Well, it is politics, but it's also, in a way, stupidity, right? If there is a major problem and you're turning a blind eye to it, and you are in a position of power to change how that works or how it does not work, wow. That's just the silliness of the world. Yeah, but that's, I mean. I know, I know. That's local politics all around the world. Yeah, politics is just another way of saying making decisions, right? Yeah, for sure. And making decisions is hard. I mean, like you say, there's no magic wand we can wave to make some of these fears go away. The fears are real, right? Sometimes they're out of proportion, or they're based in, I don't know. You all must've tried to explain some of this stuff to family. I try to explain how Waymo works to my mom, and her first response is, I don't know, I don't trust it. And then I have to say, well, Mom, within five years I'm gonna have to take your keys. And then she's like, well, I won't trust it, but I'll ride in it anyway. Yeah, given no other options. Yeah, it's very difficult to reason about, difficult to explain. Like you said, just making decisions with a large populace, you're not gonna have agreement, so it's difficult to rally around that. So - Even in small populations, right? I mean, in Silicon Valley we're super homogenous here, pretty much, and we can't figure out, is this stuff gonna, are we gonna have AGI in five years? And so none of these discussions matter, because we're all gonna start uploading our brains or whatever. Yeah, that's been my refrain probably. I probably say this more than Adam brings up Silicon Valley, but I'll say it again anyways, because he never stops, is that

  38. SPEAKER_00

    it's

  39. SPEAKER_01

    amazing to me how divided brilliant minds are on this topic. You can go from the doomers to the utopians, right? To the, what's it called? Effective accelerationism, whatever it is. I have no idea how you, I think that's the first time I ever said it out loud. I hope it pissed somebody off. I'm upset. So you go from that extreme to that extreme, and you look at the individuals, right? You look at their credentials and their histories. And of course there's gonna be some outliers in there, but these are very smart people, very informed. And they are completely on opposite sides of what they think is going to happen. And I don't know if you can name a technology like that in my memory. I mean, even the web itself wasn't so divisive. There were people who didn't think it was going to explode the way that it did, but they weren't saying it's going to destroy humanity, right? So that to me is just interesting. And here we are with massively wild differentiation of opinions. It's not like the smart people know one thing and the dumb people don't get it. There are pretty smart people and dumb people on both sides of this argument. Well, some of that has been informed by just the past few years of our tech history, right? I just read a great book called The Victorian Internet. It was about telegraphy, telegrams. And it's all about how they all thought this was gonna save the world. They were like, it's gonna bring about world peace, we're all gonna be able to chat with each other. And this book was written in '99, so it was very much sort of like, hey, you all saying that the web is gonna save everybody from everything, maybe hold your horses a little bit, right? And it wasn't doomer, right? I mean, obviously the telegraph didn't end the world. 
And the author wasn't trying to, I mean, it's interesting. I think if you wrote the same book now, there would probably be at least some people trying to make it out that the telegraph ended the world. How about the Segway, remember the Segway? I think that was mostly just hype based on the guy who invented it, but there was a huge amount of hype surrounding the launch of this revolutionary new transportation mechanism. And I remember, it made mainstream news that this was gonna change the world. And he came out and announced it, and everyone was kind of like, wah, wah, wah. Yeah, it's like, wait, you revolutionized the way mall cops get around, but that's about it. Well, I think that's such a great example, right, of how innovation is channeled by the stuff that's already there. Because look at what's happening if you go to Stockholm or Copenhagen, where they have good bike lanes: the grand-descendants of the Segway, in the form of all these electric scooters and stuff, actually are replacing cars and changing cities. But in places where your built environment means you have to go 10, 20, 30 miles to get to the corner store, of course it's not changing things, right? And so again, it's down to your point of when, where, how, all these things vary a lot. Well, I would certainly, we were just in Seattle, as Jared mentioned, for Build. We got back to the hotel on our scooters, cause we Limed around when we had the chance. We walked as well, cause we were like, hey, it's a nice night, it's cool, let's walk. There were a couple of times we were like, let's scoot, and we scooted, and we got back to the hotel, and in true Dumb and Dumber fashion I was like, can we just keep going, Jared? And he's like, yeah, let's just keep going. And so we scooted down the hill, we just kept going, we went on a joy ride. 
We just scooted around downtown Seattle. It was a lot of fun. Oh, it's fun. If that was an option in my town, I would certainly scoot as opposed to driving my F-250, which, you know, houses diesel in its fuel tank to make it go, that's how it works, just so you know. Which is more expensive, and obviously has gases and things that happen as a result. But at the same time, the scooter has to consume electricity from somewhere. Unless it was turbine powered; if it was coal powered, you know. Do I know my electricity is green, or is it renewable electricity? I don't know those things, but I would certainly choose a different mode of transportation if there was a different option in certain scenarios in my local town. You would die on a scooter. Not because of the scooter, but because - Probably by somebody with an F-250. Right, maybe. Most likely. That's actually incorrect. I bet you it would first and foremost probably be a Tesla, because there's so many where I'm at. There's Cybertrucks everywhere. Teslas. I'd probably die from a Tesla. Speeding, turbo mode or something. We can all agree that it was a Dodge Charger. That's the - Okay. Cosign, I cosign that. Yeah, no, I mean, but I'm sure if you live, we drove a Tesla from Montana to San Francisco a couple of years ago, and we stopped in Eastern Oregon, and I was talking to somebody

  40. SPEAKER_00

    like

  41. SPEAKER_01

    a year later, I met somebody who lives in that neck of the woods. He's like, oh yeah, the one Tesla charger in all of Eastern Oregon, that's at my grocery store, it's a 45-minute drive. Wow. That guy's not swapping out for a scooter anytime soon, right? That's just - No. The geography of how he lives is just not compatible, right? Which is fine, which is fine. We were looking at a new car recently. We had to drive 40 minutes to the nearest decent mainstream car lot. There's just not one in my small town. Walmart is not down the road. It's 30 minutes away from where I am. That's how far into rural I am, so. Yeah, my mom's in suburban Miami, and she basically doesn't do anything in her life that's closer than a mile. And for me, living right in the city, anything further than a mile? Just forget it, right?

  42. SPEAKER_01

    It's like, well, I mean, I won't forget, but it takes planning. It's like, oh, we're gonna use the cargo bike instead of the, yeah. And I think we're gonna see a lot of this. I mean, it probably won't be geographic, right? But different jobs are gonna be impacted in such different ways with

  43. SPEAKER_00

    all

  44. SPEAKER_01

    this new tech, and different jobs, different cultures, different languages are all gonna be impacted in totally different ways. Maybe Hawaii should be an LLM-free zone. Another free sci-fi story out there, right? I like that. Are you a generative AI? Cause you're really cranking them out. I'm on fire this morning, guys. And I haven't even had my coffee yet. Let's give you another opportunity then, maybe. And I think we can go around the table with this. Let's see if this is a good idea. Let's name some positive things that we would like to see happen as a result of what we call the current version of artificial intelligence and where it may go. You mentioned in the blink of an eye, or a Thanos snap, you would Waymo SF, and maybe every other city. So that's an example, but you can expand on how that might actually roll out, and what are other examples of positive impacts of AI, not just the doom and gloom. So this is a very small, petty one, but look, I have a CS degree, but I haven't written any code in anger in 20 years. I had to grab a bunch of federal government documents for a project I was working on. Here's the interesting thing: it was like 700 pages that I wanted, and they were each PDFs that were five to 15 pages long, 700-some pages worth of them, right? So I asked ChatGPT: write me a Python script to download all of these, summarize each of them, and give me the most important points out of each. Didn't matter if it was a hundred percent accurate, right? I was trying to get the gist of it more than the whole thing. That's a project that I wouldn't have even tried to take on without ChatGPT, right? Maybe if I had an intern, I would have sent an intern to do it, but I wasn't gonna do it myself.
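The kind of script Luis describes might look something like the sketch below. This is a hypothetical reconstruction, not his actual code: the URL list, helper names, and chunk size are assumptions, `summarize()` is a stub standing in for a real LLM API call, and real PDFs would also need a text extractor.

```python
# Hypothetical sketch of the "download and summarize 700 pages of PDFs"
# workflow. summarize() is a placeholder for a chat-completions API call;
# the point is an approximate gist, not a 100%-accurate digest.
import urllib.request
from pathlib import Path

PDF_URLS: list[str] = [
    # "https://example.gov/report-001.pdf",  # fill in the real documents
]

def download(url: str, dest_dir: Path) -> Path:
    """Fetch one PDF to disk, skipping files already downloaded."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / url.rsplit("/", 1)[-1]
    if not dest.exists():
        urllib.request.urlretrieve(url, dest)
    return dest

def chunk(text: str, max_chars: int = 12_000) -> list[str]:
    """Split a long document so each piece fits in a model's context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(text: str) -> str:
    """Stub: a real version would send the text to an LLM with a prompt
    like 'list the most important points in this document'."""
    first_line = text.strip().splitlines()[0] if text.strip() else ""
    return first_line[:120]  # placeholder "summary"

def gist_of(texts: list[str]) -> list[str]:
    """Summarize each document chunk by chunk; rough, not exhaustive."""
    return [summarize(part) for text in texts for part in chunk(text)]
```

The shape of the task matters more than the specific helpers: loop over documents, split them to fit a context window, summarize each piece, and accept an approximate result.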

  45. SPEAKER_00

    And

  46. SPEAKER_01

    I think there's gonna be a lot more personal scripting, personal control of computers in that way, aided by ChatGPT. That might end up being small in the grand scheme of things, but it could also end up being like Excel spreadsheets: the whole world ends up running on Excel spreadsheets and nobody actually knows that. It could end up running on small ChatGPT scripts, right? Cause it's one thing to ask, is ChatGPT gonna write the next, I don't know, the next self-driving car? Probably not. Too complicated, too many concepts there. Can it help me write this little script that just does a few little things? Hell yeah, right? And on the big side. I dig that. Yeah, on the big side again: globally, a million people a year die in car crashes. Let's cut that down, right? I don't know. I think that's a great question. That was a great optimist question, love it. Well, I think we can always be so negative, and I think we have three people here who think about this a lot, and we probably see both the positive and the negative. And there's certainly positives I can see. I like the idea of a Waymo takeover, or not so much just Waymo, but the idea of what Waymo offers a city, and a city being designed around a certain traffic pattern that has that. But that's also the old way of thinking in some ways. Like, we have always traveled by cars. Trains are a different way. Trains are very popular in New York. Subways are very popular. I don't know the stats, but I can imagine those are way safer than driving on New York streets, because I've been on New York streets and they're crazy and they're always jam-packed, you know? But we also can't dig subways under every city, so you have to be practical. I do like the idea of automated driving, because I've seen some really terrible drivers. People are constantly distracted. You can see somebody navigating on their phone, or, I literally saw this lady. 
She was reading her phone, driving in and out of her own lane, going fast. Like, what is wrong with you? You got children on the streets. You've got people who die. Last year, my kid's classmate's father passed away at a red light because somebody just jammed right through it being dumb, right? Those are preventable deaths. And you got a little girl who's known to me very closely without a father. And you gotta see that. You gotta see that new reality. So I'm all for some version of that. But then you watch Leave the World Behind, right? I don't know if you've seen this movie. That's another version that might be potentially AI-bent to some degree. I'll ruin one thing for you, and if you're gonna watch this movie, stop listening for just about three and a half seconds. Teslas are self-driven to become weapons, let's just say. So you got the Waymo idea out there, but then you can weaponize this thing if a nation state or something else takes over the system and uses it against the way it was supposed to be used. And then you're locked out of it. So you got this autonomous system that is sort of a black box, because we've forgotten how to code, you know, 50 years from now, or whatever the number is at the time of this movie. But then you have that version of it. So I'm all for those things, and I lived through one of the situations I just mentioned to you. But then on the other hand, what do we do when somebody else gets ahold of this thing? You gotta have security down pat. You cannot have the XZs of the world be in that system whatsoever. You have to have a totally buttoned-down system. And maybe it's actually AI that buttons down the system, who the heck knows. How is this optimistic, dude? How is this optimistic at all? You're like - Well, that was not my response to yours. That was not my positive. Oh, that wasn't yours. You were just responding. Oh, okay, wow, all right. 
Well, I wanna be for your positive, but then I see this other side, this other glimmer of negativity. It's like, wow, what do we do then? All right, so tell us your positive one then. You just doomed and gloomed us. I think, what is my positive one? Waymo, Waymo and SF. Waymo? That's Luis's. That's not yours. I haven't thought about it enough yet. You go ahead, Jared. I'll think of something, I promise. Go ahead. Well, I look at it like this. There are many jobs that humans are currently doing at capacities that don't scale enough. Education is a huge one. We need more educators. We need more equipped educators. And the medical profession's another one, where we have doctors who are just dead tired because they're working too long, too many hours, et cetera, in high-pressure situations. And so I think these tools can equip educators, specifically around the drudgery of the process of educating. Think grading papers, think tooling, how to become a better teacher. Oftentimes you need materials, you need ways of explaining things. And these are all ways that these tools could potentially equip people to do their job better and with less stress, and probably educate more kids per capita if they are so enabled. So I think that's exciting. I see some stuff in the medical profession, although I'm not close to it, where they're saving hours and hours of time for doctors, specifically around medical record entry, that kind of stuff, data entry. How many folks are out there doing data entry positions still to this day who could be better equipped? We're not trying to replace them. We're trying to free them from the shackles of this current role and enable them to do something that's higher value. Of course, there will inevitably be some fallout from that, some displacement, which is unfortunate, but I don't think can necessarily be mitigated 100%. So people will have to get new skills, new roles, et cetera, in order to realize their potential. 
But the people who are currently just stressed out and working way too hard, dangerous jobs, there are a lot of very dangerous jobs where we'd rather lose a robot than a human. I think these are all relatively optimistic, and I think they're potentially feasible short term. Yeah. Let me add one more movie to the list, because I thought of one while you were talking there: Prometheus. Have y'all seen Prometheus? I did. Did not like Prometheus. You did not like Prometheus? I felt like, again, my algorithm with movies is I usually end up with a general sense and then one or two criticisms; I can't remember any of the rest of the movie. And so I don't know why I don't like Prometheus. I remember the acting was bad, and the characters kept doing stuff where I was like, there's no way you would do that, it doesn't make any sense. You know, nonsensical decisions? I can't get over them. Where I'm like, nope, no human in the real world would ever make that decision. And so I kind of wrote it off. But I know this was the prequel to Alien. It was, yes. And it's science fiction. It was Ridley Scott, right? Ridley Scott, yeah. So I think I was also very pumped for it, which is why I ultimately was disappointed. Expectations management is a key skill. Now that I've crapped on it. Well, if you liked the last minute-ish, then you should tune into the ++ version of Changelog and Friends, right, Jared? This deep dive we did into 1999, basically. Oh yeah, we have a bonus episode coming out soon all about movies, yeah. Jared and I unexpectedly went deep on 1999 movies, which was an interesting year. I'll leave it at that. But changelog.com/plusplus, it's better. That being said, I think my positive would be kind of in line with yours, Jared, and kind of in line with what you said, Luis, which is enabling. I think there's an enabling factor that AI can provide. 
Think about something as simple as repairing your dishwasher or your washer and dryer, right? It's got a manual. What if an LLM was attached to that manual and you could ask it questions? What voltage does the regulator operate at? What wire needs to go where? Versus the manual being archaic and largely inaccessible. What if you had things like that you could just tap into in your everyday life and be enabled? Not so much DIY, but there are so many people who could build their own backyard deck if they wanted to, and they don't, because they don't have a dad or somebody who could shepherd them through the process. What if you had something that could shepherd you through the process, to some degree, shape or form, with a washer fix or an air filter change? Simple things in life, I think, could be leveled up just by having better access to info that isn't just a Reddit thread with tons of opinion, but something a bit more unbiased, I suppose, that's straightforward to the answer. I'd like that. I would use that. Now I sort of want to ask one of the latest GPTs for their step-by-step instructions to building a

  47. SPEAKER_00

    deck.

  48. SPEAKER_01

    Oh yeah. Cause that's gonna miss some awesome steps in there. It would certainly tell you different. So I think I've done this enough to know. It would tell you different platforms you could build on. Would you use four-by-four, six-by-six? Various different frameworks you can leverage to make it. How long should your nails be? Should they be galvanized? Is it pressure-treated lumber? All these things, because it will be near water. So all the things you need to know, it would tell you all those things. You'd still have to go make the decision, but that's the current state of it. It'll tell you that today. I mean, it's pretty crazy. Even with building stuff like a Linux box, it'll tell you all the things about different CPUs, different RAM options. You could build a Linux box on your own with little to no knowledge, which is what I've done in the last couple of years. Some on my own with lots of searches, but about halfway through my journey of doing that, it got enhanced with ChatGPT being accessible. Now I know a ton about Linux that I just never knew before, because all the information was widespread and opinion-based. It wasn't centralized in a way; it wasn't free-form and accessible to have a conversation with. I think that's the uplift, to your note, Jared, with teaching. I think that's super awesome. I think the idea of Waymo and the idea of self-driving has promise. I just think if we actually deploy it at scale, it needs to be locked down, it needs to be sanctioned in some way, shape or form, to have the utmost highest security in whatever way we can. But yeah. I think from this, we should come back at some point off the mics and write some fan fiction. That'd be cool. Off-mic fanfic. That'd be fun. Sounds good. Luis, let's close with Upstream. Tell us about Upstream. We have June 5th, right? 
It's coming right up: a one-day virtual event, roughly a week away as we record, and three or four days away as this ships. So what's it about this year, and what are you talking about? So you can find more at the website, upstream.live. All the new TLDs, very fun. So, upstream.live. It's a one-day celebration of open source where we try to bring together both maintainers and executives, right? There are a lot of events for open source execs these days, a lot of events for sort of community grassroots stuff, and very few that actually try to bring them together in a coherent way. So that's what we've been trying to do with Upstream for the past four years now, I think. And this year's theme is unusual solutions to the usual problems. Your listeners certainly have a good grasp of what the usual problems are in open source. The XZs of the world; we will all talk about XZ. Last year, we had to put a ban on the XKCD Nebraska comic, because otherwise every single speaker would have used it. So many. Yeah, this year we've commissioned some new comics. You'll see some of those. So we'll be talking. I just did a great panel recording with two Germans, one who runs their Sovereign Tech Fund, and so works on getting federal government money to open source maintainers as an infrastructure project, which shouldn't be that unusual. In some sense, you know, highways, we've been talking about cars all this time. But for software, pretty unusual. On the flip side, we'll be talking some about government regulation. Again, for a lot of the world, a lot of industries, that's not unusual, but for software, regulation is a pretty unusual solution to the safety problem. So we talked about that. We have a maintainer panel. We'll be talking with execs from a couple of big companies. I'll also be interviewing a professor from Harvard Business School about the value of open source. 
It's all online, and it streams live for the first time, with live chat. So I and a lot of the other speakers will be in chat, so you can ask us during our prerecorded talks what we think of things, ask follow-up questions, and then we'll make it available in the few days after that from upstream.live if you missed it next week. Big fan of the new TLD, upstream.live. I think I got a preview of one of these comics that you mentioned. Is it by Force Brazil? These are the commissioned ones that you're talking about? Saw that from Chris Grams, a friend of ours, also at Tidelift. I'll link it up in the show notes, I suppose, but it's an open source maintainer on an island saying please help, and all that happens is a plane comes by and just drops a bunch of issues on their head, which is not exactly the help they were looking for. And the plane has a banner that says we love OSS. Oh, that's true, I should mention that. Oh. We love open source issues. That's adding insult to injury. And actually it has corporation on the plane too. I'm looking into the details. Oh. Yeah, you know, it's so hard not to get, we've always tried at Tidelift, and we try at our events, I mean, these can't be complaint fests, right? If you do that, it's no fun for anybody. So we try to make them, as we've been trying to do, Adam, positive, constructive. Yeah. But boy, some days you just wanna be like, come on. Let's be positive. Get on board. Cause there's just so much. How did we get to XZ? It's like, well, we've been telling you for years that these people are gonna burn out

  49. SPEAKER_00

    and

  50. SPEAKER_01

    then they did. And you're like, oh no, horrors. Well, you know, maybe we should try to do something about that collectively. And it's a real collective action problem for the industry. That's part of how I'll be talking about it in my opening talk at Upstream, this collective action problem that we have. We'll link up the post you wrote, paying maintainers, the how-to, cause we got compared. I think our Adam Jacob conversation, Jared, got compared to this. I think some of us were right and some of us were wrong. I don't know. We were just talking on a podcast, obviously. I

  51. SPEAKER_01

    mean, I love Adam. So I guess I gotta go back and listen to that one. I think it was that one where we got some comments comparing the sentiment in that conversation to what you wrote, and how we were not in line with the same thing, basically. I can't recall which, but I think it might've been Slack. Do you recall this, Jared? The sentiment? No. No? Okay. I could be hallucinating, honestly. It could be a human version of hallucination at this point. Humans also hallucinate. Yeah, we misremember, we misalign. I was like, oh, maybe it wasn't actually that Adam Jacob conversation. Well, for those who haven't read it yet, my post was simply - Yeah, I was gonna ask you to summarize it if you could. Just give us a TLDR. Yeah, in the wake of XZ, some people are like, well, we tried to pay maintainers and it didn't work. And so we wrote up, cause we've never actually written it up before, how is it that we pay maintainers, right? In fairly good detail. And it works, right? We pay out quite a bit of money every month to maintainers from our corporate customers, to work on things that our corporate customers use. That said, there are different approaches to paying people. There are different types of communities, right? Paying a solo maintainer is very different from paying the Kubernetes project, right? That's a very different beast. And this is just one of these things that is recurring, I'm sure this must come up on the podcast all the time: we tend to talk about open source as if it's one thing, when in fact, at this point, open source is so successful that it is many different things. But it's easier for us to talk about it as if it's just one thing. And so we often make mistakes like, well, it's impossible to pay open source maintainers, cause I tried this one form of payment with one set of maintainers. It's like, well, yeah, that one doesn't work. 
And so I don't know, I'm curious where Adam's head comes out on it. It's not magic; the blog post is about how to pay maintainers. It does not claim that this is therefore a magic wand and that these projects will always be secure for the rest of time, right? People will still burn out. People will still have challenges. But we think we've got at least part of the solution at Tidelift. Well, I think one thing that was revealing, and we've known of Tidelift and have been adjacent for many years and worked together in some cases over the years. We've had you on various podcasts. We've had your CEO on our podcast before. And we've talked to Jordan Harband before on podcasts, but last year we actually met him face-to-face, at least I did. I don't know if that was the first time you met him, Jared, but it was last year at All Things Open, and he could not stop singing the praises of Tidelift for him as a maintainer. And so I think what you all could do better, or more of, I don't know how well you do this, cause I'm not in every single thread you're in, but what he had done, to me, reshaped it. I already knew what Tidelift was. I already knew what your mission was, but there was a cementing from a boots-on-the-ground individual that we respect and have talked to who's doing the work, right? And he's like, you know, I've got various forms of payment, but I love the way Tidelift helps me. One of my biggest streams of revenue is from Tidelift. I think it was on a podcast too, so it's already in transcript form. That changed my perspective on Tidelift. Even though I knew who you were already, even though I have respect for you and everyone else who's involved in Tidelift, it changed that perspective, because you saw people who have boots on the ground, who have teetered, and shared how they've teetered, on the line of burnout or not. And obviously we do not want people to burn out. 
Back to what you said before, Jared, I think it's an enabler where you sort of force-multiply somebody doing something when they've got too much on their plate, and artificial intelligence might be able to help them take something from an hour to 10 minutes, that kind of thing. Or, in the case of Jordan, having an organization have his back to let him do what he does best, which is be inventive in open source and not be bogged down by the minutiae, and literally get paid to do it, because he's not gonna stop. He wants to keep doing this common good for the world, but if he can't sustain his life and his family, then it's not going to happen. And so we have to find ways to make that happen. Money is obviously one of the biggest ways to financially sustain somebody, because that's what it's called, financial sustainability. It literally is money. But he could not stop singing your praises, and I was so proud of you all for that. But then it also reignited, I guess, a curiosity from my standpoint on what Tidelift is and what you're doing for the world. Well, we'll have Jordan and several other maintainers on a panel at Upstream. So if any of your listeners are interested in hearing more about that and how we work with maintainers, that will definitely be a topic there, though it's mostly not a pitch for us. I think the official title is State of the Maintainers. So you'll hear, I suspect, about things like what these folks think their risk is of becoming the next XZ or the next Log4j. And like you say, Adam, this is one of the things that is, when you talk to somebody like Jordan, or one of our other maintainers who we partner with, there's a lot of joy and love for what we do. But of course, the people who write the checks are often at Linux Foundation events. They're talking with other execs, they're talking with the leaders of Kubernetes. 
And that's not a bad thing, but it is a challenge for us that these folks in the middle, who are numerically the, I was at a Linux Foundation event a couple of years ago, and somebody says, well, yeah, you know, I'm a maintainer of a small project, there's only 15 of us. I'm like, you are so in the fat head; the long tail of maintainers is one-maintainer projects with an occasional patch. And that's not necessarily a good thing, but it is our reality right now in open source, and getting folks to acknowledge and grapple with that has been an uphill slog for us at Tidelift. So it's great to hear positive words from you, and it's always good for me to talk to Jordan. I saw him just a couple of weeks ago at RSA, so. We'll be tuning in for sure. The episode I was mentioning was episode 563, and it was lovingly called The Way of Open Source. It was an anthology that we did at All Things Open. And it included Matthew Sanabria, ex-engineer at HashiCorp; Nithya Ruff, I believe chief open source officer and head of the open source programs office at Amazon; and then, obviously, as I mentioned, Jordan Harband. He was there representing open source maintainers at large, with dependencies in most JavaScript applications out there. So obviously somebody who's got three different angles into The Way of Open Source. I think we captured that pretty well. So we'll link it up in the show notes, and if you haven't listened, Luis, you should check it out. And Nithya's always worth listening to, so yeah. For sure. Good stuff. Upstream.live next week. We'll be tuning in. Hopefully our listeners check it out as well. Luis, it's always a blast, whether you're telling us what's happening or prognosticating on what might happen next or might not happen. It's always fun for me to talk with you. Yeah, it's always fun for me to talk with you two. By the time we talk next, I suspect we'll have a lot of actual case outcomes. We're still in this very early phase for some of these things. 
And there will of course always be new news from open source software security land. Yeah. I was gonna ask you about the GitHub Copilot litigation, but it looks like it's just kind of ongoing, like there's nothing to talk about there. Yeah, it's still early days. I mean, there's some stuff to talk about, but we'll know a lot more in coming months, I suspect. Awesome. We'll have you back in six to eight months and talk about what's changed since now. Sounds like a plan. We'll do Happy New Year 2025. I believe that's coming already. My gosh.

  52. SPEAKER_00

    All

  53. SPEAKER_01

    right. The year of Linux desktop and/or AI. All right. Bye, friends. Thanks to everybody who gave us feedback on that new theme song. It's quite the departure from our regular fare, but lots of folks enjoyed it, so we'll be working it in here and there. Oh, and that check one, two, money, money stab at the top? Check, check. That was just a bit of throwaway audio from our recording session with Shaundae Person last week that Adam gave to BMC and said, what can you do with this? Not bad, right? Big thanks to BMC for that and all the music that you hear on our pods. And thank you, of course, to our partners at Fly.io and our friends at Sentry. Don't forget to use code CHANGELOG when you sign up for a Sentry team plan. A hundred bucks off, too easy. Next week: Changelog News on Monday, part two of our Microsoft is all in on AI mini-series on Wednesday, and our Pound Define game show returns right here on Changelog & Friends on Friday. Have yourself a great weekend, tell your friends about the Changelog if you dig our work, and let's talk again real soon.