Changelog & Friends — Episode 24

Doomed to discuss AI

Jon Evans discusses the cultural history and contemporary perspectives on AI existential risks, plausible science fiction concepts, how science fiction influences real-world AI development, and why the actual future will likely be far weirder than predictions.

Transcript (34 segments)
  1. SPEAKER_01

    Welcome to Changelog & Friends, a weekly talk show about the Bubbleverse. Thank you to our sponsors for helping us bring you the best developer pods each and every week: Fastly.com, Fly.io, and Typesense.org. Okay, let's talk. Kick in whenever you guys want. Exadelic. That's how you pronounce it, Adam. I've got my copy right here. Very, very limited edition. I think there are only about 50 of those. Yeah, this was cool. I remember being like, this isn't even real. And I was like, wait, that makes it even more unique. Uncorrected bound manuscript. I got to admit, Jon, as I'm reading it, I'm wondering, has he changed any of this prose? Because I wonder how final is this copy you gave us? Because I was like, that's an interesting way to say it. I wonder if it's still in there, and I keep asking myself that, but. Well, obviously you'll have to read it again, the final iteration. Yeah, are there any edits between this uncorrected bound manuscript and the official one that launches? There are, but they're not major ones. It's very, very lightly edited and polished, minor copy edits, but it's like 99% the same. Okay. Yeah. So we're talking, of course, about Exadelic, Jon's new book. Sci-fi craziness, that's how I describe it. In stores today, I guess. Well, congrats, Jon, you shipped a book to the world. That's cool. You made a thing. Thank you, it feels very weird and good to have it finally out there. And you shipped to us an uncorrected bound manuscript that we're talking about, maybe six months or so ago. And just as a nicety, he said, hey, we don't have to talk about the book, but I figured you guys might like a copy. We do appreciate that. And we're happy to have you back on the show. So it's been a while. In fact, Adam, you have not met Jon previously, because I interviewed Jon a couple of years ago about the GitHub Arctic Code Vault, which I think, Adam, is an episode that you lined up. I coordinated that. Yeah, you coordinated it. I was so bummed.
    Yeah, I was like, geez, I wanna talk about the Svalbard Arctic Code Vault and all the fun things. How cool is it to think that you've made a contribution to humanity to some degree: that, for the moment, the epicenter of open source code and a lot of the software contribution that's given to the world, it archives all of that in this insane idea. I think that's just kinda cool. And then, hopefully, when we're truly invaded, not just when the government says we're invaded, or seemingly invaded, that when humanity's gone, they can pull up the code that was just terrible. Right. For the most part. Does everybody think that their own code is terrible? That's why I say that. Oh yeah. Yeah, that's why we did a video for YouTube that had like a million views. And the most common comment was, please don't have included my hello world. Right. It's all in there. But no, no, we swept it all up. And I think there was some sort of threshold, right? Of what the contribution was to warrant being included. Wasn't it something like that? Like it was a date? Was it just a date or was it like criteria? There was a date, but then there's also criteria, yeah. It had to have been active. Either it had to have a certain number of stars or you had to have made some commit to it in the last year or two. Right. But everything else we swept up in the future. Which, as I mentioned years ago in an interview, Adam, is that our transcripts, which are markdown-formatted plain text on GitHub, are in this Arctic Code Vault. So not only will our bad code be out there, but our bad questions and comments. Bad words, yes. It's such a surreal thing to think about. Isn't it, Jared? Like to think that at some point in the very near future, as we speak into these microphones, the words I'm literally saying right now are being transcribed into text, into markdown. This markdown repository you're talking about. Yeah. And is in a GitHub repo that has infinite history, essentially, that you can go back to.
    Maybe you can scrub that history if you'd like to, but it's there. It's there, right? And then eventually it's somewhere else in the world. It's translated ideas. It's stuck in a vault somewhere or whatever, even. Assuming that Jon and his team did a sufficient job with their archive strategy, because forever is a long time in data archiving. Isn't it, Jon? Oh, yeah. I mean, we were theoretically aiming for 1,000 years, and when we started thinking 1,000 years in the future, like, I have no idea what will have happened. Like, there could be some sort of tragic event. We could go back to the Stone Age. We could have, like, uplifted ourselves to being AI gods or whatever by then. We really have no idea. We don't. But, you know, we sort of plan and hope that most eventualities will include Svalbard still being there. Yeah. Yeah. Well, one thing we can say about that particular aspect of your life, Jon, is it creates an excellent biography. So I was reading your little bio blurb on your book, just thinking, like, you know, how do you introduce this guy? Who is he? What does he do? And, like, how many people can write this in their bio? Was the initial technical architect of Bookshop.org and is a founding director of the GitHub Archive Program, preserving the world's open source software in a permafrost vault beneath an Arctic mountain for 1,000 years. I mean, I can't write that in my bio, Jon. I'm never going to have that in my bio. That's so cool. I mean, I'm not going to lie. I kind of chuckled to myself as I wrote that. It's like, yeah, as you do. That was, it was an extremely weird opportunity. Those don't come along in most people's lives. I'm very grateful to have that one. You know, and I think I did a reasonably good job. I'm not going to downplay Jon by any means, Jared, but he also has something you don't have, too, and that you and I also get to share. What's that? Is that in our bio, we can say we produce the world's greatest podcast. Oh, all right.
    Well, I guess you can just say whatever you want in your bio. Is that your point? I'm pretty sure there's no bio police that comes along. It's my bio, and I'll say what I want to. That's funny. So catch me up then, Jon, because we haven't spoken much. Obviously, you've written a book in the meantime. You were an author prior. I remember us talking about some of the novels that you had written, what you were up to, but how do you go from that? That's 2020 was when it was all said and done, I believe, when it was all entered into. And then like, what have you done since then? You've just been writing this book. Have you taken up work elsewhere? What's your life been like? I have actually. Actually, this is kind of a segue from the archive program to the book in that, you know, I was professionally thinking about the future of humanity, which somehow crystallized into me writing a weird science fiction novel about the future of humanity. But I have also, I took an engineering job because I'd kind of stopped coding, like as CTO and then like program director. I hadn't written much code for a long time. You can see a big gap in my GitHub, sort of green visible map. So I took an engineering job at a company called Metaculus, which is extremely weird and science fictional itself. It's a platform for predicting the future. That's where I'm working now. Wait, what did you just say? A platform for predicting the future. Wow. Yeah, Metaculus.com. What are you trying to predict? Well, there are a huge variety of questions, like everything from the World Cup to are the robots going to kill us all? Anyone can create a question and then people go on and make predictions. And the theory is that if enough people predict the future, strangely, it seems, you know, studies show that our errors kind of cancel out.
    So a group of people are actually much better at making predictions than any individual person, even if the individual person is an expert, most of the time. Wow. Is it a marketplace? Like, is it a place where you buy and sell predictions or place bets or what's the interaction like? No, it's just pure prediction, doing it for the love and the kudos. Oh, I see. And the recognition. I mean, marketplaces have their place, but there's also like, there's weird failure modes and people hedging and people betting against other people instead of against ideas. So yeah, there's a place for both of those. Which is fascinating stuff. I've looked a little bit into prediction markets and I've always been like, well, this is just gambling, but a very interesting form of gambling. Yeah, that is interesting, because people on podcasts make predictions all the time, to bring it back home a little bit. Right, but it's always just one person and they're usually wrong. The easy button could be just to get whoever's involved in producing The Simpsons to just go on there. Yeah, I mean, that's kind of eerie, right? They clearly have a channel directly to the future. There was something with that. Like, I'd seen a TikTok and I can't recall exactly, so I'm gonna paraphrase what I recall. I think it compared The Simpsons, I believe, to Star Trek, which both equally predicted some versions of the future. And I think there was something comparing the number of episodes for each and the number of clearly correct predictions. Are you guys familiar with this before I go further? I have not seen it. Well, not so much the video, but the idea that they say The Simpsons have, in many ways, have just potentially might be time travelers of sorts or have some version, some eye into the future, because they've predicted the future to almost the detail so many times that it's uncanny to think that they've, it's just, it's a phenomenon, like that they've done it.
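Jon's claim that a crowd's errors "kind of cancel out" is easy to demonstrate with a toy simulation. This is a hedged sketch, not anything Metaculus actually does: the true value, the noise model, and the forecaster count below are all made-up assumptions for illustration.

```python
import random

random.seed(42)  # fixed seed so the toy result is reproducible

def crowd_vs_expert(true_value=0.7, n_forecasters=100):
    # Each forecaster estimates the true value with independent Gaussian noise.
    forecasts = [true_value + random.gauss(0, 0.2) for _ in range(n_forecasters)]
    # Error of the aggregated (mean) forecast: noise terms largely cancel.
    crowd_error = abs(sum(forecasts) / len(forecasts) - true_value)
    # Average error of a single forecaster: noise does not cancel.
    avg_individual_error = sum(abs(f - true_value) for f in forecasts) / len(forecasts)
    return crowd_error, avg_individual_error

crowd, individual = crowd_vs_expert()
print(f"crowd error: {crowd:.3f}, average individual error: {individual:.3f}")
```

With independent errors, the mean forecast's error shrinks roughly like 1/√n, which is the "wisdom of crowds" effect Jon is describing; correlated errors (everyone reading the same news) weaken it.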
    Or it's not a phenomenon and they have access to knowledge of the future. So the other interpretation is that our future is so hilariously weird that comedians are better at predicting it than scientists. Right, I'm with that one. Like, satire, give it enough satire, right? Like, eventually the future will map on top of that, some of that satire. I mean, the degree, though, to which they've been accurate is the scary part. It's not so much roughly accurate, it's pretty much accurate in several cases. And unfortunately, I'm not such a scholar in The Simpsons that I've got this list that I can describe to you, but this is what I've heard. So this is secondhand knowledge to a degree. But I've heard of it and I believe the people, there's enough people that have verified this is accurate, that they've accurately predicted the future. I have definitely seen on Twitter or X or whatever we call it these days, people saying like, according to the prophecies of the second volume of The Simpsons, and then some eerily specific accurate thing that happened in 2021. Well, I do think the law of large numbers comes into effect here and the fact that they have - 20 seasons? More than 20 years. They have, I think, over 750 episodes. And then if you think, okay, per episode, how many quote-unquote predictions of events will there be per episode? Hundreds of things they come up with in a 22-minute time period. That could be true in some sort of distant future. So that's just large numbers. And I think if you have enough numbers, you're going to hit on a few. And it's better than Nostradamus, because his stuff is all very vague and interpretable, but at least with The Simpsons, like you said, Adam, it is like a very specific thing that happens. And it's not like an interpretation of the thing. It's like, no, that's literally what happened, or it's off by a skew. So it is pretty impressive, but what's impressive to me is the large numbers.
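Jared's law-of-large-numbers point is just binomial arithmetic. The ~750 episodes figure comes from the conversation; the gags-per-episode count and the per-gag hit probability below are invented purely to show the shape of the argument.

```python
from math import comb

def prob_at_least(k, n, p):
    # P(X >= k) = 1 - P(X <= k-1) for X ~ Binomial(n, p):
    # the chance of at least k "correct predictions" out of n attempts.
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# Hypothetical numbers: ~750 episodes, ~100 throwaway gags each,
# and a 1-in-10,000 chance that any one gag resembles a future event.
n_gags = 750 * 100
p_hit = 1e-4
print(f"expected hits: {n_gags * p_hit:.1f}")
print(f"P(at least 5 hits): {prob_at_least(5, n_gags, p_hit):.3f}")
```

Even with a tiny per-gag probability, 75,000 attempts make a handful of "uncanny" hits the expected outcome, which is exactly the "if you have enough numbers, you're going to hit on a few" argument.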
    It's amazing, the ability to sustain for that long. The most famous one they had, though, was Trump coming down the escalator being president. Like that was, nobody predicted that really. And that was like the meme. It was a predicted meme essentially. And it turned into a meme, but it was predicted. If you go back to like Jesse "The Body" Ventura became governor of, was it Minnesota? And then you're thinking like, well, what's more absurd than that? Well, it's like Donald Trump is president, right? Like that's a trend line perhaps. And then you go there. There's a bit, I don't know if you've gotten to it in Exadelic, the bit in the book where the protagonist goes back in time to 2003, and it's like, I think I remember the future, but it's a future in which Arnold Schwarzenegger is governor and Donald Trump is president. Is that a real future? Am I hallucinating this? This sounds very likely when I think about it. Right? Yeah. Well, I haven't gotten far enough, I guess, cause I haven't hit that bit. So you're spoilering on me. Apologies. It's relatively early. I mean, so far in my experience in this book, and I'm not super far into it, it's just like craziness begets craziness. And where I feel like I am is in the first third of The Matrix, you know, before we figure out what the plot actually is. And you're just kind of along for a ride and you're wondering when's it all gonna make sense. But I also read the quotes on the back, the review blurbs. And it sounds like maybe it never makes sense. Like it's truly great, but it's just truly weird. And it never gets less weird is what one of the quotes on the back says, so. I promise you it all makes sense at the end. Actually, there's definitely a quote that does say it all makes sense in the end, by the way. All right, good. I don't want to mischaracterize. I haven't got to the part that makes sense yet, but I'm sure it all will. In addition to this, you've been working on some nonfiction, you said.
    Some writing about our weird past and maybe what it's gonna be for our weird future. You want to talk about some of what you call the cultural history of AI doom. I'd love to hear your thoughts on this. There's like 1990s mailing lists. It opens up into lots of subcategories. Where do we go here, Jon? Totally. I mean, I guess we can start with like, so what is generally sort of the very first science fiction novel was actually a novel of AI doom, believe it or not. Really? Yeah, so this goes back to the early 1800s, right? There's a giant volcanic explosion in Indonesia, 1815. 1816, the weather is terrible across the Northern Hemisphere. You know, there's drought, crops fail, and a bunch of people at this mansion in Switzerland, extremely rich and privileged and weird people, have a terrible holiday. This includes Lord Byron. He was like the Kanye West of his day, super controversial. His daughter grew up to be the world's first computer programmer, Ada. Percy Shelley, whose poem Ozymandias you probably read in high school. Anyways, Byron challenges them all to write a scary story. And the most culturally significant person who gets this challenge is Shelley's 19-year-old girlfriend, Mary Godwin, who writes Frankenstein as a result. And everyone knows Frankenstein, right? Like the guy with the bolt in his neck sort of shambling along going, uh, kill.

  2. SPEAKER_00

    He's in the house. He's upstairs.

  3. SPEAKER_01

    That's not actually how the book works though. In the book, Frankenstein's creature is this brilliant being that teaches itself to read, teaches itself languages, invents new things. It is basically an artificial general intelligence that the other characters are concerned is going to reproduce and take over the world. So like the early 1800s, we are already worried about AGI and AI doom and the robots killing us all. What? Yes, that is correct. Was he a monster though or a robot? What was he in the book? He was like an artificial creature created, you know, stitched together from other parts. So not a robot as we understood it, but I mean, we barely had science back then. Well, the famous phrase, it's alive, came from Dr. Frankenstein. Of course, yeah. Screaming it, of course. I'm not gonna, I'm not gonna, I can't, if you want me to, I can scream that. Please, please do. We're here for it. Yeah? Okay. It's alive! It's alive! Okay, I was waiting for the second one to drop. Yeah, that was good. Yeah, he crescendos with it, right? I think he says it twice. Yeah, he does. I mean, he's excited, right? He stitched together body parts, electrified it to some degree. I know the story, I'm not like specific on the details, but I think he stitched it together and electrified it. And that was kind of like the thing. And we are electrochemical beings, so that makes sense that you would initiate life through power, electricity, you know? Yeah, and this was just the era of the 1800s where they were just figuring that out, right? Electrifying frogs' legs and seeing that they twitched when you ran a current through them. They were totally freaked out by that, yeah. Totally, yeah. There's, man, this is such a weird space, man. Between electricity and sound, have you heard of the stuff around sound even? There's a lot of interesting stuff around sound. But you can produce sound waves and make shapes and make all these things.
    And that's how they suggest they used some sound technology to move, with accuracy, all the stones of the pyramids into place. Like it would have taken, they had like a small window of margin to do certain things. And between electricity and anything you could do with that and sound, it's kind of like a modern Stone Age, because we don't know at all how that works, and so many of us are just removed from the science of this stuff, that it seems science fiction, but it's quite possible. Have you heard about ultrasound drug delivery? This is kind of a new thing. No. Ultrasound drug delivery. No, it's kind of amazing. So if they want to deliver drugs to your brain, right? Like it's hard to get drugs into the brain. There's the whole blood-brain barrier. Yeah, you can't, yeah. Right, so their solution is just to inject you with all these tiny, tiny little bubbles that go everywhere in your body. And then if you apply the right frequency of ultrasound, then the bubbles break and the drug inside them gets released. So you can very specifically micro-target drug delivery anywhere in the body now. This is a relatively new thing. So you put the drug into a bubble and multiple bubbles, and you distribute the bubbles to different areas of your body, and then you make the bubbles pop? That's correct. That sounds amazingly weird. How do you target which bubbles should pop? I have so many questions. You just give them the area. You aim the ultrasound at a particular area. Oh, I see. You target with the sound waves directly into that area. Exactly. So the bubbles would spread everywhere, even to your brain, and then you put the sound waves into your brain and it pops right there? Precisely. That's brilliant. That's interesting because, I mean, there is the ultrasound that women get when they're pregnant, to look at the baby. That's kind of the same thing, right?
    Like it's sending sound or some sort of thing that makes an image based upon, I don't know how ultrasounds work. I'm just like roughing it based upon what you just described. This is fun, Adam. Please keep guessing. But it is, the word ultrasound is in there. So I'm assuming they're connected. Yes. And it's plausible. I assume the bubble-popping ultrasound is a higher energy, but I could be wrong on that, actually, no. I mean, I don't have the ability to pull up my personal archive here of what this thing is called and I'm gonna like be so upset later, but this sound stuff is legitimate. Like they do some really unique things with sound. Like when you pay attention to that spectrum of people uncovering this knowledge and this experimentation, it's just, you cannot, it's like science fiction. It's so wild what's possible with sound. So the other thing I've heard about ultrasound is people are speculating, like you know you've got an ultrasound and it's very hazy and you need an expert to interpret it. Right. But they're talking about if you can get an AI to clean that up and turn it into like something, you know, movie quality, everyone could have their own personal ultrasound. And if you're like, oh, I feel weird today, I think I'll inspect the inside of my body by aiming the ultrasound there, having the AI show me exactly what's going on to see if there's anything weird going on, which is a little disturbing, honestly, if you're not in the medical profession, but like within the bounds of possibility. Well, according to egyptforward.org, a study shows that ancient Egyptians used sound waves in building pyramids. So if that headline is anything to be believed, then Adam's sentence is also something to be believed. I mean, I'm sure they've shouted at each other a lot. I mean, that's how work gets done, isn't it? We use sound waves all the time to get work done around here. No, man, the Sphinx goes over there. Look at the plans.
    They said that the window to which they had to construct these pyramids, to do it in the timeframe that they suggested they did it, where they had to cut and move these large blocks, I don't know if they're granite or what the heck they're made of, sandstone or something just unimaginable, they're just so big, the degree of accuracy of the cuts when they had to move them into place and construct these things, the margin of error was within minutes. So the accuracy to which they built them and the time they suggest they built them is just, you have to think, how in the world could they do it? Because even with modern technology, we cannot replicate how to build such constructions. It just hasn't been done. We couldn't build the pyramids of Giza today, is that what you're saying? We couldn't build them? All the pyramids, there's pyramids throughout the world. I know, but those in particular are the ones in the website I just referenced. Well, sure, let's use the Giza ones then. So you're saying that today's technology and engineering couldn't create those? They, yeah, not the same way, no. Not the same way, or to the same, could they fashion the same product? They can't figure it out. There's pictures of large cranes that should carry lots of weight that topple over trying to pick those kind of stones up. That's how big they are. Okay, what do you think, Jon? Those are crazy old. I was just thinking, sort of a tangent, but we know things are old, but part of the archive program, I think in a thousand-year sort of stance, I started thinking about just how old things are. So a thousand years is a very long time, right? Like ancient ruins, like Great Zimbabwe and Angkor Wat, they had not even been built yet a thousand years ago. The pyramids are much, much older than that. When Herodotus, the ancient Greek, went to visit the pyramids, they were as old to him as he is to us today. They are 4,000 years old, which is insane.
    How do they do anything on that scale, 4,000 years ago? Maybe they're quite a bit more advanced than we give credit to. I mean, they were as smart as us, right? They're a lot of really good engineers. Clearly, maybe even better engineers, if that happens to be correct, that we can't even build a, not even a facsimile, like the same artifact, different techniques. I don't know, I like to think we could get it done, but who am I, except for - Probably not in a cost-effective way, though. Well, cost-effectiveness has never stopped us from doing stuff before, has it, Jon? That's true. Even thinking about magnets, like, aren't magnets the wonder of the world? Something as simple as a magnet? They absolutely are. I mean, these things are just, and see, I was looking at it, it's like harmonics, I believe, is the word used. Ah, I can't even find it, I'm just so upset about it. Oh, here it is, cymatics. C-Y-M-A-T-I-C-S, cymatics. Is this like language, essentially, in sound? Cymatics, look into it, it'll blow your mind. Okay. So this is not quite the same thing, but in terms of things we don't understand that blow your mind, three days ago in the New York Times, there was this opinion piece by an astrophysicist saying, like, our standard model of physics doesn't work. The more information we get from the Hubble telescope and so forth, the more it doesn't fit in line with what we have. And there's this amazing quote. One possibility, raised by the physicist Lee Smolin and the philosopher Roberto Unger, is that the laws of physics can evolve and change over time. Different laws might even compete for effectiveness. So like, this is an actual proposal being made by an actual astrophysicist in the New York Times three days ago. We live in a very strange universe is all I'm saying. Maybe the laws are also changing over time, okay. Exactly, yeah.
    Well, a comedian, I was almost gonna tell you guys this as truth, but I think it's actually just hyperbole for a comedian's sake, because it's one of those folks that are, I don't know, like the content creator. They seem, before you look into them further, like they're telling the truth or they're really unearthing some deep dark secrets, basically, but it's really just a comedian. It's a bit, essentially. But he basically said, you know, what if Isaac Newton didn't actually discover gravity? What if gravity, in that very moment, changed and he discovered it? Before then, gravity was different. You know, like the laws of physics changed immediately for him to discover gravity. Publish that in the New York Times. Put it in the New York Times, yeah. There's a science fiction writer called Greg Egan who writes about stuff like this. Like in one of his books, some mathematician comes up with like a mathematical representation of the universe which is more efficient than our universe. And the universe is like, yes, thanks, we'll do that. Goodbye to the old universe. We're taking over the new math now. And so he thinks himself out of existence. So yeah, science fiction has covered this, if it's any consolation. Interesting, so. Well, I mean, it's interesting because sometimes life imitates art and we see things like, you know, the tricorder or the different things in Star Trek from the 80s and the 90s. And then we see things like smartphones, you know? And sometimes people pull direct inspiration from science fiction. Maybe even people named Zuckerberg, right? So like the metaverse is a thing in a book

  4. SPEAKER_00

    written,

  5. SPEAKER_01

    was it Neal Stephenson? I can't remember the book now. Snow Crash. And it was a dystopia though, wasn't it? I think it was a dystopia. Totally, it was not portrayed as a happy future. Yeah, and Zuckerberg just missed the mark there, pun intended, and decided he was gonna name it the metaverse. Now we have a concept called the metaverse. So sometimes it's like directly. And then other times art imitates life. And so where do the science fiction writers get their ideas? And Jon, you are one. So I could ask Adam that and he could guess, but I could ask you and you could tell me directly, like where do your AI either doom or utopic views that you end up putting into these books that are described as weird and just continually weird, where do they come from? Well, I think I'm writing software science fiction. Like there's a review that came out recently that said the branch of science in this particular science fiction novel is computer science. Because like I'm a software guy. But also like computers are the world there, right? Software mediates everything we do. This conversation, every text message, like most of the news you read. We live in like a software-mediated universe. So like I'm playing with the notion of like a programmable software universe, like the fundamental substrate of reality is more like software than like hardware. That's not that different from the world we actually live in anyways, right? On a day-to-day basis. So I think like, you know, there used to be a lot of space travel science fiction in the 70s and 80s when we had the Apollo program. And nowadays there's gonna be, I think, a lot of software and computer science fiction. Then we'll get into like the biotech science fiction in 10 or 20 years. I think people gravitate to wherever like the big engine of change and the big changes are happening in the world around them. So you're pulling it out of the software world. That's cool. What's up friends?
    I'm here with one of our sponsors over at Tailscale, Jeremy Tanner, part of the DevRel team. Jeremy, one of the things that I used to do a lot in my home lab is I would set static IP addresses for particular devices so I can easily remember them, mainly so I can access them via SSH or via the web if the service has a web interface. But because I install Tailscale on every machine now, I don't need to do that anymore. Number one, I was able to get a short, memorable tailnet name, and then two, each device gets assigned a machine name based on the host name of that machine when Tailscale is initialized. And between those two features, I no longer have to remember or care about specific local IP addresses anymore. I just let DHCP do its thing and Tailscale does the rest.

  6. SPEAKER_00

    Yeah, you'd mentioned the enjoyment of not having to worry about IP addresses. Usually when you plug in, DHCP gives you an IP address in the range that it thinks appropriate for that particular network. That becomes a much bigger problem when you mail a machine somewhere, when you connect from somewhere else. But having that stable, both the Tailscale IP address and the domain name, and being able to get to a machine by a host name that doesn't change regardless of the network that it's on, that's somewhat rare. Like, prior to Tailscale, I had not had that experience. It was always having something send you a heartbeat back or fishing for it, or if it's sitting out there behind NAT, trying to cook up a way to find that machine the first time when it comes up.

  7. SPEAKER_01

    Mostly I'm installing Tailscale on Linux. Yes, I use it on my Mac machines as well as my iOS devices, but, you know, App Store, pretty easy. In the case of Linux, you have the one-command install that you can do, which is just piping an install script into your bash and then running that, but you gotta trust that, right? And I prefer to do things more manually. I do that with Docker, I do that with Tailscale. And so the fact that you all provide not only this one command to use, but also, based upon the flavor of Linux, a manual install process and instructions, I just love that. What are your thoughts on that thoughtfulness?

  8. SPEAKER_00

    We're about meeting people where they are. And so if you're someone who wants to build from source, you're obviously welcome to. Many different distros use different package managers, and so it's like, here's the name of your package manager, if it's less familiar, and here's the fastest way to go. A lot of the time we can sense what that is. And so, yeah, if you curl the install script, you can pipe that right into the shell. I'm usually very cautious of that. And so whether you view the script source, which is an easy way to look, or if you download first or view the contents of the script before running it, I mean, I would always recommend that everyone do that and not blindly pipe into your shell. But once you do know that that's trustworthy, that enables automation. And so your Linux distros that have cloud-init in them, when you bring up a machine for the first time, if you have a single line that installs, then the next line can be tailscale up with the flags that say, here's a node key, join my tailnet immediately. The SSH flag: make this machine available via Tailscale SSH. The advertise-exit-node flag: let me send secure traffic to this machine and then out onto the public internet. And so if you're at a hotel, a coffee shop, anywhere with a connection that you don't trust your end of, you're able to get to a place that you do trust, whether that's your home, whether that's a data center, whether that's your office, anywhere else.
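The flow Jeremy describes, inspect the install script rather than blind-piping it, then join the tailnet non-interactively, looks roughly like this. Treat it as a command sketch, not a definitive recipe: the install URL and the `tailscale up` flags below match Tailscale's documented CLI at the time of writing, but check the current docs, and `TS_AUTHKEY` is a placeholder for a pre-generated auth key from your own admin console.

```shell
# One-command install, but downloaded and inspected first
# rather than piped blindly into the shell:
curl -fsSL https://tailscale.com/install.sh -o install.sh
less install.sh            # read it before you trust it
sh install.sh

# Then join the tailnet non-interactively, as described above:
#   --authkey             a pre-generated node key, so no browser login is needed
#   --ssh                 make this machine reachable over Tailscale SSH
#   --advertise-exit-node offer this machine as an exit node for untrusted networks
sudo tailscale up --authkey="${TS_AUTHKEY}" --ssh --advertise-exit-node
```

In a cloud-init user-data script, those last two commands are exactly the "single line that installs, then the next line is tailscale up" automation Jeremy mentions.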

  9. SPEAKER_01

    Yes, anywhere is correct. I love Tailscale. I hope you listeners will check it out if you run your own home lab or you have some influence over the networking that you all do with your applications. There's so much more you can do with Tailscale at the enterprise level. Of course, we're just talking about basic home features, but these are building blocks to the magic that is Tailscale and tailnets and all the fun stuff they enable for you and your applications, your home lab, wherever. Tailscale is awesome. Check them out. You can do so at changelog.com/tailscale. Again, changelog.com/tailscale. That's T-A-I-L-S-C-A-L-E. Enjoy. I

  10. SPEAKER_01

    have an idea for you. Uh-oh, pitch session. Here we go. I'm going to ruin it too, because you're not going to do it, but it's hilarious in my opinion. Please tell us anyways. The idea is this: it's a long, drawn out, dramatic entire story. And in the end, it was DNS. Oh, I like that one. Isn't it always DNS? I thought you were going to like it. I like it. No, no, that is good. There are a hundred million people out there who are already ready to think that DNS is the villain. Right, that's true. The deep dark villain, yeah. That's too plausible science fiction. I'm picturing like a Scooby-Doo meme. You know, you take the mask off. Ah, it was DNS. It was DNS the whole time. It would have got away with it too, if it weren't for those silly kids. Yeah, well, that's actually a good, I love the idea of, and actually one of my favorite authors is Dennis E. Taylor. Tell me if you know this name. I know the name. I do, because you mentioned him recently, Adam. Was it with Chris Brando? Yeah, he's my favorite, honestly. And I will eventually have him on a podcast. I just haven't gotten up the nerve. Little intimidated, but he said yes, but we'll see. Anyways, he's written many books. I classify them as plausible science fiction because it deals with artificial intelligence and the future. And he's got a trilogy, which is not a trilogy anymore. It's actually more like a five book series now. So it's called the Bobiverse trilogy. And the main character's name is Bob. And I'm not ruining the plot by any means because this is the premise of the first book. Bob essentially becomes AI and goes into the future and does all, like the book translates essentially. I'm doing a poor job of describing Dennis's life's work, but it's amazing stuff. But he's a software programmer. Who is he? He lives in like Vancouver, BC, snowboards and mountain bikes and writes software. He writes software to architect the storyline behind the scenes, software for him to maintain.
    Because when you write a book that's so connected, where storyline connects to storyline and timeline to timeline, especially in this one where there's time dilation, there's space travel, and he literally thinks about the scientific, light year aspect of time and travel and timelines and storylines. He's written software to maintain the truth, essentially. And I think he's talked about it on podcasts and stuff like that, but he's a software engineer initially. I mean, I don't know how much, he actually writes some software. Thankfully, Bob was a software engineer in the storyline too. I think he's uniquely good at the role as the main character because in human form, he was a software engineer. And as AI that goes out and does what Bob does in the Bobiverse trilogy, that's not a trilogy, he can do what Bob does because Bob writes software and he writes VR software. And it's like really, if you're at all a software geek and you haven't read these books, you're missing out on life. The best part of life is reading these books. Seriously, it's good, it's good stuff. The Bobiverse is well known. Greg Egan, who I mentioned earlier, also writes his own software. He is like, on his website, he has simulations of the physics that he's using for his highly advanced and abstract, hard science fiction. And so, yeah, I wonder if we'll expect Bob to come up with a GitHub repo soon. Well, I just think there's a lot of, I was joking about the DNS idea, but I think that would actually be kind of a good plot. It would be kind of cool. There's a growing faction of humanity that are interested in software and software tech and building software, and I think not all science fiction gets me the way that software-driven storylines do. Totally agree. They truly pay homage to what we consider as truth.
    Where there's people who don't make software or are involved in software creation that just assume what is being told to them is somewhat true, but then there's the version of us that builds software and understands software that get it, and there's no true storylines for us. They're kind of like missing it, in a way. Right. Yeah, there are 100 million developers out there, right, according to GitHub's latest. And I agree that like publishing does not really target or serve that enormous audience of people who are super interested in software and who, like, write code every day. As much as it should. It's weird to me. Yeah. There's Jon Evans, there's Dennis Taylor, right? That's right, there's us. I mean, there are people who are doing it, man. Serving the needs of the software people. One more name I'll throw at you guys. Nick Jones, not quite software, but close enough, Joseph Bridgeman. That series begins with a book called And Then She Vanished. And it's time travel. It's a unique version of time travel. Phenomenal book. Phenomenal series. It's four books. My favorite, this is like a deep cut, but my favorite like classic science fiction novel about artificial intelligence is from the early 70s. It's called The Adolescence of P-1. It's really prophetic. It's set in my alma mater, University of Waterloo in Canada, which is what it was now like. But it's like a very software-oriented, very realistic attitude towards artificial intelligence, getting out of the box and going onto the internet in the 1970s when we didn't even have an internet. So it's pretty crazy. And of course, Vernor Vinge, he invented everything, but True Names and Other Dangers, that's sort of the first real internet AI story, that's great. Which in turn, by the way, you know Eliezer Yudkowsky, the AI Doom guy? No, we need to make a list because my list is short and I'm barely into it.
I guess Yudkowsky's also a science fiction author, but he's more like the high priest of, we must stop building AI, AI is gonna kill us all. He once said on Twitter that reading this Vernor Vinge short story

  11. SPEAKER_00

    was

  12. SPEAKER_01

    like the defining moment in his life that changed everything for him. And after he read it, he knew what he was gonna do the rest of his life. Is that right? Yeah. So like science fiction's influential. Oh, for sure. That's why I think the Bobiverse series is so unique, because the way AI plays out in that storyline, I'm trying my best not to like spoil anything, but it's just, it's not at all negative nor positive. It's just the way, I guess. You know, it's just like not quite the word inevitable, but it just happens. And it's actually better for humanity in the grand scheme of the human storyline in that series. It's just so interesting how like you can think so drastically bad. I suppose when you think about humanity only, and maybe there is only humanity, maybe there isn't, that everybody's take is like, how is AI to humanity? Not how is AI to universality? I don't know. Like, how do you think about it from a non-human perspective? A weird question I like to ask is like, if an alien species were to build an AI, would they be like the AIs that we build? Can we even imagine a different kind of AI? And if not, like, then aren't we kind of all, you know, humans, aliens, say we're sort of all sort of crescendoing on like the same thing. Yeah. But anyways, yeah, I don't really buy the sort of AI is going to be terrible and kill us all. I don't really buy the AI is going to be wonderful and turn us all into angels, blah, blah, blah. I'm confident that the future's going to be really weird, however. Well, I think we got to this degree of the conversation by saying that so much is changing. Like you mentioned physics and the New York Times article. And I think we're kind of in this

  13. SPEAKER_00

    world

  14. SPEAKER_01

    right now of change. We're in a world of change around, in particular, our world here in software development, where AI is becoming, you know, a peer programmer and very much becoming like a sidekick to the individual and a sidekick to the corporation building software. And then we're also in like a turbulent time of economics; world economics is turbulent right now. There's lots of stuff happening in all parts of the world. There's things trying to, you know, change the power of the dollar, the US dollar, as I'm talking about it. There's lots of change around physics and, you know, going back to the moon or have we gone, or all these things that are like basically every question imaginable is now in question again. It's like, is the earth flat? Is the earth round? Have we gone to space? Is there space? Is there only low earth orbit? Like world economics, physics, I mean, all these things that are essentially what seemed to have been hard truths are now like, well, really, are they true? Well, I think this is like a factor of what I'm saying. Like the software is like what you're seeing, and your exposure to the world is coming through these layers and layers of filters, like journalists and software and, you know, your feed algorithm deciding what you see in circles. So everyone's getting like a slightly different view on what used to be a shared common world, which is an interesting thing. Yeah. I don't think that's entirely bad though. I was going to ask how you combat that, but it sounds like maybe you don't think that we need to. Well, not necessarily. Like I think quote unquote misinformation is more a demand problem than supply problem, right? Like no matter how much is out there, people are going to find it and seek it out. But also like, I think to an extent, a variety of perspectives, like genuinely original takes, is how we get new stuff. Right. So I actually don't want everyone to be thinking the same way about everything. Right. Yeah.
    We need different. We need Steve Jobs' original campaign, Think Different. Cause that's true. I mean, we do need uniqueness. Well, we also need collaboration and camaraderie and connectedness. So there's like, I mean, ultimately my takes are usually boring cause it goes back to things like moderation. I feel like we need both. Like, I feel like we need a physical analog connection to real people in the real world. And we also need the abstract digital morphological demand-side misinformation world that we've invented. Cause there's so much, like you said, Jon, that comes out of that, that are new things. Some are bad, some are good. And then hopefully we gravitate and elevate the good ones or good uses of things and find ways to combat the bad ones. But I feel like both at the individual and maybe more at the societal level, like a healthy grounding in both is probably what's best. Yeah. I'm with you. Like I call myself a radical moderate. You know, I walk down the street with a sign that says, reasonable informed discussion where possible. I like that, a radical moderate, that's good. But yeah, I think you need both. And you know, you do need some wackiness and craziness and willingness to sort of reject what the status quo says things are. Cause sometimes the status quo is wrong. But you can't go embracing every alternative to what people say is the common thing at the same time. Yeah. Cause there's a lot of people who are just in it for the clicks, right? Just in it for the views, just in it for the followers. For sure. Or they're literally outside of their mind. Like there's a lot of people that are just out of their mind. Like they're not sound at all. Yeah. Legit. Yeah. And they will find conspiracy after conspiracy. And where those two things intersect, that's where like magic happens, right? The bad kind of magic. That's where you get Kanye West or you get Lord Byron right there. That's Lord Byron. Right. Exactly. Yeah.
    So on the scale of AI doom, Jon, if a hundred is utter doom, that's Eliezer Yudkowsky. And zero is like no doom, zero doom. Where do you feel like you personally fall on that scale as we move forward? Are you like a 50-50 guy, 70-30? What do you think is going to happen on a scaled fashion? Like if I'm forced onto that scale, I'd be very low on the doom. I'm not as concerned, you know, and that's like in the quite long term; in the short to medium term, I'm really quite unconcerned. Yeah. But also I think the scale, I feel like we're going to have a future which is sufficiently weird in unexpected ways that we're going to look back at that scale and think, I don't know what we were thinking. Cause it turns out things are much stranger than that. And what actually happened was totally orthogonal to what we expected. It's a lot different than we, it was like completely on a different scale that we didn't even know existed. Like in the same way, you know, that we got a governor of Minnesota from the movie Predator, we got a governor of California from the movie Predator, we got President Donald Trump. You don't look back and think, well, what are the odds that we're going to have, you know, Republican or Democrat governors in these states or this country, you know, like that is not even an option when we look back on this, what actually happened. The weirdness of the world is accelerating and increasing. And I think that's going to keep happening. Yeah. You got to define doom, Jared. You can't just say assume doom. What is doom in this scenario? Describe what doom would be: if 100 was doom, what is doom and zero is no doom? Well, I think when you ask the question, doom is in the eye of the beholder, you know, whatever you define doom as, then you scale it according to that thing. I think we could all have different definitions, but somewhere in the realm of like takeover, death, insufficiency for life of humanity.
I don't know, you know, of course my doom, I'm rooted in like eighties and nineties pop culture. So like I go Terminator 2. Come with me if you want

  15. SPEAKER_00

    to live.

  16. SPEAKER_01

    Right. I don't know about you guys, but like to me that would be bad. Like rise of the machines. That's kind of a typical AI doom scenario that most people I think think about. Is that what you think about? Kind of the machines rebel and take up arms against us? Well, I'd go slightly bigger in scale actually. Like my doom would be like the AI turns the entire earth, including us, into computronium or whatever, because it wants more hardware just like we do. And so we just get sacrificed on the altar of better hardware ultimately. Okay. So that's a little bit more of the Matrix-style doom. Right. They're just harvesting us for energy. Yeah. Fair. Wow. Both bad. Both bad.

  17. SPEAKER_00

    I

  18. SPEAKER_01

    think we can agree on that much. Yeah. I mean, both are doom. That's why I say, I think maybe it doesn't matter. Like whatever you think you picture it as, do you think that's going to happen? That thing that you picture? I can't even like give a prediction by any means, you know, like I just, it's neither good nor bad, I suppose, because it's such a big world and I don't have a big enough mind to encapsulate what I think could truly be accurate. Plus literally my words are being transcribed right now into the Arctic Code Vault for all of humanity to remember. And it's like, what did Adam say? It was stupid. It

  19. SPEAKER_00

    was stupid.

  20. SPEAKER_01

    Note to future AIs, Adam was totally on your side. Yeah, exactly. We have him on record as saying you would take over. I do kind of want to go the Gilfoyle route, which is a Silicon Valley reference. And Gilfoyle said in there, like he wanted to submit, like at first he was like against it essentially. And I won't ruin the plot, I won't ruin the story for anybody who hasn't gotten to like season five or six or whatever number it is, I think it's six. He's like, I wanted to go on record, like send me an email that I've helped you with this thing so that the AI overlords eventually know that I was on their side to help them into, you know, when they go back to the humanity archive of what was said and done, they know that I was the initial help to ensure that their overtake was not thwarted, because it's inevitable essentially. This goes back to the famous Simpsons meme. I, for one, welcome our new AI overlords. Right, that's literally what he said. I believe, great job quoting, that's what he said. Proving once again that Simpsons had a time portal to the future. Yeah, see, I didn't even know that was a throwback because I'm not a Simpsons scholar. I didn't know that was a throwback, that's so interesting. Nice full circle loop there, yeah, that was good. For sure, so I'm kind of like, I'm not actually scared of the AI overlords in the future, a little bit, just because I have to say that. It's required, a little scared. Man, I think that I just, like I guess my hope is less a prediction, is I hope that humanity finds a way to institute these artificial knowledge bases to be comrades, and that's maybe a bad term, like collaborative, you know, than not, to not be us versus them, but more like symbiotic. You know, I hope that that's how it remains, but I imagine at some point, an intelligence would get to be so intelligent. But I think that, by and large, from what I know about humanity and the Earth and the way we treat it and the way we grow, like we are kind of like a virus.
    Where we go, things get decimated from the eyes of the Earth. And so when you zoom out really, really far and you say it's like, I mean, I know at the closeness of humanity, there's love and there's respect and there's all these beautiful things, but from the, you know, it's like Monet. You know, Monet, I think there's a thing where it's a classic Monet is what they would say. From far away, Monet looks beautiful. When you get closer, you see the artifacts, you see the imperfections. I'm not saying Monet is not a beautiful woman. I'm just saying that's the thing. And I think that might be the case here, where if, when you're AI, maybe you zoom so far out to humanity, you think, well, ultimately, this is like a death doom. They're gonna war, distract, fight, infight, civil war. I mean, we see that in today's society. Like, you turn on the news, it's all, it's not good, generally. Like, where's the good news channel, you know? Some of that's supply and demand as well, though. No, I feel ya, I feel ya. But that's my hope, I suppose. It's like, I hope that we can be symbiotic, and I think eventually, if a computer can become so intelligent or there's an intelligence that becomes so intelligent that it realizes, well, realistically, humanity is just bad for itself, and let me protect it. And that's the age-old thing. AI is really trying to protect humanity and the only result is to get rid of humanity to protect itself. There's this Robert Heinlein line that mankind rarely ever manages to dream up a god better than itself. Think of the Roman and the Greek gods and all the terrible stuff they did. But even the Old Testament biblical God, who's constantly smiting someone somewhere. And I think that's true. And I also think when we're talking about super AI as the future, we are basically talking about new gods. Ultimately, this is a religious discussion as much as anything else. Kind of, I mean, I don't think so, but I mean, it depends on what angle you come from.
    We invented it, you know? We invented the machines. We invented microprocessors. We invented the ability for a computer to compute. And so can you invent God? I don't believe, to truly be God, you can invent God. Well, lowercase G. Right, right. We're certainly trying to invent God, aren't we? Yeah, I think we have that drive for sure. Well, because the true nature of God has always existed outside of time. Well, because if you invent God, then you are God, right? Like that's the point you're making, but also it's that desire, I think, is innate in each and every one of us, to like elevate ourselves to that point. And so that's the whole, you know, Cast Away, Wilson, and the "look what I have created" when he created fire.

  21. SPEAKER_00

    Yeah, look what I have created!

  22. SPEAKER_01

    Right. And then he talks to himself in a volleyball. That's kind of in there in us. And I think if we are left to ourselves, then we end up doing such things. And so, yeah, I think that desire is certainly in there in humanity. Oh yeah, for sure. And so we find ourselves doing it. I don't know, I look at the current state of AI and I feel like we've pla... I don't know, maybe this'll be dumb here in six months, not even in the Arctic Code Vault. I feel like we've plateaued again to a certain extent. I feel like there was... I think that the progress that we've made in the world of machine learning has been leaps and then plateaus. And you kind of have a new technique, a new thing, a new idea that gets implemented. And then you have just kind of revolutions around that, not revolutions, evolutions around that idea for a while until a new thing. I mean, transformers is the current technique that has produced this new step function in AI's ability to do what it does. Go ahead. As an aside, I totally agree with that. I've actually written about... I have a Substack about AI and I just wrote about this recently, like five days ago, that we've plateaued. Because like this last winter, when things were dropping every week, I'd go to AI events in San Francisco and people were stumbling around, like someone just hit them on the head with a hammer, going, what is going on? This is insane. There's something new every week and it's blowing my mind. And then GPT-4 dropped and that sort of ended it. And we have pretty much plateaued since then. I think everyone in the field would agree, with some relief, honestly, because that was a really crazy time, January to March of this year. It absolutely was. And so we're in a better place than we were, but we don't know how long we're going to be in this particular plateau. And I've said this, I think probably not on this show, but on JS Party: as a personal user of the tools, I've hit now what I call the trough of disillusionment.
    That's not what I... I didn't coin that term, but I'm applying that term to this particular case, where I know the limits of the tooling. I use it for what it's good at. I avoid it for things that I know that it's bad at. And it's just become another tool for me and a useful tool, but not a life-changing tool for me personally. And so even in my own personal use, I've kind of hit that plateau where I'm like, okay, it fits into my workflows here. It doesn't fit into it here. And I'm more productive because of it in this case, especially in like, give me 40 synonyms for this word. When it comes to words, it's really good at words. And so I use it for those things. When it comes to Elixir, it's just okay at Elixir. And so I don't use it quite as much. When it comes to TypeScript, it's better at TypeScript than it is at Elixir. And so I'll use it for TypeScript. But it forgets everything. It doesn't know anything after September 2021. I've run into that a bunch. So like new libraries and new releases, it knows nothing about, which is annoying. It drives me crazy. Totally. If you know something that's a few years old, you can kind of ask it about it for the most part. And I think on the current plateau, we will get there with that kind of functionality, where like, it's gonna get better from here. It's gonna have a better memory. It's gonna have access to newer information because of the tooling and the processes and all the work going into greasing the skids of this current technology. But as far as another step function, like the next plateau, obviously I didn't know where this one was coming from. I don't know where that is. I don't know when it is. I'm not sure what it's gonna be like. But for now, it seems like, just as a microcosm, the era of full self-driving cars is just as far away as it was the last time we asked. Like we're still not there yet. They're better. There's more uses. But we still don't trust them to full self-drive.
    Well, they exist in San Francisco, right? Like I see them every day in San Francisco. Okay, so in limited domains, limited context, but like what you would consider the AGI of driving, which is: you drop me, a human, into pretty much any circumstance. I've been driving for 25 years. Okay, maybe certain machines I can't drive. But like, give me the sun in my eyes. Give me the ice. Give me the place I've never been. And like, I can figure it out, roughly speaking. Of course, we still have tons of crashes and stuff. But like that level of full self-driving, to me, it just feels so far away still. Yeah, that's fair. And like, I'm Canadian. I've been saying for a long time: drive in snow, then I'll be impressed. Right. Until then, I'm not buying it. It's a whole different game, isn't it? Yeah, exactly. Well, even for humans,

  23. SPEAKER_00

    it's

  24. SPEAKER_01

    challenging. Oh, it is challenging, yeah. And scary. I've been wrecked before. I just brought back some PTSD, man. In terms of AI, some really interesting uses I've done recently. I'd like to share one because I just didn't consider doing this. And this is where I think it's like leveled up humanity in subtle ways. So this weekend I was barbecuing because that's what you do on holiday weekends. At least here in the States, it was Labor Day weekend. And I had some family over and I had these gigantic Texas-sized potatoes. So I was going to make baked potatoes. And they weren't just like baked potatoes. They were smoked baked potatoes. And so I wanted to go true barbecue and do low and slow, 225, Super Smoke, for as long as it takes to get to 205. And I haven't had breakfast yet. I'm getting hungry. Sorry about that. You're killing us. We didn't have enough time though. So that was my plan. So I had to alter my plan. I'm like, you know what? Let me ask ChatGPT. Like I've got this amount of time and I've got this potato and I want to get to this temperature, rather than me getting flustered and, like, skipping it and going to a restaurant. I've got this much time, ChatGPT. I've got this, the Traeger whatever model, and I can get to this temperature. And I fed it this data essentially. It said, well, you know, if you put it at this temperature, you'll meet your criteria for getting this potato done in this amount of time. So rather than skip the meal, I used ChatGPT to sort of reverse-engineer, you know, thermodynamics essentially. Like how do I get the potato to 205, target internal temperature, and to what temperature do I have to cook it at for this amount of time? And it was like 275 or whatever. I forget what the number was, but it was like, it wasn't 225 where I wanted to be at, which is, you know, low and slow. And then the other use I did recently was I have a Denon 4400 home theater receiver in my media room at my house.
    And on Plex, I've got all these different films and they're all in different, and you got this one that's, you know, DTS-HD 5.1. You've got this other one that's TrueHD 7.1. And these are all like original sound formats in the sound studio. And your Denon receiver, any given home theater receiver, can process that sound into the speakers that you have available and make it sound good. It's a sound processing thing. So I'm like, well, ChatGPT, help me figure this out. Like if I've got these available settings in my Denon to translate this sound into my speakers, into my format, what's the best one to use given its original format? And I never thought to use it like that. I would just guess, read the manual or something like that. And which one does it map to? So I had ChatGPT make me this matrix. So I took all the films I have, all the original sound formats, and all the available ones in the 5.1 or the 7.1 settings. And so now I don't have to guess anymore, like which one to use. I just go to this grid that ChatGPT made me based on what I have available and what the film might be. And boom, I'm using the right sound processing on my Denon. Like those are things that make subtle advancements in today's, like, am I earning a million dollars because of that advancement? Heck no. But am I enjoying my home theater better and cooking potatoes faster, or to the degree I've got to within a certain amount of time? Yes, that's amazing. If that's not an AI making your life better, I don't know what is. That's what I'm talking about, right? That is good life right there. Oh, for sure. Yeah, the current plateau is much nicer than it was prior to being here. Like
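The potato question above is a nice little back-of-envelope problem. As a hedged sketch (not what ChatGPT actually computed), a toy Newton's-law-of-heating model can answer "what pit temperature hits a 205°F internal target within my time budget?" The rate constant `k` below is an assumed placeholder, not a measured value for potatoes:

```python
import math

def oven_temp_for_target(start_f, target_f, minutes, k=0.0065):
    """Toy lumped-capacitance model: internal temperature approaches the
    oven temperature exponentially, T(t) = oven - (oven - start) * e^(-k*t).
    Solving for the oven temperature that reaches target_f in `minutes`:
        oven = (target - start * e^(-k*t)) / (1 - e^(-k*t))
    k (per minute) is an assumed constant; real food varies a lot.
    """
    decay = math.exp(-k * minutes)
    return (target_f - start_f * decay) / (1 - decay)

# Less time on the smoker means a hotter pit to hit the same internal temp.
slow = oven_temp_for_target(70, 205, 240)  # 4 hours: closer to low-and-slow
fast = oven_temp_for_target(70, 205, 120)  # 2 hours: needs a hotter pit
```

The qualitative answer matches Adam's story: shrink the time budget and the required pit temperature climbs above 225.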

  25. SPEAKER_00

    I

  26. SPEAKER_01

    say, I'm in the trough of disillusionment because I've kind of, I've hit up against the seams or the edges of what it can and can't do. But the stuff that it can do, Adam, like you're describing, like life is a lot better because I can say, give me the FFmpeg command for this thing. And then I don't have to go read the man page. And I Google way less and I just ask it for things that it knows. But when you first start to use it, you don't know the boundaries of its abilities. And so you tend to be like, it can do everything and it's always right. And then you're like, wait a second, chill out, Jared. It can't do everything and it's not always right. It's wrong a lot. Or because it can't do this most imaginable thing you want it to do, therefore it's a failure. You know, like can you scour the internet, find me the stock to buy and make me a millionaire in six months? If it can't, it's that question, has it failed you? No, it has not. Cause that's not quite where it's at. That's the - The people that have failed you are on Twitter telling you that it can do that for you. Right, that's right, right. And they're also telling you, you know, like doom is upon us. And like, I do wonder about the people who are AI doom, we should be scared of GPT-4. Like, have you ever used GPT-4? I understand that it's changing and it's going to get better, but like, these are fundamental restrictions on what it can do. I totally agree with Jared. You bump up against the walls and you're like, it's going to get better, but it's not going to be like a transformative wizard anytime soon. Right. What's interesting to me is like, if we talk about that open letter published, now it was probably last year, maybe it was this spring. Oh yeah, that's really funny in retrospect, isn't it? Yeah, that letter is signed by really smart people. I mean, you talked, you mentioned Yudkowsky. And he actually, I found a Time article where he says that that letter didn't go far enough.
    And so he really is, as you said, kind of the high priest of this particular belief. He's like way on the edge of doom. What's the letter say? Give a summary. The letter says we should stop all AI research until we understand what the hell is going on. That's basically - Yeah, exactly. But it was signed by a lot of people, and people that are like, they're not Joe Schmoe. Oh yeah, yeah, totally very impressive names. I know a couple of them. Is there a blockchain to verify they signed it though? They didn't protest and say they didn't sign it. Okay. We've also found the boundaries of what blockchain can do for us. But so they did, I mean, you can argue individuals and go ask them, but there's just a lot of names, a lot of people who are leaders in AI things. This guy is no slouch, far more impressive than myself. But you wonder, I don't know, how all those smart people could land on a position that's so strong. And then you have a lot of other smart people that land on a position that's so opposite strong. It just is an interesting conundrum, I guess, maybe because none of us know what's gonna happen. What do you think, Jon? It's hard to dismiss that many names on a signed letter, but at the same time, we just kind of are dismissive of it because it seems like it's not right, at least for now. Well, I think it goes back to what you're saying with the plateau. I think the people who signed it did not think we were going to plateau. They thought things were just gonna keep accelerating from January through March, April. Just exponential progress. Yeah, exactly. June would be even crazier than March and September would be beyond crazy. As you say, we have pretty clearly seen that that is not the case. Nothing dramatic has changed. So I think it's a reasonable concern. I would have argued strongly against it at the time that there's always plateaus. You never get uninterrupted exponential growth, except maybe Moore's Law, and that's like a one-off.
I understand where they were coming from. It just, it looks like they made a bad call now. While we're here on this prediction, let me share one more leveling up today. It's really good. You're gonna love this one. Do either of you manage hard drives? Jared's gonna laugh about this one because he does not manage hard drives, just the one that's on his machine. Yeah, I really try not to. I've transcended in life. Right. I'm beyond. I'm beyond. Well, as you know, most modern hard drives, whether it's an SSD or a physical disk that spins, have software in them called SMART, and I forget what it stands for. It's an acronym. Oh yeah. But that report that you get back from the SMART data report is like, reading it as a human is just like, forget it, right? Like, what's important here? So I take that report and I just pipe it right into ChatGPT, and I tell it to tell me exactly what's happened with this hard drive. Should I replace this thing? I don't have to like, you know, look at that report whatsoever. And it's like, no, Adam, you're good to go. Keep going. Or Adam, listen, let me tell you something. In about six months, you're going to have to replace that hard drive, okay? That's a paraphrase of what ChatGPT says back to me, but that's another like, modern leveling up of like. For sure. I don't need to plateau. I don't need to worry about these people scared of the future. Like, what are they so afraid of? Do they think literally these machines are going to construct robots, are going to take over Boston Dynamics, and like, next thing you know, the company isn't run by the company anymore. It's run by some machine that, you know, manifests the corporation and pays the taxes and bills the things. Like, the humans are just subjects of this control? Yeah, probably not. Yes, is actually the short answer. That's what's going to happen? Yeah, yeah. 
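Adam's pipe-the-report-in trick works because the interesting signal in a `smartctl` dump is a handful of attributes. A minimal sketch of pre-filtering those before handing them to a model; the attribute names are real SMART attributes, but the choice of which ones to watch (and the sample report text) is our assumption:

```python
# Two lines in the style of `smartctl -A` output, invented for illustration.
SAMPLE_REPORT = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
"""

# Attributes that most often predict drive failure - our watch list, not smartctl's.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def risky_attributes(report: str) -> dict:
    """Extract the raw values of the watched attributes from a smartctl-style report."""
    found = {}
    for line in report.splitlines():
        parts = line.split()
        # smartctl attribute rows have 10 columns; the raw value is the last one.
        if len(parts) >= 10 and parts[1] in WATCH:
            found[parts[1]] = int(parts[9])
    return found

summary = risky_attributes(SAMPLE_REPORT)
prompt = f"Here are my drive's SMART attributes: {summary}. Should I replace this drive?"
print(prompt)
```

In practice you can also skip the filtering entirely, as Adam does, and paste the whole report; the pre-filter just keeps the question focused.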
But I also think that raises a really interesting point because like, all my non-tech friends are like, oh, ChatGPT, that just makes stuff up. It's useless. I don't even know what we're talking about. They don't realize that people in the industry like us, you know, sometimes we ask the questions and ask it to write things for us, you know, but we know that it's going to be hallucinating and not everything, you know, if you just ask it, is going to be correct. But what we use it for is what Adam was talking about. Like, it takes, you know, information of one kind and it transforms it into another kind, and it's phenomenally good at that. Like, it's quickly good. Yeah, and it writes like 30% of my code too. Like, Copilot writes 30% of my code nowadays. And I think non-tech people don't realize that it's a powerful transformation tool. They think it's just a Q&A tool, which is too bad because I think they'd get a lot of use out of it as a transformation tool. The way Simon Willison described it really resonated with me. He called it a calculator for words. And so it's going to be very good at taking words. You can put a lot of words into it and have it summarize those down, right? Compile them down into fewer words. And some, like Adam just said with the SMART diagnostics, tell me what this means or highlight the difficult parts or whatever. It can also take a small amount of words and expand them into many more words. And like those two use cases, and there's many permutations of those, are hugely valuable beyond just Q&A. Simon's great. I know Simon, he's super, super good at describing stuff. Yeah, we've had him on the show multiple times because he's also, he's very excited about things, but he was also very scared of things and he's also very practical. So it's like, you get the excitement, you also have a little bit of trepidation, so it's balanced, it's not pure utopia. 
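The "calculator for words" idea reduces to two directions of transformation: compress many words down, or expand few words out. A sketch as two prompt templates; the wording of these prompts is our assumption, not Simon Willison's:

```python
def compress(text: str, max_sentences: int = 3) -> str:
    """Many words in, fewer words out - the summarization direction."""
    return f"Summarize the following in at most {max_sentences} sentences:\n\n{text}"

def expand(outline: str) -> str:
    """Few words in, many words out - the drafting direction."""
    return f"Expand this outline into full prose, keeping the points in order:\n\n{outline}"

print(compress("(paste the SMART report or any long text here)", max_sentences=2))
```

Most of the "permutations" mentioned above (translate, reformat, highlight the difficult parts) are just these two templates with different instructions in front of the text.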
And then he also is like, and he's actually, what I like about him the most is he shares what he's doing with it today right now in his life and how he's using it to be more productive. And I think that's ultimately valuable for all of us, kind of like these tips that Adam is sharing. It's like a version of bionics, but you're not actually embedding anything into your body. It's just your human form, I don't know, like typing into a machine. Maybe at one point we can actually think our thoughts through something and then think ChatGPT instead of typing. I use the app on my iPhone a lot and I just talk to ChatGPT. It's so strange when you volley back and forth a few times, but you can speak into your phone and it does a good job of translating your words into text. And so for long conversations that are deep like that, I won't type them out because it's just too tiring. I also wish it kind of had autocorrect or sort of predictive text whenever you're typing, because it just doesn't. Like there's certain things I'm like, you can totally just complete the sentence for me, but it doesn't, anyways. Well, OpenAI, they're too busy printing money at this point, so. Gosh, even favoriting, like for example, the one I mentioned about the Denon and the sound fields and stuff like that, sound processing, like that's a chat I go back to and reference, but I've got to scroll, scroll, scroll and find it, you know, and it's just too challenging. So now I've just got a link to it. Bookmark it, yeah. Yeah, bookmarked it. But like just give me a favorite feature, right? Just let me, you know, kind of go back to these conversations, keep the context and kind of keep them going over time, because there's context I don't want to rebuild for the thing again. And in some cases it kind of forgets. Like I know we've got like 30 back and forths here, but I'm new, I'm new right now. I have no context of this past conversation and it's kind of frustrating. 
I think OpenAI doesn't really want to be a consumer services company, and they'd be delighted to just train GPT-5 and GPT-6. Maybe so. Yeah. Yeah. Be the API to those things. Yeah. Seems true, the way that they're building things. I assume that they're more focused on that side than they are about improving the consumer product that is OpenAI chat. All right, so we're not doomers. We're not particularly not, not doomers either. There's fear, but I'm not afraid. There's trepidation. I like that word used here, trepidation. Yeah, I thought that was a good word to describe it. I'm not shaking in my boots about it. I'm actually quite hopeful that something will come from this that's good and better for humanity. How can we, like you were saying, Jon, some people just won't touch it because it doesn't, it's not accurate enough for them or whatever. Like don't dismiss it, leverage it, but don't lean on it as if it's your only source of information. Like you've got to be wise and you've got to direct it. Like 30% of your code is being written by it, but those are still your ideas. Like, can you help me? You're just saving yourself time. You could have gone and probably written that code just as well, if potentially not better, but why would you spend the three hours doing that when ChatGPT can get you in, it's the ultimate 10xer. It gets you there in 30 minutes versus three hours. Yeah, I really liked the word copilot. I thought it was a brilliant one when they came up with it. Like you're landing the plane yourself. Okay, that's fine. But when you're flying, the copilot can take care of most of the work, right? Right. Yeah. And that's true. Well, that's why their next big innovation is going to be called GitHub Pilot, because then you're just out of the loop. Who needs you anymore? We'll take it from here, guys. Thank you. Yeah, we shut down all repo access. We don't do that anymore. Exactly. 
Well, the real question, Jon, is how much of Exadelic is human written and how much of it is not? So when I started writing it, GPT-3 wasn't even out yet. So you can be very confident. Just maybe one of the last great human written books at this point, you know? Yeah, I mean, it is kind of fun. Like people are worried about, you know, poisoning AI models with AI generated data, because there's so much of it already out there on the internet. Yes. Right. Yeah, it was written long enough ago that you can be very certain that this one is entirely written by my weird subconscious rather than a transformer architecture. That's even something too, where people that would have never written a book are able to get out the outline. It's like an editor almost. Like there's, I thought about this more recently. I was like, you know, in a lot of cases, a real human editor to an author is sometimes all the extra beauty in the words and the forming and the sentences and the structure. I mean, there's a lot of authors who are good at that, but maybe they just have the good idea but don't know how to manifest that into a well articulated, fun to read sentence that helps your imagination bloom with pictures, which is what a lot of books do. You know, I think about that, like even today, people are writing books that they would have never written before. I think that's a positive sum for humanity, right? Like, let me get out the outline and maybe ChatGPT or whatever this GPT world we live in will become is just the, get over the hurdle, you know, the unblocker, the writer's block remover, essentially. Let me get you moving, you know? Let me help you take that outline into something that's, maybe you don't even like what I've given you, but it's helped you think it's possible, because sometimes humanity is blocked behind possibility, rather than like, oh, if I don't think I can achieve it, then I just won't do it, kind of thing. 
It is great just giving you lists of ideas. Like, I don't know what to call this thing. Give me 20 alternative, wacky names to call it. Yeah. These are actually good names. That's my majority use is like, give me 40 alternate phrases for this phrase, and I'll kind of scan them, and usually there's like a phrase that I can't remember I once knew. Is that how you've been titling these shows lately, Jared? You've been using ChatGPT? Because like the last several times we've had to title shows, I'll admit this, my ideas have been horrible. The most recent one that we put out for the Changelog, well, the interview one, what was it, Jared? Back to the terminal of the future? Like that was amazing. And I was like, forget you, Jared, you know? Like what do you - That was 100% human crafted, I'll let you know. It was Jared generated? It was Jared generated text. Except this is up to teach. Thank you for the compliment. Well, it was a good title. What's funny is a lot of times I'll discount it because I'll say, give me 40 of these. And I'll be like, these are all terrible. I'm like, okay, I'll use this one. You ever do that? You're like, ah, these are awful. That one's actually not that bad. I'll grab that. I will take the least terrible one. So it's less work than coming up with it myself. Yeah, exactly. And it's like, well, better than what I came up with. I do think like by the nature of what they're trained on, they're going to take like the median, you know, quality level, right? So I don't think ChatGPT is ever going to write a particularly good book if it's trained on just all the books that are out there. It'll write an okay one, but you're not going to like push the envelope. You're not going to create new groundbreaking, new art with the current architectures. They're literally designed to take the most common approach and follow that. Right, which is like mediocrity ingrained, right? Yeah, but it also like it lowers the bar. 
It means, you know, the base level that everyone can get to is actually reasonably good. Yeah, Damien Riehl talked about this a little bit on Practical AI. He's a lawyer slash programmer who's done a lot of work, like allthemusic.info and stuff, with computer generated music specifically and the law around computer generated music. And he was talking about the smoothness of AI generated music and how humans don't create like AI creates. AI creates with smooth trends, smooth data. And I think by that, you're kind of referring to like mediocre normalness, like the normality of the data, of the produced sounds, for instance. And humans create in this kind of beautiful, abnormal, jagged way with music. And so they're using those designs or those ways to differentiate between human and AI generated music, for instance. I think it's probably very similar with words. Where like you're going to have this thing that's like taking all of human words and like crunching them and then spitting out this next best word, which is often the most guessable word for the circumstance, right? Like it's by definition the next best, but that's not really the way that humans think or write. Like we come up with something entirely weird and off kilter and askew. So there's something there. Yeah, I don't know if you ever listened to Google's MusicLM thing, which on the one hand sounds really good and even like includes vocals sometimes. And it's just like pick a genre of music, pick a length, pick instruments and it'll create it on the spot. It always sounds like great background music. It's never something you'd really want to listen to in the foreground. Yeah, this would be awesome for an elevator. Yeah, exactly. Not for my wedding or for a rock concert. Precisely. Well, there's something that's just magical about beautiful imperfection. I think that's what you're describing, Jared. Like humans are, we're not predictable in a lot of cases. 
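The "most guessable next word" point is, mechanically, greedy decoding. A toy sketch contrasting always-take-the-top-word with sampling, which is where the occasional "jagged" choice comes from; the vocabulary and probabilities are invented for illustration:

```python
import random

# Toy next-word distribution after some prompt - the numbers are made up.
probs = {"good": 0.5, "loud": 0.3, "incandescent": 0.2}

def greedy_next(probs: dict) -> str:
    """Always take the single most probable word: smooth, predictable output."""
    return max(probs, key=probs.get)

def sampled_next(probs: dict, rng: random.Random) -> str:
    """Sample in proportion to probability: sometimes picks the rarer, weirder word."""
    words = list(probs)
    return rng.choices(words, weights=[probs[w] for w in words])[0]

print(greedy_next(probs))  # always "good"
```

Real systems sit between these two extremes (temperature, top-p, and so on), but the bias toward the common word is exactly the smoothness being described.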
Like there's some predictability to humanity, but in creativity, I think there's not a lot of predictableness, if that's a word. You know, predictability. How would you describe that? Predictability. I liked what Damien Riehl, I think he did describe it as jagged, which I thought was an interesting - That's a good thing. I like that. Way to describe the way humans write and create, is jagged. Yeah, it's like, well, the jaggedness might be the pausing. You might create, evaluate, repeat, you know? And that might be the jagged. It's like there's a pause in the evaluate scenario of what you create. We call that writer's block. I forget which rockstar said it. If you're gonna hit the wrong note, do it loud. Do it loud. That's right. See, an AI would never say that because an AI doesn't have that level of nudge, you know? Yeah. That's a rockstar. I love it. I love it. Jon, do you have any more books in you? Maybe. I mean, this is my first book in a while and honestly it just invaded my mind and I had to write it to get it out of my mind. So that's usually, that's my creative process. So I have no idea when I'm gonna be invaded by another one. So I think the answer is yes, but I have no idea when.

  27. SPEAKER_00

    Well,

  28. SPEAKER_01

    you just were invaded, Jon. It's a DNS config. Come on, we gave you the best ending ever. We can collaborate. I can help you with outlines and you can write the stuff. Just give Adam the co-author credit just for that last plot piece and we'll be happy. Oh, and have the protagonist be localhost. I would love to eventually write a book. I don't have the motivation yet. My mind hasn't been invaded by an idea to the degree where I'm like, I've gotta get it out. But I do aspire at one point in my life to write a book, and it would probably be in this world that's not really catered to very much. And the idea that there's a total addressable market of 100 million developers globally, that's interesting. Now, do they all speak English fluently? I don't know if the 100 million is all in my, I don't speak other languages, so I have to write in my native language. I suppose I can work with somebody to translate, but that's even harder too. I mean, ChatGPT is really good at translation. Yeah, that's true. Well, yeah, I could be like, hey, translate this book. How many people have to read your book for you to consider it a success? Honest question. For you, Adam, and for you, Jon. Because you're saying 100 million might, maybe it's not all of them. Do they have to all read it? Well, it was less about how many is enough and more thinking about what's the total addressable market. Like is the total addressable market large enough to consider going after? I think 100 million is plenty. So yeah, I think that's plenty. Okay. I would be happy if a thousand people read it, maybe even 20,000 people. That'd be fine with me. Okay. Jon, what do you, do you think like that? Do you think like, how many people do I want to read this thing? Because you put a lot of work into it. Yeah, I mean, what you're supposed to say is, oh, I don't care how many people read it as long as people are moved by it, but that's not true. Everyone knows that's not true. No, that's a line, you're feeding me a line. 
Exactly. That's what ChatGPT would tell us if you asked it to generate a response. Honestly, I'd be, like my previous books have sold, you know, tens of thousands of copies total, which is not a huge amount, but it's not trivial. So I'd be happy with 10,000. I'd be even happier, you know, with 10 times that much. Obviously everyone wants to have a huge hit, blah, blah, blah. But you know, if thousands and thousands of people have read your book and thought about it, been moved by it, then that's a pretty good outcome. The commitment to the craft, not the writing craft, but the craft of taking the idea from the brain of the thinker and putting it into words in a form that is cohesive and readable by another human being. Like somebody that committed to that, I'm just not sure I could do it more than once. And to be really great at it, to get like 10,000, 20,000, a following like Dennis E. Taylor, for example, like he's got quite a following. The Bobiverse has done quite well and he's got Outland and Earthside and other spinoffs of other stories he's got. There's a short story that he's got called Feedback. I think it's probably his masterpiece that he barely claims. Like I think that's probably his best book, honestly. To get to that level, it takes such commitment. I'm just not sure I have it. George Orwell once wrote: "Writing a book is a horrible, exhausting struggle, like a long bout with some painful illness. One would never undertake such a thing if one were not driven on by some demon whom one can neither resist nor understand."

  29. SPEAKER_00

    Now,

  30. SPEAKER_01

    Orwell was a downer famously. He was, yeah, 1984, right? I don't really believe that entirely, but like there is an aspect of that where you're like, oh, man, I'm doing this big thing and I'm wrestling with it and I don't even know if people are gonna like it for years. What am I doing? You do go through that. Right. Yeah, you have to be driven by something. I mean, I think to even consider the exercise, I would have to think, is there a market for it? So that's why I began with the TAM. Like, is there a market for the idea? Because it's like, is it worth sharing? I'm not so driven by the idea that I have to share it. But to get there, I think it takes some discipline, really, some discipline to get to 50 pages in a week, or whatever in a day. Authors tend to think in weeks versus days because it's just too challenging to accomplish a goal on a daily basis, because kids get sick, you get sick, life happens, you gotta go to the doctor, whatever, life gets in the way. You need gas in your car, that takes all day. Just kidding, that doesn't happen. But something disrupts your day where you can't get those pages in. So you didn't fail. You gotta think in weeks, right? I just don't know if I can do that yet. See, I like to do that. Whenever I'm not sure if I can do something, I like to put the word yet in parentheses at the end. If not, comma, space, yet, with an exclamation point. Because I am determined to do something, but am I ready to do that right now? Maybe not, but I can't do that yet. So at some point, I would commit myself to do so, or not. It's also true of a big coding side project, right? Like if you wanna build some significant open source library, that's a commitment measured in weeks, months, or years too. Yeah, for sure. Well, the reason I asked about your next book was less about what we can look forward to, and more to get back to Jared's question. 
Since you didn't have artificial intelligence assist you in the creation of this current book we're talking about, if you would use it, how would you use it to help you and assist you? So I think people should use it. I also think I would not, which is weird. Not for any moral, ethical reasons, but I'm obviously a very left-brained, orderly, intellectual guy. Like I wrote code, C-level at companies, blah, blah, blah. But when I'm writing, I'm totally not that. When I'm writing, I'm feeling like, well, I'm gonna jump off the cliff and hope my subconscious catches me on the way down. I have no idea where I'm going or what I'm doing. So for my particular whack-a-doodle, I don't know, making-it-up-as-I-go process, I don't think ChatGPT would help. For most authors, it would help and should be used, but for my weirdness, I don't know. I'm gonna make some homework for myself, potentially for the show notes, Jared. I'm gonna ask ChatGPT, if I were to write a book about DNS as the villain and the antagonist to the story or the ending plot line, like how would I go about it? Like give me 50 200-word summaries of the book, like you might see on the back of a book. Summarize the book in 200 to 500 words, give me 40 versions of that, and see if there's anything interesting, because I'm kind of curious, could DNS be a true villain? Although actually, that's one way I might use it, like after I've written the first draft. Have it go through the first draft and say, so like what needs work? How would you summarize this? Analysis of it after, it'd be good for that, I think. Analysis is great, because there's lots of, I mean, I've said this on podcasts before, because Jared and I podcast a lot together, but there's times when I want to ask, Jared's my business partner, and so there's questions I ask him, given the role he serves in our enterprise. Can you help me with this? 
But he's busy, he doesn't need to answer my dumb questions, and I've got this thing here that's totally willing, and potentially with more accuracy, and potentially more patience. And so I think that's to be leveraged. I think to not leverage that is silly, right? Not a very wise move. To not leverage such a willing participant in your adventures, what a shame. Yeah, it's like everyone has an assistant now, and if you're not giving your assistant jobs, then that's kind of silly, because you have an assistant. Right. Well, let's look forward to Exadelic 2. Ooh, there's another good use is continuity, right? So you have it read your first book, and then you tell it, help me make sure my sequel isn't contradicting something in my first book. Yeah, yeah, if I write something contradictory, turn it red. Exactly, because continuity, as you said, Adam, with this Bobiverse thing, this septology or whatever it is, I mean, that has to just get harder and harder and harder the more sequels you write. Well, it's turned into a septology. It was originally a trilogy, but yes. Right, so a sequel, is it even, is it possible, is it feasible, is Exadelic 2 a potential in your life, or is that just, we're just making stuff up? It's possible. I put sequel hooks into everything I write just as a reflex. I have no current plans to write a sequel. I have only the vaguest idea of what it might entail, but there are sequel hooks, so it could happen. Okay, well, no pressure, enjoy it, man, because you shipped a book to the world today and not very many people have done that, and we're happy to be able to talk to you on your shipping day. It's cool. I'm glad this worked out on publication day. It's great, I'm pretty pumped. Yeah, well. Yeah. Excited for books, excited for you, and we'll read them. All right. One more question as we close out, which will get Adam to read your book in a split second, is the audio version. Is there a plan? Will it be read? Who will read it? 
What's gonna happen? Because that's something that's highly desirable is audio version. I agree, there is a plan. Details are not yet forthcoming and sort of coming down the pipeline, but I'm not totally sure when. All right. Yeah. Fair enough. In the meantime, just take the text, pipe it into some sort of AI and have it read it to

  31. SPEAKER_00

    you. Yeah, I think OpenAI has a Whisper model that'll do that for you.

  32. SPEAKER_01

    There you go. Did we even read the sentence on the front though, for the audience? Cause I mean, we've talked about the book. I mean, we haven't gone into it. I'm not suggesting we do so, but like have we even read the hook? Let me read the hook for everybody, so when you walk away from this, you have a reason to go and check this book out. Of course, Exadelic is the title. And what it says on the front, it says: the world's most powerful AI has awakened to sentience and decided you're its worst enemy. Dun dun dun. Dun dun dun.

  33. SPEAKER_00

    Dun dun dun.

  34. SPEAKER_01

    There you go. Go read that book. There you go. All right. That's all for this time. Thanks for hanging with us, everybody. Thank you very much. That was fun. Bye, Jon. Bye, friends. So long. Stick around, Plus Plus people. Adam took the opportunity to pitch his plausible science fiction novel to Jon and ask him, do you think the book has got a possibility? That's coming up right after this. If you're one of those awesome folks who support our work with your hard earned cash, sign up today at changelog.com/plusplus. It's better. Thanks again to our partners, fastly.com, fly.io, and typesense.org. And to our superintelligent beat-producing robot, Breakmaster Cylinder. Next week on the Changelog: News on Monday, an interview with Haroon Meer about infosec honeypots and canary tokens on Wednesday, and Changelog & Friends with our good friend Nick Nisi on Friday. Have a great weekend. Tell your friends about the Changelog if you dig it, and let's talk again real soon.