Changelog & Friends — Episode 106
The state of homelab tech (2026)
Techno Tim joins Adam to dive deep into the state of homelab'ing in 2026. Hardware is scarce and expensive due to the AI gold rush, but software has never been better.
Speakers: Adam Stacoviak, Techno Tim
Transcript (239 segments)
Friends, it's Changelog & Friends, the weekly talk show about, well, you know what? Whatever you want to talk about. And this week it is the State of Homelab 2026. A massive thank you to our friends and partners at Fly.io. Launch your sprites, launch your apps, launch your Fly Machines, launch your everything at fly.io. Like us. Okay, let's talk. Well friends, I'm here again with a good friend of mine, Kyle Galbraith, co-founder and CEO of depot.dev. Slow builds suck, Depot knows it. Kyle, tell me, how do you go about making builds faster? What's the secret?
When it comes to optimizing build times, to driving build times to zero, you really have to take a step back and think about the core components that make up a build. You have your CPUs, you have your networks, you have your disks; all of that comes into play when you're talking about reducing build time. And so some of the things that we do at Depot: we're always running on the latest generation Arm CPUs and AMD CPUs from Amazon. Those in general are anywhere between 30 and 40% faster than GitHub's own hosted runners. And then we do a lot of cache tricks. Way back in the early days when we first started Depot, we focused on container image builds, but now we're doing the same types of cache tricks inside of GitHub Actions, where we essentially multiplex uploads and downloads of the GitHub Actions cache inside of our runners, so that we're going directly to blob storage with as high of throughput as humanly possible. We do other things inside of a GitHub Actions runner, like we cordon off portions of memory to act as disk, so that any kind of integration tests that you're doing inside of CI that do a lot of operations to disk, think testing database migrations in CI, run in memory instead. By using RAM disks inside of the runner, it's not going to a physical drive, it's going to memory. And that's orders of magnitude faster. The other part of build performance is the stuff that's not the tech side of it; it's the observability side. You can't actually make a build faster if you don't know where it should be faster. And we look for patterns and commonalities across customers, and that's what drives our product roadmap: this is the next thing we'll start optimizing for.
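The RAM-disk trick Kyle describes is easy to try yourself. Here's a minimal Python sketch, assuming a Linux runner where `/dev/shm` is a RAM-backed tmpfs; the path check and fallback are illustrative, not Depot's actual implementation:

```python
import os
import tempfile

def scratch_dir() -> str:
    """Return a RAM-backed directory when one exists, else the normal temp dir.

    On most Linux systems /dev/shm is a tmpfs mount, so files written there
    live in memory rather than on a physical drive, which is the same idea
    behind using RAM disks for disk-heavy integration tests in CI.
    """
    ramdisk = "/dev/shm"
    if os.path.isdir(ramdisk) and os.access(ramdisk, os.W_OK):
        return ramdisk
    return tempfile.gettempdir()

def test_db_path(name: str = "migrations-test.db") -> str:
    """Place a throwaway test database (e.g. for migration tests) on the fast path."""
    return os.path.join(scratch_dir(), name)
```

Pointing a test database's files at `scratch_dir()` keeps the I/O in memory, which is where the "orders of magnitude faster" claim comes from.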
Okay, so when you build with Depot, you're getting this essential goodness of the relentless pursuit of very, very fast builds, near-zero build times. And that's cool. Kyle and his team are relentless in this pursuit. You should use them: depot.dev. Free to start, check it out. A one-liner change in your GitHub Actions: depot.dev. Well, friends, we're back. It is a new year, not a new Tim, same Tim. Got your hat on, Tim. I heard you got some strife on the internet recently.
Yeah.
About mid-year back, you took your hat off and you started to, like, just have your non-hat Tim going, you know, and there was a slight uproar. What happened there?
Yeah, people freak out when I don't have my hat on. And in my last video, no glasses, no hat. I broke my glasses and I didn't have a backup pair. And then I thought, yeah, I'll go no hat too. And so like, you know, you got this crazy person on Tim's channel that kind of looks like him and sounds like him, but doesn't really look like him. So yeah, people get really confused. Every now and then I do it on purpose though to try to like throw off the algorithm for YouTube because I don't know, they're like, maybe we target people who like glasses and now we'll target people who don't like glasses, you know, so the backward hat, you know, I've done that forever, but I've noticed some people are like, take your hat off, you're inside, you know? And so sometimes I switch it up because maybe the algorithm will now target those people that think that, you know, so you never know, you know, those games you play with the algorithm.
Well, of course, man, you got to A-B test and A-B test. You A-B test, you know, that's what I've been doing.
Yeah, me too, yeah, A-B test, yeah. And then a C, then you take your C as your B and your next guy.
It's a constant game, you know this, it's a constant game.
Yeah, fighting for ears and eyeballs, you know how it goes.
Ears and eyeballs, well, we love these people out here listening to our pods and our content. We just go on this journey because we're just nerds. We can't help it, right? We just have to pursue the, you know, the inevitable, I suppose, and somehow put ourselves to that pain slash pleasure and share it.
That's right.
And that is what you call what we do. Man, did you think we would be where we're at right now last year, Tim?
No, no, not at all. A lot's changed. Even, I feel like for home lab, it's changed even more. But yeah, a lot has changed.
What do you think has changed? I mean, it's obvious, but I want to hear in your own words. What do you think has changed for home lab in particular?
If I could sum everything up in one word, it'd be availability, just availability. And that goes a few ways. You know, availability in parts. You know, it's very difficult right now to get your hands on server parts, whether that be used server gear, motherboards, CPUs. It's very hard to get your hands on those, because I suspect that most of these companies have contracts with really big companies. And so your onesie-twosie orders, or even your mass orders from stuff like Newegg, aren't as big as, say, Microsoft or something like that. It's really hard to get your hands on server grade hardware. And that also has been true for the second hand market too, when it comes to CPUs, motherboards, and everything like that. I mean, home labbers have always been used to paying the home lab tax. Well, let me take this back. We used to not pay the home lab tax. We used to get the used server gear free, take-it-away, really cheap type of hardware. Then people started realizing, hey, there's home labbers out there, and I can make money off this. Or it still has some value, I guess I should put it that way. It has a lot more value than it used to, because people are using this stuff in their home for home labs, which was awesome. The second hand market was great. And now we're at a point where not only do we have the home lab tax, because people realized that they could start making money off their second hand gear, now you can't even find it. And so that's one big change I see, is just availability of server parts. And the same goes for RAM. RAM, as you know, prices are through the roof, if you can even find it. Prices are through the roof. You're paying, I don't know, double, triple. I'm scared to even look anymore at how much RAM costs. Hard drives, too. Hard drives are up. I was looking earlier today, and I paid $159 a year ago for a 14 terabyte hard drive, refurbished. That same one now on eBay, if you could even find it, is almost $100 more.
Really?
Yeah, anywhere from $70 to $100 more. And storage has just gone through the roof, too. It's easy to find storage, but you're paying a lot more than you were before.
What about CPUs? CPUs are the same?
It is the same, if you can find them. Yeah, second hand CPUs, they're expensive. I mean, most people aren't... I mean, let me take this back. If you're building a server with server grade hardware, a lot of people are buying them used. We can't afford new ones. Or don't want to afford, I should say, in some of the cases. But even those second hand ones are through the roof, because people are still getting a lot more life out of them. Or they can't buy the ones that they want to upgrade to, the latest whatever EPYC CPU. Most of those are allocated to some big customer, so even the mid-sized customers can't upgrade, because they can't get those. So they're not releasing that gear to then trickle down to the rest of us. So it's tough for CPUs, too. DDR5, CPUs, hard drives, motherboards. I feel like the only thing that's really cheap right now are cases and enclosures, because no one can buy them. I'm sorry, no one can build them. No one can build them. So I think enclosures are really cheap right now, because they're like, please build something, but we can't.
So it's tough. Who was it? Gamers Nexus was talking about cases recently, I want to say about four months back, saying that cases were actually up because of tariffs.
Yeah, so then there's that. Then there's tariffs on everything, which is a percentage increase across the board for everything. There's that for sure. But a lot of this has to do with just the AI race that's going on: build, build, build. And DRAM prices are through the roof, because there's a shortage of hard drives. There's a shortage of everything, because everyone's building data centers right now. So yeah, there's definitely tariffs across the board on everything, but this is beyond that. This is beyond hard drive prices. RAM prices, GPU prices, through the roof. GPUs are another one. You can't even get your hands on them. So if you bought one four years ago, you're pretty lucky. If you bought a 30-whatever... I still have my 3090. It's actually back there. It's the one I'm testing with. Right at the beginning of COVID. Right at the beginning of COVID, I thought, how am I going to? I don't want to pay $1,300 or $1,100 for a GPU. I thought, am I really going to pay this price? I'm so glad I did. I've had it for four years, and I paid retail for it. And so if you think the 4090s, 5090s, they are more expensive than that too. That's funny.
I bought my 3090 last year for $300 less than your retail price.
Wow. Yeah. Yeah, because that's when probably the 5090s were.
Just coming, just about to come out.
That's right. 4090s were out, and yeah, 5090s were being announced, and everyone was dumping them. But now it's like you can't even get your hands on them. Yeah, that's a good deal. That's a good deal.
Yeah. It's a good GPU. I may have some fun things happening on that GPU as we speak right now. Training a RAG system. Just doing some cool stuff with RAG right now.
Cool, man. I was just looking up that stuff too.
Yeah, yeah, yeah. So I thought you would say that. Yeah, I'm glad you mentioned the hardware shortage, because that's key. But I thought your availability remark would have been not unavailability, but abundance of availability in terms of capability. This augmented homelabber that can now tend their homegrown vegetables, a.k.a. software. I thought of them tending this home lab garden, so to speak.
Oh, yeah. So that's my second piece, is the explosion. So then my second piece to availability, that was the other side of the coin. And I'm not just saying that, it's in the notes right here, is the explosion of self-hosted software that we can now run at home. It's incredible. And so that's my prediction for this year. This year is the year of self-hosted software. We can't get hardware. We've got to make do with what we have. And so this is the year for software. And it's the year for software for many reasons. First of all, we have way more capabilities at home. I've been running Ollama at home, Open WebUI, to play with models and do chat. I've even done some coding assistance with some agents. That stuff's fun. Models aren't as good as the open ones. I'm sorry. The open models aren't as good as the closed models, the ones you pay for, obviously, or you wouldn't be paying for them. But they're good enough to do a lot of tasks, especially just, like you were mentioning, RAG and stuff like that. I've been playing with Paperless, Paperless-ngx. What is that? Paperless-ngx is a self-hosted document scanning solution. So if you think about it, you have lots of documents that you want to store. You want to store them on your own hardware. They might be private documents, whether they're, I don't know, financial statements or subpoenas or a marriage license, you name it. Whatever you have that is private that you keep. Just think of what you keep in your documents. Think about keeping that on your own servers and then being able to scan those documents and then getting metadata and data about those documents. That's what Paperless does in a nutshell. Well, now all of these, not really sidecar, but these kind of sidecar solutions are starting to pop up where you can feed them to a model and get better data out of those. So right now, Paperless-ngx is super cool. That's actually my next video.
So you're getting a sneak peek, and that's why I'm so, like, gung ho about it right now. Paperless uses traditional OCR, optical character recognition. And for the most part, it's okay, right? It's okay. It's way better. It's faster than humans, but it is nowhere near the accuracy of a model that's been trained for vision. And so this is kind of the next evolution in OCR. It's: don't use optical character recognition, use a model that's been vision-trained, or multimodal is what they're saying now, where you can feed it text or feed it an image and you get text out. And so I've been playing with these things called Paperless-GPT and Paperless-AI, which hook into Paperless. And now I can scan documents, scanned images, and get high fidelity data out of those images. So for example, I scanned a serial number on one of my devices, you know, took a picture, scanned a serial number. OCR did terrible. It got like "made in Japan," right? That's about it. Serial number wrong, everything was wrong. You feed it to an LLM that, you know, has been trained with vision, like a super small one from Ollama, and everything works perfectly. It was even able to figure out that the FCC trademark, their little FCC logo, actually said FCC, even though it was an F with circular C's inside of it. So it's really cool, really cool solution, self-hosted solution. And so, you know, that's where I'm thinking that this year is the year for software. Not only because people are making all of these, you know, awesome solutions to self-host, it's that people have a lot of assistance now to get those ideas, to make them come to fruition. You know, they have agents to help them. I was just talking to a guy the other day who just built the piece of software he always dreamed of, but never had the ability to do it because he's not, you know, he's not a developer. He's not a developer. And say what you want about, you know, that code and whatever, I'm a developer too. Yeah, yeah. Good.
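For a sense of how simple the vision-OCR flow is, here's a hedged Python sketch against Ollama's local REST API, whose `/api/generate` endpoint accepts base64-encoded images alongside the prompt. The model name `llama3.2-vision` is an assumption; substitute whichever vision model you've actually pulled:

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_ocr_request(image_path: str, model: str = "llama3.2-vision") -> bytes:
    """Build the JSON body for an Ollama vision request.

    The image is sent base64-encoded in the "images" field next to the prompt.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({
        "model": model,
        "prompt": "Transcribe all text in this image exactly as written.",
        "images": [image_b64],
        "stream": False,
    }).encode("utf-8")

def ocr_with_vision_model(image_path: str) -> str:
    """POST the image to a locally running Ollama and return the model's text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_ocr_request(image_path),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Point `ocr_with_vision_model("serial-plate.jpg")` at a photo of a serial placard and you get back the model's transcription, the same kind of call Paperless-GPT makes under the hood.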
Because I think that's-
Does it work? Does it solve a problem? That's good code. Does it solve my problem? Did it exist before this moment? That's good code.
That's right. And so people who are driven by results, or who want, you know... code is just a means to an end for a lot of people. You talk to product developers, you talk to people who are in IT but don't do code, but have these ideas. Those are the people right now who are creating these awesome solutions and finally being able to get those ideas out of their head. So anyways, long story short, I just talked to a guy who built this whole solution on top of, like, UniFi's API. It's actually Chris from Crosstalk Solutions. He built this whole solution on top of UniFi's API. And it's like the solution he's wanted for years, but never took the time to do it or paid a developer to do it. So anyways, we're seeing lots of that now. And I'm hopeful, and it's super exhilarating to see these ideas coming out, because when you have people who are generally software developers, I don't mean to bucket people up, but a lot of times they're deep focused on some super technical solution, looks perfect, runs perfect, structured in a certain way. But now I'm seeing these solutions by people who think way outside of the box, and they're just trying to solve a problem. And I get to see how they solve that problem. And it's really cool to see, because I might not have approached the problem the same way that they did. And so I get to see, like, a new perspective on coding. So I don't know, that's the other piece of availability. So I think this year is the year for self-hosted software, for open source software, or for solutions in general to be able to run on your home lab, because a lot of people are just gonna be okay with the hardware that they've had for a couple of years. And people like me who love to self-host stuff are always looking for, like, the next container app to run on their server.
Yeah, I feel you on that front there, man. I've been scratching little itches is how I'd say it. Question on Paperless: can it handle books? I suppose this is, does the OCR or the vision version of that... books are cool too. Because one thing I'm doing is I'm trying to figure out how to get knowledge out of certain books I only own in paper that I can't even get in digital. There's a lot of books that are just not like that. And I'm thinking, great, I have this book, it's on my bookshelf over there. I've had it for years, I paid the author, I've paid the publisher, it's my copy. But I'm not gonna go pick it up because that's the old way. Not that I don't read books, I still read, okay? I still read. But my preference is... I could read good. I was trying to build a little center for people who can read good, but I'm just not there yet. I'm not Zoolander yet.
School for ants.
That's right, that was a good clip, man, that was a good clip. So can you do books with this paperless world?
Yeah, so you can, because you can have multi-page documents. So you could, I mean, you would have just basically had this whatever, 300 page document. And then you could feed that to the LLM, which has vision, and yeah, it should be able to parse it all out. Like, Paperless by itself should do pretty good on books as long as you get a good scan on it. But I'd still feed it to the LLM anyways to use its vision, because you're gonna go from whatever, 80% accuracy, probably a lot lower, to like 90, high 90s accuracy. OCR in general, you don't realize how bad it is until you actually try to scan something in the real world and you're like, oh yeah, this used to be amazing, but it's not amazing anymore, because we have vision-based LLMs that are amazing. So yeah, you absolutely could. So I'm thinking, okay, you would scan it, you would get it in Paperless, it would put it in PDF form. And that's the thing that's cool about Paperless too. It tries to get everything in a PDF form. So you would basically get it in a PDF. I assume the reader that you're using uses PDFs too, or maybe it uses PubM or whatever the weird extension is that I can't think of. But I think if you got it into PDF, that would probably be good enough, I think.
Yeah.
Yeah.
My preference is Markdown. I wanna get things into Markdown. I got some solutions I'm working on around transcription and just really pure good stuff, let's just say. And that's my next goal, is to be able to transcribe with really good accuracy, because there's a lot of jargon out there and whatnot. And I'm close. I'm like 98%, let's just say 99% there. So I got some curiosities on that front.
It's so funny you mentioned that too, because I've been going down this rabbit hole on document scanning. This is kind of how it goes when I start researching a video or something that I'm doing. This is like the third rabbit hole within this whole Paperless thing. I learned that there are solutions, and people probably know this, I don't, because this isn't the world that I live in, but for document scanning, there are solutions out there that prepare your documents for AI. And so it will take, say, a document and identify it and break it up into its parts so that you can feed it to what you're trying to do, a system for RAG. So it will understand a title, a footer, and all of these pieces of the document, and not just the text itself. So there's two solutions out there. One's called Docling, which is from IBM, and it's open source. And it takes any document you want, whether it be MP3, PDF, Excel, and will break that up into its parts and then feed it to your LLM so that you can do RAG against it. The other one is Paddle. And so PaddleOCR is another one that I don't want to say does the same thing, because people who know this stuff are going to be like, it doesn't do the same thing. But for me, from the outside looking in, it's a solution trying to solve the same problem, where it's trying to not only get the data out of the document, but lots of metadata about it too. So those are two solutions that might help you. And I say that because you're saying you want everything in Markdown. That's going to help you big time, because if you scan a document that has a table and you do OCR against it, the text you get out isn't a table, right? And so same with even an LLM. So what you want to do is use Docling or Paddle to do the transformation or the recognition of the individual parts. So if you took a picture of a table, you know, a workbook table, Excel table, then your output could still be a table, but in Markdown.
And so this is like the next, I don't know, I feel like this is like on the frontier of document scanning. And anyone who's doing document scanning in the industry, they're probably like, this has been around for four years. This is coming from, like, a web developer who does infrastructure at home. And so this stuff is new to me. So those are two things I would look into. And I'm going to mention them in my next video, because they're pretty cool. But I think that these two solutions, Paperless-AI and Paperless-GPT, are trying to solve that thing. And the funny thing is Paperless-GPT can hook into Docling to do that for you too. So it's getting wild, man.
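A hedged sketch of what the Docling flow looks like in Python, assuming `pip install docling` and the `DocumentConverter` API from its documentation; the file path is illustrative:

```python
def pdf_to_markdown(path: str) -> str:
    """Convert a document (PDF, DOCX, scanned images, ...) to Markdown via Docling.

    The import is deferred so this sketch stands alone without the dependency
    installed; in a real pipeline it would live at the top of the module.
    """
    from docling.document_converter import DocumentConverter
    converter = DocumentConverter()
    result = converter.convert(path)
    # Docling recognizes document structure, so tables in the source come out
    # as Markdown tables rather than flattened text, which is exactly what you
    # want before chunking and embedding for RAG.
    return result.document.export_to_markdown()
```

Hypothetical usage: `pdf_to_markdown("scans/contract.pdf")` returns Markdown you can chunk, embed, and query.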
I was just thinking about architecture there. So if I, maybe you'll go here with me, Tim. I've been thinking about ETL pipelines. I feel like the world is an API, the world is a CLI, and the world is an ETL. And that means extract, transform, load. And I feel like that's exactly what you're doing there. So if I were building that pipeline and I were using Paperless and I was behind the scenes in your little nerd research lab there or whatever, I would want to keep the original images. And the reason I would want to keep the original images: I would want to extract whatever the purest original copy of it would be, which would be an image, right? Let's not take the transformed version of it. Let's extract literally what we get from it, the raw data. So let's have a raw layer. That's, if you went with the, I think it's the medallion architecture, I believe is what it's called. You got bronze, you got silver, you got gold. And so the bronze layer would be this original raw layer. And so that would be simply images of every page you have. It could be the simple image of your serial placard. Or it could be all the pages of the book. Store those in the raw copy as an image. Boom, you got that, that's your bronze layer. Then the T comes into play, the transform comes into play. You say, okay, let's now take all those images. And this is great as technology or models change or vision models get better: you can go back to that original raw source. It's almost like how they do mastering for films. They go back to an original film that was shot on film, remastered for 4K, but they're going back to those original slides. And so that's kind of the same process. I would want the original image, though.
Oh, I agree, I agree. I like it. This is similar to like metafields in developers and APIs when you scrape stuff. It's like, let's pull out the stuff that we can use and put it in our API. But oh, by the way, we're gonna have this metafield that has everything we found to begin with just in case we need to come back and process it a little bit later, a little bit better.
Yeah. You leave it on the table if you don't do that. Actually, you put it on the floor. You're not capturing it. So you tend to, in that process of getting to the pristine, throw away what was not really that useful to you. But in the ETL world, you wanna keep that original raw source, insofar as it does hold value. Because if you go back to that original raw, if you ever need to, as your technology changes in the transform layer, well, then you've got lots of things you could do: go back and get more accuracy if you couldn't get full accuracy. Now, if your score is 50 out of 50 or 100 out of 100 in your quality score and your raw is not really needed anymore, well, throw it in the reference pile. But I want the images. I want the images so that when it comes down to that table, I can actually have the LLM examine the image of the table and then the Markdown we get from it and be like, that's good. Let's go.
Yeah, I like it. Man, ETL has taken on such a different, I guess, a different perspective. Last time I was talking about ETL was, it was SQL, like trying to pull data out of one database and put it in another in the perspective of this. Yeah, it really makes a lot of sense because LLMs in general are like best effort, best guess every time. And so that best effort, best guess is gonna be different every time. And it could be better in the future or it could be worse, who knows? But yeah, saving the source image. Yeah, that's awesome. It sounds like a fantastic way to treat, you know, analyzing images like this.
Yeah, the pipeline is medallion. And like I said, bronze is the base layer, which is your raw; silver is, you know, maybe an augmented version of that that's been kind of cleaned up a little bit. Then you finally transform it in the final layer, your gold layer, which could be your production database, for example, or your production layer. So you've got, you know, that first raw layer; that sort of middle layer where you're sort of evaluating things, maybe you're doing some joins, maybe you've got multiple databases; and the final transform is in the gold layer, where you're taking maybe two or three different databases or two different data sources and you're merging them in production.
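The medallion layout described above can be sketched in a few lines of Python. The directory names and the trivial stand-in OCR function are illustrative, not a real pipeline; the point is that bronze is immutable, so the transform can be re-run whenever the models improve:

```python
from pathlib import Path

BRONZE = Path("lake/bronze")  # raw page images, stored untouched, forever
SILVER = Path("lake/silver")  # derived text, one file per page, re-derivable
GOLD = Path("lake/gold")      # merged, production-ready Markdown

def ingest_raw(page_name: str, image_bytes: bytes) -> Path:
    """Bronze: store the original scan exactly as captured."""
    BRONZE.mkdir(parents=True, exist_ok=True)
    dest = BRONZE / page_name
    dest.write_bytes(image_bytes)
    return dest

def transform(page: Path, ocr=lambda b: b.decode(errors="replace")) -> Path:
    """Silver: derive text from the raw image.

    `ocr` is a placeholder; swap in a vision model later and simply re-run this
    step over bronze -- the raw images are still there.
    """
    SILVER.mkdir(parents=True, exist_ok=True)
    dest = SILVER / (page.stem + ".txt")
    dest.write_text(ocr(page.read_bytes()))
    return dest

def publish(pages: list) -> Path:
    """Gold: merge per-page text into one production Markdown document."""
    GOLD.mkdir(parents=True, exist_ok=True)
    dest = GOLD / "document.md"
    dest.write_text("\n\n".join(p.read_text() for p in pages))
    return dest
```

The key design choice is that `transform` and `publish` are pure functions of the layer below them, so "remastering" is just running the pipeline again from bronze.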
That's cool. Yeah, yeah.
That's cool stuff. Let's go back to Chris from Crosstalk and unleashing, let's just say, Claude. Claude Opus 4.5 on your UDM Pro or whatever. If you're Tim, Techno Tim, maybe you've got the latest, greatest. I don't, you know, Tim. I'm a truck with that over here. I got to buy my own things. I know you probably buy your own things, but you get gifted a lot of stuff. I don't mean that negatively. No, no, yeah, I mean it. You get to play with the fun stuff. I'm envious. So here I am with my UDM Pro that's not even the special edition. It's just the one that's not special. And so that's what I'm using.
Yeah, yeah, it's, yeah, I wouldn't worry about it. Like, at the end of the day, their software continues to evolve. And so you're getting the latest and greatest everything, even though your hardware might not be up to snuff. That's the cool thing about UniFi in general.
Yours does like light dances and stuff. You got all your RGB stuff, bro. I mean, like I want mine when I put on my Beats and I take my dance break, I want it to dance and, you know, do a light show for me. I just don't have that capability like you do.
Oh yeah, no, no. Or you could play Snake. Did you see that video of someone playing Snake on there? Yeah, it's pretty wild.
On their display. That's right. So this world, something that happened recently with me: one of our neighbor friends came by, one of my son's friends came by, and he brought a Switch. His Switch 2, as a matter of fact, after Christmas. It was one of his presents. Like anybody who's inviting somebody with a device into their home, what do you think I said? I said, you know, you gotta be on my guest network. Well, for some reason, they just couldn't get on. Like, the authentication happened to the wifi network. You know, all things checked out as good. Couldn't get DNS. And so I thought it was my newly homegrown Rust project, which is called DNS Hole. So I rebuilt Pi-hole in Rust, Tim, if you didn't know this.
I didn't know that.
It's not available yet. dnshole.dev, in the future, very soon. I'm waiting for one or two more things to happen before I can do that. But right now, even as we speak, my DNS is being resolved by my own DNS server that has fully replaced what Pi-hole is. I think you'll love it when I can release it. Matter of fact, I'll share it with you soon if we can. All that to say is that he couldn't get on via DNS, and I'm like, gosh, DNS Hole, maybe you messed up here. It was not DNS Hole, okay? DNS Hole was perfect. You know what it was?
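DNS Hole itself is Rust and not yet released, but the core Pi-hole idea it reimplements (answer blocked names with an unroutable address, forward everything else upstream) fits in a few lines. Sketched here in Python with an illustrative blocklist, not the project's actual code:

```python
# Conventional "blocked" answer: clients get an address that goes nowhere.
SINKHOLE_ADDR = "0.0.0.0"

# Illustrative blocklist; real sinkholes load millions of entries from ad lists.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def resolve(name: str, upstream) -> str:
    """Return the sinkhole address for blocked domains, else ask upstream.

    `upstream` is any callable that resolves a name to an IP string, standing
    in for a forwarding resolver like 1.1.1.1.
    """
    labels = name.lower().rstrip(".").split(".")
    # Match the exact name or any parent domain on the blocklist,
    # so sub.ads.example.com is caught by the ads.example.com entry.
    candidates = {".".join(labels[i:]) for i in range(len(labels))}
    if candidates & BLOCKLIST:
        return SINKHOLE_ADDR
    return upstream(name)
```

The parent-domain match is the interesting part: one blocklist entry covers every subdomain beneath it.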
What's that?
It was VLANs, man. It was my VLAN rules. Okay, so of course I popped out Claude and I'm like, what's going on here? Because I couldn't figure it out on my own. And I'm like, gosh, why don't I just pull out Claude and let it just log into my UniFi and just check out some things? And so through investigation, it turns out I had some jacked up VLAN rules. And while it was in there, it was like, you've done this all wrong. You know, this is great work here, Adam. But like, you got old rules, you got these rules that conflict, you got this one rule that does nothing, you got this whole set of rules that doesn't make any sense. Can I fix this for you, please? Sure, Claude, please, please help me out. Five, ten minutes later, beautiful VLAN scenario again. All the world's great. He's on the internet. They're playing and having fun. They're playing Mario Kart and life is good. So I mean, like, that's the world we live in, Tim. You know, I can't even get my VLANs right, but Claude can.
That is awesome. So I haven't used Claude in that way. To be honest, I haven't used Claude all that much. I, you know, I use Copilot, you know, with models, you name the models. But no, that's interesting. So did it do it through the CLI?
Yes, and the API.
Yeah, yeah, yeah, yeah, gotcha.
Well, it knows the IP address. It has my auth. I've got an SSH key, so it'll SSH into my UDM Pro.
Gotcha, CLI, gotcha.
As me. So it logged in. Dude, listen, okay, hold your seat. It reads the Mongo database directly. It will update the Mongo database directly. I know that's, it's not cool in production. Don't go around the database. But I've done it before, and so I was confident. And then it will trigger whatever it does to, like, let the UI catch up, essentially. Like the cache layer that's in the UDM Pro, whatever. Just because you change the database doesn't mean that the reads come back quickly. You gotta sort of re-cache the cache kind of stuff. And oh yeah, man, it's so cool. It will log in to the UDM Pro via SSH as if it's you. Or in the case of Chris, which I'm sure he did or was thinking about doing, is you can use your own, you can use the API, the UniFi API. Or you can just log right into it and just SSH around, just cd directories, and like you're on a system, like you're a sysadmin, no different. I think that's such a wild world. I think that's what's making homelab more special to me. Proxmox has gotten a little more fun. Tim, I'm gonna have to show you some things, okay? I have a CLI built called PXM, stands for Proxmox. In a one-liner, in a one-liner, Tim, I can have a brand new Ubuntu machine running. I can specify the IP address. I can specify the CPU, the RAM, and the disk. It already has my SSH key. And literally, in less than 10 seconds, it's reporting the IP address back to me via the CLI.
Yeah, yeah, I love it.
And moments later, that same, I can do PXM info and then whatever the VM ID is. So PXM info 104, for example. And it reports back to me, SSH user is Ubuntu at whatever IP address, all that good stuff, whatever the details are of that machine. And moments later, my agents can be building on brand new infrastructure.
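PXM isn't public, but a wrapper like it would plausibly sit on the Proxmox VE REST API: clone a cloud-init template, push CPU/RAM/IP/SSH-key config, start the VM. A hedged Python sketch using only the standard library; the host, node name, template ID, and token are placeholders, not PXM's actual implementation:

```python
import json
import urllib.parse
import urllib.request

PROXMOX = "https://proxmox.local:8006/api2/json"  # placeholder host
NODE = "pve"                                      # placeholder node name
TEMPLATE_VMID = 9000  # assumes a prepared Ubuntu cloud-init template

def clone_payload(new_vmid: int, name: str) -> bytes:
    """Form body for cloning the template into a new full VM."""
    return urllib.parse.urlencode({"newid": new_vmid, "name": name, "full": 1}).encode()

def config_payload(cores: int, memory_mb: int, ip_cidr: str,
                   gateway: str, ssh_key: str) -> bytes:
    """Form body setting CPU, RAM, a cloud-init static IP, and an SSH key."""
    return urllib.parse.urlencode({
        "cores": cores,
        "memory": memory_mb,
        "ipconfig0": f"ip={ip_cidr},gw={gateway}",
        # Proxmox expects the public key URL-encoded inside the parameter.
        "sshkeys": urllib.parse.quote(ssh_key, safe=""),
    }).encode()

def api_post(path: str, body: bytes, token: str) -> dict:
    """POST to the Proxmox API with an API-token header and parse the JSON reply."""
    req = urllib.request.Request(
        f"{PROXMOX}{path}", data=body,
        headers={"Authorization": f"PVEAPIToken={token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def create_vm(vmid: int, name: str, cores: int, memory_mb: int,
              ip_cidr: str, gateway: str, ssh_key: str, token: str) -> None:
    """The 'one-liner VM' flow: clone, configure, start."""
    api_post(f"/nodes/{NODE}/qemu/{TEMPLATE_VMID}/clone", clone_payload(vmid, name), token)
    api_post(f"/nodes/{NODE}/qemu/{vmid}/config",
             config_payload(cores, memory_mb, ip_cidr, gateway, ssh_key), token)
    api_post(f"/nodes/{NODE}/qemu/{vmid}/status/start", b"", token)
```

Wrapped in a small CLI, `create_vm(104, "dev-box", 4, 8192, "192.168.1.50/24", "192.168.1.1", key, token)` is the whole "new Ubuntu machine in one line" experience.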
Yeah. Isn't that cool? Yeah, that is awesome. Yeah, no, it is awesome. This is exactly what I'm talking about. It's like AI and agents in general are just letting people get these ideas out of their head and tinker way more and go way deeper than they used to before. And this reminds me of when I went from IT to a software developer. I went from using other people's tools to using my own tools. And for me, that was like a light bulb. I was like, I don't need the UI anymore. Give me an API or even a CLI and I can figure it out. It was just this moment where I felt so much freedom to be able to build the software that I wanted. And so now it's just so awesome to see other people being able to do that, take that step from using other people's stuff to building their own stuff. And so, I mean, would you have ever written that thing for Proxmox five years ago? Maybe, but it would have taken a long time.
It'd take me way too long. I wouldn't have the time. It's just too daunting of a task to do, because it's really time. It's not necessarily ability. And I suppose it's probably both time and ability, but yeah, I would have just never tackled it because it would have been too hard of a mountain to climb, really, because, I mean, even with the augmented AI tools, it was still hard. I mean, it didn't get easier. It got easier to move faster and to get past the hurdles, but gosh, I had to solve so many problems and figure out so many ways to deal with, how do you store the image on Proxmox? Well, that's kind of obvious to most people, but like getting through the whole life cycle, and that was one of the first things I built with Claude. So I've learned a ton since then. So I want to rebuild it, because now I know what the tool should do. And before, I don't know what I was trying to make. I was just trying to explore, really. And now I know exactly what I want it to do, and what I don't really care that it does, what I just don't need. And so I just wouldn't waste my time on those parts of it. Because I was trying to make this, I guess, I didn't want to have to log into my Proxmox machine every single time and navigate the web UI and click all the things. And it's not that it's a bad UI. It's just that that's not the way anymore. This is the year we almost break the database. Let me explain. Where do agents actually store their stuff? They've got vectors, relational data, conversational history, embeddings, and they're hammering the database at speeds that humans just never have done before. And most teams are duct taping together a Postgres instance, a vector database, maybe Elasticsearch for search. It's a mess. Well, our friends at TigerData looked at this and said, what if the database just understood agents? That's Agentic Postgres. It's Postgres built specifically for AI agents.
And it combines three things that usually require three separate systems: native Model Context Protocol (MCP) servers, hybrid search, and zero-copy forks. The MCP integration is the clever bit. Your agents can actually talk directly to the database. They can query data, introspect schemas, execute SQL, without you writing fragile glue code. The database essentially becomes a tool your agent can wield safely. Then there's hybrid search. TigerData merges vector similarity search with good old keyword search into one SQL query. No separate vector database, no Elasticsearch cluster, semantic and keyword search in one transaction, one engine. Okay, my favorite feature, the forks. Agents can spawn sub-second zero-copy database clones for isolated testing. This is not a database they can destroy. It's a fork. It's a copy off of your main production database, if you so choose. We're talking a one terabyte database forked in under one second. Your agent can run destructive experiments in a sandbox without touching production. And you only pay for the data that actually changes. That's how copy-on-write works. All your agent data, vectors, relational tables, time series metrics, conversational history lives in one queryable engine. It's the elegant simplification that makes you wonder why we've been doing it the hard way for so long. So if you're building with AI agents and you're tired of managing a zoo of data systems, check out our friends at TigerData at tigerdata.com. They've got a free trial and a CLI with an MCP server you can download to start experimenting right now. Again, tigerdata.com. I wanted to be able to do a CLI version of it. I wanted to get JSON back and feed it to my agent. And now that's all possible, really. So have you played with or heard of this latest thing which is called Ralph Wiggum?
Ralph Wiggum, no, that name sounds familiar though.
From the Simpsons. Okay, yeah. Gosh, why is it called Ralph Wiggum? I forget. I think it's because they just keep trying despite setbacks, is how I, if I can paraphrase Ralph Wiggum, is keeping the loop going despite setbacks. And so I believe it was, yeah, I don't have it here. I was gonna try to figure out who actually created it. I think his name was, I don't know, I can't remember, but it was somebody who discovered this loop, essentially. So you essentially keep feeding back the loop of the input output that you would normally do with your typical Claude scenario, which is you entering the prompt, it doing something and returning some sort of response back to you and doing work in between. Well, they have found this way to create this Ralph Wiggum loop so that you can essentially define a pretty clear instruction set. You might call it a spec, but they actually just call it prompt.md. And with this prompt.md, which you feed into Ralph, it can do a loop. It could be a small loop, like build this one part of the feature end to end, and you just go until it's done. Well, the reason why I'm telling you this is because I feel like now that I know what it could do and what it should do, and if hardware was more available, I would be more inclined to do this, but I would build a test subject hardware machine that is Proxmox. And then, now that I have a pretty clear vision, I would just set this thing loose on that machine, just have it build this Proxmox redo, I suppose, potentially, because I'm just trying to get the value out of it, not so much the pristine code. Sometimes that's the value part too, and you enjoy the process, but just for an exercise, because I want to automate Proxmox, why not do it via this Ralph Wiggum loop? Just do it until it's done.
And you can kind of give it repetitions. You can say, okay, do one version of it, not a version, but I think like tries, I forget what the terminology is for it, let me see if I can find it real quick. It's like one iteration, two iterations, and you can specify, because you have only so many dollars to spend. I don't want to spend more than 20 bucks on this feature, 10 bucks on this feature, so either spend 20 bucks on the feature, or 10 or 15 iterations, until you get to some result, and then I'll come back and examine it. And to run it again, all you do is just run it again. It's kind of idempotent in that way. It'll just go back and do it again. That's such a cool world to be in, man. Building little home lab gardens with that kind of loop.
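[Editor's note: the loop described above is simple enough to sketch in a few lines of shell. The `agent` function here is a stand-in that "finishes" on its third pass so the loop itself can be exercised; in practice it would be a real call like `claude -p "$(cat prompt.md)"`. The DONE marker and the cap of 15 iterations are assumptions, not anyone's spec.]

```shell
#!/bin/sh
# Ralph Wiggum loop sketch: re-run the same prompt until the agent reports
# done or the iteration budget runs out.

rm -f .iter        # start from a clean slate
MAX_ITERS=15

# Placeholder agent: pretends to finish on the third pass. Swap this body
# for a real invocation, e.g.  claude -p "$(cat prompt.md)"
agent() {
  n=$(( $(cat .iter 2>/dev/null || echo 0) + 1 ))
  echo "$n" > .iter
  [ "$n" -ge 3 ] && echo "DONE" || echo "still working"
}

i=1
while [ "$i" -le "$MAX_ITERS" ]; do
  # Feed the same spec back in every time; the agent sees its own prior
  # work on disk, so each pass picks up where the last one stopped.
  out=$(agent)
  echo "iteration $i: $out"
  case "$out" in *DONE*) break ;; esac
  i=$((i + 1))
done
echo "stopped after $i iterations"
```

Because the prompt never changes between runs, re-running the script is the "just run it again" idempotency Adam describes: each pass either converges further or stops at the budget.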
So cool. Yeah, no, that is for sure. Because yeah, a lot of times agents will stop, and I know exactly what you mean now, because you can tell it to do something to completion, but it's either going to stop or check in or do something. And I know the prompts that I give. I have a lot of my prompts saved because they're annoying to keep explaining to AI, fix all unit tests and run all linting and do all this, do these things, don't do these things, go. And to be able to not be a human in the loop anymore until the very end is pretty cool to think about, is like, no, you loop, you figure it out, you do so many iterations of this piece of software, or you do one really good iteration and let me see the final result. At that point, you're just like a director. You know what I mean? You're just like, you know, a director. That's right. Yeah, give me some, that's right, a parent. Go clean your room. Don't come back until it's all the way clean. And you know what, I'm going to check under the bed. I'm going to check in the closet. So, you know, make sure you don't put stuff under the bed and in the closet. And when I come back in, you know, an hour, it better be done.
Right.
I'm not a parent, but you know, I remember those days. You know, thinking I can outsmart my parents by jamming everything under the bed.
But you got your pups, man. You got your pups, right?
Don't you have your dogs? That's right. Oh yeah, yeah, they're pretty clean though. And they don't clean up after themselves. I mean.
Not yet.
Well, generally, unless, you know, something, yeah, I won't go there, but, but yeah.
What is the centerpiece of your home lab right now? Like what are the centerpieces? I imagine Proxmox and TrueNAS is still there in the center. UniFi hardware is obviously probably part of the center. What's, what's around that center? What you building on?
It is. So this kind of goes into, you know, another one of my predictions for this year too, which is one big box. I think people are going to return to this one big box idea, only because things are hard to get a hold of. And while you might be able to get hold of lots of little older machines, you know, the one-liter machines, to do clustering, I feel like now that things are so scarce, people might be going back to one big box. That's your storage, that's your AI, that's your compute, that's your virtualization, that's your NAS, that's your everything. And it's kind of the way I've been going too. So I do have my TrueNAS box. It's one big box, you know, has a video card, has RAM, has 10 hard drives, GPU, all that stuff. And so that's not only my NAS running ZFS, but it's also where I'm running my applications now too. So I've moved a lot of my applications onto my NAS. Yeah, so I've been doing this stuff.
I watched that video of yours where you were doing stuff with containers and you had to do that sort of sidecar load, which I did follow, but I didn't, I didn't get the same results you did. Maybe, is that the way you're doing it with that whole, you have to create the YAML file and then it knows about it in the app container world?
Yep, well, if you're talking about TrueNAS, yeah, yeah. So you don't have to use the YAML and do it that way. I do because I want YAML, because I'm a developer, but also I'd much rather edit YAML than fill out a form any day. And also because then you get CLIs and you get help from AI, you get all the stuff that you get with YAML, and I can do it in VS Code. So there's that too. So yes, I'm now running my applications on top of my NAS. I've always gone back and forth, like, do I want my NAS to just be a NAS and just be storage, or do I want my NAS to be an application server too and then run those applications on top of my NAS? And so I've done both and I'm still kind of doing both, but for the most part, what I'm calling now my home production is on my NAS. So my home production, you know, I've gotten a little bit wiser over the years and a little bit crazier, but I have a home production now, and my home production is where the services are that need to be up. They need to work or I'm going to hear about it. That's Plex, that's my NAS, that's whatever else I have running, which is a lot of stuff. And when I say I'll hear about it, I don't just mean my wife, because she will say something if Plex is down, because we record a lot of stuff, and if it doesn't record Survivor on whatever night, I hear about it. So that needs to be up, but also, you know, alerts and stuff I have set up and running too. So, you know, my home production-
I'm sure you know this stuff even.
That is right, yeah. Like Grafana is on there. I've been getting so deep- Prometheus is on there. Oh yeah, I've been getting so deep into Grafana and Prometheus now. And it goes back to what you were saying. I've done a ton of Grafana and Prometheus in the past, a lot of observability stuff in the enterprise world or corporate world. But at home, I was always kind of like, man, that's a lot of work. That's a lot of work to get that going. Now that I have help from LLMs, it's work I want to do, because while I could muddle through it and spend a week getting scraping working on one machine, the trade-off just wasn't there. And now that I can scrape metrics on one machine in about 10 minutes, the trade-off is there. And so it's worth it to me. Yeah, tons of, I'm monitoring everything now. I have metrics on everything now. You name it. And I'm going to show off some of this pretty soon in my home lab tour that I do every year. It's coming soon, both the hardware and software, everything I host and run.
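[Editor's note: the "scraping working on one machine" setup Tim describes is small. Assuming node_exporter is running on the target (its default port is 9100), the Prometheus side is a few lines of config. The hostname and 15-second interval here are just examples.]

```shell
# Write a minimal Prometheus config that scrapes one machine's node_exporter.
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "homelab-node"
    static_configs:
      - targets: ["nas.home.lan:9100"]   # node_exporter on the NAS (example host)
EOF
```

Point Prometheus at this file (`prometheus --config.file=prometheus.yml`) and Grafana at Prometheus, and the ten-minute version of "metrics on everything" is basically this job repeated per host.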
Excited.
Yeah, so that's coming soon. But, you know, I need to make it really good for YouTube. People have certain expectations, and for some reason it's always got to be a little bit better. People will be like, oh, that's what you did last year. So yeah, I'm running my applications on my NAS now. And there is a reason for that. Not just because I want one box, but ZFS is a really, really good file system. And when you layer on stuff like caching, and then metadata, special VDEVs, and separating out your metadata and putting that on ultra fast storage, and then putting your app data on fast storage too, you get this really good bulk storage that can perform like NVMe storage. And so that's been my idea, this kind of crazy idea I have. I'm putting ten 14-terabyte hard drives in an array. I'm doing striped VDEVs, kind of boring, but, and then on top of that, I'm augmenting the things that need to run fast, like metadata lookups and app data, and putting that on super fast storage. But anything in that ZFS pool can also use it. So it's kind of a tiered approach to storage. You have ARC, which is RAM; RAM is going to be the fastest. Then I have this special VDEV where I can put files on there too that are below a certain file size. And then I have bulk storage. So my idea is run hybrid, hybrid ZFS. All of my video editing goes on there, but also all of my databases still go on that pool too. And I still get NVMe-like performance for most of the things that I'm running. So it's pretty cool. And it's been a challenge for me to get that working. So that's why I'm doing it. And mainly because I always look at it like this: if Plex is running on one machine, which it used to, and my media collection is on another machine, now I have two chances for it to be down. If I reboot my NAS or I reboot my application server, right?
And so now I've doubled the chances of that service being down in my home. And so I co-located everything onto one box. So now it's like, well, if the NAS is down, that means the apps are down too, but my NAS should never be down. And if it's down, we have big problems. So yeah, that's kind of what I've done. I still have a Kubernetes cluster at home. I still have three Proxmox machines in a cluster running Kubernetes, and I still have that on many machines. That's kind of my home lab test lab, where I test stuff before I actually run it in production. And I have a co-location too, where I'm self-hosting Proxmox and another Kubernetes cluster there that's running technotim.com, plug for my website. Like that's all self-hosted in a co-location, on hardware I own, running on Proxmox in a cluster, running Kubernetes that I maintain myself. So my home cluster is kind of a test bed for that too. So yeah, I actually run and manage three Kubernetes clusters. I have a lot going on, but it's fun.
That's a lot of stuff to run. I was going to ask you about your clusters, because I went the route you did a while back when you built that cluster from the NUCs. I'd only bought one 'cause I could only afford one, but you bought three and I thought that was cool. And I think that's where you ran your Kubernetes, or you ran Proxmox in high availability there.
One of the two. That's right, yeah. So I don't have Proxmox in high availability. I have them in a cluster. I'm like, I get it why people run Proxmox and-
I just don't have that need to do that.
Yeah, yeah, cause like I don't need HA VMs. I push that further down to the right, to the left. I don't know which way it is. I push it further down to the services and then run HA services, right? Like I don't need an HA Kubernetes node. Right. You know, I just build more nodes. It's the whole cattle approach where I'm like, you know, it's great for, you know, if you have a single VM, something super important that's old legacy app that you need to run and it needs to auto migrate somewhere. But with Kubernetes, you know, as you know, you don't worry about that. You just worry about the services. You run three replicas and if one node goes down, one node goes down. So that's the approach I take. I don't have Proxmox in HA. It's just clustered. So I have one UI and I can migrate stuff easily.
You got your TrueNAS, which is run your home production services.
That's right, yep.
You got your colo, which is in a data center.
That's right. And that's all of my public facing stuff, yeah.
Which makes sense because you want bandwidth there and you know, maybe no firewall poking, stuff like that. Although you could probably use tail scale or something else to do that.
Yeah, yeah. No, I used to host it out of here. No problem, you know, update DNS dynamically. It was all fine and I could today. I just wanted to kind of expand it and I had an opportunity from a local person here in Minneapolis to join their colo and I thought, hey, why not? Why not, you know? Let me do some super fun, you know, site-to-site networking stuff and backups back and forth. So pretty cool stuff.
What do you run on your Kubernetes clusters then? These applications, like what do you run on there?
So Kubernetes clusters, so I have, yeah, they're applications. Anything from Discord bots, websites. So I host, you know, my own documentation site that's, you know, multiple replicas. I have my own link site. I have some APIs that I run, two or three APIs because I have this mobile app that I use that I built many years ago that's still running and this mobile app then has, you know, APIs which then needs, you know, databases. Databases are in there. Other people's websites, like, you know, my brother has a website, I built a forum. You know, my other brother has a website, I built a forum. So those are hosted in there. Whole bunch of, I'd have to look, but just random stuff. But it's basically web dev stuff, a lot of web dev stuff.
And Proxmox was mainly just skunk work stuff, like lab stuff then?
Well, those are, so Proxmox is actually the host. My Kubernetes nodes are VMs in Proxmox, right? So I'm not running Kubernetes bare metal. I'm running Kubernetes as virtual machines. So I have nodes that are Kubernetes nodes running on Proxmox. And then Proxmox is also running some LXCs, like DNS, Postgres. So I have Postgres in a cluster running on an LXC because, you know, kind of mixing that into Kubernetes while it does work, depending on your IO, can go bad really quick. And I don't have a ton of IO. And then, yeah, it looks, you know, I'd have to look at the list, but I use LXCs too. I was always kind of against them. I know they're doing containers now too, which aren't really great. It's kind of like a hack of how they're doing containers now but I am using LXCs for small things. And when I say small, I mean like, I don't need a full OS for them. Right. Most of the time I don't.
It's interesting the way you're using TrueNAS because I was always in the camp of, let my NAS box just be a NAS box. But I'm kind of bummed because it's a Xeon CPU. It's a ton of RAM. And so I look at it, I'm like, well, you're not doing really much. I mean, like a file server is not taxing. As an enterprise, maybe with like thousands of users, that's the box you want. And so I've never really been happy about that scenario. But then I'm like, you know what? One problem, one issue, it's a NAS. I don't want to conflate what's on there because if I start putting applications on there, different things on there, well then the uptime may go down or I may have an issue that is not NAS related. So then I've mainly been like thinking like NFS mounts. So why, what made you want to put applications there versus just NFS mounts?
Yeah. So NFS mounts are great. You can get into some trouble with NFS mounts, though, like with SQLite. For databases? Yeah, terrible.
But not like other apps. You just need to have a storage. You know, for like a database application, you want to be closer to the actual storage for sure.
Yeah, yeah. And it's not even just the latency piece, it's the locks and everything like that. Like NFS just doesn't handle it so well. The latency I can kind of get over, but with SQLite in general, you'll have these locks that are locked that you can't unlock, and you'll get corruption and stuff like that. But NFS mounts were great. I mean, you got to figure out permissions and stuff like that. And I went down that route too, for a while, but then it's like, I have to back up, I have to take care of applications over here, set up all those mounts and do all that stuff. And then also still do the care and feeding for those NFS mounts and snapshots and keep that connection up. And again, you're back to, you've just doubled your chances of downtime. It takes two, right? It takes two to make one. So, you know, and again, I've gone back and forth with this so many times, this whole generalized versus specialized thing that you see all over IT and enterprise in general. I've generalized and specialized my servers so many times that I think I'm ahead of corporate entities that do this with their employees. And so I've specialized and generalized my NAS so many times with so many different things. But right now I'm landing on this. And I think a lot of it has to do with two things. Well, actually three. One, TrueNAS ditched Kubernetes and went back to plain old Docker, which was awesome, because I never would have done it if they were still running the whole Kubernetes bit, but they went back to just standard containers, quote unquote standard containers, Docker images, Docker containers, I guess you should say. Two, I'm able to do it with the YAML, because I'm not going to fill out their forms.
Like if they ever take that away, I'm bailing, I'm going to find something else to do because just not me, like I don't want to like fill out a form, you know, to be able to, you know, put in my environment variables, like with the .env file, I copy and paste them and they're there, like, why should I have to do that in a form? I get why they exist because people aren't developers, but that's not how I want to manage my containers. But a lot of that has to do with Kubernetes too. And then, you know, the other piece is this whole like hybrid, well, it has a lot to do with what you just said. It's like, hey, you have this beast of a machine just sitting there doing nothing. You know, I feel like it's that meme where that guy's like poking that thing with the stick and he's like, do something, you know, that's kind of how I feel if my NAS is like, you know.
It's kind of boring to watch the metrics and things like,
yeah, dude, tell me about it. It's funny you say that.
You're not doing much over there, beastie.
Oh, it's funny you say that, 'cause now you should see my metrics. 'Cause now I post all my app metrics on the screen. You can put little widgets. You know, my Traefik reverse proxy is doing five or six megabytes per second, which is, you know, not a lot, but you think, that's all day, every day. Yeah. And then I can see my MariaDB database going, doing 300, 400 megs per second of queries and stuff like that. I'm like, yeah, this thing is humming right now. And if I look at the CPU differences, it's only a couple percent more than it was before. And so I probably have 50, 60 containers running on there. And again, the reason why I ended up doing this is because I figured out a way with ZFS to create this hybrid pool. I mean, people in production probably won't do this. I think it's cool, but, you know, to make my hard drives, normal hard drives, be as performant and as responsive as NVMe. So not only the speed that you get, but as responsive. And again, I've layered in NVMe to handle all of the quick writes, quick files, quick access, and anything else is going into RAM, which is another huge case of, like, yeah, I want to store my applications on something that has tons of RAM. And if it's storing bits and blocks in ARC, which is ZFS's RAM cache, if it's storing that stuff in RAM, yeah, do it. I want it to. So I'm getting these super performant apps that are mostly reading out of RAM. If they aren't, then they'll read out of NVMe storage. And then worst case scenario, they read from a slow disk, which is the worst case scenario, because I have all those tiers.
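[Editor's note: one way to check how much of "apps mostly reading out of RAM" is actually happening. On Linux OpenZFS, which is what TrueNAS SCALE is built on, the ARC counters live in `/proc`; this one-liner computes the lifetime ARC hit ratio. The path only exists on machines running ZFS.]

```shell
# Compute the ARC hit ratio from the kernel's ZFS stats. Each arcstats line
# is "name type value", so $3 is the counter value.
awk '/^hits /   { hits = $3 }
     /^misses / { misses = $3 }
     END { printf "ARC hit ratio: %.1f%%\n", 100 * hits / (hits + misses) }' \
  /proc/spl/kstat/zfs/arcstats
```

A ratio in the high 90s means the tiering Tim describes is working: reads are landing in RAM before they ever touch NVMe or spinning disk.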
Can you tease what your RAM is, your special VDEV, which is NVMe, and then your disks? I know you're probably saving some of this for future videos, but.
Oh no, this is, you know, I've done videos on this too. So I have striped VDEVs. Let's back up. So for bulk storage, I have ten 14-terabyte hard drives. And of those ten 14-terabyte hard drives, I'm doing striped VDEVs, or kind of mirrored pairs, where I have two that are in a pair that are mirrored. Which means I give up 50% of my capacity.
So you got seven terabytes, essentially.
Yes, per pair is seven terabytes. Yep, that's exactly right. And so, but I do that for two reasons. One is because you can, in mirrored pairs, you can stripe them across. So you're basically kind of getting like a RAID 10. I don't want to say a RAID 10. Think of it like a RAID 10 where, you know, you have these pairs, but the data is striped across. So you get the performance on reads and you get performance on write, which is good for me when I edit videos because, you know, I have a bunch of sequential reads and writes and I don't know where they're going to be. But on top of that, you can also expand by pairs. And so traditional ZFS is super complicated on expanding. And I know that they've been adding features to be able to expand VDEVs and do all this stuff. But, you know, when I started my array years ago, I realized that, you know, buying two drives at a time is a lot cheaper than replacing all four drives with four different size drives just so I can get a bigger pool. And so in the traditional sense with ZFS, it's kind of what you have to do is you got to plan up front and buy up front, but doing pairs let me build incrementally and buy incrementally. So anyways, that's my slow disk array. Then I've done this thing called special VDEVs, which is basically you say, hey, all of the metadata about that data, instead of storing it on the pool itself, move it off onto super fast NVMe drives. So if you end up in a folder, like I have some folders that have thousands of files, you know, that can take a really long time for it to parse that metadata and retrieve that metadata because it lives on slow spinning disks. Well, if you have a special VDEV, you move it off of there so it can look it up on NVMe. Then on top of that, another thing you can do with special VDEVs is you can say, oh, by the way, don't just store my metadata there. If you find any files that are, I don't know, below 64K, just put them on there too. That's pretty cool. 
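[Editor's note: the layout Tim describes, sketched with plain `zpool`/`zfs` commands. The pool name "tank", the dataset, and all device names are placeholders; do not run this against disks you care about. The special VDEV is mirrored here because in ZFS losing a special VDEV loses the whole pool.]

```shell
# Bulk storage: striped mirrored pairs (RAID 10-ish). Expandable two drives
# at a time later with another `zpool add tank mirror ...`.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# Special VDEV: pool metadata goes to mirrored NVMe instead of the spinners.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Per dataset: also send files with blocks at or below 64K to the special VDEV.
zfs set special_small_blocks=64K tank/apps
```

That last property is the "small files live on NVMe too" trick: metadata lookups and small reads never touch the spinning disks, which is where most of the perceived responsiveness comes from.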
So if you think about it, like now any small file, which usually takes a little while to find on spinning disks is now stored on there too. And so it's like, I've just given my whole entire array a huge boost because now most of the stuff is handling happening on NVMe. There's one huge caveat to that whole thing. If you lose that special VDEV, you lose all your data.
But I'm listening, yeah.
So you lose all your data.
You're definitely cowboying this.
I mean, but you know what you're doing. Yeah, but I've done the same thing, mirrored VDEVs. So I have four NVMe drives, you know, and so two would have to die for me to lose all my data.
Right, that's smart.
But then on top of that, you should have a backup, right? And so, you know, I am definitely going cowboy because I want the performance, but I think that's, I'm being a careful cowboy. I don't know if that's possible. Well, you got spurs on, man.
You got your gun on your, you got your slinger.
That's right, but my slinger's on safety. How about that? You know, where, you know, I'm still, you know, taking precaution. I'm still building in redundant, you know, redundancies.
Yeah, if you were rocking one NVMe, I was gonna call foul, Tim, but you got four, of course, in true Tim fashion. So not only four, but you got them in pairs. So you can lose two of the NVMe. Now, did you also go with different brands, buy them from different places, things like that too, or did you just buy a batch?
So I, on top of that, there's these old Intel Optane drives. So Intel used to make these Optane drives. They're ridiculously fast, and they have like the lowest latency ever. Intel stopped making them. Yeah. But these things will like outlive earth, you know, like these things, like the read and write performance you get on them and the longevity and how many times it can read and write is like ridiculously long, that I ended up buying four of them because of how fast and because of how responsive and because of how long they're supposed to live. So, no, I bought two from, I think, Newegg, and then two more from Amazon, like later on. They're hard to find, but they're still the best drive for that specific use case. And so when those die, what I'm gonna do is just replace them with Samsung, you know, consumer grade. Yeah. I hope they never die only because they're the fastest thing out there. They blow away any NVMe that's on the market.
How did you get four NVMes in there? 'Cause that's different.
It is different. So I used an adapter, you know, a PCI Express card. So traditional NVMes, this is gonna get a little complicated too, but NVMes wanna use four lanes of PCI Express, where they can talk directly to the CPU or talk directly to something. I can't remember. I'm not a true infrastructure person. I just play one on YouTube. So, but anyways.
The mask is off.
That is right. Hey, I coined the term infrastructure as a hobby. That's kind of my thing, IH or something. That's what I call home lab, because, you know, my actual career is, I'm a software developer. Infrastructure's my hobby, but I love it. But anyways, NVMes can use four PCI Express lanes. And on your motherboard, typically you'll have one big slot. That's x16, sixteen PCI Express lanes. You have to buy a card that can actually split that out into four individual sets of lanes, and then you can address all four individually. There's also this thing called bifurcation, and you need to make sure that your motherboard can do bifurcation. And what bifurcation is, is exactly what I just talked about. It's able to split that x16 slot up into four individual x4 lanes. And your motherboard has to support it, and the card has to support it. Server motherboards, generally speaking, like Supermicro ones, a lot of them do. Desktop consumer boards generally don't. And so if you're going to do it on a board that doesn't support bifurcation, then you can buy a card, which is really expensive, that can do the splitting for you on the card. And those cards are, I don't know, four or five hundred bucks, maybe even more. But if you get lucky and you have a server grade motherboard that does it, that's where you want to do it. So anyways, that's how I get four NVMes in one PCI Express x16 slot, and they're all getting x4 bandwidth each. So that's where I-
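[Editor's note: once bifurcation is enabled in the firmware, each drive on the splitter card should show up as its own PCIe device negotiating an x4 link. A quick way to verify on Linux; the `01:00.0` address is an example, substitute whatever `lspci` reports for your drives.]

```shell
# One entry per drive, not one entry for the carrier card:
lspci | grep -i nvme

# Check the link width a given drive is capable of vs. what it negotiated:
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap: ... Width x4 ...   what the drive can do
# LnkSta: ... Width x4 ...   what it actually got
```

If LnkSta shows a narrower width than LnkCap (say, x1), the slot isn't bifurcating the way the card expects.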
And then you still have lanes for your GPU.
That's right, yeah. So I have a lane for my GPU.
So your GPU's in the primary slot, probably, right? Yep. And maybe one of your secondaries, which are bifurcating. You probably have a workstation or a server grade motherboard, I'm assuming. Yeah. Because you got that ability. Yeah. Workstations are great. Workstation motherboards are a great hack to not have to go server grade and not go consumer grade. You kind of get that middle ground. You're still in the 600 to 800 bucks, maybe a thousand dollars for the motherboard, but that's just the world we're in right now. But you do get the capability, and you usually get ECC RAM availability as well, versus non-ECC RAM, which you don't really need in a ZFS world, but you should have, if you want, peace of mind, I guess.
Yeah. Yeah, a lot of people- Non-corruption. Yeah, that's like, a lot of people will go back and forth on whether you need it or not. People will say, well, if you care about your data, you do, because, you know, if the memory gets corrupted-
Well, that's where it writes first. It's the source of truth.
Yeah, so with ZFS, it's less important. I'll say that because of how it checksums and can verify the data. Some people still swear by it. Like they'll say, if you don't use ECC, you might as well not even do stuff. You know, people go to the extreme. I'm not there, but I'm glad I have it. How about that?
That's an interesting world. So when you think about a problem like this, so let's zoom out to, you know, not scare the home labbers away, those curious folks. Yeah. When you think about this problem, do you get out your big old whiteboard? When you're specking this world out, how do you think about it? Because you're a YouTuber too. So you think about it probably in the story arc. And you also think about it as a technologist. How do you map this and plan this and test this world? How do you do that?
You know, first and foremost, like I've been doing this for a long time, even before YouTube. Like I've had a server in my basement since, I don't even know, I'll probably date myself, probably like 2004 or five, you know, going back to this old piece of junk that, when I was in tech support, I asked my manager if I could take home so I could learn about Linux and learn about Active Directory. And he looked at me like I was crazy, because the thing was, you know, 10 years old already. And he told me, yeah, you can take it, just take the hard drives out, cause, you know, it had data on it. And so I've been doing this for a long time. I've had a server in my basement for a long time. So I generally think about, you know, what do I want to do? What capabilities do I want to have? You know, generally speaking, I want to provide file services. So I want a NAS, you know, to be able to do shares and store my files on. I don't want to store my files on my system, obviously. I want to store them on my NAS, because then that gets backed up. I want to have some compute, you know, and it takes very little compute to do most of the things you want to do. A lot of people will say get server grade, you know, but I could spin up this i3 right over here that's, I don't know, from five years ago, and it will destroy any, you know, self-hosted container I throw at it and laugh at it. So you don't need much compute at all. RAM is great, but again, you don't need tons of RAM. And so I just try to think of, you know, what services do I want to provide? For me, it's storage. For me, we record a lot of TV and, you know, stream stuff at home. So I think about Plex. Plex then brings in a video card if I want to transcode. Most of the stuff I can direct stream here, but you know, when we travel or whatever, iPads, you know, phones, they want to transcode. So having a video card in there that can do that transcoding for me on the fly is good.
You know, that's a decision point too. You could get by with Intel stuff, but then I started thinking, well, I want to run models at home too. So I kind of want to, you know, have shared infrastructure. So I want my video card to be shared too. So that's what I do now. Share my video card with Plex, with Ollama, with the stuff we just talked about, you know, with Paperless AI and stuff like that. Technically that's going through Ollama. But I even do some transcoding for some of my, you know, video cameras too. So yeah, I try to find a video card that'll work in all those scenarios. And so, you know, I never want people to think like, oh, I need to go and buy this big thing before I can start homelabbing. It's never like that. Like if you have an old PC in your basement, use that first. Figure out what you want to do with it. You know, use it as is. If it had Windows on it from 10 years ago, wipe it, put Linux on it. And if you're scared of Linux, that's fine. You can put Windows on it. But I would say just try Linux, because you're going to find things are a lot more compatible.
Yeah, for sure.
And just try it. I mean, you might find it sucks to be a sysadmin at home. You know, I enjoy it. I enjoy it. When things go wrong, my wife says something doesn't work, that's when I'm like, all right, you know, duty calls.
Something broke.
I have a job, my hat is on. That's right, this is why I'm-
Turn it backwards if I'm Tim.
Oh, that's right, that's right, yeah. But you know, I enjoy doing it. You know, a lot of people joke, a lot of my comments, you know, on my YouTube videos are "bro has a full-time job at home." And it kind of is, right, to an extent, but if you build things right, you've got things working and you know how they work and you document it in case things go wrong, and with a little help from AI, a lot of things are hands-off. So you're focusing on the next thing you want to do. So, you know, again, I don't have like, you know, a recipe of how I do things. I just try to think of, you know, what are my base services I want to have? It's always going to be storage, streaming, some compute, and some kind of transcode capability. And from there, I just try out a whole bunch of containers. You know, I treat containers now, just think of them like apps on your phone. I mean, I can't spin them up quite that quick. I can spin them up in, you know, probably about five, 10 minutes, all said and done, working with a proper certificate, but they're like apps on my phone. If I want to try an app, I try the app. Does it do what I want? Yes, it does what I want. No, it doesn't? There's this other app that I should try. And so I'll go try that. And, you know, that's kind of the world I live in now: my self-hosted services are basically apps at home for my home to use. I say home, I mean, pretty much me, but you know, there are some that my wife uses.
Yeah, yeah.
And, you know, and it's, as long as I have a platform to do all of that, it really doesn't, you know, matter what parts I used because you can get by with so very little nowadays.
They really can. I agree, I don't think you have to go so big to, I mean, like this is where it leads, okay? It's like, I don't know if you're a golfer, Tim, but that's what golf folks say as well. It's like, hey, you get invited to play golf. You're like, nah, I don't want to go. And then somewhere during that first round, you're like, oh my gosh, this is the best game ever. I'm going to go buy all the clubs I could possibly buy. And golf tech, like any tech, is just limitless, really, what they can fine tune and dial in. And so if you ever become a golfer, you start with this small itch and the next thing you know, you've spent $10,000 your first year in some way, shape or form. I'm being facetious, but you know, golf rounds aren't cheap. Golf trips with friends aren't cheap. Golf clubs are not cheap. And then you have to have special clothes, or you want to have special clothes, because, hey, why not dress the part? I kind of feel like that's the same thing with homelabs. Like you can begin with, like I did. I can remember the day when I got my Raspberry Pi. I can remember the day when I spun up the TrueNAS, or actually it wasn't TrueNAS, it was just whatever 45Drives sent me. Cause way, way, way back in the day, I want to say probably six or seven years ago, maybe eight years ago, they sent me an AV15 to try out, and they said, no, you know, you can just keep it. It's yours just to have and play with. And this is when they were first launching that line to homelabbers, or what would become homelabbers. And I was like, really? They're like, yeah, we don't want it back. It would cost way too much to ship it back to us. And at that time, I guess hardware was just cheap enough, they were like, yeah, we don't even want it back. Just use it and enjoy it and tell people about, you know, your experience with it. I'm like, okay, cool.
And so that today is my TrueNAS box, you know, and it's got the Xeon Silver, I think, 4012, I believe, the CPU in there, if I recall correctly, or 4212, maybe. But I began, that was gifted to me. I didn't even know what I was doing with it at first. But then I began on a Raspberry Pi and started to experiment. And so that's where I began, with everything running on my Raspberry Pi. Now I wouldn't, because, not my needs, like what I actually need for compute has grown, but my desire to play with bigger things has grown. You know, the playground, I can't just have the carousel on it. I gotta have the slides. I gotta have the swing. I gotta have the rope climb. I've gotta have, you know, all the things. You know, so my playground for homelabs has just grown a little bit. I'm curious though, when we look at maybe this potential dichotomy between TrueNAS and your desire to put all things there, which maybe somebody out there is having similar feelings, and then this world of Proxmox. I feel like I'm with you. I kind of want my TrueNAS box to do everything, but TrueNAS, the software, is not quite there yet. Where do you see, do you have this vision or purview into that world where you can see TrueNAS being this all-in-one big box software? Cause it traditionally has been great, you know, great for what it is. You know, ZFS, storage pools, a little bit of applications if you need it, but not a tremendous amount. I kind of want my Proxmox and my TrueNAS in one box. Is that how you feel?
Yeah, I do. And you kind of can. So you could virtualize TrueNAS. So let's take your thing, for instance. You did that before. I've done it for years and it's totally fine. Totally fine.
It's totally fine.
It's totally fine, dude. It's totally fine.
But don't you have to map them to like weird drive IDs and stuff? Isn't it weird with uptime, I suppose, if you had some major issues with your drives?
No, so it's easier than you think. So let's take, for instance, you have that TrueNAS box, it's running TrueNAS. Just pretend you want to run Proxmox on it now, but then you create a TrueNAS virtual machine, and you pass through the hardware of that HBA controller to that TrueNAS virtual machine. You give it the whole piece of hardware. You say, nope, this hard drive controller, this HBA, is now assigned to this virtual machine. And what that does is now the virtual machine has direct access to all of those disks. There's no IDs to pass through, because it thinks it's the true owner of the disks. And if you do that, then your life is good. And that's one way to do it. But I agree. So let me put it this way. And this is not a dig at either product; they're each better at some things. You know, TrueNAS is leading as a NAS. They're leading with "I'm a NAS, but I can also do apps, and I can also do virtualization." Not that great. And that's my own opinion, but it can do virtualization. It's just not that great at all, because it's not a hypervisor first. It's leading with NAS. And when you think about Proxmox, Proxmox is like, hey, you know, I'm a hypervisor first. That's what we do. We do virtualization. You could install apps like LXCs, although I'll get into that, but you could install a whole bunch of apps and run them on the machine itself. And that does work. But if you think about it as a NAS? Not a great NAS experience. Sure, you can install a Samba server, assign it some, you know, pool, and then do all of the Samba config in a CLI and all that stuff, figure out permissions. You can do all that. They're both capable of doing each other's job.
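Tim's passthrough recipe maps to just a couple of commands on a Proxmox host. This is a hedged sketch, not a full guide: the VM ID 100 and the PCI address 0000:01:00.0 are placeholders you'd replace with your own, and it assumes IOMMU is already enabled.

```shell
# Find the disk controller's PCI address on the Proxmox host.
lspci -nn | grep -iE 'sas|sata|raid'

# Hand the whole controller (example address 0000:01:00.0) to VM 100.
# TrueNAS inside the VM then owns the disks directly -- no per-disk
# ID mapping needed, which is why "there's no IDs to pass through".
qm set 100 -hostpci0 0000:01:00.0

# Prerequisite: VT-d / AMD-Vi enabled in the BIOS and on the kernel
# command line (e.g. intel_iommu=on), so the device can be isolated
# and given to the VM.
```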
I'm kind of like, I want something in the middle, and there is HexOS, which is kind of coming, but that's kind of like, you know, some joint venture between some people and Linus and, you know, TrueNAS, or iXsystems, creating this more, I guess, consumer friendly version of TrueNAS. There's that coming. And, you know, I played with the beta, looks pretty cool. It's kind of gonna be, you know, a facade on top of TrueNAS, because, you know, they're gonna use TrueNAS APIs and it's really TrueNAS under the hood, but that's something different. I want the best of both worlds. I wish they would, I mean, they're not gonna do this, but if I could design, I guess, the perfect NAS at home for me, it would be, you know, a TrueNAS-like experience for the NAS piece, and maybe even for the application piece. But give me, you know, the virtualization capabilities that Proxmox has, you know, and the networking capabilities that Proxmox has. Or, you know, if I couldn't do that, I would love for Proxmox to just run Docker containers. They are so, I don't know what's going on there, they're so against running Docker on the host. And I know you could shim it in and do it yourself, but they've even gotten to the point where now they're converting, you know, OCI containers, which is, you know, Docker containers, quote unquote, to LXCs, to then run as an LXC, because they don't want to run true OCI containers like Docker. Like, I don't get it. I mean, I don't know, Docker CE, does it cost money? I mean, I don't know. I'm not a lawyer.
I know that there's been a lot of change in the Docker world in the last, you know, six years. I mean, they were almost a dead company. And then they were revived. It was Docker 2.0. I talked to their CEO on a podcast here, on the same podcast you're on, a few years back. It was a fun conversation about Docker 2.0, where they went from zero revenue to revenue. And I mean, at some point you have to protect your moat. Really, you do as a company. And I imagine it's probably licensing. It's probably something there. But you know, as a user like you are, I want those worlds to marry. So figure it out.
Yeah, yeah. Like, and Docker's done, like now that they have Model Runner, they're doing all this Scout stuff and scanning containers and really building up their Docker Desktop. And I'm just focusing on the Docker CE part. And maybe there is licensing around, you know, maybe you can't ship this with your own product. But let's take that all the way out and just go down to, like, containerd. Like, just get, you know, OCI images, Podman, I mean, something. Like, at the end of the day, I don't want to run-
Podman is available, more license friendly.
Yeah, and so at the end of the day, like I wish I could run OCI containers as first-class citizens on Proxmox. And again, I don't know the reasoning behind it. I honestly feel like it has something to do more with their strategy, you know, and how they're, you know, trying to be highly available, and LXCs are highly available, although I don't think that's there yet, and VMs are highly available. I just don't think they want containers spinning up on the host itself. I get it, but you can get around that.
Friends, you know this, you're smart. Most AI tools out there are just fancy auto-completes with a chat interface. They help you start the work, but they never do the fun thing that you need to do, which is finish the work. That's what you're trying to do. The follow-ups, the post-meeting admin, the "I'll get to that later" tasks that pile up until your Notion workspace looks like a crime scene. I've been using Notion agent and it's changed how I think about delegation, not delegation to another team member, but delegation to something that already knows how I work, my workflows, my preferences, how I organize things. And here's what got me. As you may know, we produce a podcast. It takes prep. It's a lot of details. There's emails, there's calendars, there's notes here and there. And it's kind of hard to get all that together. Well, now my Notion agent helps me do all that. It organizes it for me. It's got a template that's based on my preferences, and it's easy. Notion brings all your notes, all your docs, all your projects into one connected space that just works. It's seamless, it's flexible, it's powerful. And it's kind of fun to use. With AI built right in, you spend less time switching between tools and more time creating that great work you do, the art, the fun stuff. And now with Notion agent, your AI doesn't just help you with your work. It finishes it for you based on your preferences. And since everything you're doing is inside Notion, you're always in control. Everything agent does is editable, it's transparent, and you can always undo changes. You can trust it with your most precious work. And as you know, Notion is used by us. I use it every day. It's used by over 50% of Fortune 500 companies and some of the fastest-growing companies out there, like OpenAI, Ramp, and Vercel. They all use Notion agent to help their teams send fewer emails, cancel more meetings, and stay ahead doing the fun work.
So try Notion now with Notion agent at notion.com slash changelog. That's all lowercase letters, notion.com/changelog, to try your new AI teammate, Notion agent, today. And when you use our link, as you know, you're supporting your favorite show, the Changelog. Once again, notion.com/changelog. Do you mess with Fly.io by any chance, that world, in like prod? Fly.io?
No, I haven't.
It's the, I mean, if you love containers, then you'll love Fly. I mean, Fly is what we host changelog.com on. They're a partner of ours. We love them, obviously. This is not technically paid. I'm not paid to love them. We just love them anyways. Fly is like that. You're running containers, right? You're running a container in production. It's Firecracker VMs, I'm not familiar with everything behind it, but Fly Machines, essentially. They spin up very fast. They spin down really fast. Now, I would love to have a version of Fly in my homelab. And it sounds like that's what you're describing there, which is: I want an OCI container. I want, you know, as close to bare metal as I can get. I don't want to have to spin up an Ubuntu VM to then throw Docker on, to then launch my Docker container. I would like the entire system to be container friendly, it sounds like you're saying.
Yeah, yeah, yeah. I mean, you know, similar to Cloud Run on Google or any of these things: you give it a manifest, you spin it up, and, you know, it's running, container only. And I mean, that's kind of what I'm getting with TrueNAS right now. I feed it some YAML. I've got to create a dataset for it, and I've got to tell it where to put the data it's going to use. Then I use a Docker Compose file and I'm done. So that's kind of what I'm getting with TrueNAS now. I just wish, you know, Proxmox would just build that into their CLI or UI or whatever, so that people just don't have to do this "well, I'm going to run an LXC, and I'm going to run it as root, so then I can install Docker, to then run, you know, an OCI container inside of this LXC container," or, you know, pay the virtualization tax and run it inside of a VM. I just want, you know, as bare metal as possible. And I don't know, I'm sure Proxmox has a lot of reasons why they don't do it. Probably, you know, has something to do again with their strategy, it doesn't fit in there, but I just don't see how you can ignore OCI containers in general. Like, yeah.
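That "dataset plus compose file" workflow can be sketched concretely. This is a minimal, hypothetical example, not Tim's actual config: the app, the port, and the dataset path `/mnt/tank/apps/paperless` are all placeholders.

```shell
# Sketch of the "apps like apps on your phone" workflow: write a tiny
# compose file pointing at a dataset, then bring the container up.
mkdir -p /tmp/homelab-demo && cd /tmp/homelab-demo

cat > docker-compose.yml <<'EOF'
services:
  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    ports:
      - "8000:8000"
    volumes:
      # A ZFS dataset (or any host path) holding the app's data.
      - /mnt/tank/apps/paperless:/usr/src/paperless/data
    restart: unless-stopped
EOF

# On a Docker host you would then run:
# docker compose up -d
```

Trying a different "app" is then just a new compose file, which is the disposable, phone-app feel described above.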
Yeah, I agree with that. I do agree with that. Well, the closest I've been able to come is the CLI I built, which in this case has the cloud image, an Ubuntu or even a Fedora cloud image, on the server already, which it uses as its base, like you would. And so rather than going that whole route of creating the template, including the template to create a new machine, the CLI does a version of that through automation. Gotcha. And, you know, through cloud-init, you're able to define the network, right? You've got all that there. The user, the network, the SSH key. And then everything else is the Proxmox API, which, I really wish Proxmox and TrueNAS did a better job of documenting their APIs. It's just not super, I mean, it's good documentation, but I just feel like they don't treat it like a first-class citizen. And me, the kind of developer I am, I want to play with your API. I want to build my tools on top of your system, not be forced to go to your web UI. Like, even with TrueNAS, you're the same. I don't want to fill out a form to spin up a new thing. I would much rather automate through whatever layer it is, whether it's me or an agent, some sort of CLI, and then if the agent's using it, it can just easily use the CLI I built. But I want to be able to automate those things on those kinds of systems. And the closest I've come is that, exactly. It's like, PXM, you know, VM, new, and then specify all these things. Send it a template, which is super easy, and those are YAML files. Like two or three YAML files to define a couple things, and you're off to the races. You can define a minimal Ubuntu, you know, brand new machine. Now, you could define it to be bigger, but I've just found that it's easier to layer on a post-install bash script than to try to script it all. Like, when you get into Ansible land, it just got really nasty. It got really error-prone.
So I was like, you know what, forget all that. I just want to define a base VM, a base VM that is blessed with an IP address, with the RAM I want, with the CPU I want, with the disk I want, and, you know, protected. It's got a dash P on it, which means if I try to delete it accidentally, or my agent tries to, it's got to go through this whole entire dance and, you know, send across like documentation and social security numbers, stuff like that, to delete a VM. Like, it can't just accidentally delete a VM, you know what I mean? You've got to do some things, you know? But it's pretty, I mean, literally, so your 15-minute scenario that you defined earlier with a new container? Less than 60 seconds, Tim. Maybe even 30. Maybe a minute to get the IP address back, because it's got to launch the VM, get all the updates, right, from Ubuntu or whatever, which takes time, and then actually launch the actual machine itself, and then get an SSH key in. That's the thing that takes time: the boot, the update, and then finally, you know, QEMU giving you that IP address back. But like, that's all basically instant.
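The "blessed base VM" idea leans on cloud-init, which Proxmox supports natively. A minimal sketch under assumptions: the hostname, username, and SSH key below are placeholders, and the `qm` command at the end is one way a Proxmox host can consume such a file, not Adam's actual tooling.

```shell
# Write a tiny cloud-init user-data file: a user with an SSH key,
# one package, and a post-boot command. All values are illustrative.
cat > /tmp/user-data.yml <<'EOF'
#cloud-config
hostname: lab-vm-01
users:
  - name: adam
    groups: [sudo]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...examplekey
package_update: true
packages:
  - qemu-guest-agent
runcmd:
  # The guest agent is how Proxmox/QEMU reports the IP address back.
  - systemctl enable --now qemu-guest-agent
EOF

# On a Proxmox host you would place this in a snippets storage and
# attach it to a VM, e.g.:
# qm set <vmid> --cicustom "user=local:snippets/user-data.yml"
```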
Yeah, that is awesome. And so when I said 10, 15 minutes, that's me figuring out, okay, data mounts, environment variables. Like, that's me, that's research I had to do anyways. If I knew it, yeah, it'd still take me five minutes, but that's not 60 seconds. I should look at doing that with TrueNAS too. It's just automating-
We should collab. I'll show you what I'm doing behind the scenes, because it's not ready. Well, it's ready to be used if you don't mind warts. It's not open source ready yet, because I think people have a different expectation of what that might be. I want to do one more rewrite, because I now have more clarity on what it is. But I think we could layer on what I'm already doing with the idea of saying, okay, there's TrueNAS on the system here, and there's a separate layer where you can spin up that same VM, but then also say there's mounts elsewhere. There's NFS mounts elsewhere, whatever you want to do to do your world, which is what you're talking about there.
Yeah, I'm curious why you don't use Ansible though, because I have my post Ansible. Like I have-
Bash. Bash and agents, agents like Bash.
Yeah. I didn't fight the system. I just mean like, no, I get it, but I have my Ansible playbook for a new VM, you know, and it has 50, 60 tasks that are all there. Yeah. And it does it intelligently. Like, hey, if I say, you know, stop the firewall service, well, if it's not running, it's not gonna stop it, you know? And hey, if I tell it to install this one package, it knows that doesn't run on this type of machine, so it's not gonna try, you know what I mean? I have my Ansible playbook where I just click go. Anytime I create a brand new machine, I have a standard playbook I run, and it's gonna apply updates, reboot, install Z shell, configure Z shell with robbyrussell, because that's what I like. You know, it goes through the whole shebang of like, this is my standard VM, and I just don't even pay attention to it, you know? It probably takes, you know, a minute or two throughout all the reboots and applying and installing packages. But after that, it's like, it's ready, you know, ready for production.
Yeah. So I'll defend that by saying, I have never been, not for any reason, an Ansible guy. I just never really got into it. I didn't understand that world. I know what it does and how people use it, I get all that, client-server, I get the recipes, I'm not foolish, but I never got into it enough to need infrastructure automation. And the lingua franca of agents is Markdown files and Bash. Bash is everywhere. You have to add Ansible on your client or on your server somewhere, and it's baggage next to native Linux, right? So it may work for you, cause that's been your history, right? But in that world, when you're trying to automate that kind of thing, I just tell the LLM, hey, I'm launching a new instance of DNS hole and here's the specification for it. It will write a script. That script can be idempotent, and it will rerun it. And it's just as good, if not probably better and faster, than Ansible.
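The idempotency point is worth making concrete. This is a small sketch of the style of script an LLM might produce, not Adam's actual tooling: every step checks current state before acting, so re-running it is harmless, and every path and value here is illustrative.

```shell
#!/usr/bin/env bash
# Idempotent setup sketch: safe to re-run any number of times.
set -euo pipefail

APP_DIR=/tmp/lab-app-demo

# Create the directory only if it does not already exist.
[ -d "$APP_DIR" ] || mkdir -p "$APP_DIR"

# Write the config only if it is missing, so local edits survive reruns.
if [ ! -f "$APP_DIR/config.env" ]; then
  cat > "$APP_DIR/config.env" <<'EOF'
PORT=5353
UPSTREAM_DNS=1.1.1.1
EOF
fi

# A real script would guard services the same way, e.g.:
# systemctl is-active --quiet someservice || systemctl start someservice

echo "converged"
```

Running it twice leaves the system in the same state, which is the property Ansible tasks give you and a guarded bash script can give you just as well.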
Oh yeah, yeah, yeah, yeah. I'm not, I'm just thinking about like, if you're building, you know, if you're building a CLI, like at some point, like, you know, it has to, it has to be able to scale to different things. And so like, are you recompiling the CLI every time it needs to install one more thing or just writing a new script or how does that work?
Well, it has versions; it's written in Go. So it's got versions. And so if there's new features or a patch release, I'll patch release it and, you know, throw a new version out there, which you can then, you know, run PXM update and it will get the new version of it.
Sure, I just mean like the source of truth for your list of things, you know, say like one day you want to install Z shell.
Here's the beautiful thing, Tim. Those aren't in there.
Yeah, that's what I was going to say.
Those things are in user land. Those templates are in user land. You can define them. And so that's why I want to rewrite it, because, you know, I haven't been doing this as a day job. This is my little scratch, my little itch. I probably haven't changed the code in three months, honestly. It does what I need it to do. This Adam wants to change it, to release it and make it better. But the world I wanted to build was this really interesting CLI where most of those things live in user land. And so I have a separate repository where the world can define their own minimal Ubuntu, minimal Fedora, minimal, you know, Debian, you name it, whatever you want to do. And all the patterns are there. And all you've got to do is clone the repository, put it in a certain place, update your config, and when you run PXM, you know, new VM or whatever it is, you can do dash dash template or dash dash package and say the name. Well, it knows that, because the config says all of your templates are over here. So you won't have to recompile the Go binary, the Go CLI, to do that. All that will live in user land. Originally they didn't. Okay, I was a fool. Okay, I put those things in there, and I was going to make it so they extract out, because I wanted to give people a good bootstrap. Well, then I learned, well, that's just probably not the best way. And so it's better to have a user land repository that people can commit to and update, and it can be a community thing. And sure, it's one more step to clone that repo down and put it in a config, but I feel like the trade-off is better long-term. So that's where I'm at. I haven't gotten to the point where I can put a lot of those minimal templates in there, or even more expressive ones that use Ansible, which is totally possible. I just didn't want to do it. I just wanted to get a base image and unleash my agent on that image and be like, make the world. And it goes and it makes the world, you know?
Yeah, no, that sounds awesome. I'd love to check it out. Cause yeah, again, I did Ansible only because that was way better than running, you know, CLI commands over and over and over, or even writing one big bash script. Now that's changed, because LLMs love bash, you know? So now things have changed, and yeah, I should definitely revisit it. Yeah, I'd love an agent, yeah. At some point I'd love to hook up my Open WebUI, so my internal chat that I use with Ollama, and, you know, use open agent or something, is it that new open agent that's out?
Open code.
Yeah, open code, sorry. Yeah, open code, run that agent and tell that agent to do stuff in Proxmox for me. All via chat. Like I just don't even want to run a CLI. That's what I'm saying. Like I don't even want to run the CLI. Sure, I could do that and expand the variables and figure out what it can and can't do.
Well, the problem with it is that Proxmox has some challenges it would have to navigate. The reason why I went the CLI route was, one, I thought I wanted it for me. Now the whole world's shifted to be agent first.
Yeah, I want to build an MCP. I want to build an MCP for Proxmox, where the MCP understands the Proxmox API. So, hey, MCP, you speak Proxmox API. LLM, you speak human, but you also speak to the MCP. You know what I mean? Now it's like, I speak human to an LLM, that LLM speaks MCP, and the MCP speaks API to Proxmox and just does the stuff.
You'll hit a limitation at some point, though. So the CLI is the glue layer in the middle there. So the CLI is kind of the important piece. And then you layer the MCP server on top of your CLI. And so the MCP can speak native Proxmox as necessary, like if there's an API, it uses the native API, and for the things the API doesn't do, that you've taken from 15 steps to one command and you've got tests against, well, it uses your CLI. Your CLI, in addition to the Proxmox API, with your MCP server and your agent in OpenCode or in Claude, well, that's the beautiful world.
Yeah, man, yeah, that sounds awesome. That sounds awesome for sure. Because yeah, the Proxmox binary, whatever it is, it's a PCM, I don't know, PXM, that can do so much. I mean, that's how people are doing even their Ansible scripts right now. They're shelling into it, running a PCM command. So yeah, it sounds awesome for an MCP, too, to be able to do that. But then that's a lot of words for me to type. I'm getting to be a lazy developer. Like, I'd rather you have a CLI and I run a command. Like, do I want to sit here and describe what to build? No, just build a standard VM. Let me know when it's done. Call it this.
Right. Well, the determinism there is the challenge. You can do that, but the reason why MCP came into play was because LLMs are non-deterministic. You can say that, but every time you might get a different version of it. Or if you're using Sonnet versus Opus now, and like Opus 4.5 is the default per Anthropic and Claude, they want you to use Opus 4.5, it's non-deterministic. It may know that and you get great results every time, but it's not the same path to Rome. The MCP server is what helps you create determinism, or even skills. So you layer on skills, MCP, and CLI with a decent API, which is why I would love Proxmox to just bolster that: make it more expressive, give me more in there, better documented. Because if you give us those tools, we'll use your thing more. And I've got to imagine if I'm an enterprise buying a support license, that's how they make their money, right? Isn't that how Proxmox makes their money, support licenses?
Yeah, that's how most open source projects are making it. Now they're starting to do premium features. It seems like that's a huge trend now. It's like, oh, you can self-host the normal version, but then premium features. It's a combination of: I'm either gonna be your SaaS, I'm gonna give you extra features, and/or I'm gonna be your support.
Yeah.
That seems to be how most open source projects are monetizing now.
In large installations, I can see that. For me, I would buy a support license as a means to sustain, and just to get rid of that thing that pops up every single time, really. I would just be like, I would honestly give Proxmox a hundred bucks a year just to get rid of that, you know, realistically.
Yeah, I agree. I asked them one time, hey, do you have a home lab license? Do you have a cheaper version of the license that I could use so I could get legit updates versus bleeding edge updates? Because their price per core, or whatever it was, was kind of cost prohibitive for a home lab person. It's like five or six hundred bucks, might be more now, per year. And I'm like, yeah, that's a lot. Not a lot for enterprise, but a lot for Timmer-prise, you know? And they were like, don't worry about it. They were like, don't worry about it, just use our latest updates. Anyways, I say that because you can get rid of that nag screen really easily. I don't remember how to do it offhand. Oh man, since we're talking so much about Proxmox, have you heard about Proxmox Helper Scripts? No? I did a video on it, doesn't matter. You should check out Proxmox Helper Scripts.
I pay attention to your channel, Tim, and I missed this somehow.
That's all right, that's all right. No, it goes back to the algorithm. You know, it's fighting for ears and eyeballs, and Google says, hey, this does not deserve ears and eyeballs. And plus, dude, there's no way you can keep up. Anyways, that is a collection of tons of scripts, one-liners you can run to do anything you want to do on Proxmox, and all written in bash. There's one that's like get rid of the nag screen that you can run, but also the default script you run is so good. It'll get rid of nag screens, it'll remove the enterprise repositories. It'll do everything you want to do as a home labber, which is: give me bleeding edge updates, because that's all I can get; turn off nag screens, because I can't really afford a license; and disable some other things, because I'm never gonna use them. So really cool, really cool repository. It was actually made by someone who passed away, and he passed it on to the community. So really cool website. And a really awesome group of people.
What is it called again, one more time?
Proxmox Helper Scripts.
Okay.
Might be proxmoxhelperscripts.com. I'm not sure, but if you check it out, it's so awesome. I mean, it's worth looking at.
Proxmox Helper Scripts, looks like a github.io website. Easily search that on the web. We'll link it up in the shows, of course. I know we're getting close to our time, Tim. I'm happy to keep talking to you, but.
Oh yeah, go to view scripts. I mean, anything. So they basically built almost like an app store for LXC containers, which is pretty cool. Like, hey, do you want to install Home Assistant as an LXC container on your Proxmox? Yes. Click one shell script, you're done. Do you want Ollama with GPU enabled? Yes. Okay, run the shell script. Yeah, go to Proxmox VE Helper Scripts. It's community-scripts.github.io. And that's what they've done: they basically built almost like an app store for Proxmox, because they're able to either create an LXC for it or create a VM for you. Like databases: hey, do you want to run Postgres? Yes. You click this button, run the shell script, you know?
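For flavor, the helper scripts are invoked as one-liners pasted into the Proxmox node's shell. A hedged sketch of the pattern; the repository path below is from memory and may not match the current layout, so grab the real command from community-scripts.github.io:

```shell
#!/usr/bin/env sh
# Builds the raw-GitHub URL for a named helper script.
# The path segment "ct" (containers) is an assumption from memory.
script_url() {
  echo "https://github.com/community-scripts/ProxmoxVE/raw/main/ct/$1.sh"
}

# Typical invocation, run on the Proxmox host itself (commented out
# here so nothing is fetched over the network):
#   bash -c "$(curl -fsSL "$(script_url homeassistant)")"
script_url pihole
```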
And you're done. This is cool, man.
Yeah, it's really awesome. I did a video on it.
I had no idea this even existed.
Yeah, man, it deserves so much attention. And it's not building VMs, it's building LXCs for the most part, because that's what you want, right? You don't need a full-fat VM to run, you know, MariaDB. You just need MariaDB running with a little bit of storage and a little bit of RAM. Yeah, yeah. Like, think about setting up Bitwarden, how hard that is, you know, or Authelia. This does it for you with a very simple shell script.
Well, all right, then. I'm glad to talk to you, Tim.
Yeah, man, like Home Assistant, like, you want to run Home Assistant in LXC container, you just want to try it out, click the script, two seconds later, it's running. You don't like Home Assistant, delete it, you know?
Are you using this for those? You're probably not using this for those things; you're using it just for the simple things, it sounds like.
I am using it for some stuff. I used it for Pi-hole, set up Pi-hole as an LXC container, because I'm like, yeah, why not? Like, do I want to run, you know, sudo apt install and do all this stuff?
No.
Do I want to run it as a VM? No. So yeah, I've done it for certain things in my lab. Yep. Did I do it for Redis? I might not have done it for Redis, because I run Redis in a cluster. I do run Redis as an LXC, I do run it in a cluster, but I don't think I used their scripts. But anytime you want to test something out, I mean, this is almost faster than trying it with a Docker container, only because they fill out all the defaults for you. So you go from running the script to it actually running. The one challenge here, the big takeaway, is you don't run this script in a shell prompt. It's kind of weird. You have to run it from the Proxmox web terminal. That make sense?
You got to log into your web UI and open terminal.
I mean, things might have changed since then. But if you run it in your shell, you're not executing it as the right user, I think. And that might have changed, they might have made changes. But I know for sure if you run it in the terminal from the web, it works. So yeah, set up Grafana, set up Prometheus, it's all right there.
Wow.
Yeah, it's pretty awesome.
That is pretty cool. I didn't even know this existed. I've been using Proxmox forever. And learn something new every single day. What's left? What's left on your thought list? I know you made a list.
I did make a list.
Anything else on your list? It's like, man, dude, we can't end the show without talking about this.
No, a lot of it was just, you know, I don't want to say complaints, but, you know, just the state we're in with how expensive things are, nothing's available. But software is getting better, so that's a huge upside. And, you know, people are making awesome things using a lot of tools and getting ideas out of their heads. A lot of awesome self-hosted stuff in the open source world. I love it. I love it. I feel like I went from this drought of software a couple of years ago, where I'm like, yeah, you know, I've seen all the Docker containers people run at home, to now I'm like, oh my gosh, there's this whole new world of people building these new containers of things I can run that I've never even heard of before, you know, based on AI or maybe not. I feel so refreshed, because now I can try all these apps again. So yeah, for me, it's like server apps. Think of it like that.
So I think this year's going to be a wild year. I would definitely encourage you to unleash Claude on your UDM, whatever you might have. And just say like, help me just examine my VLAN scenario. Examine my rules and my profiles. And it might be like, you know what, Tim, you're pretty good. Or it might be like, you know what, Tim, I can help you here, you know?
It would probably say, like, why do you have these duplicate rules? There are probably rules in there that I never deleted. Like, I am not a firewall rule expert. I'll be the first to say it: I try it until it works. You know what I mean? It's always a guess for me.
Well, let me just say that I set up my VLANs based on your video and it was upset with me. Just saying.
It's operator error. Just saying.
Just saying. And I really didn't know much about them. I was like, I've been told, hey, if you run a network, you should have VLANs, because you want your kids to be on one thing, which I totally agree with. You know, you want your guests to be on another, and I totally agree with that, too. And then everything else is on this trusted network, which I totally agree with. But then all this intermingling, I'm like, well, just because it's an NVIDIA Shield, should I put it on my IoT thing? And the answer is no. The answer should be, it should be on trusted. It just needs access to too much, you know? And there's other things, the dumb things, that you definitely don't want on your trusted network; those go on the IoT VLAN. So the VLANs I for sure want are trusted, kids, IoT, and guests.
Yep. Yeah, that's kind of where I settled, too. But then I have all of my networking equipment on a separate one; it's on the default one. My trusted is its own VLAN. I do have one more: cameras. All my cameras go on one VLAN. But yes, I agree. You've got to think about, it's not necessarily what the device is, it's the role that it plays and how much you trust it. Role, yeah. And so some people might say, yeah, put my HomePod on IoT. I don't want my HomePod there just because it's IoT. Well, if you give it access to your schedule and you're telling it to turn on your lights in your home, I think you kind of trust that thing enough to put it on your trusted network. And so the way I think of it now is: what does it need access to? And how hard is it to cross the chasm? Like Home Assistant, I do trust that thing, but I don't want to write 1,000 firewall rules for it to go talk to everything in IoT. So I put it on IoT, because I don't want to do the opposite. So yeah, even my Xbox for a while, I was like, die hard: no, you are IoT. But I'm just like, you know, I put my password into this thing and I play games. It's going to go on my trusted network, because I don't want to cross the network just to do other things, you know?
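The "role, not device" rule of thumb from this exchange can be written down as a tiny lookup; the VLAN IDs are invented for illustration, and the roles mirror the ones discussed above:

```shell
#!/usr/bin/env sh
# Map a trust role (not a device type) to a VLAN ID.
# IDs are illustrative; pick your own numbering plan.
vlan_for() {
  case "$1" in
    trusted) echo 10 ;;
    kids)    echo 20 ;;
    iot)     echo 30 ;;
    guests)  echo 40 ;;
    cameras) echo 50 ;;
    *) echo "unknown role: $1" >&2; return 1 ;;
  esac
}

# The HomePod question becomes: which *role* does it play?
vlan_for trusted
```

The point of the function shape: when you debate where the Xbox goes, you argue about the role label, and the VLAN ID follows automatically.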
I think you just got to worry about if that thing ever gets circumvented. That's the main thing.
Yeah, yeah, it's all about limiting your blast radius, limiting your blast radius and how comfortable you are with that, you know?
Yeah. I think this year, though, is the year of AI in the home lab. I know it has been already for me. And I do lots of stuff across machines with Claude; I don't just do it on the single machine I'm on. I'm not only using it to build software or to scratch little itches and stuff like that. It really is like the moment, for example, when PxM gives me that IP address back, I take that info report it gives me, which I've designed to be agent-friendly. In that case, I'm the CLI or I'm the API. I copy and paste it into the agent and say, here you go, here's your machine. And then it logs in, because it's got my SSH key. And it's like, okay, sweet, it's a brand new base image of Ubuntu, let me build your world here for you. Here's the bash script we want to write. It stores it in Git in the repository, in maybe a deploy file or deploy directory. And we always make it idempotent, so that way if we want to rerun it, or if it needs a different one, maybe a post- or pre-install, who knows what. But that thing has just been so cool to just unleash like that. So I think this year will probably be your year too of AI in your home lab. And that's kind of fun. New worlds, new capabilities, Tim.
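A minimal sketch of the idempotent deploy script idea Adam describes, safe to re-run against a fresh VM or a half-configured one; the directory and the env line are invented examples:

```shell
#!/usr/bin/env sh
# Idempotent building blocks: running this twice leaves the same
# state as running it once.
set -eu

DEPLOY_DIR=${DEPLOY_DIR:-$(mktemp -d)}

ensure_dir() {
  mkdir -p "$1"   # no-op if the directory already exists
}

ensure_line() {
  # Append a line to a file only if it is not already present verbatim.
  file=$1; line=$2
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

ensure_dir "$DEPLOY_DIR"
ensure_line "$DEPLOY_DIR/env" "EDITOR=vim"
ensure_line "$DEPLOY_DIR/env" "EDITOR=vim"   # second call changes nothing
```

Every step is phrased as "ensure state X" rather than "do action Y", which is what lets the agent rerun the whole file without thinking.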
Yeah, I definitely need to unleash some agents here and try some of that too. I've been, you know, I do run a ton of AI stuff and I've been doing a lot of LLM stuff especially. But agents, yeah, it would be cool to turn some stuff loose in my Proxmox cluster and just say, go build some stuff rather than using an Ansible playbook. Just go for it.
Rather than one year later, Tim, we should talk in three months. My prescription, if I'm your doctor, Dr. Stacoviak here, okay, is: go unleash Claude on your home lab network, do some cool stuff, and come back in three months and tell me some tales. Okay? Because I guarantee you, I guarantee you'll come back a whole different Tim.
I bet. You know, it's just like, again, a lot of people are probably going through this: I've had the tools that I've used, and I've designed the tools, and I know they work, and I'm comfortable with them, and I write CI/CD pipelines. But maybe I should just give that to AI and tell it the outcome I want, and not worry so much about how it gets there.
Especially on the low-stakes stuff. You know, if you just got this little thing you want to do, what's the harm? You wouldn't have written it anyways. Who cares about the code? Why do you even need code review, dude? All you need is a code viewer. Your VS Code, that stands for code viewer now. I'm just kidding, I'm just kidding, I'm just kidding.
It stands for continue, continue, continue.
Right? Yeah, exactly. Well, I mean, you can also automate a lot of that stuff too, to be in YOLO mode. So a lot of people will say YOLO mode for some things. That's kind of Ralph Wiggum, that loop there. The Ralph loop, if this is your first time hearing about it, Tim, you're going to hear a lot more about it soon, because it's the beginning of what's going to come when it comes to well-engineered loops. It doesn't mean you have to use it as an engineer; you just have to be an engineer: be a good engineer with a good to-do list or a good spec, and unleash Ralph, that Ralph loop, on it. And, you know, if the code works and it passed tests and these different things around security, well then who cares, really, in the end, whether it's the best idiomatic Go? I mean, I kind of do if I'm maintaining this thing. But if the agent's maintaining it, and I didn't have the software yesterday, I need it today, and now it's here and it solves my problem, and I don't have the time to maintain it anyways... it's like, if the tree falls in the woods and nobody hears it, it's that whole thing, you know? Doesn't really matter.
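For anyone hearing "the Ralph loop" for the first time, the shape of it is just: hand the agent a spec, run the deterministic test gate, repeat until green or you give up. A sketch, where `run_agent` is a placeholder; wiring in an actual `claude` or OpenCode invocation (and any YOLO-mode permission flags) is left to your agent's own docs:

```shell
#!/usr/bin/env sh
# The Ralph loop, schematically: agent pass, then test gate, repeat.
run_agent() { :; }             # placeholder for your coding agent call
run_tests() { sh ./test.sh; }  # your project's pass/fail gate

ralph_loop() {
  max=$1 i=0
  while [ "$i" -lt "$max" ]; do
    run_agent
    if run_tests; then
      echo "passing after $((i + 1)) iteration(s)"
      return 0
    fi
    i=$((i + 1))
  done
  echo "gave up after $max iterations"
  return 1
}
```

The spec and the tests are the engineering; the loop itself is dumb on purpose.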
Well, kind of related to that, one thing I did learn, too, after having AI write some code, is that having it write tests too is super important, I've noticed. Yes. So I've noticed, anything I want to keep, have it write tests, because not only does it prove that it works, it's a good hint for it to understand how the code works. Just like humans: when I review someone's code, I'm like, where are the tests? Not because I'm asking where your tests are, but because from your tests I can kind of figure out what you were thinking when you wrote this and what you're trying to do. So I've noticed that too: anything you care about that you have AI writing, have it write some tests too. It costs you what? Some tokens, not really any brain power. But tell it to do it, because it's good. It just helps your agent understand in the future what it was doing before.
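Tim's point about tests doubling as documentation of intent, in miniature; `slugify` and its expected behavior are invented for the example:

```shell
#!/usr/bin/env sh
# A tiny function plus the test that explains what it is *for*.
slugify() {
  # lowercase, spaces become dashes
  printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
}

# The test is the intent: a reader (human or agent) learns the
# contract without reading the pipeline above.
test_slugify() {
  [ "$(slugify 'Techno Tim')" = "techno-tim" ] && echo OK
}
```

An agent that later touches `slugify` can rerun `test_slugify` and know immediately whether it broke the contract.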
YouTube.com, what is your, what is your, do you say what your YouTube is out loud? Can you, is that-
It's just TechnoTim. Usually, yeah, people just Google TechnoTim. It's TechnoTim.com now, huge, huge change. No more TechnoTim.live; it was that, you know, until I paid the squatter. Now I'm big time, I have a .com now. I had to put tons of redirects in place. Redirects everywhere. Actually, I did most of them on the edge, on Cloudflare, but then I had to find my old links. Then I have link shorteners, then my email, and then I had to set up aliases and cut that over, and a lot of DNS stuff. I actually did it in about a day. I had help from AI. So it wasn't that difficult, it was just a lot of things to remember. A lot of things to remember to do. And still, I think I broke something, but at least I didn't lock myself out of my own email. So if you ever change domains, it's a lot. Do it as early as possible, or never at all.
My gosh, it's like renaming something. It's the worst. Yeah, it's just the worst. Pick a good name from the beginning. Do what you can to never have to change your domain, ever, ever, ever.
Yeah, yeah, yeah. Never name your business after a street or a product, unless it's like Main Street. I notice that with so many local businesses. It's like, we're Whatever Boulevard, but they're not on Whatever Boulevard anymore. It's like, oh, wait.
That doesn't make any sense, yeah. We actually have, here in Dripping Springs, a Mercer dance hall, and it used to be on Mercer. And it's not anymore, because Mercer's real estate got more expensive. Now they're on, like, Route 12. Yeah, see, exactly. You're not Mercer dance hall anymore. It's like, where do you go to Mercer dance hall? Route 12. Everybody knows Mercer Street here. It's like, well, I'm here at Mercer, where's the dance hall? It ain't there anymore. It's somewhere else.
Yeah. That, and a product. Or like a price, you know: dollar store. They're not even the dollar store anymore. It's like the $1.25 store, you know, but they're still the dollar store. That one's kind of generic, but, you know, if it was like Everything's a Dollar, or Everything's $5 and it goes up to 10, then you're in big trouble.
Well, there was another store called something five.
Oh, a five below?
Five below. Yeah. You can go in there and buy things above five bucks.
That is right. What is up with it?
Now I get it. The color is blue and they make it like it's cold, but.
That's right. Like it's freezing.
Double entendre is a single entendre now. You know, like, come on.
That is right. Yeah. Yeah. Yeah. You better keep a five below in there, you know, temperature wise because, you know, not everything's below $5.
That's right. That's right. Well, everyone, technotim.com, check that out. Thank you, Tim, for just exploring this fun world of Home Lab with me every year. But my prescription is go away and instead of coming back a year later, come back in three months and tell me about your new world. I want to see Tim's new world in three months when you just unleash AI, even more so, these agents on your Home Lab.
I'm here for it, man. I love talking to you too. It's always a pleasure. You give me so many ideas. And now I think I have a lot of ideas. I want to do something right after this next video. But yeah, I'm excited. I'm glad to be here. And it's always nice talking to you, man.
Yeah, same. Same, Tim. Same, Tim. Good seeing you. Glad you're well. Bye, y'all. Bye, friends.
Bye, friends.
A new year, fun time with Techno Tim. Always fun digging in and talking about the future of home lab with Tim every single year. Hope you enjoyed the show. This is the year of software: more software getting built, more software developers coming in, more people building more software. More, more, more. I don't know about you, but my home lab is super active. My Proxmox is basically on fire over there. It's warm in my office because my stuff is always going so hot. It's kind of fun. I want to hear from you, though. What's your home lab like? Hang out at my new fun hangout, howitworks.club. That's a fun place to be. If you're not there, you're wrong. You should check it out: howitworks.club. And of course, hang out in Zulip: changelog.com/community. It's free to join. I'm there, you're there. Well, you gotta be there. I mean, go to that URL and sign up. It's totally free. And hang out in our Home Lab channel and talk home lab tech with us, because that's fun. We have some awesome sponsors: depot.dev, our friends over at Tiger Data, and our friends over at Notion. depot.dev, tigerdata.com, and notion.com/changelog. Those are fun URLs to go to. Check them out; they love us, they support us. And of course, to our friends, our partners, our hosts, where we host our sprites, our machines, our apps, our everything: fly.io. If you're not using Fly, well, that's just sad, man. Again, fly.io. And to the beat freak in residence, Breakmaster Cylinder: the banging beats keep flowing. Love those beats, and I love Breakmaster. All right, friends, show's done. We'll see you next week. Changelog++! It's better.