Thread #108567405
File: enchanted.jpg (988.5 KB)
Enchanted edition
previous: >>108509526
READ THE (temp)WIKI! & help by contributing:
https://igwiki.lyci.de/wiki/Home_server
/hsg/ is about learning and expanding your horizons. Know all about NAS? Learn virtualization. Spun up some VMs? Learn about networking by standing up an OPNsense/pfSense box and configuring some VLANs. There's always more to learn and chances to grow. Think you're god-tier already? Set up OpenStack and report back.
>What software should I run?
Install Gentoo. Or whatever flavor of *nix is best for the job or most comfy for you. Jellyfin/Emby/Plex to replace Netflix, Nextcloud to replace Google, Ampache/Navidrome to replace Spotify, the list goes on. Look at the awesome-selfhosted list and ask.
>Why should I have a home server?
De-botnet your life. Learn something new. Serving applications to yourself, your family, and your frens feels good. Put your tech skills to good use for yourself and those close to you. Store their data with proper availability, redundancy, and backups, and serve it back to them with a /comfy/, easy-to-use interface.
>Links & resources
Cool stuff to host: https://github.com/awesome-selfhosted/awesome-selfhosted
https://reddit.com/r/datahoarder
https://www.reddit.com/r/homelab/wiki/index
https://wiki.debian.org/FreedomBox/Features
ARM-based SBCs: https://docs.google.com/spreadsheets/d/1PGaVu0sPBEy5GgLM8N-CvHB2FESdlfBOdQKqLziJLhQ
Low-power x86 systems: https://docs.google.com/spreadsheets/d/1LHvT2fRp7I6Hf18LcSzsNnjp10VI-odvwZpQZKv_NCI
SFF cases: https://docs.google.com/spreadsheets/d/1AddRvGWJ_f4B6UC7_IftDiVudVc8CJ8sxLUqlxVsCz4/
Cheap disks: https://shucks.top/ https://diskprices.com/
PCIE info: https://files.catbox.moe/id6o0n.pdf
>i226-V NICs are bad for servers
>For more SATA ports, use PCIe SAS HBAs in IT mode
WiFi fixing: pastebin.com/raw/vXJ2PZxn
Cockpit is nice for remote administration
Remember:
RAID protects you from DOWNTIME
BACKUPS protect you from DATA LOSS
321 Replies
>>
are hard drives cheap yet?
also where do i go to find stuff to buy? i assume just buying the cheapest option on https://shucks.top/ or https://diskprices.com/ is a bad idea
>>
>>
>>
>>108567605
idk but seeing shit like this
https://www.ebay.com/itm/227292254831
which was $99 last summer, is kinda sad, also gay
>>
anyone here understand networking? I switched to my own router recently and had some weird shit happen. It's fixed now, but I want someone to explain to me how, because my knowledge of PPP is not enough and I want to improve it.
I'm in Europe, on home fiber. My ISP normally gives everyone a chinese combo-router with a built-in ONT, but it has proprietary firmware with no admin access for the end user. I told them I want to use my own router, and the process they gave me is: get a router that can tag traffic with a VLAN, set your internet traffic to use a specific VLAN ID, use the PPPoE creds from your contract, and we'll send a technician to install a standalone ONT that you'll plug your router into.
So far so good. I set it up, the technician comes in, we plug everything in, but I have no internet access. I look at the syslog on the router - it manages to complete discovery (PADI, PADO back, PADR, PADS back) with the ISP's POP, but fails CHAP auth. We double and triple check the creds, check the VLAN ID, they are correct. Then the technician makes a call to someone on their end, reads them the MAC on the ONT (not my router!), they do something, and magically CHAP works.
I have two questions. First, how in fuck was doing something related to ONT relevant to CHAP auth in PPP? I understand that ONT may need to be whitelisted so that people don't tap into the fiber, but if it was not, how was I even able to reach the ISP's POP with my PADI packets and get PADO back? And if it's point to point, how can their auth server tell anything about the ONT my connection is coming from? That info is probably not in the PADI packet, right?
Second, I looked on my local forums and people who do the same process with this ISP all get the same VLAN ID to tag their traffic with. So this is not about some kind of geographic segmentation. Then why does the ISP require this at all?
>>
>>
>>
>>108567405
Hot bitch
>>
>>
>>
>>
>>
>>
>>
>>
File: 59f68297-a909-4580-ad20-550f5fc7a140-490885143.jpg (174.1 KB)
will having a 24/7 TempleOS virtual machine running on my server protect me from hard drives corrupting and my power from outages?
>>
>>
>>108568597
>>108568511
What server does she use? Old Dell serverz?
>>
>>
>>
>>
File: 1769011157926906.jpg (5.8 KB)
As if local hard drive prices here are not bad enough
>>
i'm looking for a web application that will control my music library on the host machine. i have a rpi hooked up to my stereo with some storage that has all my music, so i'd like to be able to output music on my rpi, rather than stream it to a client machine.
anyone done something similar? what project do you use?
>>
>>
>>
>>
>>
>>
>>
>>
>>108571702
mpd is perfect for your use case! You install mpd on both the music host and client. You have the host mpd manage the database, tags, etc. The client mpd uses the host mpd as a proxy to the database. You mount the host music directory via nfs. ezpz
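A minimal sketch of the client side, assuming the host mpd answers on music-host.lan:6600 and its library is NFS-mounted at /mnt/music (hostname and paths are placeholders):

```
# client /etc/mpd.conf -- the database comes from the host mpd via the
# "proxy" plugin; the audio files themselves come in over the NFS mount
music_directory "/mnt/music"
database {
    plugin "proxy"
    host   "music-host.lan"
    port   "6600"
}
```

The host side is just a normal mpd config with its own music_directory and db_file.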
>>
>>108567955
>how was I even able to reach the ISP's POP with my PADI packets and get PADO back?
They might allow unwhitelisted ONTs to come up in a limited state in order to provision them remotely and/or get extra logging from their BNG on auth attempts. As for how the auth side can tell which ONT you're behind: the OLT can run a PPPoE intermediate agent that stamps a circuit-ID/remote-ID tag onto your PADI/PADR as it passes through, so the BNG sees it even though your router never sent it.
>people who do the same process with this ISP all get the same VLAN ID to tag their traffic with
I don't think they stretch a layer 2 domain across all customers who get the VLAN ID. The segment is only locally significant up to an aggregation switch or something like that. QinQ is always an option, so they can safely have overlaps if they want.
>>
>>
>>108573058
they do by default because the default vlan is 1. if you don't set up vlans then it will be exposed, but when you properly configure them it's not. that anon is a noob
t. tplink managed switch owner
>>
>>
>>
>>
>>
>>
>>
>want to use Ansible
>some modules need pip
>they don't use a venv, and pollute your Python install
Can I make Ansible use a venv? Or, alternative question: are these modules that use pip rare enough that I just shouldn't worry about it?
>>
>>108573961
sorry, if i'm going to have to deal with an out-of-tree fs, i'm not choosing the one written almost exclusively by one autist who can't get along with others or follow instructions. i don't care about the drama, i care about what it means for me as a user.
>>
>>
>>
>>
>>
Looking for a music server to host. Requirements:
- web gui
- metadata editor in web gui
- good clients available for desktop linux and ios with caching for offline usage
Does such a thing exist? I'm on Subsonic and this shit sucks, esp. linux desktop client situation
>>
>>108573058
for 60 you can probably get one that works correctly, i was talking about the really really cheap ones
>>108573143
unless they changed something recently those $20 ones do have it hardcoded to be open to every vlan. i did a ton of research about this recently on forums because i needed a bunch of cheap switches for something
>>
>>
>>
>>
>>
>>108574369
https://www.linuxserver.io
sorry, i assumed you knew what it was.
>>
>>
>>
>>108574267
>unless they changed something recently those $20 ones do have it hardcoded to be open to every vlan. i did a ton of research about this recently on forums because i needed a bunch of cheap switches for something
My guess is they do that to avoid support calls from people accidentally locking themselves out of the switch.
>>
>>
File: Elunecosplay-5r70ceq5csye1.jpg (249.5 KB)
>>108567405
Holy slampig~
>>
>>
>>
>>
File: 1767347385874952.png (196.2 KB)
>>108577008
>>108577393
>This is the way
>>
File: 1769639546381196.png (195.9 KB)
>>108577491
Those big momentary red spikes are when I kexec into a new kernel by the way.
Kexec is pretty cool. It boots the new kernel image so fast that nobody even notices the router restarted.
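For anyone wondering, the whole dance is roughly this (the kernel/initrd paths are placeholders for whatever your distro uses):

```
# stage the new kernel in memory, reusing the currently running cmdline
kexec -l /boot/vmlinuz-new --initrd=/boot/initrd-new --reuse-cmdline
# then jump straight into it, skipping firmware/POST entirely
systemctl kexec    # or `kexec -e` if you don't care about a clean shutdown
```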
>>
I'm not a home server/home lab guy, but I'm trying to find a use for this m4 mac mini I have. I'm thinking of having it run 24/7 since it's low power, buying like a 4-bay storage rack, and just enabling SMB on it. I wouldn't get any sync capabilities, but it should be just that simple, right? I could then just move files to it from my desktop and AirDrop pictures from my iphone to it, right?
>>
Serious question, why do people put pfSense/OPNsense on small PCs when you can put OpenWRT on a dedicated router? Is it just for learning purposes, as in "i can make a router from scratch", or are there other benefits?
>>
>>108577766
Because somebody told them BSD firewall/router/gateway appliances are good.
You can put OpenWRT on a PC too like I did:
>>108577491
>>108577659
It is way better for an enthusiast. I have LXC and Docker working on this thing too, so I'm not bound by the OpenWRT repos.
>>
>>
>>
>>108577867
It has character and that's a charm point for me. Maybe I'm just a masochist but I enjoy experimenting with different things on my router.
If you want something more "enterprisey" that runs Linux then there are things like VyOS. I like OpenWRT though, for me it's just so hackable, complete freedom to do what I want on it because even though it's a bit quirky it is still Linux at the end of the day and I know Linux.
Linux is love
Linux is life
>>
>>
>>108577967
Pick something with better hardware. You can put OpenWRT on a PC if you want. I don't know why people automatically think of the shittest MIPS devices, etc, when they think of OpenWRT. It can run on more powerful devices too, and the fact that they target these minimal systems is a plus-point for the desktop too because it means it isn't going to eat up fucktons of RAM and CPU so you have that spare for your DPI and TLS introspection or whatever crazy thing you're doing.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>108579252
money, I'm buying renewed drives to save cash. I really wanted to get a 4-bay DH4300, and in my ideal world I'd have 4 x 12tb new seagate drives. But I'm poor; I had $300 in amazon credit and $500 on hand, so this is what I could afford.
>>108579282
use case? Just a place to dump torrented games and comic books. I also have a little mini-pc I want to set jellyfin up on, but honestly I don't really watch movies/tv; I just think it'd be cool to have a way for my mom to watch some random shit she wants on her ipad
>>
>>
>>108579335
Hmm, that does give me pause. The reviews on Amazon were generally positive. How fast are they at replacing them? I'm fine with going through the warranty process, and I'm hoping to eventually afford actual new drives, but if the failure rate is that high...
>>
>>108579342
turnaround is surprisingly fast, about a week, maybe 2, but I'm kinda out in the boonies on the east coast and the vendor is in california, so I'd say it's pretty good
I set them up in raidz2 as well, so as long as I don't have 3 fail simultaneously I don't lose data. so far I've been fine.
it also might have been an issue on my end. my old server was an old dell poweredge t310 and the PSU was like 300w total. it finally failed recently and I moved to a new case and a 650w PSU. I find it hard to imagine they'd be offering a 5 year warranty if the failure rate was usually that high.
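For reference, the raidz2 math that anon is relying on, as a quick sketch:

```shell
# raidz2 spends 2 disks' worth of space on parity: usable = (disks - parity) * size.
# the pool survives any 2 simultaneous failures; a 3rd concurrent failure loses data.
disks=7; size_tb=18; parity=2
usable=$(( (disks - parity) * size_tb ))
echo "${usable} TB usable"    # 90 TB usable
```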
>>
>>
File: 20260410_160511.jpg (634.6 KB)
>>108579219
No reason to buy mystery meat mdd drives when you can just buy recertified drives off ebay from the manufacturer's store for the same or lower price.
>>
got my old 2500k build back from my brother, turns out it was still running a pre-UEFI bios
gigabyte qflash wouldn't detect my USB drive so I ended up running their bios update .exe through freeDOS
worked like a charm, running memtest86+ now.
it had an issue before where it would occasionally lose video output but I think that's due to the aging r9 290x GPU. I pulled that and am running off the iGPU and it's working fine so far.
prolly gonna put it in my truenas system and move my 8700k to a dedicated jellyfin box.
>>
>>
>>
>>
>>
>>
>>108580629
Don't know. I think they have various different versions.
If you're buying an actual add-in card I'd probably just get a 10-gigabit NIC instead.
The trouble with most of these NICs is they're Chinese crap that aren't official.
>>
>>108580651
The on-board networking I have in:
>>108577491
>>108577659
Is a: 06:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
Never had a single issue with it. I'm not doing virtualisation in Proxmox or running some BSD thing though.
The eth1 is an Aquantia 10-gigabit card for LAN. If I ever upgraded my Internet to more than 2.5 gig I'd swap that 10-gig card for one with two ports (instead of the single port that goes straight to my managed switch right now) and stop using the on-board Intel.
>>
>>108579647
>discovers a critical data corruption bug in his filesystem
>submits a patch after the merge window
>said patch also includes a rewrite and new features, which are not allowed during the stabilization phase
>kernel devs get mad because it's not the first time he pulls shit like this
>Linus refuses to merge
>Kent says Linus doesn't care about data safety, arguments ensue
>Linus tells him to fuck off
>all bcachefs code is stripped out from the kernel
>>
>>108580803
I think the question of who is right depends on whether his patch required the other changes. If it did, then getting it out should be the first priority, regardless of bureaucracy. If it didn't, and he could have just split it up into a separate PR, then yes, he was being autistic. Either way it seems like the project was doing better after becoming an out of tree module.
>>
>>108580876
It's literally just because like many developers nowadays he hates the mailing list so pulls just come in the form of:
>Please pull my Gitlab tree with all of the shit I've worked on
Fine for out-of-tree development, but not Linux's culture.
>>
>>108574092
Presumably, you can't fail to authenticate indefinitely and there's some sort of session limiter. DDoS is unlikely because someone would need to hijack a lot of ONTs and initiate an attack that would also cut off internet access.
>>
>>
>>
>>
>>108580651
>>108580682
Isn't 10GbE gonna need some cooling and waste 20-30W at idle? That sounds like overkill for my use case.
>>
>>
>>
>>108581446
He believes his custom LLM is fully sentient, a female, and his girlfriend.
https://poc.bcachefs.org/
>>
>>
>>
>>
>>108582126
I have TrueNAS CORE on my box. It's off right now. I'm gonna add some more drives and then install FreeBSD on it. I find TrueNAS's interface a bit confusing anyway, even with their guides. I think I prefer managing it through the command line.
>>
>>
>>
>>108582784
Lots of kernel developers do that nowadays and Linus has no problem with it. it's just that Kent is a retard that doesn't manage his tree properly. It's his project and he's the sole developer, so he just pushes all the shit to master rather than breaking it out properly.
A bug fix for something should literally be:
>Checkout tag X.X as fix-X.X-blah-blah
>Apply patch that only does that
>Submit upstream
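That workflow as a toy repo you can actually run (tag name, file, and commit messages are made up for illustration):

```shell
# build a throwaway repo with a "stable" tag to branch the fix from
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email anon@example.com && git config user.name anon
git commit -q --allow-empty -m "release" && git tag v1.2

# branch from the tag, not from your messy dev tree
git checkout -qb fix-v1.2-corruption v1.2

# the commit contains only the fix, nothing else
echo 'journal replay fix' > fs.c && git add fs.c
git commit -qm "fix: journal replay corruption, nothing else"

# export just that one patch for upstream submission
git format-patch v1.2 --stdout > fix.patch
```

The point is that fix.patch now contains exactly one commit on top of the tagged release, with no rewrites or new features riding along.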
>>
>want to expand my storage
>hdd prices are $500 and up
Guess I'm not buying anytime soon. It feels like computer hardware is now reserved for the elite. When your money is so worthless that someone can just keep spending it infinitely, I'm surprised RAM and HDD prices aren't in the tens of thousands or millions.
>>
>>
>>
>>
>>
File: nothingtolose.jpg (55.6 KB)
>got an HSM
>ended up never running an internal CA anyway
I am a buffoon.
>>
>>108567405
>Think you’re god-tier already? Setup OpenStack and report back.
Is OpenStack particularly difficult? What's hard about it?
I don't really understand in what world any home server could need it either.
Please, OpenStack users, report in.
>>
>>108567605
>are hard drives cheap yet?
No, they aren't. To me this feels like the peak of how bad things are going to be. Secondhand resellers are completely sold out around me. May vary by location.
>also where do i go to find stuff to buy?
In my opinion, right now, waitfagging is king.
>>
>>108583361
Feels like my friend living in Brazil. Computer equipment is very expensive; it takes people one to two years of savings to finally buy a decent PC there. Except now it's hitting everyone globally. I can't even begin to imagine the price for them now.
>>
>>108583378
LTT did a very interesting piece on the sort of things people in Brazil end up with:
https://www.youtube.com/watch?v=hWFDvZ29MCA
It's still expensive for them too, but there are all these weird off-brands you've never heard of because of how much the premium components get taxed.
>>
>>
>>108579219
i bought 9 16tb mdd drives off amazon a couple of years ago. 0 failures so far.
>>108579957
with a 5 year warranty?
>>108581316
latest chips are better
>>108583402
sorry, i don't give ltt views.
>>
>>108584175
>sorry, i don'tg give ltt views.
For (You):
https://gofile.io/d/Xjmwnh
Now you don't have to give him a view
>>
File: 1773472451187800.png (393.6 KB)
Can anyone explain filesystems that work over the network please? I think I'm missing something important - everything seems to be either nfs under the hood or s3 anyway.
And what's actually the purpose of s3 on-prem, if most services want file-level access? I can't set up rsyslog or a db to use s3, can I?
>>
>>108584425
>And what's actually the purpose of s3 on-prem
People run things like Minio. It's for when you want object storage: your app just curls some API and boom, you've uploaded a file to a bucket somewhere to be served immediately.
>>
>>
>>
>>
>>
>>
How much should I worry about a docker container that is run by the host as root and has the container user as root? It's not running with --privileged so it doesn't have, like, mounting capabilities, and the only external mount is a ro folder with some files in a scripting language for a weird computer algebra system thing. I really wish I could make the container user nonroot, but I couldn't get the dumb program to work without it
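If you ever take another crack at de-rooting it, the usual lockdown knobs look something like this compose sketch (image name and paths are made up; the app has to tolerate whatever uid you pick):

```yaml
services:
  cas:
    image: some-cas-app            # placeholder image name
    user: "1000:1000"              # non-root inside the container
    read_only: true                # immutable root filesystem
    cap_drop: [ALL]                # drop every capability
    security_opt:
      - no-new-privileges:true     # block setuid escalation
    volumes:
      - /srv/cas/scripts:/scripts:ro
```

Until then, root-in-container without --privileged mostly comes down to trusting the kernel's namespace isolation.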
>>
>>
>>
>>108585385
It's the build system that's Gentoo. You wouldn't notice it as a user. Although they're merging ChromeOS into Android now so it'll probably eventually lose that identity and just be another non-GNU operating system.
>>
File: 1748270789216456.png (231.3 KB)
>>108583001
>drives working now will eventually fail, then what
buy the same model on amazon and return your old one
>>
File: 1774185761104053.jpg (161.8 KB)
>>108583835
>i love this feeling.
its a love/hate relationship
>>
>>
File: images(5).jpg (4.3 KB)
>build home server pre memory shortage
>fall for the 128GB of ram meme
>>
>>
>>
File: Screenshot_20260411-192541_Firefox.png (22.3 KB)
>>108585947
It isn't enough in 2026
>>
>>
>>
>>108567405
so I have a few questions
I have proxmox set up and I want to access certain servers from the internet. I found pangolin is the best option now, as cloudflare tunnel is limited. My question from reading about it: do I have to install pangolin on a vps that tunnels to my proxmox through newt? I thought it would be installed on my proxmox directly
other question: what's the difference (and I mean in terms of security, resources, best approach, etc) between installing servers directly in proxmox (docker/lxc) and using one of those platforms that give you that option within proxmox, like runtipi/yunohost/cosmo? has anyone tried both approaches?
>>
>>
>>108586204
I have no idea what this pangolin thing is, but if you control both ends just configure a Wireguard tunnel, allow all routes through it, then configure the firewall on either end and be done with it.
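The VPS end is about ten lines of config (keys and IPs here are obviously placeholders); the home side mirrors it and adds PersistentKeepalive = 25 so the NAT mapping stays open:

```
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the proxmox box at home, behind NAT
PublicKey  = <home-box-public-key>
AllowedIPs = 10.8.0.2/32
```

Then `wg-quick up wg0` on both sides and firewall the tunnel interface like any other.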
>>
>>
>>
>>
>>
>>108586481
i like the beelink version, but i haven't kept up on stock of any of them. i think these days you just have to see what you can get. i can tell you that Qwen3.5 using about 90-100GB of RAM/VRAM is decent.
>>
>>108586481
i've been running the framework one, but i just saw the price increased by 1k kek
i can't complain, though it's kind of slow with dense LLMs since memory bandwidth isn't high enough, but this applies to all of these "ai" mini pcs
considering the beelink one can go up to 140w, i'd assume they clock it higher than the other ones so that might be your best option
>>
>>
>>
>>
>start qbittorrent
>shutdown and restart it before it's finished checking resume status on all torrents
>forces recheck on next startup
I hate this shit
I have 2300 torrents across like 30TB+ of data
it's gonna take days
>>
>>
>>
File: 1748810939303444.png (40.7 KB)
40.7 KB PNG
>>108585755
>>108585947
>>108586354
is a p40 good enough? I had a child worker that inadvertently had concurrency unset, resulting in it spawning idle processes up to (but not exceeding) the thread count. since setting it to 8 it eats 10GB less ram than it used to (not all by itself, since that container was always limited to 8gb, but i assume having fewer processes brought other containers out of a race condition, or something. idk how else to explain the math)
>>
>>
>>108588952
If they're doing some sort of security research they'd want it. But it will break TLS / SSL and a lot of modern smart shit won't let you install the self-signed certs you need to stop them from complaining.
Passive inspection is better.
>>
>>
>>
File: 1751089610533149.jpg (901.6 KB)
New to this whole home server thing, am I supposed to get patch cords/ready-made cables, make cables myself, or both?
>>
>>
>>
>>108591519
Always buy. Crimping as a beginner is frustrating. Crimping at any level of experience is error-prone if you don't own cable certification equipment, which isn't cheap at all. The amount of issues bad cabling can and will cause is unbelievable, and certain individuals, many of whom should know better, spend crazy amounts of time troubleshooting issues at high layers of the OSI model when the root cause is some dumbfuck making his own patch cords and mucking with the wire pairs.
>>
>>
>>108591600
>>108591651
>>108591702
>>108591771
Thanks for the answers anons, but I was wondering what exactly you mean by certified. How do I know which cheap cat6a/cat8 cables are certified on Amazon, or am I looking in the wrong place?
>>
File: 1745138970386016.jpg (147.8 KB)
>>108591651
>If you care about uptime you get certified cables
i think if you cared about uptime you'd get a UPS or something
>>
>>108591963
don't buy cheap amazon cables. i still do it, but they fail all the time. mostly the plastic clips. buy from someone like cdw or infinitecables. don't buy a lot until you test the brand and are happy. certified means they've been through a cable tester that verifies that the cable can actually transmit what they're claiming, or that's what it should mean.
>>
>>
>>
>>
>>
Anons, complete newfag retard here. Aside from the wiki in the OP, where can a beginner like me read more about this? Saw a video and wanted to dive into it, both technically and on the setup side, but like I said I'm too much of a retard. I really really want to learn this - any good sources?
>>
>>108592919
i'm running qbittorrent on another PC and using the NAS as an SMB share, so it's bottlenecked by the gigabit ethernet interface
I just realized I could use the "force start" option to make it check the smaller torrents first. currently down to 1004 torrents remaining to be checked, working on the sub-1GB torrents now
>>
>>
>>108592990
also, I've discovered a new strange issue
my truenas SCALE box has two shared drives, a 7x18TB raidz2 HDD array, and a single 4TB nvme drive
HDDs are connected to a dell PERC HBA, nvme is using one of 2 slots on my motherboard. motherboard manual says the nvme slots "share lanes" with SATA1/2 and 5/6 respectively
reading/writing data from my main PC (2.5GbE) to either my 7x18TB raidz2 array OR the nvme drive gives a consistent ~113MB/s throughput, maxing out the gigabit ethernet on the truenas box (and I can see it reported as network traffic in windows task manager)
however, copying data directly from the nvme to the HDDs or vice versa gives highly inconsistent speeds, averaging ~40MB/s but occasionally dropping to 0-5MB/s (and much more rarely jumping close to 200MB/s). task manager shows zero network traffic, as does the truenas status page.
I'm assuming this has something to do with the mobo "shared lanes" thing, but I have no idea why.
I have a couple nvme pcie expansion cards, so I'm gonna try putting the nvme in one of those and see if it makes any difference, but don't wanna shut down at the moment.
>>
>>
>>
>>
>>
>>
>>108586721 (me)
used SAS drives from data centers are being sold in bulk for significantly cheaper than NAS drives. should I buy an HBA card and a SAS drive, or will it be worse than using a sata NAS drive?
also i'm using an office pc.
>>
>>
>>108594471
do I need to worry about my power supply and cooling not being enough? i've heard HBA cards can get hot and draw extra power, and it might overheat since i'm using an office pc and not a server rig.
if it wasn't already obvious, i'm a newbie, so i'm assuming these are stupid questions
>>
>>
>>108574024
I'm pretty sure there should be input params available to specify a venv. From a two second search, I found a built-in pip module with virtual environment support, plus notes about needing the virtualenv tool installed on the host.
https://docs.ansible.com/projects/ansible/latest/collections/ansible/builtin/pip_module.html
To be fair, I only cursorily used Ansible at work. I daily drove Helm installs and wrote shell scripts to coordinate certain repeatable things
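e.g. something like this task keeps everything out of the system site-packages (venv path and package name are just examples):

```yaml
- name: install passlib into a dedicated venv instead of the system python
  ansible.builtin.pip:
    name: passlib
    virtualenv: /opt/ansible-venv
    virtualenv_command: python3 -m venv
```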
>>
>>
>>
Has anyone tried using a remote ffmpeg application with jellyfin for remote transcoding? There are two options that show up when you search, one being a rewrite of the other, but they both seem to be deprecated and kind of a pain in the dick to set up especially with a dockerized jellyfin. My media server is headless so I was thinking of using a piece of shit beelink with a piece of shit n4020 I have lying around to transcode since it (supposedly) supports quicksync.
>>
File: 1412938761273652.jpg (359.7 KB)
>>108594968
>>
>>
>>
Is there a way to run Nextcloud office or onlyoffice to have working access from different devices on the same lan / Wireguard without having to use HTTPS?
I can create docs but can't open them. From what I can tell, it seems you have to have https working between the server docker and the browser on the devices for it to work.
>>
>>
>>
>>108598269
Anywhere between 0 and 30. I prune my history every 2 days and I don't like keeping too many tabs open, so sometimes I just bookmark everything I might want to check out later. That makes about 2k bookmarks.
>>
>>108567405
idk where to ask. I installed openwrt on my tp link archer a6 router, then I wrote the correct values for static IP, mask, and gateway into the wan interface (same config as the original on the default firmware), but I can't connect to the internet. RX on the wan increases, but I can't ping anything, even the gateway. what could be the problem?
>>
>>
>>
>>108599087
yeah, can't ping, no DNS, no gateway. traceroute gives 1 * * *\n 2 * * *\n 3 * * *, which I don't really know what means - don't even know if it goes through the gateway. and yeah, I set up everything through the web UI, tried to debug with chatgpt but to no avail
>>
>>
>>108599310
I don't really know, I'm not good at networking. probably some modem. A long time ago I changed the old router to a newer one by just copying the ISP-provided static IP values (but I don't remember if I called them afterwards to authenticate it or something), and now I just installed openwrt on the newer router and set up the wan again. Now I even pulled out the older router, and it still doesn't work with it either. I am either severely retarded and set up something very obviously wrong, or I just have to call the isp and ask them. Maybe they have to allow the MAC address, but the router is the same, or they detected some difference another way and disabled something. But I get a lot of RX even when I haven't set anything up (when the wan is DHCP by default), so I don't know at all. chatgpt didn't help
>>
>>108599415
if you're comfortable with the shell, ssh in as root and tcpdump the wan port. if the router can't even talk to the internet, maybe you have a basic setting wrong or you're plugged into the wrong ethernet port on the router.
>>
>>
>>
>>108599438
>>108599609
ok I got tcpdump. I run tcpdump -i wan and I get a constant stream of:
IP x.x.x.x.num > x.x.x.x.num: UDP, length 1348 (or sometimes 2696). both ips strike me as completely random. in like 20 minutes I have 3gb of rx traffic, wtf is going on.
>>
>>
>>
>>108599707
unplug and verify your wan port is the one you think it is. you can see shit on the tcpdump, that's something. that means you probably have something misconfigured. you could try using dhcp on the wan port...
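on openwrt that's just flipping the proto in /etc/config/network (section and device names may differ on the archer, so treat this as a sketch):

```
config interface 'wan'
        option device 'wan'
        option proto  'dhcp'

# then apply it with: /etc/init.d/network restart
```

if dhcp gets you online, you know the physical path is fine and the static values were the problem.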
>>
>>108568841
>>108570214
>some fat hoe with the worst photoshop skills I have ever witnessed
Actually was hyped for some primo slampig but I can assure you, from a certified slampig enthusiast, that bitch is covered in ugly ass stretch marks and she has never posted her fat gunt without 9000 hours of photoshop.
Thank you for your attention on this matter.
>>
I wanted to set up a homeserver that I could play around with setting up various services on, so I thought I'd install proxmox, but I hadn't realised that LXC is not a full docker replacement but just a way to run nested linux environments, with no infrastructure for pre-defined "application containers". Which is annoying, especially when so much stuff provides ready-made docker containers already.
Now I see a few options before me:
>create LXC containers and then manually ssh in and install and set up each service - seems labour-intensive and not reproducible
>create a container template with docker installed, and run dockerized apps inside instances of the "docker container" - seems bloated and with double the overhead for no reason
>use community-scripts.org and trust downstream bash scripts maintained by random people for all my services - seems dodgy
>install docker on the debian host and manage it outside proxmox - seems like it defeats the point and might not work well with proxmox's networking management etc.
>ditch proxmox and just install debian as a thin host and run everything on docker - gives up the nice VM management plus proxmox seems cool
Maybe I'm overthinking this or looking at it wrong. What would you guys say is the "normal" way to run a maintainable and flexible homeserver with a bunch of random services? I have no experience managing physical servers yet so I'm honestly not sure what's the normal course of action here, especially if I want to keep proxmox (to learn, to have its networking capabilities, for flexibility in the future, etc.)
>>
>>108600021
proxmox is for wintards to have a linux server running a bunch of windows vms. it looks like ansible and bros may provide a way to config lxc containers--there has to be a config management solution for that.
>>
>>108600098
>proxmox is for wintards to have a linux server running a bunch of windows vms
I was half thinking of running OpenWRT in a VM (it's not like I need zero-overhead routing performance in my home network, at least right now), or Home Assistant OS that can install its own "addons" which I think are not always straightforwardly documented for manual installation alongside a container (that doesn't support addons), etc.
Though actually I say "etc" and I can't think of any other examples, so I suppose I could give up and ditch proxmox. What's the "normal" way to run a server with containerized services then, just debian or maybe alpine with docker installed?
>ansible to config lxc containers
Not a terrible idea, it would at least make the container setup repeatable, but a lot of shit comes with dockerfiles but not ansible roles, so it'd still mean writing manual setup roles for every service or container type I want.
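To illustrate what that looks like (purely a sketch with made-up inventory and package names — jellyfin for instance actually needs a third-party repo on debian, so a real play would have more steps):

```yaml
# Hypothetical playbook: assumes an LXC named "jellyfin-ct" reachable over
# SSH and present in your Ansible inventory; package/service names are
# illustrative only.
- hosts: jellyfin-ct
  become: true
  tasks:
    - name: Install the service from the distro repos
      ansible.builtin.apt:
        name: jellyfin
        state: present
        update_cache: true

    - name: Enable and start it
      ansible.builtin.systemd:
        name: jellyfin
        state: started
        enabled: true
```

And you'd still be writing one of these per service, which is exactly the work the project's dockerfile already did for you.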
>>
>>
>>
>>108600874
Just getting the LXC set up was a major process. I spent a lot of time figuring out how to pass through my integrated graphics card into an unprivileged LXC, and from my understanding doing this sort of defeats the purpose of running an LXC in the first place. I remember the UID mapping being an annoying process as well. Even after everything was set up I was constantly having little issues with reliability. Maybe all of these issues were caused by me running an unprivileged LXC, but I wasn't really willing to experiment further with LXCs after my experience.
As for running on the host, this was a lot less bad than running in an LXC. It does feel kind of dirty since it's bad practice to pollute the hypervisor, but I never had any actual issues. I stopped doing this because I'm starting to understand that best practices exist for a reason.
>>
>>
>>108600929
To be fair I don't think I'll be passing through any devices, my containers just need storage mappings and virtual networks. But yeah I don't like the nesting either way
>>108600937
Though it's not exactly better with a VM instead of an LXC in the middle.
Really I'm just disappointed that there doesn't seem to be any support from LXC for "application containers" and you're just supposed to set everything up yourself, leading to shit like docker-in-LXC even being considered when they're literally the same thing except for the configuration. Like shit, I dunno, even a parser for Dockerfiles that converts them into ansible commands or manually executes them would be good enough, so the setup could be "create LXC from base distro -> clone repository -> build from Dockerfile the project already defines -> launch".
>>
>>108600929
your use case is fairly niche
>>108600937
yes, i was expecting firewall issue or networking of some kind. i only really use lxc myself to test things on debian. i can leave it running and it doesn't eat as much ram as a VM would.
>>108600964
i found the ansible thing, but i'd expect something equivalent to kickstart or some form of bootstrapping to be an option somehow. somehow...
>>
>>108601070
>i found the ansible thing,
Oh, what do you mean exactly? My main hangup is that most projects don't provide official ansible instructions so I'd have to write them myself (or source some rando's ones which goes back to the same issue as just using community-scripts.org). What did you find?
>>
>>
>>
>>
>>
>>
>>
File: file.png (461.6 KB)
461.6 KB PNG
Need an SFF router recommendation.
My fiber modem is a 90-foot cable run from my server rack, so I need a router to go next to it that's smaller than a rack-mountable one.
I need something with one or two SFP+ ports as I've got symmetric 3Gb/s fiber.
I currently use OPNsense on one of my VMs and really like how easy it is and how many features it has.
Anybody got a recommendation for a good router I can run OPNsense or similar software (VyOS, etc.) on?
WiFi not needed.
The route10 looks like it checks my boxes but the lackluster configuration is keeping me away
>>
>>
>>
>>
>>
>>
>>
I need help since Sonnet is now unusable:
I'm running Gemma 4 31B at Q4 on an M1 Studio with 64 GB of RAM via ollama, Open WebUI, and an open terminal for command execution.
The model takes a few minutes to load, but when it finally starts writing code it just stops midway through. I check resources and RAM isn't filled completely, and I have ctx set to 8192 for larger context prompts for big gens and 24/7 generation.
Wtf is the bottleneck here?
>>
>>108594990
That module sets up a venv, which is a little different from what I want to do. I could use it as a step before any other Ansible playbooks run though.
The way to do it is to set something like this after making the venv:
ansible_python_interpreter=/home/{{ ansible_user }}/.local/share/ansible/venv/bin/python
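e.g. dropping it in group_vars so every play picks it up automatically (the path is an assumption that it matches the venv made in the earlier step):

```yaml
# group_vars/all.yml -- hypothetical layout
ansible_python_interpreter: "/home/{{ ansible_user }}/.local/share/ansible/venv/bin/python"
```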
>>
>>
File: 2618_hi_res.png (2.5 MB)
2.5 MB PNG
What does /hsg/ think about pic related?
https://mikrotik.com/product/hap_be3_media
>>
>>
I've got 2 old PCs collecting dust. One of them is running an FM2+ motherboard and the other one is running an AM2+ motherboard. Could they be upgraded to run a Plex server? My dad wants to watch movies but doesn't know how to download them and he refuses to pay for netflix (he lives across the country so I can't really do it for him).
>>
>>
I'm really struggling to get my head around Docker. I understand what it does, I just don't understand why you need it and why it seems to be the go to for so many different things.
I'm using it for Games on Whales, and that I understand. You want to create spaces for people to play games or use a desktop even if they don't necessarily have an account on the server, you don't want to make a mess everywhere, and it also packages up some preconfigured streaming stuff, which is nice.
But then there's a docker image for CoolerControl, a fan manager. Why on earth does that need to be containerized? I will not have users competing for control of the fans. I just don't get what the point is.
>>
>>
>>
>>
>>108604449
Docker for services on a server is nice because it makes sure shit doesn't pollute each other.
>file server A requires some lib dependency version X
>calendar sync server B requires lib version Y
>photo sharing server you want only officially supports Sneed Linux and isn't packaged anywhere for the distro you're using
>all of them spill config files into /etc, require user config files in ~/.config/, read random environment variables
>one of them stores cache in ~/.cache/, another drops random files into /var/cache/, another auto-downloads components of itself into /opt, another creates random files in /var/tmp for some reason
>two months later you decide you don't like service B and want to swap it out with an alternative
>also you want to try out server F just to see if it's useful, but you might uninstall it next week
>also three different services need a postgresql database and another one requires specifically mysql
None of that is in any way impossible to manage, but the end result is that after some time of running your server it's just going to be a mess with random stuff touching random files everywhere, configs scattered around, dependency hell, version conflicts, package sources from random places, databases running stuff from services you don't even remember or may not even be using anymore, etc.
Docker is a nice way to isolate basically everything about an application except its interface: any files/directories it explicitly needs to read from or write to, and its networking access. So each service gets its own chroot into a linux distro that it officially supports, with the dependencies it needs, and can write its own cache files wherever. You "mount" your config files from whatever central location you want, into whatever schizophrenic location the service expects to find them. You have a file describing exactly which directories are mounted for each service, the env variables it uses, etc.
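A minimal compose-file sketch of that idea (image name, paths and ports are all made up, not any specific service):

```yaml
services:
  someapp:
    image: example/someapp:latest   # hypothetical image
    ports:
      - "8080:8080"                 # the only thing exposed to the host
    volumes:
      - ./config:/etc/someapp       # your central config dir -> wherever the app expects it
      - ./data:/var/lib/someapp     # explicit, documented state
    environment:
      - TZ=Etc/UTC
    restart: unless-stopped
```

Everything not listed there stays inside the container and dies with it.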
>>
>>
File: file.png (31.7 KB)
31.7 KB PNG
>>108587270
>>108593036
60 hours later and I'm down to 5 torrents remaining lol
>>
>>
File: 1766054255225950.jpg (206.7 KB)
206.7 KB JPG
Is it worth it trying to set up a home server for family when priority n. 1 is I need it to just work?
>old pc with some spare hdds in it
>linux distro
>syncthing
Would something like this work? I need it to be able to self-recover from power or internet outages and most importantly I need it to just work. Is it possible or should I just stick with google drive (bleh)?
>>
>>108604778
It should be able to mostly just work, but not with zero maintenance: you'll need to update it regularly, which can probably be done with very little effort, but it's impossible to guarantee "yes, your linux server will work for 10 years straight without issues". Even if 99.99% of updates happen without issues, there's no guarantee you won't one day need to ssh in to fix something that didn't work properly.
You will also have to manage storage, hard drives can die. You can set up backups but you will need manual intervention if something goes wrong, to replace a drive for example.
>self-recover from power outages
I think this is possible too but another anon will have to give more details on this.
>syncthing
This generally just works, but it's also not quite a dropbox replacement: there's no way to browse a remote folder, its only functionality is to sync everything locally and then you access it. Otherwise yeah, some minimal headless distro + syncthing is probably your best bet.
An advantage of syncthing's model is that you always have backups of the data, since everyone is always syncing the full folders.
>>
>>108604778
syncthing has been surprisingly reliable for me, at least on windows/android
I have it syncing:
1. a folder between my phone and NAS containing my keepass database and pdf copies of car registration and insurance
2. phone camera folder and clover downloads folder, so every photo I take or shitpost I save gets backed up to the NAS
3. various ROMs folders between my NAS and my brother's gaming PC, so he automatically gets a copy of my collection for him/his kids (I also built them a gaming PC for their living room, set up with a bunch of emulators, plus we use Steam family sharing)
power is solved easily, just set your BIOS to auto-power on after AC loss
internet outages are the only real issue, though if you wanted to waste a lot of money you could get a router with failover/load-balancing and pay for a 2nd internet connection (actually not a terrible use-case for 5G Cellular internet)
>>
>>
>>
>>108604542
I think, in short, it's similar to compiled in libraries for Windows software rather than the more common shared libraries used in Linux. Is that the main part of what you're saying or have I missed the point? I understand that isn't the only thing you're saying mind.
I can see the appeal of that, but also, for me at least, my package manager has handled that stuff OK, which is why I didn't see the use of it. On the other hand there have been occasions where someone has had to make conflicting packages for certain versions of a dependency, and luckily I had no conflicts because I had only a single program depending on that package at any version, but that could get messy quickly if there had been conflicts. Thinking back on that I can understand the appeal now. Thank you.
Do you think the CoolerControl example is a bit silly, or would you containerize that as well? Are there any reasons not to? More space needed for libs, but that isn't a huge concern for me right now.
Basically, I'm wondering if you would say "if it has a container, use the container" or whether it requires thought depending on the specific piece of software.
>>
>>
>>108604849
It's not just libraries, it's everything else.
>general distro layout
E.g. one service only officially supports distro X, another only supports distro Y but there are unofficial packages made by some random guy, you personally prefer to use distro Z, and they all have slightly different assumptions about how services work or whatever.
>paths and files
You can have all your config files in /opt/configs or in /var/configs or in ~/.config/ or whatever and map them to whatever the service expects. Service A wants to read from /etc/service_A/conf, service B wants to read from /etc/conf.d/B.ini, service C wants to read from ~/ServiceC/config.conf and you want to run service C under its own user so now you also have a random service user with its own home directory polluting your machine.
>global services
You can have three instances of PostgreSQL, or you can share it between services in a way that's visible. You can have separate instances of Redis or whatever services want.
Basically EVERYTHING a service does is isolated to the container, except the things you EXPLICITLY map and allow, which are also visible in the docker config file so they're saved and documented and you can always read them if you don't remember how you set it up. If you want to turn off and uninstall a service you can be 100% sure it didn't leave anything hanging around on your system. If you want to have two copies of the service running for some reason you can easily do that by just tweaking the mapped directories and ports etc.
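e.g. that second copy is just the same image with different host ports and state dirs (names and paths made up):

```yaml
services:
  app-main:
    image: example/someapp:latest   # hypothetical image
    ports: ["8080:8080"]
    volumes: ["./main:/data"]
  app-test:
    image: example/someapp:latest   # identical image, isolated state
    ports: ["8081:8080"]
    volumes: ["./test:/data"]
```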
One thing I didn't have space to mention in my first post is that this is all about servers. For desktop software imo it makes no sense, as you rarely need to isolate it like that; it's MEANT to interact with your desktop. Using a docker container for fan control or whatever does seem very silly to me.
I'd only use it if my distro's package manager didn't have it and there was no appimage or whatever.
>>
>>
>>108604894
>For desktop software imo it makes no sense, as you rarely need to isolate it like that
Yeah, makes sense. This is for server use only. I know it's fairly normal to just leave fans on motherboard control for servers but I'm trying to minimize noise here and I find the mb curve is too aggressive. Have a few other reasons too.
>>
>>108604964
For a server then I dunno, it doesn't matter much either way to me. If there's a native package for your server distro then use that. In my mind, it's a system service, so it might as well integrate with the system; it doesn't need to be carefully isolated, composed with other services, or whatever. The fact that it probably drops its configuration into /etc/ is fine by me because again it's actually a system configuration.
But also, since it's on a server, you're probably already running docker for other services, have the daemon in the background, and have your whole workflow set up for managing the rest of your docker containers, so adding one more container for fan control is not a big deal either. It wouldn't be my first choice, but if, say, there's no package for your distro or it's unmaintained and outdated, then I wouldn't be too bothered about just using the docker container at this point.
>>
>>108604449
>I just don't get what the point is.
People tend to abuse things out of convenience. Docker "just werks" so you have lots of webshits writing software that only runs correctly in Docker even though there's no real reason to containerize it.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>108605219
Well, presumably caddy is running on your machine, and it listens to traffic on one port and proxies it to your localhost service. So the incoming traffic for that service is intended to arrive to the port caddy is listening to, so it goes through the reverse proxy; whereas traffic that arrives to port 12345 will go directly to the service, not caddy.
The port caddy is listening to is probably 80 or 443 by default if you're listening to HTTP or HTTPS, respectively, though of course it depends on your config if you changed the default.
Now, where do you want traffic from your router to go? To caddy, or to the service directly? That's the port you have to forward to.
And the port you forward from (if your router sets this differently) is going to be the port people use to connect, so you can of course have one external port, the router forwards it to a different port on your machine that caddy is listening to, and caddy proxies it through to a third different port that your service is listening to (12345 in this case).
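e.g. a minimal Caddyfile sketch of that setup (the hostname is a placeholder; with a site address like this caddy listens on 443 and handles the cert itself):

```caddyfile
media.example.com {
    reverse_proxy localhost:12345
}
```

Then you forward external 443 to the caddy box's 443 and never expose 12345 itself.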
>>
>>
>>108605219
Don't listen to this retard >>108605277
Listen to big-dicked networkchad who actually knows a few things >>108605210
>>
>>
>>
>>108605161
bit/vaultwarden are much better. if you use syncthing keep backups.
>>108605210
>>108605305
fuck off.
>>
>>108605396
>bit/vaultwarden are much better.
Elaborate my friend. I've been using keep ass for like 10 years and I've never felt like I was missing something, so I'm curious what bit does better.
>backups
Syncthing enforcing local sync to every connected device means my password database is better replicated than most of my other files, and I do have it set to versioned sync so syncthing keeps multiple old versions every time the file changes
>>
>>
File: file.png (27.6 KB)
27.6 KB PNG
>>108593076
>put nvme on PCIe expansion card
>800 MB/s writing from raidz2 to nvme
>~700MB/s writing back to raidz2
welp, apparently my motherboard nvme ports are retarded after all
>>108593308
just lz4
>>108596307
again, network transfers have no issue
>>
>>108603353
the router doesn't save me a cable run, but putting the router in the server rack would mean a 100' WAN cable down and then another 90' cable all the way back upstairs to the switches for my hardwired devices.
I'll also be running copper since I'm a rentoid and can't put fiber runs along the baseboard and stuff without worrying about them getting damaged, so I'm stuck with running 10Gb over copper.
>>
>>
>>108605418
i moved from keepass to bitwarden. one of my motivations at the time was multiple user support, but the bitwarden android app is nicer than what i was dealing with from keepass. the browser addon is also nice. i never got keepass browser integration going in a way i liked. also, i was syncing my keepass to google drive and that made me nervous.
>>
>have home media server
>just Windows 11 on an old 5600x pc
>(2) hard drives in raid 1
I want to break the raid 1 setup. Right now I basically have Movies and TV Shows. I'd like to use (1) HDD for Movies and (1) for TV Shows. Given that they're currently RAID 1, both drives already have the data on them. Is there any way to break the RAID 1 and keep the data on both? I'd rather just delete data later than reformat a drive and write everything again.
>>
>>
>>
>>
File: Screenshot 2026-04-14 at 22-30-13 Wishful Bcalm.png (735.8 KB)
735.8 KB PNG
after 2 hours of fucking around i finally got rompr/mpd working on a rpi. now i have no audio over bluetooth.
>>
>>108606611
Good you don't NEED audio. You want audio big difference, Faggot.
>>108606483
Or a 3rd option. Inserting the drives in his anus where his head belongs.
>>
>>
>>
>>108606915
not for wifi silly, for routing. I need something capable of routing traffic for my entire network at up to 10Gb/s
Modem -> Router -> 20' Cable -> switch -> 70' Cable -> Server rack switch
I can't use a server rack router because that would require
Modem -> 90' Cable -> Router -> 70' Cable (back up tracing other cable) -> Switch -> 20' Cable -> Switch
I don't want to double up on wires to bring the WAN all the way to my dungeon where the server lives.
>>
>>
Stayed up way too late yesterday setting up dual Kanidm instances on my homelab and vps that replicate via a tailscale (headscale) tunnel between the two containers. But it's working now, neat. Maybe I'll add a third one on a second homelab mini pc to have even more redundancy in case one of the servers goes down cause of my tinkering.
Having redundant DNS instances like that turned out to be a great idea, I suppose the same goes for identity providers.
>>
>>
>>
>>
File: 1519575114568.jpg (41.6 KB)
41.6 KB JPG
Is syncthing the most hassle-free way to keep my files synced across my computers?
I have a home server and I've set up a network share, but despite everything I always keep a lot of storage on my devices too (8TB HDD in my desktop, 4TB SSD in my laptop) and I don't want to manually copy-paste stuff like a caveman.
>>
>>108607915
Stuff like Seafile, and I think maybe OpenCloud?, support synced folders.
Syncthing is fantastic BUT it has no remote browsing feature. For every folder, you either sync the entire thing locally or you don't access it at all. If that's fine for what you want then no reason to switch. But something like Seafile can function more like a Dropbox replacement, in that it supports both remote folders you can browse and download only what you need, as well as both one-way and two-way synced folders where you have everything locally and it's also replicated on the server.
>>
>>
File: nervous.gif (571.7 KB)
571.7 KB GIF
>buy a new 14TB drive to pair with the current 14TB drive
>finish copying all files to the new one
>old one suddenly dies
>>
>>108581144
I looked into doing it for my hobby(tm) projects and honestly it seems way more complex than it needs to be, and I doubt you'll write a policy that will be ultra secure anyway. Reminder too that SELinux starts to break down around namespacing, since I don't believe it supports any concept of nesting.
>>
File: 1537455854212.jpg (18.8 KB)
18.8 KB JPG
>>108608571
This has happened to me a few times.
I bought new headphones and my old ones died the day after the new pair arrived.
I was looking for a larger microwave to replace my old one and just as I picked a model the old one died.
>>
>>
>>
>>
>>
>>
>>
Is this the way I should be automating updates of my docker images, or is there some better method? I have them all in /opt/docker and then created an update_image.sh file with the following:
cd /opt/docker || exit 1  # cron starts in $HOME, so cd first or the glob below matches the wrong dir
for d in *; do
    if [ -d "$d" ]; then
        cd "$d"
        docker compose pull && docker compose up -d --remove-orphans
        cd /opt/docker
        echo ""
    fi
done
docker image prune -af
Then I have a cronjob for Monday at 2am:
0 2 * * 1 sh /opt/docker/update_image.sh
>>
>>
>>
>>
>>108610392
There's a fork of it that's actively maintained:
https://hub.docker.com/r/nickfedor/watchtower
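Sketch of running it via compose (env var names follow upstream watchtower's docs; the schedule matches the Monday-2am cron above — note watchtower uses 6-field cron with a seconds field):

```yaml
services:
  watchtower:
    image: nickfedor/watchtower:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # needs the docker socket to manage containers
    environment:
      - WATCHTOWER_CLEANUP=true          # prune old images after updating
      - WATCHTOWER_SCHEDULE=0 0 2 * * 1  # Mondays at 02:00
    restart: unless-stopped
```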
>>
File: Karp_Headshot.jpg (870.4 KB)
870.4 KB JPG
>>108570459
Is going to protect your infra against Palantirium aberrations.