This looks neat, though it sounds like only the Grayjay/FUTO app can cast to it, and I doubt any official streaming app would natively adopt it. Assuming it’s not just casting a video feed from your phone, my guess as to how it works is that it copies the relevant cookies over to the FCast device, which can then pretend to be your phone as far as the server is concerned.
This would be fine if it supported all the apps I use and I were the only one ever casting, but I don’t want to force guests to install and configure another middleware app just to cast stuff. My hope is that Matter will somehow solve this, but I probably shouldn’t get my hopes up.
I should try setting up fcast either way though, see how it goes. Thanks.
I use Technitium, but like Pi-hole it’s designed for a few concurrent users on a local network. Do you instead want anyone in the world to be able to use your DNS?
But you would only attract bad actors; normal users won’t use a random DNS server, since it could redirect specific sites to phishing pages.
I actually had a lot of fun a couple of years ago deploying Pi-hole on one of my Raspberry Pis and routing all my household machines through it. It worked great UNTIL… my kid was turning in empty homework on Google Classroom and his teachers were getting on him about it. We chastised him, thinking it was his fault, until I finally discovered that Pi-hole was messing up his uploads to GC and literally causing the problem. I got super angry with it and walked away without even trying to troubleshoot. Had to profusely apologise, not only to his teachers but to him.
Abrechnung is really good, actively developed, and improving. The UI is already pretty satisfactory, and there’s also an API, which you’ll need if, for example, you want to bulk-import a spreadsheet; for now you have to code that bit yourself.
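In case it helps anyone, “code it a bit” can be as small as a CSV loop against the HTTP API. This is only a sketch: the base URL, endpoint path, and field names below are assumptions for illustration, not the documented Abrechnung API, so check your instance’s API docs first.

```python
import csv

import requests

BASE = "https://abrechnung.example.org/api/v1"  # hypothetical instance URL
TOKEN = "your-session-token"                    # obtained by logging in
GROUP_ID = 42                                   # hypothetical group id

headers = {"Authorization": f"Bearer {TOKEN}"}

# Each CSV row: date, description, amount
with open("expenses.csv", newline="") as f:
    for row in csv.DictReader(f):
        payload = {                  # field names are assumptions, not the real schema
            "type": "purchase",
            "name": row["description"],
            "billed_at": row["date"],
            "value": float(row["amount"]),
        }
        r = requests.post(f"{BASE}/groups/{GROUP_ID}/transactions",
                          headers=headers, json=payload, timeout=10)
        r.raise_for_status()
```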
I think OpenVPN works completely fine for most use cases, and I didn’t have any trouble with it at all. I did switch to WireGuard on my gateway, though, and I get a little better throughput than with OpenVPN. That said, I’m also using a pfSense box as my home gateway, so access to internal services has been as easy as general routing gets.
I ran Pi-hole for years. Switched to AdGuard Home running on two servers (primary and secondary), with AGH sync keeping the two instances identical. I like the UI better, as well as the DNS rewrites and the ability to block services entirely with a single click.
I did this as well. I still have two Pi-hole instances running with Gravity Sync for now, but AGH sync is much easier to set up and maintain. My two Pi-hole instances serve my guest network only, and AGH runs everything else.
I set it up manually using this as a guide. It was a lot of work because I had to adapt it to my use case (not using a VPS), so I couldn’t just follow the guide, but I learned a lot in the process and it works well.
I had a manual setup originally as well, but it became a bit of a maintenance hassle. Moving configs to devices was a pain, and generating keys wasn’t easy.
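For what it’s worth, the key generation part is easy to script. On the CLI it’s `wg genkey | tee privatekey | wg pubkey`, and since WireGuard keys are just base64-encoded X25519 keys, the equivalent in Python (with the `cryptography` package) is a few lines; a sketch:

```python
import base64

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def wg_keypair() -> tuple[str, str]:
    """Generate a WireGuard keypair (base64-encoded X25519 keys)."""
    priv = X25519PrivateKey.generate()
    priv_raw = priv.private_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PrivateFormat.Raw,
        encryption_algorithm=serialization.NoEncryption(),
    )
    pub_raw = priv.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    return base64.b64encode(priv_raw).decode(), base64.b64encode(pub_raw).decode()

private_key, public_key = wg_keypair()
print(f"[Interface]\nPrivateKey = {private_key}")
print(f"# goes in the peer's [Peer] section:\n# PublicKey = {public_key}")
```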
I can’t help you, but I had a similar experience with similar technology. I spent a lot of time recovering content. I succeeded, but I have software development experience. I didn’t want a repeat of that.
In the end I picked Windows file sharing for the home network and filebrowser for occasional access to data files from outside it.
WebDAV is a very good alternative. Apache web server will be easier, but nginx can be made to work.
Joplin is another good alternative; it can sync over WebDAV.
You can pay a shared hosting provider around $2–$3 a month for hosted WebDAV.
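Since WebDAV is just HTTP verbs, you can drive (or smoke-test) any of these servers with nothing but `requests`; a minimal sketch, with the URL and credentials as placeholders:

```python
import requests
from requests.auth import HTTPBasicAuth

BASE = "https://dav.example.org/files"    # placeholder WebDAV URL
auth = HTTPBasicAuth("user", "password")  # placeholder credentials

# Upload: WebDAV uses plain HTTP PUT
with open("notes.txt", "rb") as f:
    requests.put(f"{BASE}/notes.txt", data=f, auth=auth, timeout=10).raise_for_status()

# Download: plain GET
print(requests.get(f"{BASE}/notes.txt", auth=auth, timeout=10).text)

# List a directory: the WebDAV PROPFIND method with Depth: 1
r = requests.request("PROPFIND", BASE, auth=auth,
                     headers={"Depth": "1"}, timeout=10)
print(r.status_code)  # 207 Multi-Status with an XML body on success
```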
I went with a Pi running Pi-hole. I got it as a project where the tool is the project. But it’s essential infrastructure now, and I don’t want to mess with it in case I break it. I’m an idiot with a poor history with Pi guides so far, so I will break it. It’s running the ad blocking fine; I assume it’s doing the tracking and malware blocking fine too.
Sadly, that’s where I leave the project for now. I had intended to give it an HDD and some… other… software, but I really don’t want to break it. I tried convincing the better half that I obviously need to N+1, but she wisely did not see reason.
If you want to try setting it up in high availability with failover, give me a poke. And until then: go to Teleporter in the settings and download the backup. You can restore from there.
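If you’d rather automate it than click through the UI, a timestamped archive of Pi-hole’s config directories covers roughly the same ground as a Teleporter export. A sketch under those assumptions: the paths are the standard /etc/pihole and /etc/dnsmasq.d, the destination is made up, and it needs root to read them.

```python
import tarfile
import time
from pathlib import Path

# Pi-hole keeps its settings, lists, and gravity database in /etc/pihole,
# plus dnsmasq snippets in /etc/dnsmasq.d. Archive both with a timestamp.
DEST = Path("/home/pi/backups")  # placeholder destination
DEST.mkdir(parents=True, exist_ok=True)

stamp = time.strftime("%Y%m%d-%H%M%S")
with tarfile.open(DEST / f"pihole-{stamp}.tar.gz", "w:gz") as tar:
    for src in ("/etc/pihole", "/etc/dnsmasq.d"):
        tar.add(src)
```

Drop that in a daily cron job and you’ll always have something recent to restore from.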
One thing worth saying is this: you can grab a cheap refurbished SSD (the smaller, the better), check its SMART data for any red flags, and attach it to the Pi as the OS disk. It will be much more reliable than an SD card, but overkill if Pi-hole is all the box runs. Alternatively, look into Log2Ram; it keeps your SD card alive for longer :D but back up first!
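The red-flag check is scriptable too: smartctl (from smartmontools 7 and later) can emit JSON, so you can pull out the few attributes worth eyeballing on a used drive. A sketch; run it as root, the device path is a placeholder, and treat the attribute IDs as examples since names vary by vendor:

```python
import json
import subprocess

# smartctl --json needs smartmontools >= 7. No check=True: smartctl uses
# non-zero exit bits for warnings even when output is usable.
out = subprocess.run(["smartctl", "--json", "-a", "/dev/sda"],
                     capture_output=True, text=True)
data = json.loads(out.stdout)

print("Overall:", "PASSED" if data["smart_status"]["passed"] else "FAILED")

# Non-zero raw values here are bad news on a second-hand drive.
red_flags = {5: "Reallocated_Sector_Ct", 187: "Reported_Uncorrect",
             197: "Current_Pending_Sector", 198: "Offline_Uncorrectable"}
for attr in data.get("ata_smart_attributes", {}).get("table", []):
    if attr["id"] in red_flags:
        print(f"{attr['name']}: raw={attr['raw']['value']}")
```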
Thanks. I already have Log2Ram running to prolong the life of the SD. My planned disaster recovery is a spare SD, already set up and taped to the box, ready to swap and reboot in case of emergency. SD cards are cheap, so chucking <£10 at the setup once in a while is no big thing. A fresh install on the new SD lets me improve on what I’ve already done (for example, on the new SD I’ll run DietPi instead of Raspbian) and reinforce skills. Less time efficient, but that’s no matter when the box is working and it’s a hobby. I can then keep the old SD card taped inside the case as a physical backup. Perhaps more expensive in the long run, but an SD card taped to the inside of the case with simple instructions is an easy sell to the fiancée.
My experience with guides has shaken my confidence quite a bit. Which is fine; I’ll get over myself, and the point is to learn, so me hitting snags is a good thing. But until I have a functioning backup I’m not going to be fucking with it. Facebook cannot go down on account of my education.
But if I may, I have one question: a bunch of recommendations have the setup “segregated” (I dunno the word) in Docker and Portainer, but I don’t understand the rationale. I wasn’t intending on doing this, instead opting to install Pi-hole, Log2Ram, UFW, and the… other… software directly on the OS for simplicity. Why would one set up Pi-hole et al. in containers instead of directly?
My current setup is Raspbian running Pi-hole as ad, tracker, and malware blocker plus DHCP (the ISP router is a Sky2 box, so no IP or DNS customisation), along with Log2Ram and Uncomplicated Firewall (UFW).
So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you, others might seem or be unimportant.
The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and tag (never use latest). You will never ask yourself again: “What did I need to do to install this? Run some random install.sh script off a GitHub URL?”
Networking with Docker is a bit hit and miss, but the big win is that you can have software running on any port inside the container and expose it on a different port on the host. E.g. two apps both listen on port 8080 natively, so one of them will fail to start because the port is taken. With containers you can keep them running on their preferred internal ports, but expose one on 18080 and the other on 19080 instead, as in the sketch below.
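A minimal sketch of that remap using Docker’s Python SDK (the `docker` package); the image name, tag, and ports are illustrative:

```python
import docker

client = docker.from_env()

# Two apps that both listen on 8080 inside their containers,
# remapped to different host ports so they don't collide.
for name, host_port in [("app-one", 18080), ("app-two", 19080)]:
    client.containers.run(
        "example/webapp:1.2.3",         # illustrative image; pin a real tag, never :latest
        name=name,
        detach=True,
        ports={"8080/tcp": host_port},  # container port -> host port
        restart_policy={"Name": "unless-stopped"},
    )
```

The same mapping is the `-p 18080:8080` flag on `docker run`, or the `ports:` section of a compose file.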
You keep your host simple and free of installed software and packages. This is less of a problem with apps that ship as native executables, but some languages require you to install a runtime to start the app. Think .NET or Java, and there’s also Python, which requires you to install it on the host and keep versions compatible (there are virtual environments for that, but I’m going into too much detail already).
Basically, I have a very simple host setup with only a few packages installed. Then I remotely configure and start my containers, expose ports, etc. I can cleanly define where my configuration lives, back up only that particular folder, and keep the rest of the setup easy to redeploy.
I have nothing to add, and an upvote isn’t enough. Truly, thank you for your time, there’s a lot to think about.
I think for this initial iteration I’m going to install directly, in the name of keeping it simple. Next go around I’ll try containerising, just to learn if nothing else. If I outgrow the Pi 4, they’ll be good skills to have.
There’s nothing wrong with a single HDD in an old desktop except for the risk of failure.
I would start by getting one HDD that’s the same size as or larger than the one you have and using it as a backup. If the old HDD is very old and small, you can probably find a larger one cheap; don’t go out of your way to find another small, old one.
Something like Borg Backup will be perfect if you use a Linux filesystem, because Borg is incremental and has deduplication and compression built in. There is a very simple graphical app for it called Pika Backup (for Linux).
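The whole Borg workflow is basically three commands, so a nightly cron job can be a tiny script. A sketch, with placeholder paths (borgbackup itself must be installed):

```python
import subprocess

REPO = "/mnt/backup-hdd/borg-repo"                       # placeholder repo path
SOURCES = ["/home/you/Documents", "/home/you/Pictures"]  # what to protect

def borg(*args: str) -> None:
    subprocess.run(["borg", *args], check=True)

# One-time setup: an encrypted repository on the backup drive
# borg("init", "--encryption=repokey", REPO)

# Nightly: create a deduplicated, compressed, dated archive...
borg("create", "--stats", "--compression", "zstd",
     f"{REPO}::{{now:%Y-%m-%d}}", *SOURCES)

# ...and thin out old archives so the backup disk doesn't fill up.
borg("prune", "--keep-daily=7", "--keep-weekly=4", "--keep-monthly=6", REPO)
```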
There are other solutions if you use Windows, but even a simple copy of your important files is better than nothing. Get an HDD and copy files to it right away.
Another backup option is to buy a DVD or Blu-ray burner (USB or internal) and back up super-important files to optical discs. This may or may not be cheaper than an HDD.
Do NOT rush into RAID, Unraid, TrueNAS, and other fancy stuff like that. Your priority right now should be backup, not RAID. RAID is a convenience for keeping a system running when an HDD fails, but it is NOT a replacement for a good incremental backup.
After you have a backup in place and use it regularly you can consider whether RAID and availability is something you want/need.
It might not be applicable to you, but in many cases single-board computers are used where files change minimally from day to day, for example when used for displaying stuff. For such cases, it is useful to know that after installing all the required software, the SD card can be switched into read-only mode. This prolongs its life dramatically. Temporary files can still be generated in RAM, and if needed you can push them to external storage or an FTP server through a cron job or something. I have built a digital display with weather/photos/news where, beyond the initial install, everything is pulled from the internet. I’m working towards implementing what I’ve suggested above.
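The cron-job part is simple; here’s a sketch that pushes whatever has accumulated in a RAM-backed directory to an FTP server (host, credentials, and paths are placeholders):

```python
from ftplib import FTP
from pathlib import Path

TMPDIR = Path("/tmp/outbox")  # tmpfs: lives in RAM, vanishes on reboot

with FTP("ftp.example.org") as ftp:  # placeholder host
    ftp.login("user", "password")    # placeholder credentials
    ftp.cwd("/incoming")
    for path in TMPDIR.iterdir():
        if path.is_file():
            with path.open("rb") as f:
                ftp.storbinary(f"STOR {path.name}", f)
            path.unlink()  # free the RAM once the file is safely uploaded
```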
That would not be ideal for me, as I want to keep most of the system’s logs, and I don’t have a syslog server; even if I had one, I wouldn’t be able to get everything I need… But it’s quite a good idea for other use cases, and I might do that with future projects that don’t need a read-write filesystem!
I love that idea, and I’d love to implement it. But I honestly can never figure out how people build services that let the user change settings (for example, setting their location to get local weather) while still maintaining a read-only system.
You keep the user-changeable files on a separate filesystem, whether that’s just a separate partition or an external disk. Keep the system itself read-only, and keep write-heavy directories like logs and caches in RAM.
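In practice that can be as simple as the app reading and writing its settings on the one writable mount. A sketch, assuming a small read-write partition mounted at /data (the path and defaults are made up):

```python
import json
from pathlib import Path

# Root filesystem is read-only; /data is the one writable partition.
SETTINGS = Path("/data/settings.json")

def load_settings() -> dict:
    if SETTINGS.exists():
        return json.loads(SETTINGS.read_text())
    return {"location": "London"}  # defaults baked into the read-only image

def save_settings(settings: dict) -> None:
    SETTINGS.write_text(json.dumps(settings, indent=2))

cfg = load_settings()
cfg["location"] = "Oslo"  # e.g. the user picks a new weather location
save_settings(cfg)
```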
I am still figuring it out, since it’s a hobby and I’m unable to devote much time to it. But I think it will be something like the old Ubuntu live discs, which let you try Ubuntu by running it from a DVD. You could run anything (a web server, saving files and settings, etc.); it just wouldn’t persist after a reboot, since everything was saved in RAM. Only here it’ll be a write-locked SD card instead of a DVD.
I’m also sure there must be a name for it and a step-by-step tutorial somewhere. If only Google weren’t so bad these days…
Your biggest bang for the buck is cheap second-hand drives; keep a spare on hand to rebuild the array/volume when one dies. Be aware that the number of drives in the array directly affects the amount of usable space: with single parity, 2 drives give you 50% of the total (a direct mirror, to compensate for the loss of one drive), 3 drives give you 66%, and 5 give you 80%. Say you get six 4 TB drives: keep one as a spare, and the remaining 5 will give you 16 TB usable (with one drive’s worth lost to parity so you can survive one disk failure). You then immediately want to save for a 16 TB external drive for offline, preferably offsite, backup (RAID is not backup!).

As others have wisely said, anything can be used to host, but aim for the most power efficient. If necessary, get a PCI card for more SATA or SAS ports.

Identify high-value small files (documents, current work, personal photos, source code, and so forth) and arrange for cloud backup, preferably with local encryption so you needn’t trust the cloud provider, and preferably in at least two places (so one can go tits up or enshittify without bothering you). You’d be surprised what fits into a free 10 GB account if you triage well.
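The rule of thumb above in code form, assuming a single-parity (RAID 5-style) layout:

```python
def usable_tb(drives: int, size_tb: float, parity: int = 1) -> float:
    """Usable capacity when `parity` drives' worth of space goes to parity."""
    return (drives - parity) * size_tb

# Six 4 TB drives: one kept as a cold spare, five in the array.
print(usable_tb(5, 4))            # 16.0 TB usable, survives one disk failure
print(usable_tb(3, 4) / (3 * 4))  # fraction usable with 3 drives: ~0.67
print(usable_tb(2, 4) / (2 * 4))  # with 2 drives: 0.5 (a mirror)
```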