I hope to see Jellyfin support this too (Plex is already getting support apparently) and hopefully it will work desktop-to-desktop and not just between streaming devices and phones.
Although it’s probably not massively needed as Jellyfin can already control remote devices.
If something could cast from one of my devices to another of my devices using the cast button, that’s all I want. I can strap one of those devices to my TV and be golden.
Nice! Would you mind sharing the configuration for permanently mounting? I tried it in the past but never could get it to work consistently.
I’ll look up the exact info when I get home and provide links if I can find them again.
The summary is that I had to add a line to /etc/fstab with the IP of the NAS and the path of the shared folder, then the mount point in Linux, the filesystem type for the mount, options that supply login creds and user/group IDs and establish the permissions I want applied to the mount, and an option that keeps the drive from trying to mount until my network is connected.
Finally, for that last option to work, I had to enable a process that I forget the name of. I think it was in systemd, but I was able to initiate it from the command line.
Since I don’t know your level of expertise, I’ll go step by step. Forgive me if you already know how to do some of this.
In terminal, type “sudo nano /etc/fstab” (without quotes). This brings up a file where you can add the mount point so it mounts at boot and set options for the mount. Go to the end of the file and enter a line like the following, substituting your info in the appropriate places:
//[static ip for nas]/[top level folder on nas you want to mount] /[mount point in Linux] [file system type for mount] [mount options, nas login credentials, permissions] 0 0
Mine looks like this: //192.168.1.0/Media /mnt/Media cifs _netdev,user=anonymouse,password=*****,uid=1000,file_mode=0777,dir_mode=0777 0 0
The “_netdev” option is the one that delays the mount until after your network is up. The “file_mode” & “dir_mode” set the mount permissions. There is info out there showing how to insert a reference to a credentials file instead of placing them in fstab in plain text, but I didn’t bother since I have my computer and user profile pretty well locked down.
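If you want to go the credentials-file route instead, it’s just a matter of swapping the user=/password= options for a credentials= option pointing at a root-only file. A minimal sketch (the file name and password here are placeholders):

    # /etc/nas-creds (chmod 600 so only root can read it)
    username=anonymouse
    password=yourpasswordhere

    # then in the fstab line, replace user=...,password=... with:
    credentials=/etc/nas-creds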
To get _netdev to work, I had to enter the following in terminal (without quotes): “sudo systemctl enable systemd-networkd-wait-online”.
I couldn’t find all the sites I visited while setting this up, but here are a few:
I believe they used Heritrix at one point. The important bit is that there is a special archive format that they use which is a standard (WARC, if I remember right). There are several tools that support it (both capturing to it and viewing it), and it allows for capturing a website in a ‘working’ condition, with history or something. I’m a bit fuzzy on it since it’s been some time since I looked into it.
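If you just want to play with the format, plain wget can capture to it, e.g.:

    # writes example-site.warc.gz alongside a normal mirror of the site
    wget --mirror --page-requisites --warc-file=example-site https://example.com/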
I set it up manually using this as a guide. It was a lot of work because I had to adapt it to my use case (not using a VPS), so I couldn’t just follow the guide, but I learned a lot in the process and it works well.
I had something manual set up originally as well, but it became a bit of a maintenance hassle. Moving configs to devices was a bit of a pain, and generating keys wasn’t easy.
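For reference, assuming this is WireGuard (the comments here don’t actually name it), the raw key generation is just one pipeline per device; it’s wiring the keys into every config that gets tedious:

    wg genkey | tee privatekey | wg pubkey > publickey
    # the private key goes in that device's [Interface] section,
    # the public key in every other peer's [Peer] section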
@AverageGoob I have this issue with one of my hosts as well. It appears to be a problem with the micro SD card. Same card, different pi = same problem. I'm currently working around it with a watchdog but will need to replace the card soon.
Are you running your OS from USB or from a micro SD card?
I upgraded to the Pi4 but I use this case. It has a daughter board that lets me use an M.2 SATA SSD over USB, but any USB-to-SATA adapter should work fine.
@a_fancy_kiwi I agree, same here. This is the last pi that's running off an SD card with services that do "significant" disk I/O. I have a few zeros that only really write to the card for OS updates. Their job is to collect data and send it via the network. I haven't had issues with that kind of workload using micro SD cards.
Edit: For Pis with write workloads I'm using basic USB3 SSDs. Didn't have good results with USB sticks though.
@AverageGoob The watchdog saves me from rebooting the host manually, but at the risk of data loss (though no more than with a locked-up SD card). I configured a custom script that writes to a file; when the card has problems, the writes stop and the watchdog kicks in. To keep the script from stressing the card even more, it only writes to the file every few minutes.
As you said it's only a workaround. I'll move the stuff on the problematic host to a VM with SSD shortly.
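In case it helps anyone else, the rough shape of the setup looks like this; file names and intervals are placeholders, not my exact config, and it assumes the standard Linux watchdog daemon:

    # /etc/watchdog.conf - reboot when the heartbeat file goes stale
    watchdog-device = /dev/watchdog
    # heartbeat file, kept on the SD card on purpose so writes exercise the card
    file = /home/pi/.sd-heartbeat
    # seconds without a change before the watchdog fires
    change = 600

    # the heartbeat itself, run from cron every few minutes (/etc/crontab):
    # */5 * * * * root date > /home/pi/.sd-heartbeat

If the card locks up, the write fails, the file goes stale, and the watchdog reboots the host.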
Kind of. Linkwarden seems to save as PDF. That’s better than nothing; however, preserving a functional copy of the pages would be better. ArchiveBox seems to do this.
I hate how cease and desist letters are essentially blackmail. Even if you did nothing wrong, you can still get fucked over by the costs of a potential legal battle.
It’s a bigger problem in the States than elsewhere. In the US, awarding legal costs is the exception, not the norm, so someone with a lot of money and access to lawyers can basically intimidate a defendant into avoiding court. In the rest of the world, courts are much more likely to award costs to a defendant who has done nothing wrong - if you file a frivolous lawsuit and lose, you’ll probably have to pay the costs of the person you tried to sue.
This guy’s in Germany, so I think he’d be alright if he clearly won. The issue, however, is that courts aren’t really equipped for handling highly technical cases and often get things wrong.
I wrote a bash script a while back that uses sshfs to mount an ssh server to the filesystem, then uses dd to write /dev/mmcblk0 to it as hostname-date.img, and finally unmounts the ssh server. A cron job runs that daily.
I run that on each of my rpis (just one rn, but there’s been as many as 4 going).
Any time I have an issue, be that my fault or not, I can just pull the sd card and write the last .img to it directly.
There’s some extra stuff in there too: it checks for the dependency sshfs and installs it if missing (for deploying to a new system without reconfiguring), cleans up backups older than x days, does logging, and can write just the log file as a test run instead of the whole filesystem.
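The core of it is only a few lines. Here’s a simplified sketch with a made-up host and paths (not my exact script), run as root from cron:

    #!/bin/bash
    # mount the ssh server, image the SD card to it, prune old images, unmount
    set -eu
    MNT=/mnt/imgbackup
    REMOTE=backup@backuphost:/srv/pi-images   # made-up destination
    KEEP_DAYS=14

    command -v sshfs >/dev/null || apt-get install -y sshfs
    mkdir -p "$MNT"
    sshfs "$REMOTE" "$MNT"
    trap 'fusermount -u "$MNT"' EXIT

    dd if=/dev/mmcblk0 of="$MNT/$(hostname)-$(date +%F).img" bs=4M conv=fsync
    find "$MNT" -name "$(hostname)-*.img" -mtime +"$KEEP_DAYS" -delete

Restoring is the same trick in reverse from whatever machine the card is plugged into: dd if=hostname-date.img of=/dev/sdX bs=4M, with sdX being the card reader.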
Sorry, but do you have a setup where you don’t need to worry about the atomicity of that operation? It sounds simple and effective, so I’d like to do it, but I’m concerned I may get something halfway through a write.
I suppose the odds are you’d at worst have a bad log file, whereas config files and binaries are used read-only the majority of the time.
I’ve run it on every pi I’ve used for several years now, though they are typically pretty quiet systems. Usually something like pihole or a reverse proxy. Not much writing going on. I’ve restored about a dozen of those images and never had an issue.
I also tend to keep 3-6 backups at a time. If the most recent is messed up for some reason, there’s others to try. (though I’ve never actually had to try more than one)
This looks neat, though it sounds like only the Grayjay/FUTO app can cast to it, and I doubt any official streaming app would natively adopt it. Assuming it’s not just casting a video feed from your phone, my guess as to how it works is that it copies the relevant cookies over to the FCast device, where it can pretend to be your phone as far as the server is concerned.
This would be fine if it supported all the apps I use and I were the only one ever casting, but I don’t want to force guests to install and configure another middleware app just to cast stuff. My hope is that Matter will somehow solve this, but I probably shouldn’t get my hopes up.
I should try setting up fcast either way though, see how it goes. Thanks.
Oh right, that makes sense. I was only thinking of Matter as serving low-bandwidth devices, but it also runs over WiFi and Ethernet, so I guess it can do video for security cameras etc., and evidently casting audio and video as well.
I’ve had this happen when I had too many USB devices plugged into it. It was running into undervoltage and acting unresponsive while trying to compensate. I solved it with a powered USB hub.
Edit: I’ve had pairing it with an off-brand power brick cause the same problem, too. Apparently the Pi 3 and later really want well-regulated power, and some of the cheapo bricks I had lying around, while providing the right volts and amps, didn’t control the variation well enough for a modern Pi.
That’s the weird part: I don’t have any USB devices attached. I have Ethernet, the power cable, and the case fan’s pins going to some headers.
The case did come with another power supply so maybe I’ll try that and see if anything changes.
This right here. As a member of the OpenNIC project, I used to run an open resolver, and it required a lot of hands-on maintenance. Basically what happens is someone sends a very small packet requesting the lookup of something that returns a huge amount of data (like DNSSEC records). They can make thousands of these requests in a short period, attempting to flood the target domain’s DNS servers and effectively take them offline, with your open resolver doing the attacking for them.
At the very least, you need to have strict rate-limiting controls on DNS lookups. And since the requests come in through UDP, they can spoof their IP address so you can’t simply block an attacker. When I ran into this issue, I wrote up scripts to monitor for a lot of requests to the same domain name and outright block those until the attack stopped. It wasn’t a great solution, but it did at least make sure my system wasn’t contributing to an attack.
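These days, if you happen to be running BIND, its built-in response rate limiting covers the basics; something like this in named.conf (the numbers are just a starting point):

    options {
        rate-limit {
            responses-per-second 10;
            window 5;
        };
    };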
Your best bet is to only respond to DNS requests for your own domain(s). If you really want an open resolver, think about limiting it with some sort of sign-up method (for instance, DDNS servers use a specific URL to register the changing IP of known users), but still keep the rate-limiting in place.
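Restricting recursion to known clients is also only a few lines in BIND, for example:

    options {
        recursion yes;
        allow-recursion { 127.0.0.1; 192.168.0.0/16; };
    };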