It could be an issue with the codecs (browsers are usually pretty limited in what they support). You could try to use a client like Jellyfin Media Player instead. It bundles libmpv, so it plays almost any video format there is.
I don’t know of anything built for that purpose, but you could use Home Assistant dashboards to pull it off pretty easily if you already have an instance set up.
I would like this for my media server, basically like a drop-in replacement for NFS shares. I still need it to be some sort of share instead of having to prompt it to send media across. Great project though, thanks
Back in the day I bought a fridge freezer combo, second hand, no handles. It used to be a built-in model. As handles I used two magnets from full-height drives; they were ludicrously strong and shaped a little bit like handles.
Full-height drives were 3.25" high, for those who are wondering.
It works the same either way. Borg does a lot of different backups on my home network, and I also have more than just the Borg backups that I want off-site, so one rclone of everything from that NAS share after all the other jobs are done makes more sense than duplicating Borg everywhere. The rclone’d copy can be used directly, just as if Borg had written it there itself.
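A rough sketch of that flow, assuming the Borg repos live under /mnt/nas/backups and an rclone remote named offsite is already configured (both names are made up for the example):

```sh
#!/bin/sh
# Run the usual Borg jobs first (repo path and source dirs are just placeholders).
borg create --stats /mnt/nas/backups/host1::'{hostname}-{now}' /etc /home

# Once every local backup has finished, push the whole share off-site in one go.
# The remote copy is byte-identical, so it can later be used as a Borg repo directly.
rclone sync /mnt/nas/backups offsite:backups
```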
Sending local ZFS snapshots to the remote ZFS might be problematic. Consider accidentally deleting important data locally and nuking all of your local snapshots, then sending that state to the remote ZFS: you’ve lost all of your snapshots and there’s no way to recover the deleted data. Instead, do what I do - keep the two ZFS systems separate and use a non-ZFS mechanism to transfer data - rsync, Syncthing, etc. That way, even if you delete everything locally, nuke all local snapshots and send the deletions via rsync remotely, you can still recover your data by rolling the remote ZFS back to a snapshot taken prior to the deletions. For reference, I have two ZFS machines doing frequent snapshots and Syncthing replicating data between them on an immediate basis.
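If the worst does happen, the recovery on the second machine is just a rollback. Roughly like this (the dataset and snapshot names are only examples):

```sh
# On the remote box, list the snapshots that pre-date the accidental deletion...
zfs list -t snapshot -o name,creation tank/data

# ...then roll the dataset back to one taken before Syncthing replicated the damage.
# -r also discards the newer snapshots that contain the deletions.
zfs rollback -r tank/data@auto-daily-before-the-oops
```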
!selfhosted, please do critique if you find some fundamental issues with this.
The docs say this, so yeah: "Send streams can either be 'full', containing all data in a given snapshot, or 'incremental', containing only the differences between two snapshots. ZFS receive reads these send streams and uses them to re-create identical snapshots on a receiving system."
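In practice that looks roughly like this (pool/dataset names and the ssh target are just examples):

```sh
# Full send: the first snapshot goes over in its entirety.
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs receive backup/data

# Incremental send: only the blocks that changed between the two snapshots are sent,
# and the receiving side ends up with an identical @next snapshot.
zfs snapshot tank/data@next
zfs send -i tank/data@base tank/data@next | ssh backuphost zfs receive backup/data
```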
Do not try to host outbound mail on residential IP blocks; delivery will be really bad. A cheap VPS is the same story. Your best bet is a VPS from a not-so-well-known provider; they may have avoided ending up on M$’s and Google’s blacklists. Inbound mail is fine anywhere, as long as you can have port 25 open. DDNS works too.
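A quick way to check whether your connection can even make outbound SMTP connections (assumes netcat is installed; the hostname is just Google’s public MX used as a test target):

```sh
# If this times out or is refused, the ISP is almost certainly blocking outbound port 25.
nc -vz gmail-smtp-in.l.google.com 25
```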
Don’t worry about the UDP ports, they’re only needed on the LAN and only in certain conditions. Basically Jellyfin uses them to “announce” things to the LAN.
On 7359 it announces to clients where to connect; this can help when you first start a client, letting it find the server automatically instead of you having to enter the IP or jellyfin.mydomain.com.
On 1900 it advertises itself as a DLNA server. This is only relevant if you have other DLNA-capable devices. DLNA is a cool protocol that allows devices to act as server, controller or renderer and to cooperate to cast streams. For example, you can use your phone as a DLNA controller to get media from Jellyfin acting as a DLNA server and cast it to a TV acting as a DLNA renderer. If your TV has DLNA capability, you may want to look at the BubbleUPnP phone app, which can act as a controller - that’s when enabling 1900 becomes interesting.
Or you can comment out the “ports:” section in your config and say “network_mode: host” instead and all 4 ports will be mapped automatically and work as intended (it’s what I do).
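For reference, a compose file along those lines could look something like this (the image tag and volume paths are just examples, and network_mode: host works this way on Linux hosts):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    # Replaces the whole "ports:" section; 8096, 8920, 7359/udp and 1900/udp are all reachable as-is.
    network_mode: host
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /mnt/media:/media:ro
    restart: unless-stopped
```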
Good to know. I thought there was some issue with those ports and the reverse proxy, because the DLNA function doesn’t seem to be working, but from some googling this seems to be more of a general Docker problem when you’re not using host mode for networking.
So far so good. The URL is correct, because it’s the external address. You also don’t need to publish both the http and https ports. I only map external https to internal http, but you can do https to https. No serious modern browser tries http first, and because I always force https anyway, it doesn’t need to be public. Only the reverse proxy may need it, for Let’s Encrypt.
Neither UDP port is needed for public access. I only have 8096 mapped to my reverse proxy and it works.
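As an illustration, that external-https-to-internal-http mapping in nginx could look roughly like this (the domain, certificate paths and upstream address are all placeholders):

```nginx
server {
    listen 443 ssl;
    server_name jellyfin.mydomain.com;

    # Certificates obtained by the reverse proxy, e.g. via Let's Encrypt.
    ssl_certificate     /etc/letsencrypt/live/jellyfin.mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jellyfin.mydomain.com/privkey.pem;

    location / {
        # External https terminates here; Jellyfin itself is reached over plain http on 8096.
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```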