Dbzero Lemmy has a relationship with the Horde AI shared LLM group. My primary use is for chat roleplay, but they have streamlined guides to hosting your own models for personal or Horde use. One of the primary interfaces is SillyTavern, but they integrate numerous models.
I recently set up a personal Owncast instance on my home server; it should do what you’re looking for. I use OBS Studio to stream random stuff to friends. If your webcam can send RTMP streams, it should be able to stream to Owncast without OBS in the middle; otherwise, you just need to set up OBS to capture from the camera and stream to Owncast over RTMP.
the communication itself should be encrypted
I suggest having the camera/OBS and Owncast on the same local network, as RTMP is unencrypted and could be intercepted between the source and the Owncast server, so make sure it happens over a reasonably “trusted” network. From there, my reverse proxy (Apache) serves the Owncast instance to the Internet over HTTPS (using Let’s Encrypt or self-signed certs), so it is encrypted between the server and clients. You can watch the stream from any web browser, or use another player such as VLC pointing to the correct stream address [1]
it seems that I might need to self-host a VPN to achieve this
Owncast itself offers no authentication mechanism for watching the stream, so if you expose this to the internet directly and don’t want it public, you’d have to implement authentication at the reverse proxy level (e.g. HTTP Basic auth), or, as you said, set up a VPN server (I use WireGuard) on the same machine as the Owncast instance and only expose the instance to the VPN network range (with the VPN providing the authentication layer). If you go for a VPN between your phone and the Owncast server, there’s also no real need to set up HTTPS at the reverse proxy level, as the VPN already provides encryption.
Of course you should also forward the correct ports (VPN or HTTPS) from your home/ISP router to the server on your LAN.
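If you go the reverse-proxy route, here is a rough sketch of what the Basic-auth setup could look like on a Debian-style Apache. The domain, cert paths and Owncast’s default web port 8080 are assumptions to adjust for your install; Owncast’s chat also runs over a websocket, which may additionally need mod_proxy_wstunnel on top of this.

```
# Create a password file for viewers (prompts for a password).
sudo htpasswd -c /etc/apache2/.owncast-htpasswd viewer

# Hypothetical vhost: TLS termination + reverse proxy + Basic auth in front of Owncast.
sudo tee /etc/apache2/sites-available/owncast.conf > /dev/null <<'EOF'
<VirtualHost *:443>
    ServerName stream.example.com

    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/stream.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/stream.example.com/privkey.pem

    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/

    <Location "/">
        AuthType Basic
        AuthName "Owncast"
        AuthUserFile /etc/apache2/.owncast-htpasswd
        Require valid-user
    </Location>
</VirtualHost>
EOF

sudo a2enmod ssl proxy proxy_http
sudo a2ensite owncast
sudo systemctl reload apache2
```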
The guide above doesn’t include Audiobookshelf installation, but you will quickly see that adding Audiobookshelf to the basic setup is very easy. There are two things I’ve learned since the initial setup which are worth a deviation from the guide above.
First, the recommendation in the guide to use a separate userid and groupid (1001) for the docker containers vs. your own userid/groupid (1000) is a royal PITA and not necessary for most basic use cases.
Second, and much more important, you MUST set up your VPN in a Gluetun container and then make your torrent client container a “service” of the Gluetun container. Yes, I know, that sounds like some advanced-level abstraction, but it is actually extremely easy to do and it will save you from getting a nastygram from your ISP when your VPN loses connection. The MPAA is extremely active with automated detection and processing of torrenting data, but if you set up your VPN with Gluetun, you have a perfectly effective kill switch when your VPN connection drops. And, no, the built-in kill switch on your VPN client won’t work with containers.
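To make that concrete, here is a minimal sketch using the plain docker CLI; the provider, image names, key and paths are placeholders, and with docker compose the equivalent is `network_mode: "service:gluetun"` on the torrent container.

```
# Gluetun owns the network namespace and holds the VPN tunnel.
# NET_ADMIN is required; note that the torrent client's web UI port
# has to be published here, on the gluetun container.
docker run -d --name gluetun \
  --cap-add NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=mullvad \
  -e VPN_TYPE=wireguard \
  -e WIREGUARD_PRIVATE_KEY=<your-private-key> \
  -p 8080:8080 \
  qmcgaw/gluetun

# The torrent client joins gluetun's network namespace, so all of its traffic
# goes through the tunnel; if the VPN drops, it has no route out at all.
# That is the kill switch.
docker run -d --name qbittorrent \
  --network container:gluetun \
  -e WEBUI_PORT=8080 \
  -v /path/to/downloads:/downloads \
  lscr.io/linuxserver/qbittorrent
```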
I use Plex instead of Jellyfin, but there’s the ability to just add a friend’s library and it pulls in without mounting anything. I thought Jellyfin had that as well?
plex uses a centralized service for this kind of nonsense. most of us are using standalone server products.
this use case calls for either centralized storage (s3 bucket) or an access mechanism (all them VPNs) to distributed channels (a la plex)... but friends don’t let friends use plex.
i’m curious about ipfs, as distributed file systems sound like a new kink i should have
I would love to get rid of Plex, but jellyfin failed the spouse test last summer and it never really liked my GDrive mount
Plus, Plex clients are everywhere, so it’s all but guaranteed that whoever I decide to onboard is going to have something compatible. I’ve even had early smart TVs from like 2013 with that weird Yahoo app store thing that had a Plex app that still worked even when the Netflix app didn’t lolol
Funnily enough, my wife is the only person who likes jellyfin. It works perfectly for her. Everyone else? I’ve never had it work even once. And I have no damn idea why.
You have to pay for Plex to access features you just have on Jellyfin. Like being able to stream to a mobile device.
I don’t know how so many people seem to have issues with it when it’s always been as easy as installing it directly on my computer and booting up the web interface, or now running it in Docker with a simple compose file.
There are alternatives for most features people think are missing. There are several apps that work on mobile if you want to stream music and alternate clients for video playback as well.
Jellyfin is nice but has a long way to go to replicate the features of Plex [like PlexAmp and Sonic Analysis] and features that are “Plex adjacent” [like Tautulli].
A dedicated music app?
Music filtering/smart playlists? Sonic analysis?
Good 4k/x265 performance?
Has a third party (or built in) utility that shows me streaming usage per person?
Allows me to limit remote users to streaming from a single IP address at a time?
Lets me watch something together with another remote user?
Has an app for most any device (like Plex or Emby) that does NOT require sideloading?
Has built-in native DVR streaming/recording support?
Two factor authentication?
I’m surprised the client doesn’t support switching between servers. When I had jellyfin running I exposed it through traefik to allow external playback. Figure it would make sense that you could just show multiple servers in the UI. Add several reverse proxied addresses and boom.
It’s actually a suggested configuration / best practice to NOT have container user IDs matching the host user IDs.
Ditch the idea of root and user in a docker container. For your containerized application use 10000:10001. You’ll have only one application and one “user” in the container anyways when doing it right.
To be even more on the secure side use a different random user ID and group ID for every container.
This is really dependent on whether or not you want to interact with mounted volumes. In a production setting, containers are ephemeral and should essentially never be touched. Data is abstracted into stores like a database or object storage. If you’re interacting with mounted volumes, it’s usually through a different layer of abstraction like Kibana reading Elastic indices. In a self-hosted setting, you might be sidestepping dependency hell on a local system by containerizing. Data is often tightly coupled to the local filesystem. It is much easier to match the container user to the desired local user to avoid constant sudo calls.
I had to check the community before responding. Since we’re talking self-hosted, your advice is largely overkill.
… basically anything. Yes. You will always find yourself in situations where the best practice isn’t the best solution.
In your described use case an option would be having the application inside the container running with 10000:10001 but writing the data into another directory that is configured to use 1000:1001 (or whatever the user is you want to access the data with from your host) and just mount the volume there. This takes a bit more configuration effort than just running the application with 1000:1001 … but still :)
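A rough sketch of one way to get there with the docker CLI (the IDs, paths and image are illustrative; this variant shares the host group on the data directory so both the container process and your host user can write without sudo):

```
# Data directory owned by your host user's group (1001), group-writable,
# with the setgid bit so new files keep that group.
sudo mkdir -p /srv/myapp/data
sudo chgrp 1001 /srv/myapp/data
sudo chmod 2775 /srv/myapp/data

# The containerized app runs as uid 10000 but with gid 1001, so it can write
# to the bind mount and your host user can still read/edit the files.
docker run -d --name myapp \
  --user 10000:1001 \
  -v /srv/myapp/data:/data \
  some/image
```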
Yep! The names are basically just a convenient way for referencing a user or group ID.
Under normal circumstances you should let the system decide what IDs to use, but in the confined environment of a docker container you can do pretty much what you want.
If you really, really, really want to create a user and group just set the IDs manually:
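For example, a sketch for a Debian/Ubuntu-style image (Alpine uses addgroup/adduser instead):

```
# Fixed IDs, no home directory, no login shell.
groupadd --gid 10001 app
useradd  --uid 10000 --gid 10001 --no-create-home --shell /usr/sbin/nologin app
```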
I wish there were an alternative in a sane programming language that I could actually contribute to. For some reason PHP is extremely sparse in its logging, and errors mostly only pop up on the frontend. Having to debug errors after an update by following some guide to edit a file in the live environment that sets a debugging variable, puts the system in maintenance mode, and stores additional state in the DB is scary.
Plus PHP is so friggin slow. Nextcloud takes noticeable time to load nearly anything. Even instances hosted by pros that only host nextcloud are just slow.
I’ve been using Linode for a decade (or more) now without any issues. I’d encourage you to contact their support about this issue. Assuming you’re on the up-and-up, this sounds like a bug and I’m sure they’d be happy to help.
If you decide not to go with Linode though I think Digital Ocean is a good alternative.
No they didn’t grandfather anybody in, they made the price changes to compute universally back in April of last year. The only plan not changed was the $5 nanode so if that’s all you’re running then that’s probably why your bill didn’t change.
19 has federation bugs. Mainly outgoing but I’ve also seen incoming federation gradually fail. Restart the docker container routinely (cron job) until fixes come out.
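As a stopgap, a crontab entry like this does the trick (the container name is whatever yours is called):

```
# Restart the Lemmy container every night at 04:00 until the federation fixes land.
0 4 * * * docker restart lemmy >/dev/null 2>&1
```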
You totally can, but since it will be on all day with 4 HDDs, look into what wattage you want to live with. There are some small NUCs or Pi-based NAS builds with low wattage. There is OpenMediaVault or FreeNAS/TrueNAS software to install.
Note that there is some reliability drawback to spinning hard disks up and down repeatedly. Maybe unintuitively, HDDs that spin constantly can live much longer than those that spend 90% of their time spun down.
This might not be relevant if you use only SSDs, and might never affect you, but it should be mentioned.
You could totally turn it on as needed; Wake-on-LAN is good for that. But typically when people run a NAS it is for streaming audio, video, file sync and backups, and maybe Docker running other services, so the NAS is usually on 24/7 so it is available on demand. But it doesn’t have to be 100% uptime if you don’t want it to be. For example, I have two OpenMediaVault boxes, one on a Pi and one on an old Iomega NAS. The Pi is always on with an attached drive and serves Samba shares and DLNA/DAAP shares. It has Docker running Syncthing, a CUPS print server, Trilium Notes, and Home Assistant, so it makes sense for it to be on all day, especially because my wife’s system backs up to it daily automatically. The converted Iomega NAS is mainly a backup machine since it is old and not as performant (it only has 100 Mbit networking), so it gets turned on to do a bulk backup and not much else.
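For the turn-on-as-needed case, waking the box is a one-liner from any machine on the LAN, assuming WoL is enabled in the NAS’s firmware and NIC settings (the MAC address here is a placeholder):

```
# From the `wakeonlan` package; `etherwake` is an equivalent alternative.
wakeonlan AA:BB:CC:DD:EE:FF
```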
I haven’t tried it, but I’ve been thinking about it… Since Nextcloud supports S3 storage, it would seem its photo apps, such as Memories, should work that way?
Yep, that’s pretty much it. I have it working with iDrive this way. Install Nextcloud and the Memories app. Add S3 as external storage. Point Memories to external storage. Done.
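If you prefer doing it from the CLI, the same external-storage mount can (as far as I remember) be created with occ. Treat this as a sketch and double-check the backend and option names against the files_external docs; the mount point, bucket, host and keys are placeholders.

```
# "amazons3" is Nextcloud's S3-compatible external-storage backend
# (it also covers iDrive e2, Wasabi, MinIO, etc.).
php occ files_external:create /Photos amazons3 amazons3::accesskey \
  --config bucket=my-photos \
  --config hostname=s3.example.com \
  --config use_ssl=true \
  --config key=ACCESS_KEY_ID \
  --config secret=SECRET_ACCESS_KEY
```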