I use Moonlight Qt on a Raspberry Pi 5, and used it on a Raspberry Pi 4 before that. Both connected via Ethernet, streaming at 150 Mbps. It works very well and feels like being at the computer: there is next to no perceptible delay, and Moonlight reports around 5 ms.
Somewhere else I use a Raspberry Pi 3 A+ with Moonlight Embedded, connected via Wi-Fi, and it works pretty well, though I notice the delay a bit more. It's still able to stream at 40 Mbps.
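For reference, a bitrate cap like that looks roughly like this with Moonlight Embedded (the host address is a placeholder, and -bitrate is in Kbps):

```sh
# Pair once, then stream 1080p60 capped at 40 Mbps
# (192.168.1.50 stands in for the host PC's address)
moonlight pair 192.168.1.50
moonlight stream -1080 -fps 60 -bitrate 40000 192.168.1.50
```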
I have a 3B+ I want to try this with; it has double the RAM of the 3A+ and an Ethernet port. Do you see yours hit the RAM limit, or do you think the delay could be Wi-Fi related?
I use Syncthing for this type of task on my PC and phone, and it stores a copy of the shared folder on the server, with the option for file versioning. Having a server is optional, by the way.
AFAIK, Syncthing clones the entire folder across peers (the server is just another peer, it seems), which isn't ideal for my use case. Do you know of any current way to configure it for selective syncing?
I don't think it can do selective syncing. I've also been searching for a similar solution but didn't find one, and finally opted for Syncthing with my most important files. Other files I can get via the web using Filestash.
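One partial workaround: Syncthing's ignore patterns can approximate selective sync by whitelisting a few subfolders and ignoring everything else. A sketch of a .stignore, with example folder names:

```
// .stignore in the folder root: keep these subfolders, skip the rest
!/Documents
!/Photos
*
```

The order matters: the ! includes have to come before the catch-all *.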
ownCloud supports selective sync, and seems to perform a lot better than Nextcloud.
Alternatively, you could roll your own with rclone, which is essentially an open-source alternative to Mountain Duck. Then you can just use a simple connection via SFTP, FTP, WebDAV, etc.
Non-oCIS ownCloud still needs a dedicated database, and they recommend against SQLite in production.
I've looked at rclone mounting with the --vfs-cache-* flags, but I'm not sure how it can smart-sync like Mountain Duck or handle conflicts as elegantly as the Nextcloud/ownCloud clients do. Let me know how to set it up that way if possible.
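Not sure it fully matches Mountain Duck, but the closest I've gotten to smart sync is a full VFS cache with size and age bounds; something like this (the remote name "mycloud" and the limits are just examples):

```sh
# Cache file contents locally on first access, bounded by size and age
rclone mount mycloud: ~/Cloud \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 72h \
  --daemon
```

For conflict handling it won't match the Nextcloud/ownCloud clients; rclone bisync is the closer tool for two-way sync, but it's still not as elegant.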
I have a feeling you're talking about the TTY. You can't use the mouse because there's no graphical interface to begin with; you're in "pure" console mode. That's probably why the fonts look weird too: it's likely not running at your monitor's native resolution.
As other people said, though, it's pretty much expected. Servers are more or less expected to run "headless"; you'd typically SSH in rather than plug a monitor directly into the machine.
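If the console bothers you anyway, you can usually sharpen it without installing a GUI. On Debian-family systems, something like this (assuming GRUB and a 1080p panel):

```sh
# Pick a larger, cleaner console font interactively
sudo dpkg-reconfigure console-setup

# Or pin the framebuffer console to the panel's native mode:
# add video=1920x1080@60 to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then regenerate the GRUB config
sudo update-grub
```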
How often depends on how much work the data is to recreate, or the consequences of losing it.
Some systems don't hold real data locally and get a backup every week. Most get a nightly backup. Some with a high rate of change get an extra run at lunch/the middle of the workday.
Some have hourly backups/snapshots, where recreating the data would be impossible. Critical databases get hourly backups plus transaction-log streaming offsite.
How long to keep history depends on how likely an error is to go unnoticed, but keep a minimum of 14 days. Most have 10 dailies + 5 weeklies + 6 monthlies + 1 yearly.
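As a sketch, those tiers map onto cron almost directly; backup.sh here is a hypothetical wrapper around whatever backup tool you actually use:

```sh
# Hypothetical crontab implementing the tiers above
0 2 * * *     /usr/local/bin/backup.sh nightly   # default: nightly run
0 * * * *     /usr/local/bin/backup.sh hourly    # high-change / critical systems
0 12 * * 1-5  /usr/local/bin/backup.sh midday    # extra middle-of-workday run
```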
If you have paper receipts and can easily recreate lost data, daily seems fine.
Depending on what you're trying to host and where you live, power usage and your own hardware might cost more than the VPS you'd need to host it.
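Back-of-the-envelope, assuming a 50 W machine running 24/7 at $0.30/kWh (roughly Australian residential rates): 0.05 kW × 730 h ≈ 36.5 kWh a month, which is about $11 in electricity alone, before you've bought any hardware. A small VPS often costs less than that.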
This. Hosting at home might be cheaper if you're serving a lot of data, but in that case your upload speed is going to kill you.
I'm a keen self-hoster, but my public-facing websites are on a $4 VPS (Binary Lane, which I recommend since you're in Aus). In addition to less hassle, you get faster speeds and (probably) better uptime.
For a self-hosted RSS feed service, there are several options:
Tiny Tiny RSS: An open-source, web-based news feed reader and aggregator for RSS and Atom feeds, praised for the availability of its Android client.
FreshRSS: A free, self-hosted RSS and Atom feed aggregator that is known for being lightweight, powerful, and customizable. It also supports multi-user access, custom tags, has an API for mobile clients, supports WebSub for instant push notifications, and offers web scraping capabilities.
Miniflux: A minimalist and opinionated feed reader that is straightforward and efficient for reading RSS feeds without unnecessary extras. It’s written in Go, making it simple, fast, lightweight, and easy to install.
I’ve been running Miniflux on a free tier GCP instance for a few months now. Then I use RSS Guard on my desktop and FeedMe on my phone to read stuff.
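If anyone wants to try it, a minimal Compose file for Miniflux looks roughly like this (the credentials are placeholders you should change):

```yaml
services:
  miniflux:
    image: miniflux/miniflux:latest
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
      - RUN_MIGRATIONS=1
      - CREATE_ADMIN=1
      - ADMIN_USERNAME=admin
      - ADMIN_PASSWORD=changeme   # placeholder, change it
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=miniflux
      - POSTGRES_PASSWORD=secret  # placeholder, change it
      - POSTGRES_DB=miniflux
    volumes:
      - miniflux-db:/var/lib/postgresql/data

volumes:
  miniflux-db:
```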
I'd like to try FreshRSS, but I just cannot get my URLs to resolve correctly with it. After a few hours of trying, I reverted to "if it ain't broke, don't fix it". Miniflux all the way for me (for now).
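In case you ever retry it: broken URLs in FreshRSS are usually a base_url mismatch (it lives in data/config.php). If I remember right, the bundled CLI can set it; the container name and domain here are just examples:

```sh
# Point FreshRSS at the URL it is actually served from
docker exec freshrss ./cli/reconfigure.php --base_url https://rss.example.com
```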
I’m more worried about what’s going to happen to all the self-hosters out there whenever Cloudflare changes their policy on DNS or their beloved free tunnels. People trust those companies too much. I also did at some point, until I got burned by DynDNS.
We start paying for static IPs. If Cloudflare shuts down overnight, a lot of stuff stops working, but no data is lost, so we can get it back up with some work.
They're just creating a situation where people forget how to do things without a magic tunnel or whatever. We've seen this with other things, and proof of it is that you're suggesting you'll require a static IP when in fact you won't.
Where I live, many ISPs that use CG-NAT will only give you a public IP as a paid static IP. But of course there are other options as well; my point was that those other options don't disappear.
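And dynamic DNS is trivial to roll yourself as long as you have any public IP at all. A sketch, with a hypothetical registrar update endpoint; most registrars offer something like it:

```sh
# Cron this every few minutes: look up the current public IP and
# push it to the registrar's DDNS endpoint (the URL is made up)
IP=$(curl -s https://ifconfig.me)
curl -s "https://dns.example-registrar.com/update?hostname=home.example.com&myip=${IP}"
```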
Though I do get the point that Cloudflare aren't giving away something for nothing. The main reason, to me, is to get hobbyists using it so they start using it (on paid plans) in their work, or otherwise to get people to upgrade to paid plans. The "give something away for free until they can't live without it, then force them to pay" model is pretty classic in tech by now.
Yes, this is a problem, and a growing one, like a cancer. These new self-hosting and software-development trends are essentially someone reconfiguring and mangling the development and sysadmin learning process, tools, and experience to the point where people are required to spend more than ever, for no reason other than profit.
Well, the issue is that even if your backup is physically in a different location (you can ask to have your S3 backup storage hosted in a different datacenter than the VMs), if the servers hosting both services (VMs and S3) are managed by the same technical entity, then a ransomware attack on that company can affect both.
So, get S3 storage for your backups from a completely different company?
I just wonder to what degree this would impact the bandwidth usage of your VM if, say, you do a complete backup of it every day to a host that would be considered "off-premises".
If you back up your VM data to the same provider you run your VM on, you don't have an "off-site" backup, which is one criterion of the 3-2-1 backup rule.
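In practice that can be as simple as pointing your backup tool at a bucket from an unrelated provider. A sketch with restic; the endpoint, bucket, and retention numbers are examples:

```sh
# Repository at a second, unrelated provider
export RESTIC_REPOSITORY="s3:https://s3.other-provider.example/my-backups"
export AWS_ACCESS_KEY_ID="..."          # placeholder credentials
export AWS_SECRET_ACCESS_KEY="..."

restic backup /srv/data
# Prune to a retention schedule like the one mentioned earlier in the thread
restic forget --keep-daily 10 --keep-weekly 5 --keep-monthly 6 --keep-yearly 1 --prune
```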
Yeah, it's been trash from the start. I tried it two years ago, and the unpredictable, weird shit it did was impossible to troubleshoot. It was worse than trying to run Docker on Windows, if that can be believed.
Debian with the Docker convenience script is the way to run Docker.
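For anyone who hasn't used it, that's the script Docker publishes at get.docker.com:

```sh
# Download and run Docker's official install script on Debian
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```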