It’s good for me because I’m piss poor at programming. In my defense, I’m not a programmer or even programmer-adjacent. I do see how it wouldn’t be useful to a pro. It has also occasionally given me garbage advice that an expert would spot right away, while I had to figure out on my own that it was ‘hallucinating’ again. There’s nothing better for learning than troubleshooting, though!
I can absolutely see it getting useful for a pro. It’s already a better version of IDE templates. If you have to write boilerplate code this can already do that. It’s a huge time saver for the things you’d have to go look up to remember how to do and piece together yourself.
Example: today I wanted a quick way to serve my current working directory over HTTP so I could do some quick web work. I asked ChatGPT to write me a bash function I could stick in my profile to do this, and I told it to pick a random unused port. That would have taken me much longer had I gone to look up how to do all of that myself. The only hint I gave it was to use the Python built-in module for serving HTTP.
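For anyone curious, here’s a minimal sketch of that kind of function (not the exact one ChatGPT gave me; `serve_cwd` is just a made-up name, and it assumes `shuf` and `ss` are available):

```bash
# Serve the current working directory over HTTP on a random unused port.
serve_cwd() {
    local port
    while :; do
        port=$(shuf -i 20000-65000 -n 1)
        # keep picking until nothing is already listening on that port
        ss -tln | grep -q ":$port " || break
    done
    echo "Serving $(pwd) on http://localhost:$port"
    python3 -m http.server "$port"
}
```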
There’s a project called Tabby that you can host as a server on a machine that has a GPU, and it has a VSCode extension that connects to the server.
The default model is called StarCoder, and it’s the small version, 1B parameters. The downside is that it’s not super smart (though still an improvement over built-in tools), but since it’s such a small model, I’m getting sub-second processing times.
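For reference, the server side is usually started from their Docker image. This is a sketch from memory, so treat the image tag, model name, and flags as approximate and check Tabby’s docs:

```bash
# Run the Tabby server on the GPU box, then point the VSCode extension
# at http://<server>:8080
docker run -d --gpus all -p 8080:8080 \
  -v "$HOME/.tabby:/data" \
  tabbyml/tabby serve --model TabbyML/StarCoder-1B --device cuda
```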
I remember it was already like this in the forums years ago. It actually made me stop using it and run a custom-made web-based reader for some time.
I wouldn’t use it anymore nowadays.
FreshRSS is the way to go. It even has plugins (and a plugin for YouTube channels as RSS feeds, very convenient).
Tailscale+Headscale are pretty easy to implement these days. Since it’s effectively zero trust, the tunnels become the encrypted channel so there’s an argument that HTTPS isn’t really required unless some endpoints won’t be accessing services over the Tailnet. SmallStep and Caddy can be used to automatically manage certs if it’s needed though.
You can even configure a PiHole (or derivative) to be your DNS server on the VPN, giving you ad blocking on the go.
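Roughly, that boils down to pointing the tailnet’s DNS at the PiHole’s tailnet IP (done in Headscale’s config or the Tailscale admin console, depending on your setup) and having clients accept it. A sketch, with the IP as a placeholder:

```bash
# On each client: accept the DNS servers the tailnet pushes.
# The nameserver itself (the PiHole's tailnet IP, e.g. 100.x.y.z) is set on
# the control server side, not here.
tailscale up --accept-dns=true
```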
there’s an argument that HTTPS isn’t really required…
Tailscale is awesome, but you gotta remember that Tailscale itself is one of those services (yikes). Like all applications it’s potentially susceptible to vulnerabilities and exploits, so don’t fall into the trap of thinking that anything in your private network is safe because it’s only available through the VPN. “Defence in depth” is a thing, and you have nothing to lose from treating your services as though they were public and having multiple layers of security.
The other thing to keep in mind is that HTTPS is not just about encryption/confidentiality but also about authenticity/integrity/non-repudiation. A cert tells you that you are actually connecting to the service that you think you are and it’s not being impersonated by a man in the middle/DNS hijack/ARP poison, etc.
If you’re going to the effort of hosting your own services anyway, might as well go to the effort of securing them too.
Tailscale isn’t an exposed service. Headscale is, and it isn’t connected to the Tailnet. It’s a control server used to communicate public keys and connectivity information between nodes. Sure, a threat actor can join nodes to the Tailnet should it become compromised. But have you looked at Headscale’s codebase? The attack surface is significantly smaller than anything like OpenVPN.
A cert tells you that you are actually…
I’m all for SSL/TLS, but it’s more work and may not always be worth the effort depending upon the application, which is exactly why I recommended SmallStep+Caddy. Let’s not pretend that introducing something like a CA doesn’t add complexity and overhead, even if it’s just distributing the root cert to devices.
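To put a shape on that overhead, a rough sketch with step-ca (commands from memory, so double-check against the SmallStep docs; the URL and fingerprint are placeholders):

```bash
# One-time: initialise the internal CA (interactive - asks for names, DNS, password)
step ca init

# On each device that should trust internally issued certs: fetch the root cert
# and add it to the system trust store
step ca bootstrap --ca-url https://ca.internal:9000 --fingerprint <root-fingerprint>
step certificate install "$(step path)/certs/root_ca.crt"
```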
MITM/DNS Hijack/ARP Poisoning…
Are you suggesting that these attack techniques are effective against zero trust tunnels? Given that the encryption values are sent out of band, via the control channel, how would one intercept and replay the traffic?
Absolutely! And it’s a great system that I thoroughly recommend. The attack surface is very small but not non-existent. There have been RCEs using things like DNS rebinding (CVE-2022-41924) in the past, and although I’m not suggesting that it’s in any way vulnerable to that kind of thing now, or that it even affected most users, we don’t know what will happen in the future. Trusting a single point of failure with no defence in depth is not ideal.
it’s more work and may not always be worth the effort
I don’t really buy this. Certs have been free and easy to deploy for a long time now. It’s not much more effort than setting up whatever service you want to run as well as head/tailscale, and whatever other fun services you’re running. Especially when stuff like caddy exists.
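As a concrete example of how low the bar is these days (hostname and upstream port are placeholders, and the name has to be publicly resolvable for Let’s Encrypt to issue anything):

```bash
# Caddy requests and renews the certificate automatically
caddy reverse-proxy --from nextcloud.example.com --to localhost:8080
```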
I recommended SmallStep+Caddy.
Yes! Do this if you don’t want to get your certs signed for some reason. I’m only advocating against not using certs at all.
Are you suggesting that these attack techniques are effective against zero trust tunnels
No I’m talking about defence in depth. If Tailscale is compromised (or totally bypassed by someone war driving your WiFi or something) then all those services are free to be impersonated by a threat actor pivoting into the local network after an initial compromise. Don’t assume that something is perfectly safe just because it’s airgapped, let alone available via tunnel.
I feel like it’s a bit like leaving all your doors unlocked because there’s a big padlock on the fence. If someone has a way to jump the fence or break the lock, you don’t want them to have free rein after that point.
My claim is that Headscale has a lesser likelihood of compromise than Nextcloud, and that the E2EE provides an encrypted channel between nodes without an immediate need for TLS. Of course TLS over E2EE enhances CIA. There’s no pushback to defense in depth here. But in the beginning, the E2EE will get them moving in the right direction.
OP began the post by stating that the login page to a complex PHP web application is internet facing (again, yikes). Given the current implementation, I can only assume that OP is not prepared to deploy a CA, and that the path of least resistance – and bolstered security – can be via implementation of HS+TS. They get the benefit of E2EE without the added complexity, for which there is plenty, of a CA until if/when they’re ready to take the plunge.
If we’re going to take this nonsense all or nothing stance, don’t forget to mention that they’re doing poorly unless they implement EDR, IDS, TOTP MFA on all services, myriad DNS controls, and full disk encryption. Because those components don’t add to the attack surface as well, right?
My only issue was with the assertion that OP could comfortably do away with the certs/https. They said they were already using certs in the post and I wanted to dispel the idea that they arguably might not need them anymore in favour of just using headscale as though one is a replacement for the other.
Yes, mostly gpt4all.io only to find out that even the “uncensored” models are bullshit and won’t even provide you with a Windows XP Pro key. That’s kind of my benchmark for models nowadays. :P
Well dang, I have Nextcloud installed as a snap (which has been perfectly stable for me when running on Ubuntu Server), but I was thinking of switching over to a docker installation; this thread doesn’t exactly fill me with enthusiasm for that idea…
Anecdotal, but I’ve had a container running Nextcloud in an LXC on Proxmox along with PiHole, Step CA, Bacula, and quite a few other services, and I’ve had zero downtime since June 2023. I even have Tailscale rigged to use PiHole as the tailnet DNS to have ad blocking on the go.
Guess that restart: always value in the Compose config is pulling its weight lol
I ended up on the snap because I couldn’t get the AIO install working properly. My snap version has been super solid. I think I’m gonna stick with it for a while.
I haven’t tried any of them but I did just listen to a podcast the other week where they talk about LlamaGPT vs Ollama and other related tools. If you’re interested it’s episode 540: Uncensored AI on Linux by Linux Unplugged
For a self-hosted RSS feed service, there are several options:
Tiny Tiny RSS: It’s an open-source web-based news feed reader and aggregator for RSS and Atom feeds, praised for its Android client availability.
FreshRSS: A free, self-hosted RSS and Atom feed aggregator that is known for being lightweight, powerful, and customizable. It also supports multi-user access, custom tags, has an API for mobile clients, supports WebSub for instant push notifications, and offers web scraping capabilities.
Miniflux: A minimalist and opinionated feed reader that is straightforward and efficient for reading RSS feeds without unnecessary extras. It’s written in Go, making it simple, fast, lightweight, and easy to install.
I’ve been running Miniflux on a free tier GCP instance for a few months now. Then I use RSS Guard on my desktop and FeedMe on my phone to read stuff.
I’d like to try FreshRSS, but just cannot get my URLs to resolve correctly with it. After a few hours of trying, I reverted to if it ain’t broke, don’t fix it. Miniflux all the way for me (for now).
Backups are usually encrypted by most popular backup programs, either by default or as an option (restic, borg, duplicati, veeam, etc.). So that would take care of someone else getting their hands on your backup data.
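restic is a good illustration, since encryption there isn’t even optional (bucket name is a placeholder; credentials come from the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables):

```bash
# Create the repository once - restic asks for a passphrase and everything
# pushed to it afterwards is encrypted client-side
restic -r s3:s3.amazonaws.com/my-backup-bucket init

# Subsequent backups
restic -r s3:s3.amazonaws.com/my-backup-bucket backup /home/me/documents
```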
I never store my actual files on a cloud service, only encrypted backups.
For local data on my devices, my laptop is encrypted with bitlocker, and my Android phone is by default. My desktop at home is not though.
Indeed. Whatever you put in a cloud needs backups. Not only at the cloud provider, but also “at home”.
There was a case of a cloud provider shutting down a few months ago. The provider informed their customers, but only the accounting departments that were responsible for the payments. And several of those companies’ accounting departments did not really understand the message beyond “this no longer needs to be paid”.
So for the rest of the company, the service went down hard after a grace period, when the provider deleted all customer files, including the backups…
Anyway, it just has one view mode with 3 panels and it’s not customizable. At the moment, the most featured and extensible RSS feed service seems to be FreshRSS, as suggested in the thread by @specseaweed.
The real issue here is backups vs disaster recovery.
Backups can live on the same network. Backups are there for the day to day things that can go wrong. A server disk is corrupted, a user accidentally deletes a file, those kinds of things.
Disaster recovery is what happens when your primary platform is unavailable.
Your cloud provider getting taken down is a disaster recovery situation. The entire thing is unavailable. At this point you’re accepting data loss and starting to spin up in your disaster recovery location.
The fact they were hit by crypto is irrelevant. It could have been an earthquake, flooding, terrorist attack, or anything, but your primary data center was destroyed.
Backups are not meant for that scenario. What you’re looking for is disaster recovery.
On the other hand, most of the disaster scenarios you mention are solved by geographic redundancy: set up your backup / DR storage in a datacenter far away from the primary service. A scenario where all services, in all datacenters managed by a cloud provider, are impacted is probably new.
It is something that, considering the current geopolitical situation we are in now (and one that I assume will only get worse), we had better keep in the back of our minds.
It should be obvious from the context here, but you don’t just need geographic separation, you need “everything” separation. If you have all your data in the cloud, and you want disaster recovery capability, then you need at least two independent cloud providers.
Nope. Fully self-hosted livestreaming. I personally use it to stream games. I started a community at !owncast/lemmy.world and I’ve listed a few different streams. Some folks stream games, others classic movies, music, etc. It’s your own self-hosted Twitch or YT streaming, etc.
I’m not understanding what you’re stating. Me streaming a video game isn’t blogging. If you mean that there isn’t a list of folks all streaming, well there’s directory.owncast.com to find folks. If you mean only you can stream to it, well that’s not true as you can set up multiple stream keys and allow others to stream to it as well. So I’m really not understanding what you’re stating.
dont want to get into a semantic argument about how you distribute data. but if you have a site where you post your own personal shit all the time, including 'streaming', youre doing nothing different than 'blogs' from 20 years ago. the number of viewers/casters is irrelevant.
yes, i love all the new tech. its just funny how we keep renaming the same pieces.
streaming is just yesterdays podcasts which were everyones vlogs before that. its all the same shit.
i just found it funny they owncast guy claimed to not be able to 'talk' about his 'blog + video'
This is literally the self-hosted community. I’m talking about self-hosted livestreaming platform. If you want to call it a blog + video, ok sure. Everything is basically a rehash of everything else. Just trying to share some self-hosted information. And I’m not the dev of Owncast or anything, just someone trying to make others aware of self-hosting software.
@originalucifer@ozoned hm.. Why do you call it a blog then? It's just someone's web page with text, pictures and video published to it. Language evolves and new words can describe new implementations better.
it was about communication. we struggle so hard against calling it something we dont want that its now labeled 'difficult to describe'. which i find silly
It's not that difficult to describe. The media is described based on its content, format, and time of release.
If the core content is text-based, it's a blog. If it's audio-based, it's a podcast. If it's video-based, it's a either a vlog (for personal content) or simply video (for topical content). These all assume the content was first created, and then released.
If it's released at the time it's produced, it's a livestream, or just a stream.
I have been looking into a way to copy files from our servers to our S3 backup storage, without having the access keys stored on the server (as I think we can assume those will be one of the first things the ransomware toolkits will be looking for).
Perhaps a script on a remote machine that initiates an ssh to the server and does an “s3cmd cp” with the keys entered from stdin? So far, I have not found a way to do this.
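One way around it is to invert the direction: keep the keys only on a separate backup runner and have that machine pull the data over ssh and stream it into S3, so nothing credential-shaped ever sits on the server. A rough sketch (hostnames, paths, and the bucket are placeholders; older s3cmd versions may not accept `-` for stdin, in which case `aws s3 cp -` works the same way):

```bash
#!/usr/bin/env bash
# Runs on the backup machine, which holds ~/.s3cfg; the server being backed up
# never sees the S3 keys.
set -euo pipefail

SERVER="user@server.example"
SRC="/srv/data"
DEST="s3://my-backup-bucket/backup-$(date +%F).tar.gz"

# Stream a tarball over ssh straight into S3 without touching the server's disk
ssh "$SERVER" "tar czf - $SRC" | s3cmd put - "$DEST"
```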
What ARM board :p
Honest question. All the ones I have seen are really awful and I would love to tinker with something that has real PCIe (Ampere workstations do not count)
Both the ROCKPro64 and the NanoPi M4 from 2018 have an x4 PCIe 2.1 interface. Same goes for almost all RK3399 boards that care to expose the PCIe interface.
Update: there’s also the more recent NanoPC-T6 with the RK3588 that has PCIe 3.0 x4.
They could’ve exposed more SATA ports and/or PCIe lanes and decided not to do it.
And… let’s not even talk about the SFF 8087 connector that isn’t rated to be used as an external plug, you’ll likely ruin it quickly with insertions and/or some light accident.
PCIe 2.0 x4 → 2.000 GB/s
PCIe 3.0 x2 → 1.969 GB/s
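For what it’s worth, those two figures fall out of the per-lane rates and encoding overhead (a back-of-the-envelope check, nothing more):

```latex
% PCIe 2.0: 5 GT/s per lane with 8b/10b encoding
% PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
\begin{align*}
\text{PCIe 2.0 x4}: \quad 4 \times 5\,\text{GT/s} \times \tfrac{8}{10} \div 8 &= 2.000\ \text{GB/s} \\
\text{PCIe 3.0 x2}: \quad 2 \times 8\,\text{GT/s} \times \tfrac{128}{130} \div 8 &\approx 1.969\ \text{GB/s}
\end{align*}
```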
But we also have to consider that the suggested ARM CPU does PCIe 2.1, and we have to add this detail:
PCIe 2.1 provides higher performance than the PCIe 2.0 by facilitating a transparent upgrade from a 32-bit data path to a 64-bit data path at 33MHZ and 66MHz.
It shouldn’t have a large impact either, but maybe we should think about it a bit more.
Anyways I do believe this really depends on your use case, if you plan to bifurcate it or not and what devices you’re going to have on the other end. For instance for a NAS I would prefer the PCIe 2.1 x 4 as you could have more SATA controllers with their own lanes instead of sharing lanes in PCIe 3.0 using a MUX.
Conclusion: your mileage may vary depending on use case. But I was expecting to have more PCI lanes exposed be it via more m.2 slots or other solution. I guess that when a CPU comes with everything baked in and the board maker “only has” to run wires around better do it properly and expose everything. Why not all SATAs for instance?
Losing a cloud backup of your data should be fine, because it’s a backup. Just re-up your local backup to a new cloud/second physical location; that’s the whole point of having two.
I don’t see a need to run two concurrent cloud backups.
In this case, it is not you (as a customer) that got hacked; it was the cloud company itself. The ransomware gang encrypted the disks at the server level, which impacted all the customers on every server of the cloud provider.
Yeah absolutely, but to you as an individual, it’s the same net effect: your cloud backup is lost. Just re-up your local backup to a different cloud provider.