I believe they used Heritrix at one point. The important bit is that there is a special archive format they use (WARC) which is a standard. There are several tools that support it (both capturing to it and viewing it) - it allows for capturing a website in a ‘working’ condition with history or something. I’m a bit fuzzy on it since it’s been some time since I looked into it.
I’ve had some luck establishing the bottleneck using strace on both the sender side and receiver side. This will show if the sending rsync is waiting on local reads or remote writes and if the receiving rsync is waiting on network reads or local writes.
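For reference, a minimal sketch of that approach (the PIDs are placeholders; run it on each machine for the transfer in question):

```shell
# Find the rsync processes for the transfer
pgrep -a rsync

# Attach for ~30s, then Ctrl-C; -c prints a per-syscall time summary.
# Sender side: lots of time in read() on local file descriptors means
# disk-bound, while time stuck in select()/poll() on the socket means
# it is waiting on the network/remote side.
strace -c -f -p <sender_pid>

# Receiver side, same idea: time in write() to local files = disk-bound,
# time in read() on the socket = waiting on the network.
strace -c -f -p <receiver_pid>
```

The `-c` summary is usually enough to see which side is stalling without wading through a raw trace.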
Kind of. Linkwarden seems to save as PDF. That’s better than nothing, however preserving a functional copy of the pages would be better. ArchiveBox seems to do this.
GE has been a garbage company for a very long time. It’s a shell of its former self. If I’m paying for a product, I am going to do whatever I want with it because it’s my money. And if a company has a problem with that, it sounds like the company needs to fix it on their end. If it’s possible for a plugin to cost your company millions of dollars, then obviously you’re not running your company properly.
Have had this issue myself, along with other SD card related issues.
I can’t understand why the Pi Foundation persists in using SD as the only physically practical storage option.
They’re long past the point of needing a way to snap on reliable eMMC storage, as a default, in a way that doesn’t leave a cable or something permanently plugged into a USB port.
Sure, USB is a fine option, but I hate that it’s only an option and not a designed default.
Most of us only need 8GB or so for the OS; 8GB of good quality, durable eMMC should hardly cost anything.
Other tiny computers and even economy notebooks and Chromebooks already use this.
I don’t think OP is looking to mirror archive.org; my take was that they wanted something like archive.org but self-hosted and for personal / small-scale use.
Exactly. I’m already running a local wiki, but I don’t want stuff I link to in my wiki to result in 404 in a few years. Or worse, to some AI-ridden ad-infested dumpster fire.
You can use something as simple as a browser extension like SingleFile that can automatically download complete, contained copies of anything bookmarked or only certain URLs.
It seems like it’s written in Python too, which means I can maintain it if need be.
Oh boy I wish I had set this up many years ago. I wouldn’t have to resort to scouring !antiquememesroadshow for the top quality memes of the past when I need them…
On a far side of the moon note, I wonder if ActivityPub could be used to federate multiple archiveboxes to create a more resilient Internet Archive alternative. 🤔 Then integrate that with Lemmy to autoarchive links from posts. Aaand lemmy.world ran out of disk space. 🤣
Use encryption; using VPNs for such a trivial task is a “really bad idea”.
There are many cases where somebody wants to have their DNS public - maybe they want to share with their friends, family, community, or audience (not everyone is a solo server user).
Also, it’s good to have your DNS usable even before connecting to the VPN. Just use encryption - it’s safe and nice.
Keeping port 53 open is not that bad; the only thing you will notice is an increased load on your server if somebody tries to DDoS someone’s server using your DNS.
P.S. Or, as somebody mentioned below, use rate limiting. It’s described pretty well in some other comments. It’s not just a “spooky internet port”.
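For the curious, BIND9’s response-rate-limiting looks something like this (the numbers are illustrative; tune them for your traffic):

```
options {
    // Don't answer recursive queries for random internet clients;
    // an open recursive resolver is the classic amplification vector.
    recursion no;

    // Response Rate Limiting: cap identical responses per client
    // subnet so your server is useless as a DDoS amplifier.
    rate-limit {
        responses-per-second 10;
        window 5;
    };
};
```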
Use a public DNS provider. Cloudflare, Route53, DynDNS (are they still around?), etc. Cheap, reliable, no worries about joining a DDoS by accident. Some services are better left to experts until you really know what you’re doing.
And if you really do know what you’re doing, you’ll use a DNS provider rather than host your own.
Host your own private DNS - yes, knock yourself out. I highly recommend it.
Public DNS? No - don’t do that.
There are two services homegamers should be extra cautious of and should likely leave alone - DNS and email. These protocols are rife with historic issues that affect everybody, not just the hosting system. A poorly configured DNS server can participate in a DDoS attack without being “hacked” specifically. A poorly configured mail server can be responsible for sending millions of spam emails.
For a homegamer you probably only need a single public DNS record anyway (with multiple CNAMEs if you want to do host-based routing on a load balancer). You take on a lot of risk with almost zero benefit.
From outside? Set up a Cloudflare account and point the NS from your registrar to it.
From inside? Set up unbound on a docker host and don’t open it to the internet. Use that one when you’re local and the normal public DNS when you’re outside. But everything I’m seeing in here makes me sure you shouldn’t even consider opening ports in your firewall to expose inside host services. Use a VPN when you’re roaming, and only use your DNS for local servers/hosts via that VPN. The only use for your outside domain name should be to point a single hostname to your outside IP address so you can use it for your VPN endpoint.
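If it helps, an internal-only unbound config is only a few lines (the subnet and hostnames here are made up):

```
server:
    interface: 0.0.0.0
    # Answer only the LAN; refuse everyone else even if a port
    # accidentally ends up exposed.
    access-control: 192.168.1.0/24 allow
    access-control: 0.0.0.0/0 refuse
    # Local records for inside-the-house names
    local-zone: "home.example." static
    local-data: "nas.home.example. IN A 192.168.1.10"
```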
Use DNS challenges for LetsEncrypt cert requests and remove host entries from your Cloudflare after you get your cert.
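With certbot’s Cloudflare plugin the DNS challenge is hands-off and no A/AAAA records need to exist at all (domain and paths here are examples; the credentials file holds a `dns_cloudflare_api_token` with DNS edit rights for the zone):

```shell
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d example.com -d '*.example.com'
```

Certbot creates the `_acme-challenge` TXT record, validates, and removes it again, so nothing permanent is left in the zone.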
I use a DNS server on my local network, and then I also use Tailscale.
I have my private DNS server configured in tailscale so whether on or off my local network everything uses my DNS server.
This way I don’t have to change any DNS settings no matter where I am and all my domains work properly.
And my phone always has DNS adblocking even on cell data or public Wi-Fi
The other advantage is you can configure the reverse proxy of some services to only accept connections originating from your tailscale network to effectively make them only privately accessible or behave differently when accessed from specific devices
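In nginx that restriction can be as small as this (Tailscale hands out addresses from the 100.64.0.0/10 CGNAT range; the server name and backend port are hypothetical):

```
server {
    listen 443 ssl;
    server_name private.example.com;

    # Only clients coming in over the tailnet get through
    allow 100.64.0.0/10;
    deny  all;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```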
This is why the concept of running services on different ports than the default isn’t a real security measure; it doesn’t actually take any effort to figure out what kind of service is running on a port.
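A toy illustration of why: most services announce themselves the moment you connect, regardless of port. Here a fake “SSH” daemon (entirely made up) sits on a random high port, and a single read identifies it - which is essentially all `nmap -sV` is doing:

```python
import socket
import threading

# Fake daemon on a random free port, with a hypothetical banner.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick one
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"SSH-2.0-OpenSSH_9.6\r\n")  # protocol greeting
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# "Scanner" side: the port number tells us nothing, the banner tells all.
with socket.create_connection(("127.0.0.1", port), timeout=5) as s:
    banner = s.recv(64).decode().strip()
t.join()
srv.close()
print(banner)  # SSH-2.0-OpenSSH_9.6
```

The same applies to HTTP response headers, TLS certificates, and so on - moving a port only changes where the fingerprint is read from.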
I hate how cease and desist are essentially blackmail. Even if you did nothing wrong, you can still get fucked over by costs of a potential legal battle.
It’s a bigger problem in the States than elsewhere. In the US, awarding legal costs is the exception, not the norm, so someone with a lot of money and access to lawyers can basically intimidate a defendant into avoiding court. In the rest of the world, courts are much more likely to award costs to a defendant who has done nothing wrong - if you file a frivolous lawsuit and lose, you’ll probably have to pay the costs of the person you tried to sue.
This guy’s in Germany, so I think he’d be alright if he clearly won. The issue, however, is that courts aren’t really equipped for handling highly technical cases and often get things wrong.
This looks neat, though it sounds like only the Grayjay/FUTO app can cast to it, and I doubt any official streaming app would natively adopt it. Assuming it’s not just casting a video feed from your phone, my guess as to how it works is that it just copies the relevant cookies over to the FCast device, where it can pretend to be your phone as far as the server is concerned.
This would be fine if it supported all the apps I use and I were the only one ever casting, but I don’t want to force guests to install and configure another middleware app just to cast stuff. My hope is that Matter will somehow solve this, but I probably shouldn’t get my hopes up.
I should try setting up FCast either way though, see how it goes. Thanks.
What’s wrong with Miracast? Almost every device sold these days has some kind of radio, but no way to talk to each other. Releasing a new standard every few years won’t help much.
I don’t know the specifics of Miracast, but my impression was that it is specifically used to cast a video stream from one device to another device. That is sometimes useful, but not what I typically use my Chromecast for.
The most useful feature of my Chromecast is the ability to be logged into Plex/Netflix/HBO/Spotify/YouTube/etc on my (or my guest’s) mobile device, and effectively send a link and a (probably ephemeral) token to the Chromecast so that it can stream directly from the server to the Chromecast without my mobile device spending battery power and bandwidth being a middle-man.
And I assume the difficult part here is down to copyright reasons. Most of those streaming sites already limit the number of devices you can permit to stream content (which sucks, but is beside the point), so my impression is that they need to have some kind of under-the-table agreement with the Chromecast/Roku/Firestick/Apple TV/etc. folks to ensure that the device will correctly validate the credentials, not save any of the content, and properly dispose of everything when it’s done. And I assume Google imposes similar terms about when a device on the network is allowed to be listed as a casting device to apps.
Isn’t Miracast for sending video data? The thing I like about Chromecast is that the phone or remote app just tells the Chromecast where to load the media directly from, and then only sends playback control commands. That makes it a lot lighter resource wise because you don’t need to proxy the stream through a device like a phone that wants to go to sleep to save battery.
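As a toy model of that split (completely made-up messages, nothing like the real Cast or FCast wire protocols): the controller sends one small JSON “load” command with a URL and token, and the receiver does the heavy lifting itself:

```python
import json
import socket
import threading

def receiver(srv):
    """Pretend cast device: accept one command, 'play' it."""
    conn, _ = srv.accept()
    msg = json.loads(conn.makefile().readline())
    if msg["action"] == "load":
        # A real device would now stream msg["url"] itself, authorized
        # by msg["token"]; the phone's job is already done.
        conn.sendall(b'{"status": "playing"}\n')
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # random free port for the demo
srv.listen(1)
t = threading.Thread(target=receiver, args=(srv,))
t.start()

# "Phone" side: a few hundred bytes, then it can go back to sleep.
with socket.create_connection(srv.getsockname()) as c:
    c.sendall(json.dumps(
        {"action": "load", "url": "https://example.com/v.mp4",
         "token": "ephemeral-123"}).encode() + b"\n")
    reply = json.loads(c.makefile().readline())
t.join()
srv.close()
print(reply["status"])  # playing
```

The controller never touches the media bytes - that’s the difference from mirroring-style protocols, where every frame is proxied through the phone.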
Oh right, that makes sense. I was only thinking of Matter as serving low-bandwidth devices, but it also runs over WiFi and Ethernet, so I guess it can do video for security cameras etc., and evidently casting audio and video as well.