If you’re up for it, it’s generally better not to back up everything. Only back up the data that you need, like a database, or photos, music, movies, etc. for personal data. For everything else, it’s best to automate the install and maintenance of your server.
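A minimal sketch of the “only back up what matters” idea, assuming a plain directory layout; the paths and destination below are made-up examples, not anything from this thread:

```python
# Back up only a short list of data directories instead of the whole system.
# The paths below are examples; point them at whatever actually matters to you.
import tarfile
from datetime import date
from pathlib import Path

DATA_DIRS = ["/srv/photos", "/srv/music", "/var/backups/db-dump"]  # example paths
DEST = Path("/mnt/backup")                                         # example destination

archive = DEST / f"data-{date.today():%Y-%m-%d}.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    for d in DATA_DIRS:
        if Path(d).exists():
            tar.add(d)  # skip missing paths instead of failing the whole run
print(f"Wrote {archive}")
```

Something like this can run from cron or a systemd timer; everything outside those directories gets rebuilt by your automated install instead of restored from backup.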
Nowadays I sort of do this with Seafile. Select folders to sync, open the app every once in a while to resync stuff, carry on with your day. The only thing I wanted to take away was whether there is a better way to avoid a massive hassle reinstalling everything in case something happens (and in case I also forget to select a folder to sync).
But I think your suggestion is very valid as well. At least for Mint, having a way to make a more automated installer or something similar to get the stuff I usually use. Yet another rabbit hole to go into…
I tried both hosting my own mail server and using paid mail hosting with my own domain, and I advise against the former.
The reason not to roll out your own mail server is that your email might go to spam at many common mail services. Servers and domains that don’t usually send out large amounts of email are considered suspicious by spam filters, and the process of letting other mail servers know that they are there by sending out emails is called warming them up. It’s hard and it takes time… Also, why would you think you can do hosting better than a professional who is paid for that? Let someone else handle that.
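Related to deliverability, though the comment above doesn’t go into it: spam filters also look at whether your domain publishes SPF and DMARC records. A quick sketch for checking that, assuming the third-party dnspython package (`pip install dnspython`) and a placeholder domain:

```python
# Sanity-check that a domain publishes SPF and DMARC TXT records.
# "example.org" is a placeholder; substitute your own domain.
import dns.resolver

DOMAIN = "example.org"

def txt_records(name: str) -> list[str]:
    """Return TXT record strings for a name, or an empty list if none exist."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

spf = [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{DOMAIN}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "missing")
print("DMARC:", dmarc or "missing")
```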
With your own domain you are also not bound to one provider - you can change both domain registrar and your email hosting later without changing your email address.
Also, avoid using something too unusual. I went with firstname@lastname.email because I thought it couldn’t be simpler than that. Bad idea… I can’t count how many times people have sent mail to the wrong address because such a TLD is unfamiliar. I get told by web forms regularly that my email is not a valid address, and even people who got my email written on a piece of paper have replaced the .email with .gmail.com because “that couldn’t be right”…
That’s the thing that holds me back from a non-standard TLD, as much as I’d love to get a vanity domain.
I’ve got a .org I’ve had for over 20 years now. My primary email address has been on that domain for almost as long. While I don’t have problems with web-based forms, telling people my email address is a chore at best since it’s not gmail, outlook, yahoo, etc…
I keep seeing people say this but I’ve yet to encounter it even once. I fully believe it happens with non-com/net/org TLDs but I’ve been using my .org as my daily driver for 2 decades and have never had it rejected or denied.
You can avoid the warmup by using an SMTP relay, and you can just use the one from your DNS provider if you’re not planning to send hundreds of mails per day.
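For what it’s worth, sending through a relay from a script looks roughly like this; the relay host, port, and credentials are placeholders for whatever your DNS provider (or other relay service) actually gives you:

```python
# Minimal sketch of sending mail through an external SMTP relay instead of
# delivering directly. Host, port, and credentials below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.org"
msg["To"] = "friend@example.com"
msg["Subject"] = "Test via relay"
msg.set_content("Sent through the relay, so no IP warm-up needed on my end.")

with smtplib.SMTP("smtp.relay.example", 587) as relay:
    relay.starttls()                          # encrypt before authenticating
    relay.login("relay-user", "relay-password")
    relay.send_message(msg)
```

Your own MTA (Postfix, etc.) can be pointed at the same relay as a smarthost; the relay’s reputation then does the heavy lifting.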
What ARM board? :p
Honest question. All the ones I have seen are really awful and I would love to tinker with something that has real PCIe (Ampere workstations do not count).
Both the ROCKPro64 and the NanoPi M4 from 2018 have a x4 PCIe 2.1 interface. Same goes for almost all RK3399 boards that care to expose the PCIe interface.
Update: there’s also the more recent NanoPC-T6 with the RK3588 that has PCIe 3.0 x4.
They could’ve exposed more SATA ports and/or PCIe lanes and decided not to do it.
And… let’s not even talk about the SFF-8087 connector, which isn’t rated to be used as an external plug; you’ll likely ruin it quickly with repeated insertions and/or some light accident.
PCIe 2.0 x4 → 2.000 GB/s
PCIe 3.0 x2 → 1.969 GB/s
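Here’s roughly where those two figures come from, as a back-of-the-envelope calculation using only the per-lane signalling rates and encoding overheads (protocol overhead ignored):

```python
# Rough per-lane throughput math behind the figures above (illustrative only).
# PCIe 2.0: 5 GT/s with 8b/10b encoding  -> 0.500 GB/s per lane.
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane.

def lane_bandwidth_gbs(transfer_rate_gt: float, encoding_efficiency: float) -> float:
    """Usable bandwidth per lane in GB/s (one transfer carries one bit per lane)."""
    return transfer_rate_gt * encoding_efficiency / 8  # bits -> bytes

pcie20_lane = lane_bandwidth_gbs(5.0, 8 / 10)     # 0.500 GB/s
pcie30_lane = lane_bandwidth_gbs(8.0, 128 / 130)  # 0.985 GB/s

print(f"PCIe 2.0 x4: {4 * pcie20_lane:.3f} GB/s")  # 2.000 GB/s
print(f"PCIe 3.0 x2: {2 * pcie30_lane:.3f} GB/s")  # 1.969 GB/s
```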
But we also have to consider that the suggested ARM CPU does PCIe 2.1, and we have to add this detail:
PCIe 2.1 provides higher performance than PCIe 2.0 by facilitating a transparent upgrade from a 32-bit data path to a 64-bit data path at 33 MHz and 66 MHz.
It shouldn’t have a large impact either, but maybe we should think about it a bit more.
Anyway, I do believe this really depends on your use case, whether you plan to bifurcate it or not, and what devices you’re going to have on the other end. For instance, for a NAS I would prefer the PCIe 2.1 x4, as you could have more SATA controllers with their own lanes instead of sharing lanes in PCIe 3.0 through a mux.
Conclusion: your mileage may vary depending on use case. But I was expecting more PCIe lanes to be exposed, be it via more M.2 slots or some other solution. I guess that when a CPU comes with everything baked in and the board maker “only has” to run wires around, they’d better do it properly and expose everything. Why not all the SATA ports, for instance?
I knew this day would come! *blows the dust off my gateway machine with a P4 @ 1.6 GHz* Look, it’s even got an FDD, perfect for backup duty! If only I could find that Zip drive though…
I would suggest a more learn-by-doing approach. Learning the OSI model etc. is nice, but it is quite jargon-heavy :)
Use some old PC as a server, get some network cards into it, and use it as a firewall/router. Route your home network / NAT / DNS / DHCP through it. Raspberry Pis are nice, but their hardware is still a bit limited.
OPNsense is quite a nice and easy free and open source firewall/router solution.
If you want to add a bit of flexibility, you can put some virtualization platform like VMware on the machine, so that you can run OPNsense in it alongside some other virtual servers.
Then, when you get things working, you can start looking into VLANs, because they are quite an important part of enterprise networking. Most cheap switches nowadays support VLANs out of the box.
A custom router + managed switch is a great way to learn. Studying the fundamentals is also good, but in my opinion it’s not as fun as setting up your own network and learning hands-on.
If you decide to go this route I highly recommend taking regular backups of your config (and backing up again before you change stuff). Part of learning involves breaking things - trust me, you will break your network - and in networking that’s one of the best ways to learn. Backups will give you an easy way to restore to a known working configuration.
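Even a dumb timestamped copy of the config file before every change goes a long way. A small sketch, assuming an OPNsense-style box where the config lives at /conf/config.xml; adjust the paths for whatever you actually run:

```python
# Keep a timestamped copy of the firewall config before making changes.
# /conf/config.xml and the backup directory are assumptions; adjust to taste.
import shutil
from datetime import datetime
from pathlib import Path

CONFIG = Path("/conf/config.xml")
BACKUP_DIR = Path("/root/config-backups")
BACKUP_DIR.mkdir(parents=True, exist_ok=True)

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
dest = BACKUP_DIR / f"config-{stamp}.xml"
shutil.copy2(CONFIG, dest)  # copy2 preserves timestamps and permissions
print(f"Saved {dest}")
```

Copy the backups off the box as well; a config backup that lives only on the router you just broke doesn’t help much.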
I’d start with a second router added to the current network and use it to segment off a “lab” network. Then, when you break it, it breaks the lab stuff and not your house stuff.
So it’s a computer that lets you remotely control another computer? Is the advantage over SSH or remote desktop etc. that you can interact with stuff outside the OS, like the BIOS?
That’s basically it. It guarantees you can always access your computer remotely, even if you broke your ssh, or accidentally messed up your network config, or can’t boot due to filesystem corruption and need to run fsck from recovery mode.
Exactly, it isn’t a replacement. It is redundancy in the form of a screen with keyboard and mouse directly connected, but accessible remotely (from my couch). It is far from my primary interface with the server.
Yes. This is home-made out-of-band management, like HP’s iLO, Dell’s iDRAC, or generic IPMI. Not only is it a virtual KVM (keyboard/video/mouse), you can pass the host’s power button through this device so you can remotely power on or reset a hung or powered-off system, or mount and boot from a virtual floppy or ISO to completely reinstall the remote system.
Just throwing out an option if you aren’t aware: gohardrives on eBay and on their site sell used HDDs, 10 TB for $80. The catch is they’ve been used in data centers for 5 years. The company will guarantee the drives for an additional 5 years, and it could save you a lot of money depending on how much you want to risk it. I went with 3, one being a parity drive in case another goes bad.
I currently have 6x10TB of these drives running in a gluster array. I’ve had to return 2 so far, with a 3rd waiting to send in for warranty also (click of death for all three). That’s a higher failure rate than I’d like, but the process has been painless outside of the inconvenience of sending it in. All my media is replaceable, but I have redundancy and haven’t lost data (yet).
Depending on your supporting hardware costs and power costs, you may find larger drive sizes to be a better investment in the long term. Namely, if you plan on seeing the drives through to their 5-year warranty, 18 TB drives are pretty good value.
For my hardware and power costs, this is the breakdown for cumulative $/TB (y axis) over years of service (x axis):
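The original chart isn’t reproduced here, but this is a rough sketch of how such a cumulative $/TB-over-years breakdown could be computed; all of the prices and power figures below are made-up placeholders, not the poster’s actual numbers:

```python
# Illustrative cumulative $/TB over years of service.
# Every constant here is a hypothetical placeholder, not real data from the thread.
DRIVE_TB = 18          # drive capacity in TB
DRIVE_COST = 300.0     # purchase price in $ (hypothetical)
AVG_WATTS = 5.0        # average power draw per drive in W (hypothetical)
KWH_PRICE = 0.15       # electricity price in $/kWh (hypothetical)

def cumulative_cost_per_tb(years: int) -> float:
    """Purchase price plus electricity so far, spread over the drive's capacity."""
    power_cost = AVG_WATTS / 1000 * 24 * 365 * years * KWH_PRICE
    return (DRIVE_COST + power_cost) / DRIVE_TB

for year in range(1, 6):
    print(f"year {year}: ${cumulative_cost_per_tb(year):.2f}/TB")
```

The longer a drive stays in service, the more the purchase price gets amortized, which is why bigger drives tend to win if you actually keep them for the full warranty period.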
The first two died within 30 days; the third one took about 4 months, I think. Not a huge sample size, but it kind of matches the typical hard drive failure bathtub curve.
I just double-checked, and mine were actually from a similar seller on Amazon - they all seem to be from the same supplier though - the warranty card and packaging are identical. So YMMV?
Warranty was easy: I emailed the address included in the warranty slip, gave details on the order number + drive serial number, and they sent me a mailing slip within 1 business day. Print that out, put the drive back in the box it shipped in (I always save these), tape it up, and drop it off for shipping. In my case, the purchase was refunded pretty much as soon as the drive was delivered back to the seller.
I was about to ask why this is better than the docker installation, but I see step one is to install docker haha.
I’ve been running the Docker container for a long time, and it works very well. It is a bit more complicated if you try to use extensions that require separate containers (like setting up Collabora), but that can be done as well. It’s just more complicated.
I do remember needing to know how to access the internal terminal a few times, but I don’t remember why. If I think of it I’ll come back and add instructions.
As a former self-configured docker compose NC user, I have to say I’m way happier with the AIO. But still, the older docker method was head and shoulders over any other method of running NC that I’d used.