Yes, Ubuntu 20.04 isn’t EOL yet. A lot of those downloads are probably from IT staff or developers who are running Ubuntu servers or developing on those versions.
ETA: We still have some RHEL 7 and clones at my day job
Sounds like these 80 year olds need some friendly data hoarders to help them to digitise their collections. (Or for the BBC to promise to return the film, undamaged, once they’ve digitised them.)
Why would the BBC want old film, once they had the footage? They’ve already thrown them away once! It’s only of value to collectors at this point, and the Beeb can’t sell it if they’re claiming it’s lost.
If they want it so much why don’t they pay him? Sounds like if it weren’t for him (and the others he seems to allude to) we wouldn’t have this opportunity.
20.04 and 22.04 are LTS versions, aka long-term support.
Any application that requires stability should run on an LTS version. Combined with Ubuntu being one of the most popular distros, that makes 20.04 and 22.04 the most popular choices for anything in a home lab and for many smaller business needs.
Whether you’re building a server for home DNS or a time server for a small business, you’re probably using Ubuntu as the base.
I think the next LTS version will be 24.04, so things might shift sometime after that.
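If you want to reason about this programmatically, here is a minimal sketch that checks roughly how much standard support an LTS release has left. The end-of-support dates are approximate (standard support is five years from release, and Ubuntu Pro/ESM extends it further); the table is just illustrative, not pulled from Canonical.

```python
from datetime import date

# Approximate end of *standard* support for recent Ubuntu LTS releases
# (five years from release; Ubuntu Pro/ESM extends these dates further).
LTS_EOL = {
    "20.04": date(2025, 4, 30),
    "22.04": date(2027, 4, 30),
    "24.04": date(2029, 4, 30),
}

def support_status(release: str, today: date | None = None) -> str:
    """Return a rough description of an LTS release's support window."""
    today = today or date.today()
    eol = LTS_EOL.get(release)
    if eol is None:
        return f"{release}: not in the LTS table above"
    days_left = (eol - today).days
    if days_left < 0:
        return f"{release}: standard support ended around {eol.isoformat()}"
    return f"{release}: roughly {days_left} days of standard support left"

if __name__ == "__main__":
    for rel in ("20.04", "22.04", "24.04"):
        print(support_status(rel))
```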
I can give you an answer from someone who regularly downloads really old EOL versions of Ubuntu and Debian. I personally use them as part of attack and defense competitions. They are normally very close to unusable and are nearly impossible to update to a more recent or secure version. This forces my team to find creative ways to keep them working while also taking measures to isolate them as much as possible. I also use them to teach old exploits that have been patched in more recent versions, walking people through how it worked and why it existed.
It happens a lot more with Windows machines, but there might be some manufacturing systems out there that require software that won’t run on modern versions of the OS. These systems often require new manufacturing tools in order to upgrade, or they need massive overhauls that smaller companies can’t always afford.
I am interested in learning more about this. I know a fair bit about networks but exploit history and modern attack / defense strategies and server hardening are not my main specialty. Do you have any good links or resources that you can share?
OK, so to be clear, when I said team I meant a bunch of college students preparing for different CTFs, but these are some of the more helpful resources we have found:
TryHackMe: personal favorite, especially for beginners
HackTheBox: great for learning/practicing attacks
OverTheWire: another good CTF site
We try to build many of our own CTF-like machines, then each person swaps their machine with someone else, who tries to secure the vulnerabilities without knowing anything about the machine. Once everyone has secured their machines, we try to attack them using the notes made while setting them up. This is our step-by-step for that process:
1. Download an old version of a distro (Ubuntu 14.04, Debian 9, etc.).
2. Install and set up the VM without any updates or changes to the default configuration.
3. Google the distro version (e.g. “Ubuntu 14.04”) plus “vulnerabilities” or “exploits”.
4. Read through the different sites to find applications that had big security issues on that version, then start installing some of the programs with known exploits.
So, for example, with Ubuntu 14.04 we know there are some Linux kernel exploits.
From here I could add some of the packages mentioned as having exploits and then attempt to exploit them. I could also check newer versions of Ubuntu, like 16.04, for vulnerabilities that also apply to older versions.
There are also MITRE’s lists of the most dangerous software vulnerabilities. They have one for 2023, as well as a catalog of lists from previous years.
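If you’d rather script step 3 than just google it, a minimal sketch like the one below can pull matching CVEs from NIST’s NVD API (version 2.0). The endpoint and response field names are based on the public NVD docs as I understand them, so double-check them before relying on this; no API key is required, though having one raises the rate limits.

```python
import json
import urllib.parse
import urllib.request

# NVD CVE API 2.0 endpoint; keywordSearch does a free-text match
# against CVE descriptions (parameter name per the public NVD docs).
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def search_cves(keyword: str, limit: int = 20) -> list[dict]:
    """Return CVE records whose description mentions the keyword."""
    query = urllib.parse.urlencode({
        "keywordSearch": keyword,
        "resultsPerPage": limit,
    })
    with urllib.request.urlopen(f"{NVD_URL}?{query}", timeout=30) as resp:
        data = json.load(resp)
    return data.get("vulnerabilities", [])

if __name__ == "__main__":
    # e.g. the distro string from step 3 of the exercise above
    for item in search_cves("Ubuntu 14.04"):
        cve = item["cve"]
        desc = next((d["value"] for d in cve.get("descriptions", [])
                     if d.get("lang") == "en"), "")
        print(cve["id"], "-", desc[:100])
```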
10 years ago I worked at a university that had a couple people doing research on LHC data. I forget the specifics but there is a global tiered system for replication of data coming from the LHC so that researchers all around the world can access it.
I probably don’t have it right, but as I recall, raw data is replicated from the LHC to two or three other locations (tier 1). The raw data contains a lot of uninteresting data (think a DVR/VCR recording a blank TV image), so those tier 1 locations analyze the data and remove all that unneeded data. This version of the data is then replicated to a dozen or so tier 2 locations. Lots of researchers have access to HPC clusters at those tier 2 locations in order to analyze that data. I believe tier 2 could even request chunks of data from tier 1 that weren’t originally replicated, in the event a researcher had a hunch there might actually be something interesting in the “blank” data that had originally been scrubbed.
The university where I worked had its own HPC cluster that was considered tier 3. It could replicate chunks of data from tier 2 on demand in order to analyze them locally. The way it was mostly used was our researchers would use tier 2 to do some high-level analysis, and when they found something interesting they would use the tier 3 cluster to do more detailed analysis. This way they could throw a significant amount of our university’s HPC resources at targeted data rather than competing with hundreds of other researchers all trying to do the same thing on the tier 2 clusters.
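For anyone who wants to picture the pattern, here’s a toy sketch of that kind of tiered, on-demand replication: each tier serves what it already holds and pulls misses from its parent, caching them locally. It’s purely illustrative and not how the real LHC/WLCG software works.

```python
from typing import Dict, Optional

class Tier:
    """Toy model of one tier in the hierarchy described above:
    serve chunks from local storage, pull misses from the parent tier."""

    def __init__(self, name: str, parent: Optional["Tier"] = None):
        self.name = name
        self.parent = parent
        self.store: Dict[str, bytes] = {}

    def fetch(self, chunk_id: str) -> bytes:
        if chunk_id in self.store:
            print(f"{self.name}: {chunk_id} served locally")
            return self.store[chunk_id]
        if self.parent is None:
            raise KeyError(f"{chunk_id} not held anywhere in the hierarchy")
        print(f"{self.name}: {chunk_id} missing, requesting from {self.parent.name}")
        data = self.parent.fetch(chunk_id)
        self.store[chunk_id] = data      # replicate on demand
        return data

if __name__ == "__main__":
    tier1 = Tier("tier1")                            # holds the filtered raw data
    tier2 = Tier("tier2", parent=tier1)              # regional replica
    tier3 = Tier("tier3-university", parent=tier2)   # local HPC cluster

    tier1.store["run42/chunk007"] = b"detector events..."
    tier3.fetch("run42/chunk007")   # pulled down through tier 2 on first access
    tier3.fetch("run42/chunk007")   # now served from the local cluster
```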
There’s no authoritative list of instances since federation isn’t required, but tools like lemmyverse.net will give you a solid list of the ones discoverable from the most well known federations.
I don't know about now, but my first Linux OS was Ubuntu and I appreciated the long support because of this. That was in the dial-up days, tho; I can't imagine why anyone would require that now ☠️
Want to minimise cost? 128 GB USB drives can be had for around ten dollars. Fifteen dollars will get you 256 GB. Stick with a cheap but reliable brand like SanDisk or PNY. It will last you years with proper care, even with regular use.
I do not really see a reason to invest in a “real” hard drive or external SSD for your use case. There isn’t enough data to justify it and USB drives can take a lot of abuse and still work.
Edit: There are a lot of fake USB drives sold online where the controller has been hacked to report a higher capacity to your operating system than actually exists, for example, reporting 512 GB when there’s actually only 64 GB of storage. If you try to store more than the actual capacity, your old files will be overwritten with the new ones. That’s why you should be suspicious of very high capacity drives (1 TB+) sold online for low prices. I would not buy any USB drive online that claims to have a capacity greater than 512 GB.
Regarding fakes, there are tools you can (and should) run on any new drive. I personally like h2testw, but there are others as well.
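If you’d rather not run a dedicated tool, the same idea is easy to sketch in Python: fill the drive with files whose contents are derived from their index, then read everything back and verify. This is a rough stand-in for what h2testw does, not a replacement, and the mount point below is just a placeholder for wherever your drive shows up.

```python
import os
import hashlib

MOUNT = "/media/usb"      # assumed mount point of the drive under test
CHUNK_MB = 100            # size of each test file

def pattern(i: int) -> bytes:
    """Deterministic pseudo-random block derived from the file index."""
    block = hashlib.sha256(str(i).encode()).digest()
    return block * (CHUNK_MB * 1024 * 1024 // len(block))

def fill(mount: str) -> int:
    """Write numbered test files until the drive reports it is full."""
    i = 0
    try:
        while True:
            with open(os.path.join(mount, f"fill_{i:05d}.bin"), "wb") as f:
                f.write(pattern(i))
                f.flush()
                os.fsync(f.fileno())
            i += 1
    except OSError:          # ENOSPC: no space left on device
        return i

def verify(mount: str, count: int) -> None:
    """Read every complete file back; fakes typically corrupt the earliest files."""
    bad = 0
    for i in range(count):
        path = os.path.join(mount, f"fill_{i:05d}.bin")
        with open(path, "rb") as f:
            if f.read() != pattern(i):
                bad += 1
                print(f"corrupt: {path}")
    print(f"{count - bad}/{count} files verified OK")

if __name__ == "__main__":
    n = fill(MOUNT)
    verify(MOUNT, n)
    # remember to delete the fill_*.bin files afterwards
```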
You should also avoid sketchy resellers. This includes Amazon, due to their policy of commingling inventory and their response to counterfeits. Don’t buy from Amazon! Stick to trustworthy, first-party sellers. If buying online, make sure the listing isn’t from a third-party seller; in fact, it’s probably best to skip any store that even has a third-party “marketplace” system.
I don’t think there are really any fake 256 GB drives on the market. The real drives are priced low enough that there’s really not much profit to be made from selling fakes. It’s just not that much cheaper to make a 32 GB drive and flash fake firmware than to actually make a legitimate 256 GB drive. Or buy the AmazonBasics brand since only Amazon makes and sells them.