Since he doesn't mention it in his 'fantastic' reporting: OpenSSH 9.6, which patches this attack, was released Monday. Also unmentioned: for an attack over the Internet, the MITM would have to be positioned at both endpoints (client side and server side) to be effective without the patch.
In my opinion, Dan Goodin always reports as an alarmist and rarely gives mitigation much focus. In one case I recall, he didn't even mention until the second-to-last paragraph that the vulnerable code never made it to the release branch, since they found the vulnerability during testing (and then the last paragraph pretended that paragraph didn't exist). I can't say that one was strategic, but it sure seemed that way.
For example, he failed to note that the OpenSSH 9.6 patch was released Monday to fix this attack. It would have fit perfectly in the section called "Risk assessment," or perhaps "So what now?" could have mentioned that people should, I don't know, apply the patch that fixes it.
Another example: he tries to scare the reader by stating that "researchers found that 77 percent of SSH servers exposed to the Internet support at least one of the vulnerable encryption modes, while 57 percent of them list a vulnerable encryption mode as the preferred choice." That's fine for showing how prevalent those algorithms are, but he doesn't mention that the attack would have to be complicated and present at both endpoints to be effective on the Internet, or that the attack is defeated by a secure tunnel (IPsec or IKE, for example) even while the vulnerable key exchange methods are still supported.
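And until both ends run a patched OpenSSH, the vulnerable modes can simply be removed server-side. A minimal sketch of an sshd_config fragment, assuming the default algorithm lists and an OpenSSH new enough to support the `-` removal prefix (check `ssh -Q cipher` and `ssh -Q mac` for what your build actually offers before pasting):

```
# /etc/ssh/sshd_config -- strip the modes Terrapin targets until the
# OpenSSH 9.6 "strict KEX" fix is deployed on both client and server.
# The attack needs ChaCha20-Poly1305 or a CBC cipher with Encrypt-then-MAC;
# removing either side of that pairing is enough for the CBC case.
Ciphers -chacha20-poly1305@openssh.com
MACs -*-etm@openssh.com
```

Removing all `*-etm` MACs is the blunt version; dropping the CBC ciphers instead would also work if you'd rather keep EtM MACs.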
He also seems to love bashing anything FOSS as hard as possible, in what feels to me like a quest to prove proprietary software is more secure than FOSS. When I see his name as an author, I immediately take the article with a grain of salt and look for another source for the same information.
Derrick Plummer, a spokesman for Intuit, said taxpayers can already file their taxes for free and there are online free-file programs available to some people. Individuals of all income levels can submit their returns for free via the mail.
A “direct-to-IRS e-file system is a solution in search of a problem, and that solution will unnecessarily cost taxpayers billions of dollars,” he said. “We will continue unapologetically advocating for American taxpayers and against a direct-to-IRS e-file system because it’s a bad idea.”
And who believes that crap anyway? Intuit markets its product on the complicated nature of anything outside the standard deduction: figuring out whether you should itemize, and how to do it.
Intuit has spent $25.6 million since 2006 on lobbying, H&R Block about $9.6 million and the conservative Americans for Tax Reform roughly $3 million.
Now if the states get on board for easy filing online, it'll be great.
This, plus the rising costs, the ads added to 'basic' tiers, and the attempts to create limitations (resolutions, "screens", offline downloads), is what might push me to build a nice, large NAS. We don't want Cable again.
Alright, I know this is going to get some hate, and I fully support emulation and an overhaul of US copyright and patent law, but justmeremember's supportive post is just bad. It's the same bad practice that many organizations, especially in manufacturing, have problems with. If the ~20 years of raw data is so important, then why is it sitting on stuff that's decades past end-of-life?
If it is worth the investment, then why not invest in a way to convert the data into something less dependent on EOL software? There are lots of ways, cheap and otherwise, to do this.
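For instance, if the legacy software can dump anything text-like at all, even a tiny script gets the data into an open format. A hypothetical Python sketch; the field names and fixed-width layout here are invented for illustration and would need to match whatever the old system actually writes:

```python
import csv
import io

# Invented layout for illustration: (field name, width in characters).
LEGACY_FIELDS = [("sample_id", 8), ("timestamp", 14), ("reading", 10)]

def parse_legacy_line(line):
    """Slice one fixed-width record into a dict of stripped field values."""
    record, offset = {}, 0
    for name, width in LEGACY_FIELDS:
        record[name] = line[offset:offset + width].strip()
        offset += width
    return record

def convert_to_csv(legacy_text, out_file):
    """Write the legacy records as plain CSV, readable by anything."""
    writer = csv.DictWriter(out_file, fieldnames=[n for n, _ in LEGACY_FIELDS])
    writer.writeheader()
    for line in legacy_text.splitlines():
        if line.strip():
            writer.writerow(parse_legacy_line(line))

# Tiny demonstration with two fake records.
sample = (
    "A0001   20240101120000    12.34\n"
    "A0002   20240101120500    56.78\n"
)
buf = io.StringIO()
convert_to_csv(sample, buf)
print(buf.getvalue())
```

Run once per export, and the data outlives the machine it came from.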
But even worse, I bet there's 'raw' data that's only a year old still sitting on those machines. I don't know if the 'lab guy' actually pulls a salary or not, but maybe hire someone to begin actually solving the problem instead of maintaining an eventually losing game?
In ~20 years, they couldn't have been cutting slivers from the budget to eventually invest in something that would perhaps 'reset the clock'?
At this point, I wouldn't be surprised to find a post of them complaining about Excel being too slow and unstable because they've been using it as a database for ~20 years' worth of data, either.
GNOME's Boxes is pretty easy to use and, of course, uses QEMU + KVM. That would be a type 1 hypervisor vs. VirtualBox's type 2. It is point-and-click like VirtualBox. You don't need to use the GNOME DE to use Boxes.
I have seen people post about your specific error for years when they use the VirtualBox website's repository instead of their own distro's repository (if one exists).
Well, I think a better solution would be to deliver all source code along with the compiled software. I suppose that would extend to the operating system itself, with the hope that there'd be enough motivation for skillful folks to maintain that OS and support new hardware. Great, that would indeed solve the problem and is a potential outcome if digital rights are overhauled. This is something I fully support.
What is stopping them now from solving access to this data, even if it's in a proprietary format?
Really, again, I don't take issue with the abandonware argument but rather with the situation in the post itself. Source code availability, and the rights surrounding it, are only one part of the larger problem in the post.
Source code, and the rights to it, aren't the root cause of the problem in the post I was responding to. They could facilitate a solution, sure, but given that there is at least ~20 years of data at risk currently, there were also ~20 years of potential labor hours to solve it. Yet instead, they chose to 'solve' it in a terrible way. That is what I take issue with.
I appreciate the advice! I am thinking of Synology, or perhaps DIY with either TrueNAS (likely Scale) or Unraid. Synology would be cheap, small, and easy on power and thermals, though, and I've been looking at the latest and previous-gen DS2XX lines.
Also, I appreciate the Jellyfin mention. I've been using Plex for so long and was thinking about something else, Jellyfin especially, but I've never worked with it before.
I didn't say capitalism is perfect nor did I imply it.
So hypothetically let's say the vendor lost the rights to the software since it is abandonware -- great. I'd love it.
What changes for justmeremember's situation? Nothing changes.
I suppose your only issue here is that the software vendor or some entity should support it forever. OK, so why didn't they just choose a FOSS alternative or make one themselves? If not then, why not now? There is nothing that stops them from the latter other than time and effort. Even better, everyone else could benefit!
Does that make justmeremember just as culpable here or are they still the victim with no reasonable way to a solution?
I posted simply because this specific issue is much too common, and just as common is the failure to actually solve it, regardless of the abandonware argument: stop-gapping and kicking it down the line until access to the data is gone forever.
Pretty much sounds exactly like what I was thinking of doing for the DIY build: miniATX/ATX for all the expansion potential + SATA ports, a large case to handle it, and a CPU with at least 6 to 8 cores. The case would probably be a rack form factor, but it doesn't really matter. Probably 32 GB of RAM, plus a Quadro GPU or some cheap AMD GPU or something cheapish like that strictly for encoding, plus Proxmox + TrueNAS, or perhaps just Unraid. Probably no desktop environments unless something really needs one for some reason. Not sure if I'll go with a motherboard that has iLO/IPMI on its own NIC + VLAN or not.
I was going to mix SSD/NVMe for performance (if I mix those two, it'd be two separate performance tiers) and HDDs for capacity. Probably two 1+ Gbps NICs bonded, and maybe an LACP port channel down the line. VPN with killswitch, of course.
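The bond idea can be sketched in a few lines of netplan; everything here is a placeholder (interface names eno1/eno2, DHCP addressing), and the switch side has to be configured as a matching LACP port channel:

```yaml
# Hypothetical netplan sketch for two bonded NICs with LACP (802.3ad).
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad
        transmit-hash-policy: layer3+4
      dhcp4: true
```

layer3+4 hashing spreads flows across both links per connection, which is usually what you want for a NAS serving multiple clients.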
I could definitely go cheaper on the hardware if I just wanted to mostly use Docker/Podman, but I want VMs too. I'll probably manage updates, and off-network backups of what I really care about, via Ansible + rclone + restic repos. I might use zram + lz4 for most of my VMs, because why not.
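For the curious, the zram + lz4 part is only a few commands inside a guest. A sketch, assuming a stock Linux kernel with the zram module; run as root, and the 2G size is a placeholder to tune against the VM's RAM:

```
# Create a compressed swap device in RAM; set the algorithm before the size.
modprobe zram
echo lz4 > /sys/block/zram0/comp_algorithm
echo 2G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0    # higher priority than any disk-backed swap
```

Most distros also ship a zram-generator or zramswap package that does the same thing persistently.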
In Boxes, power down your XP VM, then click Settings -> Sharing Panel -> Enable Sharing toggle. Click File Sharing and enable it. Power on the VM.
At that point you should be able to drag and drop from your host direct into your VM for a file transfer.
You can also click the vertical dots menu in the Guest's console "screen" and click Send File... menu option.
In the same menu you can click Devices & Shares -> Realtek USB (or whatever) -> Local Folder -> select the Host folder you'd like to share from the dropdown -> Save -> make sure the toggle on the right is on.
Then your folder, in XP I believe, will show up as a removable drive, like a USB drive would.