Kid_Thunder

@Kid_Thunder@kbin.social


Kid_Thunder,

I didn't know that ansible-galaxy had a comic

Kid_Thunder,

Finally. Intuit has been lobbying for years to keep this from happening.

Derrick Plummer, a spokesman for Intuit, said taxpayers can already file their taxes for free and there are online free-file programs available to some people. Individuals of all income levels can submit their returns for free via the mail.

A “direct-to-IRS e-file system is a solution in search of a problem, and that solution will unnecessarily cost taxpayers billions of dollars,” he said. “We will continue unapologetically advocating for American taxpayers and against a direct-to-IRS e-file system because it’s a bad idea.”

And who believes that crap anyway? Intuit markets its product precisely because of how complicated anything beyond the standard deduction is: figuring out whether you should itemize, and then how to do it.

Intuit has spent $25.6 million since 2006 on lobbying, H&R Block about $9.6 million and the conservative Americans for Tax Reform roughly $3 million.

Now if the states get on board for easy filing online, it'll be great.

Kid_Thunder, (edited )

Because it's often not worth the investment. You would pay a shit ton for a one-time conversion of data that is still accessible.

Still accessible for now, but less likely to stay accessible as the clock ticks, and less likely that there will be compatible replacement hardware.

If it isn't worth the investment, then what's the problem here? So what if the data is lost? It obviously isn't worth it.

If the software became open source because the company abandoned it, then that cost could potentially be brought down significantly.

OK, but that isn't a counterpoint to what I said. If the hardware never fails, there is no problem either, so what does that matter? And who cares if it was FOSS (though I am a FOSS advocate)? What if nobody maintains it?

It doesn't matter because these aren't the actual problems this person is dealing with. Why not make some FOSS that takes care of the issue and runs on something that isn't on borrowed time and can endure not only hardware changes but operating system changes? That'd be relevant. It goes back to my point, doesn't it? Why not hire this person to do exactly that?

Clean-room reverse engineering has case law precedent that makes this legally low-risk (and the risk is certainly nil if the rights holder is defunct).

You are also missing the parts where functional hardware loses support, which is even worse in my opinion.

I didn't miss the point. I even made the point that they had at least ~20 years to plan for it in the budget. Also, the hardware has already lost support or there wouldn't be an issue, would there? Otherwise you could just keep sustaining it without relying on a diminishing supply.

Or are we talking about some hypothetical hardware that wasn't mentioned? I guess I would have missed that point since it was never made.

Kid_Thunder, (edited )

I didn't say capitalism is perfect nor did I imply it.

So hypothetically let's say the vendor lost the rights to the software since it is abandonware -- great. I'd love it.

What changes for justmeremember's situation? Nothing changes.

I suppose your only issue here is that the software vendor or some entity should support it forever. OK, so why didn't they just choose a FOSS alternative or make one themselves? If not then, why not now? There is nothing that stops them from the latter other than time and effort. Even better, everyone else could benefit!

Does that make justmeremember just as culpable here or are they still the victim with no reasonable way to a solution?

I posted simply because this specific issue is much too common, and just as common is the failure to actually solve it, regardless of the abandonware argument, instead of stop-gapping and kicking it down the line until access to the data is gone forever.

Kid_Thunder, (edited )

Well, I think a better solution would be to deliver all source code with the compiled software as well. I suppose that would extend to the operating system itself and the hope that there'd be enough motivation for skillful folks to maintain that OS and support for new hardware. Great, that would indeed solve the problem and is a potential outcome if digital rights are overhauled. This is something I fully support.

What is stopping them now from solving access to this data, even if it's in a proprietary format?

Really, again, I don't take issue with the abandonware argument but rather with the situation in the post itself. Source code availability, and the rights surrounding it, are only one part of the larger problem in the post.

Source code and the rights to it aren't the root cause of the problem in the post I was responding to. They could facilitate a solution, sure, but given that there are at least ~20 years of data at risk, there were also ~20 years of potential labor hours to solve it. Yet, instead, they chose to 'solve' it in a terrible way. That is what I take issue with.

Kid_Thunder, (edited )

It isn't necessarily a computer programming problem either. Rather, it is an IT problem, at least in part, and one the poster states is the primary job of his 'lab guy': maintaining two ancient Windows 95 computers specifically. That person must know enough to sustain the troubleshooting and replacement of the hardware, and certainly at least the transfer of data off the aging spinning hard drives. Why not instead put that technical expertise into actually solving the problem long-term? Why not just run both in qemu and use hardware passthrough if required? At least then you would rid yourself of the ticking time bomb of hardware and its diminishing availability. That RAM that is no longer made isn't going to last forever. They don't even need to know much about how it all works; there are guides available, even for Windows 95.
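For what it's worth, a minimal sketch of that route (the device path and file names here are assumptions, and a 9x-era guest usually wants a Pentium-class CPU, little RAM and period-appropriate virtual devices):

# Image the old IDE drive, assuming it enumerates as /dev/sdb on a modern machine
sudo dd if=/dev/sdb of=win95.img bs=64k conv=noerror,sync status=progress

# Boot the image under qemu; cirrus VGA and IDE suit a Windows 95 guest
qemu-system-i386 -M pc -cpu pentium -m 64 \
  -drive file=win95.img,format=raw,if=ide \
  -vga cirrus -nic none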

Perhaps there are other hurdles, such as something running on ISA, but even so, eventually it isn't going to matter. Primarily, though, it seems the hurdle is specifically the software and the data it facilitates. Does it really have some sort of ancient hardware dependency? Maybe. But in all that time, this 'lab guy' whose main role is just these two machines must have had some time to experiment and figure this out. The data must be copyable, even as a straight hard drive image if it isn't a flat file (extremely doubtful, but it doesn't matter). I mean, the data is, by the author's own emphasis, CRITICAL.

If it is CRITICAL then why don't they give it that priority, even to the lone 'lab guy' that's acting IT?

Unless there's some big edge case that simply isn't being said, something above and beyond just the software they speak about, I feel like I've put more effort into typing these responses than it would take to effectively solve the hardware-on-life-support side of it. Solving the software dependency side? Depending on how the datasets are logically stored, it may require a software developer, but it also may not. However, simply virtualizing the environment would solve many, if not all, of these problems with minimal investment, especially for CRITICAL (their emphasis) data with ~20 years to figure it out. It would take just a new computer, some sort of media to install Linux or *BSD from, and perhaps a COTS converter if the instrument uses something like an LPT interface or even a DB9/DE-9 D-Sub (you can still find modern motherboards, cards and even laptops supporting those, and certainly a cheap USB adapter as well).
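On the passthrough point, a hedged example: qemu can hand a host character device straight to the guest, so a cheap USB-serial adapter (assumed here to enumerate as /dev/ttyUSB0) can stand in for the instrument's original COM port:

# Same invocation as the earlier sketch, with the host's USB-serial adapter
# exposed to the guest as its first serial port (COM1 under Windows)
qemu-system-i386 -M pc -cpu pentium -m 64 \
  -drive file=win95.img,format=raw,if=ide \
  -vga cirrus -nic none \
  -serial /dev/ttyUSB0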

Anyway, I'm just going to leave it at that. I think I've said a lot on the subject to numerous people and don't have much more to add, other than that this is most likely solvable and, outside of severe edge cases, solvable without expert knowledge considering the timeframe.

Kid_Thunder, (edited )

So again and again and again, I was not arguing against the abandonware issue. I take issue with how the problem is being stop-gapped in this current situation and not in some hypothetical alternate timeline.

Instruments like the ones we use are super expensive

Great. I didn't imply otherwise.

On top of that most people here barely understand computer and software

So the lab guy maintaining the Windows 95 era computers' hardware barely understands computers. Got it. I suppose this same lab guy won't be able to do anything even if the source code were available, and would still be doing the same job.

What you’re suggesting is treating the symptoms but not the disease. Making certain file formats compatible with other programs is not an easy undertaking and certainly not for people without IT experience.

I didn't say it was easy. I said they've had ~20 years to figure it out. What would source code availability solve for them, then? We could assume other people would come together to maintain it, sure. I've also talked about other solutions in replies, and there are even more; I wasn't trying to cover all bases there. The point is that in the couple of decades this has been a problem, there has been plenty of time to solve it.

Software for tools this expensive should either be open source from the get-go or immediately open-sourced as soon as it’s abandoned or company goes bust

Oh OK, so that makes it less complicated? I thought the assumption here was that, in general, anyone in that lab barely understands a computer or how software works. So who's going to maintain it? Hopefully others, sure. I actually do talk about this in other replies: it's something I support, and in this case the solution is to deliver the source with the product. FOSS is fantastic. Why can't that just be done now by these same interested parties? Or are we back to "can't computer" again? Then what good is the source code anyway?

But again, that's a "what if things were different," which isn't what I was discussing. I was discussing this specific, real and fairly common issue of attempting to maintain EOL/EOSL hardware. It is a losing game, and eventually it just isn't going to work anymore.

Even with plenty of funding to workaround the issue that shouldn’t be necessary, it’s a waste of time and money just so a greedy company can make a few extra bucks.

Alright, the source code is available for this person. Let's just say that. What now?

What can be done right now is fairly straightforward, and there are numerous step-by-step guides: virtualize the environment. There is also the option of hardware passthrough if there is some unmentioned piece of equipment. This could be done with some old laptop or computer like the one you probably tossed in a dumpster 10 years ago. The cost is likely just some labor. Perhaps that same lab guy can poke around, or, if they're at a university, their department could reach out to the Computer Science or another IT-related teaching department and ask for volunteers, even undergrads. There are very likely students who would want to take it on just because they want to figure it out, and nothing else.

There may be an edge case where it won't work due to some embedded proprietary hardware, but that raises yet another hypothetical issue: open source hardware. That's great. Who's going to make that work on a modern motherboard? The person you've supposed can't do it because they barely understand a computer at all?

In this current reality, for the specific part of the post I am addressing, sustaining something ancient with a diminishing supply is definitely not the answer. That is the point I was making. There is a potential ~20 years of labor hours. There is a potential ~20 years of portioned budgets. And let's not forget, according to them it is "CRITICAL" to their operations. Yet it is maintained by a "lab guy" who may or may not have anything beyond a basic understanding of computers, using hardware that's no longer made, hoping to cannibalize parts, buy second-hand and scrounge from bins somewhere.

If this "lab guy" isn't up to the task, then why are they entrusted with something so critical with nothing done about it in approximately two decades? If they are up to the task, then why isn't a solution with longevity and real risk mitigation being taken on? It is a short-sighted mentality to just kick it down the road over and over again plainly hoping something critical is never lost.

Kid_Thunder,

Cause the instrument is important and replacing it, aside from being a massive waste of a perfectly functioning instrument, costs hundreds of thousands if not millions of € that we can’t spend

Why would you need to replace the instrument? You only need to replace the computers' functions. Why does it need to cost anything more than some old workstation that was tossed into an e-waste bin years ago?

some dude on Lemmy said we shouldn’t use stop-gap measures for a problem that’s completely artificial.

As opposed to some dude on Lemmy bemoaning that this just can't be solved without source, even though I've given actual solutions available now for little to no material cost?

You have admitted that you'd still have to rely on someone else's expertise and motivation in the hopes that they'd solve the problem for the lab. Yet, in my opinion, you're discarding the solutions I've presented as if they aren't solutions at all because, in at least one of your points, they'd have to rely on someone else's expertise and motivation in the hopes that they'd solve the problem for the lab. Even then, as I said, they've had decades to figure it out, and freely available step-by-step instructions already exist that would solve the problem, or get them almost to the end, assuming there is some proprietary hardware never mentioned.

Anyway, I don't really have anything else to add to the conversation. So you can have the last word, if you wish.

Kid_Thunder, (edited )

Alright, I know this is going to get some hate, and I fully support emulation and an overhaul of US copyright and patent law, but justmeremember's supportive post is just bad. This is the same bad practice that many organizations, especially in manufacturing, have problems with. If the ~20 years of raw data is so important, then why is it sitting on hardware that is decades past end-of-life?

If it is worth the investment, then why not invest in a way to convert the data into something less dependent on EOL software? There are lots of ways, cheap and not, to do this.

But even worse, I bet there is 'raw' data only a year old still sitting on those machines. I don't know if the 'lab guy' actually pulls a salary or not, but maybe hire someone to begin actually solving the problem instead of maintaining an eventual losing game?

In ~20 years, they couldn't have been cutting slivers from the budget to eventually invest in something that would perhaps 'reset the clock'?

At this point, I wouldn't be surprised to find a post of them complaining about Excel being too slow and unstable because they've been using it as a database for ~20 years' worth of data either.

Kid_Thunder,

You set your brush type in ASCII/ANSI characters, set your size and color, and then you paint using something like PabloDraw.

Think of opening something like Paint, selecting the brush tool and a color, and then painting shapes. They do the same, except instead of a solid or gradient brush color, the brush uses character sets that you can select.
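As a rough illustration of what that character 'brush' boils down to (the color and glyphs here are arbitrary, and this assumes a terminal font with the block characters): an ANSI escape sequence selects the color, and shaded block characters act as the stroke:

# Print one red 'stroke' of shade blocks (light, medium, dark, solid), then reset
printf '\033[31m░▒▓█\033[0m\n'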

Kid_Thunder,

In my opinion, Dan Goodin always reports as an alarmist and rarely gives mitigation much focus. In one case I recall, he didn't even mention until the second-to-last paragraph that the vulnerable code never made it to the release branch, since the vulnerability was found during testing (and then the last paragraph pretended that paragraph didn't exist). I can't prove it was strategic in that one case, but it sure seemed that way.

For example, he failed to note that the OpenSSH 9.6 patch was released Monday to fix this attack. It would have gone perfectly in the section called "Risk assessment", or perhaps "So what now?" could have mentioned that people should, I don't know, apply the patch that fixes it.

Another example: he tries to scare the reader, stating that "researchers found that 77 percent of SSH servers exposed to the Internet support at least one of the vulnerable encryption modes, while 57 percent of them list a vulnerable encryption mode as the preferred choice." That is fine for showing how prevalent those algorithms are, but he does not mention that, over the Internet, the attack would be complicated and would have to sit at both end points to be effective, or that the attack is defeated by a secure tunnel (IPsec or IKE, for example) even if the vulnerable key exchange methods are still supported.
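For what it's worth, short of applying the 9.6 patch, a server can simply stop offering the affected modes (ChaCha20-Poly1305 and the CBC encrypt-then-MAC combinations, per the OpenSSH 9.6 release notes). A sketch of the sshd_config change, to be sanity-checked against your distro's defaults:

# Subtract the vulnerable modes from the default set (the leading '-'
# removes entries rather than replacing the list; wildcards are allowed)
Ciphers -chacha20-poly1305@openssh.com
MACs -*-etm@openssh.com

Reload sshd afterwards and confirm your clients can still negotiate a cipher.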

He also seems to love to bash anything FOSS as hard as possible, in what feels to me like a quest to prove proprietary software is more secure than FOSS. When I see his name as an author, I immediately take the article with a grain of salt and look for another source for the same information.

Kid_Thunder,

Since he doesn't mention it in his 'fantastic' reporting: OpenSSH 9.6, released Monday, patches this attack. Also, since he doesn't mention it either, on the Internet the MITM would have to be installed at both end points (client side and server side) to be effective without the patch.

Kid_Thunder,

No Home Depot or similar near you? You could have got same day service.

Kid_Thunder,

I appreciate the advice! I am thinking of Synology, or perhaps DIY with either TrueNAS (likely Scale) or Unraid. Synology would be cheap, small and easy on power and thermals though, and I've been looking at the latest and previous gen DS2XX lines.

Also, I appreciate the Jellyfin mention. I've been using Plex for so long and was thinking about something else, Jellyfin especially, but I've never worked with it before.

Kid_Thunder,

Pretty much exactly what I was thinking of doing for the DIY: miniATX/ATX for the expansion potential plus SATA ports, a large case to handle it all, and a CPU with at least 6 to 8 cores. The case would probably be a rack form factor, but it doesn't really matter. Probably 32 GB of RAM plus a Quadro GPU, some cheap AMD GPU or something cheapish like that strictly for encoding, with Proxmox plus TrueNAS, or perhaps just Unraid. Probably no desktop environments unless something really needs one for some reason. Not sure whether I'll go with a motherboard that has iLO/IPMI on its own NIC + VLAN or not.

I was going to mix SSD/NVMe for performance (if I mix those two, it'd be two separate performance tiers) and HDDs for capacity. Probably two 1+ Gbps NICs bonded, and maybe an LACP port channel down the line. VPN with killswitch, of course.
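A minimal sketch of that bonded pair with iproute2 (interface names are assumptions; 802.3ad/LACP also needs a matching port channel configured on the switch):

# Create an LACP bond and enslave both NICs (assumed names enp1s0/enp2s0)
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set enp1s0 down
ip link set enp1s0 master bond0
ip link set enp2s0 down
ip link set enp2s0 master bond0
ip link set bond0 up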

I could definitely go cheaper on the hardware if I just wanted to mostly use docker/podman, but I want VMs too. I'll probably manage updates and backups of what I really care about off-network via ansible + rclone + restic repos. I might use zram + lz4 for most of my VMs, because why not.
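Sketches of those last two ideas, with hypothetical remote and path names. restic can use any rclone remote as a repository backend, which keeps the off-site target flexible:

# One-time repository setup on an rclone remote named 'offsite'
restic -r rclone:offsite:backups init

# Back up what matters; restic deduplicates and encrypts snapshots
restic -r rclone:offsite:backups backup /srv/important --tag weekly

And the zram + lz4 swap idea inside a guest might look like this (run as root; the 4G size is arbitrary):

# Set the compressor before sizing the device, then enable it as high-priority swap
modprobe zram
echo lz4 > /sys/block/zram0/comp_algorithm
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0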

Kid_Thunder,

This, along with rising costs, adding ads to 'basic' tiers and attempting to impose limitations (resolutions, "screens", offline downloads), is what might push me to build a nice, large NAS. We don't want Cable again.

Non-root user that (suddenly) has elevated privileges in a specific command (only). [Have I been hacked?]

Title. Long story short: creating or editing files with nano as my non-root user gives (the file) elevated privileges, as if I had run it w/ sudo or as root. And the (only) "security hole" that I can think of is a nextdns docker container running as root. That aside, it's very "overkill" security-wise (cap_drop=ALL,...

Kid_Thunder, (edited )

The directory you are creating your files in is likely set to immutable or append-only.

lsattr -d /path/to/directory

If you see i or a in the output, then that's the issue.

You can remove the flags with:

sudo chattr -i /path/to/dir   # removes the immutable flag
sudo chattr -a /path/to/dir   # removes the append-only flag

The same goes for individual files, but if it happens to every file in a directory, then the directory attribute is probably it.

Kid_Thunder,

Gnome's Boxes is pretty easy to use and of course uses qemu + KVM. This would be a type 1 hypervisor vs. Virtualbox's type 2. It is point and click like Virtualbox. You don't need to use Gnome's DE to use Boxes.

I have seen people post about your specific error for years when using the virtualbox website's repository instead of their own distro's repository (if it exists).

Kid_Thunder, (edited )

In Boxes, power down your XP VM, click Settings -> Sharing Panel -> Enable Sharing toggle. Click File Sharing and enable File Sharing. Power on the VM.

At that point you should be able to drag and drop from your host direct into your VM for a file transfer.

You can also click the vertical dots menu in the Guest's console "screen" and click the Send File... menu option.

In the same menu you can click Devices & Shares -> Realtek USB or whatever -> Local Folder -> select the host folder you'd like to share from the dropdown -> Save -> make sure the toggle on the right is on.

Then your folder, I believe in XP, will show up as a removable drive like a USB drive would.
