
yote_zip

@yote_zip@pawb.social

Every community I care about is dead


How to install Skyrim

I downloaded the game files, mounted the .iso files, added setup.exe to my Steam library, and installed the game under the mountpoint. When I click Play in Steam I just get sent back to the installer. Running SkyrimSE.exe through Steam (located in the CODEX directory) doesn't work either. I use Arch Linux.

yote_zip,

Are NPCs silent when talking? If so, you'll need to install faudio into your Wine prefix with Winetricks. Running through Steam may help a little too, but I don't remember if Proton includes faudio by default.
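If that turns out to be the problem, the fix is a one-liner (a sketch, assuming your prefix is at the default ~/.wine; point WINEPREFIX at whichever prefix the game actually runs under):

    WINEPREFIX=~/.wine winetricks faudio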

As for the cart crashing, that's probably just Skyrim. The opening cutscene is notoriously buggy.

yote_zip,

I no longer use Arch, but this wouldn't have happened to me because I used vanilla Arch. On Manjaro an AUR package can break at any moment because it silently depends on newer functionality from a dependency that Manjaro's older packages don't yet provide. The AUR does not care to figure out which exact dependency versions a program needs, because you are expected to always have an up-to-date Arch system before installing. If the AUR cared about Manjaro compatibility it would need to mark every dependency with a minimum version number, but that's a lot of effort and the AUR understandably doesn't care about supporting Manjaro's repos. If Manjaro stood up its own AUR this would no longer be a problem.
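For illustration, this is what a versioned dependency would look like in a PKGBUILD's depends array (libfoo and the 2.5 pin are made up; in practice almost no AUR package bothers with the >= constraint, because current Arch repos are assumed):

    # hypothetical PKGBUILD snippet; AUR packages rarely pin minimum versions
    depends=('glibc' 'libfoo>=2.5')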

(Personally, I don’t think AUR packages are a good idea for system stability/security even on vanilla Arch, but it is understandable that people like them for their convenience.)

yote_zip,

The receipts that I just linked show far more than 2 mistakes. I don’t care whether they have fixed them or not, I care that they have made so many. Trust arrives on foot and leaves on horseback. Distro forks are nothing special, so why use the one with a history of bad management? Use Arch proper or any of the countless Arch forks that use the real Arch repos, which will inherently sidestep a lot of issues that Manjaro created for itself.

You say that delaying packages makes things more stable, but there is a clear history of that not being the case, as already described in the links I posted. This is most importantly true for delayed security updates. You also don't understand how the AUR works in conjunction with outdated Manjaro packages, which causes dependency problems and leads to breakage. This is a very simple cause and effect, so I'm not sure how you can assert that everyone else must misunderstand how dependencies work.

As for the last bit: no, Arch is obviously not being hurt when Manjaro is called out. If anything I'll bet Arch wishes Manjaro would stop tripping over itself and giving Arch a bad name. They are already sick of Manjaro users using the AUR and complaining every time it breaks their packages, and you can read what Arch's security team thinks about Manjaro here on r/archlinux (image mirror here if you don't want to visit that site).

yote_zip,

Arch has made a lot of mistakes, and their most recent one, where they bricked everyone's GRUB bootloader, is the one that caused me to stop using it as a general recommendation. This sort of thing would never happen on Debian, and pretending that "every distro makes massive mistakes!" is disrespectful to distros that actually put a ton of effort into making sure these things don't happen. Sweeping those mistakes under the rug is harmful to new users who don't know what they're signing up for when they download the distro you are sugarcoating, and that is the primary reason to make sure anyone considering Manjaro is aware of its past, so they can make their own decision.

> Security updates aren’t delayed in Manjaro, they’re pushed through out of band.

Manually. Also read as: delayed. The comment from Arch’s security team that you are minimizing is part of the reason why this is a bad idea: “They just forward our security advisories without reading them. Leaving critical security issues to rot in their “stable” repositories while only pushing forward issues that are publicized or users telling them about”. Once again, why would I trust the Manjaro team to be on top of security when they can’t figure out how to keep an SSL cert alive? Their security mailing list hasn’t even been updated in a year.

> Once you’ve compiled an AUR package it will remain compatible with the system you compiled it on until you update and introduce an incompatibility.

You are dodging the real dependency problem by focusing on this half. The real problem is that when an AUR package updates and Manjaro's packages are not new enough for the update, it causes breakage. AUR packages are built with Arch Linux's repos in mind and no care whatsoever for the versions of packages that Manjaro holds. Updating your AUR packages frequently all but guarantees that you will eventually run an AUR update that requires a newer dependency version than Manjaro provides, and that app will break (or worse, the AUR package is a dependency of other apps, which causes further breakage). Even Manjaro knows this: "Using AUR also implies Arch stable branch - which is only achievable by using Manjaro unstable or testing branch." Also take it from their team: "The AUR is neither officially supported by Arch nor Manjaro. If you do use the AUR on Manjaro, use our unstable branch. Problem solved."

> That’s not the “Arch’s security team”, it’s one person on a 3rd party forum, with a history of issuing personal statements reeking of personal grudge. Yeah I know that comment unfortunately. It’s a singular, isolated piece of flamebait and it makes me sad to see it’s still being bookmarked and passed around 5 years later.

Yes, very sad that a member of Arch's security team issued a warning about Manjaro's security 5 years ago and we still have people pretending it's "flamebait", because that's a convenient excuse to dismiss it.

yote_zip,

I wasn't able to get the gsettings method to work (I'm on Wayland KDE), and that article doesn't say anything about theming Qt Flatpaks. Also, after "installing" my GTK theme as a Flatpak via the method described, it still wasn't available to my GTK Flatpaks via the GTK_THEME method. The steps in the itsfoss.com article do work, though there have been a lot of squabbles about the "proper" way to expose themes to Flatpaks. Regardless, this all goes back to my point that theming Flatpak apps is clunky and should be much smoother.

yote_zip,

Right, I understand it's not meant for "proper" usage, but it does work for all my GTK apps, and the gsettings method does not work for me, unless I'm supposed to store the setting somewhere else because I'm on KDE.
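For reference, the GTK_THEME workaround is just a Flatpak override (a sketch; Adwaita-dark stands in for your theme name and org.gnome.TextEditor for whichever GTK Flatpak you're theming):

    # expose the host theme directory, then force the theme via the debug variable
    flatpak override --user --filesystem=~/.themes org.gnome.TextEditor
    flatpak override --user --env=GTK_THEME=Adwaita-dark org.gnome.TextEditor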

yote_zip, (edited)

I do have xdg-desktop-portal-gtk on Debian Stable, which is currently at 1.14.1-1. I’ll look around to see if there’s more documentation on this method, because I would prefer to not use the debug variables if possible.

Edit: I launched with GTK_DEBUG=interactive and I can see that the theme inside the Flatpak gets set to Adwaita-empty instead of my actual theme, which is properly returned by gsettings get org.gnome.desktop.interface gtk-theme.

yote_zip,

That gets my normal GTK theme properly. I found a little more discussion on this here. Nothing very actionable but I did also confirm that my xdg-desktop-portal-gtk is running. It seems like this is supposed to be working, but I have a mostly stock Debian 12.1 KDE install and something seems to be wrong somewhere in the chain. I’ve also tried multiple GTK Flatpaks with the same results.

Edit: Also, I have both my themes folder exposed and the theme installed as a Flatpak via the linked script.
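For anyone debugging the same chain, the checks I ran boil down to something like this (assuming systemd user units; the service name matches the Debian package):

    # confirm the GTK portal backend is actually running
    systemctl --user status xdg-desktop-portal-gtk.service

    # confirm the theme the portal should be reporting
    gsettings get org.gnome.desktop.interface gtk-theme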

yote_zip,

They’re available as an option. You can source from any of Fitgirl, DODI, or KaOs if they have the game you want. Other repackers do exist and they’re all generally trustworthy, but those 3 put out a lot of content and have a good track record. ElAmigos is another good one that puts out a lot of releases.

Weird error copying MKV file

I have some locally stored media I was copying between drives, and one MKV file gave this error: error reading ‘video1.mkv’: Input/output error. It only copied 176/256 MiB, and the copied file plays the video only up to a certain point before abruptly closing. I can play the original file fine, albeit with a noticeable hitch at...

yote_zip,

Fair enough. I would at least try to get the damaged file off of the disk so you can potentially fix it later, or just have it available to play in its broken state. For the future, you should probably be running monthly BTRFS scrubs to detect bitrot sooner, and ideally you should have some backups or data redundancy so you can repair the bitrot once it's detected.
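A scrub is a single command, and a root cron entry is enough to run it monthly (a sketch; /mnt/data is a placeholder for your mountpoint):

    # start a scrub, then check progress/results later
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data

    # root crontab entry: scrub at 3am on the 1st of each month
    0 3 1 * * /usr/bin/btrfs scrub start /mnt/data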

yote_zip,

Okay cool. I would be wary of that drive just in case, and I would definitely schedule weekly SMART short tests and monthly BTRFS scrubs on it if you go with BTRFS in the future. ext4/XFS/etc. do not have a concept of data checksums, which means they can't scrub and check for bitrot. This might be problematic if your disk starts causing bitrot, because you won't know where it's happening.

I follow Backblaze’s rules on detecting impending drive failure:

  • SMART 5: Reallocated_Sector_Count.
  • SMART 187: Reported_Uncorrectable_Errors.
  • SMART 188: Command_Timeout.
  • SMART 197: Current_Pending_Sector_Count.
  • SMART 198: Offline_Uncorrectable.

If any of these SMART metrics are higher than 0 I’d expect failure soon and take precautions.
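smartctl covers both the tests and the attribute check (a sketch; /dev/sda is a placeholder for your drive):

    # kick off the self-tests (the long test can take hours; results appear in -a output)
    sudo smartctl -t short /dev/sda
    sudo smartctl -t long /dev/sda

    # check the five Backblaze attributes listed above
    sudo smartctl -A /dev/sda | grep -E '^ *(5|187|188|197|198) '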

yote_zip,

It goes without saying, but the number of errors you should get on a scrub is ideally 0. Bitrot happens from time to time, which is why you should keep some data redundancy/backups so you can repair it when it's detected, but that number seems higher than normal. Your disk may be going bad if you're getting that many read errors; I'm not sure. I believe you're already backing up data off this drive, but yeah, I would get everything important off it ASAP, then run a SMART short test and a SMART long test to see if they report anything wrong. The disk may be fine, but better safe than sorry.

Back to the video file, I’m assuming it was not one of the ones that BTRFS fixed automatically? The only real options for data recovery are to rescue the file minus the bad blocks with e.g. ddrescue (which I don’t personally have familiarity with) or something similar, or to encode through the errors with ffmpeg if it will let you.
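A hedged sketch of both approaches (the file names are placeholders, and as I said, I haven't needed these myself):

    # ddrescue: copy everything readable, retry bad areas 3 times, keep a map of bad blocks
    ddrescue -r3 video1.mkv rescued.mkv rescue.map

    # ffmpeg: remux while ignoring decode errors instead of aborting on them
    ffmpeg -err_detect ignore_err -i video1.mkv -c copy repaired.mkv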

yote_zip,

Try this answer. I guarantee there is a way to read the file front to back while skipping errors, but I run so much data redundancy that I don’t have any experience with it.

Thanks to dust I deleted a 70 gig file on my drive

Dust is a rewrite of du (in Rust, obviously) that visualizes your directory tree and what percentage each file takes up. But it only prints as many files as fit in your terminal height, so you see only the largest files. It's been a better experience than du, which isn't always easy to navigate to find big files (or at least...

yote_zip,

Try ncdu as well. No instructions needed, just run ncdu /path/to/your/directory.

yote_zip,

To add on here, you can use the Are We Anti-Cheat Yet? site to track which games are not working due to anti-cheat. In my experience it’s extremely rare for “Linux” (aka Wine/DXVK/VKD3D/et al) to not support arbitrary games. If a game is not working on Linux it’s almost certainly because of an anti-cheat or some bloated/obscure DRM telling Linux “no you cannot run this”.

yote_zip,

Are you buying the hardware for this setup, or do you already have it lying around? If you don't have the hardware yet, I'd recommend avoiding external USB drives entirely, as both speed and reliability will suffer.

If you already have the hardware and want to use it, I'm not super confident recommending anything given my inexperience with this sort of setup, but I would probably try ZFS to minimize any potential read/write issues with dodgy USB connections. ZFS checksums data several times in transit, and will automatically repair and maintain it even if the drive returns the wrong data. ZFS will probably be cranky when used with USB drives, but it should still be possible.

If you're already planning on a RAID6, you could use a RAIDZ2 as a roughly equivalent ZFS option, or a double mirror layout for increased speed and IOPS. A RAIDZ2 is probably more resistant to disk failures, since you can lose any 2 disks without pool failure, whereas with a double mirror the wrong 2 disks failing can cause a pool failure. The traditional gripe that RAIDZ's longer rebuild times are vulnerable periods is not relevant when your disks are only 2TB.

Note you'll likely want to limit ZFS's ARC size if you're pressed for memory on the Orange Pi, as by default it will try to use a lot of your memory to improve I/O efficiency. It should automatically release this memory if anything else needs it, but it's not always perfect.
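A rough sketch of both layouts plus the ARC cap (device names and the 1 GiB cap are placeholders; size the cap to your Pi's memory):

    # RAIDZ2 across four disks: any 2 can fail
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # or striped mirrors (a "double mirror") for better speed/IOPS
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

    # cap the ARC at 1 GiB in /etc/modprobe.d/zfs.conf (value in bytes)
    options zfs zfs_arc_max=1073741824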

Another option you may consider is SnapRAID+MergerFS, which can be built in a pseudo-RAID5 or RAID6 fashion with 1 or 2 parity drives, but parity calculation is not real time and you have to explicitly schedule parity syncs (i.e. if a data disk fails, anything changed since your last sync is vulnerable). You can use any filesystems you want underneath this setup, so XFS/ext4/BTRFS are all viable options. This sort of setup doesn't have ZFS's licensing baggage and might be easier to set up on an Orange Pi, depending on what distro you're running. One small benefit of this setup is that you can pull the disks at any time and their files will be intact (there is no striping): if a catastrophic pool failure happens, your remaining disks will still have readable data for the files they are responsible for.
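A minimal sketch of that setup (all paths are placeholders; a second parity line would give you the RAID6-like variant):

    # /etc/snapraid.conf
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2

    # /etc/fstab: pool the data disks into one mount with mergerfs
    /mnt/disk1:/mnt/disk2 /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

Parity then stays current with a scheduled (e.g. nightly) snapraid sync.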

In terms of performance: ZFS double mirror > ZFS RAIDZ2 > SnapRAID+MergerFS (only runs at the speed of the disk that has the file).

In terms of stability: ZFS RAIDZ2 >= ZFS double mirror > SnapRAID+MergerFS (lacks obsessive checksumming and parity is not realtime).
