Ripcord is really unique and it’s still my favorite third party client. Abaddon might be worth trying. Unfortunately, most other third party clients are wrappers.
Tried out Abaddon, but it tells me it couldn't fetch the build number, which increases the chances of being flagged. Their GitHub has one related issue, with no solution.
Personally, I never understood why file managers on Linux refuse to do operations that require privileges. Guess what: if I have Nautilus open and want to move files into, let's say, /usr/local, I don't want to have to switch to the terminal when I already have the stuff copied within Nautilus. On Windows, I just get an admin password prompt if I try to do naughty stuff. On Linux we have the whole polkit system, but no file manager seems to ever use it. To be fair, this is not a Nautilus problem, as no file manager seems to do this.
Oh wow you can? I just switched to Nemo on Arch after using Thunar for a long time but I got annoyed at it for crashing a lot when I copy files to my FTP server. Very good to know!
I’m aware of nautilus-admin, but not only is it unmaintained, imho this should be part of Nautilus by default, and it has to open a new Nautilus window when you use it. What I want is to drag and drop files into /usr/local and then get a password prompt for the move. With nautilus-admin, I need to have the foresight to use “Open as admin” when going into /usr/local, but if I had that foresight I might as well just start Nautilus as root to begin with. Usually I just want to look into the folder, and only then realize I need to change something, which means a good old “go back up one folder, find the folder again, right-click, hunt for ‘Open as admin’, and get thrown into a new window, completely disorienting myself in the process”.
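For what it’s worth, GVfs has an admin backend that does go through polkit, which gets you part of the way there (assuming your distro packages it; the paths below are just examples):

```
# Open a privileged view of /usr/local in Nautilus; polkit prompts for your password
# (requires the GVfs admin backend, e.g. the gvfs-backends package on Debian/Ubuntu)
nautilus admin:///usr/local

# The backend can also be used from the command line for one-off operations,
# e.g. copying a file into a root-owned directory (assuming it accepts admin:// as a target)
gio copy ./myfile admin:///usr/local/myfile
```

It still doesn’t give you the drag-and-drop-then-prompt flow you’re describing, but it beats restarting Nautilus as root.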
SATA III is 6 gigabits per second, so the max speed is actually 600 MB/s.
What filesystem? For example, on my ZFS pool I had to let ZFS use a good chunk of my RAM for it to be able to cache things enough that rsync would max out the throughput.
Rsync doesn’t process files in parallel, so at those speeds the cycle of open file, read chunks, write chunks, close file, repeat, starts to add up. That’s why you want the kernel to buffer as much of it as possible.
If you look at the disk graphs of both disks, you’ll probably see a read spike followed by a write spike on the target, instead of a smooth, maxed-out curve. If so, the solution is increasing buffers and caching. Depending on the distro, there’s a sysctl that may be on by default that limits the cache size, to prevent the “I wrote a 4 GB file to my USB stick, now 4 GB of RAM is used for it, and it takes ages after the transfer finishes before it’s actually flushed to the stick” situation.
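As a rough sketch (the exact values depend on how much RAM you have and how much unflushed data you’re willing to risk on a power cut), the knobs in question are the vm.dirty_* sysctls:

```
# Show the current writeback limits
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_bytes vm.dirty_background_bytes

# Example values only: allow up to ~4 GB of dirty pages before writers get throttled,
# and start background flushing at ~1 GB (setting the *_bytes variants overrides the *_ratio ones)
sudo sysctl -w vm.dirty_bytes=$((4 * 1024 * 1024 * 1024))
sudo sysctl -w vm.dirty_background_bytes=$((1 * 1024 * 1024 * 1024))
```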
SATA III is 6 gigabits per second, so the max speed is actually 600 MB/s.
My mistake, though still, a 4 TB transfer should take less than 2 hours at 5 Gb/s (IN THEORY). Thank you @Max_P for pointing this out a second time elsewhere: 6 Gb/s is what the SATA III interface is capable of, NOT what the drive is capable of. The marketing material for this drive clearly psyched me out; the actual transfer speed is 210 MB/s.
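Rough back-of-envelope numbers for a 4 TB transfer (treating 4 TB as 4,000,000 MB, integer shell arithmetic):

```
# Theoretical interface ceiling vs. the drive's rated sequential speed
echo "at 600 MB/s (SATA III ceiling):    ~$((4000000 / 600 / 60)) minutes"
echo "at 210 MB/s (drive's rated speed): ~$((4000000 / 210 / 3600)) hours"
```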
The filesystem is EXT4, shared over SMB… OMV has a fair amount of RAM allocated to it, like 16 GB or something gratuitous. I’m guessing the way rsync does its transfers is the culprit, and I honestly can’t complain, because the integrity of the transfer is crucial.
I’m interested in ksmbd… I chose SMB simply because I was using it across Linux/Windows/Mac devices and managing it with OMV, but that doesn’t mean I couldn’t switch to something better.
Honestly though, I don’t typically need faster transfers; I just happen to be swapping out a drive right now. SMB through OMV has been perfectly sufficient otherwise.
ksmbd is still SMB, except it’s implemented inside the Linux kernel. As a result, file transfer speeds are greatly improved compared to plain Samba, which runs only in userspace.
The second thing to check is which SMB protocol version you’re negotiating: ideally you want at least SMB 3, as anything older than that will be painfully slow.
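If you’re on stock Samba (which I believe is what OMV manages), you can check the negotiated dialect on the server and pin a minimum version; a rough sketch:

```
# Show connected clients and which SMB dialect they negotiated
sudo smbstatus

# In smb.conf (or OMV's extra options field), under [global], add:
#   server min protocol = SMB3
# then tell Samba to reload its configuration:
sudo smbcontrol all reload-config
```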
Finally, I read in your other comment that you’re using spinning disks and a USB dock. That adds significant overheads.
The IronWolf drive benchmarks at around 250 MB/s at the start of the disk and slows to about 100 MB/s towards the end (spinning disks gradually get slower as they fill up, since the inner tracks move less data per revolution than the outer ones). Now add file fragmentation plus filesystem overheads (buffers, cluster size allocation, etc.) and the speed can drop considerably.
Then there’s your SATA-to-USB dock: no dock ever actually reaches 5 Gbps, that’s just marketing quoting the theoretical protocol speed. In reality you’d be looking at something below 100 MB/s for 128k sequential writes, and with a smaller block size, expect far slower writes still.
Combine all of the above and you can imagine just how much slower this whole thing can be.
For reference, see this benchmark for what’s “normal” for a simple file transfer to a blank, unfragmented drive: www.anandtech.com/show/6014/…/3
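If you want to see what the dock and drive actually sustain, independent of rsync and SMB, something like fio can measure it directly (/mnt/target is a placeholder for wherever the disk is mounted; this writes a 4 GB test file):

```
# Sequential 128k writes with direct I/O, so the page cache doesn't flatter the numbers
fio --name=seqwrite --filename=/mnt/target/fio.test --rw=write \
    --bs=128k --size=4G --direct=1 --ioengine=libaio --iodepth=16

# Remove the test file afterwards
rm /mnt/target/fio.test
```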
To keep your system secure no matter what, you open up only the ports you absolutely need.
People will always make mistakes while configuring software; a firewall is there to make sure such an error gets caught. With more advanced firewalls you can even restrict internet access to specific apps, so that only what absolutely needs to connect to the internet can do so.
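With something like ufw, for example, that boils down to a default-deny policy plus explicit exceptions (SSH here is just an example):

```
# Drop all unsolicited incoming traffic, allow everything outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Open only the ports you actually need, e.g. SSH
sudo ufw allow 22/tcp

sudo ufw enable
```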
In general it’s for security, but can also be privacy related depending on how deep you want to get into it.
EDIT: It isn’t about not trusting other devices on your network, or the software you run, or whether you are running a server. It’s about the general security of your system.
My understanding as a NixOS user is that a lot of its fundamentals are very strongly coupled to systemd. It’s responsible for things like running the system activation scripts and managing any services NixOS exposes options for, so replacing it sounds like a tall order.
I’m not aware of any Nix-based alternatives, but I’d definitely welcome them! Oh and also, as others have pointed out, Guix might fit the bill depending on your needs.
Even if you do trust the software running on your computer, did you actually fuzz it for vulnerabilities? Heartbleed could steal your passwords even if you ran ostensibly trustworthy software.
So unless you harden the software and prove it’s completely exploit-free, you can’t trust it.
Is Mozilla essentially just offering an alternative to the Firefox snap here, or is there actually anything different about this package feature-wise compared to the other packages (snap, flatpak, etc.)?
Back in the day I was also dual-booting Windows and Ubuntu, but Windows was messing with the system clock.
Then I decided to just keep a Windows VM to be able to back up my iPhone (not my own choice 😞), but I’d like to replace that VM with a macOS one, so that at least I learn how to use a new system, since I have to keep using Windows at work anyway.
The community is a big plus on Linux and you always find someone ready to help you.
I also love the spirit of Linux where you rely way less on big corporations.
Daaem, I guess the poor dude on the receiving end did not consider it particularly fun. Well, at least they had sbin working, so it’s probably possible to recover without a live CD. Huh, guess who’s now spinning up a VM to check it out 🤣
Checked it out: on Arch that leaves you unable to start a TTY on reboot, and you’re then dropped into the initramfs rescue shell, where you can simply chmod +x new_root’s /usr/bin/* and be back up and running.
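For anyone else who ends up there, the fix from that rescue shell looks roughly like this (on Arch the real root is mounted at /new_root; other initramfs setups may differ):

```
# Restore the execute bits on the real root's binaries from the initramfs rescue shell
chmod +x /new_root/usr/bin/*

# Then continue booting (or just reboot if that doesn't pick it up)
exit
```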