I know very little about contributing to open source but appreciated reading this. Seems like often the interpersonal element is the biggest challenge and the author handled it well.
Looks like you have your answer, but there are a crazy number of possible issues.
The biggest cause is misreading the performance specs.
A partial list of other options:
Mechanical drives store data in concentric tracks. Outer tracks transfer faster than inner ones because the platter spins at constant angular velocity, so more data passes under the head per rotation near the edge.
Seeks cost a lot of throughput on mechanical drives.
Oversubscribed drive cables.
HBA issues.
PCIe data path conflicts
Slow RAM
RAM full or busy
Extra copies within RAM
NUMA path issues (if drives are connected to different NUMA nodes; not an issue on desktops)
CPU too busy
Transfer software doing extra things
File system doing extra work.
RAID doing extra work.
NIC on a different NUMA node than HBA (can be good or bad).
NIC sharing the data path in a conflicting way.
There are others. Start with checking theoretical performance from data sheets.
Also, details matter, and I don’t have enough of them to guess.
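If it helps, a quick way to sanity-check a drive's raw sequential speed against the data sheet (assuming the disk shows up as /dev/sdX - substitute your device; both commands only read, never write):

sudo hdparm -t /dev/sdX                                                          # quick buffered sequential read test
sudo dd if=/dev/sdX of=/dev/null bs=1M count=8192 iflag=direct status=progress   # longer read, bypassing the page cache

If the raw number already falls short of the data sheet, the bottleneck is the drive or its link; if it matches, the problem is further up the stack (filesystem, network share, rsync, etc.).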
SATA III is rated in gigabits (6 Gb/s), so the max speed is actually 600 MB/s.
What filesystem? For example, on my ZFS pool I had to let ZFS use a good chunk of my RAM for it to be able to cache things enough that rsync would max out the throughput.
Rsync doesn’t do the files in parallel so at such speeds, the process of open files, read chunks, write chunks, close files, repeat can add up. So you want the kernel to buffer as much of it as possible.
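If the per-file overhead really is the limit, one workaround is running several rsyncs at once - a rough sketch, assuming the data lives in top-level directories under /mnt/source and the target is /mnt/target (and noting that on a single spinning disk the extra seeking can cancel out any gain):

# up to 4 rsync jobs at once, one per top-level directory (loose files at the top level are skipped)
cd /mnt/source
ls -d */ | xargs -P 4 -I {} rsync -a "{}" "/mnt/target/{}"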
If you look at the disk graphs of both disks, you probably see a read spike, followed by a write spike on the target, instead of a smooth maxed out curve. Then the solution is increasing buffers and caching. Depending on the distro there’s a sysctl that may be on by default that limits the size of caches to prevent the “I wrote a 4GB file to my USB stick and now there’s 4GB of RAM used for it and it takes hours after finishing the transfer before it’s flushed to the stick”.
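For reference, I believe the knobs in question are the vm.dirty_* sysctls; something like this (values picked arbitrarily for illustration) lets the kernel hold more dirty data before writers start blocking:

sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_bytes vm.dirty_background_bytes   # show current values
sudo sysctl -w vm.dirty_bytes=$((2 * 1024 * 1024 * 1024))                                   # allow up to 2 GB of dirty data
sudo sysctl -w vm.dirty_background_bytes=$((512 * 1024 * 1024))                             # start background writeback at 512 MB

Setting the *_bytes variants overrides the *_ratio ones, and the change doesn't survive a reboot unless you drop it into /etc/sysctl.d/.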
SATA III is rated in gigabits (6 Gb/s), so the max speed is actually 600 MB/s.
My mistake, though still, a 4TB transfer should take less than 2 hours at 5 Gb/s (IN THEORY). Thank you @Max_P for pointing this out a second time elsewhere: 6 Gb/s is what the SATA III interface is capable of, NOT what the DRIVE is capable of. The marketing material for this drive clearly psyched me out; the actual transfer speed is 210 MB/s.
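For anyone following the same back-of-envelope math, roughly:

# assumes sustained sequential throughput and no protocol/filesystem overhead
mb=$((4 * 1000 * 1000))                                          # 4 TB expressed in MB
echo "$((mb / 600 / 60)) min at the 600 MB/s SATA III ceiling"   # ~111 min, just under 2 h
echo "$((mb / 210 / 60)) min at a real ~210 MB/s drive speed"    # ~317 min, over 5 h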
The filesystem is EXT4 and shared as SMB… OMV has a fair amount of RAM allocated to it, like 16GB or something gratuitous. I’m guessing the way rsync does its transfers is the culprit, and I honestly can’t complain because the integrity of the transfer is crucial.
I’m interested in ksmbd… I chose SMB simply because I was using it across Linux/Windows/Mac devices and I was using OMV for managing it, but that doesn’t mean I couldn’t switch to something better.
Honestly though, I don’t need faster transfers typically, I just happen to be switching out a drive right now. SMB through OMV has been perfectly sufficient otherwise.
ksmbd is still SMB, except it’s implemented within the Linux kernel. As a result, file transfer speeds are greatly improved compared to pure Samba, which runs only in userspace.
The second thing is, you need to check which SMB protocol version you’re using; ideally you’d want at least SMB 3, as anything older than that will be painfully slow.
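If it helps, a sketch of how to check and pin the version (assuming a Samba server and a Linux cifs client; the share name and user below are placeholders):

sudo smbstatus                                                        # on the server: shows the protocol version negotiated by each connected client
# in smb.conf, under [global], refuse anything older than SMB3:
#   server min protocol = SMB3
sudo mount -t cifs //nas/share /mnt/share -o vers=3.1.1,username=me   # on a Linux client, force SMB 3.1.1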
Finally, I read in your other comment that you’re using spinning disks and a USB dock. That adds significant overheads.
The IronWolf drive benchmarks at around 250MB/s at the start of the disk and slows to about 100MB/s toward the end (spinning disks gradually get slower as they fill, since the inner tracks move less data per rotation). Now add file fragmentation plus filesystem overheads (buffers, cluster size allocation, etc.) and the speed can drop considerably.
Then there’s your SATA-to-USB dock: no dock will ever reach 5Gbps; that’s just false advertising, since it only refers to the theoretical protocol speed. In reality you’d be seeing something below 100MB/s for 128k sequential writes, and if your block size is smaller, expect far slower writes.
Combine all of the above and you can imagine just how much slower this whole thing can be.
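If you want a number for what the dock path actually delivers, a rough fio run for that 128k sequential write case (assumes the drive is mounted at /mnt/target and you can spare 4 GB there) would look something like:

fio --name=seqwrite --directory=/mnt/target --rw=write --bs=128k --size=4G --ioengine=libaio --direct=1 --numjobs=1 --group_reporting

Compare that against the dock’s advertised 5 Gbps and the drive’s data sheet and you’ll see where the time goes.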
For reference, see this benchmark as an example, to see what’s “normal” for a simple file transfer to a blank drive with no fragmentation: www.anandtech.com/show/6014/…/3
A+ timing, I’m upgrading from a 1050ti to a 7800XT in a couple weeks! I don’t care too much for “ai” stuff in general but hey, an extra thing to fuck around with for no extra cost is fun.
I’m a bit confused, the information isn’t very clear, but I think this might not apply to typical consumer hardware, but rather specialized CPUs and GPUs?
You didn’t mention whether this is an HDD or an SSD. If it’s an HDD, you will never even reach SATA II speeds, although you should be able to saturate SATA I. Realistically you might be able to push around 200MB/s on newer HDDs, but that’s assuming nothing else gets in your way.
What is controlling the SATA drives? A lot of the time the controller that comes on the motherboard isn’t the greatest, and a dedicated storage controller lets you saturate the drive much more thoroughly, especially if it has a big cache.
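A few ways to see what the drive is actually hanging off and what link speed was negotiated (device names are just examples):

lspci | grep -i -E 'sata|ahci'        # which SATA/AHCI controller the board exposes
sudo smartctl -i /dev/sda             # look for "SATA Version is: ... (current: ...)"
sudo dmesg | grep -i 'sata link up'   # kernel log for the negotiated link speed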
I don’t have any previous knowledge of this at all, but from reading the docs, nothing you’re describing sounds wrong.
A u32 selector will match 4 bytes (u32 meaning unsigned 32-bit, presumably, which is 4 bytes).
It makes sense that you’d only be able to configure the matches on 4 byte intervals, because keeping them aligned may make the implementation simpler and more efficient. You can still match any set of bits this way.
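For example (if I’m reading the tc-u32 docs right), matching a single byte can be written either as a u8 match or as the equivalent aligned u32 match with a mask; both fragments below are the match part of a tc filter ... u32 command:

match u8 6 0xff at 9                   # TCP: the protocol field is byte 9 of the IP header
match u32 0x00060000 0x00ff0000 at 8   # same byte via the aligned 32-bit word at offset 8 (second byte from the top)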
Perhaps you could describe what you’re trying to match exactly and the selectors you tried.
I really appreciate this, thank you. I think I had confused myself by playing with ‘u16’ and ‘u8’ and somehow coming to the conclusion that they were matching the right side of a 32-bit string. (Which may still be true, but I’m just masking u32s now.)
This is what I ended up with, which is working the way I’d expect:
tc filter add dev wlan0 protocol ip parent 1: prio 1 u32 \
  match u32 0x30d6 0x0000ffff at -16 \
  match u32 0xc92d1905 0xffffffff at -12 flowid 1:20
This sends Ethernet frames destined for 30:d6:c9:2d:19:05 to flow 1:20, and it doesn’t seem to match a second device I tested. So, all good! Thank you again.
The issue with those numbers is that they don’t account for people having multiple devices. My PC, laptop, and Steam Deck all download apps from Flathub, so I’m likely counted multiple times. On the other hand, most people only use one device, so the actual numbers probably don’t differ much. It’s an estimate anyway.
Edit: I’m not surprised the amount of people using flatpak/flathub increased so much. It’s my preferred method of installing proprietary software and works on any distro, even unconventional ones like NixOS or Alpine. Sandboxing continues to get better, be it isolation or usability.
This is honestly why anyone and everyone should invest in at least a small UPS that can keep your PC powered for at least five minutes, so that in this kind of scenario you can cancel the update, shut down safely, and resume when the power is back.
Btrfs snapshots + Timeshift that is configured to run pre-update is great for this, though I cannot remember if Fedora’s layout is compatible with Timeshift’s expected configuration.
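Even without Timeshift, a manual pre-update snapshot is only a couple of commands - a sketch that assumes a btrfs root mounted at / with a /.snapshots directory, which may not match Fedora’s default subvolume layout:

sudo btrfs subvolume snapshot -r / /.snapshots/pre-update-$(date +%F)   # read-only snapshot of the root subvolume
sudo btrfs subvolume list /                                             # list existing subvolumes/snapshots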
OP, if you really want to go 100% with this, something like NixOS (which is definitely an extreme investment) or an atomic distro like Fedora Silverblue works very well for this.
As you mentioned, with Fedora the best alternatives are immutable spins. Updating means downloading a new base image, applying overlays and additional installations to it and on the next reboot you start from that image. You can configure it to keep as many previous versions as you need and boot into those directly on startup. Since you never change your current image once it’s built, you can’t break a known good system. You can only ever break your next version and in that case, just boot the previous.
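For anyone new to the atomic spins, the day-to-day commands are roughly these (standard rpm-ostree, as far as I know):

rpm-ostree status      # shows the current and previous deployments
rpm-ostree upgrade     # stages a new base image; it only takes effect on the next boot
rpm-ostree rollback    # makes the previous deployment the default again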
I’ve created an Ansible playbook that configures a vanilla Kinoite the way I want it. No need to back up the system if I can recreate it with less than a megabyte of text files. Secrets are in my password vault, personal files are in my personal cloud and get synced to and from the laptop continuously. I would never go back to backing up system files as opposed to recreating it with a playbook. That seems so wasteful in hindsight.