Interesting concept, I like the design, but the workflow is rather odd and would take some getting used to. Also, the UI needs some work on scrolling; for example, the sudo connect window scrolls the password prompt out of sight if you fail the password entry.
I did not take this picture. I just nabbed the smuggest-looking cat-on-a-keyboard I could find.
But your questioning of my cat’s software testing experience has made her very upset.
Yep. In fact my comment seemed so clear to me that I assumed it was some kind of joke, but looking at the votes, maybe swapfiles aren't as well known as I thought.
Um, you really need to read the entire phrase and not pick out only what you want from it. 😃
Swap can make a system slower to OOM kill, since it provides another, slower source of memory to thrash on in out-of-memory situations.
It means that if you try to use it as a source of memory, when you run out of actual RAM it will make your system almost completely unresponsive due to disk thrash, instead of allowing the kernel to just kill the process that’s eating your RAM. So you’ll just end up hard-rebooting the system.
Yes, and that’s a good thing if you don’t want it to start killing processes. You have that extra time/space to deal with the out-of-memory condition yourself.
Or you can ignore that condition and continue using the system in a degraded state, with swap as “disk RAM”.
Like I said, the system will be almost completely unresponsive due to disk access being several orders of magnitude slower than RAM, plus the allocation thrashing… you won’t be able to do much; the mouse, keyboard and display will react extremely slowly. There may be situations where you’d prefer this to an OOM kill, for example if you’re running a test or experiment where you’d rather have it finish, even if it takes a very long time, than lose the data. But if you’re a regular desktop user or server admin you’ll probably just reboot.
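If you’d rather hit the OOM killer sooner than sit through the thrash, one real knob for this is vm.swappiness; rough sketch below (the exact value is a matter of taste, and the sysctl.d path may differ per distro):

# see how much swap is actually in use
free -h
# make the kernel less eager to swap anonymous memory out
sudo sysctl -w vm.swappiness=10
# persist it across reboots
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf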
I actually am in the market for a new mobo and cpu.
Are there any mobos nowadays that don’t use UEFI? I just want an old traditional-style BIOS with a jumper to restore it from a ROM chip if I get any malware, so I can actually trust my hardware.
I did force myself to deal with UEFI for the sake of Windows, but gaming has gotten good enough on Linux that I don’t actually need to dual-boot Windows anymore.
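For what it’s worth, if you ever need to check whether a machine actually booted in UEFI mode or legacy BIOS/CSM mode, a quick sketch from a running Linux system (standard sysfs path; bootctl assumes systemd):

# this directory exists only when the kernel was booted via UEFI
ls /sys/firmware/efi
# on systemd-based systems, prints firmware, Secure Boot and boot loader details
bootctl status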
No hate from me, just a simple question: why use preconfigured distros instead of the original (always the best) with the archinstall script? You can even install pamac or whatever package installer tool Manjaro uses.
but LXDE should effectively be considered “end of life” – the developer is in the process of porting everything over to Qt and working on releases of LXQt
with that in mind, for a full DE – Xfce if you like GTK, LXQt if you like Qt
or a minimal setup with a WM plus utilities (like Openbox or one of the large selection of tiling window managers)
along those lines though, there are still a LOT of lightweight Linux distros to choose from
Crunchbangplusplus or BunsenLabs – successors to Crunchbang Linux – usually just Openbox WM and a few utils rather than a full DE
plain old Debian stable – proprietary drivers are now part of the installer, no more hunting for a special ISO – can choose your DE or WM during install
So I don’t get it: I have my entire boot image in a signed EFI binary, and the logo is in there as well. I don’t think I’m susceptible to this, right? I don’t think systemd-boot or the kernel reads an unsigned logo file anywhere. (Using Secure Boot.)
Depending on how the UEFI is configured, a simple copy/paste command, executed either by the malicious image or with physical access, is in many cases all that’s required to place the malicious image into what’s known as the ESP, short for EFI System Partition, a region of the hard drive that stores boot loaders, kernel images, and any device drivers, system utilities, or other data files needed before the main OS loads.
Right, I know EFI images are stored in the EFI partition, but with secure boot, only signed images can be executed, so they’d need to steal someone’s signing key to do this.
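If you want to double-check that setup, a rough sketch assuming systemd-boot plus the sbsigntools package (the image path is just an example, adjust for your ESP layout):

# confirm Secure Boot is actually enabled, not merely supported
bootctl status | grep -i 'secure boot'
# or: mokutil --sb-state
# list the signatures attached to your signed EFI binary
sbverify --list /boot/efi/EFI/Linux/linux.efi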
Just use rsync -va (possibly with --chown if you want user/group to be different at the destination, and with --delete if you want removed files to be deleted) to continue the copy operation; it automatically takes care of figuring out which files still need to be copied and which are already there.
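For reference, a minimal sketch of that (paths are placeholders; the trailing slashes matter to rsync):

# -a preserves permissions, times, symlinks etc.; -v is just verbose
# add --chown=user:group if ownership should differ at the destination
rsync -va --delete /mnt/source/ /mnt/backup/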
The default quick check algorithm of rsync is not safe for this. It only checks filesize and modification time to determine if files are equal. After a b0rked copy, these are not to be trusted.
You should add the -c flag so that files are properly checksummed; unfortunately, if you have slow storage on either end, this often negates the speed advantage of rsync.
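In practice that can be a separate verification pass once the resumed copy finishes (just a sketch, paths are placeholders):

# -c replaces the size+mtime quick check with full checksums;
# -n (dry run) plus -i (itemize changes) only reports files that still differ
rsync -acni /mnt/source/ /mnt/backup/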
My memory of the cp command is that attributes such as file times were transferred at the last step. I think this would make rsync safe in most situations where a system crash wasn’t involved.
True if the initial state is unknown, but if you do your initial copy and all the later syncs with rsync it’s not really necessary, since rsync puts the partial files in a temporary location (there are some parameters to control the details of that too).
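Those parameters are things like --partial and --partial-dir; a sketch of keeping interrupted transfers in a separate directory, so a half-copied file never sits at its final name with a plausible-looking mtime:

rsync -va --partial-dir=.rsync-partial /mnt/source/ /mnt/backup/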