linux


DeathWearsANecktie, in Amazon Building its Own Linux-Based OS to Replace Android

A third competitor for Android and iOS would be amazing. But not if it’s Amazon…

chemicalwonka,
@chemicalwonka@discuss.tchncs.de avatar

Only if it is Libre Software. We are tired of proprietary rubbish.

Patch,

Android is already free software, and see how far that gets you. The kicker is that you’re tied into their services (with all the data harvesting, targeted advertising and monetisation that that involves).

SatyrSack,

If you’re tied to their non-free services, then it’s not free software.

chemicalwonka,
@chemicalwonka@discuss.tchncs.de avatar

I use GrapheneOS. Not all Android is born equal.

baconicsynergy,

Yes, because it is permissively open source, not only are these companies free to build what they want, we are also entitled to that same right. That’s how LineageOS and GrapheneOS came to be, and it’s really great.

There are also a lot of motivated people getting regular Linux distributions running on mobile devices, so we have that as well.

oxjox, (edited )
@oxjox@lemmy.ml avatar

I just read an article about how they’re increasing advertising on their Fire TVs. Rest assured, an Amazon OS is an Advertising OS.

Although, from what I’ve gathered of public opinion online, there’s LOTS of people willing to forgo their privacy in exchange for free shit.

Edit: Oh…

They say they expect Vega to begin shipping on Fire TVs early next year.

And that article: arstechnica.com/…/after-luring-customers-with-low…

Tak,
@Tak@lemmy.ml avatar

Amazon would sell your DNA for 25 cents if they could.

digdilem,

No difference to Google then

Tak,
@Tak@lemmy.ml avatar

Google would sell it for 20 cents

chemicalwonka, (edited )
@chemicalwonka@discuss.tchncs.de avatar

Apple would sell it for 10, but would deny until death that it sold, and would still convince you to believe that it didn’t. In addition, it would give it a “cool” name like DNA Titanium Protection XDR or something like that.

semperverus, (edited )
@semperverus@lemmy.world avatar

pine64.com/product-category/pinephone/

pine64.com/product-category/…/pinephone-pro/

There is already something in the works (that you can technically buy right now if you wanted), and it actively respects your freedom. Granted, as with everything in this ecosystem, it’s a very slow burn, so it’ll be a while before the software is actually good, but it’s already made massive strides from where it started.

I would say wait a bit and take a look at this later, but I do have one friend daily driving one now to some success (this wasn’t possible a year ago).

droans,

That would be great, but you can buy a $20 burner from a gas station that’s more powerful than those phones.

The regular version uses the Allwinner A64 chip which retailed for $5 when it was released… Back in 2015.

The Pro version uses the RK3399S, which is a custom lower-binned version of the RK3399. Neither chip was made available at retail, but the RK3399 was released in 2016 and was otherwise only used in low-end Chromebooks and SBCs.

semperverus,
@semperverus@lemmy.world avatar

Sure, but calling them out for not being a $20 burner phone doesn’t make sense when you’re comparing that to a developer/development device. This phone specifically isn’t meant for everyday consumers. What it is, however, is a signal that there is now a third competitor in the works, and it’s real and tangible.

BaroqueInMind, in Best lesser-known distribution/DE for low-end machines?
@BaroqueInMind@kbin.social avatar

Just install Arch without a desktop environment.

backhdlp,
@backhdlp@lemmy.blahaj.zone avatar

It’s kinda surprising how much you can do in just a tty. The only thing I can’t think of a method for rn is viewing/editing documents.

BaroqueInMind,
@BaroqueInMind@kbin.social avatar

I can’t think of a method for rn is viewing/editing documents

What is the extension of the document? I bet you money it's possible in the terminal. PDF? docx?

backhdlp,
@backhdlp@lemmy.blahaj.zone avatar

I mean any kind of document, so yes, PDF, docx, rtf, etc.

Thinking about it, isn’t lesspipe able to view documents?

BaroqueInMind,
@BaroqueInMind@kbin.social avatar

Literally all the extensions you mentioned can be viewed and edited in terminal by various tools.
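
For example, something like this covers PDF and docx (a rough sketch; it assumes poppler-utils and pandoc are installed, and the file names are made up):

    # view a PDF as plain text, paged
    pdftotext document.pdf - | less

    # convert a .docx to plain text or Markdown for viewing/editing
    pandoc report.docx -t plain | less
    pandoc report.docx -o report.md

    # on Debian-based systems, lesspipe lets less preview many formats directly
    eval "$(lesspipe)"
    less report.docx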

backhdlp,
@backhdlp@lemmy.blahaj.zone avatar

I don’t know the tools, but that means you can probably do everything* in a tty without ever installing a graphical environment

*I almost forgot spreadsheets and presentations

BaroqueInMind,
@BaroqueInMind@kbin.social avatar

Do you actually want to know the tools for each of the extensions you mentioned, or are you just having a conversation here?

starkzarn, in Have I successfully blocked ssh logins to root?

That all sounds correct to me. The random port you’re seeing in the logs is a high port, often referred to as an ephemeral port, and that is normal for source ports. All good there.

cyberwolfie,

Ok, thanks - so if I understand correctly then, it is listening on port 22 as a default, and not accepting traffic on any other port.

That brings up the question: wouldn’t I be better off changing the SSH port? And is it as easy as uncommenting the #Port 22 line in the config file, changing the port number to something random, and saving that somewhere? Would I then be able to connect by running ssh myuser@mydomain.com:<port>, or would I need to do anything else to successfully connect?

siph,

You would need to specify the new port when using ssh (using the -p$PORT option). Just keep in mind that security through obscurity is not considered secure in itself. You could instead consider a service like fail2ban that automatically blocks connections from certain sources depending on your set parameters.
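
For example, if the new port were 2222 (hypothetical):

    # ssh uses lowercase -p for the port
    ssh -p 2222 myuser@mydomain.com

    # scp uses uppercase -P for the same thing
    scp -P 2222 localfile myuser@mydomain.com: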

Tanoh,

You would need to specify the new port when using ssh (using the -p$PORT option).

You can put a host entry for it in .ssh/config specifying the port.
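
Something like this in ~/.ssh/config (the host alias and port are made up):

    Host myserver
        HostName mydomain.com
        User myuser
        Port 2222

After that, a plain ssh myserver picks up the port (and user) automatically.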

cyberwolfie,

Just keep in mind that security through obscurity is not considered secure in itself.

Do you consider it to not be a helpful measure to take at all?

I have fail2ban configured - since it is reading from the auth.log, I guess I would not have to make any changes to the configuration there to have it work with a new port?

siph, (edited )

It’s a mixed bag. Personally I wouldn’t use a non-standard port.

Consider that port numbers under 1024 are privileged ports. If you pick another privileged port, you would have to make sure that no other privileged service is running on it; if you pick a non-privileged port, you need to make sure that no unprivileged program tries to use the same port as your SSH service. Overall it adds a bit of overhead and possible headaches for barely any gain.

Fail2ban should work with a different port without any further configuration, but I’m not completely sure about that.
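
If you want to be explicit about it, the sshd jail’s port can be set in an override file (a sketch, with a hypothetical port):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled = true
    port    = 2222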

cyberwolfie,

Alright, cheers - I’ll leave it be as well then :)

squiblet, in Best lesser-known distribution/DE for low-end machines?
@squiblet@kbin.social avatar

I used to use WindowMaker on seriously underpowered laptops 10-15 years ago. Seems like it’s still just as efficient. For something more standard interface-wise you could try IceWM.

Another thing to do is build your own kernel without any features you don’t use. Not sure how much of a difference that makes exactly.
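
Roughly, the workflow looks like this (a sketch only; exact steps vary by distro and bootloader):

    # inside the kernel source tree:
    # start from the current .config, keep only modules loaded right now
    make localmodconfig
    # prune remaining features interactively
    make menuconfig
    # build and install
    make -j"$(nproc)"
    sudo make modules_install install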

siph, in Have I successfully blocked ssh logins to root?

What @StarkZarn said is correct. Just one more thing: Did you reload/restart the sshd service after changing the configuration? If so you should be good.

cyberwolfie,

Yes, this is something I did when setting up the server some time ago, and as a step in the process I rebooted the system after changing the config.

ninekeysdown, in Why btrfs gets huge perf hit with background IO work?
@ninekeysdown@lemmy.world avatar

I was curious about this too. Definitely making me question some of my own thoughts and assumptions about btrfs.

ProtonBadger,

It's a single metric out of many for a file system; let's see if someone investigates.

willybe, (edited ) in Have I successfully blocked ssh logins to root?

Yes, that’s the right way to block root login. As an added filter, you can use the ‘Match’ config expression to restrict logins even further.

If you’re on the open internet, your host will be heavily hit with login attempts. That is normal. But using a service like Fail2Ban will stop repeated hits to your host.

Ssh listens on port 22, as soon as a connection is made the host moves the connection to another port to free up 22 for other new connections. Btw: I wasn’t thinking clearly here. Outgoing connections won’t be using port 22, but the listening incoming port is always 22.

cyberwolfie,

Yes, that’s the right way to block root login. As an added filter, you can use the ‘Match’ config expression to restrict logins even further.

Not sure what you meant about the ‘match’ config expressions here. Could you elaborate a bit further?

If you’re on the open internet, your host will be heavily hit with login attempts. That is normal. But using a service like Fail2Ban will stop repeated hits to your host.

Hehe, yeah, I’ve noticed… The reason I get a little anxious about whether I did this correctly is that 95% of the login attempts are to root, so I want to make sure it is disabled. I have set up Fail2Ban, but I am using default settings, which may be a bit laxer than they need to be?

I’ve also been advised and considered moving to ssh keys, but I have not gotten to that yet.

Ssh listens on port 22, as soon as a connection is made the host moves the connection to another port to free up 22 for other new connections.

Makes sense. One question that comes from this: is it possible to disable that? I would never need two SSH logins at the same time on my server. And the second question is the one I asked above: should I change the port SSH listens on in order to reduce unwanted malicious login attempts?

ipsirc,
@ipsirc@lemmy.ml avatar

Ssh listens on port 22, as soon as a connection is made the host moves the connection to another port to free up 22 for other new connections.

Makes sense

No, it’s nonsense. Nothing like that happens.

willybe,

Match blocks allow you to restrict who/what is allowed or not allowed to connect to the server. There is a large number of options to utilize. Put this near the bottom of sshd_config. There should be an example there.

Here are some more examples: stackoverflow.com/…/sshd-with-multiple-match-sect…
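
A minimal sketch of what such a block can look like at the bottom of /etc/ssh/sshd_config (the address range and group name here are made up):

    # global defaults
    PermitRootLogin no
    PasswordAuthentication no

    # relax password auth only for the local network
    Match Address 192.168.1.0/24
        PasswordAuthentication yes

    # restrict a group to SFTP only
    Match Group sftponly
        ForceCommand internal-sftp
        ChrootDirectory /srv/sftp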

cyberwolfie,

Thanks - I’ll look more into that to see if I can make any rules that would make sense for my use case.

Markaos, (edited )
@Markaos@lemmy.one avatar

Ssh listens on port 22, as soon as a connection is made the host moves the connection to another port to free up 22 for other new connections.

There’s no limit on the number of concurrent connections on a single port, and SSH runs completely on the one port it is configured to use. Otherwise, allowing just port 22 through the firewall wouldn’t be enough to have a functional SSH connection with default settings.

You can verify that quite easily for example by spinning up three barebone Debian VMs connected to a single virtual network, configuring the firewall on the “server” VM to drop everything other than port 22 and then connecting from both client VMs - it will work just fine.

Maybe you’re confusing it with the fact that only one process can listen on a given port at a time? But that’s only for establishing new connections. Existing connections can be passed off to another running process or a child process just fine, and that’s how SSH handles separation between connections.

Edit: oh, you’re talking about the high port OP is wondering about. That’s just the source port, which is chosen randomly by the client OS when making a connection. Using port 22 (or any other port below 1024) as a source port would require root privileges on the client and would also conflict with the SSH server that could be running there. Still, it has nothing to do with SSH “moving connections over”
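
You can see this for yourself on the server: every established SSH session has local port 22, and only the client-side (source) port varies. For example:

    # list established TCP connections whose local port is 22
    ss -tn 'sport = :22'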

cyberwolfie, (edited )

Edit: oh, you’re talking about the high port OP is wondering about. That’s just the source port, which is chosen randomly by the client OS when making a connection. Using port 22 (or any other port below 1024) as a source port would require root privileges on the client and would also conflict with the SSH server that could be running there. Still, it has nothing to do with SSH “moving connections over”

Ah, I see, so the port numbers shown in auth.log are all client side ports. I guess I thought that the listening port would be in the log and assumed that the port listed there would be it, but when I read the lines again, it clearly says “from ip.ad.dr.ess port 12345”

redcalcium,

These days I don’t even bother installing fail2ban or changing ssh port anymore. I’ll just disable root and password login and be done with it.
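
Which boils down to a couple of lines in /etc/ssh/sshd_config, roughly:

    PermitRootLogin no
    PasswordAuthentication no
    # on newer OpenSSH this also covers keyboard-interactive prompts
    KbdInteractiveAuthentication no

followed by a restart of the sshd service (on Debian/Ubuntu the unit is called ssh).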

mintyfrog, in NVIDIA Linux Driver Adds Wayland Bug Fixes and Improvements

Is it usable with hybrid graphics in a laptop yet? I’ve heard about horrible battery drain and sleep issues before.

Kerb, in Your chosen desktop Linux defaults?
@Kerb@discuss.tchncs.de avatar

i always use:

  • KDE
  • yakuake
  • kate
  • vlc
  • fishshell
  • gparted
  • firefox

no matter what the default might be

ares35, (edited ) in Best lesser-known distribution/DE for low-end machines?
@ares35@kbin.social avatar

c2d era laptop. first step, if you haven't yet: swap the hdd for a low-cost sata ssd if you can. if you have some homeless sodimms, up the ram too, if it won't cost anything to do it.

if you're going with mx, you want the fluxbox spin; or opt for antix with icewm instead.

otherwise start with a debian base install (no de or extra sw at install), then add only what you need. peppermint is another option--a basic debian with xfce out-of-the-box and little else. it's what i've been using lately on similar hardware.
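
something like this gets you a usable icewm setup on a debian base (package names from the debian repos; adjust to taste):

    # minimal graphical stack on top of a base install
    sudo apt install --no-install-recommends xorg icewm lightdm
    # a browser and a light file manager, nothing else
    sudo apt install firefox-esr pcmanfm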

for something 'different', you could look at slax.

mfat,

Thanks. I’ve already added an SSD drive and upgraded the RAM from 3 to 4 GB. Another comment mentioned IceWM so I’m definitely giving it a try.

redcalcium, in Why btrfs gets huge perf hit with background IO work?

I think it’s not btrfs that’s slow, it’s bcachefs that’s insanely fast. Bcachefs being almost as fast as ext4 while having that many features is insane.

Chewy7324, (edited )

This screenshot is the only metric where btrfs is incredibly slow.

Bcachefs random and sequential writes and reads are much slower than other filesystems in this benchmark.

I have no idea how the actual real world performance will be. Bcachefs still misses a lot of features so I’ll continue to follow the development, hopefully including performance improvements.

Bcachefs sequential write performance in this out-of-the-box comparison was coming in at around half the speed of Btrfs while XFS, F2FS, and EXT4 were the fastest.

www.phoronix.com/review/bcachefs-linux-67/2

Edit: The benchmarks were done with a debug variable set, which explains the weak IO.

www.phoronix.com/news/Bcachefs-Updated-Linux-6.7

redcalcium, (edited )

Wait, so the benchmarks mostly contradict the claims that bcachefs is almost as fast as ext4, except in application startup time? What kind of test was performed for that application startup time benchmark?

ProtonBadger, (edited )

The application startup test is designed to cold-load an application with heavy background IO going on.

yote_zip,
@yote_zip@pawb.social avatar

Note that in this benchmark, bcachefs had a debug variable turned on that allegedly severely hampered performance. Bcachefs has released an update to disable this variable but Phoronix hasn’t redone benchmarks yet. I wouldn’t put much value into any bcachefs-related comparisons from this current benchmark.

Chewy7324,

Interesting, that hopefully explains the weak IO throughput. Thank you.

authed, in Have I successfully blocked ssh logins to root?

Did you restart the server after you made the config change?

cyberwolfie,

Yes.

guywithoutaname, in Amazon Building its Own Linux-Based OS to Replace Android

Probably because it is stupid simple to escape their ecosystem just by sideloading apps. They want to lock you down with their own OS.

blitzkrieg, in Why btrfs gets huge perf hit with background IO work?

Hold up. I thought btrfs was better than ext4?

fuckwit_mcbumcrumble,

I’m sure this is a specific workload that BTRFS struggles with that others handle just fine.

In other workloads BTRFS will be better, and in others it will be worse. There’s no one-size-fits-all.

Chewy7324,

Filesystems aren’t so simple. Modern advanced filesystems like btrfs, zfs and bcachefs are more than just filesystems.

E.g. they include features like volume management, compression and sometimes encryption. Most features can also be achieved with for example ext4 + lvm + luks, but it’s nice to have all in one system with unified configuration.

tl;dr

Btrfs does more than ext4, which can have a negative performance impact, depending on the use case/metric. Usually the features gained by btrfs outweigh the small difference in performance imo.

wviana,

Oh. So I didn’t need LVM and LUKS at my install?

Chewy7324,

Btrfs doesn’t do encryption, so luks is still necessary. LVM isn’t needed since btrfs subvolumes achieve the same in a more flexible way (no fixed size, snapshots).
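
For example, subvolumes and snapshots are just regular commands on a mounted btrfs (the paths here are made up):

    # create a subvolume instead of carving out an LVM logical volume
    sudo btrfs subvolume create /mnt/@home

    # read-only snapshot, e.g. before an upgrade
    sudo btrfs subvolume snapshot -r /mnt/@home /mnt/@home-backup

    sudo btrfs subvolume list /mnt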

axzxc1236,

Btrfs doesn’t have built in encryption, if you want to encrypt Btrfs you still need another layer (e.g. LUKS).

Max_P,
@Max_P@lemmy.max-p.me avatar

It’s mostly better, but not in every way. It has a lot of useful features, at a performance cost sometimes. A cost that historically wasn’t a problem with spinning hard drives and relatively slow SATA SSDs but will show up more on really fast NVMes.

The snapshots: it has to keep track of what’s been modified. Depending on the block size, an update of just a couple bytes can end up as a few 4k writes, because it’s Copy-on-Write and it has to update a journal and the block list of the file. But at the same time, copying a 50GB file is instantaneous on btrfs because of the same CoW feature. Most people find the snapshots more useful than eking out every last bit of performance out of your drive.

Even ZFS, often considered to be the gold standard of filesystems, is actually kinda slow. But its purpose isn’t to be the fastest; its purpose is letting you throw an array of 200 drives at it and trusting it to protect you even against some media degradation and random bit flips in your storage, with regular scrubs.

yote_zip,
@yote_zip@pawb.social avatar

A couple nits to pick: BTRFS doesn’t use/need journaling because of its CoW nature - data on the disk is always correct because it doesn’t write data back over the same block it came from. Only once data is written successfully will the pointer be moved to the newly-written block. Also, instantaneous copies from BTRFS are actually due to reflinking instead of CoW (XFS can also do reflinking despite not being CoW, and ZFS didn’t have this feature until OpenZFS 2.2 which just released).
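
Reflink copies can also be triggered explicitly with cp, for instance:

    # near-instant copy on btrfs/XFS: only metadata is written,
    # data blocks are shared until one side is modified
    cp --reflink=always bigfile.img bigfile-copy.img

With --reflink=always the copy fails on filesystems that can’t reflink; --reflink=auto silently falls back to a normal copy.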

I agree with the ZFS bit and I’m firmly in the BTRFS/ZFS > Ext4/XFS/etc camp unless you have a specific performance use case. The ability to scrub checksums of data is so invaluable in my opinion, not to mention all the other killer features. People have been running Ext4 systems for decades pretending that if Ext4 does not see the bitrot, the bitrot does not exist. (then BTRFS picks up a bad checksum and people scold it for being a bad filesystem)
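
A scrub is a one-liner, e.g. on btrfs (the mountpoint is hypothetical):

    sudo btrfs scrub start /mnt/data
    sudo btrfs scrub status /mnt/data
    # per-device read/write/corruption error counters
    sudo btrfs device stats /mnt/data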

Max_P,
@Max_P@lemmy.max-p.me avatar

People have been running Ext4 systems for decades pretending that if Ext4 does not see the bitrot, the bitrot does not exist. (then BTRFS picks up a bad checksum and people scold it for being a bad filesystem)

ZFS made me discover that my onboard SATA controller sucks and returns bad data occasionally under heavy load. My computer cannot complete a ZFS scrub without finding errors, every single time.

Ext4, bcache and mdadm never complained about it, ever. There was never any sign that something was wrong at all.

100% worth it if you care about your data. I can definitely feel the hit on my NVMe but it’s also doing so much more.

yianiris,
@yianiris@kafeneio.social avatar

@Max_P @yote_zip

A file system and a raid setup all in one with facebook's code for de-/compression ... yet another piece of software that tries to do the work for several others.

ext4 will rarely have extreme loads unless all you do is backup and mirroring. For more accurate data reads/writes and better checks, use XFS with external journaling. On M.2 it is by far the fastest solution and far more secure than ext.
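
If you want to try XFS with an external journal, the setup looks roughly like this (device names are placeholders):

    # put the log on a separate fast device at mkfs time
    mkfs.xfs -l logdev=/dev/nvme1n1p1 /dev/nvme0n1p2
    # the log device must also be given at mount time
    mount -o logdev=/dev/nvme1n1p1 /dev/nvme0n1p2 /mnt/data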

yianiris,
@yianiris@kafeneio.social avatar

@Max_P @yote_zip

On a magnetic disk there is a physical location for each bit of data written; for an SSD it is all virtual space handled by internal software simulating a magnetic drive. The variation from disk to disk is much higher than the sensitivity of the fs.

If you want accuracy and reliability of data storage, use HDDs and RAID!

Max_P,
@Max_P@lemmy.max-p.me avatar

RAID doesn’t checksum and heal the rotten data. It’s game over before you even have a filesystem on top of it, because said filesystem can’t directly access the underlying disks because of the RAID layer.

Errors will occur, and RAID has no way of handling them. You have a RAID1; disk 1 says it’s a 0, disk 2 says it’s a 1. Who’s right? RAID can’t tell; btrfs and ZFS can. RAID won’t even notice there are a couple of flipped bits, it’ll just pass them along. ZFS will just retry the read on both disks, pick the block that matches the checksum, and write the correct data back to the other disk. That’s why people with lots of data love ZFS and RAIDZ.

The solution isn’t more reliable hardware; the solution is software that can tell you about, and recover from, your failing hardware.
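
In ZFS terms that self-healing is what a scrub does, e.g. (the pool name is hypothetical):

    # walk every block, verify checksums, repair from redundancy
    sudo zpool scrub tank
    # report what was found and fixed
    sudo zpool status -v tank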

woelkchen,
@woelkchen@lemmy.world avatar

Valve decided on Ext4 for Steam Deck and did so probably not just for shits and giggles.

doomkernel, in Fonts

Fira Code Nerd Font for the terminal and stock fonts for everything else
