Chobbes

@Chobbes@lemmy.world


Chobbes,

This is what I thought too, but in my case it turned out my drive was busted and btrfs detected an error and went read-only… which was super annoying, and my initial reaction was “ugh, piece of shit filesystem!” But ultimately I’m grateful it noticed something was wrong with the drive. If I had just been using ext4 I would have had silent data corruption. In that sense other filesystems do silently do their jobs… but they also potentially fail silently, which is a little scary. Checksums are nice.
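To illustrate the point about checksums, here’s a toy Python sketch (this is not how btrfs actually implements it; btrfs stores per-block checksums, CRC32C by default, in its metadata) showing how a stored checksum turns silent corruption into a loud error:

```python
import zlib

def store(data: bytes) -> tuple[bytes, int]:
    """Save data along with a checksum, like a checksumming filesystem does per block."""
    return data, zlib.crc32(data)

def read_back(data: bytes, checksum: int) -> bytes:
    """Verify the checksum on read; raise instead of silently returning bad data."""
    if zlib.crc32(data) != checksum:
        raise IOError("checksum mismatch: drive returned corrupted data")
    return data

data, csum = store(b"important file contents")
corrupted = bytes([data[0] ^ 0x01]) + data[1:]  # simulate a silent bit flip on disk

read_back(data, csum)           # fine: checksum matches
try:
    read_back(corrupted, csum)  # a non-checksumming filesystem would just hand this back
except IOError as e:
    print("caught:", e)
```

Without the checksum there is nothing to compare against, so the flipped bit would be returned as if it were your real data.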

Chobbes,

They’re just suggesting that you should accept both cash and electronic payments.

Chobbes,

No, they are not. They are not end-to-end encrypted but they are encrypted between your PC and your service provider, between service providers and between service providers and receivers. End-to-end encryption is needed to defend against your service provider or entities that can order your provider around but not against random hackers snooping around in your network.

This is true AND untrue at the same time! It’s true that most e-mail providers will talk to other e-mail providers with TLS, but it’s trivial to downgrade the connection in most circumstances. If you can man-in-the-middle e-mail servers you can just say “hey, I’m the e-mail provider you’re trying to talk to, I don’t support TLS, talk to me in plain text!” and the sender will probably oblige. There are a few standards that try to address this problem, like DANE (which actually solves the problem, but isn’t supported by any of the large e-mail providers), and MTA-STS, which is a much weaker standard (but is supported by gmail and outlook). In practice there’s a good chance that your e-mail is reasonably well secured, but it’s absolutely not a guarantee.
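A rough sketch of why the downgrade is so easy: the sender just looks for STARTTLS in the receiving server’s EHLO capability list, so a man-in-the-middle only has to strip that one line. The responses below are hypothetical; real MTAs speak the full SMTP dialogue from RFCs 5321 and 3207:

```python
def offers_starttls(ehlo_response: str) -> bool:
    """Check whether an SMTP EHLO response advertises the STARTTLS capability."""
    for line in ehlo_response.splitlines():
        # capability lines look like "250-STARTTLS", or "250 STARTTLS" for the last line
        if line[:4] in ("250-", "250 ") and line[4:].strip().upper() == "STARTTLS":
            return True
    return False

honest = "250-mail.example.com\n250-SIZE 35882577\n250 STARTTLS"
tampered = "250-mail.example.com\n250 SIZE 35882577"  # MITM stripped STARTTLS

print(offers_starttls(honest))    # True  -> sender upgrades to TLS
print(offers_starttls(tampered))  # False -> sender happily falls back to plain text
```

Nothing in plain SMTP authenticates that capability list before TLS is negotiated, which is exactly the gap DANE and MTA-STS try to close.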

Chobbes,

Nowadays client-server and server-server communication is encrypted and signed, so it’s not an issue now.

This is probably true, but in a very unsatisfying way. It’s not accurate to say this is not an issue now because mail servers talk to each other with opportunistic encryption — if both ends say “hey, I support TLS” they’ll talk over TLS, but if either end claims to not support TLS they’ll default to plain text. This is deeply concerning because it’s very possible for somebody to mimic another server and get the connection downgraded to plain text, bypassing TLS altogether. There are standards to deal with this, like DANE, but most large e-mail providers don’t support this… The other more recent standard to address this is called MTA-STS, but it’s much weaker than DANE and can potentially be exploited (but at least gmail and outlook support it, I guess). E-mail security is in a weird place. It’s slightly better than the “completely unencrypted” situation that people seem to think it is… But it’s also pretty much impossible to guarantee that your e-mail will not be sent over plain text.
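For reference, MTA-STS works by publishing a policy file over HTTPS plus a DNS record pointing at it. A hypothetical policy (served at https://mta-sts.example.com/.well-known/mta-sts.txt, with example.com standing in for a real domain) looks roughly like:

```
version: STSv1
mode: enforce
mx: mail.example.com
max_age: 604800
```

along with a TXT record like `_mta-sts.example.com. IN TXT "v=STSv1; id=20240101000000"` so senders know a policy exists. Note the bootstrapping weakness: on first contact the policy is discovered via ordinary DNS and HTTPS, so an attacker in the right position can potentially keep a sender from ever learning the policy exists, which is part of why it’s weaker than DANE.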

Chobbes,

In my experience with my Apple Watch you have to activate the wallet functionality in order to pay for something by clicking the side button twice, which should make it harder for somebody to just walk around with a terminal charging random people. Phones usually need to be unlocked to make payments too. In theory NFC credit cards could be scanned like this, and if you’re worried about that you can look into NFC blocking wallets… I’m not super worried about it, though, because usually you wouldn’t be on the hook for such a fraudulent charge.

Chobbes,

AFAIK DKIM/DMARC is now mandatory on most servers.

DKIM and DMARC don’t have anything to do with this. DKIM is a way for e-mail servers to sign e-mails with a key that’s published in DNS in an attempt to prevent e-mail spoofing, but this in no way protects e-mails you send from potentially being read in plain text. DKIM is also not necessarily mandatory, and you can potentially get away with just SPF. Many mail servers also do not have strict sender policies, which could allow for spoofing in certain situations. Also, neither DKIM nor SPF provides any protection if an attacker is able to poison DNS records.
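For context, all three mechanisms are just DNS TXT records. A hypothetical example.com zone (the DKIM public key is a truncated placeholder):

```
; SPF: which hosts may send mail for the domain (here: only the MX hosts)
example.com.                      IN TXT "v=spf1 mx -all"
; DKIM: the public key receivers use to verify message signatures
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqG..."
; DMARC: what receivers should do when SPF/DKIM checks fail
_dmarc.example.com.               IN TXT "v=DMARC1; p=reject; rua=mailto:reports@example.com"
```

SPF says who may send, DKIM proves the message wasn’t altered and came from the domain, and DMARC tells receivers how strictly to enforce the other two; none of them encrypts the message in transit.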

GPG. Or other E2EE.

I mean, yes, but that’s not really the point. PGP has essentially nothing to do with the e-mail protocols themselves, aside from extensions like PGP/MIME. Almost no institution is using PGP to secure e-mails. You could also encrypt something with PGP before sending it over the fax lines, in theory.

Chobbes, (edited )

That depends on the specific TLS setup. Badly configured TLS 1.2 would allow downgrade attacks, TLS 1.3 would not.

Why would TLS 1.3 prevent this kind of downgrade attack? The issue is that TLS has never been a requirement for e-mail servers, so for interoperability they only do TLS opportunistically. Even if you configure your own e-mail server to only talk over TLS, nobody else knows that your server only speaks TLS (or speaks TLS at all), so if somebody is pretending to be your mail server they can just claim to only speak plain text and any sender will be more than happy to default to it. If you support DNSSEC you can use DANE to advertise that your mail server speaks TLS, and even pin the certificates that are allowed, but senders actually have to check this in order to make sure nobody can intercept your e-mail. Notably, both outlook and gmail do not support this (neither for sending nor receiving!); they instead rely on the weaker MTA-STS standard.
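A DANE-enabled receiving domain advertises its TLS support with a TLSA record in a DNSSEC-signed zone. A hypothetical record for an SMTP server (the digest is a placeholder):

```
; "3 1 1" = DANE-EE(3): match the server's own certificate,
; on its SubjectPublicKeyInfo(1), as a SHA-256 digest(1)
_25._tcp.mail.example.com. IN TLSA 3 1 1 <sha256-digest-of-the-server-public-key>
```

Because the record is signed with DNSSEC, a sender that validates it knows both that TLS is required and which key to expect, so neither stripping STARTTLS nor presenting a different certificate works.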

my guess would be that at least the big ones like gmail don’t allow unsecured communication with their servers at all

They absolutely do :).

I highly doubt the “in most circumstances” line

That was maybe too strong of a statement; with the recent adoption of MTA-STS this is at least less trivial to do :). The intent of the statement was more “if you are in a position to be a man-in-the-middle between two generic e-mail servers, it is trivial to downgrade the connection from TLS to plain text”. I wouldn’t be surprised if it was hard-coded that gmail and outlook should only talk to each other over TLS, for instance, which should prevent this for e-mails sent between the two (I also wouldn’t be surprised if it wasn’t hard-coded either… there’s a bad track record with e-mail security, and the lack of DNSSEC from either of these parties is disappointing!).

Ignoring special configuration like that, and without MTA-STS or DANE, these downgrade attacks are trivial. With the advent of MTA-STS you’ll probably have a reasonably hard time downgrading the connections between some of the large e-mail providers, though it’s not universally supported either; iCloud supports neither MTA-STS nor DANE, for instance, and who knows about all of the various providers you never think of.

This is a bit of a tangent, but here’s a good talk about how large mail providers might not be as well configured as you’d hope: www.youtube.com/watch?v=NwnT15q_PS8

Chobbes, (edited )

Neither does TLS in such a case. An attacker can request an ACME cert.

Depends on whose DNS you can mess with, but yes! It may be possible to poison DNS records for one e-mail server, but ACME certificate providers like letsencrypt (supposedly) try to do DNS lookups from multiple locations (so hopefully a simple man-in-the-middle attack will not be sufficient), and they do lookups directly against the authoritative DNS servers. This is, of course, not perfect and theoretically suffers from all of the same MITM problems, but it’s more thorough than most mail servers will be, and it would limit who is in a position to perform these attacks and get a bogus certificate issued.

With DNSSEC and DANE you are even able to specify which TLS certificate should be used for a service in a TLSA record, and you can protect your A records and your CAA record which should make it much harder to get bogus certificates issued. Of course you need to trust the TLDs in order to trust DNSSEC, but you already do implicitly (as you point out, if you control the TLD you can get whatever certificate you want issued through ACME). The reality right now is that all trust on the web ultimately stems from the TLDs and DNS, but the current situation with CAs introduces several potential attack vectors. The internet is certainly a lot more secure than it used to be even 10 years ago, but I think there’s still a lot of work to be done. DNSSEC, or something like it, would go a long way to solving some of the remaining issues.
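As a concrete example, the CAA record mentioned above is a one-liner in the zone (hypothetical domain and CA):

```
; only this CA may issue certificates for example.com
example.com. IN CAA 0 issue "letsencrypt.org"
```

With DNSSEC on the zone, an attacker who can spoof plain DNS answers can no longer quietly feed CAs a record that permits issuing themselves a certificate.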

Chobbes,

I’m not responding to that comment?

Chobbes,

If you’re active outside it’s surprisingly hard to be cold, to be honest. Beyond that, the most important thing is having a windproof layer on the outside, and probably some decent gloves.

Chobbes,

I’m from Canada, so… I have?

Chobbes,

I’ve lived where it regularly gets near -40C. I often feel chillier lying down in a “cold” house than just walking outside for a bit. If you have a thick coat and you’re moving, it’s not unusual to get too warm, which can be a bit of a problem if you start sweating. I would bike in the winter and I basically just needed a wind breaker and a light jacket (and good gloves, obviously!). One thing that kind of sucks is taking the bus in the winter, because you walk to the bus stop, then sit there in the cold, and then when you finally get on the bus it’s disgustingly warm.

Chobbes,

This is such a weird take to be honest… it’s weird to want CS lecturers to work in their free time, it’s weird to expect their applications to be better, and it’s weird because this is something that many lecturers and programmers already do… so I don’t get it, and it feels disrespectful to all of the volunteer foss maintainers?

Wanting to improve my Linux skills after 17 months of daily driving Linux

I’ve been daily driving Linux for 17 months now (currently on Linux Mint). I have got very comfortable with basic commands and many “just works” distros (such as Linux Mint, or Pop!_OS) with apt as the package manager. I’ve tried Debian as a distro to challenge myself, but have always run into issues. On my PC, I could...

Chobbes, (edited )

I really do recommend doing a Gentoo install at some point, because I think you would learn a lot from it. It’s a really nice experience and a well put together distro. The compiling is potentially not as bad as you think, though there are a couple of packages that are notoriously painful to compile (prebuilt binaries are available for some of the painful ones if desired, too). You’d probably get a decent amount out of an Arch install too. Arch isn’t my cup of tea, but lots of people like it and it’d be quicker to get started with than Gentoo. I’m not sure I’d recommend it for you at this stage, but eventually you should check out NixOS too! You can even try the Nix package manager out on any distro you want. NixOS is really interesting, but it does things a bit differently than other distros, and if you’ve done an Arch / Gentoo install it’ll be interesting to see what NixOS does in contrast.

Other things to mess with… You mention partitioning, so make sure to check out LVM, and also consider reading a bit about filesystems. Maybe give btrfs a go :).

I wouldn’t worry about daily driving either Gentoo or Arch. Once you have them set up you’ll probably be fine.

Chobbes,

I don’t think it’s that clear cut, to be honest. More code doesn’t mean the package benefits more from optimizations at all, and even if that were true, you might care more about the performance of the kernel or various small libraries that are used by a lot of programs than about how fast some random application that depends on QtWebKit is.

Chobbes, (edited )

I think Arch kind of deserves the hate it gets. I love barebones distros and have been a Gentoo user (now on NixOS), and I’ve used Arch a fair bit too… I just don’t feel like Arch is a well maintained distribution. There are all sorts of little things that they can’t seem to get right that other distros do, like that silly issue where pacman won’t update the Arch keyring first, so if you haven’t updated in a while the upgrade breaks on signature errors. In my experience there are a million little paper cuts like this and I’ve just been kind of unimpressed. If it works for you that’s great! I’ve just been disappointed with it. I get the niche that it fills as the binary “from scratch” rolling release distro, but I think the experience with it is a little rough. I’ve found Gentoo more user friendly, which probably sounds bizarre if you haven’t used Gentoo, but ignoring compiling stuff, Gentoo does an excellent job of not breaking things on updates, and it’s much easier to pin and install specific versions of packages.

Chobbes,

I mean… I would consider anywhere that you might download software from sensitive. This isn’t really a smart move. And sure, the mirror’s page they link to uses https, but if the regular site doesn’t a man-in-the-middle could change the url and serve an official looking malicious version… I wouldn’t consider putting your users at an elevated risk when it’s relatively easy to set up TLS “a smart move”.

Chobbes,

Who says it hasn’t happened? :P

If it hasn’t I would just assume that Slackware isn’t a big enough target and that anybody in the position to man-in-the-middle a large number of people would have better targets. I mean, to be clear TLS is not a silver bullet either, but it goes a long way for ensuring the integrity of the data you receive over the internet in addition to hiding the contents.

Distros usually sign their ISOs with PGP as well (Slackware does this), so it’s a good idea to verify those signatures, as it’s a second channel you can use to double check the validity of the ISO (though I’m not sure many people actually do this).

Of course, anybody can make PGP keys, so you have to find out which key is actually supposed to be signing the ISO, otherwise an attacker can just make a bogus key and tell you that that’s the Slackware signing key (on the official website too, because it doesn’t use TLS!). The web of trust arguably helps some (though this can be faked as well unless you actually participate in key signing parties or something), and you can hope that the Slackware public key is mirrored in several places that you trust so you can compare them…

But at the end of the day, for most people all trust in the distribution comes from the domain name, and if you don’t have TLS certificates you’re setting up a weak foundation of trust. Maybe it will be fine because you’re not a big enough target for somebody to bother, but in this day and age it’s pretty much trivial to set up TLS certificates, and that gets you a far better foundation… why take the risk? Why is it smart to expose your users to more risk than necessary?

Chobbes,

I think even if you’re tech-savvy you can have issues with Arch tbh. I don’t think the distro is without merit — a minimal rolling release binary distribution is clearly something people want… But I’m not sure Arch does a great job of being that (for me, at least), and I’ve personally found pacman and the official packages to be kind of lacking (keyring update issue that they’ve maybe finally fixed, installing specific versions of packages / pinning specific versions / downgrading packages are either not supported or not well supported, immediately removing kernel modules on upgrade, even if the currently running kernel may need them, etc…). It just doesn’t feel very polished in my experience and for my use cases (clearly it works for some people!), and that’s what has driven me away from Arch personally. I think a lot of this stems from Arch’s philosophy of being aggressively minimal, which is maybe fair enough… but I don’t think it’s for everybody.

Chobbes,

I don’t think that’s necessarily true. It could use energy proportional to the energy that could be gained by moving an object somewhere.

Chobbes,

A lot of open source projects do have Windows versions, and the big projects that come to mind like Blender or Firefox definitely do… but there’s a lot of little pieces of software that don’t. One example that comes to mind for me is the Dino XMPP client… Linux only for now, unfortunately!

Chobbes,

I have no idea as I’ve never been a windows user, haha. Dino is one of the examples I know about though, because I know I can’t recommend it to windows users.

Chobbes,

Yeah, I don’t have good answers for you… I honestly don’t know what the best way to get people into it is. The resources really are not great.

FWIW I think when it does end up clicking, everything is a LOT less complicated than it seems at first. Nix is sort of all about building up these attribute sets, and once that really sinks in everything starts to make a lot more sense: you start to realize that there aren’t that many moving parts and there isn’t much magic going on… but getting there is tricky.

A lot of people recommend the Nix Pills, and honestly I think it’s the best way to understand Nix itself. If you do earnestly read through them I think there is a good chance you will come out enlightened… they just start so slow and so boringly that it’s tempting to skip ahead, and then you’re doomed. They also have a bit of a bad habit of introducing simple examples that don’t work at first, which can be confusing, and eventually some of the later stuff seems like “ugh, I thought we already solved this”, but it’s building up nicer abstractions.

The Nix Pills give a pretty good overview of best practices in that sense, I think… so maybe they’re the source of truth you’re looking for (or part of it, anyway). They’re a bit more “how the sausage is made” than is necessary to use Nix, but it’s probably the best way to understand what all of these weird mkDerivation functions you keep seeing are actually doing, and having an understanding of the internals of Nix makes it a lot easier to understand what’s going on.

Chobbes,

I am annoyed whenever I launch something from dmenu and I don’t get error output or logs anywhere.

I do wonder why you would have missing dependencies for all of these applications? Shouldn’t your package manager handle that…?

Chobbes,

I think Lutris can install its own versions of wine which is probably why it’s not included (also you don’t need to use wine at all with Lutris). I guess I’m not surprised you ran into these issues on Arch. I wouldn’t expect this on the more mainstream distros a new Linux user would be likely to use, since these distros are more likely to take a batteries included approach to packaging. I’d hope running into missing dependencies when launching a program is a fairly uncommon experience, at least for anything installed with a package manager on most systems.
