It’s less complicated than it looks. The instructions are just poorly written, full of options (Fedora vs. Ubuntu, repo vs. no repo, stable vs. beta), and they walk you through the terminal alone because the interface you have might differ from what they expect - and because copy-pasting commands is faster.
Can’t I just download a file and install it? I’m on Ubuntu.
Yes, you can! In fact, the instructions include this option; it’s under “Installing the app without the Mullvad repository”. It’s a bad idea though; then you don’t get automatic updates.
A better way to do this is to tell your system “I want software from this repository”, so each time they release a new version of the program, yours gets updated.
but I have no idea what I’m doing here.
I’ll copy-paste their commands to do so, and explain what each does.
The first command boils down to “download this keyring from the internet”. The keyring is the file your system needs to verify that you’re actually getting your software from Mullvad instead of PoopySoxHaxxor69. If you wanted, you could download it manually and then move it to the /usr/share/keyrings directory, but… it’s more work, come on.
The second command tells your system that you want software from repository.mullvad.net. I don’t use Ubuntu but there’s probably some GUI to do it for you.
The third command boils down to “hey, Ubuntu, update the list of packages for me”.
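For reference, the three commands being described look roughly like this (modelled on Mullvad’s published Debian/Ubuntu instructions - double-check the exact URLs against their site before running anything):

```bash
# 1. download Mullvad's signing key into the system keyring directory
sudo curl -fsSLo /usr/share/keyrings/mullvad-keyring.asc https://repository.mullvad.net/deb/mullvad-keyring.asc

# 2. register the repository, telling apt which key vouches for it
echo "deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=$(dpkg --print-architecture)] https://repository.mullvad.net/deb/stable $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/mullvad.list

# 3. refresh the package lists so apt sees the new repo
sudo apt update
```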
I would have guessed that Ubuntu would install curl by default, since it’s a very common way to download stuff from the internet (when in the terminal), but apparently not. (The other option is wget, which is most likely already installed, but it fetches things in a different way.)
You should be able to install curl with sudo apt install curl
Hmm… the ProtonVPN team solved this in a better way. They put the repo configuration stuff into a DEB file, so on Debian-based and Ubuntu-based distros (I know Ubuntu is Debian-based) it’s just a matter of double-clicking it, clicking install, and then installing the ProtonVPN client through either a GUI or CLI package manager, whichever you wish to use. More newbie-friendly.
Unfortunately, I also just learned they dropped support for Arch Linux :(
We’d love to support the new app for Arch Linux, but honestly we’re understaffed and don’t have the bandwidth to support the same distros that we did before with the previous client (4 packages before vs 10 packages now). If anyone from the community is willing to make AUR packages for themselves and publish/maintain them, we’re totally fine with that, as long as people keep in mind that it would be an unofficial version, because we currently don’t support Arch Linux with the new v4 app.
Hmm… the ProtonVPN team solved this in a better way. They put the repo configuration stuff into a DEB file, so it’s just a matter of double-clicking it and clicking install
I was wondering how they’d solve signature checking and key installation - and looking at their page, they seem to recommend skipping package signature checks, which, to be honest, isn’t a super good practice - especially if you’re installing privacy software.
Please don’t try to check the GPG signature of this release package (dpkg-sig --verify). Our internal release process is split into several parts; the release package is signed with one GPG key, and the repo is signed with another GPG key. So the keys don’t match.
I get it’s more user-friendly - and they provide checksums, so it’s not a huge deal, especially since these are not official Debian packages. But package signing has been around since 2000, so it’s a pretty well-established procedure at this point.
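If you do go the DEB route, verifying the published checksum is at least quick (the filename and hash here are placeholders - use whatever their download page actually lists):

```bash
# compute the hash of what you downloaded...
sha256sum protonvpn-stable-release_1.0.3_all.deb
# ...and compare by eye, or let sha256sum do the comparison:
echo 'PUBLISHED_SHA256  protonvpn-stable-release_1.0.3_all.deb' | sha256sum -c -
```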
The curl that apt ships is ubiquitous enough that I trust running sudo curl xxx yyy, more than enough if it means avoiding typing curl xxx /tmp/yyy && sudo mv /tmp/yyy yyy
Frankly, in this case even a simple bash script would do the trick. Have it check your distro, version, and architecture, and whether you have curl and the like; then ask whether you want the stable or beta version of the software. Based on that info, it adds Mullvad to your repositories and automatically installs it. Something like the sketch below.
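A minimal sketch of such a script - the repo URL and key path mirror Mullvad’s published instructions, but the stable/beta channel handling is my assumption, so verify it against their docs:

```bash
#!/usr/bin/env bash
# Sketch: add the Mullvad apt repo and install the client.
set -euo pipefail

command -v curl >/dev/null || { echo "please install curl first" >&2; exit 1; }
. /etc/os-release   # provides $ID (e.g. ubuntu) and $VERSION_CODENAME

case $ID in
  ubuntu|debian) ;;
  *) echo "unsupported distro: $ID" >&2; exit 1 ;;
esac

read -rp "Install the stable or beta version? [stable] " channel
channel=${channel:-stable}

sudo curl -fsSLo /usr/share/keyrings/mullvad-keyring.asc https://repository.mullvad.net/deb/mullvad-keyring.asc
repo="deb [signed-by=/usr/share/keyrings/mullvad-keyring.asc arch=$(dpkg --print-architecture)] https://repository.mullvad.net/deb/$channel $VERSION_CODENAME main"
echo "$repo" | sudo tee /etc/apt/sources.list.d/mullvad.list
sudo apt update && sudo apt install -y mullvad-vpn
```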
I like them, even for software installation. Partially because they’re the lazy option - it takes almost no effort to write a bash script that solves a problem like this.
That said a flatpak (like you proposed) would look far more polished, indeed.
You seriously need to stop what you’re doing. Log in with ssh only. If you need multiple terminals, use multiple ssh sessions, or screen/tmux. If you need to search for something, do it on your desktop system.
The server should not have Firefox installed, or KDE, or anything related to desktop apps. There’s no point and nothing good can come of it.
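For the multiple-terminals case, tmux over a single ssh session covers it (hostname is a placeholder):

```bash
ssh admin@your-server   # one ssh session is all you need
tmux                    # start a session on the server
# Ctrl-b c  -> open a new window
# Ctrl-b n  -> cycle through windows
# Ctrl-b d  -> detach; the session keeps running
tmux attach             # reattach after reconnecting
```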
Unity employee here; idk anything specific about the departments that handle this - I wouldn’t even know what their names are. With that caveat, I will say that with all the layoffs last year going into this year, the change of CEOs, and the tug-of-war between big-company bureaucracy and the dying breath of small-company culture, a lot of departments are behaving erratically. I wouldn’t be surprised if nobody internally has a clear answer why this was banned but others aren’t. Some workers may legitimately be trying to help but have their hands tied for corporate or maybe even legal reasons; it could also be people trying to keep their heads down and close tickets quickly to keep metrics up, in the hopes they’re less likely to be fired. I think you all know this already, but please don’t be too hard on the workers - we’re doing what we can, but it’s a corporate mess right now.
Yeah, it’s a bit of a shit show for sure. Unfortunately I do not have anything else lined up right now - I know that’s an unsafe decision. My life has been a mess lately; I can only handle so much at once, and finding different work is exhausting.
If you’re a software engineer in the Unity Austin area, lmk. Assuming you’d be open to writing B2B software, the company I work for is huge and still hiring devs.
A friend of mine worked in a position I would have assumed was considered vital to one of Unity’s products, in fact to my knowledge they were the only one keeping that part running. Apparently the higher-ups were able to lay them off without much hesitation this time around. The company seems to be leaking hard.
You don’t understand how development works, at all. The developers themselves don’t make these kinds of decisions at these companies. They just do what they’re told by their higher-ups. The higher-ups happen to be corporate businesspeople who don’t really know much about tech and only care about profits.
The blame for Unity’s failures belongs to the executives and businesspeople, not the developers.
Look, it’s a low level employee of a faceless corporation!
GET 'IM!
Jokes aside, thanks for the transparency, and salute to you and your coworkers for trying to weather the storm caused by “shifting paradigms”… that’s what they call it, right? I know the execs can shift my paradigm, that’s for sure.
Tech in general, but especially the game industry, desperately needs to unionize. If the last couple of years don’t convince tech bros they’re just as expendable as all the other working class out there, idk what will. Got to do something to insulate us from “restructures”, “rightsizing”, and “company resets”.
also for non-KDE, non-GNOME systems, there’s appimaged - it requires a little more setup, but it handles setting the executable bit, automates AppImage integration (.desktop files and menus), watches specific folders for new AppImages, and provides a way to check for updates
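Setup is roughly this (a sketch from memory of the project’s README - check it for the current release file name and download URL):

```bash
mkdir -p ~/Applications        # the default folder appimaged watches
cd ~/Applications
# download the appimaged AppImage from the project's releases page, then:
chmod +x appimaged-*.AppImage
./appimaged-*.AppImage         # integrates itself and starts watching the folder
```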
I’m saving this. I don’t use any AppImages (except a cracked Minecraft Bedrock launcher, but we don’t talk about that one), but I’m still going to save this.
When I clicked on a new AppImage, the OS told me that the program /name of app/ would be launched; I clicked "Continue" and it ran! No meddling with "chmod" or anything like that.
Same, I love AppImages for that. I just wish they also had a way to contain their configuration instead of putting it on the system. That would make them even more portable.
I installed Linux a few weeks ago, and on Tuesday I wanted to add some programs I had installed (mGBA and melonDS) to my Steam launcher. I went through the hassle of making a .desktop file for both of them (I was dumb and used an Ubuntu-based distro, so they installed as snaps, which suck hard on an HDD), and then they wouldn’t launch. So I searched again (I was using ChatGPT for all of this - I asked it a lot how to do stuff, and it’s like this was its purpose, because it always worked first try), ran the chmod +x command, and then I was done.
There is no install needed, you can just edit permissions and make the file executable and then when you open it or click it the app runs.
What won’t be created by default is an application menu entry to run it from whatever desktop environment you use. You can create those if you wish: either make a launcher entry in the menu manually, or use a tool called AppImageLauncher to create them for you. A manual entry looks like the sketch below.
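A minimal manual version, assuming the AppImage sits at ~/Applications/mGBA.AppImage (names and paths are examples, not from the thread):

```bash
# make the AppImage executable
chmod +x ~/Applications/mGBA.AppImage

# write a menu entry; $HOME is expanded when the file is written
cat > ~/.local/share/applications/mgba.desktop <<EOF
[Desktop Entry]
Type=Application
Name=mGBA
Exec=$HOME/Applications/mGBA.AppImage
Categories=Game;
EOF
```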
Honestly, if all you’ve ever experienced in regards to terminals is Windows CMD, then you really haven’t seen much. I mean that positively. If anything, CMD will give you a far worse impression of what using a Linux/Unix terminal can be like (speaking as someone who has spent what feels like years in terminals, the least of it in Windows CMD).
I suggest simply playing around with a Linux terminal (e.g. install VirtualBox, then use it to install e.g. Ubuntu, then follow some simple random “Linux terminal beginner tutorial” you can find online).
I’ve found that since people are used to app stores, I have a much easier time convincing them to try out Linux. My mom even said she always wished her Windows PC had a proper app store.
I don’t think getting Instagram or Photoshop off the Microsoft Store is giving anyone a virus. And I’ve never gotten a virus from it in the few times I’ve used it.
Of course, and much of it is in the app store now (which I rarely use myself), but for someone like OP’s mom, who just wants an easy app store, well, there is one.
I think it’s still important to explain the key difference between an “app store” and a package repository: the latter isn’t a “store”, because everything in it is free.
I get your point that the exploit existed before it was identified, but an unmitigated exploit that people are aware of is worse than an unmitigated exploit people aren't aware of. Security through obscurity isn't security, of course, but exploiting a vulnerability is easier than finding, then exploiting a vulnerability. There is a reason that notifying the company before publicizing an exploit is the standard for security researchers.
You're right that it's never an OK title, because fuck clickbait, but until it's patched and said patch propagates into the real world, more people being aware of the hole does increase the risk (though it doesn't sound like it's actually a huge show stopper, either).
Weakness and risk are distinct things, though - and while security-through-obscurity is dubious, “strength-through-obscurity” is outright false.
Conflating the two implies that software weaknesses are caused by attackers instead of just exploited by them, and suggests they can be addressed by restricting the external environment rather than by better software audits.
In my opinion, Dan Goodin always reports as an alarmist and rarely gives mitigations much focus. In one case I recall, he didn’t even mention until the second-to-last paragraph that the vulnerable code never made it to the release branch (the vulnerability was found during testing) - and then wrote the last paragraph as if that one didn’t exist. I can’t say for sure that it was strategic in that one case, but it sure seemed that way.
For example, he failed to note that the OpenSSH 9.6 patch was released Monday to fix this attack. That would have fit perfectly in the section called "Risk assessment"; or perhaps "So what now?" could have mentioned that people should, I don’t know, apply the patch that fixes it.
Another example: he tries to scare the reader by stating that "researchers found that 77 percent of SSH servers exposed to the Internet support at least one of the vulnerable encryption modes, while 57 percent of them list a vulnerable encryption mode as the preferred choice." That’s fine for showing how prevalent the algorithms are, but he doesn’t mention that the attack would have to be complicated and positioned at both endpoints to be effective on the Internet, or that it is defeated by a secure tunnel (IPSec or IKE, for example) even if the vulnerable key exchange methods are still supported.
He also seems to love bashing anything FOSS as hard as possible, in what, to me, feels like a quest to prove proprietary software is more secure than FOSS. When I see his name as an author, I immediately take the piece with a grain of salt and look for another source for the same information.
I recently jumped to Mint, and I have to say I’m very happy with it. I struggled with like two things, but the OS is popular enough that there are walkthroughs for nearly everything. And I was able to find Linux-based or browser-based software for everything I did on my Windows computer.
How does Mint compare to Fedora? I decided to finally switch almost a month ago and went with Fedora because it seemed like the best option for general development, and I really like their Toolbox. However, I’ve been running into some issues, mostly around gaming and NVIDIA drivers, and in general getting some applications to work on Fedora was more painful than it apparently is on most other systems.
So, should I switch, or will the Wine/Steam/Lutris experience be mostly the same on Mint as it is on Fedora?
Most problems I’ve seen between NVIDIA and Linux were caused by Wayland. If you’re using Fedora with GNOME (the default), you can try hitting the gear icon when logging in and choosing “GNOME on Xorg”. That might help with the drivers.
For any other issues, Mint might be easier just because it’s based on Debian, which is immensely popular. It’s more of a well beaten path, and there’s probably more help online for any issues you run into.
LMDE (Linux Mint Debian Edition) is Debian-based, while regular Mint is Ubuntu-LTS-based, and they use their respective repos (not a big difference for the average user). The non-Debian version is currently the main and recommended one, but due to some controversial changes in Ubuntu, people want to move away from it, and the devs have considered making the Debian edition the main one.
Mint is great. It also works well out of the box in virtual machines. I like the MATE versions for my older machines.
There is a major shift happening right now, and Mint is slower than many to adopt changes. I’d argue that’s good for Mint users, but it may be bad for you personally if you plan to learn about modern Linux. Idgaf personally about X11 vs Wayland, because I just need to be able to use my programs.
I personally started by playing around with Ubuntu, but it just didn’t feel intuitive coming from windows.
Went over to Mint and was very happy, especially with drivers and gaming. I even fully removed my Windows installation during this period. Having gained a better understanding of Linux, I have now moved on again.
The only real drawback of Mint is that it no longer natively supports KDE Plasma (as it did before). And yes, you can just install it yourself, but I wouldn’t recommend that a beginner who barely knows how to install Linux attempt such an endeavour.
One word of advice to OP: don’t wait till you can’t use Windows anymore. Start by dual booting and getting the hang of Linux, with Windows at the ready for any tasks you can’t yet do (or don’t feel comfortable doing) on Linux. As you get a better hold on Linux, you should naturally begin to use Windows less.
The worst thing someone can do is jump OS without any backup or safety net. Learning to use Windows took a long time too: getting the hang of new concepts and getting used to an alien environment. Now, already having the hang of “computers” (i.e. Windows), we have digital needs and expectations (e-mail, gaming, etc.) that need fulfilling, but many seem to forget that a different OS means different ways of doing our daily tasks and different challenges to handle.
And yes, “different”, because Windows definitely comes with its own unique challenges too; you just don’t see them as much once you’ve gotten used to them.
My ISP supports IPv6 but disables it in the OpenWrt config.
I found a way to get root access to the router and re-enabled it; it seems to work just fine. The configuration is kinda fucked up but kinda works (DHCPv6, no SLAAC).
Yeah, the sole reason I don’t have Linux on my old laptop is that Lenovo has completely proprietary video drivers for it. I’m talking “the manufacturer’s installers don’t think there’s a video card there” proprietary.
Edit: by “software” I’m talking about in-game features.
Like FSR and such? That’s available on Linux (FSR 1.x is integrated into SteamOS for compositor-level upscaling). AFAIK AMD does not officially support FSR on Linux, but it’s written in a way that it should work with minor integration work. It’s written with cross-platform support in mind, given that it targets PlayStation etc. as well.
I’m also nervous about using an OS I’m not familiar with for business purposes right away
Absolutely STOP. Do not go with Linux; go with what you are comfortable with. If this is business, you do not have time to be uncomfortable, and the learning curve to ramp up on ANY new OS and be productive is simply non-negotiable.
If you've never used Linux, play with Linux first on personal time. For business time, use what you know works first and foremost.
All OSes are tools. You don’t learn a new tool while a customer is waiting on the bed frame you’re supposed to be building, or whatever the job is.
TL;DR
If you are not comfortable with Linux, do NOT use it for business.
Spend your time making sure you are protected against ransomware with good offline backups and are able to recover your practice. Keep your payments separate from your comms machine.
Your job is going to involve lots of shady things - links to click on, invoices, etc.
Plan for it so a malicious client, infected evidence, or a mistaken click doesn’t take down your practice.
I’m 25 years into this as a technologist and still make the “oh, this will be quick” mistake. Make sure your time sinks are 100% aligned with your business. Think in terms of automation and value, and you’ll have the right mindset.
If you find the tech side fascinating, there’s always demand for good tech lawyers, and lawyer comms are entryways into technology management.
My brother in Christ, this isn’t about the money. This is about meeting business deadlines. OP can’t be spending time figuring things out on Linux while his clients are waiting.
His first clients are also going to be where his solo practice either sinks or swims.
This is good advice, I appreciate it. But I should clarify, I definitely won’t be launching my practice before I’m comfortable with the OS. I’m probably going to take some other user’s suggestions and do some test runs on my home machine to figure things out. I’m not launching tomorrow, there’s no real rush. My current contract runs until May 2024. So I’ve got 6 months ahead of me to figure things out.
My advice: try using your existing documents with LibreOffice. You can install it on Windows as well.
I’ve used Linux for over twenty years now, and I installed Windows in a VM last week to write my resume. LibreOffice is fine; you run into problems when opening and editing existing MS Office documents. At least, that is my experience.
But give LibreOffice on Windows a shot; see if you like it.
I’m going to nitpick your comment because we are Linux users and it’s in our blood.
Heard about LaTeX? You don’t really need Word to write a resume. In fact, I’d advise against it. It’s easier to go to Overleaf, download an existing template, and generate a usable PDF that won’t break.
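For anyone who hasn’t seen it, a LaTeX resume source is just plain text that compiles to a PDF (a bare-bones sketch with made-up details - the Overleaf templates are far nicer):

```latex
\documentclass{article}
\begin{document}
\section*{Jane Doe}
jane.doe@example.com
\subsection*{Experience}
\begin{itemize}
  \item 2020--2023: Paralegal, Example LLP.
\end{itemize}
\end{document}
```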
In addition to the other comment re. LibreOffice, I’d also recommend trying out OnlyOffice - generally, it has better compatibility with MS Office formats compared to LO, and the UI is very similar to MSO which may make it easier to use.
PDFs might be your sticking point. I’ve not found any software that handles, in an easy way, all the different things you can do with Acrobat. But I have to heavily modify PDFs from time to time, and you may not have nearly the needs I do.
I’d suggest checking out LibreOffice, and see if you can find a PDF application that satisfies you. The app store on Pop!_OS is really good, as is the interface, and if you don’t like tiling window managers, you can turn that off.
Another suggestion: recognize that you’re a novice. If you read something that sounds like a perfect setup but it’s a little complicated, put it off. You don’t want to get in over your head, because Linux distros will not keep you from breaking things. The defaults of any large distribution are a pretty safe bet.
NixOS is at 23.11 :) Also, rolling releases are kinda fun: the latest commit so far is 46ae0210ce163b3cba6c7da08840c1d63de9c701, which roughly translates to nixos-unstable 403509863565239228514588166489915404446713104129 :D
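(The joke being that a commit hash is just a 160-bit number; you can compute the decimal “version” yourself - bc wants the hex digits in uppercase:)

```bash
echo 'ibase=16; 46AE0210CE163B3CBA6C7DA08840C1D63DE9C701' | bc
```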
There are several ways to exploit LogoFAIL. Remote attacks work by first exploiting an unpatched vulnerability in a browser, media player, or other app and using the administrative control gained to replace the legitimate logo image processed early in the boot process with an identical-looking one that exploits a parser flaw. The other way is to gain brief access to a vulnerable device while it’s unlocked and replace the legitimate image file with a malicious one.
In short, the adversary requires elevated access to replace a file on the EFI partition. In this case, you should consider the machine compromised with or without this flaw.
You weren’t hoping that Secure Boot saves your ass, were you?
Ah, so the next Air Bud movie will be what, Hack Bud?
“There’s nothing in the specifications that says that a dog can’t have admin access.”
“Nothing but 'net!”
Doesn’t this mean that secure boot would save your ass? If you verify that the boot files are signed (secure boot) then you can’t boot these modified files or am I missing something?
If it can execute in RAM (as far as I understand, they’ve been talking about fileless attacks, so… possible?), it can just inject whatever.
Edit: also, Secure Boot on most systems, well, sucks, unless you remove the M$ keys and flash your own, at least. The thing is, they signed shim and whatever the alternative chainable bootloader was (mako or smth?), effectively rendering the whole thing useless; also, there was a grub binary distributed as part of some Kaspersky live CDs with an unlocked config, so, yet again, load whatever tf you want.
Last time I enabled secure boot it was with a unified kernel image, there was nothing on the EFI partition that was unsigned.
Idk about the default shim setup, but using dracut with a UKI, rolled keys, and LUKS, it’d be secure.
After this you’re only protected from offline attacks, though. Unless you sign the UKI on a different device, any program with root could still sign the modified images itself - but no one could pull off an Evil Maid attack or similar.
The point about the M$ keys is that you should delete them, as they’re used to sign stuff that will load literally anything, given your maid is insistent enough.
[note: the Arch wiki mentions that removing the M$ keys bricks some devices (which ones exactly isn’t mentioned)]
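For the “roll your own keys” route, sbctl is the usual convenience wrapper these days (a sketch, assuming sbctl is installed and the firmware is in setup mode; the EFI path is an example):

```bash
sudo sbctl create-keys                 # generate your own Secure Boot keys
sudo sbctl enroll-keys --microsoft     # enroll them; --microsoft keeps the MS keys
                                       # (dropping them can brick some firmware, as noted above)
sudo sbctl sign -s /boot/EFI/Linux/arch.efi   # sign and track your UKI
sudo sbctl verify                      # check that everything relevant is signed
```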
Well, I’m not an expert. We’ve now learned that logos are not signed, and I’m not sure the boot menu config file is signed either. So on a typical Linux setup you could inject a command there.
In many of these cases, however, it’s still possible to run a software tool freely available from the IBV or device vendor website that reflashes the firmware from the OS. To pass security checks, the tool installs the same cryptographically signed UEFI firmware already in use, with only the logo image, which doesn’t require a valid digital signature, changed.
You don’t have to clean your ~/.cache every now and then. You have to figure out which program is eating so much space there, make sure it’s not misconfigured, and file a bug report.
```
% du -sh ~/.cache
1,6G    /home/bizdelnick/.cache
```
I don’t remember if I ever cleaned it up - probably a couple of years ago, when I moved my old HDD to a new PC with a freshly installed OS. It does not grow by accident, except in some very rare cases, the same as some other dirs under ~ and /var. If it is a critical system, set up monitoring of free filesystem space. If not, you will notice when it becomes full (I can’t remember when that last happened to me; maybe ~15 years ago, when some log file started to grow because of endless error messages).
Because some users experienced accidental growth, like OP’s 160 GB. So the general advice for Linux users can be stated as: check your ~/.cache every now and then.
Critical systems/servers are better off monitored, as you suggest.
Some users experienced accidental growth of /var/log. Some users experienced accidental growth of /var/cache. Some users experienced accidental growth of /var/lib. Some users experienced accidental growth of ~/.xsession-errors. Shall I continue?
Does every user need to begin their day checking all those places? No. It is a waste of time; such situations are extremely rare. If you are paranoid, check df to see if you have enough free space, and only if it has unpredictably shrunk should you begin to investigate which directory has grown.
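Something like this, checked occasionally, covers it (paths are examples):

```bash
df -h /home                               # is free space lower than expected?
du -xh --max-depth=1 ~ | sort -h | tail   # if so, find the biggest directories under ~
```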
I don’t get your point. Why should somebody do this every day?
Judging by the experiences of other users in this thread, an overgrown ~/.cache doesn’t seem extremely rare. So checking it from time to time is good advice. And if we all do this for a while and create bug tickets for software that doesn’t clean up after itself, the problem will hopefully go away in future releases.
I think I accomplished a similar effect on my first linux distro a long time ago with a program called "compiz" (iirc). "I'm so frickin 1337," I whispered under my breath. Nobody cared except me, though, lol.
Yep, same! Some of my friends have told me it’s a bit “silly” for me to have it enabled - but there are plenty of bad things that occur on a daily basis in my life; I don’t think there’s a single problem with having some wobbly windows as a small vice to enjoy, haha.
But seriously, yesterday I cloned my main partition to a new laptop, into an LVM volume on LUKS. Because I had no way of putting the new NVMe and the old SATA SSD into one machine, I just used netcat over an ad hoc network.
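For anyone curious, the basic shape of such a clone is below (a sketch with example device names and address - check yours with lsblk, make sure the source partition isn’t mounted read-write, and note that netcat flags differ between the BSD and traditional variants):

```bash
# on the receiving (new) laptop - listen and write to the target device:
nc -l 9999 | dd of=/dev/nvme0n1p2 bs=64K status=progress
# (traditional netcat wants: nc -l -p 9999)

# on the sending (old) laptop - read the source device and stream it over:
dd if=/dev/sda2 bs=64K status=progress | nc 192.168.0.2 9999
```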
Last time I messed with a Windows partition, I tried to expand my C:\ drive to merge in free space, but I couldn’t, because Windows had put the recovery partition in the middle, with no permission to remove it. Had to jump through a million hoops to get Windows to remove it.
I mean sure, Windows is easier in many ways. Not partition management. Anything but that. What a pain.
Ran into that a few years ago. I think I ended up fixing it by booting Linux off a flash drive and moving the partitions around there. It wasn’t too difficult once I gave up trying to do it in Windows. Such a stupid problem.
I think I see a theme here. Doing fun normie stuff on iOS/iPadOS is easy. Doing technical stuff is usually completely impossible.
Doing technical stuff on Linux is easy as long as you know what you’re doing. Doing popular normie things on Linux is a bit hit-or-miss. Some things work perfectly, but other things are a royal pita.
Windows seems to be in between the two extremes in more than one regard. Microsoft seems to be working to find some sort of compromise in these things.
The rate was around 100 MB/s, so I think the bottleneck was probably the read/write speeds of the SSDs, considering I get ~900 Mbit/s down from speedtest.net, and this setup removed every hop except the old and new laptops’ gigabit LAN ports and the gigabit patch cable between them. But with larger files/partitions over the internet, this would probably help.
I’d guess many distros would’ve had errors with preinstalled and preconfigured helpers. Debugging them would be a pain.
Gentoo, LFS, Arch, etc. are installed manually, so one typically knows their system very well, including the packages and configs in which they might have to hand-configure interfaces and the like.
I just noticed I had not fully expanded the filesystem on the target machine after shrinking it on the source machine to make sure it would fit. No problem: growing ext4 filesystems with resize2fs (an indirect dependency of linux and base) works on mounted filesystems too; the kernel just needs to be newer than 2.6 (so, since 2003).
Took less than a second and worked flawlessly, live. Conky’s fs_free just jumped from 20 to 76. Still time to clear my caches.
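The command itself is a one-liner (the device name is an example - find yours with findmnt /):

```bash
sudo resize2fs /dev/mapper/vg0-root   # grows a mounted ext4 fs to fill its device
```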