IT’S DONE! Went and got myself a new SSD this afternoon and put Pop!_OS on it. Looks like I got it all right, but I can only boot into my Windows 11 side through the BIOS. I tried all the GRUB commands, but after more reading it turns out GRUB isn’t used in Pop!_OS 22.04. Any other ways to get a selection screen of some sort for the OS I want to boot, rather than having to wait for the splash and frantically hit “F2” at the right time?
There are few things I’d suggest more strongly than keeping Windows and Linux installations WELL separated. I’ve had Windows update EFI entries for the whole system more than once, leaving the Linux OS unbootable.
This page has a different, simpler approach and more specific steps. Apparently you can just copy the Microsoft EFI folder to a specific directory in your Linux drive’s ESP partition. I’d be a little bit concerned about Windows not being able to update its EFI bootloader, but I also don’t know if Windows ever updates that. The page also has instructions on how to interact with the systemd-boot menu during boot.
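For what it’s worth, the copy itself is only a few commands. A rough sketch, assuming the Windows ESP is /dev/sda1 and that Pop!_OS mounts its own ESP at /boot/efi (check both with lsblk before copying anything):

sudo mkdir -p /mnt/winefi
sudo mount /dev/sda1 /mnt/winefi
sudo cp -r /mnt/winefi/EFI/Microsoft /boot/efi/EFI/
sudo umount /mnt/winefi

systemd-boot automatically detects a Windows Boot Manager on its own ESP, so a Windows entry should simply show up in the boot menu afterwards.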
You could also install GRUB yourself, but I can’t guarantee that’ll be easy. Mashing F2 might be the sanest solution, unless you plan on booting into Windows every day.
And if you read a few paragraphs more, there’s a “Download and install the app” section too, rather than adding their repos. Which is what the OP wanted anyway…
It’s not difficult. I’ve installed several apps that way already. I just don’t like blindly following instructions while having zero understanding of what I’m actually doing. Also, in this case the instructions are unhelpful because nowhere do they tell me to install curl first, and since I didn’t have it, the first command just comes back with an error.
The way to solve that problem is to read the commands and look up what they do. The installation method they describe is pretty standard and inoffensive, and it provides automatic updates. The commands used aren’t complicated, and they’re among the fundamentals of Debian/Ubuntu systems, so it’s a good idea to be familiar with them.
In the time it took you to write this shitpost and respond to all the comments, you could have spent a couple of minutes reading and educating yourself on the process. It’s legit pretty simple, especially if you’re willing to do a little research.
cURL is a very commonly used program for downloading individual files from the command line, and it’s worth installing to have around in the future.
sudo apt update
sudo apt install curl
The first command tells your package manager to update its package lists so you get the latest versions. You can skip it if you’ve already updated today. The second command tells your package manager to install cURL.
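Once it’s installed, typical usage looks like this (hypothetical URL):

curl -LO https://example.com/some-file.tar.gz

The -L follows redirects and -O saves the file under its remote name.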
This will happen every now and then, especially when building a package from source. You won’t have some common utility that the documentation writer assumed you had, and you will need to find what package provides it and install the package.
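On Debian/Ubuntu, apt-file is the usual way to answer “which package provides this missing command?”:

sudo apt install apt-file
sudo apt update
apt-file search bin/curl

The second command fetches the “Contents” indexes apt-file searches; the last one prints, for example, “curl: /usr/bin/curl”.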
Yes, people would assume you have curl. curl is often used to install programs, and curl is definitely one of the things that can do malicious things this way. So you are right to be hesitant to use commands that you don’t understand. Most Linux users have forgotten how hard it is to learn the first steps with no prior knowledge.
If you had googled “what is curl and how is it used”, you might have found some relevant info.
I gave up on Linux in the past because installing software was hard.
There are some tools that make installing software easier, like AppImage files: single files that (after you make them executable) are completely self-contained.
Flatpaks and snaps offer a “store”-like experience.
.deb files are also often simple to install, depending on the distro; see the examples below.
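To make the two “manual” formats concrete, a minimal sketch with hypothetical filenames:

chmod +x Example-App.AppImage
./Example-App.AppImage
sudo apt install ./example-app.deb

The AppImage just needs the executable bit and then runs directly; the .deb needs no chmod, and letting apt install it also resolves its dependencies.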
You can verify the signature of the manual download as well. Either way, you are trusting the files you download over HTTPS from mullvad.net. There’s no real difference, except that when you use the repo, you are trusting it indefinitely, whereas if you download the deb directly, you are only trusting it once.
Using the repo is less secure, because it opens you to future attacks against the repo itself.
That’s why you download the key from multiple distinct domains, from multiple distinct locations, using multiple distinct devices, and verify that their fingerprints match. If the key/fingerprint is only available on one domain, open a bug report with the maintainer.
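For the mechanics of that check: gpg can print a downloaded key’s fingerprint without importing it, and verify a file against a detached signature once the key is imported (hypothetical filenames):

gpg --show-keys --fingerprint downloaded-key.asc
gpg --verify example-app.deb.asc example-app.deb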
There’s nothing wrong with installing a .deb manually.
Personally, I’d hesitate to add any third-party repos unless there is a very good reason. In this case, the only real difference is that you won’t get the updates automatically with sudo apt update; sudo apt upgrade without the repo. Either way, the desktop app will notify you when updates are available. There’s very little advantage to using the repo.
Adding a repo is very rarely required. It has deeper consequences than simply installing an app, and requires a higher level of trust. If you don’t understand the security implications of adding a repo (and its associated key), then my advice is: just don’t.
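For context, this is roughly what adding a third-party repo involves on Debian/Ubuntu (hypothetical URLs and paths):

curl -fsSL https://repository.example.com/signing-key.asc | gpg --dearmor | sudo tee /usr/share/keyrings/example.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/example.gpg] https://repository.example.com stable main" | sudo tee /etc/apt/sources.list.d/example.list
sudo apt update

Note that the key you drop into /usr/share/keyrings stays trusted for every future update fetched from that repo, which is exactly the open-ended trust being discussed here.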
What are you on about? If you are using the 3rd-party repo, you are just as likely to get malware as if you download the deb directly from the website. It’s literally the same thing, except that by adding the repo, the malware could get installed automatically and without you knowing where it came from.
No, you’re confusing two vectors of attack. I’m saying that even if you can trust the vendor, you’re still at risk of downloading malicious software that was manipulated between the vendor and you (a man-in-the-middle attack), unless you verified a signature using a key stored offline (note that HTTPS is still vulnerable because its keys are stored online).
Not untrue, and I don’t think the possibility should be glossed over, but honestly, what do you think is more likely: this specific person getting specifically MitM’ed by a bad actor, or a bad actor taking control of a repo that hundreds of people blindly trust? I have a sneaking suspicion that OP’s threat model isn’t sophisticated enough to need to really, truly worry about that.
This sort of thing happens in dragnet fashion. And Mullvad users are definitely a group that gets targeted. Don’t assume OP isn’t a refugee or journalist and give them bad advice that could get them killed.
Been on Wayland since 2016, and to this day my only issues (apart from when I had an Nvidia card for a few months, that is…) have been video sharing in Discord and Steam in-home streaming, both of which still don’t work right.
Other than that, it’s been great. Multi-monitor works way better, far fewer bugs, my desktop feels a lot more fluid and smooth.
On laptops, Wayland+Gnome gestures are exceptional, putting even Apple’s gestures to shame. I cannot stress enough how good of a job Gnome+Wayland does with trackpad gestures. It makes other gesture systems, especially ones under X11, feel like they were cobbled together by a Fallout 3 modder.
Overall Wayland has been great for me. I just wish Discord would fix their shitty app.
D-Bus attempts to provide a standardized system for inter-process communication (IPC) without the complexity of managing individual connections between each and every process that needs to share certain types of data.
But the implementation has its fair share of issues, and the attempted reimplementation even more so, because it wants to leverage everything important to systemd instead.
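To get a feel for what that IPC looks like in practice, here’s a minimal sketch that calls the standard notification service over the session bus using gdbus (assumes a desktop notification daemon is running):

gdbus call --session \
  --dest org.freedesktop.Notifications \
  --object-path /org/freedesktop/Notifications \
  --method org.freedesktop.Notifications.Notify \
  "demo" 0 "" "Hello" "Sent over D-Bus" "[]" "{}" 5000

Any application can expose or consume interfaces like this without inventing its own socket protocol, which is the problem D-Bus set out to solve.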
AppImages do not have repositories, unlike Flatpaks and Snaps. It’s harder to install or update them since there isn’t a package manager for AppImages.
I don’t know about Snaps, but Flatpaks are sandboxed, which basically means more security, since apps won’t get access to your system without permission. It’s kind of like Android, where apps have to ask for permission for things like camera access, filesystem access, etc.
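You can inspect and tighten those permissions from the command line too (hypothetical app ID):

flatpak info --show-permissions org.example.App
flatpak override --user --nofilesystem=home org.example.App

The first shows what the app may access; the second revokes its home-directory access.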
While I do use Appimages, they remind me of Windows and having to go to websites to download stuff.
Personally, I prefer just installing software with my distro’s package manager and resort to Flatpaks or Appimages when it’s not available in the repos.
All Canonical contributions have been relicensed and are now under AGPLv3. Community contributions remain under Apache 2.0.
So they can happily port over code from the Incus fork but Incus cannot import the code without changing the license first. It’s meant to be a one-way street. Typical Canonical.
In short, Incus is under the Apache 2.0 license, which states:
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole
While the AGPLv3 that Canonical just adopted states:
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
. . .
You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy …
Meaning that if Incus uses any part of Canonical’s source, its code can’t be licensed under Apache but must be AGPLv3, which in turn pulls in every other derivative of Incus.
Projects which choose BSD/Apache type licences do so fully in the knowledge that their code may be incorporated into projects with different licences. That’s literally the point: it’s considered a feature of the licence. These projects are explicitly OK with their code going proprietary, for example. If they weren’t OK with it, they’d use a GPL-type copyleft licence instead, as that’s conversely the literal point of those licences.
Being mad about your Apache code being incorporated into a GPL project would make no sense, and certainly wouldn’t garner any sympathy from most people in the FOSS community.
It’s not a one-way street; this makes the thing more libre. Canonical didn’t make it proprietary to create a one-way street; they made it more libre by adopting the AGPL, which gives users more rights to the code.
Why is there still a CLA that allows them and only them to sell proprietary versions then? Don’t fall for Canonical’s PR bullshit.
Look, I’m usually first in line to shit on Canonical, but I can’t get mad at them adopting AGPL. This is objectively the best license for server software. Incus should also switch to AGPL for all Canonical code, and seek to have contributors license their code as AGPL as well.
I will however point out the hypocrisy and inconsistency of it, because the Snap server is still proprietary after all of this time. If this is their “standard for server-side code” then apply it to Snaps or quit lying to us.
They would have used a license like SSPL or the newer BSL for that. AGPL keeps it open.
No, the copyright owner can sell proprietary versions however they like. Outside contributions are required to sign Canonical’s CLA. Read github.com/canonical/lxd/blob/…/CONTRIBUTING.md#l… before making claims.
I don’t understand how AGPL allows Canonical to make and sell proprietary copies of this software without violating their license. That’s the only way your scenario could happen. If you’re aware of a situation where a company can do this, I’d love to learn.
I don’t understand how AGPL allows Canonical to make and sell proprietary copies of this software without violating their license. That’s the only way your scenario could happen.
“To release a nonfree program is always ethically tainted, but legally there is no obstacle to your doing this. If you are the copyright holder for the code, you can release it under various different non-exclusive licenses at various times. […] the GPL is a license from the developer for others to use, distribute and change the program. The developer itself is not bound by it, so no matter what the developer does, this is not a “violation” of the GPL.”
It requires that you make available the full source code to anyone you give binaries to (like the GPL), but it also requires you to make that source available to users of the software over a network. So someone could not make a proprietary fork of AGPL software to sell exclusively as a service. In order to provide that service, you have to also be willing to provide the source, including changes, which would allow users to then choose to run that service themselves instead of being forced to pay the provider.
The full details are complex but I’ll give you the basic gist. The original GPL licenses essentially say that if you give somebody the compiled binary, they are legally entitled to have the source code as well, along with the rights to modify and redistribute it so long as they too follow the same rules. It creates a system where code flows down freely like water.
However, this doesn’t apply if you don’t give them the binary. For example, taking an open source GPL-licensed project and running it on a server instead. The GPL doesn’t apply, so you can modify it and do whatever, and you aren’t required to share the source code if other people access it because that’s not specified in the GPL.
The AGPL was created to address this. It adds a stipulation that if you give people access to the software on a remote system, they are still entitled to the source code and all the same rights to modify and redistribute it. Code now flows freely again, and all is well.
The only “issue” is that the GPL/AGPL are only one-way compatible with the Apache/MIT/BSD/etc licenses. These licenses put minimal requirements on code sharing, so it’s completely fine to add their code to GPL projects. But themselves, they aren’t up to GPL requirements, so GPL code can’t be added to Apache projects.
For a user: on Wayland, programs are supposed to draw their own title bar. Java applications and old applications must use a backwards-compatibility layer that can cause flicker and bad font rendering. The terminology is different (compositor = window manager). Some niche new programs may only run on Wayland. Wayland hasn’t been adopted by the BSDs (AFAIK).
For programmers: Wayland has more modern, tidy code, but not all toolkits support it natively and few make it easy. If you code exclusively for Wayland, a lot of users won’t be able to use your program at the moment.
On Wayland, programs are supposed to draw their own title bar
That’s incorrect. GNOME does it like this, Plasma doesn’t. KDE came up with a standard so a program can communicate this to the DE, GNOME slept on it. That’s why e.g. mpv doesn’t run well on GNOME.
Java applications and old applications must use a backwards-compatibility layer that can cause flicker and bad font rendering.
There have been efforts to provide better support for Java applications on Wayland. For instance, the OpenJDK project has been making progress on implementing a native “pure” Wayland toolkit integration that doesn’t depend on Xorg/X11 or XWayland.
but not all toolkits support it natively and few make it easy.
There have been significant developments in providing native Wayland support in various toolkits. For example: Clutter, GLFW 3, SDL, GTK 3.20+, Qt 5+, EFL, Slint, Iced, and OpenJDK, just to name a few.
While it is true that not all toolkits have full native support, ongoing work has largely shifted towards much better Wayland support.
Wayland is a new implementation of how your system draws windows and components on your computer (I’m not sure if it’s responsible for the content), and it tries a different way of doing it compared with Xorg.
Xorg is the old implementation; it has been patched to support most of the new features, and Wayland is trying to get the same features Xorg has, plus some extras.
As a Linux user, I have noticed that screen sharing is more private on Wayland; also, I think mixed multi-monitor refresh rates were a problem on Xorg. One common issue with Wayland is GPU compatibility: as far as I know, Wayland runs better on AMD GPUs, I think because of the Mesa integration.
Yes, exactly. I think it’s pretty clear that the Linux community is moving towards Wayland. Most distros and desktop environments are in the process of slowly removing Xorg.
Desktop Linux is in its never-ending process of replacing the old display system with the new one. The process is long and not really transparent, because the two display systems were designed in completely different times, for different hardware, and with different security concerns in mind; therefore X11 clients (all the software that was ever made or ported to Linux) are very much incompatible with Wayland. For backwards compatibility there’s XWayland, which provides a full-blown Xorg server running on top of the Wayland compositor with all the things an X11 app requires. Until now, even though Firefox had its Wayland backend as a WIP feature (possible to activate with the environment variable MOZ_ENABLE_WAYLAND=1), it defaulted to XWayland on Wayland sessions. It now uses the native Wayland backend by default, providing better efficiency, DPI scaling, touchpad gestures, etc.
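If you want to check what your session is running, or try the native backend on an older Firefox release, a terminal is enough:

echo $XDG_SESSION_TYPE
MOZ_ENABLE_WAYLAND=1 firefox

The first prints “wayland” or “x11”; the second forces the native Wayland backend on builds where it isn’t the default yet.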
Xorg and Wayland are graphics systems. You may have seen a command line during boot with tons of text scrolling past, that’s all you get without either Xorg or some alternative running.
Xorg is an implementation of X11, which was designed a long time ago during the mainframe era of computing. It was the standard for all GUIs on Linux, BSD, and other Unix-likes for ages.
However, modern computers are nothing like the computers X11 was originally designed for, and X11 started showing shortcomings. So, years ago, people working on Linux decided it was time to design a new system, one built around modern computers and operating systems. That new system is Wayland, though Ubuntu sported its own Mir for a while too.
Wayland was designed not to be a network protocol (though you can still run applications on remote computers if you wish). It also has a bunch of security benefits, like “not every application can read your key strokes, copy your password, or record your screen without you noticing”.
This new system doesn’t have the benefit of multiple decades of hard work behind it. As you may imagine, this also broke applications for a while. There’s a tool called XWayland that can run X11 applications on Wayland, so most programs just work, but things like screen recording are severely limited under that system.
On the other hand, if you’re on a laptop, Linux can now finally reliably use touchpad gestures with more than two fingers through Wayland. You could write scripts and tools to fake them before, but now they’re actually part of the UI.
Wayland does have APIs for almost everything now, but not all applications have been updated to use those APIs. For example, Zoom didn’t wait for the screen recording API to be standardised, so it implemented screen sharing under Wayland as “take a thousand screenshots”. Some programs work by listening for keyboard hotkeys (basically processing every key and checking if it’s their special hotkey) but that’s no longer supported unless the program has focus.
There were also issues with drivers (well, almost exclusively Nvidia) but those have mostly been solved. It’s not for everyone yet, but there’s no reason not to try Wayland if you don’t have a full Linux setup already, anymore.
As with any big change to the Linux ecosystem (systemd, anyone?) there’s a lot of fighting between people who want the shiny, better new thing, and people who liked the way things were before. Both sides have good arguments. Big parties in the Linux world, like Gnome and KDE, are moving towards a Wayland-only desktop. At the moment you can run Gnome and KDE on either, but that’ll be harder in the future. Other GUIs, like XFCE, are heavily geared towards X11, and it may take years before you can even run those on Wayland. Some, like i3, have replacements (Sway) that do almost the same thing.
Interestingly, hardware vendors also seem to go with Wayland for their custom Linux projects. For example, the Tizen watches Samsung used to sell run Wayland. The Steam Deck runs Wayland in game mode, using a purpose built Wayland compositor as a GUI, but X11 for desktop mode.
In practice, you shouldn’t need to worry about things too much right now. Some programs will not work on Wayland, others will not work on X11. Some games run a few fps faster on Wayland, others run faster on X11, but the differences aren’t huge. If both Xorg and Wayland are supported in your distro, you can switch between the two using a button on the login screen.
As for Firefox: Firefox has had native Wayland support for a while. It was already capable of using all the Wayland APIs and protocols at least a year ago. However, by default, it ran through XWayland, to remain as compatible as possible, while Wayland was still being tested. Now, with the upcoming release, Firefox goes native.
For Xorg users, nothing will change. For Wayland users with touchpads or touch screens, gestures like pinch to zoom will be smoother and there will be more of them. The only difference is that you don’t need to stuff MOZ_ENABLE_WAYLAND=1 in a script somewhere to activate the new capabilities on Wayland.
Both are display servers which is software that allows programs to write to the screen. X is older and was created back in 1984 at MIT and Wayland is a much newer protocol that is designed to work better on newer (post 90’s) hardware.
The biggest difference is that Wayland basically gives your desktop direct access to the screen, whereas X has a server that runs and allows your desktop to connect to it. X was originally designed to run remotely, as back in the day there was one big computer that many people connected to.
If this is all very confusing you probably should just stick to your distros defaults. Most of the time you don’t need to care.
Wayland is a communication protocol that specifies the communication between a display server and its clients, designed to be a replacement for the X11 window system protocol and architecture.
I might be a little nitpicky here, but I feel it’s an important distinction to make as there is no single common Wayland server like Xorg is for X11.
A display server using the Wayland protocol is called a Wayland compositor, as it additionally performs the task of a compositing window manager.
Xorg on the other hand is basically one fat display server designed like a house of cards that everyone uses.
Xorg is an old implementation of X11 that is basically abandonware right now: no one is adding features to it, testing it, or fixing security vulnerabilities. It also lacks some common-sense security features: for example, every program can read every input (keyboard and cursor location) without root, so a keylogger is trivial to implement on Xorg.
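You can see that lack of input isolation for yourself on an X11 session with the stock xinput tool, no root required:

xinput test-xi2 --root

This prints raw key and pointer events from every window on most X11 setups, which is exactly what makes keyloggers so easy there.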
Wayland is newer, with more features (reasonable multi-monitor support, one-to-one gestures, etc.). But many application frameworks and some hardware have poor support for it because it is somewhat newer (development started in 2008, so not that new anymore). Notably, Electron and Nvidia are typically the worst offenders, like with everything on Linux, but both have come a long way.
I have Wayland on my laptop, since one-to-one gestures are a must for me and I present quite often using that laptop. My desktop is on Xorg, since I have an Nvidia GPU and use quite a lot of Electron apps.
AMD GPU + KDE Plasma with Wayland finally gets me close enough to the smoothness of Windows, especially the per-display settings for fractional scaling and high refresh rate were sorely lacking on Linux. It’s not perfect yet (and neither is Windows’ implementation) but it improved the Linux desktop experience a ton!
It’s like scrolling on your phone, where the content on your screen follows your finger’s movements exactly. On Wayland you can do the same with a trackpad, for example when scrolling, switching workspaces, or invoking the activities overview. It feels much nicer, more immediate, and more natural than on X.org, where gestures just trigger a shortcut after a set distance.
I would add that even my last Windows machine, a Surface Laptop 2 with the latest Windows 11, doesn’t have this feature. It was so awkward that I just stopped using gestures altogether.
I looked into this for myself, but I was told a lot of what has already been said here and decided to go with a Pixel + GrapheneOS.
I’m probably sticking with this type of setup until there’s a proper Linux phone that can be easily used as a daily driver.
I think the key is patience. One day I think we’ll have something that will be able to daily drive Linux that won’t feel like you’re using the alpha release of android.
I mean, GrapheneOS is probably the most secure OS on the planet, and it’s also privacy-friendly. Android is annoying sometimes, but it’s pretty okay and the security standards are veeery high.
True. That’s why I chose Graphene in the end. I liked the idea of multiple user profiles to sandbox certain activities to a disposable profile, and the fact that if I need Google Play I can sandbox that as well by running it as a regular app on its own profile…
So far it’s been pretty good. I think my only complaint is I can’t get my Chromecast devices to connect. I got my Google TV to connect but all Chromecast devices refuse to pair for some reason but that pales in comparison to the headaches I’d get trying to daily drive a Linux phone. But I’m big on Linux (I use it as a daily driver on my computer) so I’m patiently waiting for a daily driver worthy Linux phone. 🤞
If you want to save money on a pixel I’d recommend taking a look at swappa(dot)com Even if you don’t want a used one they do have people selling pixels new sealed in the box for usually pretty good deals. I think the cheapest new in box option is a pixel 6a. Or 6 pro.
You can also find new pixel 7, 7 pro, the 8 and the 8 pro.
I’ve been buying and selling devices there for years and never had a problem as a buyer or a seller.
Graphene’s Google Play sandbox is also way easier to use than messing around with microG and whatnot on Lineage. I’d love to see the sandbox stuff ported to Lineage or similar, but I imagine it’s pretty baked into Graphene. I care a little about privacy, but I also just want my phone to be rooted so I can do whatever I want with it. Oh well, maybe someday.
You can root GrapheneOS, no problem. It’s just not as secure anymore, the bootloader can’t be locked, and banking apps will not work anymore (Android Attestation, SafetyNet, etc.).
I’ve bought a Nexus 4 to play around with Ubuntu Touch many years ago, but I really don’t think I could daily drive even a more powerful Linux phone. Many apps from messengers to banking apps are Android/iOS only, so it’d be really inconvenient to use — not to mention problems with calling and a not-so-great camera.
Almost all the things I want to do on a phone are possible with a Pixel + GrapheneOS, which is also an open-source, secure, and private phone OS.
Usually ssh’ing into a server through Termux is all I need, although it’d be cool to be able to plug my phone into a monitor and have a desktop with me all the time. But it being merely “cool” is the problem, as it’s not useful day to day for me. If I need a PC, I’ll take my laptop. I’ll probably try it at some point, but that’s many years into the future.
Performance and bugs might still be a problem with these relatively young projects. But if all you need is a browser I do believe it might be worth a shot.
In the EU 2FA for banking is required by law which usually comes down to either an Android/iOS app or a chipTAN device. That’s why browser isn’t an option for me. Sadly I don’t think waydroid passes the basicIntegrity check of AOSP [1], so emulation is out of the picture too.
SMS is not as secure as a 2FA app or the bank’s own app. SMS verification also costs money, so it will raise your monthly fees quite a bit if you wish to receive a text on every transaction.
As I said, SMS is secure enough without being the nightmare of a proprietary spyware app. As for fees, you have an American perspective, in most of the world SMS has been free to send for decades, and was always free to receive. The ideal solution is indeed a 2FA app, but those never took off.
I have a European perspective, and here you need to pay per text message. Receiving is free, but the bank is charged, and they pass their charge on to me, so they bill me for the messages, unfortunately. In the US, SMS is free in most plans as far as I know.
Man, the call problems are a dealbreaker for any phone at all, imo. Maybe not for a toy, but it’s bonkers that they’d release a phone OS that isn’t 100% call stable.
There has to be a device on which to develop support for calling. Since there are multiple open-source phone projects, it’s also not simple to just write one implementation for all of them. Additionally, carriers don’t all work the same (different bands, …), so it’s really not easy to solve with the few resources available. As far as I know, much of the development on these phone OSes is done by volunteers, and Pine64 isn’t a big established company either.