Seems like you’re annoyed that I pointed out that what you were saying was irrelevant? And so you reply with more irrelevant crap (on a very nerdy, not-fun-at-parties internet forum for Linux discussion)? Let me know if I got that wrong.
If you’re doing it for the memes then you don’t really need to worry about malware. Your machine is probably too old for anything that’s still floating out there to even work on it.
Many people browse four or five pages a day, check a few emails, and print a few PDFs. A Core 2 Duo or X4 box for 40 £/$/€ runs flawlessly with Linux and Xfce/LXDE, for example.
Even video-conferencing works fine.
The driver code is still there; you can add it back if you want, same with IDE drivers and such. Support was removed, but the code still exists: just add it back and compile your own kernel. There are a lot of tutorials on the internet about it.
The drivers were removed in 6.3. Debian 12 is still running on 6.1. Debian 12 just came out and still has many years of support ahead of it (at least 5). You can get plenty of use out of these cards before they stop working.
Someone needs to maintain them for them to keep working. Nobody else is willing to do that anymore, but you can still volunteer as a maintainer. If you don’t, it’s as much your fault as anyone else’s.
I was using Clementine for a long time and switched to Strawberry about a year ago. Since they’re related, migrating libraries from one to the other was also possible.
Is it dumb that I only back up my docs and anything else I think is important? I can rebuild fairly quickly if something were to happen. I ask since I know that people back up a variety of their directories.
As long as you’re not going to store sensitive data in there, I’ve just been using GitHub. I’ve got a private repository set up with my configs (.bashrc as well as WM configs and other dotfiles), and I just commit/push it up and have an update script pull it down elsewhere. Then it’s also version controlled.
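A minimal sketch of that kind of update script (the repo URL, file list, and paths here are placeholders, not the actual configuration):

#!/usr/bin/env bash
# Pull the dotfiles repo and deploy the tracked configs.
set -e
DOTFILES="$HOME/dotfiles"
if [ ! -d "$DOTFILES" ]; then
  git clone git@github.com:you/dotfiles.git "$DOTFILES"   # first run only
fi
git -C "$DOTFILES" pull
cp "$DOTFILES/.bashrc" "$HOME/.bashrc"                    # shell config
cp -r "$DOTFILES/.config/i3" "$HOME/.config/"             # e.g. a WM config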
I think I maybe phrased it horribly. My question was more like: what do I need to learn in order to modify the .bashrc myself, instead of using a program? Does that make sense?
Use ChatGPT. Take the first line of your .bashrc file and ask it to explain it, then the second line, etc. It won’t always be perfect, but for a .bashrc it shouldn’t be a problem and you can learn a lot from it.
You need to learn bash scripting. Also, there are a few extra files that the .bashrc can source, which is helpful for compartmentalizing the custom things you do to it so that it’s easier to undo if you screw something up. To do that, just add this to the bottom of your .bashrc:
# Source ~/.bash_custom if it exists
if [ -f ~/.bash_custom ]; then
    . ~/.bash_custom
fi
What that will do is check whether the .bash_custom file exists in your home directory and, if it does, run it and apply anything in there. You can call the file whatever you like, but bash setups do check for some default files and run them without you editing the .bashrc at all. It’s kind of hard to find a list of the files that get checked automatically, but I know that .bash_aliases is one of them (on Debian-based distros it’s sourced by the default .bashrc), and I think .bash_commands is checked as well, but I’m not entirely sure. Either way, you can force it to check your custom one by using the code above.
Then you can create the file and add any custom things in there that you like. For example, I like to update through the terminal frequently, but running sudo apt update && sudo apt upgrade && sudo apt autoremove && flatpak update was a bit tedious and I wanted less feedback, so I made a custom alias for my personal use.
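Something along these lines; this is a sketch rather than the exact alias, and the quieting flags, messages, and function form are assumptions:

# A possible version, e.g. in ~/.bash_custom. Note flatpak's subcommand
# is "update"; apt's -qq flag suppresses most of its output.
update() {
  echo "Updating package lists..."   && sudo apt update -qq        &&
  echo "Upgrading packages..."       && sudo apt upgrade -y -qq    &&
  echo "Removing unused packages..." && sudo apt autoremove -y -qq &&
  echo "Updating Flatpaks..."        && flatpak update -y
}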
It hides most of the text from updating and just gives me feedback on what it’s currently doing, since I don’t really care to know all of the details. So now I just run update in the terminal, plug in my password, and it updates and upgrades everything in a human-readable way.
There's a lot that can be done with bash scripting, like editing files, iterating over files and directories, setting environment variables. It's basically a full programming language so the limits are mostly your imagination.
While that is true, it’s also reproducibility (r13y) on another level: reproducible evaluation. That mostly stems from pure eval and locking.
In the “before times”, you’d get your Nix expressions from some mutable location in the Nix path, so running e.g. a nixos-rebuild on your configuration could produce two different eval results when run at two different times, depending on whether anything about your channel configuration changed in the meantime. This cannot happen with flakes, as all inputs are explicitly given and locked.
You could achieve the same using niv etc. before but that had its own issues.
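For illustration, the locked workflow looks roughly like this (a sketch assuming NixOS with the experimental flakes feature enabled; the hostname is a placeholder):

# Pin all inputs; flake.lock is what makes evaluation reproducible.
nix flake lock
# Rebuild from the locked inputs; same revision => same eval result.
sudo nixos-rebuild switch --flake .#myhost
# Advancing the pins is an explicit step, never an implicit side effect.
nix flake update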
It was usually recommended to lock your inputs anyhow, with all the fetchers requiring a hash, which I hated having to manually update, & I like the UX flakes provide (I really wish they supported more than Git & Mercurial tho). You can still get different evals tho if you point to latest.tar.zstd or some other non-hashed thing like a branch, where the referenced content can change & it won’t reproduce. I haven’t used channels in years, but doesn’t that just refer to the running system, not using Nix to build projects?
Yes, it does use a Linux kernel but no one would consider it open unless you limit yourself to AOSP. Google branded Android is closed and is regulated and restricted.
Seconding this. MPD + ncmpcpp + an MPRIS plugin. With the latter I can control the music playback through global keyboard shortcuts and the system tray UI if necessary.
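For anyone wanting to replicate it, one possible wiring on a Debian-based system (a sketch; package and binary names vary by distro, and mpDris2 is just one MPRIS bridge among several):

sudo apt install mpd ncmpcpp mpdris2   # mpdris2 bridges MPD to MPRIS
systemctl --user enable --now mpd      # run the music daemon as your user
mpDris2 &                              # start the MPRIS bridge (or use its user service)
ncmpcpp                                # terminal client for browsing/queueing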
It’s usually pretty easy to find secondhand Microsoft Surface devices for that price point (check craigslist, FB marketplace, and the like), and they absolutely rock when using a Gnome-based distro (like Fedora).
I just installed Fedora with KDE plasma and Wayland last weekend using the surface kernel. Was pretty painless, after abandoning a couple other distros that did not play nice.
The instructions on the GitHub are also very good, though obviously every year’s Surface has its own challenges, I’m sure.
Just make sure you know what you’re getting yourself into: features such as IR facial detection and stylus input aren’t supported OOTB on all models by the surface kernel. Also, how the Surface handles booting is a bit messy.
Linux on Surface works (I’ve got Ubuntu on both my Go gen 1 and Pro 7), but expect to put in time to tinker and manage unexpected issues.
I don’t want to step on your workflow too much since it somehow seems to work for you but your main issue stems from the fact that you clearly don’t work with your server as if it actually was a server.
You shouldn’t really have a desktop interface running there in the first place (let alone as root and then using it as a regular user). You should ask yourself what it actually solves for you and be open to trying different (and more standard) solutions to what you’re trying to achieve.
It’d probably involve less clicking and a bit more CLI, but for stuff like file management you can still easily use mc: https://linux.die.net/man/1/mc.
If you need terminal sessions that keep scrollback and don’t stop when you disconnect, you should learn to use tmux or screen or something like that. But then again, if you’re running actual software in there, you should probably use a service (daemon) for that.
As for whether it’s a security issue: yeah, it most definitely is. Just like it’s a security issue to run literally any networked application as root. Security isn’t black and white and there are trade-offs to be made, but most people wouldn’t consider what you’re doing a reasonable trade-off.
I had actually moved from a fully CLI server to one with a full desktop when I upgraded from a single board computer to x86. The issue is that it’s not just a NAS, but I regularly use it to offload long operations (moving, copying, or compressing files, mostly) so I don’t need to use my PC for those. To do that I just remote into it and type in the command, then I can turn my PC off or do whatever without affecting the operation. So in a way it’s a second PC that also happens to be a server for my other machines.
I use screen occasionally, and I used to use it a lot more when the server was CLI-only, but I find it really unwieldy. To switch between multiple active terminals you have to type in the ID of each screen to get back into it, and it refuses to scroll even when run in a terminal emulator that supports scrolling; moving the scroll wheel just cycles through recent commands.
Not trying to make excuses, just trying to explain my reasoning. I know it’s bad practice and none of these are things I’d do if I was managing an actual production server, but since it’s only accessible from my LAN I tend to be a lot more lax with it.
I’m wondering if I could benefit from some kind of virtualized setup that separates the server stuff while still letting me remote into a desktop on the same machine, or if I could get away with just remoting in as a non-root user. I’ve never used a hypervisor, though, and have no idea how to, so I’m not sure how well that would go; the well-known open source ones like Xen seem really technical and feel like something not meant to be used outside an actual data centre.
KVM is awesome. It’s the core of Proxmox, which is my preferred way to manage VMs and LXC containers now. I used to run Debian + KVM + virt-manager or Cockpit, but Proxmox does all the fiddly setup for me and then just works.
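If you’d rather start with plain KVM before jumping to Proxmox, a rough sketch on Debian (package names vary by release; the VM specs, ISO path, and --os-variant value are placeholders):

sudo apt install qemu-system-x86 libvirt-daemon-system virtinst virt-manager
sudo adduser "$USER" libvirt           # manage VMs without being root
# Create a minimal VM from an installer ISO:
virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk size=20 --cdrom ~/isos/debian-12.iso --os-variant debian11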
Sorry, this is very much a PEBKAC issue. This is an excerpt from my tmux config:
# Start windows and panes at 1, not 0
set -g base-index 1
setw -g pane-base-index 1

# Use Alt-arrow keys without prefix key to switch panes
bind -n M-Left select-pane -L
bind -n M-Right select-pane -R
bind -n M-Up select-pane -U
bind -n M-Down select-pane -D

# Shift arrow to switch windows
bind -n S-Left previous-window
bind -n S-Right next-window

# No delay for escape key press
set -sg escape-time 0

# Increase scrollback buffer size from 2000 to 50000 lines
set -g history-limit 50000

# Increase tmux messages display duration from 750ms to 4s
set -g display-time 4000

# Bind pane creation keys to reuse current directory
bind % split-window -h -c "#{pane_current_path}"
bind '"' split-window -v -c "#{pane_current_path}"
I hope the comments are self-explanatory.
Scrolling works with Ctrl+b Page Up/Down. There are other shortcuts, but this is probably the most obvious. q to quit scrolling.
Ctrl+b d to detach from a session. tmux a to attach. As always, many options are available to have many named sessions running simultaneously, but that is for a later time.
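For the curious, a sketch of those named-session basics (the session name is arbitrary):

tmux new -s transfers     # start a session named "transfers"
# ... kick off the long job, then Ctrl+b d to detach ...
tmux ls                   # list sessions and what's running in them
tmux attach -t transfers  # pick it back up later, even over a new SSH login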
I see. In that case you should really try tmux; I didn’t vibe with screen either but I find tmux quite usable.
For the most part I just open several terminal windows/tabs on my local machine and remote with each one to the server, and I use tmux only when I explicitly need to keep something running. Since that’s usually just one thing I can use like two tmux commands and don’t need anything else.
Oh, and for stuff like copying I’d use rsync instead of plain cp, so that if it gets interrupted I only re-copy what’s needed.
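A sketch of what that looks like (the paths are placeholders; the trailing slash on the source means 'copy its contents'):

# Re-running after an interruption only transfers what's missing or changed.
# -a preserves attributes, --partial keeps partially-transferred files.
rsync -a --partial --progress /mnt/media/incoming/ /mnt/media/library/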
I wouldn’t bother with virtualization and such; you’d only complicate things for yourself. Try to keep it simple but do it properly: learn some command line basics and you’ll see that in a year it’ll become second nature.
I don’t really have much of an opinion about Wayland, but it’s still funny to me whenever somebody using Wayland shits on X11 and then tries to share their screen on Zoom or something. If Wayland ends up being great I’ll be happy, but for now X11 just kind of works, so I don’t understand why people are so eager to switch. This isn’t to say I don’t understand the desire to build something better and more secure than X11; I’m just not sure what the end user gets out of Wayland right now. I don’t have VRR monitors and the like, though, so maybe I’m not running into problems I would be if I wanted fancier features. Plus, I use xmonad and some other stuff right now that won’t work on Wayland, so I don’t have much incentive to try it. Hopefully everything gets Wayland updates eventually.
With those, however, they never ran on Linux. This situation is different because Zoom did run. I’ve only used Zoom once, so no clue if it worked excellently or if it was “meh”, but it sounds like it did the job before.
Regardless, it doesn’t matter if Zoom hasn’t updated their Electron to account for the Wayland changes; all people will see is that it doesn’t work (or did, but no longer does) on Linux, and they will blame Linux instead.
Which is fine if we want Linux to always be a hobbyist operating system. However, if we want Linux to be more accessible to people, then unfortunately the ball is in our court to try not to break something as simple (or rather, what most regular users would define as simple) as this.
Hasn’t screen sharing been working for some time now? It even works in WebEx from Firefox; you can pick any window to share. Granted, a few years back it didn’t work, but now it does. Maybe it’s a Zoom bug… 🤔
Probably, but my exposure to Wayland has just been people complaining about how much X11 sucks and then proceeding to have more problems than everybody else.
No, I just had to deal with this myself. The most you can do is share your entire desktop in Wayland, and it’s shaky. For the first time, I had to switch to Xorg and bingo, Zoom works. Fonts are actually antialiased and kerned properly for certain applications that weren’t before… Really surprising.
Again, none of that is a failure of Wayland; it’s a failure of Zoom to run on Wayland. One day, and this will be within the next 5 years, Wayland-only apps will refuse to run on X.Org and the situation will be reversed.
You can share your screen perfectly fine under Wayland. Many apps handle it fine, and even in the case of Discord, if you use it in a browser it’s doable.
No Wayland dev can fix an issue that originates from a lack of app support. There have been many Wayland issues through the years, and trust me, I know, but how do you expect them to fix Zoom? Acquire the company and take it behind the shed?
how do you expect them to fix Zoom? Acquire the company and take it behind the shed?
I mean, you could - for example - implement the interface these apps expect to exist and use with your amazing new compositor™.
This is precisely why companies just say “fuck Linux users”: instead of supporting a single operating system where everything kinda “just works” across versions for decades, you have to *checks notes* support 20 different compositors across 2 vastly different display servers and dozens of various desktop environments and such… All for an OS that’s used by maybe 3% of your users if you’re lucky.
The interface exists. It’s up to Zoom to support it. Why are you under the impression there is a technical issue? THERE IS NONE.
It’s up to Zoom to support the aforementioned interface.
Wayland handles displays this way for security: the app is only allowed to access the screen if the user grants it permission through a request dialogue, and it’s disallowed from accessing anything except what it’s been given permission to. This isn’t even a new concept; it’s just not doable under X.
It’s also possible to recreate the lawless model of X under Wayland through a protocol, if you want to make one, but it makes little sense to throw away this better model just for the sake of some shitty proprietary apps that don’t care about Linux anyway.
I remember having this realisation about Mir, but only after we collectively ran it off a cliff. The main reason everyone piled on Mir was the belief that Canonical would be priming the Linux desktop for fragmentation with two competing standards.
But in fact, Mir was providing a solution to the fragmentation Wayland was bringing. Now we have 3, 4, 5 Mirs, all with slight incompatibilities. Want a feature? Better hope all of them decide to implement the extension after someone proposes it. We know how well that worked in the past.
This is also ironic because the detractors of Xorg constantly talked about the issues with Xorg extensions and how many of them there were. But I never really had to look up which extensions Xorg supported, while I have had to do that with Wayland compositors.
The main reason that I piled on Canonical was that they kept on spreading FUD about Wayland to try to promote / justify Mir rather than discussing in good faith.
You want everything just handed to you or what? You’re asking for cheap, best, and easy. At some point you need to decide what your goal is and accept that you’re going to have to compromise.
Huh? I know exactly what I want; that’s why my post was very specific. If you don’t have anything to contribute to the post, go outside and take your elitism out on a punching bag.
When I installed Ubuntu on my Surface Go 2 it was easy, as there is good, well-known documentation for it. The only thing is you’ll want to pick up a USB-C dock to plug in a keyboard along with the installation media. github.com/linux-surface/…/Surface-Go-2
The install was really easy, even if the Surface is sometimes a bit difficult to boot from a USB drive. I don’t know why, but the USB drive boots more easily when using Ventoy on it with multiple bootable ISOs.
Otherwise everything was easy, and I didn’t have to do anything to make it work fine on Fedora.
I just don’t know how the installation process would have gone without the Type Cover (keyboard).
Lately I’ve installed the linux-surface kernel, which also improved the Bluetooth mouse.
You can verify the signature of the manual download as well. Either way, you are trusting the files you download over HTTPS from mullvad.net. There’s no real difference, except that when you use the repo, you are trusting it indefinitely, whereas if you download the deb directly, you are only trusting it once.
Using the repo is less secure, because it opens you to future attacks against the repo itself.
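For reference, the one-time manual path looks something like this (a sketch; the filenames are illustrative, so check the vendor’s site for the current release and signing key):

# Verify the detached signature before installing.
gpg --import mullvad-code-signing.asc       # signing key fetched out of band
gpg --verify MullvadVPN-2024.3_amd64.deb.asc MullvadVPN-2024.3_amd64.deb
sudo apt install ./MullvadVPN-2024.3_amd64.deb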
That’s why you download the key from multiple distinct domains, from multiple distinct locations, using multiple distinct devices, and verify that their fingerprints match. If the key/fingerprint is only available on one domain, open a bug report with the maintainer.
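Concretely, something like the following from each source and device, comparing the printed fingerprints by eye (the URL is illustrative):

# Show the key's fingerprint without importing it into your keyring.
curl -sSfO https://mullvad.net/mullvad-code-signing.asc
gpg --import-options show-only --import mullvad-code-signing.asc
# Repeat from a mirror / another device and check the fingerprints match.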
There’s nothing wrong with installing a .deb manually.
Personally, I’d hesitate to add any third-party repos unless there is a very good reason. In this case, the only real difference is that you won’t get the updates automatically with sudo apt update; sudo apt upgrade without the repo. Either way, the desktop app will notify you when updates are available. There’s very little advantage to using the repo.
Adding a repo is very rarely required. It has deeper consequences than simply installing an app, and requires a higher level of trust. If you don’t understand the security implications of adding a repo (and its associated key), then my advice is: just don’t.
What are you on about? If you use the 3rd-party repo, you’re just as likely to get malware as if you download the deb directly from the website. It’s literally the same thing, except that adding the repo means the malware could get installed automatically, without you knowing where it came from.
No, you’re confusing two vectors of attack. I’m saying that even if you can trust the vendor, you’re still at risk of downloading malicious software that was manipulated between the vendor and you (a man-in-the-middle attack), unless you verify a signature using a key stored offline (note that HTTPS is still vulnerable because its keys are stored online).
Not untrue, and I don’t think the possibility should be glossed over, but honestly, what do you think is more likely: this specific person getting specifically MitM’ed by a bad actor, or a bad actor taking control of a repo that hundreds of people blindly trust? I have a sneaking suspicion that OP’s threat model isn’t sophisticated enough to really, truly need to worry about that.
This sort of thing happens in dragnet fashion. And Mullvad users are definitely a group that gets targeted. Don’t assume OP isn’t a refugee or journalist and give them bad advice that could get them killed.