Weird, the video announcement states the software is FOSS. I think that's regarding his YouTube content and not this stuff. The video also states the GitHub repo is a WIP.
To be honest I think the WordPress install handled all that, or maybe WordPress handles it inherently, I'm not sure. I simply pointed a domain at my static IP and forwarded HTTP/HTTPS to the correct LAN ports, and it just worked on its own.
I probably shouldn't have mentioned WordPress; I'm mostly focused on the Gogs server right now. I just added it for more context on the issue.
Thank you, I will try there. I was trying to install PiVPN, since I can connect to the Gogs server on my local network; if I could just get a VPN server running it should work, but of course I ran into more issues with that. The cause could well be some config I changed and forgot about, so reimaging and starting fresh might be the easiest solution.
Though, I did just upgrade to FTTP, which added a modem or some kind of device between my router and the internet, so maybe there's some extra config around that I'm just not aware of.
This. The post gave me fits when I read it. Fiber is actually great for self-hosting a service, but the security side is very dangerous for uninformed individuals.
If configuring an nginx server is already stretching it, it's only a matter of time until your stuff is encrypted and ransomed. The same goes for all data on your network. If the Pi is not in its own zone, it has now become a door to your network with barely a lock, let alone a good one.
I would highly recommend reading up on network security, and prioritizing isolating the Pi and making at least daily backups.
Your router is going to be scanned for open ports every couple of minutes. If the WordPress install doesn't have a strong password, you're in for a bad time.
The appeal of it, to me, is the same reason Docker containers are really good. You write your definition, save it to git for example, and if you ever need to set up your computer from scratch, restoring that config will set it up exactly like it was before. But even besides that, being able to roll back if something goes wrong is a big plus.
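For a taste of what that looks like, here's a minimal sketch of a configuration.nix fragment (the options are real NixOS options; the package and service choices are just illustrative):

```nix
# Illustrative fragment of /etc/nixos/configuration.nix: the system is
# described here, and `nixos-rebuild switch` makes the machine match it.
{ pkgs, ... }:
{
  # System-wide packages, declared rather than installed ad hoc
  environment.systemPackages = with pkgs; [ git firefox ];

  # Services are switched on declaratively too
  services.openssh.enable = true;
}
```

Keep that file in git and any machine you apply it to converges on the same state; old generations stick around in the boot menu, which is where the rollback comes from.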
That's what I keep reading, and why I would like to give it a try. For now I'm still confused about how this is easier or more efficient than sharing your list of packages, restoring a backup, or using downgrade on Arch. I'm really interested because I like to try new stuff, especially if it brings something of interest.
I really have a hard time seeing the difference for now, after my first setup in a VM, but also because imaging my full Arch system onto a new machine two years ago only took me an hour and fewer than ten command lines.
Again, I'm genuinely trying to understand what I'm missing. From my reading, NixOS seems to be the only distro I could switch to.
My thoughts, which may have inaccuracies: in NixOS, each package declares the exact versions of the dependencies it needs. When you update NixOS it takes up quite a bit of space, because one app may link against one version of a library while another app uses a different one, and both are stored on disk, and your old install is still there to roll back to. On other distros a package lists its dependencies, but during updates a single dependency may get a bug-fix point release and be bumped, so the behaviour of the app you added may change as its subparts change. With non-Nix distros, whether you install a package today or in six months also determines how it functions; if dependencies were updated in the meantime, your install may act differently. Nix prevents this, since you have a repeatable install.
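To make the repeatability point concrete, here's a hedged sketch of the pinning idea: importing nixpkgs at one fixed revision, so the same expression resolves to the same dependency graph months later. The <commit> below is a placeholder, not a real hash:

```nix
# Sketch only: pin nixpkgs to one revision so a rebuild next year
# fetches exactly the same dependency graph. <commit> is a placeholder.
let
  pinned = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/<commit>.tar.gz") {};
in {
  environment.systemPackages = [ pinned.git ];
}
```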
Thanks for taking the time to share this detailed thought. That's an interesting point I'd forgotten, because I haven't experienced any related issues in 15 years with Arch, but it's still a nice approach. I can certainly see why this is a big plus for NixOS.
I haven't had issues with my openSUSE Leap install in 7 years either; there is careful curating, automated QA testing, and rollback snapshotting if you break something while messing about. But I have a NixOS machine as well. It provides a nice way of configuring a repeatable system, which is probably a huge benefit for folks making or deploying Linux devices that are 100% repeatable.
Right, I totally agree. If I had to deploy my config on several machines, or create dedicated configs from a common base, then I would have been convinced. I'm still not convinced from a single-user point of view, but I still believe in this distro and like its approach, so I'll continue experimenting with it and we'll see where my journey leads me.
At least for now I'm glad to have a new toy I can mess with. With my Arch system I was getting this weird feeling of being happy to have an efficient and stable machine while at the same time being bored with nothing to test, tweak, destroy, and rebuild. I love to learn and discover new things, so I experiment with a bunch of applications and parameters I will never need anyway, but it becomes harder and harder to find something that keeps me entertained for more than a day.
I hear you. My openSUSE Leap has been so stable that I got bored with nothing to tweak. Their MicroOS is an immutable system with config-file setup capability, and somebody built a tool for it to make config-file creation simple (opensuse.github.io/fuel-ignition/edit), so that was fun for a while. But NixOS was a nice distraction too.
Because your Nix config also configures your software, not just installs it. Admittedly, with base NixOS that's more true of server software than desktop. But with the addition of Home Manager you can also configure many desktop apps in your Nix config.
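A minimal sketch of what that looks like (programs.git is a real Home Manager module; the values are made up):

```nix
# Home Manager fragment: the app is installed *and* configured in one place.
{
  programs.git = {
    enable = true;
    userName = "Example Name";          # illustrative values
    userEmail = "example@example.org";
  };
}
```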
Thank you for this addition. I very much appreciate the fediverse community, who help people understand things, share their knowledge, and act nicely (if we exclude some rare people who are clearly not used to living within a sane community). I've seen Home Manager, but it raised one more question for me: what's the added value compared to stow, for example? Thanks again for sharing your thoughts.
I've never used stow so I can't speak to it specifically. Home Manager is nice for two reasons. If you're already using NixOS, you can have one unified config for your whole system. And because Nix is a programming language generating these configs, you may be able to do things you wouldn't otherwise. It also has some nice defaults that you may not get without it.
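A small sketch of what I mean by that second point (programs.ssh.matchBlocks is a real Home Manager option; the host names are invented): you can generate repetitive config from plain data instead of copy-pasting it.

```nix
# One SSH match block per host, generated from a list, which a static
# dotfile can't do. The host names are hypothetical.
{
  programs.ssh.enable = true;
  programs.ssh.matchBlocks = builtins.listToAttrs (map (h: {
    name = h;
    value = { hostname = "${h}.example.org"; user = "me"; };
  }) [ "web" "db" "backup" ]);
}
```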
Due to the still-early development of NixOS, Home Manager is in some ways very similar to nix-env, and flakes are still highly experimental. Also, the configuration parameters change quite significantly as the distro develops. I'm sure this will all settle down when the distro becomes more mature, but to be honest that's also what attracts me. I like chaos ^^ Seriously, this shows me some potential for great achievements. I will continue testing NixOS, but for now I haven't found THE reason to leave Arch yet. If I had to deploy my config on several machines, or create dedicated configs from a common base, then I would have been convinced. We'll see where my journey leads me.
Sure, but not everything can be defined in the Nix config. Firewalls have issues, and some options for packages are not implemented yet. For example, systemd-networkd doesn't have all its features implemented.
NixOS in its current form does have its limitations, but it's ever improving. I personally have never had issues doing what I needed firewall-wise, but I've not done anything complex. Mostly just opening ports and a little port redirecting.
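For reference, that kind of thing looks like this (networking.firewall is the real NixOS option set; the ports are just examples):

```nix
# Declarative firewall: after a rebuild, only the listed ports are open.
{
  networking.firewall = {
    enable = true;                 # on by default in NixOS, shown for clarity
    allowedTCPPorts = [ 80 443 ];  # example ports
  };
}
```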
Thanks. In the end, after Mint didn't recognise my network adaptor, I tried Manjaro (everything worked great, but I don't think I'm ready for Arch) and ended up on Pop!_OS… everything works, so I'm going to stick with it for now.
Good to hear. I've not had any issues so far. The only "niggle" I've had is that when pairing my Bluetooth devices I've needed to turn Bluetooth on and off for each pairing, but once done they've reconnected fine.
Nice! I know the OpenBSD people have been working on a Wayland compatibility layer that takes Linux-specific things into account (libinput?), but last I heard it's not ready. I have my hopes up though! Could be the year of desktop BSD if they port COSMIC.
It would certainly be easier for them to port COSMIC, because there are very few dependencies on shared C libraries. Cargo links all Rust libraries statically, so it's easier to maintain and update components. It will depend on how open they are to accepting Cargo and Rust into their ecosystems.
Indeed: the general configure, build, install steps are fairly universal, and the configure script doesn't have to come from autoconf. We still have that, and Makefiles, as a wrapper around a Meson-based setup to keep the process familiar.
Hell, maybe I do need to learn some shit, because I was under the impression that you cd into the folder after you untar it, then type ./configure && make && sudo make install, but the last two packages I attempted to install from source like this just did nothing.
Maybe. But maybe they did nothing because there was no ./configure script and you had to use another tool, e.g. one of the ones I mentioned, so you need to learn some other shit.
BTW, installing anything from source like this is the right way only in (B)LFS.
But you definitely don't need to learn this if you are a developer starting a new project in 2024. You can use CMake or write plain Makefiles, even shell scripts if you want, but as you value your life or your sanity, keep away from the autotools. It is a nightmare to debug the thousands of lines of generated scripts they put into your source tree.
I've been following the work on COSMIC (though not super actively), and I keep saying that I like what I'm seeing because, well, I do! The idea of a tiling DE is a very exciting one, and COSMIC really has the potential to become a major Linux DE.
As a regular i3 user, I was very satisfied with how tiling was implemented in the Pop shell for Gnome. After a few keybind changes here and there, maneuvering the windows and workspaces almost felt like home. One minor complaint: glitches happen when an external monitor is connected or disconnected on the fly (laptop use case), in which case windows are disoriented and thrown around to random, unexpected places instead of staying where they were. I'm blaming Gnome for that one, however, since I'm assuming it's related to how Gnome handles multiple screens, with Pop shell acting on top of it, so I'm expecting it to be fixed in the COSMIC DE.
Yeah, I’m a Pop user and like what they do with Gnome now. I can’t wait to see what it’s like when the desktop isn’t limited by the Gnome extension system.
I'm just happy there's a Rust DE being written in Slint. KDE is nice and all, but it's all C++. No way am I touching that trainwreck of a language again.
A great start to the week - @pop_os_official will collaborate with us to offer Slint as an alternative toolkit for application development on Cosmic Desktop.
The keyword is alternative. All first-party applications are written natively with our libcosmic toolkit, which is based on iced-rs. We are using a fork of iced, though, because we needed to implement a custom runtime with the sctk (Smithay Client Toolkit) for COSMIC applet development, but our desktop applications will use the original winit runtime.
Do what I do. "Oh shoot, Jellyfin stopped, now I have to remember how to tell Arch to clear out its cached packages" (it's pacman -Sc if you're me and you're reading this in the future).
This is me… in general, with Linux. So I have a whole section of my Obsidian vault dedicated to troubleshooting and setup steps for my server projects. It's saved me hours of research already. Stupid brain…
I'm not sure if Ubuntu requires a wired internet connection. I installed a different distro yesterday and Wi-Fi worked fine during the installation: the installer asked me to connect to a network and I used Wi-Fi. I never plugged a network cable into the machine. Maybe it's the same with Ubuntu. But sure, there are other possibilities: offline installers, and/or installing Linux on a different machine and then swapping the hard disk/SSD over. Just take care not to overwrite the internal disk of your laptop; make sure it writes to the correct disk (or unplug the other ones).
Same as Debian since Bookworm (12): non-free firmware comes in the installation files now, so you can opt in or out at that stage and not have to scramble if you forgot.
That's it. I have installed Ubuntu many times connected over Wi-Fi without any problems, except for one special case many years ago: that system had some brand-new Wi-Fi adapter, so I had to install the driver over Ethernet. But in almost every case it should just work, and you can simply try to get a wireless connection in a live system to find out. And as mentioned above, an internet connection is not necessary while installing from a USB stick with the usual image; it's just recommended, to save time and install the latest updates of some components during the initial system installation. But of course you can do that later, and of course you can do it over Wi-Fi (except in some very rare special cases, as mentioned at the beginning).
I'm honestly wondering why there are AI images in this article? They don't contribute anything. I'd prefer screenshots instead, to actually see the relevant stuff, like with the line about Garuda's UI. Otherwise, a pretty OK article.