IMO, at this point you probably aren’t going to miss much.
You’ve presumably booted from an external device and installed an OS. I assume the time and date are right.
The only real question is whether you’re using the EFI or MBR boot method. If EFI, you’re probably fine and future-proofed, unless you want Secure Boot (Windows), in which case you may face an issue. That’s not to say you will, just that you might.
You can either try to contact the seller and ask for the password, or just erase the UEFI settings by shorting a jumper or something similar. There should be instructions on how to do that for your specific model.
Glad that there are government-level systems here that make use of Linux (mostly Ubuntu). For example, in Kerala, all government schools and colleges use Linux for computer labs and academic activities; I believe the local body administration uses Ubuntu too.
I only wish we could convince those private-sector schools of the money they’d save by converting their slow Windows systems to Linux.
The problem with private schools is most likely the training investment required. They can’t as easily replace their IT team and/or CS teachers with people who can handle Linux.
I don’t wanna repeat myself, but: the 7840U for the next few years, then I hope RISC-V will be mature enough to kick some ass (and that Framework releases a board for it).
I’d hope so. It wouldn’t be good if they were blind.
Anyway, I didn’t question the accuracy of what he said, just that his manner of delivery is obnoxious and portrays an attitude of self-importance. It feels like he’s yelling at the viewer with hostility.
Aussies tend to be quite direct. It’s basically our natural state. I get how it can be perceived as hostile, but I don’t actually think Brodie is very abrasive. He seems like a pretty relaxed guy.
Also, GitHub PRs, at least to me, feel like they encourage reviewing changes by the total diff of the entire PR, not by each commit. I don’t want a slog of commits that don’t add any value; it just makes things like reverts more annoying. Tools like Gerrit and Phabricator enforce reviews by making you review individual commits / changes / whatever you want to call them, not branch diffs.
GitHub has an option when merging a PR to “squash and merge”. This option squashes all of the commits on the PR branch into a single commit and cherry-picks it on top of the base branch. We use this by default in our open source projects at work. Most people are not gonna go through the effort of crafting a well-defined patch series the way a Linux kernel contribution requires. Most changes aren’t that big anyway, so it doesn’t really matter. Send as many commits as you want in the PR; I’ll just review the diff as a whole and squash it when I’m done. Workflows should adapt to user preference, not the other way around, and this is a good example of that.
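For reference, a rough local-git equivalent of what that button does, with placeholder branch and message names:

```
# squash the whole feature branch into one staged change on main,
# then record it as a single commit (no merge commit is created)
git checkout main
git merge --squash feature-branch
git commit -m "Add feature X"
```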
How much of that is what GitHub encourages and how much of that is what users prefer? Plenty of users seem to enjoy Phabricator / Gerrit for code review in practice precisely because of their workflows.
Well, squash and merge isn’t the default or pushed in any way. It’s an option, and we chose to enable it ourselves because that’s what works best for us. It’s what works well for many other projects too, which is why many enable it instead of the default merge commit.
Yeah, but Phabricator and Gerrit are entirely separate workflows from GitHub’s, and a lot of people prefer them because they encourage better histories and reviews. They help you get rid of the “fixed typos” type of commit while still letting you make larger PRs.
GitHub obviously does let you keep a clean git history, but the code review workflow in GH just doesn’t encourage reviewing commits.
I think the idea here is that reviewing individual commits is irrelevant if the plan is just to squash it all down. Each PR corresponds to a single change on the main branch in the end; the fact that there was a main commit followed by a handful of “fixed typos” and “fixed bug” commits doesn’t actually matter, since it will all be blown away. The process results in the same clean history with good individual commits on the main branch, just as if the user had squashed those commits locally before pushing to the code review platform.
Right, but squashed commits don’t scale for large PRs. You could argue that large PRs should be avoided, but sometimes they make sense. And in the case where you do have a large PR, a commit by commit review makes a lot of sense to keep your history clean.
Large features that are relatively isolated from the rest of the codebase make perfect sense to develop on a separate branch before merging in; you don’t merge half-broken code. Squashing a large feature into one commit throws away any useful history that branch may have had.
I agree, and GitHub allows choosing how to merge each PR individually if you need to do something different for a specific PR. Large PRs like that are at most 1% of our total PRs, and we review those more per-commit and use a merge commit instead of a squash. By default we optimize for the other 99%.
“Although we’ll be hosting the repository on GitHub, our contribution workflow will remain unchanged and we will not be accepting Pull Requests at this time”
We can all read it right there, plain as day.
If they weren’t planning to/considering it, then why specify “at this time”?
I’m only a native English speaker, so I guess I could be interpreting it wrong.
Do tell, oh wise one: what alternative meaning could it possibly have?
> I’m only a native English speaker, so I guess I could be interpreting it wrong.
You should try being a native English reader.
What it means is “they will not be accepting pull requests at this time.” Whether or not they are open to changing this in the future is not specified. They have not specifically stated that this is off the table, nor have they stated this is their intent.
So they are considering it.
Thanks for confirming my point.
If they weren’t, they wouldn’t have specified, they’d just say “we will not be accepting Pull Requests”.
How does the opinion of your supposed internal contact at mozilla affect the basic English interpretation of the public announcement?
We’d see whose interpretation is right? Duh.
Padenot (a contributor with direct ties to Mozilla internals) agrees with me on GitHub PRs being terrible. 🤣
Waiting for other responses.
Note: most of them are sleeping rn, so it’s going to take a bit of time.
I asked Glob (the literal author of the announcement) directly as well. Waiting for him to wake up and see it, he was up at 3am last night, lol.
You’re quite the lunatic. I’m obviously not defending GitHub PRs, or saying Mozilla should or should not use them. I said “we are not open to PRs at this time” is not the same as “we will be open to PRs in the future.” The truth of that statement has absolutely nothing to do with whether or not Mozilla is, in fact, open to using PRs in the future. But there’s no point in telling you that, because you’re clearly unhinged. Have a good life.
That’s not what I said they meant to begin with. I said planning/considering, which is wildly different.
Second: who’s unhinged? Looks like I was right all along; they were indeed considering it, but have since decided against it because of the same concerns I mentioned previously. Is this definitive enough for you?
Nobody here needs “a long history of submitting patches to Firefox” to have an opinion on the tools used to manage the project. I assume most people sharing their opinion here don’t have one, and yet you need not scroll far. You merely need some knowledge of and experience with the tools, be it in personal, corporate, FOSS, etc. projects. Besides, I don’t spend my free time helping FOSS projects just so I can say “my opinion is better”; that’s literally just an appeal to authority. But if you must know, I have helped here and there throughout the years under various aliases/accounts. (Why “various aliases”? Because I enjoy helping, not some meaningless credit; it’s just how I am.)
So what you’re saying is that, as someone who has never worked on the Firefox codebase, you still somehow know more about managing contributions to one of the largest FOSS projects in the world, one that has been running pretty successfully for the last 25 years?
Idk, maybe try a bit of humility - like if it looks like they are making a weird decision, maybe it’s not because they are dumb and you are very smart, maybe it’s because they know stuff that you don’t?
First off, not what I said.
Second off, I never called them dumb. I actually happen to have a good relationship with them, so I take offense at what you’re implying. I merely stated that I don’t like GitHub and gave some legitimate reasons. Maybe the one who should learn humility is you.
They’ve been completely dropping the ball for years. I used to donate regularly but have completely given up on this project. It’s a farce at this point.
Thankfully I only have simple needs so Krita suffices and I don’t have to deal with the never-improving UX nightmare and never-releasing changes.
Yeah, I’m salty. It’s just that GIMP was a shining star of FOSS and it’s just been slowly rotting from inaction.
> They’ve been completely dropping the ball for years. I used to donate regularly but have completely given up on this project. It’s a farce at this point.
Liberapay shows the number of donors has almost doubled in the last few months (look at “view income history”), so I hope it’s an indication that they’ve made good changes to the project management and the future will be better.
That’s good to hear, and I really would love for things to get sorted out. GIMP 3.x has many improvements for sure, but there’s a long way to go, and actually releasing these improvements is necessary…
If GIMP can become another Blender, that would be incredible.
In X11 it’s server-side, and in GNOME Wayland it’s of course client-side, but they look exactly the same as the SSD ones. I doubt they’ll change that between the current beta and the 3.x release.
The GTK3 port is done, and now they need to finalize the new extension API and improve their color space support (particularly CMYK). It would be nice if Wayland had a color management protocol extension standardized by then, but I don’t think it’s a blocker.
Software information says you’re already using Fedora? Do you mean you want to switch distros? If so, it’d be useful to say what you’re looking for and why Fedora didn’t fulfill those needs, so we can recommend alternatives.
Firstly, check the logs directly to get a more concise error that we can analyse. journalctl is the standard systemd logging client you can use in the terminal. By specifying the unit (units can be socket files, timers, or services), you can get logs specifically for that unit.
You can use flags such as -e to jump to the end of the journal, -f to follow the journal in real time, and -p to filter by priority (err, crit, warning, and so on) so you don’t have to scroll through every line in the log.
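For example (the unit name is just a placeholder):

```
# show logs for one unit and jump to the end of the pager
journalctl -e -u example.service

# follow new entries in real time
journalctl -f -u example.service

# only show entries of priority err and worse from the current boot
journalctl -p err -b
```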
Secondly, and this is gonna sound weird, but reboot into Windows twice. The first time you boot Windows, run chkdsk on the partition(s) in Terminal/PowerShell/Command Prompt as administrator. If it tells you it needs to do an offline scan, reboot and you’ll see an offline chkdsk screen on boot before login. If not, reboot into Windows again anyway, and then reboot into Linux.
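The chkdsk invocation itself is simple; run something like this in an elevated prompt, where D: stands in for whatever letter Windows gives the partition:

```
chkdsk D: /f
```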
The reason is that NTFS has a weird failsafe “dirty” flag that NTFS on Linux considers a no-go, and it’s usually set if the system crashes more than twice, but not always. If the Linux NTFS drivers see the flag, they won’t mount the partition as a precaution. The only way to reset the flag from Windows is to reboot into Windows twice. Not once, not three times, but twice.
This might be outdated info, but that was the case some years ago. There might be a way to fix it from modern-day Linux, but I don’t know, especially with no direct and informative errors to go by.
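If memory serves, ntfsfix from the ntfs-3g package can clear that dirty flag from the Linux side, though the chkdsk route above is the safer option:

```
# /dev/sdX1 is a placeholder for the NTFS partition
sudo ntfsfix --clear-dirty /dev/sdX1
```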
I bought a used ThinkPad Yoga 370 with a 7th-gen i5 and 8 GB of RAM (a single SO-DIMM slot, which is a real pity), which I later upgraded to 16 GB. The pen also slots right into the frame of the laptop for storage and recharging, so you don’t need to carry it around separately, though it may be a bit small for some people. I personally find it comfortable.
I went right to Arch (btw), as I was already running it on both my old laptop and my desktop. The Arch Wiki has a page dedicated to this laptop, listing which features work and which don’t. If you mess around with the fingerprint sensor and the python-validity package you can get it to work, but I don’t use it anyway. The rest works out of the box, though I have never tried the modem (my version lacks the antennas and the module) or the express card reader.
I use Xournal++ to take notes at uni. I tend to make a huge journal for each course (easily 150+ pages by the end of the semester), so make sure to disable autosaves, as sometimes they hang the whole program while trying to save.
At first I was using GNOME on Wayland, which has pretty good palm rejection, autorotation, and sensor/webcam remapping, and in general works great out of the box. Later I moved to i3 on Xorg, as somehow a tiling window manager made more sense to me on a touchscreen device (Android is kind of a tiling window manager if you think about it). I’m currently on i3, using touchegg for custom gestures for the WM and specific programs. I’m wondering whether to move to Hyprland, as I noticed slightly worse palm rejection on i3/Xorg compared to GNOME/Wayland (still very usable though), but I still want a tiling window manager and customizable touch gestures, which Hyprland should have a plugin for.
In general I find this laptop great; the X1 Yoga should be good too, but I have never tried it on Linux.
I am thinking of switching from Linux Mint to Fedora. I have always liked Fedora, but I’ve been bitten by some BS like NVIDIA drivers not working and some programs only being available as .deb files (I know about alien… or do I?).
I love the GNOME DE; it has that modern “I work on a spaceship” feel.
I mostly do music production and some gaming, so PipeWire seems intriguing.
Here is the real question: should I go Silverblue? I just learned about Distrobox, so maybe that is my solution for programs I cannot get through Flatpak?
You can install Silverblue and then rebase to uBlue (universal-blue.org), specifically to the “silverblue-nvidia” variant, and you should get a nice Silverblue experience without any of the NVIDIA struggles, as the people at the uBlue project take care of that stuff for you.
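The rebase itself is a one-liner, though the exact image reference below is from memory and may have changed, so double-check it against the current uBlue docs:

```
# switch the deployment to the uBlue silverblue-nvidia image, then reboot
rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-nvidia:latest
systemctl reboot
```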
And yes, Distrobox is the go-to solution to run stuff that is basically Ubuntu-only, or by extension bound to any distro variant/version and not available as a Flatpak. This includes graphical applications. Distrobox works great; I do all my work in it.
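A minimal sketch of that workflow, with the container name and package name as placeholders:

```
# create an Ubuntu container and drop into it
distrobox create --name ubuntu-box --image ubuntu:22.04
distrobox enter ubuntu-box

# inside the container: install a deb-only app,
# then put its launcher in the host's app menu
sudo apt install some-app
distrobox-export --app some-app
```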
Oh, I totally misunderstood the OS. I was under the impression that using os-tree should be totally avoided for anything other than necessary system programs, and all other software should be installed with flatpaks or containers.
I now understand that using os-tree for some programs is inevitable, and I should embrace it, though still cautiously, to maintain as clean an OS as possible for maximum longevity.
> I was under the impression that using os-tree should be totally avoided for anything other than necessary system programs
Interaction with ostree directly shouldn’t occur that often; sudo ostree admin pin *number* (and its -u option) is probably the only command your average Joe will interact with. You probably meant rpm-ostree.
> and all other software should be installed with flatpaks or containers.
It’s indeed true that Fedora initially intended for Flatpaks to be preferred. If the software isn’t available there, then Toolbx (or Distrobox) is used to access it through a container. And if all else fails, it’s layered through the rpm-ostree command.
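In command form, that order of preference looks roughly like this (the package names are just examples):

```
# 1. preferred: Flatpak
flatpak install flathub org.gimp.GIMP

# 2. otherwise: access it through a container via Toolbx (or Distrobox)
toolbox create && toolbox enter

# 3. last resort: layer the RPM onto the base image
rpm-ostree install htop
```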
> I now understand that using os-tree for some programs is inevitable, and I should embrace it, though still cautiously, to maintain as clean an OS as possible for maximum longevity.
You’re getting the hang of it! Though, I wonder why you weren’t able to rebase to uBlue and had to resort to installing the NVIDIA drivers through RPM Fusion instead. It’s fine as long as it works, but I imagine some issues might arise eventually. So consider sharing the steps you took so the community can help out, perhaps even over on the uBlue Discord. You could also just share them here if you’d like.
I think the key is restarting at every step it asks you to, and maybe after anything that seems major or is a prerequisite for another set of program installs. I mean, I got a black screen the first time, but after a hard reset, it just worked.
No doubt uBlue is probably a lot easier. I did not realize I could have just downloaded the ISO instead of trying to rebase, but I like what I am running.
Anyways, doing it the hard way is helping me learn the intricacies of an immutable system, so I am having fun.
Indeed, in your case acquiring uBlue through its ISO was probably the best option; but I’m glad to hear that it worked out in the end!
> Anyways, doing it the hard way is helping me learn the intricacies of an immutable system, so I am having fun.
Well said!
Just in case, consider the following:
Pin your current working deployment with the aforementioned sudo ostree admin pin 0 command. After that, it remains accessible until you unpin it later on. This gives you a known-good deployment to fall back on if all else fails.
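Concretely, something like:

```
# list the current deployments and their indices (0 = first in the list)
rpm-ostree status

# pin that deployment so upgrades and rollbacks never garbage-collect it
sudo ostree admin pin 0

# later, release the pin if you no longer need it
sudo ostree admin pin -u 0
```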
I saw that the image was failing to build, so I took a chance, followed the RPM Fusion guide, and installed the drivers successfully. I am learning to use Toolbx for CLI stuff, but now I am going to learn about Distrobox!!