I recently experienced this. I’m thinking it could be a lot of things, but when I logged into the device through the hypervisor I got a domain controller trust issue error. Can you log in locally to the device and check for errors?
Just pick one and roll with it. Eventually, as you use it, you might want to switch, so do that if you want. Distro choice just comes down to preference, so find one you like. At the end of the day they’re all Linux.
Back in the day X was a great protocol that reflected the needs of the time.
Applications asked it to draw some lines and text.
It sent input events to applications.
People also wanted to customize how their windows were laid out more flexibly. So the window manager appeared. This would move all of your windows around for you and provide some global shortcuts for things.
Then graphics got more complicated. All of a sudden the simple drawing primitives of X weren’t sufficient. Beyond lines, text, and rectangles, applications wanted gradients, rounded corners, and rich graphics. So instead of using all of those fancy drawing APIs, they just uploaded big bitmaps to the X server. At this point a third of what the X server had been doing became obsolete.
Next people wanted fancy effects and transparency (like drop shadows), so window managers started compositing the display. This is great, but now they need more control than just moving windows around, because windows may be warped, rendered somewhere slightly different, or shown on a different workspace. So now all input events go first from X to the window manager, then back to X, then to the application. Output also needs to be processed by the window manager, so it is sent from the client to X, then to the window manager, and then the composited result goes back to X. So another third of what X was doing became obsolete.
So what is the X server doing now?
1. Outputting the composited image to the display.
2. Receiving input from input devices.
3. Shuffling messages and graphics between the window manager and applications.
It turns out that 1 and 2 have gotten vastly simpler over the years and can now basically be solved by a few libraries. 3 is just overhead (especially if you are trying to use X over a network, because input and output each have to make multiple round trips).
So 1 and 2 turned into libraries and 3 was simply removed. This basically made the X server disappear: now the window manager reads input and displays output directly, usually via common libraries (libinput for input, the kernel’s DRM/KMS interface for output).
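To make “input became a library” concrete, here’s a rough Python sketch using the python-evdev bindings to read events straight from the kernel, the same /dev/input devices a compositor sits on top of. Real compositors use libinput rather than raw evdev, and running this needs root or membership in the input group, so treat it as an illustration only:

```python
# Illustration: read raw input events from the kernel's evdev interface.
# Compositors actually use libinput, which layers device quirks and
# gesture handling on top of these same devices.
import evdev

# Enumerate every input device the kernel exposes
devices = [evdev.InputDevice(path) for path in evdev.list_devices()]
for dev in devices:
    print(dev.path, dev.name)

# Follow the first device's event stream (blocks forever)
for event in devices[0].read_loop():
    if event.type == evdev.ecodes.EV_KEY:
        print(evdev.categorize(event))  # key press/release with keycode
```

Note there’s no X server anywhere in that loop; the “receiving input” third of X really is just a library call away now.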
Removing the X server is a breaking change, so it was also a great time to rethink a lot of decisions. Some of the highlights:
Accessing other applications’ information (output and input capture) requires explicit permission. This is a key piece of sandboxing applications.
Organize the system around frames to avoid tearing except when desired (X doesn’t really have a concept of a frame).
Remove lots of basically unused APIs, like server-side fonts, drawing primitives, and many others.
So the future is great: simpler, faster, more secure, and more extensible. However, getting there takes time.
Progress was also slowed by some people resisting features that X had (such as applications being able to position their own windows), and with a few gaps like that it can be impossible to make a nice port of an application to Wayland. Over time, though, these features are being added, and these days most applications have good Wayland support.
I use FreeCAD and Assembly3 for everything and have for many years now. I sometimes use realthunder’s fork of FreeCAD, but right now it’s quite a bit behind upstream, and there are some cool new features in the sketcher, so I use upstream for those.
Some people get confused about workflow in FreeCAD because there are so many options, and every YouTube video has different opinions or tries to feature a particular workbench like Curves or something. My opinion: starting out, your workflow should be to ignore everything else and use Part Design and sketches, since it’s the simplest way (there’s a minimal scripted version of the core steps after the list):
0. Enable autosave with a short interval, like 2 min.
1. Switch to the Part Design workbench.
2. Create a body.
3. Create sketches as the base of the features of your part, attached to the XY, XZ, and YZ planes; offset them to create a “wireframe” that resembles your project.
a. Your sketches should be fully constrained.
b. Your sketches should have as little geometry in them as possible; if you need more complex stuff, make more sketches.
c. Your sketches should have closed wires; you can’t pad something that doesn’t create a face.
4. Use pad, pocket, revolution, loft, and hole operations on those sketches to form a 3D solid.
5. If you need to create additional sketches which import geometry from previous operations (using the external geometry tool), import SKETCH geometry from the previous ops, not edges of solids, whenever possible. Hide your solid, unhide your sketch, and select that with the external geometry tool.
a. Use sketch-on-face sparingly.
6. Do fillets and chamfers last; if you need to change something, delete them and recreate them once you’ve made your changes.
7. To make multiple parts, make multiple bodies with the same workflow as above.
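If seeing those steps as a script helps, here’s a minimal sketch of steps 2 to 4 run from FreeCAD’s Python console. The names (“Demo”, “BaseSketch”) are made up, and the attachment property is called Support instead of AttachmentSupport on FreeCAD 0.21 and older:

```python
# Minimal Part Design workflow: body -> fully constrained sketch -> pad
import FreeCAD as App
import Part
import Sketcher
from FreeCAD import Vector

doc = App.newDocument("Demo")

# step 2: create a body to hold the features
body = doc.addObject("PartDesign::Body", "Body")

# step 3: a sketch attached to the XY plane
sketch = body.newObject("Sketcher::SketchObject", "BaseSketch")
sketch.AttachmentSupport = (doc.getObject("XY_Plane"), [""])  # .Support on <= 0.21
sketch.MapMode = "FlatFace"

# one closed wire, fully constrained: a circle locked to the origin
c = sketch.addGeometry(Part.Circle(Vector(0, 0, 0), Vector(0, 0, 1), 10), False)
sketch.addConstraint(Sketcher.Constraint("Coincident", c, 3, -1, 1))  # center on origin
sketch.addConstraint(Sketcher.Constraint("Radius", c, 10.0))

# step 4: pad the sketch into a 3D solid
pad = body.newObject("PartDesign::Pad", "Pad")
pad.Profile = sketch
pad.Length = 5.0
sketch.Visibility = False
doc.recompute()
```

That gives you the same Body > BaseSketch > Pad tree you’d get by clicking through the workbench.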
Once you get pretty good at making static parts with constrained geometry, holes, threads (with the hole function), etc., which you can do with only the stuff above, then you can branch out into other workbenches like assemblies or curves. All of those build on the concepts above, so it’s easy to get overwhelmed if you try to do it all right from the start. Learning how to recover from a mistake is just part of CAD in general, though I admit it’s a bit more effort to find what’s wrong in FreeCAD vs commercial platforms, but we aren’t here, on Lemmy, in a Linux community, to use commercial platforms.
AFAIK that’s pretty much the same workflow as F360 uses for single-solid parts, though things have different names: pad = extrude, for example.
It’s obviously far from perfect, but in my opinion it’s the best solution that runs natively on Linux and is actually open source. Also, Assembly3 uses SolveSpace as its backend solver, so if you make assemblies using that you are kind of using SolveSpace too.
Also, I hear/read a lot of complaining about instability, but I’ve honestly never had a crash that wasn’t on an experimental branch like RT’s or the edge release of upstream. Either way, step 0 above should help if you’re worried about that.
First of all: did you run apt dist-upgrade as well? If I remember correctly, that is a required step when upgrading to a new Debian release.
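For reference, assuming your sources.list already points at bookworm, that step is roughly:

```
sudo apt update
sudo apt dist-upgrade
```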
If that doesn’t help, you could check whether your nvidia-detect package is the version that ships with Debian 12.
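A quick way to check (apt policy shows the installed version next to the candidate from your configured repos, so a mismatch stands out):

```
apt policy nvidia-detect
```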
If neither of these steps helps, you could disregard nvidia-detect and try the steps listed in the following link. It seems the firmware was moved to a separate repository compared to Debian 11, and you might need to add that by hand: wiki.debian.org/NvidiaGraphicsDrivers#bookworm-52…
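If I remember the wiki right, the gist for bookworm is to make sure the new non-free-firmware component is enabled (the mirror below is just an example, yours may differ) and then pull the driver and firmware packages in again:

```
# /etc/apt/sources.list
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
```

```
sudo apt update
sudo apt install nvidia-driver firmware-misc-nonfree
```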
Thank you very much for your answer. I was not aware that dist-upgrade is required now, so I did that, but unfortunately nothing changed after a reboot. I reinstalled nvidia-detect to see if that caused any issues, but that did not seem to be the case. Your last step I had actually already done some time ago, and I tried to do the same now. Unfortunately that also did not seem to fix the problem. The NVIDIA graphics settings software is still installed, but it only shows some very limited control options compared to how it used to be. This is what that program looks like now:
I had a 1650 until recently. I thought Wayland was just buggy as hell, but as soon as I put in an AMD card it was smooth as butter. I know it’s not always an option, but in my area cards sell locally for $80, specifically the RX 5500 and RX 580.
Yeah. I actually wanted to put an -nvidia hate- comment at the bottom, but thought it’d be too much. I do actually have some money lying around, so if I switch, it’ll be an AMD. I currently have the classic GTX 1060 in my machine. How did you feel your performance change, if at all?
Linux Mint Debian Edition!! It’s simple, already set up, easily extensible, and based on one of the most popular distributions. You can always find out how to do something on Debian, and it won’t break on you.
Snap being partly proprietary while also being forced on Ubuntu users leads me to avoid Ubuntu-derived distros. Plus, my philosophy when it comes to Linux is that you want to stay close to a distro’s upstream, so I only really recommend the big ones like Debian, Fedora, Arch, or openSUSE. The fewer layers of maintainers, the better, essentially.
I get your point. But Linux Mint does not have Snap by default, so that does not really apply.
I’d still recommend the normal Ubuntu-based one, since there is so much easily available help out there for any Ubuntu-based system.
The Debian edition is (IIRC) just there in case Ubuntu becomes unsuitable as an upstream in the future. I would treat it as a safe backup option, not a primary choice, and definitely not something I’d recommend to beginners.