I have a very unusual workflow. In addition to not stacking windows, I don’t minimize them either. Instead, I spread them out over many workspaces. Per workspace, I usually only have 1 or 2 windows, but I ‘group’ workspaces to keep semantically related windows together.
I do that by keeping all workspaces in a column, placing related windows in neighboring workspaces, and leaving empty workspaces between the groupings. I also have a minimap of my workspaces in my panel to keep track of all of this.
I like this workflow a lot, because it maps semantics to location. It feels like a desk where you just place related documents next to each other and might place some documents more in the middle, others in a faraway corner.
This is in contrast to the traditional Windows workflow or the workflow that many tiling folks use, where the first workspace is for web browsing etc…
Those use groupings based on the kind of task you do in them (often effectively acting as tabs for a single application), like web browsing. They don’t group by topic; e.g. you might frantically research ants and use a separate browser window, a separate text editor, etc., all grouped together for the ants.
Now, traditional use of workspaces does allow this grouping by topics, by just assigning each workspace a topic. But personally, I found that too static.
Like, yeah, I have some larger, completely distinct topics, but often I’ll just quickly research bees and that’s kind of ant-related, but doesn’t need to be fully mixed with that either. I’d rather just place it to the side of the ant stuff.
I do this too, but additionally group these outputs strategically on my 4 displays. I never thought of it like a desk with papers on it but that’s very much what it is. And also how I organize papers on the few occasions that I do that.
That’s pretty much what I do as well. It was an absolute game-changer for me when I discovered tiling WMs some ~7 years ago, because it meant super consistent keyboard shortcuts for getting to exactly what I wanted to interact with. I know where individual apps/tasks go, so I put them there. And then when I need to switch to them, it’s as straightforward as Super+[workspace].
Also helps a ton that i3wm’s workspaces only take up a single monitor at a time, which makes it excellent for jumping between monitors.
None of this is set in stone, but I usually follow a relatively consistent pattern:
Center Monitor
1: Primary/“serious tasks” web browser
4: Any remote or virtualized desktop I might have open at the time
6: Image/video editors. Also sometimes just misc usage.
8: Development web browser next to neovim
9: Steam/games
10: Misc. Often a DBMS or file manager
11: Misc. Often where I put any secondary tasks or second projects I need to reference
12: Misc. Often where I’ll stick any long-running tasks that I just need to check on every now and again.
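In case anyone wants to replicate this, here’s a rough sketch of the kind of i3 config that gives you this setup. The output names (DP-1, HDMI-A-1) and the workspace-to-monitor split below are placeholders for illustration; substitute whatever xrandr reports for your monitors.

    # Super as the modifier, direct jumps with Super+number
    set $mod Mod4
    bindsym $mod+1 workspace number 1
    bindsym $mod+8 workspace number 8
    bindsym $mod+9 workspace number 9

    # Pin workspaces to outputs so each workspace only ever occupies one monitor
    workspace 1 output DP-1
    workspace 8 output DP-1
    workspace 9 output DP-1
    workspace 11 output HDMI-A-1
    workspace 12 output HDMI-A-1

    # Send the focused window to a workspace without following it
    bindsym $mod+Shift+8 move container to workspace number 8

Because each workspace is pinned to an output, switching to a workspace also focuses the monitor it lives on, which is what makes jumping between monitors a single keypress.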
I don’t do that (again, too static for me), but I still have larger meta-workspaces, which group 20 workspaces each into very big, very distinct topics like “Orga” and “Work”.
I just use Super+p to run commands. Awesome and custom keybindings are for easily moving between tags, windows, and monitors, not for launching programs. I use nvim for coding, and combined with Awesome that means I can do a lot without touching my mouse. At work I use Cinnamon and IntelliJ tools and it’s just less ergonomic. Not a huge difference, but I definitely prefer my home setup. In general, all the Linux WMs I’ve used over the years were easy to configure and gave a good experience. The worst environment I ever had to use was OS X. I just hated all their weird solutions like the launch bar and the common menu bar on top. On Linux I never had any issues.
25 years ago I worked at a university computer lab that was Windows-heavy because Dell wouldn’t stop donating PCs. However, we didn’t have enough UNIX workstations, as we had to pay for Sun / HP / IBM out of pocket. Converting some of the Dells to Linux workstations would be nice, since they had more grunt than the aging RISC workstations.
I proposed to switch a few desks worth to Debian and was given the go-ahead. After a few days learning how to preseed an installation image and getting a PXE server going I had 8 machines running CDE just like the AIX and HP/UX boxes. Users that didn’t need one of the commercial engineering applications tied to one OS or another didn’t notice any difference between the free (now as in both speech and beer) Dells and the proprietary workstations.
A couple of months after we got the pilot rolling, the university’s IT director came to check it out and told me we’re on the “lunatic fringe” for deploying an OS developed by volunteers, but otherwise offered approval as long as we could maintain security and availability.
Now every student in our local school district gets issued a Chromebook running Linux under the hood. Who’s the lunatic now?
The expansion of the Internet has witnessed a resurgence of the gift economy, especially in the technology sector. Engineers, scientists, and software developers create open-source software projects. The Linux kernel and the GNU operating system are prototypical examples of the gift economy’s prominence in the technology sector and its active role in using permissive free software and copyleft licenses, which allow free reuse of software and knowledge.
Essentially the line of thought is that open source software is an example of mutual aid and the gift economy.
And the FOSS system seems to be collapsing right now for the same reason that anarcho-communism only works short-term until someone sees commercial value in it and abuses the system to the limit.
Big corporations initially providing exceptional services based on FOSS and after a while using their market share to exert undue control over the system (see e.g. RedHat, Ubuntu, Chrome, Android, …)
Big corporations taking FLOSS, rebranding it and hiding it below their frontend, so that nobody can interact with or directly use the FLOSS part (e.g. iOS, any car manufacturer, …)
Big and small companies just using GPL (or similar) software and not sharing their modifications when asked (e.g. basically any embedded systems, many Android manufacturers, RedHat, …)
Big corporations using infrastructure FOSS without giving anything back (e.g. OpenSSL, which before Heartbleed was developed and maintained by a single guy with barely enough funding to stay alive, while it was used by millions of projects with a combined user base of billions of users)
The old embrace-extend-extinguish playbook is everywhere.
And so it’s no surprise that many well-known FOSS developers are advocating for some kind of post-FOSS system that forces commercial users to pay for their usage of the software.
Considering how borderline impossible it is for some software developer to successfully sue a company to comply with GPL, I can’t really see such a post-FOSS system work well.
Case in point. You think quoting an argument and sneering is a counterargument. Obviously, because you don’t know the first thing about labor theory of value.
Someone asked if you think capitalists or engineers did the engineering, and you revealed you don’t understand the question.
You are once again building a flawed model of the dynamic at play here in an attempt to ease the discomfort you feel from encountering something that doesn’t make sense to you (why did I choose to join this community?). I’m not even attempting to build any counterarguments because the responses I’ve gotten don’t even attempt to understand what I’ve said in the beginning. To be utterly frank I just lack respect for people who think of themselves as any flavour of anarchist while still dreaming of a system as thoroughly rigid as the artificially created Internet. You pretend to hate the system while desperately trying to invent excuses for continuing to make yourself at home within it.
No dude, you demonstrably said ‘I’m going to repeat your argument so you can think about it.’ Projecting some emotional state onto me is not gonna change how you fucked this up.
This is mockery. I am calling you ignorant.
I am trying to highlight how you joined an explicitly leftist server, whilst remaining aggressively unaware of… genuinely the first things people learn about leftism. So when you try smugly posturing your way out of a pointed question, you’re just revealing you know less than nothing.
To be utterly frank I just lack respect for people who think of themselves as any flavour of anarchist while still dreaming of a system as thoroughly rigid as the artificially created Internet.
Anarchists being naked hippies, of course, not organized laborers. The internet was mostly designed and operated by academics. It runs on half a century of “does this sound right?” collaborative standards. Whatever browser you’re reading this in has its origins in anti-monopolist diehards building better software out of spite.
None of which is even addressing the initial failure. Capital didn’t design your computer. Intel’s founders definitely did, but only because they were workers dissatisfied under Fairchild, who were in turn workers dissatisfied under Shockley. The early history of silicon valley is halfway to semiconductor co-ops.
Well you solved that conundrum rightly. Now let’s go lynch those dirty Apple and John Deere engineers. Since they’ve designed those machines, they must be the only parties responsible for designing them with their extreme anti-consumer and anti-repair policies. They must get commissions on every licensed repair or something; it’s definitely got nothing to do with capitalists putting restrictions on the design team in order to increase profits, nope…
You’re completely off on what I’m getting at. The idea of “Capitalist” hardware, as though the Capitalist did the labor, is wrong. Engineers are paid for their labor power, they don’t typically get royalties or anything of the sort, just like any other laborer.
Someone saying that FOSS software relies on Capitalist hardware is putting the Capitalist over the Engineer, as though the Capitalist created the hardware, and not the labor of the miners, assemblers, designers, engineers, and so forth. Regardless of who owns the Capital, the labor is done by the Workers. FOSS is agnostic to whoever owns the Means of Production of the hardware using or producing it.
Amazing how every single part of your comment is so wrong.
It’s actually a really good analogy,
Not an analogy, an example. Those two are different things.
because it can only run on
No, it can run on many things, including open source collaborative hardware that exists.
fully-capitalist hardware.
What the hell even is that? Fun fact: until very recently most of the computer hardware was made in communist China. I know, scary. And now that a lot of effort is being made to get that production out of there, those efforts are being sponsored by public money to an incredible degree. Billions of dollars of taxes (you know, community resources) are being poured into that because big corporations are the biggest lovers of government handouts.
TBF the error can become that big if you do a bunch of unstable operations (i.e. operations that continue to increase the relative error), though that’s probably not what is happening here.
To get to 0.01 error, you’d need to add up trillions of trillions of floating point errors. It will not happen solely because of floating point unless you’re doing such crazy math that you shouldn’t be using primitives in the first place.
As the answer in the link explains, it’s adjustment of your scaling factor to the nearest whole pixel, plus a loss of precision rounding to/from single/double floating point values.
Gnome is coded with JavaScript (lmao 🤣) so yeah, I think you are right.
EDIT: Actually, even if JavaScript and other languages have this issue, the value 1.7518248558044434 isn’t caused by it. There is another reply that explains it and makes total sense. But it’s still pretty lame to know the desktop runs on JavaScript. (Yeah, I hate Gnome)
It’s not a “language” issue; it’s a “computer” issue. This math is being done on the CPU.
IEEE 754
Some languages do provide for “arbitrary precision math” (Java’s BigDecimal for example) but it’s slower to do that. Not what you want if you’re multiplying a 4k matrix every millisecond.
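For anyone wondering what that looks like in practice, here’s a tiny C sketch (C only because it makes the types explicit; JavaScript, Python, etc. behave the same way for doubles). It shows that binary floats can’t represent most decimal fractions exactly, and how much less precision a single-precision float carries than a double:

    #include <stdio.h>

    int main(void) {
        double a = 0.1 + 0.2;           /* neither 0.1 nor 0.2 has an exact binary representation */
        printf("%.17g\n", a);           /* 0.30000000000000004 */
        printf("%d\n", a == 0.3);       /* 0 -- the comparison fails */

        float  f = 0.1f;                /* ~7 significant decimal digits */
        double d = 0.1;                 /* ~16 significant decimal digits */
        printf("%.17g\n", (double) f);  /* 0.10000000149011612 */
        printf("%.17g\n", d);           /* 0.10000000000000001 */
        return 0;
    }

Arbitrary-precision types like BigDecimal sidestep this by storing decimal digits, at the cost of doing the math in software instead of on the FPU.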
And Gnome is far from the only desktop that uses JS; KDE Plasma, for example, also uses a lot of JavaScript.
It’s weird when people bash Gnome for using JS, when practically everybody else uses it a lot too. Shows that they’re just regurgitating “Gnome = bad!!!” nonsense.
We get it, you think disliking Gnome is a quirky, edgy personality trait.
Mostly C, but since you need to type more C code to do the same thing as in JavaScript, I suppose most of the logic is in JavaScript. The Plasma desktop has 2% JavaScript (invent.kde.org/plasma/plasma-desktop); it’s not comparable. 🙂
There’s a lot more to your UX than just the Plasma desktop. And you’re also trying to pass off Gnome’s shell as being Gnome desktop. Pretty disingenuous.
Using JavaScript isn’t inherently a bad thing. JavaScript can be very useful when used for scripting. Obviously anything with a need for performance will be done in C.
JavaScript isn’t the best language for building a desktop interface, in my opinion. It can be quite efficient, but you could see in bug reports (at least in the past) how bad the performance was, and they needed to refactor things, either replacing them with C or improving the JavaScript. I’m just laughing and making fun of it using JavaScript, not saying it is slow; Gnome is pretty fast nowadays.
There is less than 4% more C code than JavaScript. That’s a lot; many features of the gnome desktop use JavaScript too, like gestures and mouse events.
Well, I started this thread saying it runs on JavaScript, and I mean that they need JS for most of the interactions with the desktop, like gesture or mouse events. 😞 Even if most of the code is C, we all know you need to write many more lines of C to do the same thing as in JS, so most of the logic in GNOME is handled by JS. We need some Rust here. 🦀 🦀 🦀 🦀
You don’t get to decide what counts as too much JS in the project unless you actually work on it and have in-depth knowledge of the project. I don’t like JS, but it has its uses.
Many people are conflating modern electron bloatware with ‘JS bad’, but things are not that simple.
This isn’t only an app issue, it’s the implementation in Mutter.
On KDE for example, I’ve set 150% fractional scaling, and all apps look sharp.
I was really hyped when the recent update introduced “proper” fractional scaling, and was bummed when I noticed it didn’t work in many of my apps, especially Electron ones.
I know some schools in my country use their own Linux distribution alongside Windows. And my organization also has its own Linux distribution, but it’s barely used, really. I don’t know anyone who uses it, but I do know it exists.
It is related to a mix of actual display resolution vs conversions to virtual resolutions (the scaled resolution), and use of single precision floating point calculations.
Essentially, my understanding is that it’s storing the value needed to convert your actual resolution’s number of pixels (2160) to a virtual resolution’s number of pixels (2160/1.75), but that gives you fractions of a virtual pixel. So instead of 1.75 it scales by 1.75182… to land on a whole number of virtual pixels to work with. Then, on top of that, the figure is slightly altered from what we’d expect by floating point error.
If you take the actual resolution, 2160 pixels, and divide it by the virtual resolution it’s trying to use, 1233 pixels, you need a conversion value of 1.75182… so you don’t get fractions of a pixel. If you used 1.75 you’d get 1234.2857… pixels. So gnome is storing the value that gets you a clean conversion to within about 4 decimal places of a pixel.
Full credit to rakslice at Stack Exchange, who goes into the details.
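If you want to see that play out, here’s a small C sketch using the numbers from above (2160 physical pixels mapped to 1233 virtual pixels; exactly where Mutter does the single-precision conversion is an assumption on my part, but the arithmetic reproduces the reported value):

    #include <stdio.h>

    int main(void) {
        /* The factor that maps 2160 physical pixels onto exactly 1233 virtual pixels */
        double exact = 2160.0 / 1233.0;      /* ~1.7518248175 */
        float  as_f32 = (float) exact;       /* squeezed through single precision somewhere */

        printf("exact ratio     : %.16f\n", exact);
        printf("through a float : %.16f\n", (double) as_f32);         /* 1.7518248558044434 */
        printf("plain 1.75      : %.4f virtual px\n", 2160.0 / 1.75); /* 1234.2857, not whole */
        return 0;
    }

So the odd-looking 1.7518248558044434 is just the exact whole-pixel ratio rounded once through single precision, not an accumulation of errors.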
So I did miss that Linus is in the article, but the reference to him says he was awarded the title, which makes it sound like an honour rather than a hierarchical system. I don’t believe that he’s ever been anything other than the project’s owner/founder, but I’m happy to learn if I’m wrong.
Yes, that’s just how open source works. Of course they always serve at the pleasure of the community, otherwise forks would happen. Nobody said otherwise. As the “Usage” section of that article implies, the “benevolent” bit comes from the feedback loop of a happy community supporting their dictator-for-life.
I mean how the community refers to him. I’ve never read a thread where someone called Linus a BDFL; I have with Python. If they do, they do. I just haven’t seen it myself.
Free software doesn’t have owners. If someone else did a better job of being the “benevolent dictator” of a fork of Linux, everyone would start using that fork. Arguably this is a more free-market system than non-free software.
You can fork it, sure Linus is very respected and his decisions are considered very important but you can fork it and change however you want so it’s still compatible with Anarchism.
Linus’ power doesn’t come from Ownership, but respect. Anyone can fork it and do what they want, but because Linus is respected, everyone else follows suit.
Anarchism would function in a similar manner, it wouldn’t be a bunch of opinionated people doing whatever they want, but people generally listening to experts who don’t actually hold systemic power.
Many times what? Most forks die within a few months. Especially for big and well known projects. For example, io.js was a fork of NodeJs. Didn’t last long and was killed by NodeJs. All the Firefox forks are pretty much dead as well. Linux also had plenty of forks by people who disagreed with Linus and where are they now? I bet you don’t even remember their names.
Forks don’t work unless the original project is dead.
So mass adoption is your answer, and I’d say you’re misguided. The purpose of FOSS isn’t to make a profit, but to satisfy uses and needs. If a few people have a need for a fork and use it, then it’s a success.
You’re judging FOSS software by popularity, rather than use, as though it’s for profit.
Most new businesses fail as well. Maybe we shouldn’t be starting new businesses either? Or perhaps this is more about people being unprepared and out of their depth, whether it’s starting a new business or forking a code base, and not about the individual actions themselves.
This is incorrect. It’s true that most (in fact, I would say almost all) forks go nowhere but that doesn’t mean forking isn’t incredibly valuable. Even the example you cite, “original project is dead” isn’t just incidentally useful, it’s critical to open source. Other examples include:
project’s core team is part of a for-profit org that is moving the project in a bad, profit-motivated direction;
project’s leader suddenly and dramatically loses respect (maybe he killed his wife or something);
project’s leader dies without leaving a digital will regarding who controls the core repo;
project continues to direct effort into features while failing to address major security concerns;
project is healthy and useful in every way but there is an important use case not being addressed, and the fork would address it.
Even if 99% of forks fail, that’s irrelevant because 99% of original projects fail in the same ways. Forks are critical to open source.
I would say we should just let unjust societies fail so just ones can take their place, but that seems to be the natural course. We’re seeing that right now.
Nextcloud is a FOSS fork of OwnCloud. Both projects are great in their own way, hugely successful and serve a lot of people very well. They just moved in different directions.
This is just one example of many. Ability to fork is super important to ensure that projects stay open source, like in this example.
I would disagree and say it’s more akin to a philosopher king, hence less anarchy and more monarchy. It’s all good until the king dies; then let’s see who succeeds them.
I meant that as a reply to the second paragraph, which generalised anarchism to include the non-Linux world.
I also disagree that this isn’t an issue in the broader Linux community, however. See, for example, the loud minority with an irrational hatred of quite obviously good software projects like systemd, who got those ideas from charlatans or “experts”.
I know, I used Linux as an example. Just as not everyone needs to be a weatherman to trust weathermen, who can recognize experts among themselves, so too can engineers recognize experts among themselves, and so forth.
Skilled programmers can see that Linus is an expert. It works in tech. It probably works in any professional environment - anywhere where skilled people are picking someone highly skilled.
As for the average person, we have clearly seen that average people suck at picking expert leaders, though it works fine in small groups.
There’s a word for this, the promotion of leaders based on merit instead of popularity - Technocracy. And it’s not a distinct ideology but a syncretic one that has been adopted by many groups with differing politics. The most prominent example would be the Technocratic faction of the People’s Republic of China, which was opposed to the Maoists back in the 50s and 60s; they argued for society to be led by experts instead of Democratically with a strong emphasis on Peasant participation (the standpoint of the Maoists). China today follows a moderate path taking from both factions.
In the West, however, Technocracy is mostly associated with Liberals; however, I would argue that the modern Liberal view of Technocracy is fundamentally flawed, since it relies on Capitalism distributing wealth meritocratically (which Socialists understand is not the case).