I used it at work recently to update my work-provided HP Thunderbolt dock, and it resolved an issue where the external monitors would fail to activate after resuming from standby. I never got an update notification when I was using my Windows laptop, so I was oblivious to it; it was only thanks to connecting it to my Linux laptop and fwupd that I found out there was an update, which subsequently resolved the issue.
I love it when stuff like this happens and Linux saves the day. =) (and I get to show off to my Windows-heathen colleagues.)
I don’t know about you, but I want companies to take self-hosted and FOSS solutions seriously. The fact that they want to work with him is a major step in the right direction. It would be dumb to discourage companies from supporting FOSS.
Are they supporting FOSS, or looking to buy out the project to make it a closed in-house solution and avoid the bad publicity they created this last week?
On Lemmy any comment you post gets federated out to other servers, so it’s available to anyone who sets up a server. So by design it is not possible to control who gets to see or archive your comments. I could set up a server to permanently archive every comment it sees, and if your server sends me your comment it goes into my archive. Probably people are already doing this for data mining. It’s not clear that you could bolt some kind of privacy control on to this architecture, which is fundamentally designed for sharing.
Although I agree that is how things work now, one could imagine a different approach:
For instance, I could maybe control who my content gets federated to. That is, if I decided I didn't particularly want my content blasted to certain places, my instance would not send my data to any instances I had blocked.
If that causes some issues with ActivityPub, you can imagine encrypted blobs that could only be opened by others with a shared key.
We don't need to achieve perfection out of the gate. To me these questions are worth discussing so that we can build more high-quality tech for the fediverse; let's not try to immediately shut down discussion.
How would you ensure other instances are not sharing your content?
To me this seems to be a question of ideology. I came here from Reddit because this is an open forum with transparent history.
Federation by design ensures that accessibility (as far as I understand, correct me if I’m wrong). This design principle, to me, is the core. If that seems like an issue, maybe this style of social media is not for you.
In this context, it’s an open public digital space. No one is obligated to share anything.
The part that is discussed as a privacy issue is a design element. It is by design that posts are visible to everyone, and by design that comments are visible to everyone.
How is it a privacy issue when the user decides what to post for everyone to see?
If you are looking for a different design ideology then maybe you need a different social media platform.
So regarding an open, public digital space like Twitter, how do you feel about people having the ability to lock their accounts and instantly hide all their tweets from the public?
Mastodon doesn’t have that, but it could.
My reaction to adding something like that will always be “that would be rad” regardless of previous assumptions about how public an app should be, or truisms like “the Internet is forever”, because I believe strongly that trying to fix issues is better than letting them languish unchecked.
I’ve never been on Twitter. Besides Reddit, I really disliked all the other main platforms. So answering your question: I don’t care; it’s a different platform for a different style of social media interaction.
the Internet is forever
My position has nothing to do with this sentiment. The Internet forgets, and often.
I like the federated nature of Lemmy, and I like that there are no “private” accounts. This is a feature, not a bug.
I’m not trying to argue against privacy, but what you are describing isn’t a privacy issue, or an issue at all. It’s a design element. And this design is why I like it here.
As someone here has said, at some point the responsibility has to fall on the user. You don’t need to share anything. As long as the nature of the platform is clear (and that’s a separate discussion), there is no issue to be fixed.
If that seems like an issue to you, well then maybe you are in the wrong place. And if the platform changes in a direction I don’t agree with, I will leave.
For example, privacy settings on Facebook are available to all registered users: they can block certain individuals from seeing their profile, they can choose their “friends”, and they can limit who has access to their pictures and videos.
We were talking about the definition of privacy, and I was giving an example to bolster my definition of it. We can switch to a different topic if you want, but first I want to cement this definition.
Does this really bother folks that much? I’ve never cared much about this kind of leeching as long as it isn’t dominating my personal bandwidth. And it never has so far, though I have to admit my 4ish tb of movies isn’t exactly high demand stuff, so I don’t usually have more than one or two at a time being leeched.
Hell, I don’t really care much about leeches anyway, as long as the overall ability to find things is still there. I look at it as the price of the freedom of torrents. There’s always going to be higher demand for data than there are people with the resources/time/willingness to seed heavily. I’m okay with that because I don’t really want to have to keep and seed every fucking thing out there. It would take much more storage than I can afford to keep a ratio on most private trackers, since I tend not to keep shit I don’t intend to watch at least once a year unless it’s something pretty damn obscure.
It does dominate my personal bandwidth. You’ll notice if one of your torrents is on their app. You’ll be uploading all day and it’ll fill half your peers list.
As someone that uses Stremio quite a lot, that’s sad but understandable to hear. I do try to make up for my leeching habits by keeping every normal torrent seeding indefinitely.
I actually see a legitimate use case for it and helped add the actions version in a project where I'm a collaborator.
Quite a bit. Certain bugs disappear after an update without us targeting them (partly because the logs get fudged a bit after going through dependencies, so sometimes multiple bugs have the same cause, or it's actually a dependency issue that got fixed), and sometimes we forget about old feature requests.
The stale reminder doubles as a reminder for us to (re)consider working on the issue. When we know something probably isn't gonna get fixed suddenly, we apply a label to the issue. For enhancements that we'll definitely work on soon™, we apply "help wanted". We've configured the action to ignore both. We also patrol notifications from the stale bot to see if something shouldn't go stale. This is a medium-sized project, so we can handle patrolling, and IMO this helps us quite a bit.
Fair enough; I didn’t consider artifacts like logs and traces. I suppose a stale marker might prompt the original reporter to retest and supply fresh ones (or confirm it’s fixed in the dependency case).
In an ideal world I suppose we’d have automated tests for all bug reports but that’s obviously never going to happen!
It amazes me that all it takes is changing the user agent to Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) and it can bypass paywalls on many sites? I thought those sites would try harder (e.g. checking if the IP address truly belongs to Google), but apparently not.
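The trick the comment describes really is just one header. A minimal sketch in Python (stdlib only; the function name is mine, and whether a given site serves full articles to this user agent is obviously not guaranteed):

```python
import urllib.request

# The spoofed Googlebot user agent string quoted in the comment above.
GOOGLEBOT_UA = (
    "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/W.X.Y.Z Mobile Safari/537.36 "
    "(compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

def fetch_as_googlebot(url: str) -> bytes:
    """Fetch a URL while presenting the Googlebot user agent.

    Sites that gate content only on the User-Agent header will serve
    the same HTML they serve to Google's crawler.
    """
    req = urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()
```

That's the whole "bypass": no cookies, no JavaScript tricks, just a string the server chooses to trust.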
Same. I thought there would be more stuff happening in the background, but when I saw it’s just sending the Googlebot headers to get the HTML, I was a bit disappointed it’s so stupidly easy.
Checking ip ownership is a moving target more likely to result in outcomes these sites don’t want (accidentally blocking google bots and preventing results from appearing on google).
Checking the user agent is cheap, easy, and unlikely to break (for this purpose, anyway), and the percentage of folks who know how to bypass the check is relatively slim, with a pretty small financial impact.
Google literally has an official list of IP ranges for their crawlers, complete with an API that returns the current IP ranges that you can use to automate a check. Hardly a moving target, and even if it is, it doesn’t matter if you know exactly where the target is at all times.
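For reference, the server-side check being described isn't much code either. This sketch (Python stdlib only) assumes the JSON layout of Google's published googlebot.json, a `prefixes` list of objects with `ipv4Prefix`/`ipv6Prefix` keys, which matches the documented format but should be verified before relying on it; the function names are mine:

```python
import ipaddress
import json
import urllib.request

# Google's published list of Googlebot IP ranges (assumed URL/format; see
# Google's crawler verification docs for the authoritative source).
GOOGLEBOT_RANGES_URL = (
    "https://developers.google.com/static/search/apis/ipranges/googlebot.json"
)

def load_googlebot_networks(url: str = GOOGLEBOT_RANGES_URL):
    """Download and parse the published ranges into network objects."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    networks = []
    for prefix in data.get("prefixes", []):
        cidr = prefix.get("ipv4Prefix") or prefix.get("ipv6Prefix")
        if cidr:
            networks.append(ipaddress.ip_network(cidr))
    return networks

def is_googlebot_ip(ip: str, networks) -> bool:
    """True if the client IP falls inside any published Googlebot range."""
    addr = ipaddress.ip_address(ip)
    return any(addr.version == net.version and addr in net for net in networks)
```

A site could refresh the list on a schedule and reject self-proclaimed Googlebots from other addresses, which is exactly why "moving target" is a weak excuse.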
The spacing in the email screwed up the formatting:
Dear Andre,
I’m Gianpiero Morbello, serving as the Head of IOT and Ecosystem at Haier Europe.
It’s a pleasure to hear from you. We just received your email, and coincidentally, I was in the process of sending you a mail with a similar suggestion.
I want to emphasize Haier Europe’s enthusiasm for supporting initiatives in the open world. Please note that our IOT vision revolves around a three-pillar strategy:
1. achieving 100% connectivity for our appliances;
2. opening our IOT infrastructure (we are aligned with Matter and extensively integrating third-party connections through APIs, and looking for any other opportunity it might be interesting);
3. enhancing consumer value through the integration of various appliances and services; as an example, we are pretty active in energy management, opening our platform to solutions which are coming from energy providers.
Our strategy’s cornerstone is the IOT platform and the HON app, introduced on AWS in 2020 with a focus on Privacy and Security by Design principles. We’re delighted that our HON connected appliances and solutions have been well-received so the number of connected active consumers is growing day after day, with high level of satisfaction proven by the high rates we receive in the App stores.
Prioritizing the efficiency of HON functions when making AWS calls has been crucial, particularly in light of the notable increase in active users mentioned above. This focus enables us to effectively control costs.
Recently, we’ve observed a substantial increase in AWS calls attributed to your plugin, prompting the communication you previously received as standard protocol for our company, but as mentioned earlier, we are committed to transparency and keenly interested in collaborating with you not only to optimize your plugin in alignment with our cost control objectives, but also to cooperate in better serving your community.
I propose scheduling a call involving our IOT Technology department to address the issue comprehensively and respond to any questions both parties may have.
Hope to hear back from you soon.
Best regards
Gianpiero Morbello Head of Brand & IOT Haier Europe
Those didn’t completely break federation, they just had some issues with a few services besides lemmy. They’re addressed now, but federation compatibility will always be an ongoing task as new services get added and existing ones change their activitypub responses.
As far as I’m aware the most widely-accepted standard for responsible disclosure is 90 days. This is a little different, since that’s normally between businesses and includes the time needed to develop a solution; it’s not typically aimed at federated or self-hosted applications rolling out an already-created patch. On the one hand, granting them that extra time to upgrade seems reasonable. On the other, wouldn’t anyone wanting to exploit a vulnerability be able to reverse-engineer it pretty easily by reading the git history?
The 90 days disclosure you’re referencing, which I believe is primarily popularized by Google’s Project Zero process, is the time from when someone discovers and reports a vulnerability to the time it will be published by the reporter if there is no disclosure by the vendor by then.
The disclosure by the vendor to their users (people running Lemmy instances in this case) is a completely separate topic, and, depending on the context, tends to happen quite differently from vendor to vendor.
As an example, GitLab publishes security advisories the day the fixed version is released, e.g. …gitlab.com/…/critical-security-release-gitlab-16….
Some vendors will choose to release a new version, wait a few weeks or so, then publish a security advisory about issues addressed in the previous release. One company I’ve frequently seen this with is Atlassian. This is also what happened with Lemmy in this case.
As Lemmy is an open source project, anyone could go and review all commits for potential security impact and to determine whether something may be exploitable. This would similarly apply to any other open source project, regardless of whether the commit is pushed some time between releases or just before a release. If someone is determined enough and spends time on this they’ll be able to find vulnerabilities in various projects before an advisory is published.
The “responsible” alternative here would have been to publish an advisory at the time it was privately disclosed to admins of larger instances, which was right around the Christmas holidays, when many people would already be preoccupied with other things in their lives.
Lemmy is, like a lot of Fediverse platforms, about as private as it can be. There are no trackers, you’re not forced to use real names or any other identifying information, no adverts follow you from site to site, there’s no browser fingerprinting, and no instance owners are trying to sell your data.
Beyond that, what you choose to say on Lemmy is your responsibility and yours alone.
After an extremely long week, I sometimes participate in open source. I have to deal with malicious commits. I have to follow up on issues from misguided individuals who are actually looking for tech support. I have to guide new contributors through how this massive repo works and how to submit tests. I have to negotiate with the core team, and these convos can often last months/years.
And contributing to open source is one of the few things that gives me pleasure, even if it’s an extremely thankless job.
But I’m tired man.
I’m not dealing with low-quality memers who are providing zero value. Nor should we encourage it.
I do FOSS as well, but I’d rather people have fun punting the stalebot than just keep repeating “this issue still exists”. I will probably get a chuckle out of it.
I don't know about the creators of this project, but in general: So that they can use the stuff in their closed source applications while finding enough contributors to write software for them for free.
After reading your link, they can absolutely be used interchangeably in a comparison with copyleft licenses. Your own link says that they are very similar.
For some software, where EEE tactics aren’t a concern but corporate adoption matters, these licenses make perfect sense. However, that’s not the case here: an OS is a prime target for EEE.
Because I like the 2-clause BSD license. I am not a fan of “copyleft” or forcing obligations on people in general. I want my software to be available for anyone who wants to use it.
He missed the entire point of copyleft which is a bit disappointing.
Ah well, at least it is libre. I respect his choice in the end, as pressuring or forcing someone to use a copyleft license is just as bad as proprietary software.
The GPL is a better choice if you want to make money from the software. With a pushover license, your competitors can extend the program and profit from it in a way you can’t because they aren’t required to give the changes back. The GPL evens the playing field. Of course, you often see the original company requiring a CLA so they retain copyright over all of the code.
On the other hand, it does enable possibilities that you would be very unlikely to get otherwise. For example, Cedega (formerly WineX) forked Wine when it used a pushover license and brokered deals with game companies to make the DRM compatible with WineX/Cedega. That meant you could play these games on Linux-based OSes with Cedega, but not Wine. I really wonder if it would have been possible to make Wine compatible with some of these DRM schemes otherwise. Consequently, however, Cedega could not incorporate any changes from LGPL’d Wine, as that would have required them to license Cedega under the LGPL, too.
That’s another issue. You can incorporate MIT-licensed software in GPL software, but you can’t incorporate GPL software in MIT-licensed software. So going with the GPL gives you more options. As SerenityOS is building everything from scratch, this isn’t an issue, but you can well see how it could be. The LGPL is far less disruptive to people who want to release their software under a pushover license. It only requires you give back any changes to the LGPL-licensed part, and does not cover other parts of your program. Personally, I really like the LGPL. It levels the playing field while being quite compatible. It’s not perfect either, of course.
It’s a tricky question, and there are no right answers. Ultimately, the decision is up to the developer and I can’t fault any choice, including the decision to use a proprietary license.
I personally won’t use any proprietary software, and I especially won’t use any DRM. The purpose of the GPL isn’t to force companies to pay up to get out of copyleft. The purpose is to keep the code free no matter what, so that people can control their own computing.
That’s also my preference, but very few games are free software. And most of the games I want to play are encumbered with DRM or cost ten times as much to get DRM-free. Of course, I buy them DRM-free because the DRM doesn’t work with Wine, but if it worked with Cedega…well, I might re-evaluate.
The purpose of the GPL isn’t to force companies to pay up to get out of copyleft.
That’s why it was created, but in practice, many companies make money by selling exceptions. See Cal.com and CKEditor5, for instance. I didn’t mention this at all in my comment, though, so I’m not quite sure which part you’re responding to. By “level playing field”, I meant that everyone can improve Sourcehut and sell a service with more features, but they need to release those new features under the same license, meaning they will make it back to Sourcehut proper. Selling exceptions isn’t the only way to make money from free software.
Just to get it out there… I checked this out about a year ago. It’s not completely open source. The project consists of many executables and “pre-compiled dependencies” that don’t appear to have matching checksums, which may indicate modifications of some sort. Looks like a great tool, but I’m extremely skeptical of what’s going on under the hood.
Hopefully they do truly open source it and prove me wrong, I’d love to give it a try some day.