Running a local, self-hosted Forgejo with a few runners.
Now my code is neatly checked with pre-commit and linters, builds run when new tags are pushed, Renovate is scheduled every 24 hours to check for new releases of stuff, etc.
The Renovate image has been pulled by hand and the forgejo-runner will happily start it. Both PAT and GITHUB secrets are configured as ‘action secrets’ within the Renovate repository.
Besides the workflow, the repository contains renovate.json and config.js, so Renovate has the correct configuration.
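For anyone wanting to copy the setup, the workflow looks roughly like this (a sketch rather than my exact file; the runner label, endpoint URL, and secret names are whatever your setup uses):

```yaml
name: renovate
on:
  schedule:
    - cron: "0 2 * * *"                 # once every 24 hours

jobs:
  renovate:
    runs-on: docker                      # whatever label your forgejo-runner registered with
    container:
      image: renovate/renovate:latest    # the image pulled by hand
    steps:
      - uses: actions/checkout@v4        # so Renovate can pick up config.js from the repo
      - run: renovate
        env:
          RENOVATE_CONFIG_FILE: config.js
          RENOVATE_PLATFORM: gitea                      # works for Forgejo
          RENOVATE_ENDPOINT: https://git.example.com    # your Forgejo URL
          RENOVATE_TOKEN: ${{ secrets.PAT }}            # the PAT action secret
          GITHUB_COM_TOKEN: ${{ secrets.GITHUB }}       # the GITHUB action secret, for changelogs
```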
Do you have any particular way of organizing the links themselves? I’ve moved to hosting all my bookmarks in Obsidian as well and am curious as to how others go about it
I treat links like atomic notes. I add as much detail as I feel like to each link, and sometimes I go back and add tags and notes. Then I have an exceptionally poor process that attempts to go back to each link, grab the ArchiveBox archive, and use Python to pull out the article text (I tried newspaper3k at first, but it’s unmaintained, so I moved to readability). Then it sticks the resulting article text into the note.
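The grab-the-text step is roughly this (a sketch of the idea, assuming requests and readability-lxml; it fetches the live URL here rather than the ArchiveBox copy, and the note path is just a placeholder):

```python
# rough sketch: pull out the main article text and append it to the matching note
import requests
import lxml.html
from readability import Document  # pip install readability-lxml


def grab_article_text(url: str) -> str:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    doc = Document(resp.text)      # readability's main-content extraction
    html = doc.summary()           # cleaned-up HTML of the article body
    return lxml.html.fromstring(html).text_content().strip()


# hypothetical note path in the vault
note_path = "Vault/Links/some-article.md"
with open(note_path, "a", encoding="utf-8") as note:
    note.write("\n\n## Article text\n\n")
    note.write(grab_article_text("https://example.com/article"))
```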
Honestly it’s a mess, and I really haven’t figured out how to link things together very well, but, for now, it’s my little disaster of a solution.
Not until it’s a mandatory popup or recommendation any time you want to watch. Or maybe mandatory ads and popups on “new releases”. With Plex, nothing surprises me now.
You mentioned that your CPU is getting maxed out on WireGuard. That makes a lot of sense, since WireGuard’s ChaCha20 cipher generally isn’t hardware accelerated; older low-end CPUs can struggle here.
What choices do you have for protocols with your VPN software?
I want to use the Gluetun container, but I’m flexible about everything else. I can try an OpenVPN server, but I’m not sure what AES128 means (I know it’s some kind of encryption, but I don’t know how to use that in my case). There are many different servers to choose from; I’ll try a few with UDP OpenVPN. Thanks
OK, in that case: the goal is to use a cipher suite that works well on your device and is still secure. AES is accelerated on most processors these days, but you’ll want to confirm that by looking up your specific CPU (on both the host and the client machines!) and checking for AES acceleration.
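On Linux you can eyeball that with a couple of lines like this (x86 exposes it as the `aes` flag in /proc/cpuinfo; ARM lists it under Features):

```python
# rough check for hardware AES support on Linux: look for the "aes" CPU flag
with open("/proc/cpuinfo") as f:
    has_aes = "aes" in f.read().split()
print("AES acceleration available" if has_aes else "no AES acceleration")
```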
AES-128-GCM would be my suggestion.
UDP mode provides less overhead, so it should be faster for you.
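In OpenVPN config terms that’s roughly these two lines (2.5+ option names; older versions use `cipher` instead, and if you end up going through Gluetun the provider config may set this for you):

```
proto udp
data-ciphers AES-128-GCM
```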
Alternatively, you could use IPsec instead of OpenVPN, though that’s a chore to configure. It has the benefit of being free and natively supported by many devices.
You would still want to configure an appropriate cipher suite that’s fast and secure.
While this was conclusively chalked up to CPU issues, in case anyone else finds this thread…
While your ISP can’t read the data going over the VPN, they CAN see that you’re using a VPN, and they may intentionally slow your connection with traffic shaping because you’re pushing so much data through it.
with a NAS i tend to go with a commercial product, and only for that purpose. it stores the data, maybe serves it up as a file server. that’s the NAS’s one job.
my processing happens on another box, like your pi. i want the nas focused on its single purpose.
so my suggestion would be to pick up a netgear/synology/whatever, but only use it as a nas.
if you want to expand that pi, just use a real machine and upgrade yourself to maybe a nice docker setup.
This is where I landed on this decision. I run a Synology which just does NAS on spinning rust and I don’t mess with it. Since you know rsync, this will all be a painless setup apart from the upfront cost. I’d trust any 2-bay Synology less than 10 years old (I think the last two digits in the model number are the year). Then, if your budget is tight, grab a couple of 2nd-hand disks from different batches (or three if your budget stretches to it).
I also endorse u/originalucifer’s comment about a real machine. Thin clients like the HP minis or lenovos are a great step up.
you may need to check your server’s DNS configuration or make sure that the hostname “lemmy-ui” is correctly defined and reachable in your network. It looks like it’s expecting the lemmy-ui to be on the .57 machine. If you are expecting it on the .62 then something is misconfigured in the script.
It just looks like it can’t find that host.
Sorry I can’t be more help. I don’t run a Lemmy instance and I’m not familiar with the ansible config you are using.
VPN limiting your bandwidth? Sounds like a CPU issue. You’ll be surprised how much CPU overhead it takes to encrypt and decrypt traffic at such high speeds.
Sadly, in the 4 years I’ve been using it, Jellyfin still hasn’t figured out how to correctly display series season covers, and it has some streaming bugs (no audio when the audio is DTS and PGS subs are enabled, etc.).
You should redo your org from scratch and let all the default plugins do the work. Mine looks great and I never changed anything, just followed the recommended file org pattern for Movies and TV Shows.
Exactly. 99% of these issues come from not naming the files the way Jellyfin needs, which I understand can be annoying if you have a large number of files to move over. And from not having the right access permissions on the files, if you are on Linux.
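For reference, the layout Jellyfin expects looks roughly like this (the docs spell out the full rules, but this shape fixes most libraries):

```
Movies/
  Movie Name (2019)/
    Movie Name (2019).mkv
Shows/
  Series Name (2010)/
    Season 01/
      Series Name (2010) S01E01.mkv
      Series Name (2010) S01E02.mkv
```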
Filebot is nice for that; it’s what I used when I first got into Plex and realized the reason I had so many problems was the way I named files. This was before I even knew Sonarr and Radarr existed; now you can just get them to do it.
Iirc Plex supports transcoding for downloads, while Jellyfin only allows downloading the original file. But I’ve heard transcoding downloads is broken on Plex, so ymmv.
Intro skip is only available as a plugin on Jellyfin.
Also, Findroid has a better UI and supports downloads, while the official app has more features (e.g. the settings/admin panel).
It’s in the second paragraph. This is the beginning of the monetization for everything in Plex now that they have a good user base. They are starting to ramp up the milking.
It will become like any other shitty streaming service eventually.
I stand with you on the subdomain and bare-metal thing. There are many great applications that I have trouble deploying because I don’t have control over the domain’s A-record settings in my setup. Setting up mysite.xyz/something, on the other hand, is trivial and something I have full control over. The Docker thing I can understand to some extent, but I wish it were as simple as a Python venv kind of thing.
I’m sure people will come after me saying this or that is very easy but your post proves that I’m not alone. Maybe someone will come to the rescue of us novices too.
No, it’s not that. The point is not whether using a subdomain is easy or not: you might not have access to one, maybe your setup just looks “ugly” with one, or you simply don’t want to use one.
It’s standard practice in all web-based software to allow base URLs. Maybe the choice of framework wasn’t the best one from this point of view.
As for Docker, deploying Immich on bare metal should be fairly easy, if they provided binaries to download. The complex part is building it, not deploying it.
But you gave me an idea: get the binaries from the docker image… Maybe I will try.
Once you have the bins, deploying will be simple. Updating, though, will be more of a chore, since you’d have to download a new Docker image and extract it again each time.
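If I do try it, the extraction itself would be something like this (the image tag comes from their compose file; the path inside the image is a guess, so adjust to wherever the app actually lives):

```
docker create --name immich-extract ghcr.io/immich-app/immich-server:release
docker cp immich-extract:/usr/src/app ./immich-server
docker rm immich-extract
```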
Another application that I wish made what you call base URLs easy to set up is Apache Superset. Such a great application, and I’m unable to use it in my setup.
It’s damage control. They realised that what they did was getting them bad PR once news of it started spreading, so now they’re attempting to remedy it.
Oh absolutely agree, but this is where they can use it.
The dev can say that they obviously need an official plugin, and work with them on that because now they have 1,800 clones of an unofficial one that may not be optimized.
We also get to know that our tiny HA community has hit a critical mass large enough to get a corpo to freak out a bit
I did my part and sent them a “do this and I’ll never buy a Haier product” email. Corporations exist to maximize profits. Communities like ours just have to learn how to make it clear to them that shutting us out will hurt their profitability.
I think we should all be really proud of ourselves. We banded together and, regardless of WHY Haier is doing this, got them to open a line of communication. This is a huge win!
The whole point of spreading the word about an incident like this is to get public attention on it, and make the company realize that the way they’ve handled things was bad.
A letter like this indicates that they’ve realized they fucked up and they want to do things differently going forward. That doesn’t mean they’re suddenly trustworthy, but it does mean they can be negotiated with.
The correct response is to accept the offer of working together. We want to encourage companies to be cooperative and discourage insular, proprietary behavior. If you slap away the offered hand then you discourage future cooperation, and now you’re the roadblock to developing an open system.
When you start getting the results that you want, don’t respond with further hostility.
Keep pummeling them. There’s no integrity behind this, and going along will just let them get away with their bad behaviour.
They played the “we’ll sue your ass off” card first. That means it’s already in the legal realm; they never even tried to work with the OSS community. They basically said “fuck you” until the community replied, very clearly.
Had the community not responded by replicating the repo 1000+ times, and making a story about it, they would’ve continued down the path of slapping the little guy around.
They now realize they can’t compete with potentially 1000 people working on this, against them. They also fear they’ve pissed off some technophile who has some serious skills or connections. Wonder if they saw a sudden increase in probes on their internet interfaces.
Exactly this. I understand the cynicism, but it ultimately doesn’t matter what the motivation of a company walking back a poor decision is. We take the chance for mutual collaboration and hopefully everyone benefits.
On an individual level, that’s when people can evaluate if they still want to boycott and do whatever their own moral compass demands. But refusing to work together at this point just means we definitely don’t get the chance in the future to steer things in a better direction.
And even if the cooperation doesn’t last, it’s an opportunity for the open source developers to work with the product engineers and get direct information from them right now. There’s nothing as valuable as talking to the guy that actually designed the thing, or the guy who can make changes to the product code.
Even if that relationship doesn’t hold long term, the information gathered in the short term will be useful.
If I were part of this project this is what I’d be going for. Push the company to give you direct contact with the relevant engineers, right now while the negative public opinion is fresh and they’re most willing to make concessions, and then get as much out of that contact as you can. Take them at their word, make them actually back it up, take advantage of the offer to cooperate. Sort the rest of it out later.
Yeah, they can fuck off. When their opening salvo was threats and legal bluster, I don’t see why anyone should trust an alleged olive branch now. The right thing to do would have been to send this email first, not second.
I have to work with Haier in my business now as well ever since they bought GE. They’re a shitty company that goes back on their word constantly (at least within the B2B space), and nobody should be giving them one thin dime.
Respectfully, I disagree. Yes, indeed this first message is PR damage control, but there is something to be gained here for the FOSS community.
This backtrack sends a message, discouraging other companies with legal departments from trying the same trick, lest they risk losing sales. If a positive resolution comes out of this (A. Andre’s project becomes officially supported by Haier with more features whilst being more efficient with API calls, or B. Haier develops a local API option), then it shows other companies there is value in working with the FOSS community rather than viewing it as an adversary or as competition to be eliminated.
Nah, this is Haier trying to save face. They saw how the story went, that the repo was forked a thousand times in a few hours. They know their engineering team can’t win, long term, against dedicated, pissed off geeks.
Would they play nice with you if the tables were reversed? No.
They already played the legal card, engaging with them at this point would be extremely naive.
Fuck them. Now is the time to pummel them even harder. Making them eat their words is what will send a message to the rest of the jackasses designing garbage and tracking us relentlessly for access to what should be trivial to engineer features.
Generally, an engineer wants their product to work well and work efficiently. They put effort into a product, and it feels good to see people benefit from that work. The ones making the decisions have money on their mind. If a FOSS version of their paid platform costs them too much money, they will shut it down. Not because it was the engineers’ decision, but because the ones making the decision likely don’t even know what GitHub is and just know it’s taking away that sweet subscription money.
They both represent the company. The company came on strong all ban-hammery, the news flashed around, his repo got forked over a thousand times in a matter of hours.
Haier found themselves on the defensive suddenly, so they got one of their engineers to play nice.
They now know they have 300k users who are pissed at them. People are choosing other products over this already.
Fuck them. With a pineapple. Corporations aren’t people, I owe them no consideration, no courtesy, especially when they act like this.
Recently, we've observed a substantial increase in AWS calls attributed to your plugin, prompting the communication you previously received as standard protocol for our company, but as mentioned earlier, we are committed to transparency and keenly interested in collaborating with you not only to optimize your plugin in alignment with our cost control objectives,
i get it; their amazon account gets hit hard by some plugin data stream, they trace the source and kill it for monetary reasons. makes total sense. handled terribly, but still, i also completely understand getting some giant bill from amazon and freaking the fuck out.