selfhosted


Gooey0210, in I love my Gitea. Any tips and tricks?

The trick is to switch to forgejo

SpaceCadet,
@SpaceCadet@feddit.nl avatar

Mental note: have to migrate my gitea instance over to forgejo.

BOFH666,

Absolutely!

Running local, self hosted forgejo with a few runners.

Now my code is neatly checked with pre-commit and linters, builds run when new tags are pushed, Renovate is scheduled every 24 hours to check for new releases of stuff, etc.

Just a few containers and a happy user :-)

naomsa,

do you use forgejo-runner or another ci/cd image?

Gush5310,

I am not the OP but I use Woodpecker CI.

I like to keep things separated, in a KISS fashion. This makes changing either software easier.

BOFH666,

Still testing and fiddling, but I’m using the forgejo-runner. Renovate is just another repository, with a workflow to get it started:


```yaml
on:
  schedule:
    - cron: '5 2 * * *'
    - cron: '5 14 * * *'

jobs:
  build:
    runs-on: docker
    container:
      image: renovate/renovate:37.140-full
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Run renovate
        env:
          PAT: ${{ secrets.PAT }}
          GITHUB_COM_TOKEN: ${{ secrets.GITHUB }}
        run: |
          echo "Running renovate"
          cd ${GITHUB_WORKSPACE}
          renovate --token ${PAT}
```

The renovate image has been pulled by hand and the forgejo-runner will happily start the image. Both PAT and GITHUB secrets are configured as ‘action secrets’ within the renovate repository.

Besides the workflow, the repository contains renovate.json and config.js, so renovate has the correct configuration.
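For reference, a minimal self-hosted config.js might look something like this. This is a hedged sketch, not OP's actual config: the endpoint URL is a placeholder, and the options shown are assumptions (Renovate talks to Forgejo through its "gitea" platform support):

```shell
# Write a hypothetical minimal Renovate config.js for a Forgejo instance.
# The endpoint and options below are illustrative assumptions.
cat > config.js <<'EOF'
module.exports = {
  platform: 'gitea',
  endpoint: 'https://git.example.com/api/v1/',
  autodiscover: true,
};
EOF
```

The repository-level renovate.json then holds per-repo rules (schedules, package rules), while config.js configures the runner itself.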

Dehydrated,

I was about to suggest that

Cyberflunk, in Linkwarden - An open-source collaborative bookmark manager to collect, organize and preserve webpages

Archivebox is in my obsidian workflow, it grabs every link in my vault and archives it. I didn’t see an API in linkwarden, perhaps I missed it.

eduardm,

Do you have any particular way of organizing the links themselves? I’ve moved to hosting all my bookmarks in Obsidian as well and am curious as to how others go about it

Cyberflunk,

I treat links like atomic notes. I add as much detail as I feel like to each link; sometimes I go back and add tags and notes. Then I have an exceptionally poor process that attempts to go back to each link, get the ArchiveBox archive, and use Python to grab the article text (I tried newspaper3k at first, but it's unmaintained, so I moved to readability). Then it sticks the resulting text into the note.

Honestly it’s a mess, and I really haven’t figured out how to link things together very well, but, for now, it’s my little disaster of a solution.

possiblylinux127, in Plex To Launch a Store For Movies and TV Shows

If it’s DRM-free, it’s not a problem.

shrugal, (edited )

It would be great, but no chance in hell movie studios would go along with this.

possiblylinux127,

Yeah, I know

cyberpunk007,

Not until it’s a mandatory popup or recommendation any time you want to watch. Or maybe mandatory ads and popups on “new releases”. With Plex, nothing surprises me now.

billwashere,

Yeah, this “pretty major UX refresh” line bothers me A LOT. Kinda like when Amazon fucked up the UX/UI on the cheap Fire TV sticks.

possiblylinux127,

Well I use Jellyfin as I don’t want proprietary software on my system and this isn’t making me have any regrets. The benefit of Plex is what again?

Toes, in VPN speed

You mentioned that your CPU is getting maxed out on WireGuard. That makes a lot of sense, since it’s generally not hardware-accelerated; older low-end CPUs can struggle here.

What choices do you have for protocols with your VPN software?

Try AES-128 in UDP mode with OpenVPN.

rambos,

I want to use the gluetun container, but I’m flexible about everything else. I can try an OpenVPN server, but I’m not sure what AES128 means (I know it’s some kind of encryption, but I don’t know how to use it in my case). There are many different servers to choose from; I’ll try a few with OpenVPN over UDP. Thanks

Toes,

Ok in that case. The goal is to use a cipher suite that works well on your device that is still secure. AES is accelerated on most processors these days. But you’ll want to confirm that by looking up your specific cpu (both host and client machines!) and checking for AES acceleration.
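On Linux/x86, a quick way to check is to look for the `aes` flag in /proc/cpuinfo; if it's present, the CPU advertises AES-NI:

```shell
# Print "aes" if the CPU advertises AES-NI, otherwise a short notice.
grep -m1 -ow aes /proc/cpuinfo || echo "no AES-NI flag found"
```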

AES-128-GCM would be my suggestion.

UDP mode provides less overhead, so it should be faster for you.

Alternatively you could use IPsec instead of openvpn but that’s a chore to configure. But it has the benefit of being free and being natively supported by many devices.

You would still want to configure an appropriate cipher suite that’s fast and secure.
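Concretely, the relevant OpenVPN directives would be something like the following sketch (on OpenVPN 2.5+, `data-ciphers` replaces the older `cipher` directive, and both server and client must agree):

```
proto udp
data-ciphers AES-128-GCM
data-ciphers-fallback AES-128-GCM
```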

rambos,

My CPU (G3930) supports Intel AES New Instructions, if that’s it. I’ll look more into it, thank you

Toes, (edited )

Yeah give that a go. Glad to help 🙂

ArbiterXero, in VPN speed

While this is conclusively diagnosed as a CPU issue, in case anyone else finds this thread…

While your isp can’t read the data over the VPN, they CAN see that you’re using a VPN and intentionally slow down your connection with traffic shaping because you’re putting so much data through the vpn.

rambos,

Oh good to know

originalucifer, (edited ) in Suggestions for NAS (or other hardware) solution to home setup
@originalucifer@moist.catsweat.com avatar

whats your budget?

with a NAS i tend to go with a commercial product, and only for that purpose. it stores the data, maybe serves it up as a file server. that’s the NAS’s one job.

my processing happens on another box, like your pi. i want the nas focused on its single purpose.

so my suggestion would be to pick up a netgear/synology whatever, but only use it as a nas.

if you want to expand that pi, just use a real machine and upgrade yourself to maybe a nice docker setup.

thirdBreakfast,
@thirdBreakfast@lemmy.world avatar

This is where I landed on this decision. I run a Synology which just does NAS on spinning rust, and I don’t mess with it. Since you know rsync, this will all be a painless setup apart from the upfront cost. I’d trust any 2-bay Synology less than 10 years old (I think the last two digits in the model number are the year). Then, if your budget is tight, grab a couple of 2nd-hand disks from different batches (or three if your budget stretches to it).

I also endorse u/originalucifer’s comment about a real machine. Thin clients like the HP minis or lenovos are a great step up.

cybersandwich, in Weird issue with lemmy ansible

you may need to check your server’s DNS configuration or make sure that the hostname “lemmy-ui” is correctly defined and reachable in your network. It looks like it’s expecting the lemmy-ui to be on the .57 machine. If you are expecting it on the .62 then something is misconfigured in the script.

It just looks like it can’t find that host.
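One hedged way to check that from the proxy's point of view is getent, run inside the proxy container, e.g. `docker exec <proxy-container> getent hosts lemmy-ui` (the container name is an assumption; match your compose file). A local demonstration of what a successful lookup looks like:

```shell
# getent shows what a hostname resolves to in the current network
# namespace; inside the proxy container you'd query "lemmy-ui" instead.
getent hosts localhost
```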

Sorry I can’t be more help. I don’t run a Lemmy instance and I’m not familiar with the ansible config you are using.

arudesalad,

It’s on the .57 machine and in the same docker environment as the proxy

narc0tic_bird, in VPN speed

Use Wireguard instead of OpenVPN.

rambos,

I am, but as others said, I think my CPU can’t handle it

ChrislyBear, in VPN speed

VPN limiting your bandwidth? Sounds like a CPU issue. You’ll be surprised how much CPU overhead it takes to encrypt and decrypt traffic at such high speeds.

rambos,

Yeah this seems right. My CPU utilization goes to 100% during speedtest and I have celeron 😆 Thank you!

cyberpunk007, in Plex To Launch a Store For Movies and TV Shows

Time to move to jellyfin I guess

Osiris,

Jellyfin’s pretty great. It’s much simpler than Plex and has quite a few fewer features, but it does what it does really well

Smash,

Sadly, in the 4 years I’ve been using it, Jellyfin still couldn’t figure out how to correctly display series season covers, and it has some streaming bugs (no audio when the audio is DTS and PGS subs are enabled, etc.)

rezz,

You should redo your org from scratch and let all the default plugins do the work. Mine looks great and I never changed anything, just followed the recommended file org pattern for Movies and TV Shows.

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

Exactly. 99% of these issues come from not naming the files the way Jellyfin needs, which I understand can be annoying if you have a large number of files to move over. And from not having the right access permissions on the files, if you are on Linux.
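For reference, the layout Jellyfin expects is roughly this (a sketch of the documented convention; the titles are made-up examples):

```
Movies/
  Some Film (2019)/
    Some Film (2019).mkv
Shows/
  Some Series (2010)/
    Season 01/
      Some Series S01E01.mkv
```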

CazRaX,

Filebot is nice for that; it’s what I used when I first got into Plex and realized the reason I had so many problems was the way I named files. This was before I even knew Sonarr and Radarr existed; now you can get them to do it.

Osiris,

Yup! Somewhere along the line they improved how it tags and fetches show Metadata. Now the default setup is great

otter,

What are some missing features? I’m only familiar with jellyfin

Chewy7324,

Iirc Plex supports transcoding for downloads, while Jellyfin only allows downloading the original file. But I’ve heard transcoding downloads is broken on Plex, so ymmv.

Intro skip is only available as a plugin on Jellyfin.

Also, Findroid has a better UI and supports downloads, while the official app has more features (i.e. settings/admin panel).

otter,

Didn’t know about findroid, looks cool

I’ll probably keep both installed

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

They both have good use cases. I have Findroid installed on my family’s devices, since all they need is to play media. It’s great for that.

I myself have both, because I can administrate the server from my phone via the official app. For watching my media I mostly use Findroid.

helenslunch,
@helenslunch@feddit.nl avatar

Findroid keeps saying it’s not supported on my Pixel 7 for some reason

RootBeerGuy, (edited )
@RootBeerGuy@discuss.tchncs.de avatar

That’s because if you get it from F-Droid, it’s in Izzy’s repo, and only the 32-bit version is available there. The Pixel 7 is 64-bit only.

I have the same issue; solved it by using Obtainium and getting the package directly from their GitHub.

helenslunch,
@helenslunch@feddit.nl avatar

Sure enough, that fixed it, thanks.

What’s the deal with devs not keeping up with FDroid repos?

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

Izzy’s repo is not F-Droid, and the developers never managed to put it on F-Droid themselves. So it’s a mess of many things.

oDDmON,

The lifetime license is definitely declining in value.

cyberpunk007,

Agreed, but we probably all knew it had to decline at some point. Nothing is really free.

I’m just glad I feel like I definitely got my money’s worth. Feb 2014 is when I got it, so 10 years for $75. I can’t complain 😂

ebits21,
@ebits21@lemmy.ca avatar

Jellyfin really is free :p

cyberpunk007,

I’m not really sure what you’re implying here

emeralddawn45,

You said nothing is really free.

avidamoeba, (edited )
@avidamoeba@lemmy.ca avatar

Sorry, why are we switching away from Plex because of this? Genuinely asking.

E: Wow at the downvotes to an honest question. 🥲

cyberpunk007,

See my long winded response here

lemmy.ca/comment/6529326

Potatofish,

So I read it and I still don’t see the outrage.

Croquette,

It’s in the second paragraph. This is the beginning of the monetization for everything in Plex now that they have a good user base. They are starting to ramp up the milking.

It will become like any other shitty streaming service eventually.

Potatofish,

And? As long as I can watch my content, why should I care? It’s a business with employees and they need to make money somehow.

Croquette,

If you cannot comprehend why people are outraged at a product they used getting degraded, not sure what to tell you.

Potatofish,

It still does what it did when I started using it. What’s the problem?

avidamoeba,
@avidamoeba@lemmy.ca avatar

Thanks. I understand this perspective.

paraphrand,

Wow, the option to purchase things is really that bad huh?

cyberpunk007,

If you follow the history, you’d get it. Just another nail in the coffin.

AntonChigurh,
@AntonChigurh@lemmy.world avatar

Emby is good

DontNoodles, in Fighting with immich

I stand with you on the subdomain and bare-metal thing. There are many great applications that I have trouble deploying because I don’t have control over DNS (A record) settings within my setup. Setting up mysite.xyz/something is trivial and something I have full control over. The Docker thing I can understand to some extent, but I wish it was as simple as a Python venv.

I’m sure people will come after me saying this or that is very easy but your post proves that I’m not alone. Maybe someone will come to the rescue of us novices too.

Shimitar,

Us novices?

No, it’s not that. The point is not whether using a subdomain is easy; you might not have access to one, or maybe your setup is just “ugly” with one, or you just don’t want to use one.

It’s standard practice in web-based software to allow base URLs. Maybe the choice of framework wasn’t the best one from this point of view.

As for docker, deploying immich on bare metal should be fairly easy, if they provided binaries to download. The complex part is building it, not deploying it.

But you gave me an idea: get the binaries from the docker image… Maybe I will try.

Once you have the bins, deploying will be simple. Updating instead will be more complex, since you’d have to download a new docker image and extract it again each time.
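That extraction step can be sketched with `docker create` + `docker cp`, which copies files out of an image without running it. The image name and in-image path in the usage note are assumptions; check the image's Dockerfile for the real application path:

```shell
# Copy files out of an image without running it: create a stopped
# container from the image, cp the path out, then remove the container.
extract_from_image() {
  img="$1"; src="$2"; dst="$3"
  id=$(docker create "$img") || return 1
  docker cp "$id:$src" "$dst"
  docker rm "$id" >/dev/null
}

# Hypothetical usage:
# extract_from_image ghcr.io/immich-app/immich-server:release /usr/src/app ./immich-bin
```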

DontNoodles,

Another such application that I wish had easy implementation for what you call base URLs is Apache Superset. Such a great application that I’m unable to use in my setup.

x4740N, in Sounds like Haier is opening the door!
@x4740N@lemmy.world avatar

It’s damage control. They realised what they did was getting them bad PR once news of it started spreading, so they’re attempting to remedy it through damage control

Corporations only care about profits, not people

scrubbles,
@scrubbles@poptalk.scrubbles.tech avatar

Oh absolutely agree, but this is where they can use it.

The dev can say that they obviously need an official plugin, and work with them on that because now they have 1,800 clones of an unofficial one that may not be optimized.

We also get to know that our tiny HA community has hit a critical mass large enough to get a corpo to freak out a bit

SoleInvictus,
@SoleInvictus@lemmy.world avatar

I did my part and sent them a “do this and I’ll never buy a Haier product” email. Corporations exist to maximize profits. Communities like ours just have to learn how to make it clear to them that shutting us out will hurt their profitability.

I think we should all be really proud of ourselves. We banded together and, regardless of WHY Haier is doing this, got them to open a line of communication. This is a huge win!

NaibofTabr, (edited )

Yes, it is damage control. That’s OK.

The whole point of spreading the word about an incident like this is to get public attention on it, and make the company realize that the way they’ve handled things was bad.

A letter like this indicates that they’ve realized they fucked up and they want to do things differently going forward. That doesn’t mean they’re suddenly trustworthy, but it does mean they can be negotiated with.

The correct response is to accept the offer of working together. We want to encourage companies to be cooperative and discourage insular, proprietary behavior. If you slap away the offered hand then you discourage future cooperation, and now you’re the roadblock to developing an open system.

When you start getting the results that you want, don’t respond with further hostility.

BearOfaTime,

Nope.

They’re on the ropes.

Keep pummeling them. There’s no integrity behind this, and going along will just let them get away with their bad behaviour.

They played the “We’ll sue your ass off” card first. That means it’s already in the legal realm; they never even tried to work with the OSS community, they basically said “fuck you” until the community replied, very clearly.

Had the community not responded by replicating the repo 1000+ times, and making a story about it, they would’ve continued down the path of slapping the little guy around.

They now realize they can’t compete with potentially 1000 people working on this, against them. They also fear they’ve pissed off some technophile who has some serious skills or connections. Wonder if they saw a sudden increase in probes on their internet interfaces.

Make it hurt. Let them be the cautionary tale.

delcake,

Exactly this. I understand the cynicism, but it ultimately doesn’t matter what the motivation of a company walking back a poor decision is. We take the chance for mutual collaboration and hopefully everyone benefits.

On an individual level, that’s when people can evaluate if they still want to boycott and do whatever their own moral compass demands. But refusing to work together at this point just means we definitely don’t get the chance in the future to steer things in a better direction.

NaibofTabr, (edited )

And even if the cooperation doesn’t last, it’s an opportunity for the open source developers to work with the product engineers and get direct information from them right now. There’s nothing as valuable as talking to the guy that actually designed the thing, or the guy who can make changes to the product code.

Even if that relationship doesn’t hold long term, the information gathered in the short term will be useful.

If I were part of this project this is what I’d be going for. Push the company to give you direct contact with the relevant engineers, right now while the negative public opinion is fresh and they’re most willing to make concessions, and then get as much out of that contact as you can. Take them at their word, make them actually back it up, take advantage of the offer to cooperate. Sort the rest of it out later.

dual_sport_dork, in Sounds like Haier is opening the door!
@dual_sport_dork@lemmy.world avatar

Yeah, they can fuck off. When their opening salvo was threats and legal bluster, I don’t see why anyone should trust an alleged olive branch now. The right thing to do was not to send this email second.

I have to work with Haier in my business now as well ever since they bought GE. They’re a shitty company that goes back on their word constantly (at least within the B2B space), and nobody should be giving them one thin dime.

Rentlar, (edited )

Respectfully, I disagree. Yes, indeed this first message is PR damage control, but there is something to be gained here for the FOSS community.

This backtrack sends a message, discouraging other companies with legal departments from trying the same trick lest they risk sales. If a positive resolution comes out of this (A. Andre’s project becomes officially supported by Haier with more features whilst being more efficient with API calls, or B. Haier develops a local API option), then it shows other companies there is value in working with the FOSS community rather than viewing it as an adversary or as competition to be eliminated.

BearOfaTime,

Nah, this is Haier trying to save face. They saw how the story went, that the repo was forked a thousand times in a few hours. They know their engineering team can’t win, long term, against dedicated, pissed off geeks.

Would they play nice with you if the tables were reversed? No.

They already played the legal card, engaging with them at this point would be extremely naive.

Fuck them. Now is the time to pummel them even harder. Making them eat their words is what will send a message to the rest of the jackasses designing garbage and tracking us relentlessly for access to what should be trivial-to-engineer features.

kilgore_trout,

Legal threats come from lawyers, while this email comes from an engineer.

huginn,

… Which makes it even less credible legally.

Unless you’re getting C-suite level emails saying they’re not going to do it, don’t trust them.

And even then you should be ready to sue.

Bazoogle,

Generally, an engineer wants their product to work well and efficiently. They put effort into a product, and it feels good to see people benefit from that work. The ones making the decisions have money on their mind. If a FOSS version of their paid platform costs them too much money, they will shut it down. Not because it was the engineer’s decision, but because the ones making the decision likely don’t even know what GitHub is and just know it’s taking away that sweet subscription money.

lemming741,

But a company is a sum of these (and other) people. In this case, it’s a draw at best, not a win.

BearOfaTime,

So?

They both represent the company. The company came on strong all ban-hammery, the news flashed around, his repo got forked over a thousand times in a matter of hours.

Haier found themselves on the defensive suddenly, so they got one of their engineers to play nice.

They now know they have 300k users who are pissed at them. People are choosing other products over this already.

Fuck them. With a pineapple. Corporations aren’t people, I owe them no consideration, no courtesy, especially when they act like this.

originalucifer, in Sounds like Haier is opening the door!
@originalucifer@moist.catsweat.com avatar

Recently, we've observed a substantial increase in AWS calls attributed to your plugin, prompting the communication you previously received as standard protocol for our company, but as mentioned earlier, we are committed to transparency and keenly interested in collaborating with you not only to optimize your plugin in alignment with our cost control objectives,

i get it; their amazon account gets hit hard by some plugin data stream, they trace the source and kill it for monetary reasons. makes total sense. handled terribly, but still, i also completely understand getting some giant bill from amazon and freaking the fuck out.

scrubbles,
@scrubbles@poptalk.scrubbles.tech avatar

Yup exactly. They just need better responses than “get legal on the phone”

pearsaltchocolatebar,

Did you not read the letter you posted? It said a call with the IoT department.

tja,
@tja@sh.itjust.works avatar

Did you not read the linked issue? The first thing they did, before this letter, was sending a cease and desist

pearsaltchocolatebar,

I misread the comment, for sure. I thought they were talking about the call the letter referenced.

shnizmuffin,
@shnizmuffin@lemmy.inbutts.lol avatar

“We don’t know how to rate limit our API or set billing alarms in the AWS console.”

possiblylinux127,

They likely do. However, overhead cost is overhead cost.

SeeJayEmm, in LinguaCafe - Confused why the provided docker-compose doesn't work.
@SeeJayEmm@lemmy.procrastinati.org avatar

Reads to me like the container is running as a user that doesn’t have permission to the volume path.

Natal,

Is there a command to check that?

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

I’m not sure if there’s a correct way. What I’ve done in the past is use “ps” to find out what user the processes are running as.
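A hedged sketch of that check (the container name is an assumption; the last line is a local demonstration of the ps invocation):

```shell
# Inside the container: list processes with their owning user, e.g.
#   docker exec linguacafe ps -o user,pid,comm
# Then compare that user's UID against the volume path's owner:
#   docker exec linguacafe id
#   ls -ln /path/to/volume
# Local demonstration of the ps invocation:
ps -o user,pid,comm | head -n 3
```

If the UIDs don't match, either chown the volume path or set the container's user to match it.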
