selfhosted


Batbro, in I love my Gitea. Any tips and tricks?

I forked a piece of code and found a bug. I’m still afraid to merge it in because I might have introduced it by mistake.

jaybone, in I love my Gitea. Any tips and tricks?

People who say “codes”

praise_idleness,

Thank you for letting me know. As you might guess English is not my first language. Always appreciate these inputs.

Bazoogle,

lol, I have no idea why someone down voted you.

But yeah, in the context of programming, the plural of “code” is just “code”; but if you were talking about codes like the code for a door pin-pad, it takes an “s” in the plural. To be honest, I’m sure there are plenty of native English speakers outside the tech world who would also say “codes” when talking about programming.

rooster_butt,

From my experience this is a very Indian thing.

WPlinge,

It’s also a lot more common in the HPC community from what I’ve seen. Fortran people often have codes they want to run.

zrk,

Also heard it a lot from Chinese speakers.

RegalPotoo, in Sounds like Haier is opening the door!

From the previous issue it sounds like the developer has proper legal representation, but in his place I wouldn’t even begin talking with Haier until they formally revoke the C&D, and provide enforceable assurances that they won’t sue in the future.

Also, I don’t know what their margins are like, but even if this cost them an extra $1000 in AWS fees on top of what their official app would have cost (I seriously doubt it would be that much unless their infrastructure is absolutely bananas), it would probably only take a single-digit number of lost sales for them to come out worse off from this.

BearOfaTime, (edited ) in File server with on-demand sync, preserve the filesystem, and runs without external DB?

Commenting largely to watch - I use Syncthing as my daily driver sync tool, and Resilio for the on-demand stuff.

Resilio has on-demand/selective sync, but I don’t recall if it’s open source, I don’t think so. Plus, it’s hard on memory with larger folders, as it keeps the index in ram. My media sync folder really impacts my desktop, and I only run Resilio on my mobile devices when I want to sync something, then turn it off.

Berinkton, in File server with on-demand sync, preserve the filesystem, and runs without external DB?

I use Syncthing for this type of task on my PC and phone; it stores a copy of the shared folder on the server, with the option for file versioning. Having a server is optional, by the way.

rearview,

AFAIK, Syncthing clones the entire folder across peers (the server is just another peer, it seems), which isn’t ideal for my use case. Do you know any current way to configure it for selective syncing?

Jeief73,

I don’t think it can do selective syncing. I’ve also been searching for a similar solution but didn’t find one, and finally opted for Syncthing with my most important files. Other files I can get via the web using Filestash.

MangoPenguin, (edited )

Owncloud supports selective sync, and seems a lot better for performance compared to Nextcloud.

Alternatively you could roll your own with rclone, which is essentially an open-source alternative to Mountain Duck. Then you can just use a simple connection via SFTP, FTP, WebDAV, etc.
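As a sketch of how simple the rclone route can be, an SFTP remote takes only a few lines of rclone.conf (all names, hosts, and paths here are hypothetical):

```ini
# ~/.config/rclone/rclone.conf - example SFTP remote (all values hypothetical)
[nas]
type = sftp
host = nas.local
user = me
key_file = ~/.ssh/id_ed25519
```

Mounting it with something like `rclone mount nas:files ~/nas --vfs-cache-mode full` then fetches files on first access instead of syncing everything up front.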

rearview,

Non-OCIS Owncloud still needs a dedicated database, and they recommend against SQLite in prod.

I’ve looked at rclone mounting with the --vfs-cache-* flags, but I’m not sure how it can smart-sync like Mountain Duck or handle conflicts as elegantly as the Nextcloud/Owncloud clients do. Let me know how to set it up that way if possible.

recursivesive,

I vouch for Syncthing as well. I enabled storing on my own remote hosting provider, marking it as untrusted, so my files are encrypted there.

Gooey0210, in I love my Gitea. Any tips and tricks?

The trick is to switch to forgejo

SpaceCadet,

Mental note: have to migrate my gitea instance over to forgejo.

BOFH666,

Absolutely!

Running local, self hosted forgejo with a few runners.

Now my code is neatly checked with pre-commit and linters, builds run when new tags are pushed, Renovate is scheduled every 24 hours to check for new releases of stuff, etc.

Just a few containers and a happy user :-)

naomsa,

do you use forgejo-runner or another ci/cd image?

Gush5310,

I am not the OP but I use Woodpecker CI.

I like to keep things separated, in a KISS fashion. This makes changing either software easier.

BOFH666,

Still testing and fiddling, but I’m using the forgejo-runner. Renovate is just another repository, with a workflow to get it started:


on:
  schedule:
    - cron: '5 2 * * *'
    - cron: '5 14 * * *'

jobs:
  build:
    runs-on: docker
    container:
      image: renovate/renovate:37.140-full
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Run renovate
        env:
          PAT: ${{ secrets.PAT }}
          GITHUB_COM_TOKEN: ${{ secrets.GITHUB }}
        run: |
          echo "Running renovate"
          cd ${GITHUB_WORKSPACE}
          renovate --token ${PAT}

The renovate image has been pulled by hand and the forgejo-runner will happily start the image. Both PAT and GITHUB secrets are configured as ‘action secrets’ within the renovate repository.

Besides the workflow, the repository contains renovate.json and config.js, so renovate has the correct configuration.

Dehydrated,

I was about to suggest that

neidu2, in File server with on-demand sync, preserve the filesystem, and runs without external DB?

rsync?

dual_sport_dork, in Sounds like Haier is opening the door!

Yeah, they can fuck off. When their opening salvo was threats and legal bluster, I don’t see why anyone should trust an alleged olive branch now. The right thing to do was not to send this email second.

I have to work with Haier in my business now as well ever since they bought GE. They’re a shitty company that goes back on their word constantly (at least within the B2B space), and nobody should be giving them one thin dime.

Rentlar, (edited )

Respectfully, I disagree. Yes, indeed this first message is PR damage control, but there is something to be gained here for the FOSS community.

This backtrack sends a message discouraging other companies with legal departments from trying the same trick, lest they risk losing sales. If a positive resolution comes out of this (A: Andre’s project becomes officially supported by Haier, with more features, whilst being more efficient with API calls, or B: Haier develops a local API option), then it shows other companies there is value in working together with the FOSS community rather than viewing it as an adversary or as competition to be eliminated.

BearOfaTime,

Nah, this is Haier trying to save face. They saw how the story went, that the repo was forked a thousand times in a few hours. They know their engineering team can’t win, long term, against dedicated, pissed off geeks.

Would they play nice with you if the tables were reversed? No.

They already played the legal card, engaging with them at this point would be extremely naive.

Fuck them. Now is the time to pummel them even harder. Making them eat their words is what will send a message to the rest of the jackasses designing garbage and tracking us relentlessly for access to features that should be trivial to engineer.

kilgore_trout,

Legal threats come from lawyers, while this email comes from an engineer.

huginn,

… Which makes it even less credible legally.

Unless you’re getting C-suite level emails saying they’re not going to do it, don’t trust them.

And even then you should be ready to sue.

Bazoogle,

Generally, an engineer wants their product to work well and work efficiently. They put effort into a product, and it feels good to see people benefit from that work. The ones making the decisions have money on their mind. If a FOSS version of their paid platform costs them too much money, they will shut it down. Not because it was the engineer’s decision, but because the ones making the decision likely don’t even know what GitHub is; they just know it’s taking away that sweet subscription money.

lemming741,

But a company is a sum of these (and other) people. In this case, it’s a draw at best, not a win.

BearOfaTime,

So?

They both represent the company. The company came on strong all ban-hammery, the news flashed around, his repo got forked over a thousand times in a matter of hours.

Haier found themselves on the defensive suddenly, so they got one of their engineers to play nice.

They now know they have 300k users who are pissed at them. People are choosing other products over this already.

Fuck them. With a pineapple. Corporations aren’t people, I owe them no consideration, no courtesy, especially when they act like this.

Rentlar, (edited ) in Sounds like Haier is opening the door!

I’m glad the threat of being on a FOSS Hall of Shame is effective for some companies, and that they can’t just frivolously sue away a hobby developer without consequences to their bottom line; that would have set a bad precedent against small-time FOSS developers everywhere.

Now their status with me has moved from “Shitlist” to “Shitlist Pending”; they’ve talked the talk, so now it’s time to see them walk the walk. Best would be to allow users to control their Haier products from their own servers rather than Haier’s. That would reduce the cloud computing bills from 3rd-party users, while they can still offer “compelling value” in their walled-garden ecosystem as a simple one-and-done setup. Win-win, right?

originalucifer, in Sounds like Haier is opening the door!

Recently, we've observed a substantial increase in AWS calls attributed to your plugin, prompting the communication you previously received as standard protocol for our company, but as mentioned earlier, we are committed to transparency and keenly interested in collaborating with you not only to optimize your plugin in alignment with our cost control objectives,

i get it; their amazon account gets hit hard by some plugin data stream, they trace the source and kill it for monetary reasons. makes total sense. handled terribly, but still, i also completely understand getting some giant bill from amazon and freaking the fuck out.

scrubbles,

Yup exactly. They just need better responses than “get legal on the phone”

pearsaltchocolatebar,

Did you not read the letter you posted? It said a call with the IoT department.

tja,

Did you not read the linked issue? The first thing they did, before this letter, was sending a cease and desist

pearsaltchocolatebar,

I misread the comment, for sure. I thought they were talking about the call the letter referenced.

shnizmuffin,

“We don’t know how to rate limit our API or set billing alarms in the AWS console.”
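For what it’s worth, basic per-client rate limiting really is a small amount of code on the server side. A minimal token-bucket sketch (the class, limits, and client IDs are all invented for illustration, not Haier’s actual setup): each client gets a burst allowance that refills over time, and calls beyond it are rejected instead of turning into an AWS bill.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: allow bursts up to `capacity`,
    refilling at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        # Each client starts with a full bucket; last-seen timestamps
        # default to "now" on first contact.
        self.tokens = defaultdict(lambda: capacity)
        self.last = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens[client_id] = min(
            self.capacity, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False
```

An API gateway would call `allow()` per request and return HTTP 429 on `False`; rejected calls cost essentially nothing compared to a backend hit.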

possiblylinux127,

They likely do. However, overhead cost is overhead cost.

x4740N, in Sounds like Haier is opening the door!

It’s damage control. They realised what they did was getting them bad PR once news of it started spreading, so they are attempting to remedy the bad PR through damage control.

Corporations only care about profits, not people

scrubbles,

Oh absolutely agree, but this is where they can use it.

The dev can say that they obviously need an official plugin, and work with them on that because now they have 1,800 clones of an unofficial one that may not be optimized.

We also get to know that our tiny HA community has hit a critical mass large enough to get a corpo to freak out a bit

SoleInvictus,

I did my part and sent them a “do this and I’ll never buy a Haier product” email. Corporations exist to maximize profits. Communities like ours just have to learn how to make it clear to them that shutting us out will hurt their profitability.

I think we should all be really proud of ourselves. We banded together and, regardless of WHY Haier is doing this, got them to open a line of communication. This is a huge win!

NaibofTabr, (edited )

Yes, it is damage control. That’s OK.

The whole point of spreading the word about an incident like this is to get public attention on it, and make the company realize that the way they’ve handled things was bad.

A letter like this indicates that they’ve realized they fucked up and they want to do things differently going forward. That doesn’t mean they’re suddenly trustworthy, but it does mean they can be negotiated with.

The correct response is to accept the offer of working together. We want to encourage companies to be cooperative and discourage insular, proprietary behavior. If you slap away the offered hand then you discourage future cooperation, and now you’re the roadblock to developing an open system.

When you start getting the results that you want, don’t respond with further hostility.

BearOfaTime,

Nope.

They’re on the ropes.

Keep pummeling them. There’s no integrity behind this, and going along will just let them get away with their bad behaviour.

They played the “we’ll sue your ass off” card first. That means it’s already in the legal realm; they never even tried to work with the OSS community, they basically said “fuck you” until the community replied, very clearly.

Had the community not responded by replicating the repo 1000+ times, and making a story about it, they would’ve continued down the path of slapping the little guy around.

They now realize they can’t compete with potentially 1000 people working on this, against them. They also fear they’ve pissed off some technophile who has some serious skills or connections. Wonder if they saw a sudden increase in probes on their internet interfaces.

Make it hurt. Let them be the cautionary tale.

delcake,

Exactly this. I understand the cynicism, but it ultimately doesn’t matter what the motivation of a company walking back a poor decision is. We take the chance for mutual collaboration and hopefully everyone benefits.

On an individual level, that’s when people can evaluate if they still want to boycott and do whatever their own moral compass demands. But refusing to work together at this point just means we definitely don’t get the chance in the future to steer things in a better direction.

NaibofTabr, (edited )

And even if the cooperation doesn’t last, it’s an opportunity for the open source developers to work with the product engineers and get direct information from them right now. There’s nothing as valuable as talking to the guy that actually designed the thing, or the guy who can make changes to the product code.

Even if that relationship doesn’t hold long term, the information gathered in the short term will be useful.

If I were part of this project this is what I’d be going for. Push the company to give you direct contact with the relevant engineers, right now while the negative public opinion is fresh and they’re most willing to make concessions, and then get as much out of that contact as you can. Take them at their word, make them actually back it up, take advantage of the offer to cooperate. Sort the rest of it out later.

Unchanged3656, in Sounds like Haier is opening the door!

Well, how about having a local API and have no calls at all to your cloud infrastructure? Probably too easy and you cannot lock people into your ecosystem.
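To illustrate how little a local API needs: a LAN-only HTTP endpoint serving appliance state can be sketched with just the Python standard library. Every endpoint and field name below is hypothetical (this is not Haier’s actual API), but it shows that an integration could poll the device directly with zero cloud round-trips:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical appliance state, served read-only over the local network.
STATE = {"power": "on", "target_temp_c": 21.5}

class ApplianceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/v1/state":
            body = json.dumps(STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=0):
    """Start the appliance API on a background thread; port=0 picks a free port."""
    srv = HTTPServer(("127.0.0.1", port), ApplianceHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

Home Assistant (or anything else on the LAN) could then read the state locally, and the vendor’s AWS bill for third-party polling would be zero.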

helenslunch,

From any practical standpoint, this makes so much sense.

Sometimes my Tesla fails to unlock for some reason and I have to disable my VPN and then stand next to it like a god damn idiot for 10 seconds while it calls its servers in fucking California to ask them to unlock my car.

dual_sport_dork,

As if I needed yet another reason to never ever own a Tesla.

My car has this crazy technology in it: You can stick the key in the door and twist and it’ll unlock. Even if the network is down or the battery is dead. Arcane, right?

gravitas_deficiency,

I will be driving my 03 1.8t 5mt Jetta into the ground, thank you very much.

SoleInvictus,

Hell yes! My sister-in-law has your same year but the diesel version and that thing is a champ. It’s rated at 45 mpg on the highway but she typically gets 50+, even with nearly 200k miles on it.

I had a 2004 1.8t Jetta for 12 years but I swapped it for a Prius. I love the Prius features and fuel economy, but I miss how damn quick that Jetta was, plus I loved the interior color scheme.

helenslunch,

Haha yeah there are other, more reliable methods but the “phone as a key” is also super convenient when it works properly, which is most of the time. It just would be a lot smarter if it worked locally.

dual_sport_dork,

…Or if there were an alternative option that didn’t rely on software and electronics is my point.

Cars have had electronic remote keyless entry for decades. It’s not new. Some of them even have phone apps that duplicate that functionality. No one but Tesla has been stupid enough to remove the keyhole, though.

helenslunch, (edited )

I understood your point. My point is those electronics make it more convenient to use. Would I appreciate ALSO having a physical unlock mechanism? Sure. It also increases the attack surface.

Cars have had electronic remote keyless entry for decades.

As does Tesla.

Bazoogle,

I think it could definitely be done locally, and I wouldn’t want a car where I have to connect to servers just to get into it. But I’m also not sure I want a car that can be opened by a command on the car itself. The code to access your CAR being stored locally on the car, with no server-side validation, does seem kinda scary. It’s one thing for someone to get into your online login, where you can change the password; it’s another for someone to literally steal your car because they found a vulnerability. Storing it locally means people would reverse-engineer it; they could potentially install a virus on your car to gain access. Honestly, as a tech guy, I don’t trust computers enough to have one control my car.

helenslunch,

It already unlocks locally over Bluetooth.

morph3ous,

The issue you are experiencing likely has nothing to do with the VPN. Network connectivity is not needed to unlock the car. I have been in places with no cell phone signal and it still works.

I do sometimes experience the same issue you describe. If I wake up my phone, then it works. So it may be working for you not because you disabled the VPN, but because waking your phone sent out the Bluetooth signal letting the car know you were nearby.

helenslunch, (edited )

When I have the VPN on I get nothing but a “Session Expired” notice for several months at a time.

psivchaz,

It’s a bit of both! Certain commands to the car can be done locally via Bluetooth OR via Tesla servers. The tricky bit is that status always comes from the server. If you are on a VPN that is blocked (like I use NordVPN and it is often blocked) then the app can’t get status and as long as it can’t get status it may not even try a local command. It’s unclear to me under what circumstances it does local vs cloud commands, and it may have to do with a Bluetooth LE connection that you can’t really control.

When you don’t have service, or you’re on VPN, it may be worthwhile to try disabling and reenabling Bluetooth. I have had success with this before. If you’re using android, it seems like the widget also uses Bluetooth, so you could try adding the widget to your home screen and using that. You can also try setting the Tesla app to not be power controlled, so it never gets closed.

Either way, there’s a definite engineering problem here that feels like it should be fixed by Tesla. But I can at least confirm that, even in situations with zero connectivity, you should be able to perform basic commands like unlock and open trunk without data service.

jkrtn,

I’m glad the people with this device are getting traction on using it with their HA, but holy hell this is a complete non-starter for me and I cannot understand why they got it in the first place. There’s no climate automation I would ever want that is worth a spying device connected to the internet and a spying app installed on my phone.

ikidd,

Extend this to robot vacuums. I have no clue in hell why anyone would want their vacuum connecting to a cloud service that won’t be there in 2 years.

Auli,

Yep people should only purchase things that don’t require the cloud. Local control is the best.

Rentlar,

Someone tell Gianpiero! You could save up to 20% on Amazon fees in just 5 minutes. Commit to a Local API today!

Unchanged3656,

Probably more. Your app can use the local API then as well. And AWS is insanely expensive, especially if you forget to block log ingestion to Cloudwatch (ask me how I know).

jkrtn,

I’m cynical so I assume they are turning a profit selling user data. So the lost money is not from AWS expenses but from not having installed apps to steal more data.

jabathekek, in Sounds like Haier is opening the door!

The spacing in the email screwed up the formatting:

Dear Andre,

I’m Gianpiero Morbello, serving as the Head of IOT and Ecosystem at Haier Europe.

It’s a pleasure to hear from you. We just received your email, and coincidentally, I was in the process of sending you a mail with a similar suggestion.

I want to emphasize Haier Europe’s enthusiasm for supporting initiatives in the open world. Please note that our IOT vision revolves around a three-pillar strategy:

  • achieving 100% connectivity for our appliances,
  • opening our IOT infrastructure (we are aligned with Matter and extensively integrating third-party connections through APIs, and looking for any other opportunity it might be interesting),
  • and the third pillar involves enhancing consumer value through the integration of various appliances and services, as an example we are pretty active in the energy management opening our platform to solution which are coming from energy providers.

Our strategy’s cornerstone is the IOT platform and the HON app, introduced on AWS in 2020 with a focus on Privacy and Security by Design principles. We’re delighted that our HON connected appliances and solutions have been well-received so the number of connected active consumers is growing day after day, with high level of satisfaction proven by the high rates we receive in the App stores.

Prioritizing the efficiency of HON functions when making AWS calls has been crucial, particularly in light of the notable increase in active users mentioned above. This focus enables us to effectively control costs.

Recently, we’ve observed a substantial increase in AWS calls attributed to your plugin, prompting the communication you previously received as standard protocol for our company, but as mentioned earlier, we are committed to transparency and keenly interested in collaborating with you not only to optimize your plugin in alignment with our cost control objectives, but also to cooperate in better serving your community.

I propose scheduling a call involving our IOT Technology department to address the issue comprehensively and respond to any questions both parties may have.

Hope to hear back from you soon.

Best regards

Gianpiero Morbello Head of Brand & IOT Haier Europe

scrubbles,

Thanks, on my phone and can’t edit it well right now

zaphod, (edited ) in I love my Gitea. Any tips and tricks?

The idea of “self-hosting” git is so incredibly weird to me. Somehow GitHub managed to convince everyone that Git requires some kind of backend service. Meanwhile, I just push private code to bare repositories on my NAS via SSH.
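For anyone who hasn’t tried it: the whole “backend” really is just a bare repository reachable over SSH. A sketch using local paths so it runs anywhere git is installed (over SSH you’d swap the path for a URL like ssh://nas/volume1/git/project.git; the host and all paths here are examples):

```shell
# Clean up from any previous run, then create the "server" side: a bare repo.
rm -rf /tmp/demo-nas /tmp/demo-work
git init -q --bare /tmp/demo-nas/project.git

# Client side: make a working repo, commit, and push to the bare repo.
mkdir -p /tmp/demo-work
cd /tmp/demo-work
git init -q .
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "first commit"
git remote add nas /tmp/demo-nas/project.git   # over SSH this would be the ssh:// URL
git push -q nas HEAD
```

Clone, fetch, and pull work the same way against that path or URL; no daemon or web service is involved.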

azertyfun,

You’re completely missing the point. Even Gitea (much simpler than GitHub, nevermind GitLab) is much more than a git backend. It’s viewable in a browser, renders markdown, has integrated CI functionality, and so on.

Even for my meager self-host use-case, being able to view markdown docs in the browser is useful from time to time, even on my phone.

As for the things I use (a self-hosted) GitLab instance at work for… that doesn’t even scratch the surface.

ook_the_librarian,

Do you honestly think they’re “completely missing the point”? Read the meme. There’s no mention of Gitea. Self-hosting git is nothing to wiggle your tie over. Maybe setting up the things you’re talking about is, but git?

azertyfun,

The title of the post is literally “I love my Gitea”.

The content of the meme does conflate “git” with its various frontends (like Gitea), but it’s an incredibly common conflation, so who cares?

The person I responded to then went on a weird rant about how “git by itself is distributed” which is completely irrelevant to the point since OP’s Gitea provides a whole lot more.

ook_the_librarian,

I said “read the meme” because that is all I was addressing. The title is just engagement-bait as far as I’m concerned. It’s either a meme or question. I’m sure others are here for the question but not the meme. And therefore, I’m being engagement-baited. Who knows, but I was clear about what I was talking about.

I just think saying “you’re completely missing the point” to a comment that is perfectly on topic is completely uncalled for.

The reason I think git is dead simple to “self-host” is because I do it. I’m not a computer guy. I just used svn to version-control some papers with fellow grad students (it didn’t last, I was the only one who liked it), so now I use git for some notes I archive. I’m not saying there aren’t tools that would considerably upgrade the ease-of-use factor and require some tech skills I don’t possess, but I stand by my point.

platypus_plumba, (edited )

They didn’t convince anyone of anything; they just have a great free tier, so people prefer using it to self-hosting something. You can also self-host GitHub if you want the features they offer besides Git.

zaphod, (edited )

This post is about “self-hosting” a service, not using GitHub. That’s what I’m responding to.

I’m not saying GitHub isn’t valuable. I use it myself. And in any situation involving multiple collaborators I’d probably recommend that kind of tool–whether GitHub or some self-hosted option–for ease of user administration, familiar PR workflows, issue tracking, etc.

But if you’re a solo developer storing your code locally with no intention to share or collaborate, and you don’t want to use GitHub (as, again, is the case with this post) a self-hosted service adds a ton of complexity for only incremental value.

I suspect a ton of folks simply don’t realize that you don’t need anything more than ssh and git to push/pull remote git repositories because they largely cargo cult their way through source control.

HappyRedditRefugee,

Is running a docker container a lot of overhead?

Earnestly asking, since my opinion is skewed because I’m used to running containers.

zaphod, (edited )

Absolutely. Every service you run, whether containerized or not, is software you have to upgrade, maintain, and back up. Containers don’t magically alleviate the need for basic software/service maintenance.

HappyRedditRefugee,

Yes, but doesn’t that also apply to a machine running bare git?

Not using containers also adds some challenges, like possible dependency problems. I’d say running bare git is not a lot easier than having a container with, say, Forgejo.

zaphod, (edited )

No. It’s strictly more complexity.

Right now I have a NAS. I have to upgrade and maintain my NAS. That’s table stakes already. But that alone is sufficient to use bare git repos.

If I add Gitea or whatever, I have to maintain my NAS, and a container running some additional software, and some sort of web proxy to access it. And in a disaster recovery scenario I’m no longer just restoring some files on disk; I have to rebuild an entire service, restore its config and whatever backing store it uses, etc.

Even if you don’t already have a NAS, setting up a server with some storage running SSH is already necessary before you layer in an additional service like Gitea, whereas it’s all you need to store and interact with bare git repos. Put the other way, Gitea (for example) requires me to deploy all the things I need to host bare repos plus a bunch of addition complexity. It’s a strict (and non-trivial) superset.

HappyRedditRefugee,

I don’t know 😆 I’m really just trying to understand it in case, for example, I need to advise someone in such a situation :) My confusion probably comes from the fact that I have never hosted anything outside containers.

I still see it a bit differently. A well-structured container setup, with configs as files instead of bare commands and backed-up volumes, would be the same effort… but who knows. Regarding the rest, like proxies, you don’t really need one.

Thanks for taking the time to explain your point though!

zaphod, (edited )

Honestly the issue here may be a lack of familiarity with how bare repos work? If that’s right, it could be worth experimenting with them if only to learn something new and fun, even if you never plan to use them. If anything it’s a good way to learn about git internals!

Anyway, apologies for the pissy coda at the end, I’ve deleted it as it was unnecessary. Keep on having fun!

vzq,

Bare repos with multiple users are a bit of a hassle because of file permissions. It works, and works well, as long as you set things up right and have clear processes. But god help you if you don’t.

I find that with multiple users the safest way is to set up/use a service. Plus you get a lot of extra features like issue tracking and stuff.

zaphod, (edited )

Agreed, which is why you’ll find in a subsequent comment I allow for the fact that in a multi-user scenario, a support service on top of Git makes real sense.

Given this post is joking about being ashamed of their code, I can only surmise that, like I’m betting most self-hosters, they’re not dealing with a multi-user use case.

Well, that or they want to limit their shame to their close friends and/or colleagues…

Crashumbc, in I love my Gitea. Any tips and tricks?

I’m number 2
