Well, how about having a local API and no calls at all to your cloud infrastructure? Probably too easy, and then you can’t lock people into your ecosystem.
From any practical standpoint, this makes so much sense.
Sometimes my Tesla fails to unlock for some reason and I have to disable my VPN and then stand next to it like a God damn idiot for 10 seconds while it calls its servers in fucking California to ask them to unlock my car.
As if I needed yet another reason to never ever own a Tesla.
My car has this crazy technology in it: You can stick the key in the door and twist and it’ll unlock. Even if the network is down or the battery is dead. Arcane, right?
Hell yes! My sister-in-law has the same year as yours but the diesel version, and that thing is a champ. It’s rated at 45 mpg on the highway but she typically gets 50+, even with nearly 200k miles on it.
I had a 2004 1.8t Jetta for 12 years but I swapped it for a Prius. I love the Prius features and fuel economy, but I miss how damn quick my Jetta was, plus I loved the interior color scheme.
Haha yeah there are other, more reliable methods but the “phone as a key” is also super convenient when it works properly, which is most of the time. It just would be a lot smarter if it worked locally.
…Or if there were an alternative option that didn’t rely on software and electronics is my point.
Cars have had electronic remote keyless entry for decades. It’s not new. Some of them even have phone apps that duplicate that functionality. No one but Tesla has been stupid enough to remove the keyhole, though.
I understood your point. My point is those electronics make it more convenient to use. Would I appreciate ALSO having a physical unlock mechanism? Sure. It also increases the attack surface.
Cars have had electronic remote keyless entry for decades.
I think it could definitely be done locally, and I wouldn’t want a car where I have to connect to servers just to unlock it. But I’m also not sure I want a car that can be opened with a command on the car itself. The code to access your CAR being stored locally on the car, with no server-side validation, does seem kinda scary. It’s one thing for someone to get into your online login, where you can change the password; it’s another for someone to literally steal your car because they found a vulnerability. Storing it locally means people would reverse engineer it, and they could potentially install a virus on your car to gain access. Honestly, as a tech guy, I don’t trust computers enough to let one control my car.
The issue you are experiencing likely has nothing to do with the VPN. Network connectivity is not needed to unlock the car. I have been in places with no cell phone signal and it still works.
I do sometimes experience the same issue you’re describing. If I wake up my phone, then it works. So it may be working for you not because you disabled the VPN, but because you woke up your phone and it then sent out the Bluetooth signal to let the car know you were nearby.
It’s a bit of both! Certain commands to the car can be done locally via Bluetooth OR via Tesla servers. The tricky bit is that status always comes from the server. If you are on a VPN that is blocked (like I use NordVPN and it is often blocked) then the app can’t get status and as long as it can’t get status it may not even try a local command. It’s unclear to me under what circumstances it does local vs cloud commands, and it may have to do with a Bluetooth LE connection that you can’t really control.
When you don’t have service, or you’re on VPN, it may be worthwhile to try disabling and re-enabling Bluetooth. I have had success with this before. If you’re using Android, it seems like the widget also uses Bluetooth, so you could try adding the widget to your home screen and using that. You can also try setting the Tesla app to not be power-controlled, so it never gets closed.
Either way, there’s a definite engineering problem here that feels like it should be fixed by Tesla. But I can at least confirm that, even in situations with zero connectivity, you should be able to perform basic commands like unlock and open trunk without data service.
I’m glad the people with this device are getting traction on using it with their HA, but holy hell this is a complete non-starter for me and I cannot understand why they got it in the first place. There’s no climate automation I would ever want that is worth a spying device connected to the internet and a spying app installed on my phone.
Probably more. Your app can use the local API then as well. And AWS is insanely expensive, especially if you forget to block log ingestion to CloudWatch (ask me how I know).
I’m cynical so I assume they are turning a profit selling user data. So the lost money is not from AWS expenses but from not having installed apps to steal more data.
Nothing changed; the hardware is the same as before. Your little Pi servers are still doing the exact same work they did before. The only variables are prices on SBCs vs used small form factor x86s, and the short, short attention span of terminally online hobbyists.
Use whatever you like, no need to race after others’ subjective (and often hyperbolic) judgment.
It’s damage control: they realised what they did was earning them bad PR once news of it started spreading, so now they’re attempting to remedy it.
Oh absolutely agree, but this is where they can use it.
The dev can say that they obviously need an official plugin, and work with them on that because now they have 1,800 clones of an unofficial one that may not be optimized.
We also get to know that our tiny HA community has hit a critical mass large enough to get a corpo to freak out a bit
I did my part and sent them a “do this and I’ll never buy a Haier product” email. Corporations exist to maximize profits. Communities like ours just have to learn how to make it clear to them that shutting us out will hurt their profitability.
I think we should all be really proud of ourselves. We banded together and, regardless of WHY Haier is doing this, got them to open a line of communication. This is a huge win!
The whole point of spreading the word about an incident like this is to get public attention on it, and make the company realize that the way they’ve handled things was bad.
A letter like this indicates that they’ve realized they fucked up and they want to do things differently going forward. That doesn’t mean they’re suddenly trustworthy, but it does mean they can be negotiated with.
The correct response is to accept the offer of working together. We want to encourage companies to be cooperative and discourage insular, proprietary behavior. If you slap away the offered hand then you discourage future cooperation, and now you’re the roadblock to developing an open system.
When you start getting the results that you want, don’t respond with further hostility.
Keep pummeling them. There’s no integrity behind this, and going along will just let them get away with their bad behaviour.
They played the “We’ll sue your ass off” card first. That means it’s already in the legal realm; they never even tried to work with the OSS community. They basically said “fuck you” until the community replied, very clearly.
Had the community not responded by replicating the repo 1000+ times, and making a story about it, they would’ve continued down the path of slapping the little guy around.
They now realize they can’t compete with potentially 1000 people working on this, against them. They also fear they’ve pissed off some technophile who has some serious skills or connections. Wonder if they saw a sudden increase in probes on their internet interfaces.
Exactly this. I understand the cynicism, but it ultimately doesn’t matter what the motivation of a company walking back a poor decision is. We take the chance for mutual collaboration and hopefully everyone benefits.
On an individual level, that’s when people can evaluate if they still want to boycott and do whatever their own moral compass demands. But refusing to work together at this point just means we definitely don’t get the chance in the future to steer things in a better direction.
And even if the cooperation doesn’t last, it’s an opportunity for the open source developers to work with the product engineers and get direct information from them right now. There’s nothing as valuable as talking to the guy that actually designed the thing, or the guy who can make changes to the product code.
Even if that relationship doesn’t hold long term, the information gathered in the short term will be useful.
If I were part of this project this is what I’d be going for. Push the company to give you direct contact with the relevant engineers, right now while the negative public opinion is fresh and they’re most willing to make concessions, and then get as much out of that contact as you can. Take them at their word, make them actually back it up, take advantage of the offer to cooperate. Sort the rest of it out later.
I hate how cease-and-desist letters are essentially blackmail. Even if you did nothing wrong, you can still get fucked over by the costs of a potential legal battle.
It’s a bigger problem in the States than elsewhere. In the US, awarding legal costs is the exception, not the norm, so someone with a lot of money and access to lawyers can basically intimidate a defendant into avoiding court. In the rest of the world, courts are much more likely to award costs to a defendant who has done nothing wrong - if you file a frivolous lawsuit and lose, you’ll probably have to pay the costs of the person you tried to sue.
This guy’s in Germany, so I think he’d be alright if he clearly won. The issue, however, is that courts aren’t really equipped for handling highly technical cases and often get things wrong.
I’m absolutely at that point with Nextcloud. I kind of didn’t want to go the syncthing route, but I’ll probably give it a shot anyway since none of the NC alternatives seem any better.
Any guidance on this? I looked into Syncthing at one time to back up Android phones and got overwhelmed very quickly. I’d love to use it in a similar fashion to NextCloud for syncing between various computers too.
Well, it works in a different way than NextCloud. You don’t have a server, instead you just make a share between your computers and they are all peers.
It takes some getting used to the idea, but it’s actually much simpler than NextCloud.
@squidspinachfootball@marcos Syncthing syncs. It can do one-way syncs, but if your workflow is complex and depends on one-way syncs, it’s probably not what you want.
Sync things between operational systems, then replicate to non-operational systems, and back up to off-site segregated systems.
I was very intimidated as well. I’ll try to simplify it, but as always, check the documentation ;)
This is the process I used to sync retroarch saves between my Windows PC and Android phone (works well, would recommend, Pokemon is awesome). I’ve never done it on Linux, though I assume it’s not too different.
I downloaded the SyncTrayzor program so that it would run in the tray; again, I’m not sure what the equivalent is on Linux, or whether it would even be necessary there.
No shade to the writers, but the documentation isn’t super noob-friendly, as I found out. I’d recommend cutting out all the fluff and boiling it down to the bare essentials: download the program (whichever one seems right for your device; there’s an app for Android) and follow the process for syncing stuff (I believe I used a video guide, but it’s not actually as complicated as it seems).
If you need specific help I’d be happy to answer questions, though I only understand a certain amount myself XD
It really wasn’t all that complicated for me: install the client on two devices, set a share up on one device, go to the other device, hit “Add Device”, and put the share ID in. Then go back to the first device’s admin page and allow the share.
I’ve been running the new ownCloud (oCIS), and despite some quirks and very basic functionality, it’s run for 2+ years and survived multiple updates without major complications.
If you’re on Firefox on desktop/laptop, check out Bypass Paywall [0]. It was removed from the Firefox add-on store due to a DMCA claim [1], but can be manually installed (and auto-updates) from GitLab. The dev even provides instructions on how to add custom filters to uBlock Origin [2], so you don’t have to add another extension but still get some benefit.
Sadly, in the 4 years I’ve been using it, Jellyfin still hasn’t figured out how to correctly display series season covers, and it has some streaming bugs (no audio when the audio is DTS and PGS subs are enabled, etc.).
You should redo your library organization from scratch and let all the default plugins do the work. Mine looks great and I never changed anything, just followed the recommended file organization pattern for Movies and TV Shows.
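For anyone wondering what that pattern looks like, it’s roughly this (titles here are placeholders; the Jellyfin docs have the exact rules):

```
Movies/
  Some Movie (2009)/
    Some Movie (2009).mkv
Shows/
  Some Show (2010)/
    Season 01/
      Some Show S01E01.mkv
      Some Show S01E02.mkv
```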
Exactly. 99% of these issues come down to not naming the files the way Jellyfin needs, which I understand can be annoying if you have a large number of files to move over. And having the right access permissions on the files, if you’re on Linux.
Filebot is nice for that; it’s what I used when I first got into Plex and realized the reason I had so many problems was the way I named files. This was before I even knew Sonarr and Radarr existed; now you can get them to do it.
Iirc Plex supports transcoding for downloads, while Jellyfin only allows downloading the original file. But I’ve heard transcoding downloads is broken on Plex, so ymmv.
Intro skip is only available as a plugin on Jellyfin.
Also, Findroid has a better UI and supports downloads, while the official app has more features (i.e. the settings/admin panel).
It’s in the second paragraph. This is the beginning of the monetization for everything in Plex now that they have a good user base. They are starting to ramp up the milking.
It will become like any other shitty streaming service eventually.
Proprietary when flatpak exists, and it doesn’t properly address how apps should dynamically request access to things they need. Every time I’ve used either solution I’ve run into some permissions problem.
For desktop apps maybe. How do you run a flatpak from the cli? “flatpak run org.something.Command”. Awesome.
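Sure, you can paper over it with an alias, but you shouldn’t have to (app ID is the placeholder from above):

```bash
# wrap the verbose flatpak invocation in a shell alias
alias somecommand='flatpak run org.something.Command'
```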
Both suffer from not making it obvious what directories your application can access and not providing a clear message when you try to access files it can’t. The user experience sucks.
Yeah, they can fuck off. When their opening salvo was threats and legal bluster, I don’t see why anyone should trust an alleged olive branch now. The right thing to do was to send this email first, not second.
I have to work with Haier in my business now as well ever since they bought GE. They’re a shitty company that goes back on their word constantly (at least within the B2B space), and nobody should be giving them one thin dime.
Respectfully, I disagree. Yes, indeed this first message is PR damage control, but there is something to be gained here for the FOSS community.
This backtrack sends a message, discouraging other companies with legal departments from trying the same trick lest they risk losing sales. If a positive resolution comes out of this (A. Andre’s project becomes officially supported by Haier with more features whilst being more efficient with API calls, or B. Haier develops a local API option) then it shows other companies there is value in working together with the FOSS community rather than viewing them as an adversary or as competition to be eliminated.
Nah, this is Haier trying to save face. They saw how the story went, that the repo was forked a thousand times in a few hours. They know their engineering team can’t win, long term, against dedicated, pissed off geeks.
Would they play nice with you if the tables were reversed? No.
They already played the legal card, engaging with them at this point would be extremely naive.
Fuck them. Now is the time to pummel them even harder. Making them eat their words is what will send a message to the rest of the jackasses designing garbage and tracking us relentlessly for access to what should be trivial to engineer features.
Generally, an engineer wants their product to work well and work efficiently. They put effort into a product, and it feels good to see people benefit from that work. The ones making the decisions have money on their mind. If a FOSS version of their paid platform costs them too much money, they will shut it down. Not because it was the engineers’ decision, but because the ones making the decision likely don’t even know what GitHub is and just know it’s taking away that sweet subscription money.
They both represent the company. The company came on strong all ban-hammery, the news flashed around, his repo got forked over a thousand times in a matter of hours.
Haier found themselves on the defensive suddenly, so they got one of their engineers to play nice.
They now know they have 300k users who are pissed at them. People are choosing other products over this already.
Fuck them. With a pineapple. Corporations aren’t people, I owe them no consideration, no courtesy, especially when they act like this.
The idea of “self-hosting” git is so incredibly weird to me. Somehow GitHub managed to convince everyone that Git requires some kind of backend service. Meanwhile, I just push private code to bare repositories on my NAS via SSH.
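It’s honestly just this (hostname and paths here are made up; adjust to your setup):

```bash
# one time, on the NAS: create a bare repository (no working tree)
ssh mynas 'git init --bare /volume1/git/project.git'

# on your machine: add it as a remote and push
git remote add nas ssh://mynas/volume1/git/project.git
git push -u nas main

# cloning it elsewhere later is just as simple
git clone ssh://mynas/volume1/git/project.git
```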
You’re completely missing the point. Even Gitea (much simpler than GitHub, nevermind GitLab) is much more than a git backend. It’s viewable in a browser, renders markdown, has integrated CI functionality, and so on.
Even for my meager self-host use-case, being able to view markdown docs in the browser is useful from time to time, even on my phone.
As for the things I use (a self-hosted) GitLab instance at work for… that doesn’t even scratch the surface.
Do you honestly think they’re “completely missing the point”? Read the meme. There’s no mention of Gitea. Self-hosting git is nothing to wiggle your tie over. Maybe setting up the things you’re talking about is, but git?
The title of the post is literally “I love my Gitea”.
The content of the meme does conflate “git” with its various frontends (like Gitea), but it’s an incredibly common mix-up, so who cares?
The person I responded to then went on a weird rant about how “git by itself is distributed” which is completely irrelevant to the point since OP’s Gitea provides a whole lot more.
I said “read the meme” because that is all I was addressing. The title is just engagement-bait as far as I’m concerned. It’s either a meme or question. I’m sure others are here for the question but not the meme. And therefore, I’m being engagement-baited. Who knows, but I was clear about what I was talking about.
I just think saying “you’re completely missing the point” to a comment that is perfectly on topic is completely uncalled for.
The reason I think git is dead-simple to “self-host” is because I do it. I’m not a computer guy. I just used svn to version control some papers with fellow grad students. (It didn’t last; I was the only one that liked it.) So now I use git for some notes I archive. I’m not saying there aren’t tools that considerably upgrade the ease-of-use factor, which would require some tech skills I don’t possess, but I stand by my point.
They didn’t convince anyone of anything; they just have a great free-tier service, so people prefer using it over self-hosting something. You can also self-host GitHub if you want the features they offer besides git.
This post is about “self-hosting” a service, not using GitHub. That’s what I’m responding to.
I’m not saying GitHub isn’t valuable. I use it myself. And in any situation involving multiple collaborators I’d probably recommend that kind of tool–whether GitHub or some self-hosted option–for ease of user administration, familiar PR workflows, issue tracking, etc.
But if you’re a solo developer storing your code locally with no intention to share or collaborate, and you don’t want to use GitHub (as, again, is the case with this post) a self-hosted service adds a ton of complexity for only incremental value.
I suspect a ton of folks simply don’t realize that you don’t need anything more than ssh and git to push/pull remote git repositories because they largely cargo cult their way through source control.
Absolutely. Every service you run, whether containerized or not, is software you have to upgrade, maintain, and back up. Containers don’t magically alleviate the need for basic software/service maintenance.
Yes, but doesn’t that also apply for a machine running bare git?
Not using containers also adds some challenges, like possibly having dependency problems. I’d say running bare git is not a lot easier than having a container with, say, Forgejo.
Right now I have a NAS. I have to upgrade and maintain my NAS. That’s table stakes already. But that alone is sufficient to use bare git repos.
If I add Gitea or whatever, I have to maintain my NAS, and a container running some additional software, and some sort of web proxy to access it. And in a disaster recovery scenario I’m now no longer just restoring some files on disk; I have to rebuild an entire service, restore its config and whatever backing store it uses, etc.
Even if you don’t already have a NAS, setting up a server with some storage running SSH is already necessary before you layer in an additional service like Gitea, whereas it’s all you need to store and interact with bare git repos. Put the other way, Gitea (for example) requires me to deploy all the things I need to host bare repos plus a bunch of additional complexity. It’s a strict (and non-trivial) superset.
I don’t know. 😆 I’m really just trying to get it in case, for example, I need to advise someone in such a situation :) My confusion probably comes from the fact that I have never hosted anything outside containers.
I still see it a bit differently. A well-structured container setup with configs as files instead of bare commands, plus backed-up volumes, would be about the same effort… But who knows. Regarding the rest, like proxies, well, you don’t really need one.
Honestly the issue here may be a lack of familiarity with how bare repos work? If that’s right, it could be worth experimenting with them if only to learn something new and fun, even if you never plan to use them. If anything it’s a good way to learn about git internals!
Anyway, apologies for the pissy coda at the end, I’ve deleted it as it was unnecessary. Keep on having fun!
Bare repos with multiple users are a bit of a hassle because of file permissions. It works, and works well, as long as you set things up right and have clear processes. But god help you if you don’t.
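For reference, “setting things up right” looks roughly like this (group name is a placeholder; git’s --shared flag exists for exactly this):

```bash
# create a bare repo that keeps new objects group-writable
git init --bare --shared=group /srv/git/project.git

# hand ownership to a shared group, and set the setgid bit so
# new files and directories inherit that group
chgrp -R gitusers /srv/git/project.git
find /srv/git/project.git -type d -exec chmod g+s {} +
```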
I find that with multiple users the safest way is to set up/use a service. Plus you get a lot of extra features like issue tracking and stuff.
Agreed, which is why you’ll find in a subsequent comment I allow for the fact that in a multi-user scenario, a support service on top of Git makes real sense.
Given this post is joking about being ashamed of their code, I can only surmise that, like I’m betting most self-hosters, they’re not dealing with a multi-user use case.
Well, that or they want to limit their shame to their close friends and/or colleagues…
I thought I’d give this a shot, but the metrics/data collection flag was turned on by default and when I added a command to my docker-compose to turn them off, it was ignored. Then, I created an account and looked for a way to turn them off in the settings and there was none. You expect people interested in self-hosting OSS to be cool with sending data out of their network every time the server is started, a memo is created, a comment is created, a webhook is dispatched, a resource or a user is created?! Also, the metrics are collected by a 3rd party with their own ToS that could change at any time?
Holy hell, hard pass. I’d rather use a piece of paper.
Saved me the effort, thanks. Although, couldn’t you just block the container from talking outside your network? I can’t see why I’d need a memo app (server) to have access to the internet.
That’s not good enough in my opinion, it should be opt in, not opt out. They’re marketing it on their site as being more secure because you can self-host. It all just seems really skeevy.
It would appear that blocking app.posthog.com on the host/network resolves this. But I got the parameter to work too: as per www.usememos.com/docs/advanced-settings/metrics, use ‘--metric=false’ and bam, no DNS queries!
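In docker run form, that’s something like the usual memos quickstart with the flag appended (image, port, and volume are from the memos docs as I remember them; double-check against the current README):

```bash
docker run -d --name memos -p 5230:5230 \
  -v ~/.memos/:/var/opt/memos \
  neosmemo/memos:stable --metric=false
```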
Yeah, I’d assumed it would respect the --metric=false flag when running with docker run, but docker-compose is ostensibly supported and easier to work with. I was able to successfully change other configuration options (such as setting the db to use MySQL instead of the default SQLite) using the docker-compose ‘command’ block, but the metric flag specifically was ignored. It’s entirely possible that this is a bug and not an intentional attempt to hoover up user data. Either way, data collection should be opt-in by default (by law, imo).
Do NOT self-host email! In the long run, you’ll forget a security patch, someone breaches your server, blasts out spam and you’ll end up on every blacklist imaginable with your domain and server.
Buy a domain, DON’T use GoDaddy, they are bastards. I’d suggest OVH for European domains or Cloudflare for international ones.
After you have your domain, register with Microsoft 365 or Google Workspace (I’d avoid Google, they don’t have a stable offering) or any other email provider that allows custom domains.
Follow their instructions on how to connect your domain to their service (a few MX and TXT records usually suffice) and you’re done.
After that, you can spin up a VPS, try out new stuff, and connect it to your domain as well (A and CNAME records).
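Illustratively, the records end up looking something like this (every name and address here is a placeholder; your mail provider and VPS host give you the real values):

```
example.com.      MX     10 mail.provider.example.
example.com.      TXT    "v=spf1 include:spf.provider.example ~all"
vps.example.com.  A      203.0.113.10
app.example.com.  CNAME  vps.example.com.
```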
I’d throw in mailbox.org as a more privacy-focused alternative to Google and Microsoft. Been using them for years without issues. Only their 2FA solution sucks.
If you get your domain from OVH, you get one single mailbox for free (with as many aliases as you like, such as a different email address for every service/website you use).
I did as well, but then I went Microsoft and never looked back. Google’s platform still feels like a shitty startup with missing stuff everywhere, compared to Azure (or AWS).
The only thing I’m missing is Google Photos, but there are self-hosted alternatives out there that I’ll try soon.
All good advice. I’d recommend protonmail for mail hosting - I’ve had a very good experience with them, and the only downside is you have to use their client.
I’ll second not self hosting email unless you’re in it for the experience.
I’d also strongly caution against hosting email for friends and family unless you want to own that relationship for the rest of your life.
If you do it anyway, you’re going to end up locked into whatever solution you decide for a long time, because now you have users who rely on that solution.
If you still go forward, don’t use Google (or msft). Use a dedicated email service. Having your personal domain tied to those services just further complicates the lock in.
(I did this over a decade ago, with Google, when it was just free vanity domain hosting. I’ve been trying for years to get my users migrated to Gmail accounts.)
If I had it all to do over again, I’d probably set up accounts as vanity forwards to a “real” account for people who wanted them. That’s easy to maintain and move around, and you’re not dealing with migrating people’s OAuth to everything when you want to move or stop paying for it.
I have a bunch of users (friends and family) on a bunch of different domains. It’s honestly not so bad but yeah, you need a decent dedicated service.
Migrations aren’t simple but aren’t that complicated either (just did one last year).
I mainly need to copy their email over, but it’s also a good moment to check they’re using decent passwords and to have them freshen them.
I also need to update their webmail and IMAP/SMTP URLs in their bookmarks/email apps, but I’ve been playing with DNS CNAMEs for this purpose and it’s mostly working OK (aliasing one of my domains to the provider’s, so I only have to update the DNS, which I do anyway for a mail migration).
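Concretely, something like this (names are placeholders):

```
imap.mydomain.example.  CNAME  imap.provider.example.
smtp.mydomain.example.  CNAME  smtp.provider.example.
```

That way their clients only ever know about my domain, and a provider switch is mostly a DNS update.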
My mistake was using Google back when it was just the ability to have a personal domain as your Google account. But they kept expanding and morphing that into what is now Google Workspace. Migrating people off of that requires them to abandon their Google accounts and start over. If it were just email, it would be a much simpler prospect to change backends.
Certainly. But what I’m trying to say is it’s not just email. My users are using my domain as their Google account: all Google services, OAuth, etc., not just email. To do it right I need to get them to migrate their Google services to a gmail.com account.
I currently selfhost mailcow on a small VPS but I would like to move the receiving part to my homelab and only use a small VPS or service like SES for sending.
I set this up a couple years ago but I seem to remember AWS walking me through the initial setup.
First you’ll need to configure your domain(s) in SES. It requires you to set some DNS records to verify ownership. You’ll also need to configure your SPF record(s) to allow email to be sent through SES. They provide you with all of this information.
Next, you’ll need to configure SES credentials or it won’t accept mail from your servers. From a security standpoint, if you have multiple SMTP servers I would give each a unique set of credentials but you can get away with one for simplicity.
I’ve got postfix configured on each of my VPS servers, plus an internal relay, to relay all mail through SES. To the best of my knowledge it’s worked fine. I haven’t had issues with mail getting dropped or flagged as spam.
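The relay config itself is small; per the AWS docs it’s roughly these main.cf settings (region is a placeholder, use whichever one your SES identity lives in):

```
relayhost = [email-smtp.us-east-1.amazonaws.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

Plus your SES SMTP credentials in /etc/postfix/sasl_passwd (run postmap on it afterwards).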
There is a cost, but with my email volumes (which are admittedly low) it costs me 2-3 cents a month.
I’d avoid Google, they don’t have a stable offering
What do you mean by not stable?
I’ve been (stuck with) Google Workspace for many, many years - I was grandfathered in from the old G Suite plans. The biggest issue for me is that all my Play store purchases for my Android are tied to my Workspace’s identity, and there’s no way to unhook that if I move.
I want to move. I have serious trust issues with Google. But I can’t stop paying for Workspaces, as it means I’d lose all my Android purchases. It’s Hotel fucking California.
But I’ve always found the email to be stable, reliable, and the spam filtering is top notch (after they acquired and rolled Postini into the service).
I mean, they kill services willy nilly. Sure Gmail will probably survive, but the rest drove me away (Reader, Music, …).
Regarding your Android purchases: At the time of my move I went through my list of apps I bought and tallied up the ones that I still used. It was less than $50 of repurchases.
Don’t let those old purchases hold you back. Cut this old baggage loose.
At the time of my move I went through my list of apps I bought and tallied up the ones that I still used. It was less than $50 of repurchases.
Yeah, I know this what I should do too. As someone else said in this comment thread, gotta tear that bandaid off at some point. Just shits me that I should have to. But the freedom after doing it… <chef’s kiss>
“But I shouldn’t have to” is a trap, everywhere it occurs. It cripples one’s ability to act on an emotional level, and manifests as all kinds of resistances and avoidances that ultimately prevent you from seeing the problem clearly - and if you somehow do see the problem clearly, you still don’t want to do anything about it.
The world owes you nothing. You exist. If you want love and fairness and a reasonable world, love and be fair and be reasonable, and choose to work together with those who are. Where you work, what you spend your time on, where you spend your money, and who you spend your time with are your places of impact. Don’t let others steal that - particularly over ‘but I shouldn’t have to defend myself.’
I tore that bandaid off a while ago. Same thing with trust issues and Google.
Since then I’ve set up a family account and use a regular Gmail account for app store purchases, so I can change providers at any time. I can share most of my app purchases with family. I don’t actually check the Gmail email; I just use it for Android services.
Yeah, that’s the other thing that shits me. Paying for my wife and me on Workspace, and we don’t have family sharing rights. We’re literally paying to be treated like second-class citizens!
Recently, we've observed a substantial increase in AWS calls attributed to your plugin, prompting the communication you previously received as standard protocol for our company, but as mentioned earlier, we are committed to transparency and keenly interested in collaborating with you not only to optimize your plugin in alignment with our cost control objectives,
i get it; their amazon account gets hit hard by some plugin data stream, they trace the source and kill it for monetary reasons. makes total sense. handled terribly, but still, i also completely understand getting some giant bill from amazon and freaking the fuck out.