Not a lawyer; would this likely stand up in court? Obviously I wouldn’t risk it were I the dev, but just curious.
It’s pathetic that I’ll happily recommend my Emporia Vue 2 energy monitor to folks running HA — not because it works out of the box, but because the company is aware of the community integration projects and seems ok with it, even if they don’t actually support it. (An ESPHome firmware flash gives you local control — it’s been pretty great!)
Not a lawyer; would this likely stand up in court?
I’m not a lawyer either, but I don’t think so.
The developer of this Home Assistant integration is German. European law allows people to reverse engineer apps for the purpose of interoperability (Article 6 of the EU software directive), so observation of the app’s behaviour or even disassembling it to create a Home Assistant integration is not illegal.
In general, writing your own code by observing the inputs to and outputs from an existing system is not illegal, which is for example how video game emulators are legal (just talking about the emulator code itself, not the content you use with it).
If it’s a Terms of Service violation, it’d be the users that are violating the ToS, not the developer. In theory, the Home Assistant integration could have been developed without ever running the app or agreeing to Haier’s Terms of Service, for example if the app is decompiled and the API client code is viewed (which again is allowed by the EU software directive if the sole purpose is for interoperability).
The code in this repo is likely original Python code that was written without using any of Haier’s code and without bypassing any sort of copy protection, so it’s not a DMCA infringement either.
The problem is that Tailscale gives your server a “magic” ip, which isn’t the same one as on your local network. On your local network, do you access them by port? Or reverse proxy?
I think this is what you should look into. Are the services in Heimdall listed with the local IP or host names? Or are they referenced with the tailscale IP?
Three things I want to add here:
On Tailscale I can only access my home lab’s root page; the services are reachable at paths like domain.tld/service.
service.domain.tld is not supported by Tailscale. (See the GitHub issue.)
The local domain is different from the Tailscale domain. If you want to use them with a reverse proxy (nginx, Caddy), you need rules configured for your Tailscale MagicDNS domain too.
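A minimal Caddyfile sketch of that last point — the hostnames here are made up (`nas.home.lan` for the local DNS record, `nas.tailnet-name.ts.net` for the MagicDNS name), but the idea is that one site block answers on both names:

```caddyfile
# Hypothetical hostnames: the first is the local DNS record,
# the second is the MagicDNS name Tailscale assigns on the tailnet.
# Listing both means the proxy serves the same backend on either domain.
nas.home.lan, nas.tailnet-name.ts.net {
    reverse_proxy 192.168.1.10:5000
}
```

In nginx the equivalent would be listing both names in `server_name`.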
90% of network traffic uses the primary DNS server, but some things like to use both, or exclusively the second one, on random days.
I use Gravity-Sync to keep the settings/lists between them identical. (lots of local dns records for local self-hosted stuff, and each device has a static ip + dns record to identify it easily in logs)
I tried that; devices just query the alternate DNS when they get nothing from Pi-hole. I use AdGH (AdGuard Home), and I guess there is a setting where you can choose the answer for blocked stuff, like 0.0.0.0, empty, etc. If you set that to 0.0.0.0, devices won’t query the 2nd DNS (I hope) while AdGH is up. But it is best to have a 2nd Pi-hole/AdGH; I have one on my Proxmox box and another on a Pi, synced with AdGH sync.
Edit: if you don’t have another Pi, use NextDNS.
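For reference, the setting being described lives in AdGuard Home’s config (or the UI under Settings > DNS settings > Blocking mode). A fragment of `AdGuardHome.yaml` showing it, assuming the stock file layout:

```yaml
# Fragment of AdGuardHome.yaml. With null_ip, blocked domains resolve
# to 0.0.0.0 / :: instead of an empty or refused answer, so clients
# get a definitive (dead) address rather than failing over to their
# secondary DNS server.
dns:
  blocking_mode: null_ip
```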
Throwing my +1 behind Hetzner, it’s so much more bang for your buck than with a VPS and I’ve been pleased with the stability and uptime I get out of my auction box.
I run Pi-hole on Proxmox, and also OPNsense in the same box. Then you can forward all port 53 traffic to your Pi-hole. Some devices have hard-coded DNS that will bypass the DHCP-provided DNS.
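On OPNsense that redirect is a GUI port-forward (Firewall > NAT > Port Forward); on a plain Linux router the same idea can be sketched with nftables. The Pi-hole address and interface name below are hypothetical:

```nft
# Redirect any DNS query not already going to the Pi-hole (192.168.1.53
# here) back to it, defeating hard-coded DNS on the LAN interface.
table ip nat {
    chain prerouting {
        type nat hook prerouting priority dstnat;
        iifname "lan0" ip daddr != 192.168.1.53 udp dport 53 dnat to 192.168.1.53
        iifname "lan0" ip daddr != 192.168.1.53 tcp dport 53 dnat to 192.168.1.53
    }
}
```

If the Pi-hole sits on the same LAN segment as the clients, a masquerade/SNAT rule may also be needed so its replies come back through the router.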
Lol - not my first rodeo. I’m blocking dns.google as well, and I’m 99.999% certain Google won’t have coded Chromecasts to use anyone else’s DNS servers.
Ha! This is my new way of looking at my smart devices. I’ll sell you off if you don’t do what I want, and buy something that does. Very much a threat.
I recently factory reset all my Roku TVs, and didn’t connect them to the internet… and they work much better now.
Roku broke big time when I insisted on privacy. I blocked the entire Roku domain, and like clockwork it broke the apps on a 1-month schedule to force a network release for reinstall, which allowed it to phone home. lol no. I trashed it. They are dumb TVs now.
Since you’ve probably been using the SMB protocol to access the NAS, you need to understand a few things about the NFS protocol, which functions differently. An NFS mount acts like a mapping of the entire filesystem, rather than of a specific user. That means that if there are differences between the systems, you may get access errors. For example, the default user in Synology has a uid of 1024, but most client systems default to 1000. This means your user may not have access to the share or files, even if you have it mounted on the client.
One thing to check is what your shared folder’s NFS squash setting is. This is found in Control Panel > Shared Folder, then the NFS Permissions tab. If it’s set to “No mapping”, then uids must match. The easiest setup is “Map all users to admin”, but you may encounter issues with that later if you switch back to SMB, since new files will be owned by admin.
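A quick way to sanity-check the uid situation and mount the export from a Linux client — the host, export path, and usernames below are examples:

```shell
# Compare uids; with "No mapping" squash these must match for access to work.
id -u                   # client-side uid, typically 1000 on Linux desktops
ssh admin@nas id -u     # Synology's default admin user is typically 1024

# Mount the NFS export (hypothetical IP and volume path)
sudo mkdir -p /mnt/nas
sudo mount -t nfs 192.168.1.10:/volume1/share /mnt/nas
```

If the uids differ and squash is “No mapping”, either create a client user with the matching uid or change the squash setting on the NAS.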
Thanks for your help! I did set up my NAS share as NFS-capable, and I mapped the users to admin. Using the command mentioned in my other comment, I could mount the share successfully and find it in several applications. Cheers!
You could use just a simple Apache (or an even simpler static file server) with no authentication whatsoever, accessible only from your own network. Then add a reverse proxy gateway such as Traefik, Caddy, or whatever else, with Authentik as middleware. The user heads to the site (e.g. files.yourdomain.ext), the reverse proxy gateway bounces the request to the middleware (i.e. Authentik), which requires SSO via whatever authority you’ve got set up; the user gets bounced back, and the reverse proxy gateway serves up the static content from the internal network (e.g. 172.16.10.3) without authentication.
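With Caddy as the reverse proxy, that flow can be sketched with its `forward_auth` directive. The upstream address, domain, and Authentik host here are assumptions; the outpost path follows Authentik’s documented Caddy forward-auth pattern:

```caddyfile
# Hypothetical setup: static files are served only after Authentik
# approves the request.
files.yourdomain.ext {
    forward_auth authentik:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers X-Authentik-Username X-Authentik-Email
    }
    # Internal static server with no auth of its own
    reverse_proxy 172.16.10.3:80
}
```

Traefik’s ForwardAuth middleware works the same way conceptually, just configured via labels or a dynamic config file.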
I currently have Nextcloud and am looking to move away from it, mainly because my calendar subscriptions somehow broke and all calendars I subscribe to now are just empty (the old ones are still populated), but also because it makes the subscribed-to calendars available as webcal, which Thunderbird doesn’t recognize.
The simplest solution would be to install Debian. The thing to note is that the Debian installer is designed to be multipurpose so it will default to installing a GUI.
Assuming you can boot off a live USB with the Debian installer, you can follow the steps until you get to the tasksel software selection; from there, uncheck GNOME and check standard system utilities and SSH server. Also, Debian defaults to separate root and user accounts; I would recommend disabling root (see steps below).
On a different machine, ssh into the server (I’m using debian.local but you should replace that with a hostname or IP)
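The connect-and-disable-root steps might look like this — `youruser` and `debian.local` are placeholders for whatever you chose during install:

```shell
# From another machine, connect as the regular user created during install
ssh youruser@debian.local

# On the server: give that user sudo, then lock the root account.
# Run these as root via `su -` first.
su -
apt install -y sudo
usermod -aG sudo youruser
passwd -l root    # lock root's password; use the user + sudo from now on
exit
```

Locking the password (rather than deleting the account) is reversible with `passwd -u root` if you ever need it back.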
Now you have a system to set things up on. I would start by enabling automatic updates and installing Docker Compose. (Docker Compose allows you to deploy software very quickly in containers via a YAML spec.)
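One way to do both on Debian — `unattended-upgrades` is Debian’s stock auto-update mechanism, and Docker’s convenience script is one of several documented install routes (Docker’s apt repository is the alternative if you’d rather pin versions):

```shell
# Automatic security updates
sudo apt update && sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Docker Engine + the compose plugin via Docker's convenience script
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker "$USER"   # log out/in for the group change to apply

# Sanity check
docker compose version
```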
Thanks, I decided to see what happened with a Mint install (before I saw your reply) as a toe-in-the-water thing to learn more about the OS and see what stuff was like. I only KiTTY into a Linux server for work and do some basic tasks on it occasionally, so I was interested.
An … interesting experience… trivial install, easy enough to understand the UI, but I entirely failed to get a Plex server working. Nothing on the network can see it (local works fine), which doesn’t make much difference because Plex has nothing to serve, since it can’t see the folder with the movies on it due to, I believe, ownership issues (the files are on a portable USB drive).
Still fiddling but most help documents descend into arcane command line arguments very quickly and are generally “wrong” in that they suggest editing files that don’t exist in folders that aren’t there.
Still… a learning experience :) (Easy enough to kill it and try Debian if I can’t work out chown!)
Hah! Apparently in the long list of UFW commands I was running, the first one didn’t run or I missed it. I can see the server now at least; just need it to see the files!
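For anyone hitting the same two walls, a sketch of both fixes — Plex’s default port is TCP 32400, and the media path and `plex` user below are examples (check your actual mount point with `ls -l /media`):

```shell
# Open Plex's default port to the network
sudo ufw allow 32400/tcp
sudo ufw status verbose

# Let the plex service user read the media on the USB drive
sudo chown -R plex:plex /media/usb/movies
sudo chmod -R a+rX /media/usb/movies
```

Caveat: if the USB drive is formatted NTFS or FAT, `chown` has no effect; ownership there is set at mount time via `uid=`/`gid=` mount options instead.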
Entertaining but the wife is getting impatient :/
I personally graduated from a Rpi3b to an Intel NUC years ago and never looked back. Real RAM slots and Storage options internally and you can get as nice a processor as your budget allows. So my vote is to move to the SFF PC and let your Pi stick around for other projects.
Thanks for your insights. I thought about a NUC as well, but AFAIK it doesn’t have PCIe slots? So I wouldn’t be able to install e.g. a graphics card or a PCIe Coral?
I wouldn’t go NUC if you need a PCIe slot. The HP you were talking about would fit the bill though.
I believe they make a Coral that fits where the wifi chip goes too. As long as you are ok ditching the wifi/bt functionality for a TPU. For a server doing image processing that’s almost a no-brainer to me.