Honestly I have no clue. My train of thought was that if you could register it (which I haven't tested), then the warranty could work. I was kind of hoping someone here or in the original post had been through my situation and could clarify things.
I’ve been using AWS R53 for this for ages and it works well. Not specifically recommending AWS but using dynamic updates rather than a DDNS service (or running your own name server which I’ve also done).
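The dynamic-update approach can be sketched in a few lines; this is a minimal example assuming boto3 and a hosted zone you control (the zone ID and record name below are placeholders, not real values):

```python
# Sketch of a Route 53 "dynamic DNS" updater. Assumes boto3 credentials
# are configured; zone ID and hostname are hypothetical placeholders.
import urllib.request


def build_upsert(name: str, ip: str, ttl: int = 300) -> dict:
    """Build the UPSERT change batch for an A record."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }


def update_record(zone_id: str, name: str) -> None:
    import boto3  # imported lazily so the helper above needs only stdlib
    # checkip.amazonaws.com returns the caller's public IPv4 address
    ip = urllib.request.urlopen(
        "https://checkip.amazonaws.com").read().decode().strip()
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch=build_upsert(name, ip)
    )

# usage (placeholders): update_record("ZXXXXXXXXXXXXX", "home.example.com.")
```

Run from cron every few minutes and `UPSERT` makes it idempotent: the record is created if missing, overwritten if the IP changed.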
Sir, this is Lemmy. People treat the applications and hardware you use as a matter of ethical alignment, and switching to FOSS literally gets approval on the level of a religious conversion.
It’s no wonder people around here care so much about random people’s opinions, the place practically filters for it.
Not sure what kind of tinker board you’re working with, but the power of Pis has increased dramatically across generations. There are tasks that would run slowly on a dedicated Pi 2 but that a Pi 4 handles easily while running a half dozen other things in parallel.
The older ones can still be useful, just for less intensive tasks.
Out of interest, from someone with an RPi 4 and Immich: did you deactivate the machine learning? I did, since I was worried it would be too much for the Pi; just curious to hear whether it’s doable after all.
If z2m, zwavejs, … are installed from the add-on store of HA, all you have to do is create a full backup of HA, and all your automations will be saved and restored automatically.
I am running HA in a container, so that’s not an option, unfortunately. If I’m being honest, though, it’s probably not a bad idea to start fresh with HA and re-import individual automations one-by-one, because HA has a lot of “slop” leftover from when I was first learning it and playing around with it.
I will provide a word of advice since you mentioned messiness. My original server was just one physical host that I would install new stuff onto. Then I started realizing that I would forget about things, or that if I removed something later there might still be lingering related files or dependencies. Now I run all my apps in Docker containers and use docker-compose for every single one. No more messiness or extra dependencies. If I try out an app and don’t like it, boom, container deleted, end of story.
Extra benefit is that I have less to backup. I only need to backup the docker compose files themselves and whatever persistent volumes are mounted to each container.
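As a sketch of that layout (the app, ports, and paths here are just examples, swap in whatever you run): one compose file per app, with persistent state mounted from a known directory so the backup set is obvious.

```yaml
# docker-compose.yml for one app; only this file and ./data
# need to be backed up
services:
  freshrss:                    # example app
    image: freshrss/freshrss:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./data:/var/www/FreshRSS/data   # all persistent state lives here
```

`docker compose up -d` to run it, `docker compose down` and delete the directory to make it as if it never existed.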
I forgot to mention, I do use docker-compose for (almost) all the stuff I’m currently using and, yes, it’s pretty great for keeping things, well… containerized, haha. Clean, organized, and easy to tinker with something and completely ditch it if it doesn’t work out.
They both qualify as “open, federated messaging protocols”, with XMPP being the oldest (about 25 years old) and an internet standard (IETF), but at this point we can consider Matrix to be quite old, too (10 years old). On paper they are quite interchangeable: they both focus on bridging with established protocols, etc.
Where things differ, though, is that Matrix is practically a single-vendor implementation: the same organization (Element/New Vector/however it’s called these days) develops both the reference client and the reference server. Which, incidentally, is super complex, not well documented (the code is the documentation), and practically not compatible with the other (semi-official) implementations. This is a red flag, because it also happens that this organization was built on venture capital money with no financial stability in sight. XMPP is a much more diverse and accessible ecosystem: there are multiple independent teams and corporations implementing servers and clients, and the protocol itself is very stable, versatile, and extensible. This is how you find XMPP today running the backbone of the modern internet: dispatching notifications to all Android devices, acting as the signaling system behind millions of IoT devices, and providing messaging to billions of users (WhatsApp is, by the way, based on XMPP).
Another significant difference is that, despite 10 years of existence and millions invested into it, Matrix still has not reached stability (and probably never will): the organization recently announced Matrix 2 as the (yet another) definitive answer to the protocol’s shortcomings, without changing anything about what makes the protocol so painful to work with, and the requirements (compute, memory, bandwidth) to run Matrix at even a small scale are still orders of magnitude higher than XMPP’s. This has discouraged many organizations (even serious ones, like Mozilla, KDE, …) from running Matrix themselves, and further contributes to the de facto centralization and single point of control that federated protocols are meant to prevent.
I’ve used Matrix for months and agree with most points. I would like to try XMPP but it is clear that it does not have the best onboarding experience.
The problem I’ve observed with XMPP as an outsider is the lack of a standard. Each server or client has its own supported features and I’m not sure which one to choose.
That’s a valid concern, but I wouldn’t call it a problem. There are practically two types of clients/servers: the ones that are maintained, which work absolutely fine and well together, and the rest, the unmaintained/abandoned part of the ecosystem.
And with the protocol being so stable and backwards/forwards compatible in large part, those unmaintained clients will still work, just not with the latest and greatest features (XMPP has the machinery to let clients and servers advertise their supported features, so the experience is at least cohesive).
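That advertising machinery is Service Discovery (XEP-0030): a client asks an entity what it supports and gets back its identities and feature namespaces. Roughly (abbreviated stanzas, example.org is a placeholder domain):

```xml
<!-- client asks the server what it supports -->
<iq type='get' to='example.org' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>

<!-- server answers with identities and supported feature namespaces -->
<iq type='result' from='example.org' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <identity category='server' type='im'/>
    <feature var='urn:xmpp:carbons:2'/>   <!-- message carbons -->
    <feature var='urn:xmpp:mam:2'/>       <!-- message archive -->
  </query>
</iq>
```

A client that doesn’t see a feature advertised simply doesn’t enable it, which is why mismatched client/server versions still interoperate.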
Which client would you recommend?
Depends on which platform you are on and the type of usage. You should be able to pick one as advertised on joinjabber.org; that should keep you away from the fringe/unmaintained stuff. Personally I use gajim and monocles.
This exactly. If you already have Pis they are still great. Back when they were $35 it was a pretty good value proposition with none of the power or space requirements of a full size x86 PC. But for $80-$100 it’s really only worth it if you actually need something small, or if you plan to actually use the gpio pins for a project.
If you’re just hosting software, a several-year-old used desktop will outperform it significantly and cost about the same.
True. I did some rough math when I needed to right-size a UPS for my home server rack and estimated that running a Pi4 for a year would cost me about $8 worth of electricity and that running an x86 desktop would cost me about $40. Not insignificant for sure if you’re not going to use the extra performance that an x86 PC can offer.
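That rough math is easy to sanity-check. Assuming roughly 5 W average draw for the Pi, 25 W for the desktop, and $0.18/kWh (all assumed numbers, not from the original post):

```python
def yearly_cost(watts: float, dollars_per_kwh: float = 0.18) -> float:
    """Cost of running a device 24/7 for a year at the given average draw."""
    kwh_per_year = watts * 24 * 365 / 1000  # W -> kWh over a year
    return kwh_per_year * dollars_per_kwh

print(round(yearly_cost(5), 2))   # Pi 4 at ~5 W average
print(round(yearly_cost(25), 2))  # x86 desktop at ~25 W average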
You’re quite right about Pi 5 power efficiency; an Alder Lake N100/i3 will smoke it in ops/watt given the right board. But the context was “a several year old used desktop”, which the Pi will handily beat.
Depends on what it’s doing. The Pi5 has lower idle power usage but if it’s under constant load it’s actually very inefficient. Keep in mind that the Pi5 has a 25W max TDP, almost as high as the N100.
The reason the N100 seems less efficient in Jeff’s video is that it’s clocked a lot higher, and dynamic power rises much faster than linearly with clock speed, since voltage has to rise along with frequency.
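A common first-order CMOS model is P ≈ C·V²·f; under the toy assumption that voltage scales linearly with frequency (real voltage/frequency curves are flatter), power grows roughly with the cube of the clock. A sketch with made-up baseline numbers:

```python
def dynamic_power(freq_ghz: float, base_freq: float = 1.0,
                  base_voltage: float = 0.8, base_power: float = 2.0) -> float:
    """First-order CMOS dynamic power: P ~ C * V^2 * f.
    Toy assumption: voltage scales linearly with frequency."""
    v = base_voltage * (freq_ghz / base_freq)
    return base_power * (v / base_voltage) ** 2 * (freq_ghz / base_freq)

# doubling the clock costs about 8x the dynamic power under this model
ratio = dynamic_power(2.0) / dynamic_power(1.0)
```

This is why a chip clocked well past its efficiency sweet spot can lose badly in ops/watt even when it wins on raw throughput.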
The Pi5 is made on the 28nm node, which is from around 2011. Of course it has other efficiency improvements like the GPU and the ARM architecture, but pound for pound I don’t think the Pi5 even beats a six-year-old desktop in efficiency if the desktop is properly downclocked and not running inefficient HDDs or the like.
Rockchip boards on the other hand are made on 22nm, which is why they tend to be a bit more efficient.
I kept buying Pi Zero Ws, hats and phats then put them all in a drawer cos I couldn’t decide what to do with them. I think I’ve got about 7 or 8. I really should do something with them.
Pwnagotchi is an A2C-based “AI” leveraging bettercap that learns from its surrounding WiFi environment to maximize the crackable WPA key material it captures (either passively, or by performing authentication and association attacks). This material is collected as PCAP files containing any form of handshake supported by hashcat, including PMKIDs, full and half WPA handshakes.
Your bridge isn’t bridging properly. If Router B is sending a destination unreachable then the packets are being handled on it further up the stack at layer 3 by some sort of routing component rather than by a layer 2 bridging one.
Run `ip route` and `ip route get $CLIENT_PUBLIC_IP` on router B and see if it has a route to the client, and/or if the default route is correct. Its default gateway might not be set correctly (it should be router A).
and responds appropriately (SYN, ACK),
Does it respond to the client address (public IP?)
I’m not exactly sure what the previous issue was, but it appears the bridge that was previously in use was broken in some way. I have since switched the primary router to one that supports WDS and created a WDS bridge between the two, and now everything is working as expected.
I have a Pi which I use as an Apple TV/Firestick alternative; it works very well and would be pretty pointless with a larger PC, IMO. Servers I don’t do with small Pis, but rather with old computers. I think all kinds of ultra-portable devices will be a good fit for the Pi and its derivatives.
For folks that want to get into it: pine64 is open source, but I haven’t tried it yet. Thinking of it, though. They even have a watch.
The two things to keep in mind with pine64 are that they ship hardware before the software is ready, and, because they are less popular, there is less support.
I like their hardware, but it’s just something to keep in mind. The good news is that, to my knowledge, all of their single-board computers can run regular Linux.
The main shortcoming is that the software hasn’t matured yet. It’s true you could use Debian or Gentoo and get a decent machine, but I would hold off using it for anything important. You won’t find RISC-V images on Docker Hub, and Flathub only barely has ARM support.
People are shitting on them because the price point for arm sbcs has risen, while the price point for small x86 computers has come down. Also, x86 availability is high and arm sbc availability has become unreliable. They also aren’t generally supported nearly as well. If you don’t need more power and you already have them on hand there’s no reason not to use them.
I’m curious, what’s an example of a mini x86 machine comparable to a raspberry pi? I just did research and ended up buying a RPI 5. I may have not known what to look for, but what I found in the x86 space was $200+ and seemed pretty underwhelming compared to a $80 SBC on arm.
In 2022, when Pi4s were going for $150-200, I managed to get a 7th-gen NUC for about $150. I was looking to start Home Assistant, so both were viable options, but even with the Pi5 coming close to $100 retail, spending 50% more gets you a lot more performance: a 7th-gen Intel i5/i7 mobile chip, 16GB of RAM, and a 256GB NVMe.
I don’t know what a pi5+ is, unless you mean orange pi 5+?
I just bought a RPI 5 8GB (base price $80), all accessories in, for like $115. It never occurred to me that this would’ve been considered “expensive”, but a lot of people in this thread are saying so because rpis used to be $30. I mean the price has increased, but hasn’t the price of literally everything increased noticeably at the same time?
Pi5+ just because I’d originally written Pi5+PS/case/SD.
And you’re right that everything has gotten more expensive, but $35 in 2016 (Pi-3) is only $45 today (and you can still get a 3B for $35). The older Pis hit, for me, a sweet spot of functionality, ease, and price. Price-wise, they were more comparable to an Arduino board than a PC. They had GPIOs like a microcontroller. They could run a full operating system, so they were easy to access, configure, and program, without the added overhead of cross-compiling or directly programming a microcontroller. That generation of Pi was vastly overpowered for replacing an Arduino, so naturally people started running other services on them.
Pi 3 was barely functional as a desktop, and the Pi Foundation pushed them as a cheap platform to provide desktop computing and programming experience for poor populations. Pi4, and especially Pi5, dramatically improved desktop functionality at the cost of marginal price increases, at the same time as Intel was expanding its inexpensive, low-power options. So now, a high-end Pi5 is almost as good as a low-end x86, but also almost as expensive. It’s no longer attractive to people who mostly want an easy path to embedded computing, and (I think) in developed countries, that was what drove Pi hype.
The Pi Zero, at $15, is more attractive to those people who want a familiar interface to sensors and controllers, but it isn’t powerful enough to run NAS, libreelec, pihole, and the like. Where “Raspberry Pi” used to be a melting pot for people making cool gadgets and cheap computing, they’ve now segmented their customer base into the Pi Zero for gadgets and the Pi-400/Pi-5 for cheap computing.
I really was asking. I did a little research and concluded any x86 machine I could buy would be too slow for reliable video playback unless I spent over $200. I am open to actually being wrong there though.
No idea, honestly, what the popular perception of N100 platform is. It only came to my mind because I’d watched www.youtube.com/watch?v=hekzpSH25lk a couple days ago. His perspective was basically the opposite of yours, i.e.: Is a Pi-5 good enough to replace an N100?
You’d be looking at used mini PCs. I’ve heard really good things about Lenovo. It’s not necessarily exactly comparable in price, but the reason people are souring on ARM SBCs, and especially Pis, is that a more powerful Lenovo is only a little more, and there are never any supply issues.
I bought an old Intel NUC with a 2.x GHz i3, 8GB RAM, and a 120GB NVMe used for $65, then upgraded it to 16GB of RAM and a 1TB NVMe for another $50. I run everything from that in either VMs or LXCs (HA, jellyfin, NAS, CCTV, pihole) and it draws about 10W.
I do use Obsidian, and you can sync what you want between devices by activating plugins. I’ve used many different note-taking apps, and Obsidian makes my day! obsidian.md