selfhosted


ikidd, (edited ) in So SBCs are shit now? Anything I can do with my collection of Pis and old routers?

I’m just going to say, I shit on them all along. ARM is relatively expensive, bespoke, and difficult to compile for because of that. Anyone can puke out a binary for amd64 that works everywhere. And it’s way, way faster than some sad little SoC. Especially weird is spending $1000 on a clusterboard with CMs that has half the power of a 5-year-old x86 SFF desktop you could pick up for $75 and attach some actual storage to.

Maybe RISC-V will change all that, but I doubt it. Sure hope so though. The price factor has already leaned the right way to make it worthwhile.

DrinkMonkey, (edited ) in Is there an easy way to stream full bluray disc rips with menus and features over the network to my TV

Infuse for Apple TV will do this. You can point it to any folder on your NAS as an SMB share. It’s how I play back my own Blu-ray Discs, 4K or otherwise. It doesn’t do menus that I remember, but you can select the title easily enough.

Highly recommend also pointing it at your Jellyfin instance and using that as your front end for other files, as it seems to me to have the best ability to do direct playback without transcoding, and the fewest hiccups with audio sync issues, which can be annoying.

While you can just point Infuse directly at your other folders, its metadata cache gets dumped frequently by the OS and has to get rebuilt, which is slow and annoying when you just want to watch something. Pointing at Jellyfin also lets you use whatever custom Jellyfin posters you’ve selected, which helps keep special versions/collections visually identifiable.

MeatsOfRage, (edited )

Yeah, looks like Infuse does a good job at just playing the movie, both in folder format and as an ISO, which is cool. Instantly recognized the movie. No menus unfortunately :/

Think I might just be barking up a nonexistent tree

DrinkMonkey,

Yeah that’s what I expected. I think the Kodi suggestion for the Shield is the most promising lead. Hope it works out.

m, in Grocery shopping apps

I really like KitchenOwl’s shopping list interface, native iOS app, and OIDC integration. I haven’t used the budgeting or meal planning functions yet.

SeeJayEmm, in XPipe status update: New scripting system, advanced SSH support, performance improvements, and many bug fixes

I’m checking this out to see if it’s useful to me. I can see where being able to drop straight into a shell on a Docker container would be handy. My only real gripe is that I can’t use it to connect to my free-tier Oracle Cloud VMs, because they deploy Oracle Linux out of the box.

I don’t begrudge you wanting to make a living from your work. It’s just frustrating.

I am going to try and live in it for a week or two and we’ll see if it sticks.

crschnick,

Yeah, the commercialization model is not perfect yet. Ideally the community edition should include all the normal features required for personal use. Would that be only like one machine to connect to, or many? I was planning to experiment with allowing a few connections in the community version before a license is required.

drdiddlybadger, in Creating the XMPP Network Graph

I think this is pretty cool.

spez_, in So SBCs are shit now? Anything I can do with my collection of Pis and old routers?

I have 1 RPI 4 (8GB RAM) running:

  • OpenMediaVault
  • Transmission
  • ArchiveBox & LinkWarden (testing between the two)
  • Gitea
  • Audiobookshelf
  • FileBrowser
  • Vaultwarden
  • Jellyfin
  • Atuin
  • Joplin
  • Paperless-NGX
  • Immich

On another RPI (4GB) I have Home Assistant

cashews_best_nut,

Your Pi runs all that?! I’ve set up Home Assistant on a Tinker Board and it’s slow as shit with nothing else running. :(

owenfromcanada,

Not sure what kind of Tinker Board you’re working with, but the power of Pis has increased dramatically across generations. There are tasks that would run slowly on a dedicated Pi 2 but run easily in parallel with a half dozen other things on a Pi 4.

The older ones can still be useful, just for less intensive tasks.

RootBeerGuy,

Out of interest, from someone with an RPi 4 and Immich: did you deactivate the machine learning? I did, since I was worried it would be too much for the Pi. Just curious to hear whether it’s doable after all.

iso, (edited ) in Creating the XMPP Network Graph

I’ve never used XMPP. Can someone compare it with Matrix?

StefanT,

This comparison looks neutral: www.freie-messenger.de/en/…/xmpp-matrix/

u_tamtam,

They both qualify as “open, federated messaging protocols”, with XMPP being the older one (about 25 years old) and an IETF internet standard, but at this point we can consider Matrix to be quite old, too (10 years old). On paper they are quite interchangeable: they both focus on bridging with established protocols, etc.

Where things differ, though, is that Matrix is practically a single-vendor implementation: the same organization (Element/New Vector/however it’s called these days) develops both the reference client and the reference server. That server is incidentally super complex, not well documented (the code is the documentation), and practically not compatible with the other (semi-official) implementations. This is a red flag, because that organization was also built on venture capital money with no financial stability in sight. XMPP is a much more diverse and accessible ecosystem: multiple independent teams and companies implement servers and clients, and the protocol itself is very stable, versatile and extensible. This is how you can find XMPP today running the backbone of the modern internet, dispatching notifications to all Android devices, acting as the signaling system behind millions of IoT devices, and providing messaging to billions of users (WhatsApp is, by the way, based on XMPP).

Another significant difference is that, despite 10 years of existence and millions invested into it, Matrix still has not reached stability (and probably never will): the organization recently announced Matrix 2 as the (yet another) definitive answer to the protocol’s shortcomings, without changing any of what makes the protocol so painful to work with, and the requirements (compute, memory, bandwidth) to run Matrix at even a small scale are still orders of magnitude higher than XMPP’s. This has discouraged many organizations (even serious ones, like Mozilla, KDE, …) from running Matrix themselves, and it further contributes to the de-facto centralization and single points of control that federated protocols are meant to prevent.

iso,

I’ve used Matrix for months and agree with most points. I would like to try XMPP but it is clear that it does not have the best onboarding experience.

The problem I’ve observed with XMPP as an outsider is the lack of a standard. Each server or client has its own supported features and I’m not sure which one to choose.

Which client would you recommend?

u_tamtam,

The problem I’ve observed with XMPP as an outsider is the lack of a standard. Each server or client has its own supported features and I’m not sure which one to choose.

That’s a valid concern, but I wouldn’t call it a problem. There are practically two types of clients/servers: the ones that are maintained, which work absolutely fine and well together, and the rest, the unmaintained/abandoned part of the ecosystem.

And with the protocol being so stable and backwards/forwards compatible in large part, even those unmaintained clients will just work, just without the latest and greatest features (XMPP has the machinery to let clients and servers advertise their supported features, so the experience stays cohesive).
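That feature-advertisement machinery is XMPP’s Service Discovery (XEP-0030). As a rough illustration, a disco#info query is just a small IQ stanza; the JID and request id below are placeholders, not anything from this thread:

```python
# Build a minimal XEP-0030 service-discovery (disco#info) request,
# which a client sends to ask a server/peer which features it supports.
import xml.etree.ElementTree as ET

iq = ET.Element("iq", {"type": "get", "to": "example.org", "id": "disco1"})
ET.SubElement(iq, "query", {"xmlns": "http://jabber.org/protocol/disco#info"})
stanza = ET.tostring(iq, encoding="unicode")
print(stanza)
```

The entity answers with a list of `<feature var='…'/>` elements, which is how a modern client knows whether it can use, say, message archiving with a given server.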

Which client would you recommend?

Depends on which platform you are on and your type of usage. You should be able to pick one advertised on joinjabber.org; that should keep you away from the fringe/unmaintained stuff. Personally I use Gajim and monocles.

iso,

Thank you for the suggestions. I just created an account on jabber.hot-chilli.net and downloaded Gajim. It looks really cool!

JohnFoe, (edited ) in So SBCs are shit now? Anything I can do with my collection of Pis and old routers?

If you’re not into the whole Google Home/Alexa/Apple Home ecosystem and already have Home Assistant running, you could use them to build a bunch of smart assistants with OpenThread Border Routers.

I was just looking at doing this in my house, but Pis were cost-prohibitive for me compared to used Google Gen 2s with Thread border routers built in.

lemonuri, (edited ) in Best way to create my seedbox?

There is no need to fire up a dedicated machine for this. Use your router/AP running OpenWrt and connect an HDD via USB. The machine needs at least 128 MB of RAM (256 MB would be better). Install the transmission package, set it up, add a gig of swap space on the HDD, and you are good to go. The AP runs 24/7 anyway, so there will be very little extra power consumption. VPNs often don’t allow port forwarding (Mullvad stopped supporting it recently, if I remember correctly). You can just be a passive node and not open ports; that should work well enough. Consider seeding parts of Sci-Hub, it’s a project worth supporting imho.

You can just download one of the parts below with fewer than 12 seeds and set it to seed without a ratio limit:

phillm.net/…/stats-filtered-table.php?propname[]=…
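A rough sketch of that setup on an OpenWrt router (package names as in the current OpenWrt feeds; the mount point, swap size, and download path are assumptions for illustration):

```shell
# Install the Transmission daemon and its web UI
opkg update
opkg install transmission-daemon transmission-web

# Assuming the USB HDD is mounted at /mnt/hdd: add 1 GB of swap there
dd if=/dev/zero of=/mnt/hdd/swapfile bs=1M count=1024
chmod 600 /mnt/hdd/swapfile
mkswap /mnt/hdd/swapfile
swapon /mnt/hdd/swapfile

# Point downloads at the drive and enable the service (UCI config)
uci set transmission.@transmission[0].download_dir='/mnt/hdd/downloads'
uci set transmission.@transmission[0].enabled='1'
uci commit transmission
/etc/init.d/transmission restart
```

To make the swap survive reboots you’d also add it to the fstab config; on a 128 MB device it’s worth capping Transmission’s cache size in the same UCI config.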

Haha,

That makes sense too, and maybe it’s the most logical answer. When I get my desired router I will set this up.

Clearwater, in Need advice about used drives and Wd warranty experience with drives brought from unauthoirized resellers

Return for refund or replacement. If you’re even slightly concerned about WD giving you trouble, but know eBay/the seller won’t, just go that path since it’s still available.

ponchow8NC,

Yeah, I’m guessing this is the easiest option to just get my money back. Appreciate it, and I’ll update the post with what I go with. I already have another drive that I tested and works, so I’m not desperate for now.

JackbyDev, in So SBCs are shit now? Anything I can do with my collection of Pis and old routers?

I don’t understand this post. Whatever you bought them for, they’re still good for. People’s opinions don’t make them less useful.

c0mbatbag3l,

Sir, this is Lemmy. People treat the applications and hardware you use as a matter of ethical alignment, and switching to FOSS literally gets approval on the level of religious conversion.

It’s no wonder people around here care so much about random people’s opinions, the place practically filters for it.

s38b35M5, in Need advice about used drives and Wd warranty experience with drives brought from unauthoirized resellers

Correct me if I’m wrong, but manufacturer warranties are not transferable, so when you bought it secondhand, the warranty didn’t convey to you.

My experience with WD and Seagate has been that they request proof of purchase, which, for me, was my original invoice.

ponchow8NC,

Honestly I have no clue. My train of thought was that if you could register it, which I haven’t tested, then the warranty could work. Was kinda hoping someone here or in the original post went through my situation and could clarify things.

baatliwala, in So SBCs are shit now? Anything I can do with my collection of Pis and old routers?

I thought this was about FIFA

ratman150, (edited ) in Starting over and doing it "right"

I’ll freely admit to skimming a bit, but yes, Proxmox can run TrueNAS inside of it. Proxmox is powerful but might be a little frustrating to learn at first. For example, by default Proxmox expects to use the boot drive for itself, and it’s not immediately clear how to change that to use that disk for other things.

The Noctua NH-D15 is overkill for that CPU btw, unless you’re doing an overclock, which I wouldn’t recommend for server use. What are your plans for the 1060? If using Proxmox you’ll want to get one of the “G” series AMD CPUs so that Proxmox binds to the APU, and then you should be able to do GPU passthrough on the 1060.

Malice,

I’d planned on using the GPU for things like video transcoding (which I know it’s probably way overkill for). Perhaps something like Stable Diffusion to play around with down the line? I’m not entirely sure. I do know that, since the CPU isn’t a G series, the GPU will need to be plugged in at least if/when I need to put a monitor on it. Laziness suggests I’ll likely just end up leaving it in there, lol. As for the NH-D15, yeah, that’s outrageously overkill, I know, and I may very well slap the stock cooler on it and sell it.

Thank you!

ratman150,

I have a proxbox with an R5 4600G; even under extreme loads the stock cooler is fine. Honestly, once Proxmox is set up you don’t need a GPU. The video output of Proxmox is just a terminal (Debian), so as long as things are running normally you can do everything through the web interface even without the GPU. I do highly recommend a second GPU (either a G series CPU or a cheap GPU) if you want to try Proxmox GPU passthrough. I’ve done it and can say it is extremely difficult to get working reliably with just a single GPU.
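For reference, GPU passthrough on Proxmox roughly involves steps like these. The PCI IDs, VM ID, and bus address below are examples only; yours will differ, and the exact procedure varies by kernel and board:

```shell
# 1. Enable the IOMMU in the bootloader (AMD example; use intel_iommu=on on Intel)
#    In /etc/default/grub:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
update-grub

# 2. Load the VFIO modules at boot
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules

# 3. Find the GPU's vendor:device IDs and bind it to vfio-pci
lspci -nn | grep -i nvidia
echo 'options vfio-pci ids=10de:1c03,10de:10f1' > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all

# 4. After a reboot, attach the card to the VM (VM ID 100 is an example)
qm set 100 -hostpci0 01:00.0,pcie=1
```

The reason a second GPU makes this so much easier is step 3: once the host’s only card is bound to vfio-pci, the host console goes dark, so any mistake leaves you debugging over SSH.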

Malice,

Yeah, I’d definitely considered the fact that I can probably just take the GPU out as soon as proxmox is set up. The only thing I’d leave it for is for transcoding, which may or may not be something I even need to/want to bother with.

ratman150,

Depending on your transcoding needs you might not even need it for that.

VelociCatTurd, in Starting over and doing it "right"

I will provide a word of advice since you mentioned messiness. My original server was just one physical host which I would install new stuff onto. Then I started realizing that I would forget about stuff, or that if I removed something later there might still be lingering related files or dependencies. Now I run all my apps in Docker containers and use docker-compose for every single one. No more messiness or extra dependencies. If I try out an app and don’t like it, boom, container deleted, end of story.

Extra benefit is that I have less to backup. I only need to backup the docker compose files themselves and whatever persistent volumes are mounted to each container.
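A hypothetical minimal example of that layout (the app, image, port, and paths here are illustrative, not from the post):

```yaml
# docker-compose.yml — one small file per app; trying out an app and
# removing it cleanly is just `docker compose down`.
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./data:/data   # the only persistent state — this plus the
                       # compose file itself is all that needs backup
```

Keeping each app in its own directory with its own compose file and a local `./data` volume is what makes the backup story so small: the host OS never accumulates app-specific packages or config.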

Malice,

I forgot to mention, I do use docker-compose for (almost) all the stuff I’m currently using and, yes, it’s pretty great for keeping things, well… containerized, haha. Clean, organized, and easy to tinker with something and completely ditch it if it doesn’t work out.

Thanks for the input!
