selfhosted


InformalTrifle, in Suggestions for Short Rack Mount Case

I’ve not used them yet but was looking at www.plinkusa.net

They have a variety of heights and depths, some of them being 15”

Lettuceeatlettuce,
@Lettuceeatlettuce@lemmy.ml avatar

Great resource, I’ll check them out.

rho50, in When your kid asks for a switch for Christmas

I would’ve been delighted to receive a managed Ethernet switch as a kid! I hope it came with some useful SFP modules and a USB serial adapter 😜

code, in Update: Everyone said GameVault's UI was garbage, so we completely overhauled it.

Any ideas about linux/wine support?

baduhai, (edited )

It only stores files, so there’s no need for wine support, as far as I understand.

Edit: looks like I was wrong, their client seems somewhat capable.

alfagun74,

Not natively but there’s this

Crow, in Update: Everyone said GameVault's UI was garbage, so we completely overhauled it.
@Crow@lemmy.world avatar

What do I use this for? Do I install it on my NAS or my gaming PC?

My best guess is that this is self-hosted network storage for games, and other computers run the games from there? Or do they download the games from there? Is it a way to store game saves? Does it have any use for emulators like Yuzu?

Sorry for all the questions, I’m only asking because the software looks really interesting but I just can’t figure out its uses.

chandz05,

Seems like the intro clears some things up: gamevau.lt/docs/intro. It looks like you install the server component on your NAS/server etc. and store your game files/binaries/installers there. Then you download a client application and install games from that location onto your gaming PC or whatever.

steal_your_face,
@steal_your_face@lemmy.ml avatar

So basically Plex/Jellyfin for non-DRM games, it sounds like.

WarmSoda,

In case you’re wondering what GameVault is, it’s like having your own video gaming platform for games on your server – think Plex, but for video games

victorz,

Thanks for clearing this up. I definitely have no use for this. I wish I did, but alas.

Vendetta9076,
@Vendetta9076@sh.itjust.works avatar

I second all these questions.

wolfshadowheart, in Update: Everyone said GameVault's UI was garbage, so we completely overhauled it.

Nice, looking good! Are there ways to customize it to a user’s liking? You could get a pretty good theme store going, I bet!

alfagun74,

There will probably be some themes for supporters in the future.

Gooey0210, in Those who are self hosting at home, what case are you using? (Looking for recommendations)

Lenovo thinkcentre m710q 🫡

alphacyberranger, in Update: Everyone said GameVault's UI was garbage, so we completely overhauled it.
@alphacyberranger@lemmy.world avatar

This actually looks cool. Keep up the good work.

key, in Update: Everyone said GameVault's UI was garbage, so we completely overhauled it.
@key@lemmy.keychat.org avatar

Keep at it!

Zoidberg, in Question - ZFS and rsync

When things lock up, will a kill -9 kill rsync or not? If it doesn’t (and the zpool status lockup is suspicious), it means things are stuck inside a system call. I’ve seen all sorts of horrible things with USB timeouts. Check your syslog.
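For example, something like this will show processes stuck in uninterruptible sleep (state D), which even SIGKILL can’t touch (a sketch; nothing here is specific to your box):

```shell
# List processes stuck in uninterruptible sleep ("D" state).
# These are blocked inside a kernel system call; kill -9 won't help.
ps -eo pid,stat,wchan:32,cmd | awk 'NR == 1 || $2 ~ /^D/'

# Then check recent kernel messages for the likely cause
# (USB resets, I/O errors, controller timeouts):
dmesg | tail -n 50
```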

isles,

kill -9

Just tested, thanks for the suggestion! It killed a few instances of rsync, but two are apparently still stuck. I issued reboot and the system seemed to hang while waiting for rsync to be killed, and it failed to unmount the zpool.

Syslog errors:


Dec 31 16:53:34 halnas kernel: [54537.789982] #PF: error_code(0x0002) - not-present page
Jan  1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.
Jan  1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.
Jan  1 12:57:19 halnas kernel: [    1.119609] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 12:57:19 halnas kernel: [    1.120020] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 12:57:19 halnas kernel: [    1.120315] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 22:59:08 halnas kernel: [    1.119415] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 22:59:08 halnas kernel: [    1.119814] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 22:59:08 halnas kernel: [    1.120112] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+
Jan  1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.
Jan  1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.
Jan  2 02:23:18 halnas kernel: [12293.792282] gdbus[2809399]: segfault at 7ff71a8272e8 ip 00007ff7186f8045 sp 00007fffd5088de0 error 4 in libgio-2.0.so.0.7200.4[7ff718688000+111000]
Jan  2 02:23:22 halnas kernel: [12297.315463] unattended-upgr[2810494]: segfault at 7f4c1e8552e8 ip 00007f4c1c726045 sp 00007ffd1b866230 error 4 in libgio-2.0.so.0.7200.4[7f4c1c6b6000+111000]
Jan  2 03:46:29 halnas kernel: [17284.221594] #PF: error_code(0x0002) - not-present page
Jan  2 06:09:50 halnas kernel: [25885.115060] unattended-upgr[4109474]: segfault at 7faa356252e8 ip 00007faa334f6045 sp 00007ffefed011a0 error 4 in libgio-2.0.so.0.7200.4[7faa33486000+111000]
Jan  2 07:07:53 halnas kernel: [29368.241593] unattended-upgr[4109637]: segfault at 7f73f756c2e8 ip 00007f73f543d045 sp 00007ffc61f04ea0 error 4 in libgio-2.0.so.0.7200.4[7f73f53cd000+111000]
Jan  2 09:12:52 halnas kernel: [36867.632220] pool-fwupdmgr[4109819]: segfault at 7fcf244832e8 ip 00007fcf22354045 sp 00007fcf1dc00770 error 4 in libgio-2.0.so.0.7200.4[7fcf222e4000+111000]
Jan  2 12:37:50 halnas kernel: [49165.218100] #PF: error_code(0x0002) - not-present page
Jan  2 19:57:53 halnas kernel: [75568.443218] unattended-upgr[4110958]: segfault at 7fc4cab112e8 ip 00007fc4c89e2045 sp 00007fffb4ae2d90 error 4 in libgio-2.0.so.0.7200.4[7fc4c8972000+111000]
Jan  3 00:54:51 halnas snapd[1367]: stateengine.go:149: state ensure error: Post "https://api.snapcraft.io/v2/snaps/refresh": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
grue, (edited ) in Suggestions for Short Rack Mount Case

I’ve looked all over the Internet for <16" short-depth cases multiple times over the years, and I’ve learned the most important question is this: what do you want to put in it?

I’ll tell you right now that some of the things I’ve wanted, like…

  • A 2U with hot-swap caddies all the way across the front (like this or this, but <16" deep at the cost of fitting only an ITX motherboard), or
  • A 4- or 5U chassis that can fit an EATX / SSI EEB (12"x13") motherboard and an ATX PSU at the same time, or
  • A chassis of any size that has both the motherboard/PCI I/O and the drive bays on the front (front drive bays are normal and you can get front access I/O, but not both at the same time)

…simply do not exist, as far as I can tell. I’m pretty sure all of these things are geometrically possible (I did the math), but apparently I’m the only one who wants such weird stuff.

You can get a basic-bitch whatever-U case that supports a mATX motherboard and hard-mounting a couple of internal drives, with sharp metal edges, a shitty plastic door, and a price double (or more) what similarly low quality would cost in a desktop form-factor all day long, though.

Lettuceeatlettuce,
@Lettuceeatlettuce@lemmy.ml avatar

Yeah, seems like there isn’t a big group of folks that are looking for this kind of solution.

I’ve thought about designing a case to my specs that could be 3D printed, or maybe built from some very basic sheet steel, but that’s more DIY than I have time for right now.

possiblylinux127, in Question - ZFS and rsync

Have you tried running it overnight to make sure it’s not just a performance thing?

isles,

I did, great suggestion. It never recovered.

grue, in Hardware question

This might be an X/Y problem. Why do you think you need HDMI output on a server?

AimlessNameless,

Because installing an OS without iLO, serial, or video output would be a bit of a hassle.

loganb, in Question - ZFS and rsync

Just to make sure: are you copying to your ZFS pool directory or to a dataset? Check to make sure your paths are correct.

Push vs pull shouldn’t matter but I’ve always done push.

If your zpool is not accessible anymore after a transfer then there is a low-level problem here as it shouldn’t just disappear.

I would install tmux on your ZFS system and have windows with htop, dmesg, and zpool status running to watch the system while you copy files. Something that severe should become self-evident pretty quickly.
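Something like this would set that up (a sketch; the pool name tank is a placeholder for yours):

```shell
# Monitoring session: htop in one pane, live kernel log in another,
# and zpool status refreshing every 5 seconds in a third.
# "tank" is a placeholder pool name -- substitute your own.
tmux new-session -d -s zfsmon 'htop'
tmux split-window -t zfsmon -v 'sudo dmesg --follow'
tmux split-window -t zfsmon -h 'watch -n 5 zpool status tank'
tmux attach -t zfsmon
```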

BlueEther, in No posts when surfing through my instance
@BlueEther@no.lastname.nz avatar

Most of the subscribed communities seem to be working on your instance. When did you subscribe to permacomputing@lemmy.sdf.org? Was it after the upgrade to 0.19.x?

Valmond,

Good question! I think it was after.

s38b35M5, in Question - ZFS and rsync
@s38b35M5@lemmy.world avatar

If you’re running TrueNAS, the replication feature was the smoothest and easiest way to move large amounts of data when I did it 18 months back. Once the destination location was accessible from the sending host, it was as simple as kicking off a snapshot, resulting in a fully usable replica on the receiving host. IIRC, iXsystems staff told me rsync can be problematic compared to the replication/snapshot system, as permissions and other metadata can be lost.
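For anyone doing this by hand outside TrueNAS, the underlying mechanism is roughly zfs send/receive (a sketch; tank/data and the host name backup are placeholders):

```shell
# Snapshot the dataset, then stream it to the destination.
# zfs send/receive preserves permissions, ACLs, and dataset
# properties that rsync can silently lose.
zfs snapshot tank/data@migrate-1

# Initial full replication:
zfs send tank/data@migrate-1 | ssh backup zfs receive -F tank/data

# Later runs only ship the delta between two snapshots:
zfs snapshot tank/data@migrate-2
zfs send -i @migrate-1 tank/data@migrate-2 | ssh backup zfs receive tank/data
```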
