I tried a bunch, but the current state of the art is text-generation-webui, which can load multiple models and has a workflow similar to stable-diffusion-webui.
There’s a local llama subreddit with a lot of good information, and 4chan’s /g/ board will usually have a good thread with a ton of helpful links in the first post. Don’t think there’s anything on Lemmy yet. You can run some good models on a decent home PC, but training and fine-tuning will likely require renting some cloud GPUs.
Dbzero Lemmy has a relationship with the Horde AI shared LLM group. My primary use is for chat roleplay, but they have streamlined guides to hosting your own models for personal or horde use. One of the primary interfaces is SillyTavern, but they integrate numerous models.
It’s good for me because I’m piss poor at programming. In my defense, I’m not a programmer or even programmer adjacent. I do see how it wouldn’t be useful to a pro. It has also occasionally given me garbage advice that an expert would spot right away, while I had to figure out on my own that it was ‘hallucinating’ again. There’s nothing better for learning than troubleshooting, though!
I can absolutely see it being useful for a pro. It’s already a better version of IDE templates. If you have to write boilerplate code, this can already do it. It’s a huge time saver for the things you’d otherwise have to go look up to remember how to do and piece together yourself.
Example: today I wanted a quick way to serve my current working directory over HTTP so I could do some quick web work. I asked ChatGPT to write me a bash function I could stick in my profile to do this, and I told it to pick a random unused port. That would have taken me much longer had I gone to look up how to do all that. The only hint I gave it was to use the Python built-in module for serving HTTP.
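For reference, a rough sketch of that kind of function (the name `serve`, the port range, and the port-picking logic are my own choices here, not necessarily what ChatGPT spat out):

```bash
# Sketch: serve the current working directory over HTTP on a random unused port.
# Requires python3 and lsof; function name and port range are arbitrary choices.
serve() {
  local port
  # keep picking a port in the dynamic range until we find one nothing is listening on
  while :; do
    port=$(( (RANDOM % 16384) + 49152 ))
    lsof -i :"$port" >/dev/null 2>&1 || break
  done
  echo "Serving $(pwd) at http://localhost:${port}"
  python3 -m http.server "$port"
}
```

Drop something like that in your .bashrc and you can run `serve` from any directory.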
There’s a project called Tabby that you can host as a server on a machine that has a GPU, and it has a VSCode extension that connects to the server.
The default model is called StarCoder, and it’s the small version, 1B parameters. The downside is that it’s not super smart (but still an improvement over built-in tools), but since it’s such a small model, I’m getting sub-second completion times.
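The server side is roughly a one-liner with Docker. Treat this as a sketch: the exact flags and the model identifier are assumptions based on Tabby’s quickstart and can differ between versions, so check their docs:

```bash
# Sketch: run a Tabby server on a CUDA GPU via Docker.
# The image name, the "serve" subcommand flags, and the StarCoder-1B model
# identifier are assumptions and may vary with your Tabby version.
docker run -it --gpus all \
  -p 8080:8080 \
  -v "$HOME/.tabby:/data" \
  tabbyml/tabby serve --model StarCoder-1B --device cuda
```

The VSCode extension then just needs to be pointed at the server URL (http://localhost:8080 in this sketch).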
I went to the queue and nothing was there, and only one of my 15 trackers was down.
I saw somewhere you can make the software look for seasons by navigating to the show and clicking the magnifying glass next to it, and now it’s added a bunch of episodes to the queue.
I’ll have to dig through the log file because now it’s downloading hundreds of episodes so the log got all thicc on me
Any way to make it prefer whole seasons, though? I’ve got 146 torrents running now, lol
It depends on whether a whole-season torrent exists or not. If Sonarr can identify one that’s a whole season, it should download that when you search at the season level. If you’ve searched one episode at a time, you’ll get single episodes.
You can do an interactive search and, IIRC, specify a full season during that search.
You may need to play around with quality settings (or trackers) if you notice that it never downloads season packs.
Also, when you add a new show, at the bottom of the window there should be a checkbox asking whether you want it to automatically search for missing episodes, so be sure that’s checked.
The magnifying glass next to each season header will automatically search for season packs and pick a download for you. The person icon will do it interactively, where you see the results and select which one(s) you want to download.
This is the case across Sonarr. The magnifying glass at the top of a series will auto-search for all missing, monitored episodes. The same applies at the individual episode level, while the person icon does it interactively, in case you want to select the specific release you want to download.
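If you’d rather trigger the same season search from a script instead of clicking through the UI, Sonarr also exposes it as a command in its API. This is a sketch assuming the v3 API, a local instance on the default port, and placeholder IDs; check your own instance’s API docs for the exact values:

```bash
# Sketch: trigger a season search via Sonarr's command API (assuming API v3).
# Replace the host, API key, seriesId, and seasonNumber with your own values.
curl -X POST "http://localhost:8989/api/v3/command" \
  -H "X-Api-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "SeasonSearch", "seriesId": 1, "seasonNumber": 1}'
```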