j4k3 (edited)

Uncensored Llama2 70B has the most flexibility of any model I've run without additional fine-tuning, IMO. Mixtral 8×7B is a close second, with faster inference and only minor technical issues compared to the 70B, but I don't like the tone of Mixtral's alignment.

I use them for code snippets in Python, bash scripting, nftables, awk, sed, regex, CS questions, chat, waifu, spell checking, an uncompromised search engine, talking through recipes/cooking ideas, basically whatever I feel like.
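For anyone curious what that looks like in practice, here's a minimal sketch of local inference with llama-cpp-python, assuming you've downloaded a quantized GGUF of one of these models (the file name and paths here are placeholders, not a recommendation):

```python
from llama_cpp import Llama

# Load a local quantized model; model_path is whatever GGUF you have on disk.
# n_gpu_layers=-1 offloads all layers to the GPU if llama.cpp was built with GPU support.
llm = Llama(
    model_path="./llama-2-70b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,
    n_gpu_layers=-1,
)

# Ask for one of the use cases above, e.g. an awk one-liner.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write an awk one-liner that sums the second column of a CSV."},
    ],
    max_tokens=256,
)

print(out["choices"][0]["message"]["content"])
```

Everything stays on your own hardware, which is the whole point for things like an uncompromised search aid.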
