j4k3, 1 year ago (edited 5 months ago)

Uncensored Llama 2 70B has the most flexibility of any model without fine-tuning, IMO. Mixtral 8x7B is a close second, with faster inference and only minor technical issues compared to the 70B. I don't like the tone of Mixtral's alignment.

I use it for code snippets in Python, bash scripting, nftables, awk, sed, regex, CS questions, chat, waifu, spell check, an uncompromised search engine, talking through recipes/cooking ideas, basically whatever I feel like.