SuperiorOne, 1 year ago: I'm actively using Ollama with Docker to run the llama2:13b model. It generally works fine, but it's heavy on resources, as expected.
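For anyone wanting to try a similar setup, here's a minimal sketch using Ollama's official Docker image (this assumes a CPU-only host; the exact flags for your environment may differ, and GPU use requires the NVIDIA container toolkit plus a --gpus=all flag):

    # Start the Ollama server container, persisting models in a named volume
    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

    # Pull and chat with the llama2:13b model inside the running container
    docker exec -it ollama ollama run llama2:13b

The named volume keeps the (large) model weights across container restarts, and port 11434 exposes the Ollama HTTP API to other apps on the host.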