SuperiorOne,

I’m actively using ollama with docker to run the llama2:13b model. It generally works fine, but it’s heavy on resources, as expected.
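
In case it helps, here’s a minimal sketch of how you can query it from a script once the container is up, assuming the container publishes Ollama’s default port 11434 and you’ve already pulled llama2:13b:

```python
# Minimal sketch: query a local Ollama container via its HTTP API.
# Assumes the container publishes Ollama's default port 11434 and
# that llama2:13b has already been pulled into the container.
import json
import urllib.request

payload = json.dumps({
    "model": "llama2:13b",
    "prompt": "Why is the sky blue?",
    "stream": False,  # return a single JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```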
