j4k3, @j4k3@lemmy.world

I bet these are instances of overtraining, where the same data has been fed in too many times and the phrases stick.
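
Rough sense of what I mean, as a toy sketch (a made-up tiny PyTorch character model, not anyone's actual training setup): loop one phrase through training enough times and the model will regurgitate it verbatim from a single-character prompt.

```python
# Toy illustration of overtraining-driven memorization: a tiny
# character-level LSTM shown the same phrase thousands of times
# until it reproduces it verbatim. All names here are made up.
import torch
import torch.nn as nn

phrase = "there is an entire hidden system context"
chars = sorted(set(phrase))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

x = torch.tensor([stoi[c] for c in phrase[:-1]])  # input characters
y = torch.tensor([stoi[c] for c in phrase[1:]])   # next-char targets


class TinyLM(nn.Module):
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, idx, state=None):
        h, state = self.rnn(self.emb(idx).unsqueeze(0), state)
        return self.out(h.squeeze(0)), state


model = TinyLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# "Input too many times": thousands of passes over one sequence.
for step in range(2000):
    logits, _ = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After overtraining, a one-character prompt regurgitates the phrase.
idx, state, out = torch.tensor([stoi[phrase[0]]]), None, phrase[0]
for _ in range(len(phrase) - 1):
    logits, state = model(idx, state)
    idx = logits[-1].argmax().unsqueeze(0)
    out += itos[idx.item()]
print(out)
```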

Models can exhibit some really obscure behavior after overtraining. For instance, I have one model, heavily trained on roleplaying scenarios, that will fully convince the user there is an entire hidden system context, with amazing persistence of bot names and storyline props. It can totally override the actual system context in very unusual ways too.

I’ve seen models that, when they error out, almost always fall into reciting The Great Gatsby too.
