
snek

@snek@lemmy.world

“Once you’ve been to Gaza, you’ll never stop wanting to beat Benjamin Netanyahu to death with your bare hands.”


snek,

As someone who works with LLMs, they shouldn’t…

You still need your chatbot to stick to business rules and act like a real customer service rep. That's incredibly hard to accomplish with generative models: you can't be there to evaluate every generated answer, and the chatbot can go off on a tangent and suddenly start giving you free therapy when you originally went in to order pizza.
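To make the "can't be there to evaluate the answers" problem concrete, here is a minimal, made-up sketch of a guardrail around a generative model. Everything in it is hypothetical (`fake_model_reply` stands in for a real LLM call, and the keyword check is a deliberately crude business-rule filter), but it shows the shape of the safety net: validate each reply before it reaches the customer, and fall back to a canned answer when the model wanders off-topic.

```python
# Hypothetical guardrail around a generative chatbot: every reply is
# checked against business rules before the customer sees it.
ALLOWED_TOPICS = {"order", "pizza", "delivery", "refund"}
FALLBACK = "Sorry, I can only help with orders. Connecting you to a human."

def fake_model_reply(prompt: str) -> str:
    # Stand-in for an actual LLM API call; like a real generative
    # model, it can drift off-topic into "free therapy".
    if "sad" in prompt.lower():
        return "It sounds like you are going through a lot. Have you tried journaling?"
    return "Your pizza order has been placed and is out for delivery."

def guarded_reply(prompt: str) -> str:
    reply = fake_model_reply(prompt)
    # Reject any reply that never touches an allowed business topic.
    if not any(topic in reply.lower() for topic in ALLOWED_TOPICS):
        return FALLBACK
    return reply

print(guarded_reply("I want to order a pizza"))  # on-topic reply passes through
print(guarded_reply("I feel sad today"))         # off-topic tangent gets replaced
```

A real deployment would need far more than keyword matching (policy classifiers, escalation paths, logging), which is exactly why "just replace the rep with a chatbot" is harder than it sounds.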

Don’t get me wrong, they’re great for many applications with a human in the loop. They can help customer service reps (as one example) work more effectively, provide more help to users, and dedicate more time to the people who still need a human to solve their issues.

Companies are already replacing some workforce with LLMs.

My opinion right now is that companies want you to believe AI is 100% capable of replacing humans, but that’s because people in upper management never listen to the damn developers down in the basement (aka me), so they have unrealistic expectations of AI coupled with an unending desire for money and success.

They are replacing them because they are greedy cunts, not because they are replaceable.

snek,

If they had ever cared about quality, they would have treated their employees with dignity and paid them enough 😬

snek, (edited)

Do you mean the embeddings? platform.openai.com/docs/…/what-are-embeddings

If so:

Word embeddings and embedding layers represent data in a way that lets the model use it to generate text. That’s not the same as the model acting like a human. It may sound human in text or even speech, but its reasoning skills are questionable at best. You can try to make it stick to your company policy, but at this level it will never operate under logic unless you hardcode that logic into it, and that isn’t really possible with these models in any meaningful sense of the word: they just predict the most likely next word. You’d have to wrap them in a shit ton of code and safety nets.

GPT models require massive amounts of data, so they’re only that good at languages for which we have massive corpora or large Wikipedias. If your language doesn’t have good content on the internet or freely available digitized text to train on, a machine still can’t replace translators (yet; no idea how long it will take until transfer learning is good enough to translate low-resource languages at the quality of English - French, for example).

snek,

And has this “economics” business worked for companies today? Has it worked for us?

snek, (edited)

When your trousers rip during a work day but you have to keep going.
