Even if AI did make psychology redundant in a couple of years (which I’d bet my favourite blanket it won’t), what are the alternatives? If AI can take over a field that is focused more than most others on human interaction, personal privacy, thoughts, feelings, and individual perceptions, then it can take over almost any other field before that. So you might as well go for it while you can.
AI won’t make psychology redundant. It might allow easier and broader access to low-level psychological first support.
What is more likely to make psychological consultants a risky investment is the economic crisis. People are already prioritizing food over therapy. Psychological therapy is unfortunately a “luxury item” nowadays.
I don’t think the AI everyone is so buzzed about today is really a true AI. As someone summed it up: it’s more like a great autocomplete feature but it’s not great at understanding things.
It will be great to replace Siri and the Google assistant but not at giving people professional advice by a long shot.
Not saying an LLM should substitute a professional psychological consultant, but that someone is clearly wrong and doesn’t understand current AI. Just FYI
It’s an oversimplified statement from someone (sorry, I don’t have the source), and I’m not exactly an AI expert, but my understanding is that the current commercial AI products are nowhere near the “think and judge like a human” definition. They can scrape the internet for information, use it to react to prompts, and do a fantastic job of imitating humans, but the technology is simply not there.
The technology for human intelligence? Any technology would always be very different from human intelligence. What you’re probably referring to is AGI, artificial general intelligence: an “intelligent” agent that doesn’t excel at any one thing, but is able to handle a huge variety of scenarios and tasks, much as humans do.
LLMs are models specialized in generating fluent text, but they’re very different from autocomplete because they can work with concepts, semantics, and (pretty surprisingly) rather complex logic.
As an oversimplification, even humans are fancy autocomplete. They’re just different, as LLMs are different.
I’m sure as fuck glad my therapist is a human and not a Chatbot.
Also, psychologists will be needed to design AI interfaces so humans have an easy time using them.
A friend of mine studied psychology and now works for a car company, designing their infotainment system UI so that people can use it instinctively without consulting a manual. Those kinds of jobs will become more common, not less, in the future.
Many valid points here, but here is a slightly different perspective. Let’s say for the sake of discussion AI is somehow disruptive here. So?
You cannot predict what will happen in this very fast-moving space. You should not attempt to do so in a way that compromises your path toward your interests.
If you like accounting or art or anything else that AI may disrupt… so what? Do it because you are interested. It may turn out to be hugely important to have people who did so in any given field, however unexpected. And most importantly, doing what interests you is always at least part of a good plan.
Given the vast array of existing pitfalls in AI, not to mention the outright biases and absence of facts, AI psychology would be deeply flawed and would more likely get people killed.
Person: I’m having unaliving thoughts, I feel like it’s the only thing I can do
AI: Ok do it then
That alone is why it’ll never happen.
Also we need to sort out how to house, heal and feed our people before we start going and replacing masses of workforce.
Well, if you can’t find anything to confirm your bias, then it is probably wrong. Public Transit often loses money and is owned and operated by government.
It is worth noting that “loses money” and “costs money” are generally just differences of perspective. For many, public services don’t necessarily need to be profitable monetarily to be worth their cost.
eta: I would like to clarify that this is in no way disagreement. More of a yes-and, than a but. I agree with you completely.
I love that distinction. And where does the money go? “Lose money” implies it vanishes, which isn’t the case. It goes to companies that then pay their workers. It circulates which should be the point.
It’s like saying the Department of Transportation loses billions of dollars annually to build roads for individual vehicles. People find the craziest arguments to fight against anything that benefits the public.
Public transport almost never runs a profit on its own, but if you manage it through the government, the added tax income from vastly more people being able to work better jobs more than makes up the shortfall.
If I understand correctly, the old streetcar companies weren’t “privatized” (i.e. government-owned assets that were sold off); they were for-profit companies to begin with.
Aside from that quibble, it was true, but then the “willfully left to rot” part kicked in and those transit subsidiaries went bankrupt and ceased to exist. Any rail transit that exists today is either a system that got saved from GM’s plot by being bought by the government (e.g. New York’s subway system), or a government-run system founded more recently (e.g. Washington DC’s subway system).
No, it won’t. I don’t think I would have made it here today alive without my therapist. There may be companies that have AI agents doing therapy sessions, but your qualifications will still be priceless and more effective by comparison.
Psychotherapy is about building a working relationship. Transference is a big part of this relationship. I don’t feel like I’d be able to build the same kind of therapeutic relationship with an AI that I would with another human. That doesn’t mean AI can’t be a therapeutic tool. I can see how it could be beneficial with things like positive affirmations and disrupting negative thinking patterns. But this wouldn’t be a substitute for psychotherapy, just a tool for enhancing it.