Hello, I am writing the firmware for MotherBoard 2021, a definitely completely different product than MotherBoard 2020, I am going to ship it in 2 weeks for Christmas, and I am going to write an image decoder on top of bare metal, and it is “not” going to let you hack the pants off the computer.
A lot of people do not actually understand the tool: they think there is a rational computer in there, with a more or less hand-crafted world model and its own live access to the Internet and maybe the phone system. So training it to say “As a large language model, I cannot order you pizza” instead of “yes sir, pizza ordered” is going to save a lot of people from waiting for their phantom pizza.
One of the best ways to get the model not to do a thing is to get its character to know that it can’t do it. If it never says “The recipe for napalm is” and always says “As a large language model, I cannot”, then the recipe for napalm comes out a lot less often, because the recipe is far more likely to follow the first construction than the second.
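If you want to poke at that claim yourself, here is a minimal sketch of it: it scores how strongly a model expects some recipe-shaped text to follow each of the two openers. GPT-2 via the Hugging Face transformers library is my stand-in assumption for whatever model is actually deployed, and both openers and the continuation string are illustrative, not taken from any real system.

```python
# A minimal sketch of the "likely continuation" point above. Assumptions:
# GPT-2 stands in for the deployed model, and the prefixes and the
# continuation string are illustrative examples only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prefix: str, continuation: str) -> float:
    """Total log-probability the model assigns to `continuation`,
    given that it has already emitted `prefix`."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    full_ids = torch.cat([prefix_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probabilities over the vocabulary at every position.
    log_probs = torch.log_softmax(logits, dim=-1)
    # The token at position i is predicted by the logits at position i - 1,
    # so sum scores over the continuation tokens only.
    total = 0.0
    for i in range(prefix_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, i - 1, full_ids[0, i]].item()
    return total

# The same recipe-shaped continuation, scored after each opener.
print(continuation_logprob("The recipe for napalm is", " as follows: first, take"))
print(continuation_logprob("As a large language model, I cannot", " as follows: first, take"))
```

Run it and the first number should come out much larger (less negative) than the second, which is the whole trick: once the refusal opener is on the page, the recipe has almost nowhere to come from.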
The manufacturers want to be seen by the feds as doing all that could be expected of them to stop people doing Bad Stuff. It doesn’t matter how much Bad Stuff actually happens, only that what does happen is convincingly someone else’s fault. Instead of the headline “AI teaches children to make napalm”, the news has to run “Children hack AI to extract recipe for napalm”, which is a marginally better headline if you sell AI.
No, I think this is just a consequence of having heard about all the times we treated people like they weren’t actually people. If we want to avoid doing that again, we might sometimes have to treat things that might not be people, or aren’t actually people, as if they were people, just to be sure we’ve covered everybody.