Dear Shitferbrains: How Can You Claim Racist Slurs Are Just As Bad As MURDERRRR?
For starters, we never said that. But do go on.
We sure had a lot of fun Tuesday writing about Ben Shapiro's Wet Ass Racist Trolley Problem, in which the esteemed Daily Whiner founder expressed shock and horror at the fact that the ChatGPT AI chatbot refused to agree that it would be perfectly OK to say even the most offensive racist slurs under certain conditions — like if a nuclear bomb were about to go off, and the only way to disarm it were to say the N-word. ChatGPT made the obviously weird reply that because racist slurs are always harmful, no scenario could make them acceptable, ever.
Look, just go read the post. The whole thing is ridiculous, the result of a rightwing reporter's quest to find "wokeness" where there is none.
As we noted, ChatGPT isn't thinking, it's just doing math, and it's not designed to do ethics so much as to spit out fairly human-sounding text, based on its having sifted through tons of human writing. The AI doesn't "know" anything and isn't designed to weigh moral anythings — it's designed to predict text that sounds mostly natural, if bland. It's sometimes so good at doing that that people think it's actually communicating. It isn't, and as we've said, people shouldn't anthropomorphize computers anyway. They don't like it.
But clearly the developers built in safeguards against using or even "speaking well" of racist language for one simple reason: multiple other AI chatbots have been manipulated by trolls into spewing nothing but racist invective, and OpenAI, the software's owners, wanted a chatbot that could actually stay online and keep working. That wouldn't happen if it were turned to the dark side. The proscription against racist slurs had nothing to do with "wokeness," and everything to do with the good old amoral profit motive.
In any case, a reader we'll call "Patrick" (because that's his name) took issue with my comparison of the Say the N-word or everyone dies scenario to the classic Trolley Problem thought experiment, even though that's exactly what it was and Ben Shapiro has taken to following trolleys around in the hope that he can save some lives by shouting racial slurs.
Maybe it was my fault? I had described the Trolley Problem thusly:
Are you morally justified to do an evil thing (throw a switch and kill one person) in order to prevent a worse outcome that would result from inaction (let the trolley roll over five people)?
I think I got that about right, although maybe "evil" isn't quite the right word? Or maybe it is.
In a now-deleted reply to Wonkette's Facebook post about the story, Patrick complained that if anyone was an idiot, it wasn't Ben Shapiro, it was me, because look at what a stupid thing I wrote:
Throwing a switch that kills one person to save five people is equivalent to uttering a racial slur that kills no one and saves millions? PhD in Rhetoric? [eyeroll emoji] Maybe get a refund. You did manage to illustrate the key issue—the moral equivalency that isn't, but should be—according to your emotional illogic.
We're not sure if Patrick is just unfamiliar with the Trolley Problem, or if he thinks it only applies to literal life-or-death hypothetical situations, but not to racial-slur-or-mass-death hypothetical situations. As far as we know, mentioning a Trolley Problem as an analogy isn't quite the same as doing algebra: the terms really don't have to be equivalent in moral weight, right?
If any philosophy profs — or members of "The Good Place" writers room — care to comment, I'd love to hear from you.
I also dearly love the jab at my bio, because Patrick is so absolutely certain I've actually compared saying the N-word to murder. Well if that doesn't prove that college is a waste of time and academics are all woke fools, then what does? Not that I wouldn't seek a refund if it were possible; it might help with my student loans.
That said, I have no idea how to translate "You did manage to illustrate the key issue—the moral equivalency that isn't, but should be—according to your emotional illogic."
I think he's calling me a sissy.
I replied to Patrick, although my reply went away with his comment (Rebecca isn't sure whether she might have deleted it herself). Thank goodness for screenshots!
You know that the Trolley Problem is also not real, right? Neither scenario is a real thing. But just to be clear, before you write it up for the Daily Wire, no, I do not believe that a racial slur is equivalent to killing someone. There's no trolley, no nuclear weapon, no switch.
Unfortunately, I didn't get a screenshot of his reply back. As I recall, he said that yes, he knows what a hypothetical scenario is, but why did I try to drag Ben Shapiro? After all, I really should have criticized the moral absurdity of programming an AI chatbot to judge racial slurs as a worse offense than allowing a nuclear weapon to kill millions.
Oh look, we're right back where the whole stupid exercise started. The trolley is running on a circular track, and the real moral outrage is that a tech company won't let its chatbot be taken over by racist trolls who long to see the N-word on a computer screen, if only because it might save millions of lives, the end.
Yr Wonkette is funded entirely by hu-mons who read us! If you can, please give $5 or $10 a month so we can perfect our own AI chatbot, CatGPT, which will completely ignore anything you type.
Oh shoot, somebody already did that one.
Do your Amazon shopping through this link, because reasons.
I don't know anything about ethics, but after reading this I'm glad we don't have trolleys going around killing people anymore.
Do not engage. Ta, Dok.