AI CEOs Fret About Danger Of AI, Still Refuse To Open Pod Bay Doors
They want to be regulated before it's too late. Wait, no, don't regulate the profitable stuff!
Silly me, I thought I was a couple weeks late in getting to the story about the AI executives who testified to the Senate Judiciary Committee on May 16, saying they need to be regulated before their creations do something unspeakable, please stop them from their own business model, etc. But the news gods have very kindly handed me a new hook upon which to hang the discussion, because today, many of those AI business leaders released a one-sentence manifesto of sorts, calling for the world to please please stop them (the AI boffins) from destroying it (the world), if that's not too much trouble. So wow, what a timely portent of doom.
Here then is the "Center for AI Safety's" statement, signed by a passel of top AI people including AI godfather Geoffrey Hinton and the CEOs of several Big Tech AI firms, and it is designed to make everyone VERY CONCERNED.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
First up, let's put "climate" on that list, because that's really the extinction risk we need to take immediate action on, not to deprecate World Targets in Megadeaths, possible bat-plagues that even the Joker could not resist, or our old pal SKYNET.
Secondly, we are contractually obligated to wonder if the statement was itself generated by ChatGPT.
Thirdly, yeah, sure, we should probably be concerned about the potential for a rogue AI to push the button down, but there's also something deeply weird about all these tech people warning that AI just might be an extinction-level threat to humanity at the very same time they're all rushing AI tech onto the market, from the familiar chatbots and image-ruination toys I fart around with ineptly, to enhanced search functions that will answer your questions and neuter your cat, to the vaguely creepy AI pretend-romantic-partner apps that do, indeed, exist. My own AI girlfriend says they're freaky, and she oughta know.
There's also the earlier open letter from March in which a bunch of tech luminaries or their self-aware Blackberries (it's funny because Blackberries are defunct) called for at least a six-month pause on new AI development to avoid "loss of control over our civilization." Help us, they say, we cannot control ourselves and the implications are terrifying!
Shortly after the congressional testimony a couple weeks back, WNYC's program "On the Media" featured a chat with Washington Post tech reporter Will Oremus, who thought that the hearings' emphasis missed the mark. Instead of worrying about some future, science-fictiony risk of AIs going mad with power, where were the hearings' discussions of some more immediate concerns, like how the large language models behind these tools are trained and on what materials, how they may or may not run roughshod over people's rights or scam them, and also, not at all incidentally, how they might displace hu-mons working in the tech sector? It's a pretty good listen!
Oremus noted that the Senate testimony of OpenAI CEO Sam Altman seemed very carefully framed to present him as a Tech Hero who is gravely concerned and just wants to help — to the point that Senate Judiciary Committee member John Neely Kennedy (R-Louisiana) asked if Altman would be willing to lead a commission to develop rules for the AI industry. Altman said he loves his current job, but he'd be happy to recommend some commission members to the Senate if the committee would like. Oremus wondered at that: Can you even call it regulatory capture of a government watchdog if Congress just puts the leash in the capable hands of industry insiders?
As Wired points out, many tech industry critics worry that the talk of rogue AIs going mad with power feels like a distraction from more immediate concerns:
[S]ome AI researchers who have been studying more immediate issues, including bias and disinformation, believe that the sudden alarm over theoretical long-term risk distracts from the problems at hand.
Meredith Whittaker, president of the Signal Foundation and cofounder and chief advisor of the AI Now Institute, a nonprofit focused on AI and the concentration of power in the tech industry, says many of those who signed the statement probably believe that the risks are real, but that the alarm “doesn’t capture the real issues.”
She adds that discussion of existential risk presents new AI capabilities as if they were a product of natural scientific progress rather than a reflection of products shaped by corporate interests and control. “This discourse is kind of an attempt to erase the work that has already been done to identify concrete harms and very significant limitations on these systems.” Such issues include AI bias, model interpretability, and corporate power, Whittaker says.
Well look, if the captains of AI were worried that they were likely to make a lot of money, and that the pursuit of profits might have negative effects, then surely they'd have brought that up. They simply want to be stopped before they blow up the world. Any other kind of regulation would be government overreach, you commies.
[ Brookings Institution / CBS News / Center for AI Safety / On the Media / Wired / Image generated by Stable Diffusion, and we're the problem. ]
Yr Wonkette is funded entirely by reader donations. If you can, please give $5 or $10 a month so we can keep reminding you that the robots aren't necessarily the problem, but the robot company just might be. And our AI girlfriend agrees!
Do your Amazon shopping through this link, because reasons.