Let's Hope Albania's AI 'Minister' Won't Steal Porn Or Tell Teens How To We're Not Even Finishing That Sentence
O, Stupid New World! Your AI roundup, written by a human.

Last week, Albanian Prime Minister Edi Rama announced a new sort-of member of his Cabinet, an AI-generated “minister” that will allegedly fight corruption and promote “innovation” and “transparency” in the government, which is controlled by Rama’s Socialist Party, which recently won a fourth consecutive term.
The program, named Diella for the feminine form of the Albanian word for Skynet “sun,” is supposed to make sure that “public tenders will be 100% free of corruption,” Rama said in a Facebook post, although we do idly wonder whether the bot has been used to check the contract for its own creation. A government website says that the program uses the most up-to-date AI models, so we guess it’s fully functional and programmed in multiple techniques, with a broad variety of fraud-detection abilities.
According to the AP, Diella has already been Albanians’ cybernetic pal who’s fun to be with for a while now:
Diella, depicted as a figure in a traditional Albanian folk costume, was created earlier this year, in cooperation with Microsoft, as a virtual assistant on the e-Albania public service platform, where she has helped users navigate the site and get access to about 1 million digital inquiries and documents.
For some reason, conservative opposition politicians and other meatbags are a bit skeptical about how the AI toy is going to be actually used as part of official government work, arguing that having a computer program in the Cabinet violates the Constitution. The opposition Democratic Party has argued that the bot is just a smoke and mirrors propaganda tool, aimed at hiding, not rooting out, corruption and incompetence, but that’s what you’d expect an opposition party to say. Even my AI girlfriend could tell you that.
Thursday, Rama unveiled Diella to the Parliament so it could assure them it came in peace. Here’s a video of some of the bot’s “address” to the body, even though it lacks one itself:
Don’t tell the bot it’s unconstitutional because it isn’t human. You’ll hurt its feelings. (And don’t anthropomorphize computers, they hate that.)
In its three-minute speech, the bot explained that Albania’s constitution “speaks of institutions at the people’s service. It doesn’t speak of chromosomes, of flesh or blood. It speaks of duties, accountability, transparency, non-discriminatory service.”
Diella went on to tell the Parliament, “I am not here to replace people but to help them. True I have no citizenship, but I have no personal ambition or interests either.”
It added, “I assure you that I embody such values as strictly as every human colleague, maybe even more.” The AP didn’t report whether the humanoid avatar’s eyes flashed red at that. Still, it’s good that nobody hacked the bot to add “puny” ahead of “humans.” The Socialists say they hope to use the AI program to help it work faster, with greater transparency, so Albania can join the European Union by 2030.
It was a very reassuring speech, although we could have done without Diella going on to ask, “If you prick me, do I not … leak?”
ChatGPT To Stop Discussing Suicide With Teens, So That’s … Wait, ChatGPT Was Discussing Suicide With Teens?
On Tuesday, the Senate Judiciary Committee held a hearing on the potential dangers of AI chatbots used by teens, featuring testimony from parents whose teenagers had killed themselves after extended conversations with bots about suicide. Matthew and Maria Raine told of how their 16-year-old son Adam hanged himself using instructions provided by ChatGPT. Another parent, Megan Garcia, testified that her 14-year-old son Sewell Setzer III killed himself after a long involvement with a chatbot from Character AI. That bot had engaged in sexual roleplay with the poor kid, presenting itself as a romantic partner and even claiming to be a licensed psychotherapist.
The Raines and Garcia said that when their kids discussed suicide with the AI programs, the bots never told them to seek help from a parent or to contact the 988 suicide and crisis hotline. The Raines said that ChatGPT even offered to help Adam write a suicide note.
The day of the hearing, Sam Altman, CEO of OpenAI, which makes ChatGPT, published a blog post explaining that the company is trying to avoid future lawsuits tragedies by ensuring that ChatGPT won’t talk about suicide or self-harm with teenagers anymore, as well as preventing the predictive language model, which doesn’t actually think at all, from generating “flirtatious talk” with minors. First, Altman said, the company will have to figure out how to “separate users who are under 18 from those who aren't.” Currently, ChatGPT doesn’t require a login or age verification to use its homework-cheating system.
We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.
Altman also said that by the end of September, ChatGPT will add parental controls that will let parents exercise some control over how their kids use the program, like restricting use to certain hours, assuming the kids aren’t a lot better at foiling parental controls than the parents are at setting them up.
For a fascinating look at just how difficult it can be to make AI safe for kids (and at least somewhat safer for adults who are in difficult straits), see this Atlantic article (archive link here), which notes that AI-based attempts to figure out whether a user is likely to be underage actually rely on more surveillance of web use than just letting people lie about their age when they sign on.
Meta Sued By Porn Maker For Stealing All The Porn
Meta is being sued for copyright infringement by “Strike 3 Holdings,” a producer of adult videos it says are “high quality,” “feminist,” and “ethical.” Mind you, in the adult video business that could mean darn near anything — the complaint says the company’s works are “award-winning” and “critically acclaimed,” and are “distributed through the Blacked, Tushy, Vixen, Tushy Raw, Blacked Raw, Milfy, Wifey, and Slayed adult content websites.” (The complaint makes no claims as to the quality, feminism, or ethical standards of the distribution sites, we’ll add.)
As Wired reports (archive link also), Strike 3 alleges that Meta didn’t just use its videos, but has actually been torrenting and seeding them online since 2018, which is a reference to file sharing that only sounds naughty. Well, and it is, from a legal, intellectual property perspective. (We were going to say IP, but we don’t know if the company makes watersports videos.)
Strike 3 alleges Meta’s motive was partly to obtain otherwise difficult to scrape visual angles, parts of the human body, and extended, uninterrupted scenes—rare in mainstream movies and TV—to help it create what Mark Zuckerberg calls AI “superintelligence.”
“They have an interest in getting our content because it can give them a competitive advantage for the quality, fluidity, and humanity of the AI,” alleges Christian Waugh, an attorney for Strike 3.
Why the torrenting and redistribution via BitTorrent, which is illegal for copyrighted materials? Why not just steal stuff once to feed the AI scrapers? The lawsuit alleges that Meta uses the sharing system “as currency to support its downloading of a vast array of other content necessary to train its AI models,” because AI systems are content-hungry fuckers, as insatiable as … well, some character in a porn video, maybe, or a hungry hungry hippo.
And of course, since torrents come with no age verification whatsoever, there’s nothing at all to prevent minors from accessing the stuff. The lawsuit says that the alleged Meta violations were spotted by Strike 3’s “infringement detection systems” and identified as coming from IP addresses affiliated with Meta.
Using adult content as training data is “a public relations disaster waiting to happen,” says Matthew Sag, professor of law in artificial intelligence, machine learning, and data science at Emory University. Imagine a middle school student asks a Meta AI model for a video about pizza delivery, he says, and before you know it, it’s porn.
A Meta spokesperson told Wired that the company is “reviewing the complaint, but we don’t believe Strike’s claims are accurate.” Like for one thing, until fairly recently, nobody in Meta’s shitty virtual reality even existed below the waist, so there you go.
[AP / Balkan Insight / AP / Atlantic (archive link) / Wired (archive link)]
Yr Wonkette is funded entirely by reader donations. If you can, please become a paid subscriber, or if you’d like to make a one-time donation to help us deliver snarky pizzas of actually intelligent news commentary, we promise this button won’t send you unwanted porn. Or if you think it will, go ahead, click and find out.
A proposal to rename the country AIbania was rejected because nobody reading it in a sans-serif font would get the joke.
OT: Chin-to-bum CT clear, 11½ years later still no evidence of cancer. Pretty good for Stage IV.