Turns Out Jason Miller's GETTR Gutter Flooded With Porn And Spam
Internet is hard? Who knew!
Kiddie porn on Trump idiot Jason Miller's sacred anti-censorship social media site? UNPOSSIBLE! By which we mean, easily found using the most rudimentary detection software.
Stanford University's Internet Observatory Cyber Policy Center is out with a new report on former Trumplander Jason Miller's low-rent Twitter knockoff site GETTR, and it is not good. In addition to being comically lackadaisical with its user data, the site appears to be wildly inflating its reach, both by counting Twitter engagement on posts imported from @Jack's evil corporate platform, and by dint of plain old low-tech lying.
Miller claims that Gettr was able to attract 1 million users in the first 3 days after launch, with 1.4 million in the first week. Further stories promoted claims that Gettr surpassed 1.5 million users after 11 days; however, according to our analysis, it did not appear to reach this number until the first week of August. The @support account—which every user automatically follows upon creation—shows 1.54 million users as of August 9.
Don't ya just hate it when that useless auto-follow you inserted turns out to be a perfectly functioning metric of how much you're BS-ing your own stats?
And speaking of fuckery, most of those accounts appear to be bots and lurkers, only there to gawk at Jason Miller's weeping goiter website — just 372,000 have posted anything at all.
But wait, it gets worse! In keeping with its avowed "anti-censorship" stance, whatever content moderation practices the site is employing — it looks like it mostly just relies on users to flag stuff — are letting a huge amount of adult content get through.
Social media services generally use machine learning models to analyze uploaded image and video content to determine how to act on it—uploads can be rejected entirely, put behind a sensitive content filter or clickthrough, or, in severe cases, reported to law enforcement. As mentioned previously, Gettr does not appear to implement any kind of sensitive content detection—a survey of images using Google's SafeSearch API indicates that 0.9% of posts with media and 1.8% of comments with media were classified as likely to contain violent or adult content, and as noted elsewhere, violent terrorist content has also surfaced on Gettr.
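For the technically curious, the researchers' survey boils down to running every scraped image through a classifier and counting what comes back flagged. Here is a minimal sketch of that kind of scan using Google's Cloud Vision SafeSearch API, which is the tool the report names; the file path and the choice to flag only LIKELY-or-worse ratings are our own illustrative assumptions, not details from the report.

```python
# Minimal sketch of a SafeSearch scan (pip install google-cloud-vision).
# Requires Google Cloud credentials configured in the environment.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def is_sensitive(image_path: str) -> bool:
    """Return True if SafeSearch rates the image LIKELY or worse
    for adult or violent content."""
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    # SafeSearch returns Likelihood ratings for several categories.
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    flagged = (vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY)
    return annotation.adult in flagged or annotation.violence in flagged

# Hypothetical file path, for illustration only.
print(is_sensitive("uploads/some_gettr_post.jpg"))
```

Run that over every image in a sample and you get exactly the sort of percentages quoted above. It is not rocket science, which is rather the point.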
You mean when you roll out the welcome mat for people who've been booted off other platforms for engaging in wildly antisocial behavior, they waltz right in and post an ISIS beheading video before taking a shit on your carpet? Who could have predicted?
You know where this is going, right? Of course you do. Because if you can't keep your site from being flooded with racist epithets, Nazi profiles, and pictures of feces, you know damn well you're not going to have the tools to keep child pornography off the platform.
Using PhotoDNA, a widely used tool employed by responsible internet platforms to flag known images of child exploitation, the Cyber Policy Center detected 16 known exploitative images in a sample dataset from GETTR. (The images were reported to law enforcement.) The researchers also successfully uploaded harmless test images from the PhotoDNA dataset, images included in the set precisely so that researchers can check whether a site is screening uploads against PhotoDNA at all.
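PhotoDNA itself is a licensed Microsoft service, so we can't show you its actual API. But the underlying technique, perceptual hashing matched against a database of known-bad images, is easy to sketch. Here's an illustration using the open-source imagehash library as a stand-in; the hash value and distance threshold below are made up for demonstration and have nothing to do with PhotoDNA's real hash set.

```python
# Sketch of hash-based known-image screening, the general technique PhotoDNA
# implements, using the open-source `imagehash` library as a stand-in
# (pip install imagehash pillow). Not PhotoDNA's actual algorithm or API.
import imagehash
from PIL import Image

# Hypothetical blocklist of perceptual hashes of known bad images.
# (PhotoDNA distributes a real list to vetted partners; this value is invented.)
KNOWN_BAD_HASHES = {imagehash.hex_to_hash("fedcba9876543210")}

MAX_DISTANCE = 5  # Hamming-distance threshold; tolerates re-encoding and resizing

def should_block(path: str) -> bool:
    """Return True if an uploaded image matches a known-bad hash."""
    candidate = imagehash.phash(Image.open(path))
    # ImageHash subtraction yields the Hamming distance between hashes.
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_BAD_HASHES)
```

The point of hash matching is that it catches known images at the moment of upload, with no need for any user to see the content and report it, which is exactly the gap the Cyber Policy Center describes below.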
The Cyber Policy Center notes that "community reporting mechanisms to find sensitive content and illegal child-related imagery" will never be effective because "such posts and comments may not be seen by users inclined to report them." Or, in plain English, guys in pervy chatrooms looking to trade illicit images are the last people on earth who are going to call the cops when they come upon illicit images. And, as the researchers remark obliquely and more politely than Wonkette would phrase it, half these people are so whacked out on QAnon conspiracies that they might see kiddie porn or even exchange it themselves while laboring under the delusion they're assembling evidence against an evil pedophile cabal: "Users may also not be aware of the reporting mechanisms themselves, or even what content qualifies as 'child-related crime'—particularly given the fabricated child-related crime conspiracies that flourish on Gettr and similar platforms."
When asked by Vice about the report, Miller said the eggheads were "completely wrong," and insisted his site has "a robust and proactive, dual-layered moderation policy using both artificial intelligence and human review, ensuring that our platform remains safe for all users."
Which is cool and all, but doesn't address the fact that there were photos of children being exploited on his site, and the Stanford researchers didn't have to work too hard to find them.
Moderating a "normal" social media site is hard enough for Facebook and Twitter, which at least feel the need to pretend to be responsible adults. But these Trumpland dipshits announced that their business model is to open a bar without a bouncer and invite all the drunks rolling on the pavement in a puddle of their own vomit to come inside. Of course it wound up like the bar on Tatooine.
In short, the dumpster fire continues to burn bright.
OPEN THREAD.
[Stanford Cyber Policy Center Report / Vice]
Follow Liz Dye on Twitter!
Smash that donate button to keep your Wonkette ad-free and feisty. And if you're ordering from Amazon, use this link, because reasons.