Discussion about this post

ziggywiggy:

Update! On last night’s “Oh FUCK, oh FUCK it’s CRUNCHed” story from Friday night in Cleveland Heights. It is still there. So stolen car + TikTok video is what I am guessing.

https://substack.com/@ziggywiggy/note/c-180165989?utm_source=notes-share-action&r=2knfuc

The Silver Symposium:

As I've written several times in this very comment section, Grok is not taken seriously by anyone who actually knows anything about AI. The core reason is simple: it's basically the AI equivalent of the Cybertruck. It's an imitation of things other people built that ignores all the realities of what he's trying to do.

For example, trying to make a conservative Wikipedia has been a thing forever. But an AI is just a prediction engine. That's it. It's not 'alive.' It doesn't 'think.' It's basically the world's most complex odds-making machine. Human beings are creatures of pattern, and AI as it exists just recognizes very complex patterns and emulates them, so how good a model is depends entirely on how good the information you give it is.
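To make the 'odds-making machine' point concrete, here's a toy next-word predictor. (A deliberately tiny sketch with a made-up corpus; real models work over billions of parameters and whole internets of text, but the principle is the same.)

```python
from collections import Counter, defaultdict

# Toy "odds-making machine": count which word follows which in a corpus,
# then predict the most likely next word. The output is nothing but the
# dominant pattern in the training text.
corpus = "the full moon rose . the full moon set . the sun rose .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the word that most often followed `word` in the training data.
    return follows[word].most_common(1)[0][0]

print(predict("full"))  # "moon" -- the only pattern after "full"
print(predict("the"))   # "full" -- "the full" appears twice, "the sun" once
```

No understanding anywhere in there, just counting. Scale the counting up far enough and you get something that looks eerily fluent.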

Years ago someone trained a model on nothing but 4chan posts and it was little more than a slur machine. Why? Because that's all the information it had to work with. This is not that new; what's new is how complex the outputs can be.

Grok exists because Musk wants an 'anti-woke' AI, but because 'wokeness' no longer has any clear definition beyond 'whatever we're mad about today,' you're left with a model that essentially just exists to massage Musk's ego.

If nothing else, the outputs of Grok tell us what Musk is mainlining, because in order for it to output things like that, it has to first have enough data like that to recognize patterns. It's not 'understanding' 'facts' or anything like that.

Here's an example: To us, the word "moon" has context. We can immediately think of lots of things about that word, and every word. But to an AI model, "moon" is a number that connects to various other numbers with no intrinsic values beyond that. It might output things like "full moon" because it recognizes that in a lot of cases, those two words go together. Language, as everyone who's ever taken a language class knows, has rules, and rules are just patterns.
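A rough sketch of the "'moon' is just a number" point, using a made-up four-word vocabulary:

```python
# To the model, words are integer IDs with no intrinsic meaning.
# Everything it "knows" is statistics connecting these numbers.
# (Toy vocabulary invented for illustration.)
vocab = {"full": 0, "moon": 1, "sun": 2, "rose": 3}

sentence = ["full", "moon", "rose"]
token_ids = [vocab[w] for w in sentence]
print(token_ids)  # [0, 1, 3] -- this is all the model ever sees
```

There's no "moonness" attached to the 1; the association with 0 ("full") only exists because those numbers kept showing up next to each other in the training data.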

My point is that in order for Grok to be that sycophantic, it has to have its 'weights' made that way, meaning it was given enough data to develop a pattern where it 'believed' the 'correct' output was highly likely to be like that. Again, it fits the 'pattern' in the training data it was given.
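A crude illustration of "the weights reflect the data": a single weight trained by gradient descent. (The function and names are mine, and this resembles nothing about Grok's actual training; it just shows that a weight is whatever value the training data pushed it toward.)

```python
# One weight, trained to minimize squared error against its targets.
# Feed it skewed targets and the weight ends up skewed -- the model has
# no opinion beyond what the data pushed it toward.
def train(targets, steps=1000, lr=0.1):
    w = 0.0
    for _ in range(steps):
        for t in targets:
            w -= lr * (w - t)  # gradient step on (w - t)^2 / 2
    return w

print(round(train([1.0, 1.0, 1.0]), 2))  # 1.0
print(round(train([5.0, 5.0, 5.0]), 2))  # 5.0 -- same code, different data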

This of course makes it functionally useless for anyone who actually wants to use AI. AI has real productivity benefits, in that it can automate a lot of things that are time consuming and tedious. We've been using machine learning in everything from surgery to business for decades. For example: Black Monday in 1987 was a financial crash amplified by automated program trading, where computer models sold as prices fell, which pushed prices lower, which triggered more automated selling, and everyone panicked. Somehow, despite that being nearly 40 years ago, people are no smarter about computers being no more infallible than the people who make them.

The core thing that any model needs to work is good data. Train something on bad data, it learns bad patterns, and it produces bad outputs. It's not fundamentally different from high schoolers learning how to write from Shakespeare or Poe and their writing ending up full of cumbersome purple prose.
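Same point in code: an identical toy next-word predictor trained on two different made-up corpora picks up completely different habits. The code never changes; only the data does.

```python
from collections import Counter, defaultdict

# Fit a tiny next-word predictor on a given text and return a
# predict function. Both corpora below are invented for illustration.
def fit(text):
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return lambda w: follows[w].most_common(1)[0][0]

good = fit("the moon rose slowly over the quiet hills")
bad = fit("the moon is fake the moon is fake the moon lies")

print(good("moon"))  # "rose"
print(bad("moon"))   # "is" -- the dominant pattern in its training data
```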

But if you want to like, USE an AI model, for like, anything, it needs to be trained on good data, and creating datasets that are 'anti-woke' just means you're training it on nonsense. You might as well train it on the benefits of Phrenology or something.

Grok is a joke, and no serious AI follower thinks it'll ever be anything more than one rich guy's vanity project.
