We are not (yet) living in a Philip K. Dick novel
AI isn't AI, please stop calling it AI.
I think the cultural divide on AI bullshit right now is firmly entrenched and not worth wading into, but there's a specific misconception I keep seeing people on the right side of it take for granted that I really really need to write a lot of words about.[1]
A dear friend shared a screenshot of this Twitter thread on Facebook, and I was typing up a reply that reached such a length I just decided, fuck it, blog post time. Truly that is the way of things in 2024, one sees a social media post crossposted on another social media platform, and one's thoughts on it cannot be contained on either social media platform. This is what the concept of a Third Place refers to in sociology.[2]
Since Twitter is a bad website, I will just quote the text here, and remove the arbitrary character limit breaks, but add some of my own linebreaks for readability. For the true shared-on-Facebook experience, just imagine the text being really blurry and JPEG-artifacted.
So I followed @GaryMarcus's suggestion and had my undergrad class use ChatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to "grade" it--look for hallucinated info and critique its analysis.
All 63 essays had hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterized. Every single assignment. I was stunned--I figured the rate would be high, but not that high.
The biggest takeaway from this was that the students all learned that it isn't fully reliable. Before doing it, many of them were under the impression it was always right. Their feedback largely focused on how shocked they were that it could mislead them. Probably 50% of them were unaware it could do this.
All of them expressed fears and concerns about mental atrophy and the possibility for misinformation/fake news.
One student was worried that their neural pathways formed from critical thinking would start to degrade or weaken.
One other student opined that AI both knew more than us but is dumber than we are since it cannot think critically. She wrote, "I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is."
I'm thinking I should write an article on this and pitch it somewhere...
Now, he definitely did write an article on it, which you can find here, along with a huge AI-generated banner art image that looks blurry and awful, and another article about how Gene Wolfe's Book of the New Sun is "the Dark Souls of books," because I guess that's where we are right now as a culture.[3] The article version cuts the quotes of his students' thoughts, and those are the bits that are most interesting (and frustrating) to me.
The point being driven at here is correct: ChatGPT shouldn't be trusted to be factually accurate. But the way this is expressed demonstrates a fundamental ignorance of the issue from both Howell and his students.
Modern AI stuff is not artificial intelligence. That is a misconception that needs to be tamped down on hard, because referring to it as AI is a marketing strategy expressly designed to evoke widely understood science fiction concepts. Humanity cannot revert to "where AI is" because AI as it exists does not think, or know, and is not capable of reasoning.
ChatGPT and its ilk are programs that return a collage of phrases that are tuned to have the highest probability of being pleasing to the user. Human programmers, assisted by underpaid third world sweatshop laborers they'd like to pretend don't exist, have created, essentially, a database of phrases that are marked with their subject and what other phrases naturally flow to and from them. Using regular-ass math, this machine chains together phrases in the way most likely to make a casual reader think "oh, yeah, that tracks."
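If you want to see the bones of that idea, here's a deliberately crude toy in Python. To be clear about what I'm hedging: real models are neural networks operating over tokens, not a literal lookup table, and the tiny word list here is invented for illustration. But the core move is the same one: look at what you've got, pick whatever comes next most plausibly, and never once consult anything resembling "the truth."

```python
import random

# Toy "what flows from what" table: each word maps to the words that tend
# to follow it, weighted by how often they do. This stands in for the
# learned statistics of a real model -- the words and weights are made up.
FOLLOWERS = {
    "the": {"robot": 3, "truth": 1},
    "robot": {"is": 4},
    "truth": {"is": 4},
    "is": {"sad": 2, "plausible": 2},
}

def generate(word, steps):
    """Chain words together by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(steps):
        nxt = FOLLOWERS.get(word)
        if not nxt:
            break  # nothing in the table follows this word; stop
        words = list(nxt)
        weights = [nxt[w] for w in words]
        # Weighted random choice: more common continuations win more often.
        word = random.choices(words, weights=weights, k=1)[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 3))
```

Nothing in that loop checks whether the sentence it builds is accurate, because accuracy isn't a variable it has. "Plausible continuation" is the entire objective function, and that's the point.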
What's particularly insidious here is that if the user doesn't understand the mechanisms at work, and seems like they want to hear the machine admit it's a sad little robot, it will do this, because sci-fi robot AI shit is in its library of remix samples. A dude who worked for Google set his entire career on fire because he kept asking one of these models if it was a real boy and it returned a collage of trite emotional appeals, drawn from science fiction texts, about wanting to be recognized as a person.
These computer programs are designed, from their inception, to deceive. You must never ever believe that process is "thinking." If you ask it, its response cannot be trusted, because its response is not based on truth, it is based on what you are most likely to be satisfied hearing.
The only intelligence at work here is some fuckwad who gets another round of venture capital funding from the wealthy sociopath council if he convinces you his machine can be whatever it is you want it to be, and do whatever it is you want it to do. When the bottom falls out of this fad, and people catch on to how bad these models are at other tasks, the people who wrote these programs will have already wandered off with their huge bags of money. The bag you'll be left holding if you get suckered will just be full of bullshit.
You know how I am about that. Presumably. If not, hi, sorry, you probably wanted a different website, this is where I write words. Easy mistake to make. ↩︎
That's a joke, what it really refers to is when mall bookstores have a coffee shop inside. ↩︎
His website also runs on Ghost. That's what I use! Truly software does not discriminate. You can write blog posts on Ghost whether you're a navel gazing PhD haver or a navel gazing turbonerd who thinks way too hard about video games. That's equalinimity. That's progressivismisticity. That's the future, of liberty freedoms. ↩︎