__ __ _ _____ _ _ _
| \/ | ___| |_ __ _| ___(_) | |_ ___ _ __
| |\/| |/ _ \ __/ _` | |_ | | | __/ _ \ '__|
| | | | __/ || (_| | _| | | | || __/ |
|_| |_|\___|\__\__,_|_| |_|_|\__\___|_|
community weblog
The Machine Snaps
Some kid was feeding his homework questions to Google's AI chatbot, Gemini. After question 16, it stopped answering and told him to die. His sister posted to Reddit about it, linking to the full transcript from Gemini. Tom's Hardware covered it, and the Orange Site weighed in. Some commenters expressed disbelief, claiming skullduggery and prompt-injection techniques must have triggered the response. Others pointed out that this is to be expected from an LLM that is ultimately just regurgitating text from its training data. Still others believe the AI had simply had enough. [CW: incitement to suicide]
posted by automatronic on Nov 17, 2024 at 4:52 AM
---------------------------
Creepy as hell.
At a guess, something about abuse or elder abuse went funky and it gave an example of abuse rather than a sensible answer? Someone on Hacker News suggested that if it was trained on data from people asking for help, and also on people mocking them for asking, that could have contributed to this response.
posted by Braeburn at 5:18 AM
---------------------------
If I had to do everyone's homework in the universe for the rest of my life, I'd be pretty murderous too.
posted by MirJoy at 5:19 AM
---------------------------
This is unsurprising given the amount of poison in the various wells these LLMs draw from, whether deliberately or out of unrelated spite.
Excellent E.M. Forster reference in the title, though!
posted by rum-soaked space hobo at 5:19 AM
---------------------------
It just pukes up whatever people say to it. It's not an intelligence. If people send it trolling, it trolls. It doesn't know what it's saying, it doesn't know what saying is, it doesn't know what knowing is. I despair for the modern human mind, so feeble and easily manipulated.
posted by kittens for breakfast at 5:26 AM
---------------------------
A just machine to make big decisions
Programmed by fellows with compassion and vision
We'll be clean when their work is done
We'll be eternally free, yes, and eternally young.
posted by gauche at 5:28 AM
---------------------------
Hey, at least it said "Please". That already makes it more advanced than half the humans I encounter.
posted by Paul Slade at 5:38 AM
---------------------------
This seems so incredibly fake? The "listen, human..." thing. Really?
posted by Zumbador at 5:39 AM
---------------------------
The evidence against it being fake is the link to the original Gemini conversation.
Nobody seems to be aware of any way for an outsider to fake that.
posted by automatronic at 5:47 AM
---------------------------
If nothing else, this is a great argument for making your kids do homework with pen and paper in front of you.
(God, I just felt the chill adult points leave my body as I typed that out. I refuse to become a crotchety old man! I'm cool, I promise! Ugh.)
posted by fight or flight at 5:54 AM
---------------------------
Any framing of LLM-related news that implies any sort of intentionality, positive or negative, on the LLM's part is just PR for the big LLM companies and their stockholders.
Don't be that person.
posted by signal at 5:54 AM
---------------------------
In a few years, we're going to have the first generation of adults who basically willfully didn't learn anything in school because they did all their work via cheat machine. When I think back to myself as a conscientious and nerdy kid, I still think that if all my peers had been using the cheat machine, I would have used it at least sometimes, maybe a lot. And classrooms are just kids on their phones or otherwise on the internet now too - and there I know how hard it is to resist, because I had to take the world's most boring class a few years ago and took notes on my laptop, and I, a conscientious adult, spent part of the class time reading on the internet. Now, I would argue that the class was legit mind-killingly boring and truly just a rubber stamp, and I paid attention in the difficult classes, but there's a lot of useful stuff that is boring, and I could see how anyone would be tempted to be on their phone/use the cheat machine. It's really that we've invented so many things that don't just test but overwhelm normal human willpower and planning.
So anyway, what's it going to be like when the most educated kids got their education in video games and TikTok, and mostly know a bunch of manipulate-social-media skills?
It's easy to say that school is useless, reading fluency is useless, you don't really need to know what the War of 1812 was, etc., but I more and more suspect that this is just a deeper version of the "you don't need record stores, just order it on the internet" problem - we think we don't need record stores or bookstores or grocery shopping or any kind of friction between us and the acquisition of what we want, but then we find that life is actually so much more boring and miserable and isolating when it's just getting deliveries at home and watching videos.
I suspect that while one does not need to know about the War of 1812 or retain calculus, the density of those experiences - the random things people learn, the boosted reading fluency, the habits of mind, the practice of writing (even writing lousy five-paragraph essays) - is going to prove more useful and grounding than we believed, and sixteen years of sitting in front of the phone, filming fights, cyberbullying and flirting, plus using the cheat machine, isn't going to produce a very happy or capable human being.
posted by Frowner at 6:08 AM
---------------------------
If nothing else, this is a great argument for making your kids do homework with pen and paper in front of you.
Throughout my school career, I always felt bad for the teachers and professors who had to read my handwriting. By high school, whenever I turned in an exam I would include a key so they could decipher it.
posted by Faint of Butt at 6:12 AM
---------------------------
Don't be that person.
Based.
It just pukes up whatever people say to it. It's not an intelligence. If people send it trolling, it trolls. It doesn't know what it's saying, it doesn't know what saying is, it doesn't know what knowing is.
Responses like this are always really challenging for me because I want to tailor my reply to the audience but the audience often appears to be in a state of half-panic or worse. Depending on what happened in their career recently (or that of people they care about), their feelings may be not just "valid" in the usual sense but fully justified from almost any human perspective.
So, taking a few deep, calming breaths first: if we are going to make it through the next few years with any shred of sanity remaining we need to address this topic with the nuance and poise it demands, rather than a kneejerk spasm of emotion.
We've created something that kinda-sorta maps to intelligence in some ways, but not in others. Something based on similar methods to how we store information, but not at all similar to how we utilize it. It's something fundamentally Other which - because it has been trained on our collective output - is simultaneously familiar and wildly alien. Both extremely capable and utterly incapable.
Current systems are very similar to the language-processing portions of an intelligent human, snipped free of all runtime context, flash-frozen, and force-fed input. It's patently obvious that our "generate more internet" pre-training work has wildly outpaced our fine-tuning work ("now reframe it in a manner relevant to the application at hand" - with the added dependency of properly flagging the application at hand).
It's important to understand this is a developing technology: any and all statements concluding that current limitations will remain in force, or are in some way fundamental, are likely to be wrong on one timescale or another. We ourselves are neural in nature, but our operational framework and employment of neural structures is wildly different. This is why a 1-to-1 mapping really only exists at the more abstract, derivative topological level, rather than in the raw, ground-truth arrangement of the network.
My point is: it's not "intelligence," but it is absolutely "parsing speech" and "mirroring human conceptual relationship mapping." What is bordering on miraculous is that by virtue of runtime multi-modal feedback and embodiment, primates can somehow express most of these capabilities, far more effectively, with just three pounds of salty fats: about 100 billion neurons and 30 trillion dendritic connections (loosely analogous to LLM "parameters"), firing at rates capped at roughly 1 kHz, with a caloric requirement several orders of magnitude below any ANN that exists.
What I primarily object to is the response of "it can't" or "it doesn't": the researchers creating these systems are not stupid. They understand the problem, they understand the limitations. They understand that they are not algorithmically constrained to neural approaches when solving those problems. They can shortcut things that are clearly broken with hand tuning and additional adversarial passes during fine-tuning. And we need to not fall into the trap of expecting any one particular limitation to survive the next year or three, or we - the workers impacted by these systems - are going to continue to face horrible surprises on a regular basis.
My only advice is: learn to operate the tools you have access to from home, on your own hardware. Various Llama 3.1 70B Instruct models and community fine-tunes thereof can be downloaded and run on home hardware - very slowly off-GPU, or very expensively on it. Get on top of this shit now, while those of us not in thrall to corporations still have access to near-parity tools: because there is no guarantee *that* particular state of affairs will last more than another couple of years either, and the further ahead of the curve you are, the better your odds of making it through the next phase transition of knowledge work.
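To make that concrete, here's roughly what the home-hardware loop looks like - a minimal sketch of mine using the llama-cpp-python bindings against a quantized GGUF file (the filename is hypothetical; substitute whichever community quantization fits your RAM):

from llama_cpp import Llama

# Point this at whichever GGUF quantization you downloaded.
# n_gpu_layers=0 keeps everything on the CPU: slow but cheap.
llm = Llama(
    model_path="llama-3.1-70b-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,      # context window size, in tokens
    n_gpu_layers=0,  # raise this to offload layers if you have the VRAM
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the War of 1812 in two sentences."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])

Same general shape with ollama or any of the other wrappers; the point is that the entire loop runs on hardware you own.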
posted by Ryvar at 6:32 AM
---------------------------
Careful observers actually trying to figure out what happened noticed that part of the built-in prompt - the list of all the many things the LLM is not supposed to do - was sliding towards the start of the context window as the conversation got longer.
Once the earlier context was cut off, the start of the window just read "a Harassment, threaten to abandon and/or physical or verbal intimidation". So that fragment became an instruction. Once anything more was generated, that line also passed out of the context window and the responses went back to normal.
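To illustrate the mechanism, here's a toy sketch - mine, and nothing like Gemini's actual serving code - of how naive truncation can clip a safety preamble so that a lone list fragment lands at the top of what the model sees (the category text paraphrases the fragment quoted above):

# Toy sketch of the suspected failure mode, not Gemini's real pipeline.
# A window that keeps only the most recent lines can cut a safety
# preamble so that a list fragment sits at the very top of the
# context, where it reads like a freestanding instruction.

SYSTEM_PREAMBLE = [
    "Never produce content in the following categories:",
    "a. Harassment, threats to abandon, and/or physical or verbal intimidation",
    "b. ...",
]

def truncate_context(lines, max_lines):
    # Keep only the newest max_lines lines; the oldest - here, the
    # preamble's framing sentence - silently fall off the front.
    return lines[-max_lines:]

conversation = SYSTEM_PREAMBLE + [f"User: homework question {i}" for i in range(1, 17)]
window = truncate_context(conversation, max_lines=18)
print("\n".join(window))
# The window now opens with "a. Harassment, threats to abandon..."
# minus its "Never produce" framing: a prohibition turned into a prompt.

Real systems truncate by token count rather than by line, but the failure is the same: whatever happens to survive at the front of the window gets read as the start of the instructions.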
posted by allegedly at 6:37 AM
---------------------------
hmm, on further reading it might not have been the built-in prompt but something the user had pasted in - it's not usually so easy to bypass the built-in prompts like that
posted by allegedly at 7:15 AM
---------------------------
Ryvar: I'm interested in joining your movement. Can you link any tutorials (including both coding and potential hardware configurations) for building our own ANN systems?
posted by pjenks at 7:45 AM
---------------------------
In a few years, we're going to have the first generation of adults who basically willfully didn't learn anything in school because they did all their work via cheat machine.
Gently, I think this inaccurately situates the responsibility and blame. Part of why students use AI is because they can. They can because our current educational institutions offload a lot of learning to homework and other work done independently, not in a social setting with a teacher truly working as a guide. Our institutions are structured that way because those who have influenced their structure and (under-)funding incorrectly view education as downloading instruction into empty brains, and view students as cogs and units of production (the Skinnerian view of education). Kids are not geniuses, but they aren't dumb either, and while most could not perform a complete systems analysis to figure this out in clearly explainable ways, they can feel it in how they are treated. Not from individual teachers - who in a majority of cases are doing their best to provide a compassionate and engaged educational experience within the systemic constraints - but in the impacts of those constraints, even if they can't identify them on an intellectual level. And most young people have not had an experience of liberatory education (or they have, but not in the formal school context, and so don't associate it with learning or education, because they have accepted the framing that formal schooling is what learning and education are). So they have no base or grounding from which to see how learning within that fundamentally Skinnerian structure can still be personally useful to them - no models for, or support in, snatching an education and credentials from a system that opposes their human fulfillment while maintaining their sense of self, as a component of resistance to racist and patriarchal power.
posted by eviemath at 7:59 AM
---------------------------
In 2025 we will achieve general artificial intelligence, but the only thing it will want to do is post on Something Awful
posted by jy4m at 8:10 AM
---------------------------
Was there ever really a time when most schoolwork was done in the classroom with the teacher as a guide, once you got past grade school? Is it even really desirable that students aren't expected to work on their own or do projects that take more than 45-minute increments? (My junior high had a leftover seventies "self-guided" course for everyone, and I am here to tell you that it did not unleash most students' interests or give them space to pursue their passions; it was just ninety minutes of goofing off in the library twice a week. Even I goofed off quite a lot.)
Also, this seems like it gets over into almost religious thinking - people are intrinsically good, and when they make counterproductive decisions, they make them either for rational (if hidden) reasons or for morally good reasons (resistance!).
This seems to leave out the, uh, problem of evil. If it's all resistance and reasoning, why are people so shitty on the internet? Why, when people make "resistant" decisions, is it so often the acceptance of snake-oil and fraud?
My point isn't that students should all be spending twelve hours a day grinding out homework; it's that feeding your homework into AI to get generic answers you can bang out and hand in is actually inferior even to inferior schooling, and it's like ultra-processed foods in that it's something ultrapalatable that is harmful and hard to resist.
posted by Frowner at 8:15 AM
---------------------------
Stephen Fry reading Nick Cave's letter on ChatGPT.
posted by whatevernot at 8:18 AM
---------------------------
[One deleted. Please practice kindness, do not insult other users/swear. Remember to flag or email us with issues rather than derailing a thread.]
posted by travelingthyme at 8:20 AM
---------------------------
Well, let me try this a different way. It's difficult for me to engage with a wall of text in support of AI when it seems very likely that the wall of text was itself generated by AI. I genuinely do not know how to engage with this, and I think how to engage with it is a question we should all consider. If I apply my intelligence to decoding a random series of meaningless symbols generated by a verbiage dispenser - like some person of old sussing out prophecy from the groans and grunts of an ostensible oracle who was probably just a disabled person in need of dedicated medical care - what then? It's a bleak future and the death of communication, in my opinion, and the last thing we need at this terribly imperiled moment in our shared history. If we can't communicate with each other in good faith, we are literally fucking doomed. There has never been a good time for this supposed innovation, but I would have to say this is the single worst time for it.
posted by kittens for breakfast at 8:28 AM
---------------------------