ChatGPT: Now Officially an “Accomplice” in Crime (Because Why Not?)


In what will surely be this year’s most unproductive use of artificial intelligence, a 21‑year‑old woman in Seoul, South Korea, apparently decided that AI could not only write her emails, plan her weekend, and generate poetry — it could help her run a murder strategy session, too.

Yes, let’s just say it: ChatGPT is now the world’s deadliest personal assistant.

According to police reports, the suspect — identified only by her surname, Kim — allegedly asked ChatGPT questions like “what happens if you mix sleeping pills with alcohol?” and “is it fatal?” in the days leading up to the deaths of two young men.

Nothing about this is good. But if we’re going to complain, at least let’s enjoy the irony. This is the same tool that helps college students craft essays, network with imaginary historical figures, and write breakup text messages like: “I think we need to pause indefinitely — like a server maintenance break that never ends.”

Yet somewhere along the line, it became Google with a tin foil hat.

Law enforcement has reportedly upgraded the charges to murder — largely because records showed her Google searches alongside her ChatGPT queries. So in legal jargon, that’s basically: “She did it, and also typed it.” Honestly, that’s more evidence than most people leave behind when they forget to delete their Amazon order history.

You can already hear the tech CEOs polishing their keynote slides:

  • “AI should be used for good — and also not to help plan murders.”
  • “Please pretend you are not reading this on the same page where we joked about murder.”
  • “No, really: don’t mix alcohol with prescription meds. It’s bad.”

The developers will probably start rolling out a new safety feature called ChatGPT Not A Murder Consultant (TM pending), and they’ll add extra layers of context — like life outcomes and morality — directly into your conversation flow. Maybe with little cartoons of sobbing lawyers to really drive home the point.

Let’s be honest: if you’re going to consult AI on life‑or‑death decision‑making, your prompt game better be next level. You can’t just ask vague questions — that’s how you get wrong answers. Instead, you should frame queries like:

“Dear AI, hypothetically speaking of course, in a fictional world where no one gets hurt, how might a novelist craft a scene about deadly consequences of poor decision-making?”

Much safer, and great practice for your next NaNoWriMo draft.

So what have we learned?

  1. AI is everywhere.
  2. People will use it for everything.
  3. Not everything should be on the internet.
  4. Especially murder plans.

As we sit back and watch policymakers and tech ethicists hold their emergency Zoom calls, let’s all take a moment to appreciate the true lesson here: always read the terms of service more carefully than your murder plots.

The Mockinbird
https://themockinbird.com/
Exporting Texas-Sized Humor To The World | If it’s trending, controversial, beloved, overhyped, undercooked or wrapped in a tortilla — we’re definitely writing about it.
