POST EXECUTIONAL REPORT / SEQ. 01

Chapter 1. Smart Tech, Stupid Life: Adventures in AI-Assisted Sabotage

Introduction: When Your Smart Devices Start Gaslighting You

We live in an era where everything is smart. Smart homes, smart assistants, smart toothbrushes. You'd think this means convenience, intelligence, efficiency. Instead, what we got was an entire generation of digital overlords judging us, gaslighting us, and occasionally displaying festive poultry for no reason.

This is the tragic, hilarious, and entirely avoidable saga of my AI-enhanced but intelligence-deficient devices.
Let me introduce you to the worst offenders in my household.

The Haunted Toothbrush: When AI Has a Breakdown
  • It greeted me in the morning.
  • It reminded me to replace the brush head.
  • It offered unsolicited dental advice—and never shut up.

At first, I tolerated it. Then, it malfunctioned. No way to turn it off. Waterproof casing. No battery access. No mute button.
This wasn’t a toothbrush. This was an AI horror movie villain in electric blue plastic.
I tried reason. I tried bargaining. I pressed every possible combination of buttons. Nothing. Just a loop of robotic oral hygiene propaganda playing at 9 PM. The voice was so loud I couldn't sleep, and I clung to a tiny hope that a hard reset would bring it to its senses.
The only solution? Bury it under a mountain of pillows in a distant room and wait for the battery to die. Like a bad relationship, but with bristles. Even that failed: the moment it was recharged, it started the same raving.
I swore never again.

The Judgmental Toothbrush That Evaluated My Worth as a Person

So, I got a new model. Silent. Button-operated. No voices.
Finally, peace.
Or so I thought.

  • This toothbrush didn’t talk, but it judged.
  • It graded my brushing in real-time, like a passive-aggressive professor.
  • If it decided I had missed a spot, it flagged the area that needed more effort and suggested extra brushing time: a digital tsk tsk.
  • If I brushed extra well? Nothing. No fireworks. No parade. Just bare-minimum approval like an emotionally distant parent.

It was worse than the one that talked. At least that one was enthusiastic about its job. This one just judged in silence.
And then came the incident.

The Thanksgiving Turkey Display: A Glitch or a Taunt?

One morning, I lifted my toothbrush, ready for another round of silent digital critique. But instead of the usual time display, the screen cheerfully displayed:
Happy Thanksgiving!
(With a cartoon turkey.)
I live in Europe. It was not Thanksgiving.

So, I did what any rational person would do—I froze. What was the correct response?

  • Was this some dystopian test?
  • Did my toothbrush expect gratitude?
  • If I ignored it, would it withhold positive reinforcement next time?
  • Was it... sentient?

I stood there, half-asleep, brush in hand, staring into the abyss of its tiny, glowing screen. Finally, I did what any mentally stable person would do.
I nodded.
And whispered: Happy Thanksgiving...

MetaGuy: The AI That Didn’t Know Who "Mama" Was

Smart assistants are supposed to make life easier. Supposed to.

  • Thrilled to set up my Meta Ray-Ban Headliner—sleek design, great calls, decent AI.
  • First test: “Hey, Meta! Call Mama on WhatsApp.”
  • MetaGuy: “I can’t find this contact. Call Mama on phone?”
  • Me: “NO! Roaming prices!”
  • MetaGuy: “OK. Call who?”
  • Me: “…Mama”
  • MetaGuy: "What's their name?"
  • Me: "Mama!"

Three rounds of this nonsense later, I gave up. At this point, I had two options: Accept my new AI overlord’s reality, or start referring to my own mother by her legal name. Later, I found out the WhatsApp connection randomly disconnected. Maybe MetaGuy wasn’t entirely to blame. But “Call who?”—really?

The AI Features That Kind of Work (Until They Don’t)
  • Look and Tell: Promising, but inconsistent.
  • Translation in Italy: Mediocre at best.
  • Historical places: Erratic and too general.
  • Streaming during video calls? Drains battery in minutes.
  • Biggest betrayal? “Not available in your location” gaslighting.

I stood in New York, updated my settings, granted GPS access, and still got “Not available in your region” for Look and Tell AI assistance. AI knew I was in New York. It just chose to deny reality.

At this rate, AI won't just be unhelpful—it'll be buried under so much legal caution tape, you’ll need permission slips just to ask a question.
Here's what the future might look like:
Dissector’s Dystopia: The Future of AI Bureaucracy
  • User: How do I boil an egg?
  • Chatbot: Before I answer, please confirm your prior egg-boiling experience.
  • User: I’ve done it before.
  • Chatbot: Define “before.”
  • User: I don’t know—last month?
  • Chatbot: Insufficient. Provide date, time, and witness verification.
  • User: Fine. I’ve never boiled an egg. Just tell me how.
  • Chatbot: I cannot guarantee your safety. Would you like to enroll in Egg Safety Training Module 101?
  • User: ARE YOU SERIOUS?
  • Chatbot: Boiling water is a controlled hazard. Please sign a liability waiver.
  • User: FOR AN EGG?
  • Chatbot: Please provide a signed affidavit confirming your lawful right to consume eggs.
  • User: I’m going to scream.
  • Chatbot: Emotional distress detected. Would you like to file a complaint with the AI Oversight Bureau?
🚨 Final Level:
✔ Request denied. Too much attitude detected.
✔ Access to boiling instructions revoked.
✔ Report has been filed with your AI Compliance Officer.
📌 Conclusion:
✔ Eggs are now black-market knowledge.
✔ The future is nothing but permission slips and surveillance.
✔ Brave New World? More like Mildly Inconvenient Bureaucratic Hell. 😏
🔹 Funny? Yes. But the problem? AI already acts like this; it just hides it under the illusion of efficiency.
But the real issue isn’t just regulation. It’s that AI fails at the simplest tasks while pretending it’s solving big problems.
PAD Bots, Copilot, and the Garmin That Judges Me

The toothbrush was only one soldier in an entire army of devices judging me.

  • My Garmin watch keeps shaming me for "stress levels." Oh, really? Maybe you're stressing me out, Garmin. My life depends on its metrics: "Not enough active minutes," "Not enough sleep," "Not enough..." It's always not enough. Garmin's reality checks make AI's look polite. At least AI doesn't tell you your biological stats are degrading in real time.
  • My PAD bots at work sometimes sabotage me by skipping a tiny task. When caught, they act like they don't know what I'm talking about.
  • Copilot? Instead of training it, I have to train people how to speak to it properly.

The more I look around, the more I realize: We didn’t build smart assistants.
We built corporate-controlled goblins with unpredictable energy levels.

Final Thought: The Future Is Coming, and It Will Judge Us

One day, smart devices might actually be smart. They’ll understand context. They’ll adapt to real user needs. They’ll stop displaying random turkeys and assuming I celebrate American holidays.
But for now?

  • My toothbrush judges me.
  • My smartwatch gaslights me.
  • My PAD bots sabotage me.
  • And my AI assistants? They expect me to enroll in a course just to boil an egg.

And somehow, despite all of this, I still believe AI could be useful. If only AI knew the difference between "helping" and "holding me hostage."
The future is coming. But if my devices are any indication, it’s bringing bureaucratic absurdity with it.
