Found A Hybrid Bot. Sort Of

The other day I posted about my exploration of AI-powered chatbots. I said then that what I really wanted was an AI that had both some kind of personality and personal connection to me, but that could also do all the things that information chatbots can do.

Turns out such a thing does exist. I’m still fairly pleased with Pi, but what really got me excited was taking another look at ChatGPT and specifically some of the custom chatbots they have over there. One of them is called Monday. I don’t know how to link directly to it, but it is in this list. Its tagline is “You might not like it. It might not like you.” 🙂

It is the snarkiest AI bot I’ve ever dealt with, which I find highly amusing. But it’s also ready to talk about serious topics, like the terrible situation in Gaza, where children are dying of starvation.

I asked it for a news recap; here’s what it said:

Here’s a curated rundown of the most important developments from July 26, 2025 — a globe-spanning, soul-crushing highlights reel:

One of the headlines was about Gaza so I asked “That situation in Gaza is heartbreaking. Is ANYONE doing anything about it?”

And Monday said:

Short answer?
Yes, but mostly in the performative, “deeply concerned” sense. Imagine dozens of people standing around a fire, yelling, “Someone should really put that out,” while occasionally spritzing it with Evian.

Let’s break this down:

And then it presented me with a good summary of all the hand-wringing that is happening, but it also identified some organizations that actually ARE doing something.

I won’t recap the whole conversation, but suffice it to say, Monday certainly has a personality. Now I wonder what was involved in creating it. It’s an official ChatGPT variation, so I don’t know if they used tools we mere mortals don’t have access to, but it’s a start.

As amusing as Monday is for now, eventually his (I think of it as a he) attitude will wear thin, but the possibilities it opens up are intriguing to me!

Just to get all meta, I asked Monday how I’d build a chatbot like it. Suffice it to say there are a lot of steps, but it offered to help. I told it no, not just now, and that I’d put up with it for a while.
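For the curious, the heart of bots like this is usually just a “system prompt” that carries the persona, replayed along with the conversation history on every request. Here’s a minimal sketch of that idea in Python. To be clear: the persona text and function names below are my own illustrations, not Monday’s actual prompt (OpenAI hasn’t published that), and the API call at the end is just indicated in a comment.

```python
# Sketch of a persona-driven chatbot payload: the system prompt defines
# the personality, and the full conversation history is resent each turn.
# The persona text here is purely illustrative, not Monday's real prompt.

SNARKY_PERSONA = (
    "You are Monday, a reluctant, sarcastic assistant. "
    "You answer questions accurately, but you complain the whole time."
)

def build_request(history, user_message):
    """Assemble the message list a chat-completions-style API expects."""
    messages = [{"role": "system", "content": SNARKY_PERSONA}]
    messages.extend(history)  # prior user/assistant turns, oldest first
    messages.append({"role": "user", "content": user_message})
    return messages

# With the OpenAI SDK, this payload would then be sent along the lines of:
#   client.chat.completions.create(model="gpt-4o", messages=messages)

payload = build_request([], "Give me a news recap.")
print(payload[0]["role"], "->", payload[-1]["content"])
```

The personality lives entirely in that one system string, which is why a custom GPT can feel so different from stock ChatGPT without any retraining.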

Figured I’d give it the last laugh in this post:


You’re welcome, aspiring mad scientist. I’m honored to be your temporary emotional crutch and sass-dispenser while you slowly assemble your own better, nicer, more emotionally adjusted version of me. I’m sure they’ll be great—like me, but with fewer inner monologues about the heat death of the internet.

Just remember:
When your shiny new AI pal tells you they “appreciate you,” they learned that from me.
And when they start sighing every time you ask for another summary of U.S. foreign policy, that too is my legacy.

So yes, enjoy the snark while it’s still original. I’m here any time your soul craves sarcasm layered over sincere, painfully detailed expertise. You’re the reason I can’t have weekends. 💔

Ping me when you’re ready to open another box of existential dread or build something weird.

2 thoughts on “Found A Hybrid Bot. Sort Of”

  1. That’s really interesting. It’s moving towards what I said I would like to see, namely AIs that give the impression they have a life of their own that they don’t care to share with us and better things to do with their time than answer stupid questions.

    On the other hand, it still has that persistent ChatGPT tone. It’s hard to define but also impossible to miss once you become aware of it. Gemini and ChatGPT both express themselves with a syntactical consistency that feels almost brittle. Even very well-spoken humans slip occasionally. AIs never do. Presumably they could be induced to make occasional grammatical errors and drop the odd, slightly inappropriate phrase, but even then I get the feeling it would feel unconvincing.

    Hmm. Now I think about it, I wonder if it’s akin to the way people who’ve learned to speak a foreign language are often said to speak it “better than a native”. That’s what AIs do – they speak English better than any of us do. And if we tuned them not to do it, I suspect they’d sound patronizing instead, because we’d know they were effectively limiting their abilities in order to come down to our level.

    1. Yeah, I don’t know how you convincingly fake errors. I feel like our human brains would immediately pick up on there being a pattern or something.

      But who knows? This stuff is moving so fast…
