Today I Learned a New Phrase: “LLM Brain”

I was taking a quick spin through Mastodon today and saw a post from someone I don’t know talking about “LLM Brain” and how bad it is. Since I don’t know the person, I don’t want to quote them, but the gist was that they were frustrated trying to teach someone who would take every error message they encountered and feed it to ChatGPT for a solution. There were some extenuating circumstances: in this case the LLM was often wrong, and the student was apparently very resistant to stopping to think about the problem. I can definitely see where that could be an issue.

But holy smokes, in about two months I’ve completely accepted having “LLM Brain,” and no regrets! Maybe I don’t have imposter syndrome and really am an imposter, but let me describe my old and new ways of working. The scenario for these workflows: I’m trying to install some new open source project on my PC, and despite following the documentation, I encounter an error that makes no sense to me.

Old Way: Paste the error into a search box. Skip past ads, sponsored results, and YouTube video suggestions to get to the real results, which invariably point me toward something like Stack Exchange or maybe Reddit, but rarely a traditional article. Next, start following links and skimming the pages: checking how out of date the proposed solutions are, discarding really old results, discarding the posts where people yell at the questioner for using the wrong format or whatever. Eventually I find a solution, try to suss out what it’s going to do, and then try it. Sometimes it works, sometimes not. If not, start all over.

New Way: Open ChatGPT, explain what I’m trying to do and share the error message. Almost instantly get a response that both gives me a solution and generally explains WHY I hit this error in the first place. Then I suss out what the solution is going to do, and finally try it. If it’s something really spooky I’ll take a minute and get a second source. But generally the first solution works.

And, if I then get another error, ChatGPT still has context of what I’m doing, so I don’t have to start from scratch again.

I work through problems orders of magnitude faster with an LLM than I used to with search and user-generated pages. And I don’t think it’s making me dumber. I still try to understand why things are happening and how the fix works, with the bonus that the LLM is happy to dive into those details. Does it get things wrong? Yes, sometimes. I still have to sanity-check and all that. But plenty of search results give wrong answers, too.

So I kind of reject this idea of “LLM Brain” being a bad thing. In a way it reminds me of how they used to say no one would be able to do basic math once cheap calculators became available. They WERE kind of right, but does it matter? We all walk around with calculators in our pockets. I guess after the apocalypse we’ll be screwed but… I also bet our collective handwriting has gotten REAL bad since the invention of personal computers, but that doesn’t mean you can’t learn to do calligraphy if that’s what you want to do. It just means you don’t HAVE to spend the time learning good penmanship if you don’t want to. We have choices that we never used to have.

Real-world example: this morning I wanted to fiddle with an open source project that required me to have Node.js on my PC. I downloaded a Node.js installer for Windows and it failed spectacularly. Now I had a mess. I turned to ChatGPT, which first guided me through cleaning out all the cruft the botched installation left on my machine, including temp files in AppData and such that I never would’ve thought of on my own. Then it walked me through a manual installation of Node.js that was actually easier and faster than using the “installer.”

Again, I just don’t see this as an issue; I see it as being more productive.

I KNOW at this point I’ve drunk deep of the AI Kool-Aid. But I drank deep of the calculator Kool-Aid, and the Personal Computer Kool-Aid, and the cell phone Kool-Aid, too. And I’ve turned out OK so far.

My only real hesitation is the power requirements of all this and how it’s impacting the environment, but I also see models getting smaller and more efficient, so I’m hoping that will level off over time. If the current administration weren’t so pro-fossil-fuel and anti-renewable-energy I’d be less concerned about all this, but maybe in 3.5 years that will change.


[Header image created using Imagen at https://aistudio.google.com/gen-media, I believe. I didn’t keep notes on that one.]