This past weekend was pretty unusual for me. I did almost no gaming and instead worked on a couple of personal projects, both of them AI-based.
On Saturday I created a chatbot, similar to the ones on Character.ai. I used SillyTavern as the UI, KoboldCPP for text generation (after I failed to convince llama.cpp and SillyTavern to get along), and Sarah Storyteller as the model. Since I was just goofing around, I grabbed a character from Chub.ai, which is frequently NSFW, so I won’t actually link to it. Folks create a lot of slash-fiction AI bots for their RPing, I guess. No judgement!
SillyTavern is pretty cool. You can turn the whole thing into what feels like a visual novel by uploading 2D images of your character representing different moods and such. You can even add voice generation. I didn’t go nearly that far. Once I had it all working, I sorta lost interest. It all ran locally, but it was slower than online versions like Character.ai. It was also more private, which could be a boon, but I’m not doing super spicy things with my AI chatbots, so I’m not really worried about that. I was just satisfied to get it all working. If you want to see what SillyTavern can do, this is the video that got me interested. Despite the clickbait title, there’s nothing NSFW in the video (YouTube wouldn’t allow that), though maybe don’t watch it at work, cuz anime girls.
Sunday, I burned my ComfyUI installation to the ground and built it back up again. I first started with the Portable version, then read that it’s better to install it manually. So I did that, got it working, then went crazy trying to follow a SUPER complex workflow that had me installing bunches of Custom Nodes willy-nilly, until I had such a mess that everything was slow and tended to crash. So another scorched-earth re-install was called for.
What I have now is two environments: Comfy-Stable, where everything is solid, tested, and working, and Comfy-Sandbox, where I can dump in random custom nodes, LoRAs, CLIP doohickeys, and the dozens of other bits of crap you can shove into Comfy. If things break, I can just burn that install down without losing the ability to generate images in Stable. Each install uses its own MiniConda environment so, in theory, nothing I do in Sandbox can break Stable. I also put the really big models in a separate directory and symlinked them into the two environments so I didn’t have to double up on models that gorge themselves on SSD space.
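If you’re curious, the shared-models trick boils down to one symlink per install. Here’s a minimal sketch of that step; the /tmp paths and directory names are just placeholders for illustration, not my actual layout:

```python
from pathlib import Path

# Hypothetical layout -- adjust paths to your own setup.
shared = Path("/tmp/ai-models/checkpoints")                      # big models live here, once
installs = [Path("/tmp/Comfy-Stable"), Path("/tmp/Comfy-Sandbox")]

shared.mkdir(parents=True, exist_ok=True)

for install in installs:
    models = install / "models"
    models.mkdir(parents=True, exist_ok=True)
    link = models / "checkpoints"
    if not link.exists():
        # Each install sees the same checkpoint folder via a symlink,
        # so nothing is duplicated on disk.
        link.symlink_to(shared, target_is_directory=True)
    print(link, "->", link.resolve())
```

Deleting either install removes only the symlink, never the models themselves, which is what makes the burn-it-down approach cheap.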
So that wasn’t very sexy but it was pretty satisfying. Now I’m SLOWLY adding things to Sandbox to see what they do. Last night I was trying to get good results doing inpainting, which basically means using a mask to get the AI to regenerate just a specific part of an image. I got that working technically, in that I could feed it an image, mask off the part I wanted regenerated, and it would spit out an image with that part regenerated with SOMETHING, I just didn’t crack the code on getting it to generate anything that looked decent. So that’s still being learned.
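For what it’s worth, the core idea of inpainting is simple even though getting good results isn’t: the mask decides which pixels the model is allowed to regenerate, and everything else is copied straight from the source image. Here’s a toy sketch of that compositing step in plain Python (not actual ComfyUI code, just the concept):

```python
def composite(original, generated, mask):
    """Keep original pixels where mask is 0, take regenerated pixels where mask is 1."""
    return [
        [g if m else o for o, g, m in zip(orow, grow, mrow)]
        for orow, grow, mrow in zip(original, generated, mask)
    ]

# Tiny 2x3 "images": 1s are the source, 9s are whatever the model generated.
original  = [[1, 1, 1],
             [1, 1, 1]]
generated = [[9, 9, 9],
             [9, 9, 9]]
mask      = [[0, 1, 0],
             [0, 1, 0]]

print(composite(original, generated, mask))
# -> [[1, 9, 1], [1, 9, 1]]  (only the masked middle column changed)
```

The hard part — the one I haven’t cracked — is getting the model to fill the masked region with something that actually matches the rest of the picture.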

What’s really neat about generating images locally is how fast it is to generate a bunch of images from the same prompt. As we all know, you get a lot of randomness in AI-generated images, and being able to take a prompt that seems to have potential, then run it again asking for 20 images instead of 1, is really helpful. Yes, you’ll probably throw 19 (or 20!) of those images away, but all they cost is the electricity your PC is using, so who cares, right?
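Under the hood, “make 20 images from one prompt” usually just means “run the same prompt with 20 different seeds.” Here’s a toy sketch of the idea; `generate_batch` is a stand-in I made up, not a real sampler:

```python
import random

def generate_batch(prompt, n, base_seed=0):
    """Stand-in for a real image sampler: the same seed always gives
    the same result, and each new seed gives a new variation."""
    images = []
    for i in range(n):
        seed = base_seed + i                 # one seed per image
        rng = random.Random(seed)            # deterministic per seed
        images.append((prompt, seed, rng.random()))
    return images

batch = generate_batch("woman holding a martini", 20)
print(len(batch))  # 20 variations to sift through, 19 (or 20!) to throw away
```

This is also why real UIs show you the seed of each image: if one of the 20 comes out great, you can re-run that exact seed and tweak from there.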
Anyway the end result was that this was the best weekend I’ve had in a LONG time. Learning new stuff is fun. Generating images is fun. Tinkering in code and systems is fun. It’s all a nice break from gaming and I’m sure that when my gamer gene switches back on (and I KNOW it will) I’m going to have MORE fun gaming than I’ve had for a while, thanks to taking a break.
[Image at the top is the one I was trying to fix via inpainting. The prompt called for the woman to be holding a martini, but instead she’s about to shotgun something that looks like a 40 oz Corona or something. Despite MANY attempts, I couldn’t get the AI to put a martini in her hand...yet. Model is based on juggernaut-XL_v8]
Hmm. SillyTavern does sound interesting. I’ve read the documentation and my PC could run it and I could probably follow the instructions to install it. Not sure I have a real use case for it yet but I’ve bookmarked it for later. Thanks!
You are most welcome! You can run it against a cloud-based LLM if you want/need to. There’s one they talk about called AI Horde or Horde AI or something. But I wanted to see if I could get it working 100% locally, and I could. You have to run the LLM backend and SillyTavern concurrently, then paste the URL of your LLM’s API into ST.
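One small tip: before pasting the URL into ST, it’s worth checking that the backend is actually answering. A quick sketch — the port below is what I believe KoboldCPP uses by default, but treat it as an assumption and use whatever address your launcher prints:

```python
from urllib.request import urlopen

# Assumed default for a local KoboldCPP instance -- substitute your own.
API_URL = "http://localhost:5001/api"

def backend_is_up(url, timeout=2):
    """Return True if something answers HTTP at this URL.
    (An HTTP error response also counts as 'down' here, which is
    crude but fine for a quick sanity check.)"""
    try:
        urlopen(url, timeout=timeout)
        return True
    except OSError:  # connection refused, timed out, DNS failure, ...
        return False

print(backend_is_up(API_URL))
```

If this prints False, there’s no point fiddling with SillyTavern’s connection settings yet — start the backend first.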