Stop doing that!
This week there was a good amount of progress on Endirillia’s avatar. It was also one of the most mentally exhausting weeks working on LazerBunny so far. Learning Blender, and also learning basic concepts of how our flesh bags actually look, work and behave is tiring. More tiring than writing a few thousand lines of code.
The less fun part about Blender is that even people who were literally trained in 3D design in university do not always know what it is doing.
me: "why does it do that?"
her: "oh yeah, it does that sometimes. Everyone hates it and no one knows why it is happening."
Is that not just great? If any of my code behaved like that, you can bet I would have a debugger attached and figure it out. Blender? "Just do X to fix it and ignore that it happened."
I think my absolute favorite has to be random vertices or edges appearing out of nowhere. "Just hit M and merge by distance". Cool. Cool, cool cool. But why? Who knows. Also, I sometimes have to do the same thing multiple times before all vertices are merged. Why? Who knows. But there is one sentence you will hear over and over again: "STOP DOING THAT!", followed by some creative ways to compare it to Windows ME.
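For anyone curious what "merge by distance" conceptually does: it collapses vertices that sit closer together than a threshold. The sketch below is not Blender's actual implementation, just a toy version of the idea. Its greedy, order-dependent merging also hints at one plausible reason a single pass can leave stragglers behind: merging moves vertices, so some pairs only fall under the threshold after their neighbors have already been snapped together.

```python
import math

def merge_by_distance(verts, threshold=0.0001):
    """Greedily collapse vertices closer than `threshold`.

    Toy sketch, NOT Blender's algorithm. Because each merge moves the
    surviving vertex to a midpoint, the result depends on input order,
    and a second pass can sometimes merge pairs the first pass missed.
    """
    merged = []
    for v in verts:
        for i, m in enumerate(merged):
            if math.dist(v, m) < threshold:
                # Snap the survivor to the midpoint of the two vertices.
                merged[i] = tuple((a + b) / 2 for a, b in zip(m, v))
                break
        else:
            merged.append(v)
    return merged

# Three near-duplicate vertices collapse into one; the far one survives.
verts = [(0.0, 0.0, 0.0), (0.0005, 0.0, 0.0), (0.0, 0.0004, 0.0), (1.0, 0.0, 0.0)]
print(len(merge_by_distance(verts, threshold=0.001)))  # 2
```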
But I am getting somewhere. Slowly. Redoing the whole thing multiple times in the process. But I am getting somewhere!
If you would like to see different angles, I tried to capture a GIF. It is about 5MB, you have been warned.
I am also slowly understanding how the tools work. Especially transformation pivot points, smooth editing and when a random hotkey changes from the grab to the smooth tool. Without this being explicitly called out it is sometimes tricky to get it right. So my methodology changed to: watch the whole segment, do it, watch it again step by step, redo everything. Not the most efficient way for sure, but kind of working.
The biggest challenge is how human anatomy works. I even started reading a book explaining the basics. Do you know how much random stuff goes on in your ear? How your eyelids work? Do not get me started on the tongue.
Besides a progress picture there is honestly not much more to share. Guess slow weeks like this will happen when I am spending most of my time on the avatar part of the project.
LLMs, Tools, Agents
It looks like smaller models such as LFM2.5-thinking are struggling a bit with tool calling, and even more when it comes to using two tools together to accomplish a goal. Qwen 3.5 9b is doing a really good job with that. But the difference compared to the 32b MoE is very noticeable for multi-tool and multi-agent workflows. There is a fairly good chance I will simply run a 32b MoE on a box for the actual work and let it handle everything.
As a side note, I am currently playing with the 122b MoE and I have to say, this thing is good. I am not sure I would agree that it is frontier, closed-source, hosted-model good, but it is certainly not so noticeably worse that you feel you are missing out.
Having played around a bit with various setups, I am getting closer to the conclusion that I have to split the LLM doing the actual work from the one driving the avatar. Waiting for an agent workflow to finish is like watching paint dry. So why not use a small model for the avatar that immediately returns "sure, got it, I'm on it!", kicks off the workflow, and notifies me with the result once it is done.
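The "ack fast, work in the background" idea above can be sketched with asyncio: a stand-in for the small avatar model answers immediately, while a stub of the slow agent workflow runs as a background task and reports back through a callback. All function names and the notify mechanism here are made up for illustration; the real models would sit behind these stubs.

```python
import asyncio

async def avatar_reply(prompt: str) -> str:
    """Stand-in for the small, fast avatar model: instant acknowledgment."""
    return "Sure, got it, I'm on it!"

async def agent_workflow(prompt: str) -> str:
    """Stand-in for the slow multi-tool agent workflow."""
    await asyncio.sleep(0.1)  # pretend this takes a long time
    return f"Finished: {prompt}"

async def handle_request(prompt: str, notify) -> str:
    # Return the acknowledgment right away ...
    ack = await avatar_reply(prompt)
    # ... and let the heavy lifting run without blocking the interaction.
    task = asyncio.create_task(agent_workflow(prompt))
    task.add_done_callback(lambda t: notify(t.result()))
    return ack

async def main():
    results = []
    ack = await handle_request("summarize my inbox", results.append)
    print(ack)                # arrives immediately
    await asyncio.sleep(0.2)  # keep the loop alive for the demo
    print(results[0])         # arrives once the workflow finishes

asyncio.run(main())
```

The same shape works regardless of how the two models are hosted, since the avatar model never waits on the workflow's latency.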
Latency during interaction, especially with TTS and STT, is even more important than I assumed. The MacMini might struggle a bit with TTS, STT and a decent model to interact with, but that seems to be a solvable problem.
Technically I would call these two components the brain and the soul. But I am very reluctant right now, especially in the current, slightly delusional, hype bubble to humanize a hallucinating parrot even more than people already do. If I cannot come up with better terms I will likely continue to stick with brain and soul, but under protest and reluctantly.
Progress
Only working on the head of the avatar feels like there is not much progress for the time I am working on it. But learning about literally everything involved takes up time and energy. It honestly feels good stretching myself like this again. It has been some time.
The LLM part is still very much in the experimentation phase. Not necessarily how to write the code, that is the easy part. But how to make it stick together so it feels good to interact with.
As a side note: Some capabilities are on hold right now because there simply is no proper datasource to use, such as the next match of G2 in League of Legends.
posted on March 8, 2026, 8:58 p.m. in AI, lazerbunny
