I'm on the opposite end of this spectrum. Puppets are absurd, hysterical, and it used to be a family thing to get together and make puppets every Christmas. I'd try to make the goofiest-looking puppet possible. The last one I made has a wide-brimmed hat, blond hair, glasses, and the weirdest-looking braided mustache. Oh, and ping pong balls for eyes.
I'm working on Maelstrom, an agent framework with only the basics:
- 'agent' as cognitive state, i.e., how to think
- 'workflow' as what to do
- 'session' as immutable agent history
- 'timers' as a way to kick off an agent on a schedule (with or without a workflow attached)
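To make those four pieces concrete, roughly something like this (a simplified illustration with invented names, not the real API):

    from __future__ import annotations

    from dataclasses import dataclass, field
    from datetime import datetime


    @dataclass(frozen=True)
    class SessionEntry:
        """One immutable record in an agent's history."""
        timestamp: datetime
        role: str      # "user", "assistant", "tool", ...
        content: str


    @dataclass
    class Workflow:
        """'What to do': a named sequence of states."""
        name: str
        states: list[str]


    @dataclass
    class Agent:
        """'How to think': prompts plus an append-only session."""
        name: str
        system_prompt: str
        cognitive_prompt: str
        session: list[SessionEntry] = field(default_factory=list)


    @dataclass
    class Timer:
        """Kick off an agent on a schedule, with or without a workflow attached."""
        cron: str                        # e.g. "0 9 * * 1-5"
        agent: Agent
        workflow: Workflow | None = None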
I've been working on this since just before OpenClaw dropped at the end of January. Currently it weighs in at around 20k lines of code. There is still a significant amount of work to be done on polish, but the core appears to be functional, and it's almost to the point where I can replace opencode as my daily driver (I'm very much looking forward to this).
From [1]:
---
I've been working on a framework since the end of January or so. I'm on my 7th draft. As I've gone along, each draft has gotten markedly smaller. The overlaps between what I'm building and openclaw are significant, but I've realized the elements that make up the system are distinct, small, and modular (by design).
There are only a few primitives:
1. session history
1a. context map + rendered context map (think of a drive partitioning scheme, but for context -- you specify what goes into each block of context, and the map gets rendered before being sent out for inference; a rough sketch follows after these lists).
2. agent definition / runtime
3. workflow definition / runtime
4. workflow history
5. runtime history (for all the stuff session and workflow history fail to capture because they are at a lower level in the stack)
That's it. Everything else builds on top of these primitives, including
- memory (a new context block that you add to a context map)
- tool usage (which is a set of hooks on inference return and can optionally send the output straight back for inference -- this is a special case inside the inference loop and so just lives there)
- anything to do with agent operating environment (this is an extension of workflows)
- anything to do with governance/provenance/security (this is an extension of workflows and/or the agent operating environment... I haven't nailed this down yet).
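Roughly, the context map idea from 1a looks something like this (simplified, block names illustrative); note how memory is just another block you set:

    from dataclasses import dataclass, field


    @dataclass
    class ContextMap:
        """Like a partition table, but for context: named blocks rendered in a fixed order."""
        blocks: dict[str, str] = field(default_factory=dict)

        def set_block(self, name: str, content: str) -> None:
            self.blocks[name] = content

        def render(self) -> str:
            # Build the final prompt right before it is sent out for inference.
            return "\n\n".join(f"## {name}\n{body}" for name, body in self.blocks.items())


    ctx = ContextMap()
    ctx.set_block("system", "You are a careful coding assistant.")
    ctx.set_block("cognitive", "Think in small steps; prefer minimal diffs.")
    ctx.set_block("memory", "User prefers terse commit messages.")  # memory: just another block
    prompt = ctx.render()  # the 'rendered context map' that actually goes to the model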
I suppose I should say something about how agents and workflows work together. I've broken up 'what to do' and 'how to think' into the two primitives of 'workflow' and 'agent' respectively. An agent's context map will have a section for a system prompt and a cognitive prompt, and an agent can 'bind' to a workflow. When bound, the agent has an additional field in its context map that spells out the workflow state the agent is in, the available tools, and the state exit criteria. Ideally an agent can bind/unbind from a workflow at will, which means long-running workflows are durable beyond just agent activity. There's some nuance here in how session history from a workflow is stored, and I haven't figured that out yet.
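Binding, very roughly (again a simplified sketch, not the real thing): binding adds a 'workflow' block to the agent's context map, and unbinding drops it without touching the workflow's own state.

    from __future__ import annotations

    from dataclasses import dataclass, field


    @dataclass
    class WorkflowState:
        name: str
        tools: list[str]
        exit_criteria: str


    @dataclass
    class Workflow:
        """Durable on its own; it keeps its state whether or not an agent is bound."""
        name: str
        states: list[WorkflowState]
        current: int = 0


    @dataclass
    class Agent:
        context_map: dict[str, str] = field(default_factory=dict)

        def bind(self, wf: Workflow) -> None:
            # Binding adds a 'workflow' block spelling out state, tools, and exit criteria.
            state = wf.states[wf.current]
            self.context_map["workflow"] = (
                f"Workflow: {wf.name}\n"
                f"Current state: {state.name}\n"
                f"Available tools: {', '.join(state.tools)}\n"
                f"Exit when: {state.exit_criteria}"
            )

        def unbind(self) -> None:
            # The workflow keeps its own state; only the agent's view of it goes away.
            self.context_map.pop("workflow", None)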
Generally, the workflow primitive allows you to do things like scheduled tasks, user-facing UI, connectors to a variety of comms interfaces, tasks requiring specific outputs, etc. It lays the foundation for a huge chunk of the functionality that openclaw and others expose.
It's been fun reasoning through this, and I'll admit that I've had an awful lot of FOMO in the meantime, as I watch so many other harnesses come online. The majority of them look polished and are well marketed (as far as AI hype marketing goes). But I've managed to stay the course so far.
I hope you find your ideal fit. These tools have the potential to be very powerful if we can manage to build them well enough.
Can't you prevent pushing from the client side with hooks? I would expect a pre-commit or pre-push hook to fire on the developer's machine that prevents them from even committing/pushing (unless they nuke the hook in their local repo copy).
You have to manually install hooks in your local repository. They aren't propagated as part of the repo. Git has intentionally made hooks require a very explicit opt-in.
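For example, a repo can ship a hooks directory and point git at it with `git config core.hooksPath .githooks`, but each developer still has to run that config themselves, and they can always delete the hook or push with --no-verify. A hook is just an executable, so a client-side guard like the parent describes could be a small Python script:

    #!/usr/bin/env python3
    # .githooks/pre-push -- refuse any push that updates main.
    # git invokes pre-push with the remote name and URL as arguments and feeds
    # one line per ref on stdin: "<local ref> <local sha> <remote ref> <remote sha>".
    import sys

    PROTECTED = "refs/heads/main"

    for line in sys.stdin:
        parts = line.split()
        if len(parts) == 4 and parts[2] == PROTECTED:
            print(f"push to {PROTECTED} blocked by pre-push hook", file=sys.stderr)
            sys.exit(1)  # any nonzero exit aborts the push
    sys.exit(0)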
I know this is a tongue-in-cheek response, but this brings me great pain. The spaghetti begins quickly, and your unit/functional tests won't help you unless you hammered out your module API seams before you even began. Oh, your abstractions are leaking? Your modules know too much about each other? Multiply the spaghetti!
We're moving into the 'industrial age of software'. Your exact issue, about bespoke, well-thought-out, and well-crafted code, is one that craftsmen felt at the beginning of the industrial age. Now, parts are designed and churned out by machines that no one sees or cares about (generally speaking). This is where we are going with software, and production at a truly industrial scale has its place.
And so does well-crafted bespoke software.
The engineers who built the foundation for the industrial expansion of our forefathers went through the exact same thing we're going through now. They looked at what existed and used it to inform their efforts. This is what LLMs do.
I'm not attempting to moralize here, just comment on the parallels. Do I agree with a craftsman's work being consumed by the juggernauts without a second thought being given? No. I think it's a shame. But I also think the output will never match that of the artisans who practice now. By the very nature of the machines we employ, we cannot match the skill or thought that goes into bespoke code.
It is not even about quality. In fact, with an LLM following my orders I can create higher-quality code than I ever did before. I was always operating within a budget, whether it was defined by the number of hours my customers were willing to pay for or the number of hours I was personally willing to invest in a side project. This budget manifested in the form of cut features, limited test coverage, limited documentation, and so on. So given the same budget, or even a slightly reduced one, I can actually make higher-quality software with slop superpowers.
If I spend 2 hours designing the domain model, 1 hour slopping out a rough implementation, and 5 hours polishing it with a combo of handwritten and vibed refactorings, I will get a better result than if I spent 8 hours writing everything by hand.
So my point is not that vibe software is lower quality, as my experience has shown the opposite. It is simply that I shared my work in the spirit of sharing it with others who toiled in the same craft, not for consumption by machines. Not that I ever contributed anything very important to the open source world, anything that anybody depended on. Just personal projects I thought were neat or educational.
In hindsight I would probably still have open sourced what I did, because I think it's valuable to have on record that I competently programmed stuff before AI even existed, like pre-atomic steel. But I don't know if I will open source any personal code going forward.
====
To put it more succinctly: if somebody "ripped off" my open source code in 2018, I wasn't mad about that. Even if they didn't bother to attribute me, well, at least they saw my stuff, had a human brain cell light up appreciating it, and thought it was worth stealing. I'm flattered. But with LLMs my work can be reappropriated without a single human ever directly knowing or caring about it.
Well put. I agree wholeheartedly with your sentiment.
Maybe this is me just being angry at the new world that's being created, but the beauty of the open source ecosystem was humans giving away things they found useful in the hope that other humans could find them useful too. Having a machine take all of that and regurgitate it somewhere else without that connection (for profit, no less) feels like a betrayal of that open source ethos.
Now in the back of my mind I worry that everything I open source will be scooped up by corporations to make them more rich and more powerful, so I end up not publishing anything (not that it was of any value). I suspect I'm not alone in feeling that way.
This is like arguing we should only have manual looms because the mechanical looms suppress wages and destroy the livelihoods of those expert loom operators.
The tech is here. We can fight it, or adapt and embrace it.
If previous examples from the industrial revolution are anything to go by, fighting automation is a losing battle.
Difference is that those tools modernised the work and actually created jobs. The ultimate aim of these AI sociopaths is to remove all work in all areas so they can hoover up all the money and let people starve.
But, there is plenty of open source stuff out there to enable people to have their own models, running on their own hardware. Business does not need to go to the big 3.
Business doesn't need to go to the big 3, but it will. The big companies will ensure that smaller, specialised, or open models get restricted by laws paid for with their lobbying budgets, so that they can pull up the ladder after them and solidify their position. They've invested billions and will never allow the world to become some tech utopia where we all have a personalised free AI in our pocket; they will guarantee their own dominance.
I'd be curious to hear more about your work on the 'Sovereign AI Stack'. I'm also working on a project that prioritizes governance and verification and I'd love to compare notes.
Importing git history is ugly but doable. I had to do that at a previous job (splitting a git repo into two pieces or importing commit history from SVN).
I can take a look and try to create a PR around this if there is interest.
It makes me chuckle every time I see it.