Open-Source Framework 'Lattice' Aims to Fix AI Coding Chaos – Launches Today
Rahul Garg releases open-source Lattice framework to enforce engineering rigor in AI-assisted programming, featuring composable skills and living context layer.
AI coding assistants have a dirty secret: they jump straight to code, silently make design decisions, forget constraints mid-conversation, and produce output nobody reviewed against real engineering standards. That's about to change. Software engineer and AI ergonomics advocate Rahul Garg today released Lattice, an open-source framework designed to enforce battle-tested engineering disciplines inside AI-assisted development workflows.

Lattice is available immediately as a Claude Code plugin or as a standalone download compatible with any AI tool. The framework introduces a three-tier skill architecture of atoms, molecules, and refiners that embeds principles from Clean Architecture, Domain-Driven Design, design-first methodologies, and secure coding. A living context layer, stored in a .lattice/ folder, accumulates project standards, design decisions, and review insights over time.
“After a few feature cycles, atoms aren’t applying generic rules—they’re applying your rules, informed by your history,” Garg said in a statement. Because the system learns from every interaction, it grows more attuned to a project’s conventions with use.
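The three-tier design could be pictured roughly as follows. This is a minimal Python sketch of the idea only: the class names, the rule strings, and the shape of the context lookup are illustrative assumptions, not Lattice's actual API.

```python
# Illustrative sketch of a three-tier skill architecture:
# atoms = single rules, molecules = composed review passes,
# refiners = post-processing informed by accumulated project context.
from dataclasses import dataclass
from typing import Callable

Check = Callable[[str], list[str]]  # code text -> list of findings


@dataclass
class Atom:
    """A single, focused engineering rule."""
    name: str
    check: Check


@dataclass
class Molecule:
    """A workflow that composes several atoms into one review pass."""
    name: str
    atoms: list[Atom]

    def run(self, code: str) -> list[str]:
        findings = []
        for atom in self.atoms:
            findings.extend(f"[{atom.name}] {msg}" for msg in atom.check(code))
        return findings


def refine(findings: list[str], project_rules: dict[str, bool]) -> list[str]:
    """A refiner: filters findings against project decisions, so generic
    atoms end up applying *your* rules rather than one-size-fits-all ones."""
    return [f for f in findings
            if project_rules.get(f.split("]")[0].lstrip("["), True)]


# Two toy atoms standing in for real engineering disciplines.
no_print = Atom("no-print",
                lambda c: ["avoid print() in library code"] if "print(" in c else [])
has_docstring = Atom("docstring",
                     lambda c: [] if '"""' in c else ["public module lacks a docstring"])

review = Molecule("clean-code-review", [no_print, has_docstring])
findings = review.run("def f():\n    print('hi')\n")

# A project decision (e.g. "prints are fine in scripts") suppresses one finding.
final = refine(findings, {"no-print": False})
```

In this shape, contributing a new discipline means writing one small atom and adding it to a molecule, which matches the modularity the announcement describes.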
Background
The announcement caps a multi-month series by Garg on reducing friction in AI-assisted programming. He published a sequence of posts on this site detailing how current AI tools often produce unreliable code because they lack structured engineering oversight. Lattice operationalizes those patterns into a repeatable, modular system.
In parallel, a separate article by colleagues Wei Zhang and Jessie Jie Xia on Structured-Prompt-Driven Development (SPDD) has drawn enormous traffic and numerous reader questions. The authors have now added a Q&A section that answers a dozen of the most pressing ones. The SPDD article remains a key reference for developers attempting to bring order to AI-generated code.
The Double Feedback Loop Breakthrough
Developer Jessica Kerr, who writes under the handle Jessitron, released a tool for working with conversation logs that highlights a deeper insight: AI-assisted development involves not one but two feedback loops.
“The first loop is the development loop, with Claude doing what I ask and then me checking whether that is indeed what I want,” Kerr wrote. “Then there’s a meta-level feedback loop, the ‘is this working?’ check when I feel resistance. Frustration, tedium, annoyance—these feelings are a signal to me that maybe this work could be easier.”
Kerr’s double-loop observation resonates with Garg’s philosophy. “As developers using software to build software, we have potential to mold our own work environment,” she noted. “With AI making software change superfast, changing our program to make debugging easier pays off immediately. Also, this is fun!”
The concept echoes what some researchers call Internal Reprogrammability—a lost joy from the Smalltalk and Lisp eras, when developers could easily shape their tools to fit the problem and their personal taste. Modern IDEs and complex toolchains largely eliminated that ability, but AI agents may be reviving it.
What This Means
Lattice represents a significant shift: instead of treating AI coding assistants as black boxes, the framework forces transparency and rigor. For enterprise teams, this could mean fewer production bugs and more maintainable code. For solo developers, it promises a structured way to leverage AI without losing control.
The double-loop insight from Kerr suggests that the real value may lie in developers changing their own tooling as they build. Lattice’s open-source nature encourages exactly that—anyone can contribute new atoms, molecules, or refiners.
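One way the living context layer described above could work is as an append-only decision log inside the .lattice/ folder. The sketch below assumes a simple JSON file; the file name, schema, and function names are hypothetical illustrations, not Lattice's documented format.

```python
# Hypothetical sketch of a "living context layer": design decisions
# accumulate in a .lattice/ folder and steer future review passes.
import json
import tempfile
from pathlib import Path


def record_decision(root: Path, rule: str, enabled: bool, reason: str) -> None:
    """Append a design decision to .lattice/decisions.json (assumed schema)."""
    ctx = root / ".lattice"
    ctx.mkdir(exist_ok=True)
    path = ctx / "decisions.json"
    decisions = json.loads(path.read_text()) if path.exists() else []
    decisions.append({"rule": rule, "enabled": enabled, "reason": reason})
    path.write_text(json.dumps(decisions, indent=2))


def load_rules(root: Path) -> dict[str, bool]:
    """Collapse recorded decisions into the rule set a review pass applies;
    later entries for the same rule override earlier ones."""
    path = root / ".lattice" / "decisions.json"
    if not path.exists():
        return {}
    return {d["rule"]: d["enabled"] for d in json.loads(path.read_text())}


# Each feature cycle adds decisions; reviews then read them back.
root = Path(tempfile.mkdtemp())
record_decision(root, "no-print", False, "CLI scripts may print")
record_decision(root, "docstring", True, "public APIs must be documented")
rules = load_rules(root)
```

Because the log lives in the repository, decisions survive across conversations, which is the property Garg's "informed by your history" claim depends on.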
As Garg put it: “We’re moving from ‘AI writes my code’ to ‘AI follows my engineering standards.’ That’s the turning point.”