Software Entropy: Why Systems Degrade and What Controls the Rate
Software systems do not stay still. Entropy accumulates from interdependence, unvalidated assumptions, and information that fails to reach the people who need it. This paper introduces the entropy framework that underpins the rest of the Rigel Rise series.
The premise
Every running system tends toward higher entropy unless work is done against it. In software, that work has historically been human attention — review, refactoring, deletion, and the slow propagation of shared mental models. AI accelerates production but does not, by itself, accelerate that counter-work.
What entropy is in this framework
Entropy here is not the codebase metric of the 1980s. It is the gap between what the system does and what the people responsible for it understand it to do. Three forces drive its rate of growth:
- Interdependence. Each new component multiplies the surface where assumptions must hold.
- Silent assumptions. Hypotheses that were never validated outlive their context.
- Information that does not arrive. Knowledge that exists somewhere but does not reach the next decision.
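The interdependence force above can be made concrete with a back-of-the-envelope count (an illustration of ours, not a formula from the paper): if every component can, in principle, hold an assumption about any other, the assumption surface grows with the number of component pairs — quadratically, not linearly.

```python
def pairwise_interfaces(n_components: int) -> int:
    """Number of distinct component pairs: each pair is a surface
    where a silent assumption must continue to hold."""
    return n_components * (n_components - 1) // 2

# Growth is quadratic: adding one component to a 10-component
# system adds 10 new assumption surfaces, not 1.
print(pairwise_interfaces(10))  # → 45
print(pairwise_interfaces(11))  # → 55
```

Real systems constrain which components may talk to each other, so the count is an upper bound — but it shows why each addition multiplies, rather than adds to, the surface where assumptions must hold.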
What controls the rate
The rate is not constant. It rises with deadline pressure, the absence of review, and lossy handoffs between phases of the SDLC. It falls with shared documentation, deliberate revision, and architecture that exposes its own constraints.
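One way to picture these opposing pressures is a toy rate model. This is our illustrative sketch only — the linear form, the variable names, and the equal weighting are assumptions, not claims from the paper: the factors that raise the rate enter with a positive sign, and the counter-work factors enter with a negative one.

```python
def entropy_rate(deadline_pressure: float,
                 review_coverage: float,
                 handoff_loss: float,
                 shared_docs: float,
                 deliberate_revision: float) -> float:
    """Toy linear model of the entropy growth rate (dE/dt).
    All inputs are in [0, 1]. Hypothetical weighting: every
    factor counts equally, which real systems will not honor."""
    growth = deadline_pressure + handoff_loss + (1.0 - review_coverage)
    counter_work = shared_docs + deliberate_revision
    return growth - counter_work

# Deadline pressure, no review, lossy handoffs: rate is strongly positive.
print(entropy_rate(0.9, 0.0, 0.7, 0.1, 0.1))
# Sustained counter-work: the rate goes negative and entropy is paid down.
print(entropy_rate(0.3, 0.8, 0.2, 0.6, 0.5))
```

The point of the sketch is the sign structure, not the numbers: entropy growth is a balance, and the counter-work terms are the only ones a team directly controls.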
This paper is in active development. The full version will be published in the coming weeks.