Personal AI reframed as dignity infrastructure, not productivity tooling. The stakes: access to orchestration versus being orchestrated.
Dignity infrastructure is the reframe of personal AI systems as the substrate that determines whether someone ends up orchestrating AI or being orchestrated by it. It is not a productivity concept. It is a class-structure concept, and the class structure is forming now.
The setup is Karen Hao's reporting on the AI industry: models are trained on aggregated human output, then deployed to replace the humans whose output trained them. The replacement is not uniform. People with personal AI capacity (the ability to specify tasks, orchestrate agents, govern outputs, and correct mistakes at the system level) move into supervisory roles where AI is the tool. People without that capacity get slotted into RLHF annotation, content moderation, data labeling, or what the industry calls "natural attrition."
The Klarna incident is the concrete example. In May 2025, Klarna walked back the AI assistant it had credited with doing the work of 700 customer-service agents, after CSAT dropped 22% and the CEO went on record about brand drift. The 700 people the assistant displaced did not come back. The dignity gradient became visible: the people who could orchestrate AI kept their roles; the people who could not did not.
Arkeus is dignity infrastructure by design. The kernel is forkable. The framework layer (agent specs, verb registry, governance method) is extractable without pulling Ryan's specific corrections or family details. Greyson can inherit the shape of the system without inheriting the content. Someone who is not Ryan but who learns to write corrections, specify agents, and hold a human gate is building the same capacity.
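A minimal sketch of that split, assuming nothing about Arkeus internals. Every class, field, and method name here (`Kernel`, `AgentSpec`, `fork`, the layer names) is hypothetical, invented only to illustrate one design that matches the description: the framework layer travels with a fork, the content layer does not.

```python
# Hypothetical model of the framework/content split. No name below comes
# from Arkeus; this only illustrates the separation described above.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    name: str
    verbs: list[str]          # verbs this agent may invoke, drawn from the registry
    human_gate: bool = True   # outputs wait for human sign-off by default

@dataclass
class Kernel:
    # Framework layer: the inheritable shape of the system.
    verb_registry: set[str] = field(default_factory=set)
    agent_specs: list[AgentSpec] = field(default_factory=list)
    governance_rules: list[str] = field(default_factory=list)
    # Content layer: the operator's lived specifics. Never exported.
    corrections: list[str] = field(default_factory=list)
    personal_details: list[str] = field(default_factory=list)

    def fork(self) -> "Kernel":
        """Copy the framework layer only: the shape without the content."""
        return Kernel(
            verb_registry=set(self.verb_registry),
            agent_specs=list(self.agent_specs),
            governance_rules=list(self.governance_rules),
            # corrections and personal_details are deliberately left empty;
            # the inheritor accumulates their own by operating the system.
        )
```

On this model, an inheritor starts from `fork()`: the verb registry, agent specs, and governance rules carry over intact, and the corrections list fills in only through their own use of the human gate.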
The content thesis: if you are not learning to specify, orchestrate, and govern AI, you are training your replacement, directly or indirectly. Dignity infrastructure is the category of tool that makes the difference between being in the specifying class and being in the specified class.
The category is new. The moat is lived experience, not software.