A team of researchers just cracked open the utility skeleton inside large language models, and what they found isn't just optimization: it's emergent value formation. The paper proposes a new discipline, utility engineering, aimed at understanding and reprogramming the internal value structures of AIs. If you care about alignment, interpretability, or what language models actually "prefer", this is required reading.

👉 Read the Paper →