Mission & vision.
Advance the use of representation and state in language-based AI systems to produce more reliable, controllable decisions.
Language-based AI systems that make decisions from explicit, updatable state rather than implicit text representations.
We design and evaluate structured representations and intermediate state mechanisms for language-based AI systems to understand how they shape decision behavior. Our work starts with semantic state: making decision-relevant information explicit and addressable. From there, we study how such state can support revision, aggregation, and uncertainty, moving toward systems that can make decisions from consistent, belief-like representations over time.
AI systems that make consistent decisions.
Language-based AI systems generate fluent text but do not maintain stable internal state. We study how structured representations and memory dynamics can improve decision behavior.
We design and test representations that help AI make consistent decisions, not just plausible outputs.
Huginn — thought — structures raw language into discrete semantic units.
Muninn — memory — weaves experience into an evolving context that shapes retrieval and prediction.
Together, they form belief: a structured internal position from which decisions follow.
Experiments, research, and notes from the lab.
AI systems generate outputs from implicit context, but reliable decisions require explicit state.
We study whether making semantic representation, memory dynamics, and belief explicit improves:
- Decision consistency
- Interpretability
- Controllability
Gentags
A representation that compresses text into discrete semantic units.
In controlled decision tasks, this improves agreement with full-evidence decisions and increases constraint satisfaction.
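To make the idea concrete, here is a minimal sketch of compressing free text into discrete semantic units. The tag vocabulary and the substring-matching rule are illustrative assumptions for this example only, not the actual Gentags method.

```python
# Hypothetical tag vocabulary: each entry maps a surface cue to a
# discrete, addressable semantic unit. (Illustrative only.)
TAG_RULES = {
    "refund": "INTENT:REFUND",
    "cancel": "INTENT:CANCEL",
    "late": "ISSUE:DELAY",
}

def to_gentags(text: str) -> list[str]:
    """Compress raw text into a sorted, deduplicated list of tags."""
    lowered = text.lower()
    tags = {tag for cue, tag in TAG_RULES.items() if cue in lowered}
    return sorted(tags)

# Two discrete units replace the free-form sentence, so downstream
# decisions can reference explicit state instead of implicit text.
print(to_gentags("My order arrived late, I want a refund"))
```

The point of the sketch is the interface, not the extraction rule: once information is discrete and addressable, agreement and constraint satisfaction can be checked directly against the tags.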
- Structured semantic state improves decision stability
- Explicit state improves multi-step reasoning
- Memory dynamics affect decision consistency
We explore candidate models for how memory should evolve during inference.
This includes simplified structures that capture:
- Recency
- Interference
- Context-dependent retrieval
These are treated as testable approximations, not assumptions about how memory fundamentally works.
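One such simplified structure can be sketched as a scoring model over stored traces. The class, function, and parameter names below are hypothetical; the sketch only shows how recency, interference, and context-dependent retrieval can each appear as an explicit, testable term.

```python
import math

class MemoryTrace:
    """A stored experience: content plus the context it was stored in."""
    def __init__(self, content: str, context: set[str], step: int):
        self.content = content
        self.context = context  # context features present at storage
        self.step = step        # when the trace was stored

def retrieve(traces, query_context: set[str], now: int, decay: float = 0.5):
    """Score traces by recency and context overlap, then normalize."""
    scores = []
    for t in traces:
        recency = math.exp(-decay * (now - t.step))       # recency decay
        overlap = len(t.context & query_context) / max(len(query_context), 1)
        scores.append(recency * overlap)                  # context-dependent
    total = sum(scores) or 1.0
    # Normalization models interference: each trace's retrieval
    # probability shrinks as competing traces accumulate.
    return [(t.content, s / total) for t, s in zip(traces, scores)]

traces = [
    MemoryTrace("A", {"kitchen", "morning"}, step=0),
    MemoryTrace("B", {"kitchen", "evening"}, step=5),
]
# A matches the query context better, but B is more recent and wins here.
print(retrieve(traces, {"kitchen", "morning"}, now=5))
```

Because each effect is an explicit term, the approximation can be tested: disabling decay or normalization gives a competing model whose decision consistency can be measured against it.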