Mission & vision.

Mission

Advance how representation and state are used in language-based AI systems to produce more reliable and controllable decisions.

Vision

Language-based AI systems that make decisions from explicit, updatable state rather than implicit text representations.

How we work

We design and evaluate structured representations and intermediate state mechanisms for language-based AI systems to understand how they shape decision behavior. Our work starts with semantic state: making decision-relevant information explicit and addressable. From there, we study how such state can support revision, aggregation, and uncertainty, moving toward systems that can make decisions from consistent, belief-like representations over time.

hug&mun labs · ai research

AI systems that make consistent decisions.

Language-based AI systems generate fluent text but do not maintain stable internal state. We study how structured representations and memory dynamics improve decision behavior.

We design and test representations that help AI make consistent decisions, not just plausible outputs.

Huginn (thought) — structures raw language into discrete semantic units.

Muninn (memory) — weaves experience into an evolving context that shapes retrieval and prediction.

Together, they form belief: a structured internal position from which decisions follow.


Experiments, research, and notes from the lab.

Core idea

AI systems generate outputs from implicit context, but reliable decisions require explicit state.

We study whether making semantic representation, memory dynamics, and belief explicit improves decision behavior, through a four-stage pipeline:

1. Semantic state — structured meaning extracted from input
2. Context state — evolving memory that shapes interpretation
3. Belief state — uncertainty-bearing position used for decisions
4. Decision — action or judgment based on belief
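The four-stage pipeline above can be sketched as plain data types. Everything here is an illustrative assumption — the class names, the recurrence-based belief aggregation, and the threshold decision rule are stand-ins, not the lab's actual mechanisms.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the explicit-state pipeline; names and logic
# are illustrative assumptions, not hug&mun labs implementations.

@dataclass
class SemanticState:
    """Stage 1: structured meaning extracted from input."""
    units: list[str]  # discrete semantic units

@dataclass
class ContextState:
    """Stage 2: evolving memory that shapes interpretation."""
    history: list[SemanticState] = field(default_factory=list)

    def update(self, s: SemanticState) -> None:
        self.history.append(s)

@dataclass
class BeliefState:
    """Stage 3: uncertainty-bearing position used for decisions."""
    claims: dict[str, float]  # claim -> confidence in [0, 1]

def form_belief(ctx: ContextState) -> BeliefState:
    # Toy aggregation: a unit's confidence is how often it recurs in memory.
    counts: dict[str, int] = {}
    for s in ctx.history:
        for u in s.units:
            counts[u] = counts.get(u, 0) + 1
    total = max(len(ctx.history), 1)
    return BeliefState({u: c / total for u, c in counts.items()})

def decide(belief: BeliefState, threshold: float = 0.5) -> list[str]:
    """Stage 4: act only on claims above a confidence threshold."""
    return [u for u, p in belief.claims.items() if p >= threshold]

ctx = ContextState()
ctx.update(SemanticState(units=["meeting:tuesday", "budget:approved"]))
ctx.update(SemanticState(units=["meeting:tuesday"]))
belief = form_belief(ctx)
print(decide(belief, threshold=0.6))  # → ['meeting:tuesday']
```

The point of the sketch is that each stage is explicit and addressable: the belief can be inspected, revised on new evidence, and reused across decisions, rather than living implicitly in generated text.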
First result

Gentags

A representation that compresses text into discrete semantic units.

In controlled decision tasks, this improves agreement with full-evidence decisions and increases constraint satisfaction.

Read the paper
ACL 2026 submission · Pre-print
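To make "compresses text into discrete semantic units" concrete, here is a deliberately naive sketch using regex-based extraction. This toy tagger is a hypothetical stand-in for illustration only — it is not the Gentags method from the paper.

```python
import re

# Hypothetical compression of free text into discrete "key:value" units.
# The patterns and tag scheme are illustrative assumptions, not Gentags.

PATTERNS = {
    "deadline": re.compile(r"\bby (\w+day)\b", re.IGNORECASE),
    "budget": re.compile(r"\$([\d,]+)"),
    "owner": re.compile(r"\b([A-Z][a-z]+) will\b"),
}

def to_tags(text: str) -> list[str]:
    """Compress text into a small, order-stable set of discrete units."""
    tags = []
    for key, pat in PATTERNS.items():
        m = pat.search(text)
        if m:
            tags.append(f"{key}:{m.group(1).lower()}")
    return tags

text = "Alice will send the report by Friday; the $12,000 budget is approved."
print(to_tags(text))  # → ['deadline:friday', 'budget:12,000', 'owner:alice']
```

The design intuition carries over even though the mechanism differs: discrete units give a decision procedure a fixed, comparable vocabulary to check constraints against, instead of re-reading free text each time.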
What we're testing
Explorations

We explore candidate models for how memory should evolve during inference.

This includes simplified structures that capture aspects of how memory evolves.

These are treated as testable approximations, not assumptions about how memory fundamentally works.
