Mission
We study how humans and AI shape one another — and translate that understanding into frameworks that protect judgment, agency, and responsibility.
- Understand: how AI changes attention, learning, trust, and decision-making.
- Protect: the human capacities that degrade under sustained delegation.
- Translate: research into usable review lenses for real deployments.
- Stay independent: guided by evidence, openness, and conscience — not product cycles.
We focus on what becomes fragile when AI is not "a tool you use," but an environment you live inside: attention, deliberation, developmental integrity, and the ability to hold complexity without surrendering authorship. Our work aims to keep responsibility legible — so collaboration scales without moral fog.