RINHUMAI

Research illuminating
the human layer of AI reality

We study what happens to human judgment, learning, and responsibility when AI becomes the default environment — not the technology itself, but its consequences for the people inside it.

Human Consciousness
The inner terrain
Attention, agency, development, and meaning-making — the capacities AI can deepen or quietly erode.
Resonant Intelligence
The bridge
Where inner experience meets external systems — shaping responsibility, trust, and coordination.
AI Systems
The outer terrain
Interfaces, infrastructures, and institutions that increasingly steer perception, choice, and collective reality.
Watch a primer
A short primer anchoring our research — from cognition to system-level consequences.
How we work

Our scientific approach

We make concerns inspectable, trace them downstream, and turn them into constraints that institutions can use.

Step 01

Make the claim inspectable

We define exactly what changes — a skill, a norm, an incentive — and spell out what evidence would strengthen or weaken the claim.

e.g. “Routine AI drafting reduces editorial judgment”.
Step 02

Track what happens downstream

Adoption does not stop at the first effect. We follow the chain: substitution leads to dependence, dependence shifts norms, and shifted norms create institutional lock-in.

e.g. AI-assisted grading starts as convenience — then becomes the standard nobody questions.
Step 03

Translate into constraints

Every finding becomes a design or governance constraint that real institutions can implement: accountability paths, reversibility conditions, boundary mechanisms that hold under scale.

e.g. “Human review must remain mandatory before automated escalation”.
Driving questions

Questions that shape our work

AI increasingly becomes part of the environment in which judgment, learning, and responsibility unfold. These questions guide what we investigate and what we publish.

When AI becomes part of the cognitive environment, what happens to attention, memory, and independent judgment — and how do we measure the drift?
Where does responsibility land when outcomes emerge from human–AI collaboration, and which institutional conditions keep that legible under scale?
Which human capabilities must remain intact — even when automation makes them optional — for development, governance, and care to stay meaningful?

Our research produces papers and essays. The questions give us direction — the publications are where the work lands.