Watch a clip, see what the average brain does.
Four short Creative Commons clips, four different cortical systems. Pick one. The brain above re-renders as the video plays, interpolating between editorial keyframes, each a weighted composition of Neurosynth meta-analysis terms chosen for the moment-by-moment stimulus.
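Under the hood this is plain keyframe blending. A minimal sketch, assuming each keyframe is a timestamp plus a term-weight map; the `Keyframe` and `blendAt` names are illustrative, not the page's actual API:

```ts
// A keyframe pairs a playback time with a Neurosynth term-weight composition.
type TermWeights = Record<string, number>; // term -> weight (weights sum to 1)

interface Keyframe {
  t: number;            // seconds into the clip
  weights: TermWeights; // editorial composition at this moment
}

// Linearly interpolate term weights at playback time `time`.
function blendAt(keyframes: Keyframe[], time: number): TermWeights {
  // Clamp to the first/last keyframe outside the covered range.
  if (time <= keyframes[0].t) return keyframes[0].weights;
  const last = keyframes[keyframes.length - 1];
  if (time >= last.t) return last.weights;

  // Find the bracketing pair and the interpolation fraction between them.
  let i = 0;
  while (keyframes[i + 1].t < time) i++;
  const a = keyframes[i];
  const b = keyframes[i + 1];
  const f = (time - a.t) / (b.t - a.t);

  // Blend every term present in either keyframe (missing terms count as 0).
  const out: TermWeights = {};
  const terms = new Set([...Object.keys(a.weights), ...Object.keys(b.weights)]);
  for (const term of terms) {
    out[term] = (1 - f) * (a.weights[term] ?? 0) + f * (b.weights[term] ?? 0);
  }
  return out;
}
```

Because the blend is linear, the rendered map between two keyframes is always a convex mix of its neighbors, so playback never shows a composition the editors didn't bracket.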
Built in the spirit of Meta's TRIBE v2 demo at aidemos.atmeta.com/tribev2, with two honesty differences: every prediction here is precomputed (not live TRIBE inference), and every clip carries its own provenance + license + Neurosynth term composition. When the author runs the Colab notebook with their HuggingFace facebook/tribev2 access, real TRIBE predictions drop in at the same JSON path — the page renders them without code change.
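The drop-in works because both sources target one data contract. A sketch of what that contract could look like, assuming one JSON file per clip; the path, field names, and `source` flag here are placeholders, not the repository's actual schema:

```ts
// Shape shared by precomputed editorial keyframes and real TRIBE output.
// Field names are hypothetical; only the "same JSON path" idea is from the page.
interface ClipPredictions {
  source: "editorial" | "tribe-v2"; // provenance flag
  atlas: "HCP-MMP-360";             // cortical parcellation the values index
  keyframes: { t: number; weights: Record<string, number> }[];
}

async function loadPredictions(clipId: string): Promise<ClipPredictions> {
  // Same path whether the file holds editorial keyframes or TRIBE inference,
  // so swapping in real predictions requires no renderer changes.
  const res = await fetch(`clips/${clipId}/predictions.json`);
  if (!res.ok) throw new Error(`no predictions for ${clipId}: ${res.status}`);
  return (await res.json()) as ClipPredictions;
}
```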
Water lily opening
A timelapse close-up of a flower opening. Sustained visual perception builds across the clip; subtle reward weighting follows as the bloom completes. The editorial composition tracks the viewer, not the flower (see the composition sketch after the keyframes below).
Neurosynth meta-analysis (preview) · HCP-MMP-360 (Glasser 2016, doi:10.1038/nature18933) · CC0
- t=0.0s · perception 50% · face 10% · attention 20% · emotion 20%
- t=7.0s · perception 40% · attention 20% · imagery 20% · emotion 20%
- t=14.0s · perception 35% · reward 20% · attention 20% · imagery 25%
- t=20.0s · perception 30% · reward 30% · attention 20% · default mode 20%
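For rendering, a blended keyframe has to become one value per cortical parcel. A minimal sketch, assuming each Neurosynth term ships as a precomputed 360-value meta-analytic map over HCP-MMP parcels; `termMaps` and `composeActivation` are hypothetical names:

```ts
const N_PARCELS = 360; // HCP-MMP-360 (Glasser 2016)

// term -> per-parcel meta-analytic values, e.g. termMaps["perception"][42]
type TermMaps = Record<string, Float32Array>;

// Weighted sum of term maps: one activation value per parcel, ready to
// color the cortical mesh.
function composeActivation(
  weights: Record<string, number>,
  termMaps: TermMaps,
): Float32Array {
  const activation = new Float32Array(N_PARCELS); // zero-initialized
  for (const [term, w] of Object.entries(weights)) {
    const map = termMaps[term];
    if (!map) continue; // skip terms without a loaded map
    for (let p = 0; p < N_PARCELS; p++) {
      activation[p] += w * map[p];
    }
  }
  return activation;
}

// e.g. the t=20.0s keyframe above:
// composeActivation(
//   { perception: 0.30, reward: 0.30, attention: 0.20, "default mode": 0.20 },
//   termMaps,
// );
```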
Not a measurement of your brain. Not even a measurement of any specific brain. What you're seeing is the activation pattern peer-reviewed fMRI literature aggregates for the term composition above, projected onto a standard cortical surface. The model is not the mind. The aggregate is not the person.
