When Lawmakers Outlaw ‘Three-Cueing’ and Accidentally Outlaw Reading
Three-Cueing Meets the Law: How a Literacy War Lost the Plot
Source: Shanahan on Literacy – "Three-Cueing and the Law"
Across the U.S., the phrase "science of reading" has gone from niche research shorthand to political slogan. Governors tweet it, advocacy groups brand around it, and statehouses translate it into statute. One of the clearest targets of this wave: the "three-cueing" system.
On paper, the story is simple: decades of research show that teaching children to guess words from pictures and context undermines fluent, accurate decoding. So states like Ohio and North Carolina ban three-cueing outright.
In practice, as literacy scholar Timothy Shanahan argues, the laws are technically incoherent, pedagogically dangerous, and a case study in what happens when complex cognitive science is flattened into legislative bullet points.
For developers, learning scientists, edtech founders, and technically minded leaders watching this from the sidelines, there’s a deeper pattern here—one with implications for how we codify "AI safety," "responsible ML," and any other science-driven practice in law.
What Three-Cueing Actually Is
Three-cueing was never just a classroom trick; it began as a theory of how readers process print.
Traditionally, it names three sources of information ("cues") that a reader might draw on when identifying a word:
- Semantic: Does this word make sense in context?
- Syntactic: Does it fit the grammatical structure of the sentence?
- Graphophonic: Do the letters and spelling patterns map to plausible sounds?
Kenneth Goodman’s influential 1960s work framed reading as a "psycholinguistic guessing game," suggesting that proficient readers rely increasingly on prediction—using meaning and syntax to anticipate words, consulting the print as little as possible.
Empirically, parts of this observation are true: when readers misread words, miscues are often semantically or syntactically plausible. Young children absolutely use environmental context and expectations. But Shanahan points out the critical distinction the political narrative misses: this research describes what happens when reading breaks down, not the mechanism of skilled word recognition.
What the Science of Reading Actually Says
Eye-tracking and longitudinal studies paint a very different picture of proficient reading than the three-cueing mythos:
- Skilled readers fixate on most words and use information from essentially all letters (Rayner & Pollatsek, 1989).
- As proficiency increases, reliance on graphophonic information (letter–sound correspondences) grows; reliance on semantic/syntactic guessing declines (Stanovich, Cunningham, & Feeman, 1984).
- Neuroscience and cognitive models converge: fluent readers form tightly integrated mappings between orthography (letters), phonology (sounds), and meaning. Accurate decoding isn’t optional; it’s the backbone.
In other words:
- Struggling readers lean on context and pictures because their decoding is weak.
- Skilled readers decode automatically from print, and use context to monitor and confirm—not to guess.
That distinction is everything. Three-cueing, when taught as "look at the picture, check the first letter, think what makes sense," formalizes the habits of poor readers and trains children away from the code they must master.
So far, so good for the legislators, right? Bad model, ban it.
Not quite.
When "Ban the Bad Stuff" Becomes "Ban Reading"
Ohio’s statute, as quoted by Shanahan, defines its prohibition this way:
“As used in this section, ‘three-cueing approach’ means any model of teaching students to read based on meaning, structure and syntax, and visual cues.”
This is where technically literate readers will feel the gears grind.
Taken literally, that definition:
- Goes far beyond outlawing guessing.
- Sweeps in foundational models like the Simple View of Reading, Scarborough’s Reading Rope, Ehri’s orthographic mapping, and federal What Works Clearinghouse guidance.
- Collides with Ohio’s own definition of reading instruction, which includes comprehension.
If teachers are barred from "any model" based on meaning, structure, syntax, and visual cues, then:
- Teaching comprehension strategies is suspect.
- Teaching students to use syntax or semantics to evaluate whether a decoded word makes sense is suspect.
- Teaching the second, crucial step of decoding—checking the pronunciation against context and known language patterns—is suspect.
As Shanahan wryly notes, it’s like protecting baseball by prohibiting bats.
Legally, this isn’t just sloppy; it’s dangerous. Statutes are interpreted according to their plain language. Teachers don’t get to assume, "They didn’t mean that." District lawyers, compliance officers, curriculum vendors, and edtech platforms will read the restriction conservatively—and strip out practices the research actually supports.
The Two-Step Decode That Policy Just Broke
Shanahan leans on orthography expert Richard Venezky to articulate what real decoding instruction entails. It’s not "just phonics" in the caricatured sense.
Effective decoding is a two-step process:
- Convert graphemes to phonemes: map letters and patterns to sounds; blend them.
- Monitor and adjust using context and language knowledge: if the first attempt doesn’t match any known spoken word or doesn’t fit the sentence, flex the vowel, consider alternative pronunciations, and check meaning and syntax.
That second step—contextual verification—is not three-cueing-as-guessing. It’s error correction, calibration, and the path toward robust orthographic mapping.
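To make the two steps concrete for the developers in the audience, here is a minimal sketch of that decode-then-verify loop. Everything in it is invented for illustration: the grapheme mapping, the toy lexicon, and the crude context model are stand-ins, not anyone's published algorithm or curriculum.

```python
# Toy sketch of two-step decoding: print first, context only as a check.
GRAPHEME_TO_PHONEMES = {"ea": ["iy", "eh"]}         # "ea" is ambiguous: "bead" vs. "bread"
LEXICON = {"br-iy-d": "breed", "br-eh-d": "bread"}  # pronunciations of known spoken words
CONTEXT_FITS = {"spread butter on the ___": {"bread", "toast"}}  # toy meaning/syntax model

def decode(onset: str, vowel: str, coda: str) -> list[str]:
    """Step 1: map graphemes to phonemes and blend candidate pronunciations."""
    return [f"{onset}-{p}-{coda}" for p in GRAPHEME_TO_PHONEMES[vowel]]

def confirm(candidates: list[str], frame: str) -> str | None:
    """Step 2: check each decoded candidate against known words and the
    sentence frame; flex to the next vowel option if the first misfires."""
    for pron in candidates:
        word = LEXICON.get(pron)
        if word and word in CONTEXT_FITS[frame]:
            return word
    return None

candidates = decode("br", "ea", "d")                    # ["br-iy-d", "br-eh-d"]
print(confirm(candidates, "spread butter on the ___"))  # -> "bread", not "breed"
```

Note what the context model is doing: it never replaces looking at the letters; it only arbitrates between pronunciations the print itself generated.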
Ohio-style bans, written broadly, effectively endorse step one and criminalize step two. That is, they outlaw the very metacognitive use of "meaning, structure and syntax" that prevents robotic word calling and supports comprehension.
For students who don’t intuit these strategies on their own—precisely the children most at risk—this is catastrophic. And for developers building "science of reading" aligned tools, it’s a compliance nightmare: are you allowed to prompt a child to "check if that word makes sense" in Ohio? In North Carolina? In Wisconsin?
The honest answer under some of this legislation: it depends on how nervously your general counsel reads the verbs.
Three-Cueing, Misuse, and the Edtech Temptation
It’s worth being explicit about what should be dying here.
Instructional patterns like:
- "Look at the picture and guess."
- "Say the first sound, then think what word would make sense."
- "Skip the word if it’s hard; the story and pictures will help."
are indefensible at scale. They produce fragile readers whose strategies collapse the moment the text gets denser, more technical, or less illustrated—which is to say, right when reading becomes mission-critical for STEM, CS, and beyond.
For technology builders, the analogy is direct:
- A reading app that rewards picture-guessing is like an IDE that silently "fixes" code by guessing user intent instead of enforcing syntax. It feels smooth—until it fails in production.
- A system that encourages context-before-code actively prevents learners from acquiring the underlying deterministic skills.
Banning that pattern is rational. But banning all explicit instruction that leverages meaning and syntax is anti-rational.
How Policy Lost the Thread
Why did the laws go wrong? Because they tried to legislate away an instructional error without:
- Distinguishing between word identification and comprehension.
- Defining "three-cueing" with any technical precision.
- Consulting the very researchers whose work is being invoked.
Instead, lawmakers reached for catch-all language that plays well in hearings but performs terribly in classrooms. The result, as reflected in Shanahan’s commenters—teachers, specialists, researchers—is predictable:
- Quality interventions like Reading Recovery are attacked or dismantled wholesale, rather than analyzed and refactored (e.g., separating valuable professional development from flawed cueing guidance).
- Teachers report being chilled from teaching comprehension, vocabulary, and syntactic awareness in early grades, fearing that any reference to "meaning" makes them non-compliant.
- Advocates on both extremes—"all phonics, no context" vs. "all rich text, no code"—claim victory while children remain under-served.
And once this is in statute, your options are litigation, reinterpretation, or quiet, uneven noncompliance.
A Pattern Technologists Should Recognize
If this feels familiar to anyone working in AI policy or cybersecurity, it should.
We’ve seen this movie:
- Good science identifies a real failure mode.
- Advocacy compresses it into a slogan.
- Legislators codify the slogan, not the mechanism.
- Practitioners are left to implement a law that conflicts with the underlying science.
Think of poorly drafted "encryption backdoor" proposals, or AI rules that conflate "black box" with "unsafe" in ways that outlaw legitimate architectures while missing actual risk.
What the three-cueing bans demonstrate, in slow motion and at national scale, is how:
- Overbroad statutory language can weaponize partial understanding of cognitive science.
- Mandate/ban frameworks are brittle when the domain is complex, evolving, and context-dependent.
- Real progress requires technically accurate definitions, scoped obligations, and direct consultation with domain experts—not just journalists, lobbyists, or the loudest influencers.
For edtech and learning science teams, navigating this landscape demands:
- Very explicit modeling of your instructional logic: first decode from print, then confirm via context, never reverse (a sketch follows this list).
- Jurisdiction-aware content design: documentation that shows precisely how your approach aligns with empirical reading science—not with political shorthand.
- Willingness to participate in the policy conversation before the next wave of bills is written.
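One way to make that explicit is to encode the ordering constraint and the jurisdictional switch directly in the product. The sketch below is hypothetical: the prompt wording, the JurisdictionPolicy type, and the assumption that a context-confirmation prompt might need a per-state toggle are illustrative choices, not legal guidance or any vendor's actual design.

```python
# Hypothetical: instructional logic made explicit and jurisdiction-aware.
from dataclasses import dataclass

@dataclass(frozen=True)
class JurisdictionPolicy:
    name: str
    allow_context_confirmation: bool  # set with counsel; statutes vary in breadth

def word_attempt_prompts(policy: JurisdictionPolicy) -> list[str]:
    """Decoding from print always comes first; context is only ever a check."""
    prompts = [
        "Look at all the letters and sound the word out.",
        "Blend the sounds together.",
    ]
    if policy.allow_context_confirmation:
        # Step two: verification, not guessing. Never offered before decoding.
        prompts.append("Does the word you read make sense in the sentence?")
    return prompts

cautious = JurisdictionPolicy("broad-ban state, read conservatively", False)
standard = JurisdictionPolicy("narrow-ban state", True)
print(word_attempt_prompts(cautious))
print(word_attempt_prompts(standard))
```

The point of the structure is auditability: a reviewer, or a regulator, can see at a glance that the app never prompts context before print, and exactly which prompts a given jurisdiction's reading of the statute turns off.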
Toward Laws That Don’t Break Literacy (or Science)
If we take Shanahan’s critique seriously, better policy is not mysterious. It would:
- Define "three-cueing" narrowly: as instructional approaches that prompt students to use non-graphophonic information (pictures, context) as the primary means of word identification instead of as a check on decoding.
- Explicitly distinguish:
  - Word reading (decoding, accuracy, fluency) from
  - Language comprehension (vocabulary, syntax, background knowledge).
- Mandate what should be present (systematic phonics, phonemic awareness, language-rich instruction) instead of vaguely banning what "shall not" occur.
- Acknowledge that evaluating decoded words against meaning and syntax is not a bug; it’s how human readers become accurate and automatic.
Had lawmakers done that, they would have:
- Killed off the worst guessing-game practices.
- Preserved and elevated the sophisticated two-step decoding process supported by research.
- Given teachers a clear, defensible blueprint instead of a legal tripwire.
Instead, as Shanahan warns with his King Canute parable, they’ve tried to command the cognitive tides—and only succeeded in drenching the people doing the real work.
For a field increasingly mediated by software, dashboards, adaptive assessments, and AI tutors, this is more than an education story. It’s a cautionary tale about what happens when we let legislation define "science" by vibes instead of verification, and when we confuse being against the right villain with being for the right design.
In literacy, as in computing, precision is not pedantry. It’s the difference between systems that teach children to actually read—and systems that only look compliant until the text gets hard.