The FSRS Algorithm: Optimizing Retention via Adaptive Scheduling
An analysis of the Free Spaced Repetition Scheduler (FSRS) and its efficiency advantages over the legacy SM-2 algorithm.
For over three decades, the landscape of spaced repetition software (SRS) has been dominated by a single algorithm: SM-2. Developed in 1987 by Piotr Woźniak, SM-2 became the backbone of industry-standard tools like Anki and Mnemosyne.
While SM-2 was revolutionary for its time, it is fundamentally a static heuristic. It applies a generalized set of rules to every learner, assuming that memory decay functions identically across all human beings and all subject matters [4].
This assumption is false. And for Chinese learners—who must retain thousands of visually similar characters—the inefficiencies compound dramatically.
The Problem with SM-2
SM-2 operates on a simple principle: each successful recall lengthens the interval before the next review, while a failure resets the interval and the card starts over.
The algorithm uses a single “Ease Factor” to determine how quickly intervals grow. Get a card right repeatedly, and the ease factor increases. Struggle with a card, and it decreases.
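SM-2's update rule is compact enough to sketch in full. The version below follows the published SM-2 formula (grades 0 to 5, a success threshold of 3, an ease floor of 1.3, and no ease change on failure), not Anki's modified variant:

```python
def sm2_review(interval: float, ease: float, repetitions: int, grade: int):
    """One SM-2 review step.

    grade: 0-5 self-assessment, where >= 3 counts as a successful recall.
    Returns the updated (interval_in_days, ease_factor, repetition_count).
    """
    if grade < 3:
        # Failure: restart the card. The ease factor is left unchanged,
        # per the original SM-2 description.
        return 1.0, ease, 0

    # Published SM-2 ease update, clamped at the 1.3 floor.
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    repetitions += 1

    # Fixed intervals for the first two successes, then multiplicative growth.
    if repetitions == 1:
        interval = 1.0
    elif repetitions == 2:
        interval = 6.0
    else:
        interval = interval * ease
    return interval, ease, repetitions
```

Note that the entire memory state is one scalar, `ease`: there is no separate notion of how fragile the memory is right now versus how hard the card is intrinsically, which is exactly the limitation the next sections describe.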
This approach has two critical flaws:
1. Over-Reviewing
High-retention items are reviewed too frequently, wasting study time. If you learned 媽 (mā, mother) in Week 1 and have never forgotten it, SM-2 will still schedule it for review at fixed intervals—even though your probability of forgetting is essentially zero [4].
Multiply this across hundreds of “easy” characters, and you spend significant time reviewing words that need no reinforcement.
2. Under-Reviewing
Low-retention items are pushed too far into the future, leading to “lapses” (forgetting) and the need to relearn the card from scratch [4].
SM-2 cannot distinguish between a card that is inherently difficult and a card that you happened to struggle with once. A single failed review tanks the ease factor, but a single successful review after a lapse may push the interval too aggressively.
The result: you forget words you thought you knew, then waste time relearning them from zero.
The FSRS Architecture: D, S, and R
In the debate of FSRS vs SM-2, the primary differentiator is granularity. FSRS does not rely on static multipliers. Instead, it calculates three distinct variables for every card in the database, based on the user’s specific review history [4]:
Difficulty (D)
The intrinsic complexity of the information itself.
Some Chinese characters are objectively harder to retain than others. Abstract words like 雖然 (suīrán, “although”) decay faster than concrete nouns like 貓 (māo, “cat”). Characters with complex stroke patterns fade faster than simple ones.
SM-2 treats all cards equally at the start. FSRS learns the inherent difficulty of each card based on how you perform with it.
Stability (S)
The time required, after a review, for the probability of recall to fall to 90%.
Stability is not the same as difficulty. A card can be hard to learn initially (high D) but stable once learned (high S). Classical Chinese particles, for example, may take many repetitions to acquire—but once internalized, they rarely fade.
SM-2 conflates these dimensions. FSRS separates them, allowing for more nuanced scheduling [4].
Retrievability (R)
The probability that you can recall the card at the current moment.
This is the dynamic variable—it changes continuously as time passes since your last review. FSRS models retrievability as a decay function, predicting exactly when your recall probability will drop below the target threshold (typically 90%).
When R approaches 90%, FSRS schedules a review. Not before (wasted effort), not after (risking a lapse).
The Mathematical Model
FSRS is built on the DSR (Difficulty, Stability, Retrievability) model of memory, which represents a significant advancement over SM-2’s heuristic approach.
A simplified form of the stability update can be written as:
S' = S × e^(w × (1 - R))
Where:
- S' is the new stability after a successful review
- S is the current stability
- R is the retrievability at the moment of review
- w is a learned parameter specific to the user
This means that reviewing a card when retrievability is lower (harder recall) produces a larger stability gain than reviewing when retrievability is high (easy recall). The algorithm rewards productive difficulty.
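A successful review at lower retrievability therefore multiplies stability by a larger factor. A minimal sketch of this update, with `w = 2.0` as an arbitrary illustrative value rather than a fitted per-user parameter:

```python
import math

def next_stability(s: float, r: float, w: float = 2.0) -> float:
    """Stability after a successful review, per S' = S * e^(w * (1 - R)).

    s: current stability in days
    r: retrievability (0-1) at the moment of review
    w: learned per-user parameter (2.0 is an arbitrary illustration)
    """
    return s * math.exp(w * (1.0 - r))

# Reviewing on time (R = 0.90) grows stability less than reviewing
# slightly late (R = 0.70), when recall takes more effort:
on_time = next_stability(10.0, 0.90)  # ~12.2 days
late = next_stability(10.0, 0.70)     # ~18.2 days
```

The gap between `on_time` and `late` is the "productive difficulty" reward described above: the harder retrieval consolidates the memory more.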
FSRS also models forgetting curves explicitly, where SM-2 tracks no curve at all and relies on fixed interval multipliers:
R(t) = e^(-t/S)
Where:
- R(t) is retrievability at time t
- t is time since last review
- S is current stability
This exponential decay model matches empirical research on human memory far more closely than SM-2's step-function intervals. (Recent FSRS versions refine the curve further, fitting a power-law decay that matches pooled review data even better.)
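Because the decay curve is invertible, the scheduler never needs to search for the next review date: solving R(t) = target for t gives t = -S × ln(target). A sketch under the simplified exponential model:

```python
import math

def retrievability(t: float, s: float) -> float:
    """Predicted recall probability t days after the last review, R(t) = e^(-t/S)."""
    return math.exp(-t / s)

def next_interval(s: float, target: float = 0.90) -> float:
    """Days until retrievability decays to the target threshold."""
    return -s * math.log(target)

# A card with stability 10 days reaches the 90% threshold after about 1.05 days;
# the same card at stability 100 days is not due for roughly 10.5 days.
short = next_interval(10.0)
long = next_interval(100.0)
```

This is why stable cards drift out of the queue automatically: the interval scales linearly with S, so every stability gain buys a proportionally longer gap before the next review.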
Personalization: Why It Matters for Chinese
Most flashcard apps treat every student the same. FSRS treats you as an individual [5].
The algorithm analyzes your review history to calculate exactly how fast you forget specific types of words. It adapts the schedule continuously. If you have strong retention for concrete nouns but struggle with abstract grammar particles, FSRS will space them differently [5].
For Chinese learners specifically, this personalization addresses several unique challenges:
Tone Pairs
Many students consistently confuse specific tone combinations. If you repeatedly struggle with second-tone words but breeze through fourth-tone words, FSRS will schedule more frequent reviews for your weak spots.
Orthographic Similarity
Characters like 未 (wèi) and 末 (mò) differ by a single stroke length [8]. If your review history shows confusion between orthographically similar characters, FSRS can identify the pattern and increase review frequency for both members of the confusable pair.
Vocabulary Domains
You may retain food vocabulary effortlessly (high stability) while struggling with political terms (low stability). FSRS learns these domain-specific patterns from your data, not from population averages.
Efficiency Gains and Workload Reduction
The practical application of this memory model is a reduction in cognitive load.
Benchmarks comparing FSRS against SM-2 consistently demonstrate that FSRS can achieve equivalent retention rates with 20–30% fewer reviews [4].
For a Dangdai student learning 5,000+ vocabulary items across six books, this efficiency gain is not trivial. It translates to:
- Hours saved per month on redundant reviews
- Reduced burnout risk from overwhelming review queues
- Faster progress through new material (time saved on reviews = time available for acquisition)
The Review Snowball Problem
The efficiency gain becomes critical when considering the “Review Snowball” phenomenon [6].
In any SRS system, every new card learned today represents a debt of future reviews. Learn 20 new words today, and you commit to reviewing them tomorrow, in 4 days, in 10 days, and beyond.
With SM-2’s inefficient scheduling:
- Day 1: 20 reviews
- Day 7: 20 new + ~80 reviews from previous days
- Day 30: 20 new + ~200 reviews from previous days [6]
Within one month, a motivated student faces 220+ cards every morning. This is the primary cause of user attrition—the student misses one day, wakes up to 400 due cards, feels paralyzed, and quits [6].
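The snowball is easy to make concrete with a toy simulation. The interval ladder below (1, 3, 7, 16, 35 days after each prior review) is an illustrative assumption, not SM-2's actual output, and it ignores lapses, which only make the pile larger:

```python
INTERVALS = [1, 3, 7, 16, 35]  # toy review ladder (days between reviews)

def daily_due_counts(new_per_day: int, horizon: int) -> list:
    """Reviews due each day when new_per_day cards are learned every day."""
    due = [0] * horizon
    for start in range(horizon):  # cohort of cards learned on day `start`
        day = start
        for gap in INTERVALS:
            day += gap  # next scheduled review for this cohort
            if day < horizon:
                due[day] += new_per_day
    return due

counts = daily_due_counts(20, 31)
# Day 1 owes one cohort's reviews; by day 30, four interval layers overlap.
```

Even in this lapse-free best case, the daily load quadruples within a month, and every missed day carries forward in full. Shaving 20-30% off each layer, as FSRS does, compounds the same way in the other direction.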
FSRS mitigates this by:
- Scheduling fewer unnecessary reviews (higher efficiency)
- Predicting optimal intervals more accurately (fewer lapses requiring relearning)
- Adapting to individual forgetting curves (personalized pacing)
The result is a sustainable workload that remains manageable over months and years of study.
FSRS in Practice: The Zhong Chinese Implementation
Zhong Chinese implements FSRS as its core scheduling engine, with specific optimizations for Chinese language learning.
Calibration Phase
During your first 14 days, the system accelerates your pacing to gather data. It observes your retention patterns across different character types, vocabulary domains, and difficulty levels. This data builds your personalized memory profile.
Steady-State Scheduling
Once calibrated, the system normalizes your acquisition speed. It creates a steady-state review load that takes minutes, not hours—ensuring you finish the curriculum without burnout [6].
Integration with Dangdai
Our vocabulary maps directly to the A Course in Contemporary Chinese curriculum, lesson by lesson [9]. When you pre-learn vocabulary for tomorrow’s class using Zhong Chinese, you are not just memorizing definitions—you are building optimized memory traces that FSRS will maintain throughout your studies [1].
The combination of curriculum alignment and adaptive scheduling means that words you learned in Book 1 remain accessible when you need them in Book 4. The algorithm handles the maintenance; you focus on acquisition and production.
The Research Foundation
FSRS was developed by Jarrett Ye and is based on peer-reviewed memory research, including:
- Ebbinghaus forgetting curves — The foundational research on memory decay
- ACT-R cognitive architecture — Computational models of human memory
- Spacing effect literature — Decades of research on optimal review intervals
The algorithm’s parameters have been optimized against large datasets of real user reviews, ensuring that the model reflects actual human memory performance rather than theoretical assumptions.
Unlike SM-2—which was developed through personal experimentation in the 1980s—FSRS represents a modern, data-driven approach to spaced repetition.
Limitations and Considerations
FSRS is not magic. It optimizes when you review, but it cannot help you if:
- You do not show up. The algorithm requires consistent daily engagement to function. Sporadic use breaks the model.
- You do not learn actively. Passive recognition (just looking at cards) produces weaker memory traces than active recall and production.
- You ignore the schedule. Overriding FSRS recommendations—cramming before a test, skipping “easy” cards—undermines the optimization.
The algorithm is a tool. It amplifies consistent effort; it cannot replace it.
Conclusion: Memory Is Mathematical
The insight behind FSRS is simple but profound: memory is not magical, it is mathematical [3].
Your brain forgets according to predictable patterns. Those patterns can be modeled. And if they can be modeled, they can be optimized.
SM-2 was a pioneering attempt at this optimization, but it treated all learners—and all memories—as identical. FSRS recognizes that your forgetting curve is yours alone. It learns your patterns, adapts to your weaknesses, and schedules reviews at the precise moment they will be most effective.
For Chinese learners facing thousands of characters, dozens of grammar patterns, and years of study, this efficiency is not a luxury. It is the difference between sustainable progress and inevitable burnout.
We built Zhong Chinese on FSRS because we believe your study time is valuable. Every minute spent on an unnecessary review is a minute stolen from new learning, from speaking practice, from living in the language.
The algorithm handles the scheduling. You handle the learning.
Ready to apply these principles?
Start mastering Chinese with our science-backed curriculum.