📊 Research-grade, creator-friendly

We mapped TikTok’s algorithm—by measuring it.

Data-backed research, controlled experiments, and practical growth hacks built from measurable signals—not myths.

TikTok’s algorithm can’t be “seen” directly. It can only be inferred through patterns: retention curves, engagement timing, audience matching, and long-term account behavior. AlgoRanker turns those patterns into clear, testable guidance.

Scope: This project is built from public behavior and controlled testing. No insider access, no private datasets. Results are probabilistic (with uncertainty), not guarantees.
  • 16 services analyzed
  • 20+ data-backed hacks
  • 48h update frequency
PLACEHOLDER — “Typical Video Lifecycle” chart: a simple line chart of views over time (first 0–72 hours), highlighting test phase → expansion → plateau.


This is the core mental model: most videos die early; some get re-tested; a few expand rapidly when early signals are unusually strong.
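As a rough sketch of this mental model, the three phases could be labeled from hour-over-hour growth in views. The thresholds (3× growth for expansion, ≤1.1× for plateau) and the `label_phases` helper are illustrative assumptions, not measured values:

```python
# Hypothetical sketch: label hourly view counts with lifecycle phases.
# Thresholds are illustrative assumptions, not measured constants.

def label_phases(hourly_views, expansion_ratio=3.0, plateau_ratio=1.1):
    """Label each hour as 'test', 'expansion', or 'plateau' based on
    hour-over-hour growth in views."""
    phases = ["test"]  # the first hour is always the test phase
    for prev, cur in zip(hourly_views, hourly_views[1:]):
        growth = cur / prev if prev else float("inf")
        if growth >= expansion_ratio:
            phases.append("expansion")
        elif growth <= plateau_ratio:
            phases.append("plateau")
        else:
            phases.append(phases[-1])  # ambiguous hour: carry phase forward
    return phases

print(label_phases([100, 120, 500, 2000, 2100, 2150]))
# → ['test', 'test', 'expansion', 'expansion', 'plateau', 'plateau']
```

A real classifier would need smoothing and niche-specific thresholds; this only illustrates the test → expansion → plateau shape.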

Research Snapshot (continuously updated)

TikTok Algorithm Research — at a glance

This site treats TikTok growth as a measurement problem. We track public outcomes (views, likes, comments, shares), analyze timing and ratios, and run controlled experiments to separate correlation from causation when possible.

  • Public signals only: we infer behavior from observable patterns.
  • Longitudinal tracking: we follow videos and accounts over time, not just “viral moments”.
  • Segmentation: results differ by niche, account size, and audience geography.
  • Confidence levels: every “hack” includes a measured effect range (not a single magic rule).

Correlation vs causation

We explicitly label what is merely associated vs what was tested in controlled conditions.

Distribution, not averages

We prioritize percentiles and variability because virality is heavy-tailed.
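A tiny example of why percentiles beat averages on heavy-tailed data. The view counts below are invented for illustration: one viral outlier drags the mean far above what a typical video achieves, while the median and 90th percentile stay honest.

```python
import statistics

# Invented, heavy-tailed sample of per-video view counts.
views = [200, 350, 400, 500, 600, 800, 1200, 3000, 9000, 1_000_000]

mean = statistics.mean(views)
median = statistics.median(views)
p90 = sorted(views)[int(0.9 * len(views)) - 1]  # simple 90th-percentile pick

print(f"mean={mean:.0f}, median={median:.0f}, p90={p90}")
# → mean=101605, median=700, p90=9000
```

The mean (≈101k views) describes almost no video in the sample; the median (700) and p90 (9,000) describe what creators can actually expect.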

Niche-specific outcomes

Posting time, hook style, and engagement ratios behave differently across niches.

  • 11.8M+ videos analyzed
  • 42k+ accounts tracked
  • 21 niches segmented
  • 14 months observed
  • 127 test accounts
PLACEHOLDER — Data coverage visualization: a bar chart showing video counts by niche (or geography), plus a small note on sampling bias.


A transparent dataset overview is the fastest way to earn trust—because it shows where the conclusions come from.

What TikTok actually optimizes for (observed)

TikTok doesn’t optimize for “views” in isolation. Observed behavior is consistent with optimizing the probability of sustained user attention—approximated through several measurable signals.

⏱️

Retention & completion

Watch time relative to length, completion rate, and early drop-off are consistently predictive across niches.

🔁

Rewatch & saves

Repeat viewing and “silent” signals (like saves) often correlate with longer distribution windows.

⚡

Engagement velocity

Early engagement speed (timing) tends to matter more than total engagement volume—especially in the first hour.
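A minimal sketch of what "engagement velocity" could mean in code: events per minute within the first hour after posting. The 60-minute window and the `first_hour_velocity` helper name are assumptions for illustration, not the site's measured definition.

```python
from datetime import datetime, timedelta

def first_hour_velocity(posted_at, event_times):
    """Engagement events per minute within 60 minutes of posting.

    Hypothetical definition: the window length is an assumption."""
    window_end = posted_at + timedelta(hours=1)
    early = [t for t in event_times if posted_at <= t < window_end]
    return len(early) / 60.0

posted = datetime(2024, 5, 1, 12, 0)
# Invented engagement timestamps (minutes after posting).
events = [posted + timedelta(minutes=m) for m in (1, 2, 5, 9, 30, 70, 120)]

print(first_hour_velocity(posted, events))  # 5 events in the window → ~0.083/min
```

Two videos with identical totals can have very different velocities; under this framing, the one front-loaded into the first hour is the stronger early signal.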

PLACEHOLDER — “Early signal vs distribution” plot: a scatter plot with x = first 30-minute retention or engagement velocity and y = 72-hour views, including a trend line and confidence band.


Why trust this research?

🧪

Controlled experiments

When possible, we test the same content across multiple accounts, timing windows, and engagement conditions.

📈

Measured effects

Hacks include measurable outcomes: effect ranges, confidence levels, and when the hack does not apply.

🧭

Neutral, not salesy

No “secret sauce” claims. Just probabilities, trade-offs, and practical guidance creators can verify.

Our evaluation criteria (for service rankings)

01

Discretion Score

How well delivery patterns avoid detection-like anomalies (timing, ratios, and behavioral consistency).

02

Virality Score

Whether a boost increases the probability of organic amplification (measured, not assumed).

03

Realism Score

How closely growth matches natural distributions: delivery speed, ratio coherence, and engagement curves.

04

Quality Score

Account realism signals: age distribution, activity patterns, profile completeness, and behavioral variety.

05

Stability Score

Retention over time: drop-off rates, reversals, and long-term effects on account-level metrics.
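For illustration only, the five criteria could be combined into a single ranking score like this. The equal weights and the 0–10 scale are assumptions, not the site's actual formula:

```python
# Hypothetical composite ranking: equal weights over the five criteria.
# Weights and the 0-10 scale are illustrative assumptions.
WEIGHTS = {
    "discretion": 0.2,
    "virality": 0.2,
    "realism": 0.2,
    "quality": 0.2,
    "stability": 0.2,
}

def composite_score(scores):
    """Weighted average of the five criterion scores (0-10 each)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {"discretion": 8, "virality": 6, "realism": 7, "quality": 9, "stability": 5}
print(composite_score(example))  # → 7.0
```

In practice a ranking might weight Discretion and Stability more heavily, since failures there carry account-level risk rather than just wasted spend.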

Methodology (plain English)

TikTok’s algorithm cannot be directly observed. It can only be inferred through repeated measurement. We combine large-scale tracking with targeted tests to estimate which signals matter most, and when.

  • Signal collection: public metrics + time-based curves (minute-by-minute in the early phase)
  • Segmentation: by niche, account size, geography, and content format
  • Controlled testing: content replication & timing experiments across test accounts
  • Coherence checks: detect abnormal ratios and unnatural engagement timing
  • Confidence scoring: hacks include uncertainty, applicability, and failure modes
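The coherence-check step can be sketched as a simple outlier test on like/view ratios. The z-score cutoff of 2.5 and the sample numbers are illustrative assumptions, not measured detection thresholds:

```python
import statistics

def flag_abnormal_ratios(likes, views, z_cutoff=2.5):
    """Return True for each video whose like/view ratio is a z-score
    outlier relative to the sample. Cutoff is an illustrative assumption."""
    ratios = [l / v for l, v in zip(likes, views)]
    mu = statistics.mean(ratios)
    sigma = statistics.stdev(ratios)
    return [abs(r - mu) / sigma > z_cutoff for r in ratios]

# Invented sample: nine videos near a 5% like rate, one at 90%.
likes = [50, 48, 52, 49, 51, 50, 47, 53, 50, 900]
views = [1000] * 10
print(flag_abnormal_ratios(likes, views))
# → [False, False, False, False, False, False, False, False, False, True]
```

A production check would use robust statistics (median/MAD) and combine ratio anomalies with timing anomalies, but the idea is the same: natural engagement clusters, purchased engagement often does not.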
PLACEHOLDER — “Correlation vs Tested” breakdown: a simple stacked bar showing the share of hacks based on correlation-only evidence vs controlled tests vs mixed evidence.


The goal is not to claim certainty. The goal is to give creators the highest-probability path, with clear trade-offs.

Explore the research

Start with data-backed growth hacks (practical, creator-focused), or review service rankings (risk-aware, signal-based).