Diving Deep into TikTok's Recommendation Engine

A deep dive into Monolith, the video recommendation repo and paper produced by ByteDance


Resources


Background & Prerequisites — What You Need to Know Before Reading This Blog

The Monolith paper sits at the intersection of recommendation systems, real-time ML serving, and large-scale distributed systems. Below are the foundational topics you need to understand.


1. Recommendation System Fundamentals

Why: Monolith is a production recommendation system — you need to understand the landscape it operates in.

- Collaborative Filtering (CF) — User-based and item-based CF. Matrix factorization (SVD, ALS, NMF) decomposes user-item interaction matrices into latent factors. Sparse data is the core challenge.
- Content-Based Filtering — Uses item features (video tags, duration, category) and user features (demographics, watch history) to recommend. No cold start for items, but limited diversity.
- Deep Learning for Reco — Neural collaborative filtering (NCF), two-tower models (user encoder + item encoder), attention mechanisms (transformers) for sequential recommendation.
- Ranking vs Retrieval — Two-stage systems: retrieval (candidate generation from millions → hundreds using approximate nearest neighbors), then ranking (score hundreds → top-K using a more complex model).
- Multi-task learning — Predicting multiple objectives simultaneously: click-through rate (CTR), watch time, likes, shares, follows. Monolith handles these jointly.
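The retrieval-then-ranking pipeline above can be sketched in a few lines. This is a toy illustration of the two-stage idea only — the embeddings, sizes, and the "freshness" feature are all made up, and production retrieval would use approximate nearest neighbors rather than a full scan:

```python
import numpy as np

# Toy two-stage recommender: retrieval narrows millions of items to a few
# hundred candidates, then a (here: stubbed) ranker picks the top-K.
rng = np.random.default_rng(0)

n_items, dim = 10_000, 32
item_emb = rng.normal(size=(n_items, dim))   # item tower output (precomputed)
user_emb = rng.normal(size=dim)              # user tower output at request time

# Stage 1: retrieval — dot-product scores, keep 200 candidates.
scores = item_emb @ user_emb
candidates = np.argpartition(-scores, 200)[:200]

# Stage 2: ranking — a heavier model would rescore just the candidates;
# here stubbed as the same dot product plus a hypothetical "freshness" signal.
freshness = rng.uniform(size=n_items)
rank_scores = item_emb[candidates] @ user_emb + 0.1 * freshness[candidates]
top_k = candidates[np.argsort(-rank_scores)[:10]]
print(top_k.shape)  # (10,)
```

The key point is the asymmetry: stage 1 must be cheap enough to scan the whole catalog, while stage 2 can afford a richer model because it only sees a few hundred items.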

2. Embedding Tables & Feature Engineering

Why: Monolith's key contribution is around real-time embedding table updates — you need to understand what embedding tables are.

- Sparse features — Categorical features with high cardinality: user_id (billions), video_id (millions), hashtags, device_type. Represented as one-hot or multi-hot vectors.
- Embedding lookup — Each sparse feature ID maps to a dense vector (e.g., 64 or 128 dimensions) in an embedding table. The table is a matrix of shape (num_unique_ids × embedding_dim).
- Collision & hashing — With billions of IDs, full embedding tables are impractical. Hash-based tricks (feature hashing, quotient-remainder trick) reduce memory at the cost of collisions.
- Dense features — Numerical features like watch duration, scroll speed, time of day. Typically normalized and concatenated with embeddings.
- Feature interaction — Cross-features (user_id × video_category), factorization machines, and DeepFM combine sparse and dense signals.
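A minimal sketch of the lookup-and-concatenate pattern, assuming made-up sizes and a simple multiplicative hash for bucketing (the exact hashing scheme and feature names are illustrative, not from the paper):

```python
import numpy as np

# Hashed embedding lookup: billions of raw ids share num_buckets table rows,
# so distinct ids can collide on the same embedding — the quality loss
# that Monolith's collisionless table is designed to avoid.
num_buckets, emb_dim = 1_000, 16
table = np.random.default_rng(0).normal(size=(num_buckets, emb_dim))

def lookup(feature_id: int) -> np.ndarray:
    # Deterministic multiplicative hash into a fixed number of buckets.
    return table[(feature_id * 2654435761) % num_buckets]

user_vec = lookup(123456789)        # user_id embedding
video_vec = lookup(987654321)       # video_id embedding
dense = np.array([0.8, 0.1])        # e.g. normalized watch duration, scroll speed
model_input = np.concatenate([user_vec, video_vec, dense])
print(model_input.shape)  # (34,)
```

Note that two unrelated ids landing in the same bucket would silently share (and co-train) one vector — invisible at this level, but costly at billions of ids.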

3. Real-Time Training vs Batch Training

Why: Monolith's core innovation is enabling real-time (online) training for recommendation models, as opposed to traditional batch training.

- Batch training — Collect user interactions over hours/days → train model offline → deploy updated model. Introduces staleness: the model doesn't reflect recent trends or viral content.
- Online/real-time training — Continuously update model parameters as new interactions stream in. Captures trending content, breaking news, and shifting user preferences immediately.
- Challenges of online training — Parameter consistency (multiple workers updating simultaneously), feature distribution shift, catastrophic forgetting, training-serving skew.
- Why it matters for TikTok — Short-video feeds are highly dynamic. A video can go viral in minutes. Batch-trained models miss these signals; real-time training captures them.
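The online-training idea reduces to: update weights after every interaction instead of after an offline batch job. A minimal sketch with a logistic regression and a simulated "sudden trend" — the data, learning rate, and trend signal are all invented for illustration:

```python
import numpy as np

# Online SGD on a click stream: the model adapts as interactions arrive,
# rather than waiting for an hours-later batch retrain.
rng = np.random.default_rng(0)
dim, lr = 8, 0.1
w = np.zeros(dim)

def sgd_step(w, x, clicked):
    p = 1.0 / (1.0 + np.exp(-w @ x))     # predicted click probability
    return w - lr * (p - clicked) * x    # per-example log-loss gradient step

# Simulate a stream where feature 0 suddenly starts driving clicks —
# the kind of "viral in minutes" signal a stale batch model would miss.
for _ in range(500):
    x = rng.normal(size=dim)
    clicked = float(x[0] > 0)            # new trend: feature 0 predicts clicks
    w = sgd_step(w, x, clicked)

print(w[0])  # the trending feature's weight has grown large
```

The hard parts listed above (consistency, staleness, forgetting) only appear once many workers run this loop concurrently against shared parameters — which is exactly what the parameter server section covers next.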

4. Parameter Server Architecture

Why: Monolith builds on the parameter server paradigm for distributed ML training.

- What is a parameter server — A distributed system where model parameters are sharded across server nodes. Workers pull parameters, compute gradients on data shards, and push gradients back. The server aggregates and updates.
- PS vs AllReduce — AllReduce (used in dense models like CNNs) synchronizes all workers. PS is better for sparse models (recommendation) because workers only need a subset of parameters (the embeddings for IDs in their data batch).
- Synchronous vs Asynchronous updates — Sync: all workers finish before any update (consistent but slow). Async: workers update independently (faster but introduces gradient staleness). Monolith uses async.
- Existing PS frameworks — TensorFlow's ParameterServerStrategy, BytePS, PServer. Monolith extends TensorFlow's PS with a custom collisionless hash table.
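The pull/compute/push loop can be sketched in a single process. This is a simulation of the pattern, not real distributed code: the "server" is a dict, there is no sharding or concurrency, and the gradient is a stand-in:

```python
import numpy as np

# Parameter-server pattern for sparse models: a worker pulls only the
# embedding rows its batch touches, computes gradients, and pushes them back.
rng = np.random.default_rng(0)
emb_dim, lr = 4, 0.5

ps = {}  # "server" state: id -> embedding row (sharded across machines in reality)

def pull(ids):
    # Lazily create rows, so the table grows as new ids appear.
    return {i: ps.setdefault(i, rng.normal(size=emb_dim)) for i in ids}

def push(grads):
    for i, g in grads.items():
        ps[i] = ps[i] - lr * g  # server applies the update

# One worker step: the batch touches only ids {3, 7}, never the full table —
# this sparsity is why PS beats AllReduce for recommendation models.
batch_ids = [3, 7]
local = pull(batch_ids)
grads = {i: 0.1 * local[i] for i in batch_ids}  # toy gradient
push(grads)

print(sorted(ps))  # only the touched ids exist on the server: [3, 7]
```

In the async setting, many workers run this loop without waiting for each other, so a push may land on parameters that have already moved — that is the gradient staleness mentioned above.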

5. Monolith's Key Contributions

Why: Understanding what makes the paper novel.

- Collisionless hash table — Unlike standard approaches that use fixed-size embedding tables with hash collisions, Monolith uses a dynamic, collision-free hash table. New IDs get their own embedding; old/inactive IDs are expired. Reduces quality loss from collisions.
- Real-time training pipeline — Integrates online training with serving. The model in production is continuously updated with fresh user interactions. The paper shows this significantly improves CTR.
- Fault tolerance — Mechanisms for checkpointing embedding tables, recovering from worker/server failures without losing training progress.
- Expiry and filtering — Old embeddings (for IDs not seen recently) are evicted — critical for memory management when dealing with billions of IDs.
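The collisionless-table-plus-expiry idea can be captured in a short sketch. This is my simplified reading of the design, not Monolith's actual data structure (which is a concurrent cuckoo hashmap inside TensorFlow); the class, `ttl` parameter, and eviction rule are illustrative:

```python
import numpy as np

# Dynamic, collision-free embedding table: every raw id keys its own row
# (no shared buckets), and rows unseen for `ttl` steps are evicted.
class DynamicEmbeddingTable:
    def __init__(self, dim, ttl):
        self.dim, self.ttl = dim, ttl
        self.rows = {}       # raw id -> embedding; no hashing, no collisions
        self.last_seen = {}  # raw id -> step of last access
        self.rng = np.random.default_rng(0)

    def lookup(self, fid, step):
        if fid not in self.rows:            # a new id gets its own fresh row
            self.rows[fid] = self.rng.normal(size=self.dim)
        self.last_seen[fid] = step
        return self.rows[fid]

    def expire(self, step):
        # Evict ids not seen within ttl steps — bounds memory at billions of ids.
        stale = [f for f, s in self.last_seen.items() if step - s > self.ttl]
        for f in stale:
            del self.rows[f], self.last_seen[f]

table = DynamicEmbeddingTable(dim=8, ttl=100)
table.lookup(111, step=0)
table.lookup(222, step=150)
table.expire(step=150)        # id 111 is stale and gets evicted
print(sorted(table.rows))     # [222]
```

Compare with the hashed-bucket lookup from section 2: here two ids can never share a row, at the cost of a table whose size must be actively managed by expiry.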

6. Evaluation & Metrics for Recommendation Systems

Why: Understanding how Monolith's improvements are measured.

- AUC (Area Under ROC Curve) — Standard metric for CTR prediction. Measures the model's ability to rank positive interactions above negative ones.
- Log-loss (Cross-entropy) — Measures prediction confidence. Lower is better.
- Online A/B testing — The only reliable way to evaluate recommendation changes. The paper reports online metrics (engagement, CTR) from production experiments.
- Offline vs Online gap — Offline AUC improvements don't always translate to online gains. Real-time training helps close this gap.
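Both offline metrics are simple enough to compute from scratch, which makes their definitions concrete (the four-example dataset is invented for illustration):

```python
import numpy as np

def auc(labels, scores):
    # Probability that a random positive outranks a random negative
    # (ties counted as half) — equivalent to the area under the ROC curve.
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def log_loss(labels, probs, eps=1e-15):
    # Mean cross-entropy; clipping avoids log(0) on extreme predictions.
    p = np.clip(probs, eps, 1 - eps)
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

y = np.array([1, 0, 1, 0])
p = np.array([0.9, 0.2, 0.6, 0.4])
print(round(float(auc(y, p)), 3))       # 1.0 — every positive outranks every negative
print(round(float(log_loss(y, p)), 3))  # 0.338
```

The example also hints at why both metrics matter: AUC is already perfect here (ranking is correct), yet log-loss still penalizes the under-confident 0.6 prediction.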

7. TensorFlow Internals (Relevant Portions)

Why: Monolith is built as an extension to TensorFlow — understanding TF's architecture helps comprehend the system design.

- TF Variables & Embedding layers — tf.nn.embedding_lookup, tf.keras.layers.Embedding. Standard TF uses fixed-size Variables for embedding tables.
- TF Serving — The model serving infrastructure. Monolith modifies this to support real-time parameter updates.
- SavedModel format — How TF exports models. Monolith needs to checkpoint dynamic hash tables, not just fixed tensors.
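To see what the fixed-size assumption means, here is the numpy analogue of `tf.nn.embedding_lookup` — a row gather from a fixed-shape table (numpy is used so the sketch runs without TensorFlow installed; it is an analogue, not TF code):

```python
import numpy as np

# Numpy analogue of tf.nn.embedding_lookup: gather rows of a fixed-size
# table by id. The fixed (vocab_size, emb_dim) shape is exactly what
# Monolith's dynamic hash table replaces.
vocab_size, emb_dim = 1000, 8
table = np.random.default_rng(0).normal(size=(vocab_size, emb_dim))  # a tf.Variable in TF

ids = np.array([5, 42, 5])          # a batch of sparse feature ids
vectors = table[ids]                # ~ tf.nn.embedding_lookup(table, ids)
print(vectors.shape)  # (3, 8)
```

Because the table is a fixed tensor, standard TF checkpointing and SavedModel export handle it for free — and that is precisely what breaks once the table becomes a dynamic hash map that grows and expires entries.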


Plan to do an exercise implementing this approach for recommendation

Learning Plan from Grok

I reached out to Grok for a comprehensive learning roadmap on this topic. For a detailed plan and structured guidance, check out the following link: Grok


TODO / Remaining Work

Progress...
