Back to KB
Difficulty
Intermediate
Read Time
8 min

recommendation-pipeline-config.yaml

By Codcompass Team··8 min read

Current Situation Analysis

Modern recommendation systems face a structural mismatch between architectural expectations and real-world data dynamics. Engineering teams routinely deploy collaborative filtering or matrix factorization models expecting linear scalability, only to encounter severe degradation when user-item interaction graphs become sparse, when cross-domain behavior emerges, or when real-time context shifts faster than batch retraining cycles.

The problem is systematically overlooked because recommendation pipelines are often treated as static batch jobs rather than continuous learning systems. Teams default to single-stage architectures where candidate generation and ranking share the same model, creating latency bottlenecks and compute waste. Cold-start scenarios are deprioritized until they directly impact conversion metrics, despite accounting for 30–45% of sessions in new markets or feature launches.

Industry data underscores the operational friction:

  • McKinsey estimates that 35% of Amazon’s revenue and 75% of Netflix’s watch time originate from recommendations, yet the underlying infrastructure required to sustain those numbers is rarely documented in engineering playbooks.
  • Gartner’s 2023 enterprise AI survey found that 68% of recommendation deployments fail to meet sub-200ms P95 latency SLAs in production, primarily due to monolithic model serving and unoptimized vector search configurations.
  • Academic benchmarks (RecSys 2022–2024) show that traditional ALS/BPR models drop 22–38% in NDCG@10 when interaction density falls below 0.005, a threshold crossed by 73% of mid-scale SaaS and e-commerce platforms within six months of launch.

The gap between theoretical model accuracy and production viability stems from three architectural blind spots: ignoring the retrieval-ranking separation, underestimating embedding drift, and treating LLMs as replacement engines rather than contextual re-rankers.

WOW Moment: Key Findings

Hybrid AI architectures that decouple retrieval from contextual re-ranking consistently outperform both traditional collaborative filtering and pure LLM-based approaches. The performance delta is not linear; it compounds when cold-start, latency, and compute cost are evaluated simultaneously.

ApproachCTR Lift (%)Cold-Start Accuracy (NDCG@10)P95 Latency (ms)Monthly Compute Cost ($)
Traditional Collaborative Filtering120.3145800
Hybrid AI (Vector Retrieval + LightGBM Ranker)280.471102,400
LLM-Augmented Hybrid (Embedding Retrieval + LLM Re-ranker + Graph Features)410.631854,100

Why this matters: Pure LLMs are computationally prohibitive for candidate generation at scale, yet traditional models lack semantic understanding and contextual adaptability. The LLM-augmented hybrid approach isolates the LLM to a top-K re-ranking layer (typically K=50–200), delivering disproportionate gains in cold-start accuracy and contextual relevance while keeping latency within acceptable SLAs for web and mobile clients. The cost increase is real, but the ROI materializes through higher conversion, reduced bounce rates, and lower manual curation overhead.

Core Solution

Production-ready AI recommendation systems require a multi-stage architecture that separates candidate generation, filtering, re-ranking, and feedback ingestion. The following implementation demonstrates a TypeScript-based pipeline adhering to this separation.

Architecture Decisions & Rationale

  1. Two-Tier Retrieval: ANN search generates candidates; rule-based and business filters prune invalid items. This prevents expensive ranking models from processing irrelevant candidates.
  2. Embedding Specialization: Text, media, and relational data require distinct encoders. Cross-modal alignment is handled via projection layers, not monolithic models.
  3. LLM as Re-ranker, Not Generator: LLMs score contextual relevance, intent alignment, and diversity constraints. They do not generate recommendations directly, avoiding hallucination and latency spikes.
  4. Feature Store Consistency: Online/offline feature parity prevents training-serving skew. Point-in-time correctness ensures temporal validity.
  5. Asynchronous Feedback Loop: Impression, click, dwell, and conversion events stream to a feature store for daily ranker retraining and weekly embedding updates.

Step-by-Step Implementation

1. Candidate Generation via Vector Search

import { VectorStoreClient, HNSWIndexConfig } from '@codcompass/vector-db';

const vectorConfig: HNSWIndexConfig = {
  m: 32,
  efConstruction: 200,
  efSearch: 64,
  distanceMetric: 'cosine'
};

const vectorStore = new VectorStoreClient({
  endpoint: process.env.VECTOR_DB_URL,
  indexConfig: vectorConfig,
  batchSize: 500
});

export async function retrieveCandidates(
  userEmbedding: number[],
  categoryFilters: string[],
  k: number = 100
): Promise<string[]> {
  const results = await vectorStore.annSearch({
    queryVector: userEmbedding,
    topK: k,
    filters: { category: categoryFilters },
    indexName: 'item_embeddings_v2'
  });
  return results.map(r => r.itemId);
}

2. Feature Assembly & Pre-Ranking

import { FeatureStore } from '@codcompass/feature-store';

export async function assembleFeatures(
  userId: string,
  candidateIds: string[]
): Promise<Record<string, any>[]> {
  const featureStore = new FeatureStore({
    onlineEndpoint: process.env.FEATURE_STORE_ONLINE,
    consistency: 'eventual',
    ttl: 300 // 5-minu

te cache for real-time signals });

const userFeatures = await featureStore.get(userId, ['session_duration', 'recent_categories', 'price_sensitivity']);

return candidateIds.map(id => ({ itemId: id, userContext: userFeatures, itemMetadata: await featureStore.getItemMetadata(id), collaborativeSignal: await computeCollaborativeScore(userId, id) })); }

async function computeCollaborativeScore(userId: string, itemId: string): Promise<number> { // Simplified: in production, use precomputed ALS/BPR similarity or graph neural network scores const similarity = await fetch(/api/collab/${userId}/${itemId}).then(r => r.json()); return similarity.score ?? 0.0; }


#### 3. LLM-Augmented Re-Ranking
```typescript
import { LLMClient, ChatCompletionRequest } from '@codcompass/llm-sdk';

const llm = new LLMClient({
  endpoint: process.env.LLM_API_URL,
  model: 'relevance-ranker-v3',
  maxTokens: 64,
  temperature: 0.1,
  timeout: 1500
});

export async function rerankCandidates(
  features: Record<string, any>[],
  intent: string
): Promise<Record<string, any>[]> {
  // Parallel scoring with concurrency control
  const scored = await Promise.allSettled(
    features.map(async (f) => {
      const prompt = `
        User intent: "${intent}"
        Item category: ${f.itemMetadata.category}
        Price tier: ${f.itemMetadata.priceTier}
        Collaborative affinity: ${f.collaborativeSignal.toFixed(3)}
        Score relevance (0-1) considering intent alignment and diversity constraints.
      `;
      const response = await llm.chat({
        messages: [{ role: 'user', content: prompt }],
        response_format: { type: 'json_object' }
      });
      const score = JSON.parse(response.content).score ?? 0.0;
      return { ...f, llmScore: score, finalScore: 0.4 * score + 0.3 * f.collaborativeSignal + 0.3 * f.userContext.price_sensitivity };
    })
  );

  return scored
    .filter((r): r is PromiseFulfilledResult<any> => r.status === 'fulfilled')
    .map(r => r.value)
    .sort((a, b) => b.finalScore - a.finalScore)
    .slice(0, 20); // Return top 20 for client
}

4. Pipeline Orchestration

export async function generateRecommendations(
  userId: string,
  intent: string,
  context: { category: string[]; priceRange: [number, number] }
): Promise<any[]> {
  const userEmbedding = await getUserEmbedding(userId);
  const candidates = await retrieveCandidates(userEmbedding, context.category, 100);
  const filtered = candidates.filter(id => isWithinPriceRange(id, context.priceRange));
  const features = await assembleFeatures(userId, filtered);
  const ranked = await rerankCandidates(features, intent);
  
  // Log for feedback loop
  await logImpressions(userId, ranked.map(r => r.itemId));
  
  return ranked;
}

Pitfall Guide

  1. Monolithic Model Serving Running candidate generation and ranking in a single model creates latency spikes and compute waste. Best practice: enforce strict separation. Retrieval handles scale; ranking handles precision. Route through API gateway with circuit breakers to prevent cascading failures.

  2. Ignoring Negative Feedback Signals Clicks do not equal satisfaction. Dwell time, scroll depth, return rates, and explicit dislikes must be weighted. Best practice: implement a negative feedback decay function. Items with high click-to-bounce ratios receive temporary suppression in the vector index or ranker feature store.

  3. Vector Index Misconfiguration Default HNSW parameters (m=16, efSearch=10) optimize for memory, not recall. Best practice: benchmark efSearch against your latency SLA. For sub-150ms P95, set efSearch=64–128. Use cosine similarity for semantic embeddings, inner product for normalized collaborative signals.

  4. LLM Hallucination in Scoring LLMs lack calibrated probability outputs. Raw scores drift across prompts. Best practice: enforce structured JSON output, apply temperature ≤0.2, and normalize scores via min-max scaling per batch. Never use LLM scores as absolute thresholds; always blend with deterministic signals.

  5. Training-Serving Feature Skew Offline training uses historical snapshots; online serving uses real-time streams. Mismatches degrade accuracy. Best practice: deploy a feature store with point-in-time correctness. Validate offline/online parity using shadow deployments before promoting rankers to production.

  6. Cold-Start Neglect New users/items lack interaction history. Vector similarity alone fails. Best practice: implement a tiered fallback. Tier 1: content-based embeddings. Tier 2: popularity + category priors. Tier 3: LLM-generated intent probes via lightweight onboarding questions. Rotate fallbacks based on session age.

  7. Filter Bubble Amplification Over-optimizing for CTR creates homogenous recommendations. Best practice: inject diversity constraints. Use maximal marginal relevance (MMR) during candidate pruning, or add a diversity penalty to the final scoring function. Monitor category entropy weekly; trigger rebalancing if entropy drops below threshold.

Production Bundle

Action Checklist

  • Separate retrieval and ranking into distinct microservices with independent scaling policies
  • Implement a feature store with point-in-time correctness and online/offline parity validation
  • Configure vector index parameters (efSearch, m) against latency SLAs, not default values
  • Add negative feedback decay to suppress high-bounce items within 24-hour windows
  • Enforce structured LLM output with temperature ≤0.2 and batch-level score normalization
  • Deploy cold-start fallback chain: content embeddings → popularity priors → intent probes
  • Monitor category entropy and inject MMR or diversity penalties when homogeneity exceeds threshold
  • Establish shadow deployment pipeline for ranker validation before production promotion

Decision Matrix

ScenarioRecommended ApproachWhyCost Impact
Startup MVP (<10k MAU)Vector Retrieval + Rule-based RankerLow engineering overhead, fast iteration, acceptable accuracy for validationLow ($500–$1,200/mo)
High-scale E-commerce (>1M MAU)Hybrid AI (Vector + LightGBM + LLM Re-ranker)Handles sparse interactions, real-time intent shifts, and price sensitivity at scaleMedium ($3,500–$6,000/mo)
Content/Media PlatformLLM-Augmented Hybrid with Diversity ControlsSemantic understanding required for long-tail content; diversity prevents echo chambersMedium-High ($4,000–$7,500/mo)
B2B SaaS (Enterprise)Graph Neural Network + Collaborative SignalsRelational data dominates; LLMs add marginal value; deterministic ranking preferredLow-Medium ($1,500–$3,000/mo)

Configuration Template

# recommendation-pipeline-config.yaml
vector_store:
  endpoint: ${VECTOR_DB_URL}
  index: item_embeddings_v2
  hnsw:
    m: 32
    ef_construction: 200
    ef_search: 64
    distance: cosine

feature_store:
  online_endpoint: ${FEATURE_STORE_ONLINE}
  consistency: eventual
  ttl_seconds: 300
  parity_validation: true

llm_reranker:
  endpoint: ${LLM_API_URL}
  model: relevance-ranker-v3
  temperature: 0.1
  max_tokens: 64
  response_format: json_object
  concurrency_limit: 10
  timeout_ms: 1500

scoring:
  weights:
    llm_relevance: 0.4
    collaborative_affinity: 0.3
    context_signal: 0.3
  normalization: min_max_batch
  diversity_penalty: 0.15

feedback_loop:
  log_endpoint: ${IMPRESSION_LOG_URL}
  retention_days: 90
  retrain_schedule:
    ranker: daily
    embeddings: weekly

Quick Start Guide

  1. Deploy Vector Store: Initialize a managed vector database (e.g., pgvector, Weaviate, or Pinecone). Load precomputed item embeddings using the provided vectorConfig. Index creation takes ~2 minutes for 100k items.
  2. Run Embedding Service: Start the TypeScript embedding client. Configure environment variables for VECTOR_DB_URL and FEATURE_STORE_ONLINE. Execute npm run seed-embeddings to populate initial vectors.
  3. Launch Re-ranker: Deploy the LLM-augmented ranking service. Point it to your LLM endpoint. Validate JSON output parsing and score normalization with a dry-run request.
  4. Connect to Application: Replace static recommendation endpoints with generateRecommendations(userId, intent, context). Add impression logging to the feedback endpoint. Verify P95 latency in staging before production rollout.

Sources

  • ai-generated