
Arshi8109
New Member

Cold Start in Recommendation Models

How can we solve the cold-start user issue, even when we already send user cohort signals, without penalizing other users?

3 REPLIES
deborshi_nag
Advocate IV

Hi @Arshi8109 

 

Please use the following approach:

 

  • Global Baseline (safe, non-personalized)

    • Rank by popularity and recency trends, with diversity constraints to keep the baseline broadly unbiased.
    • Use catalog‑level controls (e.g., de-dupe, diversity by category/brand).
  • Cohort‑Aware Prior (hierarchical / Bayesian flavor)

    • Treat cohorts as priors on preferences, not hard filters.
    • Combine cohort priors with global baseline via learned weights (per cohort).
    • This avoids overfitting cohorts and preserves personalization as data accrues.
  • Content‑Based Recommendations (robust for item cold‑start too)

    • Build item embeddings from metadata (text, category, attributes) and optionally image features.
    • Recommend via nearest neighbors in embedding space; weight by cohort priors.
  • Lookalike Modeling (user embedding → nearest neighbor)

    • Map new users to similar existing users using shared signals (device, referrer, campaign, geography, time-of-day).
    • Use their top items as seed recommendations.
  • Contextual Bandits (controlled exploration for new users only)

    • Allocate a small exploration budget (e.g., 5–10%) for new users using contextual bandits (Thompson Sampling or LinUCB).
    • Keep the bandit scope restricted to the cold‑start segment; this avoids penalizing the rest (a Thompson Sampling sketch follows after this list).
  • Hybrid Model Router

    • A router decides for each request:
      • If user has ≥N events → warm-start (ALS/BPR/LTR).
      • Else → cold-start route (cohort prior + content KNN + bandit).
    • This isolation keeps warm users stable (see the router sketch after this list).
  • Graceful Handover

    • As interactions arrive, gradually shift from the cold‑start mixture to the warm‑start model via learned blending (e.g., a calibrated meta‑model); the router sketch below includes a simple decay‑based handover.
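To make the routing and blending concrete, here is a minimal, illustrative sketch. Everything in it is an assumption for illustration: the embeddings are random toy data, and score_warm, cohort_prior_scores, content_knn_scores, and cohort_weight are hypothetical stand-ins for your real ALS/BPR/LTR, cohort, and content models, not any specific library API.

```python
import numpy as np

rng = np.random.default_rng(0)
WARM_THRESHOLD = 20          # the router's "N events" cutoff

# --- Stand-in scorers: replace with your real models ---------------------
def score_warm(user_vec, item_embs):
    # Placeholder for ALS/BPR/LTR scores: dot product with a learned user vector.
    return item_embs @ user_vec

def cohort_prior_scores(cohort_centroid, item_embs):
    # Cohort preference expressed as a centroid in item-embedding space.
    return item_embs @ cohort_centroid

def content_knn_scores(seed_emb, item_embs):
    # Cosine similarity of catalog items to a seed built from metadata signals.
    norms = np.linalg.norm(item_embs, axis=1) * np.linalg.norm(seed_emb) + 1e-9
    return (item_embs @ seed_emb) / norms

def cohort_weight(n_events, half_life=10.0):
    # Graceful handover: cohort influence decays exponentially as events accrue.
    return float(np.exp(-n_events / half_life))

def recommend(n_events, user_vec, cohort_centroid, item_embs, top_n=10):
    if n_events >= WARM_THRESHOLD:
        # Warm route: untouched by any cold-start policy.
        scores = score_warm(user_vec, item_embs)
    else:
        # Cold route: the cohort prior is additive, not a hard filter.
        w = cohort_weight(n_events)
        scores = (w * cohort_prior_scores(cohort_centroid, item_embs)
                  + (1 - w) * content_knn_scores(user_vec, item_embs))
    return np.argsort(-scores)[:top_n]   # indices of the top-N items

# Demo with random embeddings: a user with 3 events still leans on the cohort.
items = rng.normal(size=(500, 32))
print(recommend(3, rng.normal(size=32), rng.normal(size=32), items))
```

And a toy Bernoulli Thompson Sampling loop for the budgeted-exploration step. The click rates are synthetic; a production system would use a contextual variant (Thompson Sampling with user features, or LinUCB) scoped to the cold-start segment:

```python
import numpy as np

rng = np.random.default_rng(3)

class ThompsonExplorer:
    """Bernoulli Thompson Sampling over a small exploration pool, applied
    only to the cold-start segment so warm traffic never enters this code."""
    def __init__(self, n_items):
        self.alpha = np.ones(n_items)   # Beta posterior: 1 + clicks
        self.beta = np.ones(n_items)    # Beta posterior: 1 + skips

    def pick(self):
        # Sample a plausible CTR for each item, play the current argmax.
        return int(np.argmax(rng.beta(self.alpha, self.beta)))

    def update(self, item, clicked):
        self.alpha[item] += clicked
        self.beta[item] += 1 - clicked

# Simulated cold-start traffic: the sampler concentrates on the best item.
true_ctr = np.array([0.02, 0.05, 0.03])    # hidden ground truth
bandit = ThompsonExplorer(len(true_ctr))
for _ in range(5_000):
    i = bandit.pick()
    bandit.update(i, int(rng.random() < true_ctr[i]))
print(bandit.alpha / (bandit.alpha + bandit.beta))   # learned CTR estimates
```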

 

Avoid “Penalizing Other Users”: Design Principles

  1. Segmentation-based routing: Only new users hit the exploration policy.
  2. Budgeted exploration: Cap exploration impressions (e.g., 1–2 items per top‑N).
  3. Catalog‑safe baseline: Use a robust popularity baseline that does not change due to cold‑start policy shifts.
  4. Hierarchical priors (not hard rules): Cohort influence is additive, not exclusive.
  5. Counterfactual offline evaluation: Use IPS/SNIPS to estimate a new policy’s performance from logged data without risking production stability (see the sketch after this list).
  6. Progressive personalization: Smoothly decay cohort weights as user-specific signals grow.
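For principle 5, here is a small sketch of the IPS and SNIPS estimators on synthetic logged data. It assumes you log the propensity of each shown item; all numbers below are made up for illustration:

```python
import numpy as np

def ips_snips(rewards, logging_probs, target_probs):
    """Offline value estimates of a candidate policy from logged interactions.

    rewards[i]       : observed reward (e.g., click) for the logged action
    logging_probs[i] : propensity the production policy gave that action
    target_probs[i]  : probability the candidate policy assigns the same action
    """
    w = np.asarray(target_probs, float) / np.asarray(logging_probs, float)
    r = np.asarray(rewards, float)
    ips = np.mean(w * r)                # unbiased, but high variance
    snips = np.sum(w * r) / np.sum(w)   # self-normalized: small bias, lower variance
    return ips, snips

# Synthetic logs: uniform logging over 5 items, 5% base click rate.
rng = np.random.default_rng(1)
n = 10_000
logging_p = np.full(n, 0.2)
clicks = rng.binomial(1, 0.05, size=n)
target_p = rng.uniform(0.1, 0.4, size=n)   # candidate policy's propensities
print(ips_snips(clicks, logging_p, target_p))
```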

 

Hope this helps! Please mark as a Solution or give Kudos.

Hi @Arshi8109 ,
Thanks for reaching out to the Microsoft Fabric community forum.

 

I would also like to take a moment to thank @deborshi_nag and @Chandhana_nm10 for actively participating and for the solutions you’ve been sharing in the community forum. Your contributions make a real difference.

I hope the above details help you fix the issue. If you still have any questions or need more help, feel free to reach out. We’re always here to support you.

 

 

Best Regards, 
Community Support Team  

Chandhana_nm10
New Member

To address the cold-start user issue without penalizing existing users, combine cohort signals with a layered strategy that blends lightweight personalization with robust default behaviors. Start with strong global or popularity-based priors to ensure high-quality baseline recommendations, then gradually personalize using implicit signals such as short-term behavioral data (clicks, dwell time, and session interactions), contextual features (device, time, and location), and content-based matching instead of relying solely on collaborative signals. Apply exploration–exploitation techniques, such as bandits, to safely test personalized options while protecting overall system performance. Use fallback models, caps on experimental exposure, and continuous monitoring so that weak personalization never degrades the broader user experience (a minimal sketch of such an exposure cap follows below). Over time, as the user generates richer data, smoothly transition them into the full recommendation models, achieving quick personalization without harming others.
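Here is a minimal sketch of the exposure-cap idea above: reserve at most a couple of the lowest-ranked slots in the slate for exploration items, and only for cold users, so warm traffic keeps its pure ranking. The function name and item IDs are hypothetical:

```python
def build_slate(exploit_ranked, explore_pool, is_cold_user,
                top_n=10, max_explore_slots=2):
    """Budgeted exploration: swap at most `max_explore_slots` of the
    lowest-ranked slots in the top-N for exploration items, and only
    for cold-start users, so warm traffic keeps its pure ranking."""
    slate = list(exploit_ranked[:top_n])
    if not is_cold_user:
        return slate                            # warm users: no exploration
    picks = [c for c in explore_pool if c not in slate][:max_explore_slots]
    for offset, candidate in enumerate(picks, start=1):
        slate[-offset] = candidate              # overwrite the bottom slots
    return slate

# A cold user sees up to 2 exploration items; a warm user sees none.
print(build_slate(list(range(100)), [900, 901], is_cold_user=True))
print(build_slate(list(range(100)), [900, 901], is_cold_user=False))
```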
