Hi everyone,
I’m facing a challenging performance issue in a SQL Database environment (Azure SQL Database / SQL Server), and I’d appreciate community insights to confirm whether this behavior is expected or if there is a deeper root cause.
🟦 Scenario
I have a stored procedure that queries a large table (~450M rows) with selective and non-selective filters.
The table has:
A clustered columnstore index
Three filtered nonclustered indexes
A datetime2 partitioning column
Multiple predicates depending on customer and region
The stored procedure receives two parameters:
@CustomerId INT,
@RegionCode VARCHAR(5)
Depending on parameter selectivity, the optimizer should choose:
Index seek → narrow execution plan for highly selective customers
Columnstore scan → batch mode for broad regional queries
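To make the scenario concrete, here is a simplified sketch of the procedure shape (table, column, and procedure names are placeholders, not my actual schema):

```sql
-- Hypothetical shape of the procedure; dbo.FactSales stands in
-- for the real ~450M-row table with a clustered columnstore index.
CREATE OR ALTER PROCEDURE dbo.GetCustomerRegionData
    @CustomerId INT,
    @RegionCode VARCHAR(5)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderDate, Amount          -- illustrative columns
    FROM dbo.FactSales
    WHERE CustomerId = @CustomerId    -- highly selective for most customers
      AND RegionCode = @RegionCode;   -- non-selective for broad regions
END;
```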
However…
🟥 Problem
Whenever AUTO_UPDATE_STATISTICS kicks in, my stored procedure suddenly regresses:
SQL Server generates a plan optimized for a non-selective parameter.
That plan becomes cached.
All subsequent executions with selective parameters fall back to a full columnstore scan, even though a targeted nonclustered index is available.
CPU usage spikes by 10×.
Query duration increases from ~80 ms to ~18 seconds.
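I have been confirming the regression with a query along these lines against the Query Store catalog views (the procedure name is a placeholder; assumes Query Store is enabled):

```sql
-- Compare cached plans and their runtime stats for the procedure,
-- to see the regressed plan's avg_duration/avg_cpu_time side by side.
SELECT TOP 20
       q.query_id,
       p.plan_id,
       rs.count_executions,
       rs.avg_cpu_time,
       rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
WHERE q.object_id = OBJECT_ID('dbo.GetCustomerRegionData')  -- placeholder name
ORDER BY rs.avg_duration DESC;
```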
Attempts so far:
OPTION (RECOMPILE): fixes the regression, but is too expensive at this call volume
OPTIMIZE FOR UNKNOWN: produces a generic but still suboptimal plan
Forced plan via Query Store: breaks when the schema evolves
Updating statistics with FULLSCAN: temporary improvement only
Creating additional filtered indexes: no impact on plan stability
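For reference, these are the variants I tried (query body abbreviated; the statistics target is the same placeholder table as above):

```sql
-- Per-execution compile: correct plans, but compile CPU is too high
SELECT OrderDate, Amount
FROM dbo.FactSales
WHERE CustomerId = @CustomerId AND RegionCode = @RegionCode
OPTION (RECOMPILE);

-- Density-vector plan: stable but suboptimal for selective customers
SELECT OrderDate, Amount
FROM dbo.FactSales
WHERE CustomerId = @CustomerId AND RegionCode = @RegionCode
OPTION (OPTIMIZE FOR UNKNOWN);

-- Full-scan statistics refresh: helps only until the next auto-update
UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;
```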
This appears to be classic parameter sniffing, but much more extreme due to columnstore interactions.
❓ Questions
Is this pattern expected when columnstore indexes are combined with parameter-sensitive predicates?
Is there a best practice for eliminating parameter sniffing without forcing RECOMPILE on a high-throughput stored procedure?
Would splitting the stored procedure into two versions (Selective vs Broad) actually help, or does Query Store override that?
Is this a known issue related to recent engine changes in Azure SQL Database?
Should I consider converting the filtered nonclustered indexes into aligned or partitioned versions?
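Regarding question 3, the split I have in mind is a thin router that dispatches to two inner procedures so each gets its own cached plan. A rough sketch, where dbo.HighVolumeCustomers is a hypothetical lookup table classifying customers as broad vs. selective:

```sql
-- Router procedure: each inner procedure compiles and caches a plan
-- suited to its own selectivity profile. All names are placeholders.
CREATE OR ALTER PROCEDURE dbo.GetCustomerRegionData_Router
    @CustomerId INT,
    @RegionCode VARCHAR(5)
AS
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.HighVolumeCustomers
               WHERE CustomerId = @CustomerId)
        EXEC dbo.GetCustomerRegionData_Broad @CustomerId, @RegionCode;
    ELSE
        EXEC dbo.GetCustomerRegionData_Selective @CustomerId, @RegionCode;
END;
```

My question is whether this pattern still helps here, or whether a Query Store forced plan on the outer statement would defeat the separation.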
Any guidance would be greatly appreciated — this issue is impacting production workloads.
Thanks in advance!
Hi @SavioFerraz,
Thank you for reaching out to the Microsoft Fabric Community Forum.
This scenario needs a deeper review by the Azure SQL engine team. Please submit this case in the Azure SQL Q&A forum under the Azure SQL Database category: Azure SQL Database - Microsoft Q&A. Raising a thread there will let the appropriate experts provide accurate guidance.
Thank you for using the Microsoft Fabric Community Forum.