dbeavon3
Memorable Member

DirectLake on OL w/Import : Database consistency checks (DBCC) failed while checking the segment

I keep getting a random error when refreshing a DirectLake partition in SSMS.


Database consistency checks (DBCC) failed while checking the segment statistics.

Technical Details:
RootActivityId: db14b55e-18ff-475b-a9a6-819399000263
Date (UTC): 6/23/2025 5:38:59 PM
Database consistency checks (DBCC) failed while checking the '<oii>REDIM02 StandardFiscalDate (6755)</oii>' column.
Database consistency checks (DBCC) failed while checking the '<oii>Hidden Cardex Activity (6745)</oii>' table.
Database consistency checks (DBCC) failed while checking the '2dc8be52-94ae-4f6a-9ac0-25f0fa264830' database.
Database consistency checks (DBCC) failed while checking the '' table.
An error occurred while attempting to save the dataset (reference ID '2dc8be52-94ae-4f6a-9ac0-25f0fa264830').
Run complete



The model has Import tables as well as DirectLake tables (on OneLake).

The data is coming from a very simple DeltaTable in a very simple lakehouse.  There is no data movement whatsoever.

It is possible that it only happens the very first time I refresh a partition, then never happens again for the same model.  But my workflow involves redeploying the model, so it seems like this happens to me all the time.

 

 

Any tips would be appreciated. Here is the SSMS error (screenshot below):

[screenshot: the SSMS refresh error dialog]

 

5 REPLIES
v-achippa
Community Support

Hi @dbeavon3,

 

Thank you for reaching out to Microsoft Fabric Community.

 

This issue likely happens because the segment statistics validation fails during the first DirectLake partition refresh, when the metadata has not yet been fully cached immediately after model deployment.

So I recommend adding a small warm-up query before the refresh. Since it happens immediately after deployment, also add a 2–5 minute delay between model deployment and the partition refresh, and add retry logic to the refresh workflow, as the issue often resolves on the first retry (a rough sketch of that pattern follows below).
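For example, here is a minimal sketch of the delay-plus-retry idea, assuming the partition refresh is triggered through the Power BI enhanced refresh REST API rather than SSMS. The workspace ID, dataset ID, table/partition names, and token below are placeholders, not values from this thread:

```python
import time
import requests

# Placeholders - substitute real IDs and a valid Azure AD access token.
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"
TOKEN = "<aad-access-token>"

REFRESH_URL = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/refreshes"
)

def request_partition_refresh(table: str, partition: str) -> requests.Response:
    """Queue an enhanced refresh scoped to a single partition."""
    body = {
        "type": "Full",
        "commitMode": "transactional",
        "objects": [{"table": table, "partition": partition}],
    }
    return requests.post(
        REFRESH_URL, json=body, headers={"Authorization": f"Bearer {TOKEN}"}
    )

# 1) Give the service a few minutes after model deployment before the first refresh.
time.sleep(3 * 60)

# 2) Retry a couple of times; per the suggestion above, the DBCC failure usually
#    clears on a later attempt. (A real workflow would also poll the refresh
#    status afterwards, since the POST only queues the operation.)
for attempt in range(1, 4):
    response = request_partition_refresh("<table-name>", "<partition-name>")
    if response.status_code == 202:   # 202 Accepted = refresh request queued
        break
    print(f"Attempt {attempt} failed: {response.status_code} {response.text}")
    time.sleep(60)                    # back off before retrying
```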

 

 

If this post helps, then please consider Accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I'd truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa

This is DirectLake on OneLake, and there should be no "cache" or "SQL endpoint" involved.

The explanation doesn't make sense, given that the components you are referring to don't exist here. My understanding is that during a refresh the semantic model isn't doing much more than gathering metadata (just finding the parquet files in the DeltaTable, along with its transaction log). There should be no cache other than the one it is trying to build for its own purposes, and there are no other services involved either (like a SQL endpoint).
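For context on what that metadata amounts to: the Delta transaction log is essentially a list of the live parquet files. Here is a rough sketch of reading it directly, assuming a local mount of the lakehouse folder (the table path below is a placeholder, and checkpoint files are ignored for brevity):

```python
import json
from pathlib import Path

# Placeholder path - the _delta_log folder under the Delta table in the lakehouse.
delta_log_dir = Path("/lakehouse/default/Tables/my_table/_delta_log")

parquet_files = set()
for commit_file in sorted(delta_log_dir.glob("*.json")):
    # Each commit is newline-delimited JSON; "add"/"remove" actions track parquet files.
    for line in commit_file.read_text().splitlines():
        action = json.loads(line)
        if "add" in action:
            parquet_files.add(action["add"]["path"])
        elif "remove" in action:
            parquet_files.discard(action["remove"]["path"])

print(f"{len(parquet_files)} parquet files currently referenced by the Delta log")
```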

Hi @dbeavon3,

 

Since no external components are involved, the issue here likely comes from a timing mismatch between model deployment and the internal metadata readiness within the engine.
Adding retry logic works because the failure typically does not occur on subsequent attempts, once the engine has fully established its view of the Delta structure.
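If retries are used at all, one refinement is to retry only when the failure text matches this specific DBCC message rather than retrying blindly. A minimal sketch follows; the signature string is taken from the error posted above, and how the error text is obtained depends on how the refresh is run (e.g. the SSMS/XMLA output or the refresh execution details):

```python
DBCC_SIGNATURE = "Database consistency checks (DBCC) failed"

def is_transient_dbcc_failure(error_text: str) -> bool:
    """True when the failure looks like the segment-statistics DBCC check,
    which (per this thread) tends to succeed on a later attempt."""
    return DBCC_SIGNATURE in error_text

# Example usage with the message from the original post:
msg = "Database consistency checks (DBCC) failed while checking the segment statistics."
if is_transient_dbcc_failure(msg):
    print("Transient DBCC failure - retry the partition refresh.")
else:
    print("Different failure - surface it instead of retrying.")
```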

 

 

If this post helps, then please consider Accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I'd truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa

 

Do you have first-hand experience with "DirectLake on OL" (with or without Import)?

The reason for my question is to understand the error message. If something is called "inconsistent" then there is a comparison between one thing and another thing. Given that I'm fully refreshing the model, there should NOT be any comparison being made between one thing and another. There is only the one thing (the DeltaTables, in this case).

If you understand what comparison is being made, please let me know.

It is unsurprising that retrying an operation over and over and over again in Fabric is the solution, since that seems to be the solution to a lot of flaws in Fabric. However, sometimes developers want to avoid flailing about in that aimless way. It is better to understand the source of the bugs so that we can come up with more reliable workarounds and/or error checking.

 

If you can share the details about the comparison made by DBCC, then I think it would help customers independently find better workarounds (rather than retrying over and over). The next time I encounter this I may precede a "full" processing operation with an explicit "process-clear" operation (a rough sketch of that idea is below). The ultimate goal is to find a workflow that works the first time with none of the strange error messages.
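For reference, here is a sketch of that clear-then-full sequence expressed against the same enhanced refresh REST endpoint used earlier (in SSMS the equivalent would be a TMSL refresh of type clearValues followed by full). The IDs and table name are placeholders, and whether this actually sidesteps the DBCC check is unverified:

```python
import requests

# Placeholders - substitute real IDs, a valid token, and the table to process.
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"
TOKEN = "<aad-access-token>"

URL = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/refreshes"
)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
TARGET = [{"table": "<table-name>"}]

# Step 1: process-clear equivalent - drop any existing segment data first.
requests.post(URL, headers=HEADERS, json={"type": "ClearValues", "objects": TARGET})

# Step 2: full process of the same table. Wait for step 1 to complete before issuing
# this; a real workflow would poll the refresh status in between.
requests.post(URL, headers=HEADERS, json={"type": "Full", "objects": TARGET})
```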

GilbertQ
Super User

Hi @dbeavon3 

 

What happens if you recreate the Delta table from scratch and then refresh it into your semantic model again?
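For instance, a quick sketch of rebuilding the table from a Fabric notebook (the source query and target table name below are placeholders):

```python
from pyspark.sql import SparkSession

# In a Fabric notebook a Spark session already exists; getOrCreate() just reuses it.
spark = SparkSession.builder.getOrCreate()

df = spark.sql("SELECT * FROM staging_source_table")   # placeholder source query

(df.write
   .format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")   # rewrite the table's data and schema from scratch
   .saveAsTable("my_table"))            # placeholder target table in the lakehouse
```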





Did I answer your question? Mark my post as a solution!

Proud to be a Super User!






