dbeavon3
Memorable Member
DirectLake on OL w/Import: Database consistency checks (DBCC) failed while checking the segment

I keep getting a random error when refreshing a DirectLake partition in SSMS.


Database consistency checks (DBCC) failed while checking the segment statistics.

Technical Details:
RootActivityId: db14b55e-18ff-475b-a9a6-819399000263
Date (UTC): 6/23/2025 5:38:59 PM
Database consistency checks (DBCC) failed while checking the 'REDIM02 StandardFiscalDate (6755)' column.
Database consistency checks (DBCC) failed while checking the 'Hidden Cardex Activity (6745)' table.
Database consistency checks (DBCC) failed while checking the '2dc8be52-94ae-4f6a-9ac0-25f0fa264830' database.
Database consistency checks (DBCC) failed while checking the '' table.
An error occurred while attempting to save the dataset (reference ID '2dc8be52-94ae-4f6a-9ac0-25f0fa264830').
Run complete



The model has import tables as well as DirectLake tables (on OneLake).

The data is coming from a very simple DeltaTable in a very simple lakehouse.  There is no data movement whatsoever.

It may only happen the very first time I refresh a partition and never again for the same model, but my workflow involves redeploying the model, so it seems to happen to me all the time.

 

 

Any tips would be appreciated.  Here is the SSMS error (image)

[screenshot of the SSMS error dialog]

 


9 REPLIES
v-achippa
Community Support

Hi @dbeavon3,

 

Thank you for reaching out to Microsoft Fabric Community.

 

This issue likely happens because the segment statistics validation fails during the first DirectLake partition refresh, when metadata has not yet been fully cached immediately after model deployment.

I recommend adding a small warm-up query before the refresh. Since the failure happens immediately after deployment, add a 2–5 minute delay between model deployment and the partition refresh, and add retry logic to the refresh workflow, as the issue often resolves after the first retry.
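As a rough sketch of that retry pattern (not an official workflow), something like the Python below could be used. The workspace/dataset IDs, table and partition names are placeholders, the endpoint is the Power BI "Refresh Dataset In Group" (enhanced refresh) REST API, and it assumes you already have an AAD access token.

```python
# Hedged sketch, not an official workflow: retry a partition refresh with a
# post-deployment delay. Workspace/dataset IDs, table and partition names are
# placeholders, and `token` is an AAD access token you already obtained.
import time
import requests

WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<semantic-model-guid>"
REFRESH_URL = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/refreshes"
)

def refresh_partition(token: str, table: str, partition: str) -> requests.Response:
    # Enhanced refresh request scoped to a single partition.
    body = {"type": "full", "objects": [{"table": table, "partition": partition}]}
    return requests.post(
        REFRESH_URL,
        json=body,
        headers={"Authorization": f"Bearer {token}"},
        timeout=60,
    )

def refresh_with_retry(token: str, table: str, partition: str,
                       initial_delay_s: int = 180, retries: int = 3) -> None:
    # Wait a few minutes after deployment before the first attempt,
    # then retry with a short pause if the service rejects the request.
    time.sleep(initial_delay_s)
    for attempt in range(1, retries + 1):
        response = refresh_partition(token, table, partition)
        if response.status_code in (200, 202):
            print(f"Refresh accepted on attempt {attempt}")
            return
        print(f"Attempt {attempt} failed: {response.status_code} {response.text}")
        time.sleep(60)
    raise RuntimeError("Refresh still failing after all retries")
```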

 

 

If this post helps, please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I’d truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa

This is DirectLake on OneLake, and there should be no "cache" or "SQL endpoint".

The explanation doesn't make sense, given that the components you are referring to don't exist. My understanding is that during a refresh, the semantic model does nothing more than gather the metadata (just finding the parquet files in the DeltaTable, along with their logs). There should be no cache other than the one it is trying to build for its own purposes, and no other services involved either (like a SQL endpoint).
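To illustrate what I mean by "just gathering metadata": resolving the list of live parquet files from the Delta log is roughly the following. This is a simplified sketch of the Delta protocol (it ignores checkpoint files and assumes a local or mounted copy of the table); the table path is hypothetical.

```python
# Illustration only (not the engine's code): replay the Delta transaction log
# to find the parquet files that are currently live. Simplified sketch of the
# Delta protocol -- ignores checkpoint files; the path below is hypothetical.
import json
from pathlib import Path

def live_parquet_files(delta_table_path: str) -> set[str]:
    log_dir = Path(delta_table_path) / "_delta_log"
    live: set[str] = set()
    # Each commit is a JSON-lines file; "add" registers a parquet file,
    # "remove" retires it.
    for commit in sorted(log_dir.glob("*.json")):
        for line in commit.read_text().splitlines():
            action = json.loads(line)
            if "add" in action:
                live.add(action["add"]["path"])
            elif "remove" in action:
                live.discard(action["remove"]["path"])
    return live

print(live_parquet_files("/lakehouse/default/Tables/cardex_activity"))
```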

Hi @dbeavon3,

 

Since no external components are involved, the issue here likely comes from a timing mismatch between model deployment and internal metadata readiness within the engine.
Adding retry logic works because the failure typically does not occur on subsequent attempts, once the engine has fully established its view of the Delta structure.

 

 

If this post helps, please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I’d truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa

 

Do you have first-hand experience with "DirectLake on OL" (with or without import)?

The reason for my question is to understand the error message. If something is called "inconsistent", then there is a comparison between one thing and another. Given that I'm fully refreshing the model, there should NOT be any comparison being made between one thing and another. There is only the one thing (the DeltaTables, in this case).

If you understand what comparison is being made, please let me know.

It is unsurprising that retrying an operation over and over again in Fabric is the solution, since that seems to be the solution to a lot of flaws in Fabric. However, sometimes developers want to avoid flailing about in that aimless way. It is better to understand the source of the bugs so that we can come up with more reliable workarounds and/or error checking.

 

If you can share the details about the comparison made by DBCC, then I think it would help customers independently find better workarounds (than retrying over and over). The next time I encounter this I may precede a "full" processing operation with an explicit "process-clear" operation. The ultimate goal is to find a workflow that works the first time with none of the strange error messages.
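For reference, by an explicit "process-clear" before the full refresh I mean something like the TMSL sequence below. This is only a sketch with placeholder names, built here with Python's json module so the printed script can be pasted into an XMLA query window in SSMS.

```python
# Sketch of the intended workaround: an explicit clearValues ("process-clear")
# step followed by a full refresh, expressed as a TMSL sequence. Database and
# table names are placeholders; the table name is taken from the error text.
import json

DATABASE = "<semantic-model-name>"
TABLE = "Hidden Cardex Activity"

def clear_then_full(database: str, table: str) -> str:
    target = {"database": database, "table": table}
    script = {
        "sequence": {
            "operations": [
                {"refresh": {"type": "clearValues", "objects": [target]}},
                {"refresh": {"type": "full", "objects": [target]}},
            ]
        }
    }
    return json.dumps(script, indent=2)

print(clear_then_full(DATABASE, TABLE))
```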

Hi @dbeavon3,

 

Even during a full refresh in DirectLake mode, DBCC (Database Consistency Check) does not check the data itself; it validates the consistency between the semantic model metadata and the physical structure of the Delta table in OneLake.

This includes column names and data types, the expected segment structure, and the alignment of the physical Parquet segments with what the model expects to load.

 

The engine performs this validation even during a Process Full; it does not skip validation, and it does not rebuild blindly. It confirms that what it finds in the Delta source matches what the model expects before proceeding.
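As a rough illustration of that kind of comparison (not the engine's actual implementation), you could diff the column names in an exported model.bim against the Delta table schema, for example with the open-source deltalake package. The file and table names below are placeholders.

```python
# Rough illustration only: compare column names in an exported model.bim
# (from the workspace or Tabular Editor) with the Delta table schema, using
# the open-source `deltalake` package. Names and paths are placeholders.
import json
from deltalake import DeltaTable

def model_columns(bim_path: str, table_name: str) -> set[str]:
    with open(bim_path) as f:
        bim = json.load(f)
    for table in bim["model"]["tables"]:
        if table["name"] == table_name:
            return {col["name"] for col in table.get("columns", [])}
    return set()

def delta_columns(table_path: str) -> set[str]:
    return {field.name for field in DeltaTable(table_path).schema().fields}

missing = (model_columns("model.bim", "Hidden Cardex Activity")
           - delta_columns("/lakehouse/default/Tables/cardex_activity"))
print("Model columns with no matching Delta column:", missing)
```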

 

The DBCC failure here is likely caused by a mismatch: a column type that differs, or segment metadata that is incomplete or not fully readable. In this case the semantic model was deployed and a refresh was triggered before OneLake had fully exposed the latest _delta_log and Parquet metadata, so the engine could not validate the segments and DBCC failed.

That is why a retry often works: by the second attempt, the Delta metadata is stable and the model can validate successfully.

 

To avoid this completely, use the following workflow:

  • Run Process Clear before any Process Full; it clears any partially loaded metadata.
  • Ensure the Delta table has recently been OPTIMIZEd and VACUUMed.
  • Perform a small read on the Delta table before the refresh to trigger early metadata visibility in OneLake.
  • Add a short delay (1–2 minutes) after deployment before refreshing, to allow OneLake file and metadata visibility to stabilize.

These steps make the refresh reliable on the first attempt, without retries or inconsistencies; a notebook sketch of them follows.
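As a sketch, those pre-refresh steps could look like the following Fabric notebook cell. The table name and delay are placeholders, and OPTIMIZE/VACUUM are standard Delta maintenance commands rather than a documented fix for this specific DBCC error.

```python
# Sketch of the pre-refresh steps as a Fabric notebook cell. `spark` is the
# session a Fabric notebook provides; the table name and delay are placeholders.
import time

TABLE = "my_lakehouse.cardex_activity"  # hypothetical lakehouse table

# 1. Compact small files and clean up stale ones.
spark.sql(f"OPTIMIZE {TABLE}")
spark.sql(f"VACUUM {TABLE}")

# 2. Small read to force the Delta metadata to be resolved end to end.
spark.sql(f"SELECT * FROM {TABLE} LIMIT 1").collect()

# 3. Short delay before triggering the semantic model refresh
#    (for example via the REST call sketched earlier in this thread).
time.sleep(120)
```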

 

 

If this post helps, please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I’d truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa

I will try to avoid the bug in the way you propose, but I'm not getting my hopes up. If the problem involves the semantic model metadata (as you theorize), then all the interactions with the DeltaTable seem pointless. Remember, this is DirectLake on OneLake (not using SQL endpoints).

 

I'm guessing this bug will be fixed before the GA.  It seems pretty bad, despite the fact that repeated refresh operations will clear it out.

 

Are you aware of any way to visualize the semantic model metadata ("segment statistics") so that we can make comparisons between the "good" variations of metadata that do NOT cause DBCC problems and the "bad" variations that do?  This would help us put the bug under a microscope and see exactly what it is doing.

Hi @dbeavon3,

 

Thank you for the response; I completely understand the need to look under the hood. Unfortunately, there is currently no direct or supported way to inspect the segment statistics used by DBCC in DirectLake models.

Unlike imported models, where tools like VertiPaq Analyzer or DAX Studio can expose segment and storage metadata, DirectLake models do not yet expose this internal engine state, especially since the data remains external (Delta Lake files in OneLake) and is not materialized in memory unless queried.
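For the import-mode tables in the same model, the segment metadata those tools show comes from an Analysis Services DMV. As a sketch, a query like the one below can be pasted into DAX Studio or an SSMS MDX window; for DirectLake tables it may return little or nothing, as noted above. The table name is taken from the error text and may differ in your model.

```python
# Sketch: build the DMV query that surfaces segment-level metadata for
# import-mode tables, then paste the printed text into DAX Studio or SSMS.
def segment_dmv_query(table_name: str) -> str:
    return (
        "SELECT TABLE_ID, COLUMN_ID, SEGMENT_NUMBER, RECORDS_COUNT, USED_SIZE "
        "FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS "
        f"WHERE DIMENSION_NAME = '{table_name}'"
    )

print(segment_dmv_query("Hidden Cardex Activity"))
```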

If you are consistently hitting this issue on a specific table or structure, I recommend raising a support ticket with the full DBCC error details and the timestamp.

 

 

If this post helps, please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I’d truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa

Hi @dbeavon3,

 

As we haven't heard back from you, we wanted to kindly follow up to check whether the solution I provided worked, or whether you have raised a support ticket.
If my response resolved your issue, please mark it as "Accept as solution" and give kudos if you found it helpful.

 

Thanks and regards,

Anjan Kumar Chippa

GilbertQ
Super User

Hi @dbeavon3 

 

What happens if you recreate the Delta table from scratch and then refresh your semantic model again?





Did I answer your question? Mark my post as a solution!

Proud to be a Super User!






