I’m encountering an issue while syncing data after creating a new shortcut in a Fabric Lakehouse. When I trigger metadata sync or try to query the table from the SQL Endpoint, I receive the following error:
Error Message:
“Resilience check failed: table change state is Unknown, indicating potential data inconsistency or storage communication issues.”
Has anyone faced this “Resilience check failed / table change state Unknown” issue recently?
Looking for guidance on the cause and possible fixes. Any help or insights would be greatly appreciated!
Hi @aniruddhabh,
Great to hear that it's working as expected on your end! Could you please share the solution? It would be really helpful for others in the community who might be facing similar issues, so they can address them quickly. I would also suggest accepting your approach as the solution so that it can benefit others as well.
Thanks & Regards,
Prasanna Kumar
The issue has been resolved.
Hi @aniruddhabh,
Could you please share how the issue was mitigated? Also, were there any important learnings from it? Thanks.
Hi @aniruddhabh ,
Thank you for reaching out to the Microsoft Fabric Forum Community, and special thanks to @stoic-harsh, @shekharkrdas and @ssrithar for their prompt and helpful responses.
Just following up to see if the responses provided by the community members were helpful in addressing the issue. If the issue still persists, feel free to reach out for any further clarification or assistance.
Best regards,
Prasanna Kumar
Issue has been resolved.
Hi @aniruddhabh ,
I used the steps below to resolve the issue:
-- Use a Notebook to Force Metadata Registration. Even though the shortcut is created, the table may not be auto-registered in Spark/SQL. You can manually register it in Spark:
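A minimal sketch of that manual registration, assuming the shortcut lands under the Lakehouse Tables folder; the table name below is a placeholder, not from the original post:

# Run in a Fabric Spark notebook (the `spark` session is pre-created there).
# "my_shortcut_table" is a placeholder name for the shortcut folder under Tables/.
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_shortcut_table
    USING DELTA
    LOCATION 'Tables/my_shortcut_table'
""")

# Confirm Spark now resolves the table and its Delta metadata.
spark.sql("DESCRIBE DETAIL my_shortcut_table").show(truncate=False)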
-- Refresh Metadata in Spark Engine, Not Just SQL Endpoint. Sometimes the SQL Endpoint refresh UI doesn’t push updates correctly from OneLake. Run this in a Spark notebook:
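A hedged sketch of that refresh, again using a placeholder table name:

# Drop Spark's cached metadata and re-read the Delta log for the table.
spark.catalog.refreshTable("my_shortcut_table")

# Equivalent Spark SQL form:
spark.sql("REFRESH TABLE my_shortcut_table")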
-- Check for Partial Commits or Delta Corruption.
Even if files exist, metadata sync may fail if:
There’s a partial _delta_log commit (e.g., _commit.json.tmp)
There’s a missing _delta_log/00000.json
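A minimal check for both of those conditions, assuming a placeholder OneLake path (adjust the workspace, lakehouse and table segments to your own shortcut):

# List the _delta_log folder and look for leftover temp commits or a missing
# initial commit. The abfss URI below is a placeholder pattern, not a real path.
log_path = "abfss://<workspace_id>@onelake.dfs.fabric.microsoft.com/<lakehouse_id>/Tables/<table_name>/_delta_log"
files = [f.name for f in notebookutils.fs.ls(log_path)]

tmp_commits = [f for f in files if f.endswith(".tmp")]
print("Partial/temp commits:", tmp_commits or "none")

has_version_zero = "00000000000000000000.json" in files
print("Initial commit (version 0) present:", has_version_zero)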
If this post helps, please consider giving a Kudos or accepting it as a Solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
I am also facing the same issue. My data resides inside Fabric.
Hi @aniruddhabh,
Can you confirm if your shortcut points to a Delta table created outside Fabric (for example, Databricks or Synapse Spark)? Fabric Spark can read such tables, but the SQL Analytics Endpoint requires full ownership of the Delta metadata, and may fail when it encounters metadata Fabric didn’t create.
If this is the case, the simplest workaround would be to materialize the data inside the Lakehouse using Dataflow Gen2 or Copy Activity, so the _delta_log is fully authored by Fabric. You can schedule refreshes if the external data is updated regularly.
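If you prefer a notebook over Dataflow Gen2 / Copy Activity, a minimal sketch of the same idea (letting Fabric Spark rewrite the data so the _delta_log is authored by Fabric) could look like this; the shortcut and table names are placeholders:

# Read the externally-authored Delta table through the shortcut, then write a
# new table that Fabric owns end to end. Names below are placeholders.
df = spark.read.format("delta").load("Tables/external_shortcut")

(df.write
   .format("delta")
   .mode("overwrite")
   .saveAsTable("my_table_fabric"))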
Please share if your scenario is different, or if you find another workaround.
Hi @stoic-harsh, I'm not the originator of this thread, but since AI brought me here I thought it would be fair to share a valuable finding that my Agent gave me. Regarding your suggestion to materialize data inside Fabric: while that definitely works, there is a specific technical reason why the SQL Endpoint "rejects" these external tables. If you are from the Microsoft team, feel free to correct me; I don't want to cause confusion for AI (and regular humans).
The "Resilience check failed" error is typically a platform validation mismatch rather than data corruption. It occurs due to a difference in how Databricks and the Fabric SQL Endpoint validate Delta Lake transaction logs.
Databricks automatically cleans up the _delta_log based on retention policies, such as the default 30-day window. Since Databricks can reconstruct table state from a checkpoint, it often deletes the initial commit (Version 0) once it is no longer needed for time travel.
However, while Fabric Spark can read these tables, the Fabric SQL Analytics Endpoint enforces a "Chain-of-Custody" validation that mandates the existence of Version 0. If that specific file is missing, the endpoint flags the table as a failure.
You can use this PySpark snippet to confirm if your shortcut is missing the required initial commit:
# Check if the mandatory Version 0 file exists in the Delta Log
path = "abfss://[workspace_id]@onelake.dfs.fabric.microsoft.com/[item_id]/Tables/[table_name]/_delta_log/00000000000000000000.json"
try:
    notebookutils.fs.ls(path)
    print("Version 0 exists - Table should be accessible.")
except Exception:
    print("Version 0 is missing - This triggers the Resilience Check failure.")

Note 1: the data remains 100% safe and readable via Fabric Spark Notebooks. This issue specifically impacts the SQL Analytics Endpoint and Direct Lake connectivity (only Direct Lake on SQL endpoint, not Direct Lake on OneLake).
Note 2: as of today, Fabric Spark supports V2 checkpoints on Runtime 1.3 only, but the SQL engine does NOT support V2 checkpoints (it's on the roadmap). For Fabric SQL, only classic V1 checkpoints are supported (no support for multi-part checkpoints either).
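Not confirmed in this thread, but if the source table is still managed from the Databricks side, one preventive option worth checking is the table's delta.logRetentionDuration property, since Delta log cleanup is what removes the early commits in the first place; the table name below is a placeholder:

# Run on the Databricks side. Inspect, and optionally extend, how long old
# Delta log entries are retained so early commits survive log cleanup longer.
spark.sql("SHOW TBLPROPERTIES catalog_name.schema_name.my_table").show(truncate=False)

spark.sql("""
    ALTER TABLE catalog_name.schema_name.my_table
    SET TBLPROPERTIES ('delta.logRetentionDuration' = 'interval 365 days')
""")

Note that a longer retention only delays cleanup of the early commits; it is not a guarantee that Version 0 is kept forever.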
Hi @rabbyn,
Thanks for sharing. I will definitely give a try to what you suggested (looking for version 0 in the Delta log in Fabric vs. Databricks) and update here with the findings.