luisr-optimus
Frequent Visitor

Error when adding/updating data to Lakehouse: DeltaTableLastCheckpointedVersionMissing

Hi everyone.

 

I am trying to update some data from D365 to a Lakehouse via Data Pipelines, but the run fails with the following error:

----

Error Details

Server Error.

Message: Delta table 'accountleads' with '_last_checkpoint' file references version '210' but the associated checkpoint file '00000000000000000210.checkpoint.parquet' is missing from the _delta_log directory. Please review and correct the '_last_checkpoint' file such that it conforms to the Delta specification.

Error Code: DeltaTableLastCheckpointedVersionMissing

----

 
 


 

 

Is there any fix or workaround for this?

Thanks.

1 ACCEPTED SOLUTION
rohit1991
Super User

Hi @luisr-optimus ,

This error means that your Delta Lake table (accountleads) in the Lakehouse is expecting a specific checkpoint file (00000000000000000210.checkpoint.parquet) in the _delta_log directory, but that file is missing. This usually happens if files were accidentally deleted, moved, or if there was a failed operation that didn’t complete properly.

How to fix or work around it:

  1. Check for accidental deletion or move: If you have access to a backup or recycle bin (some storage accounts support this), try to restore the missing checkpoint file to the _delta_log folder for that table.

  2. Recreate the checkpoint: If restoring isn’t possible, you might be able to trigger a new checkpoint using Spark. Any committing operation (an OPTIMIZE, a vacuum, or a rewrite of the table) advances the log, and Delta writes a fresh checkpoint once the checkpoint interval is reached.
    Example with PySpark:

    from delta.tables import DeltaTable
    deltaTable = DeltaTable.forPath(spark, "path_to_your_table")
    # Note: DeltaTable exposes no checkpoint() method in the Python API; an
    # OPTIMIZE commit advances the log, and Delta writes a new checkpoint
    # once the configured checkpoint interval is reached
    deltaTable.optimize().executeCompaction()

    If that doesn’t work, you might need to rewrite the table with a simple write operation.

  3. Delete or edit the _last_checkpoint file:
    (Use caution!) If you’re comfortable with the Delta Lake transaction log, you can edit or temporarily delete the _last_checkpoint file in the _delta_log directory. Without this file, Delta Lake rebuilds the table state from the JSON log files on the next operation. This is an advanced and potentially risky option, so make sure you have a backup first (see the sketch after this list).

  4. As a last resort, if none of the above are possible, you may need to recreate the Delta table by exporting the data, dropping the table, and reimporting it.
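
For step 3, here is a minimal sketch of inspecting and then removing the _last_checkpoint pointer from a Fabric notebook. It assumes a default lakehouse is attached (so the relative Tables/accountleads/_delta_log path resolves) and that the built-in mssparkutils is available; the backup location under Files/ is just an illustrative choice.

    # Sketch only: back up the pointer before removing it
    log_dir = "Tables/accountleads/_delta_log"

    # The pointer is a small one-line JSON file; print it to see the referenced version
    print(mssparkutils.fs.head(f"{log_dir}/_last_checkpoint"))

    # List the log directory to confirm which checkpoint parquet files actually exist
    for f in mssparkutils.fs.ls(log_dir):
        print(f.name)

    # Copy the pointer out of the table, then remove it so Delta rebuilds
    # table state from the JSON commit files on the next operation
    mssparkutils.fs.cp(f"{log_dir}/_last_checkpoint", "Files/_last_checkpoint.bak")
    mssparkutils.fs.rm(f"{log_dir}/_last_checkpoint")

On the next read or write against the table, Delta should fall back to replaying the JSON commits in _delta_log instead of starting from the missing checkpoint.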


Did it work? ✔ Give a Kudo • Mark as Solution – help others too!


3 REPLIES
v-nmadadi-msft
Community Support

Hi @luisr-optimus ,
Thanks for reaching out to the Microsoft Fabric community forum.

Most likely, the cause of the error is that an OPTIMIZE or VACUUM operation was performed on the Lakehouse, which deleted or left mismatched the log or Parquet files the Delta table needs.
If that is the case, you may need to recreate the Delta table: delete the existing table, reimport the data to restore it to a clean and stable state, and then resume updating or adding new data (a rough sketch follows below).
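
As a rough sketch of that recreation path, assuming a Fabric notebook and a DataFrame df re-extracted from D365 (the corrupted table itself may no longer be readable):

    # Sketch only: df is assumed to hold the data re-extracted from the source (D365)
    spark.sql("DROP TABLE IF EXISTS accountleads")

    # Rewrite it as a fresh managed Delta table in the lakehouse
    df.write.format("delta").mode("overwrite").saveAsTable("accountleads")

Dropping the managed table should also remove its corrupted _delta_log, so the rewrite starts from a clean transaction log.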

I hope this information helps. Please do let us know if you have any further queries.
Thank you

lbendlin
Super User

Do you know if somebody ran a VACUUM on the lakehouse during that time?
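
One way to check is to scan the table's Delta history for vacuum entries. A minimal sketch, assuming a Fabric notebook where the table is registered as accountleads (note that if the log is still corrupted, even this read may fail with the same error):

    from delta.tables import DeltaTable

    # Each vacuum leaves "VACUUM START" / "VACUUM END" entries in the history
    history = DeltaTable.forName(spark, "accountleads").history()
    history.filter("operation LIKE 'VACUUM%'") \
           .select("timestamp", "operation", "userName") \
           .show(truncate=False)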
