Mauro89
Super User

Delta table time travel: older versions not readable

Hi folks,

 

I run into a problem when I try to read a specific version of a Delta table with the following command:

 

ver = 9

deltaTable = "Tables/xyz"

spark.sql(f"SELECT * FROM delta.`{deltaTable}` VERSION AS OF {int(ver)}")
 
Error message: AnalysisException: Cannot time travel Delta table to version 9. Available versions: [10, 31].
 
But if I go to the related Lakehouse and check the "_delta_log" folder, I see versions prior to 10.
"OPTIMIZE" has been run, with no effect on this issue; "VACUUM" has never been run on this table.
 
Would appreciate some ideas about it.
 
Thanks and regards!
 
1 ACCEPTED SOLUTION


6 REPLIES
YassineHachguer
Regular Visitor

It could be related to log retention or checkpointing. Even if older files exist in _delta_log, they may not be usable for time travel if the checkpoint or metadata chain starts at version 10. You might want to check the table history with DESCRIBE HISTORY to confirm which versions are actually available.

V-yubandi-msft
Community Support

Hi @Mauro89 ,

Thank you for reaching out to the Microsoft Fabric Community. The error means that version 10 is the earliest version you can use for time travel. Although you might see older files in the _delta_log folder, Delta Lake can only restore a table version if the complete transaction log chain is present. If any earlier logs are missing, outside the retention window, or not part of a valid checkpoint chain, Spark won’t let you time travel to those versions.

To check which versions are available, run

DESCRIBE HISTORY delta.`Tables/xyz`

This will list the versions you can use for time travel.

 

Regards,

Yugandhar.
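To illustrate the point above in plain Python (a sketch, not the actual Spark API): DESCRIBE HISTORY can list versions that are no longer time-travelable, so only versions at or above the earliest reconstructable version (here assumed, for illustration, to be 10) are actually usable. The version range 3 through 31 is an assumption chosen to mirror this thread.

```python
def usable_versions(history_versions, earliest_reconstructable):
    """Illustrative only: DESCRIBE HISTORY may list versions that can no
    longer be read.  Only versions at or above the earliest version Spark
    can reconstruct (e.g. the oldest surviving checkpoint) are usable."""
    return sorted(v for v in history_versions if v >= earliest_reconstructable)

# Assumed situation matching the thread: history lists versions 3..31,
# but the earliest reconstructable version is 10.
print(usable_versions(range(3, 32), 10))  # -> [10, 11, ..., 31]
```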

Hi @V-yubandi-msft,

 

thanks for your prompt response.

First, if I just run your code without the Spark wrapper, as below, I get weird behavior: Copilot starts to execute and performs some tasks. I am pretty sure this is not intended behavior 😉

 

If I run:

spark.sql(f"DESCRIBE HISTORY delta.`{deltaTable}`").show(truncate=False)
then I also see the versions below 10. So what else could be the reason?
 
Best regards!

 

Thank you for checking and confirming that DESCRIBE HISTORY still displays versions below 10.

This suggests the issue could be related to how Delta reconstructs the table state from the transaction logs. Although earlier versions are listed in the history, time travel needs a complete and continuous transaction log chain starting from a valid checkpoint. If the checkpoint starts at version 10, or if some earlier JSON log files are missing or unreadable, Spark may not be able to reconstruct version 9.

 

It could be helpful to look at the _delta_log folder to see if a checkpoint file starts at version 10, and to confirm that the JSON log files before version 10 are available and form an unbroken sequence.

 

Thank You.
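The inspection described above can be sketched in plain Python (hypothetical filenames, not the Fabric file API): list the _delta_log entries, split them into JSON commit versions and checkpoint versions, and check the JSON sequence for gaps.

```python
import re

def inspect_delta_log(filenames):
    """Classify hypothetical _delta_log filenames into JSON commit versions
    and checkpoint versions, and report any gaps in the JSON sequence."""
    json_versions, checkpoint_versions = [], []
    for name in filenames:
        m = re.match(r"(\d{20})\.json$", name)
        if m:
            json_versions.append(int(m.group(1)))
        m = re.match(r"(\d{20})\.checkpoint(\.\d+\.\d+)?\.parquet$", name)
        if m:
            checkpoint_versions.append(int(m.group(1)))
    json_versions.sort()
    present = set(json_versions)
    gaps = ([v for v in range(json_versions[0], json_versions[-1] + 1)
             if v not in present] if json_versions else [])
    return json_versions, checkpoint_versions, gaps

# Assumed folder contents mirroring the thread: JSON commits 3..31 plus
# a checkpoint written at version 10.
files = [f"{v:020d}.json" for v in range(3, 32)]
files.append("00000000000000000010.checkpoint.parquet")
jsons, checkpoints, gaps = inspect_delta_log(files)
print(checkpoints, gaps)  # -> [10] []  (checkpoint at 10, no gaps in 3..31)
```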

Hi @V-yubandi-msft,

 

thanks for your input. And indeed there is a checkpoint at 10.

When you say "form an unbroken sequence", what do you mean by that?
Is the sequence shown in the image meant to be "broken", since there is no JSON 1 and 2?

[Screenshot: Mauro89_0-1773128556353.png — listing of the _delta_log folder]

Thanks!

 

Thank you for sharing the screenshot. When I refer to an unbroken sequence, I mean that the transaction log versions should increase consecutively without any gaps, such as 3, 4, 5, 6, etc. The sequence in your screenshot appears consistent.

 

Missing files like 00000000000000000000.json, 00000000000000000001.json, and 00000000000000000002.json are not necessarily a problem, since Delta can remove older log files based on log retention settings.

 

With a checkpoint at version 10, Spark may only be able to reconstruct the table starting from that checkpoint if earlier logs needed for version 9 are no longer available. In this case, time travel would start from version 10.

 

Thanks.
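The rule described above can be illustrated with a short sketch (plain Python, not Delta's actual implementation): a version v is readable if either the full JSON chain 0..v survives, or some checkpoint at c <= v survives together with the JSON commits c+1..v. With a checkpoint at 10 and the earliest surviving JSON commit at 3 (both assumptions mirroring this thread), version 9 fails both conditions, so time travel starts at 10.

```python
def reconstructable(v, json_versions, checkpoint_versions):
    """Sketch of Delta's snapshot rule: version v is readable if the full
    JSON chain 0..v survives, or a checkpoint c <= v survives along with
    the JSON commits c+1..v."""
    present = set(json_versions)
    if all(i in present for i in range(v + 1)):       # full chain from 0
        return True
    return any(c <= v and all(i in present for i in range(c + 1, v + 1))
               for c in checkpoint_versions)          # checkpoint + tail

jsons = range(3, 32)   # assumed: earliest surviving commit is 3
checkpoints = [10]     # assumed: checkpoint written at version 10
print([v for v in (9, 10, 31) if reconstructable(v, jsons, checkpoints)])
# -> [10, 31], matching the error "Available versions: [10, 31]"
```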
