Hi folks,
I'm running into a problem when I try to read a specific version of a Delta table via the following command:
ver = 9
deltaTable = "Tables/xyz"
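(Editor's note: the command in the post is truncated. A minimal sketch of the kind of versioned read that produces this error, assuming the standard PySpark `versionAsOf` option; `spark` is the notebook's SparkSession and the path/version are from the post:)

```python
def read_version(spark, path, version):
    # Standard Delta Lake time-travel read in PySpark: ask the reader
    # for the table as of a specific commit version.
    return (
        spark.read.format("delta")
        .option("versionAsOf", version)
        .load(path)
    )

# In a Fabric notebook:
# df = read_version(spark, "Tables/xyz", 9)
```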
It could be related to log retention or checkpointing. Even if older files exist in _delta_log, they may not be usable for time travel if the checkpoint or metadata chain starts at version 10. You might want to check the table history with DESCRIBE HISTORY to confirm which versions are actually available.
Hi @Mauro89 ,
Thank you for reaching out to the Microsoft Fabric Community. The error means that version 10 is the earliest version you can use for time travel. Although you might see older files in the _delta_log folder, Delta Lake can only restore a table version if the complete transaction log chain is present. If any earlier logs are missing, outside the retention window, or not part of a valid checkpoint chain, Spark won’t let you time travel to those versions.
To check which versions are available, run
DESCRIBE HISTORY delta.`Tables/xyz`
This will list the versions you can use for time travel.
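(Editor's note: in a Fabric notebook this can be run through `spark.sql`. A small sketch with two helper functions, using the table path from the thread; `DESCRIBE HISTORY` and the `VERSION AS OF` clause are standard Spark SQL for Delta tables:)

```python
def describe_history_sql(table_path: str) -> str:
    # Lists the table's commit history (version, timestamp, operation, ...).
    return f"DESCRIBE HISTORY delta.`{table_path}`"

def version_as_of_sql(table_path: str, version: int) -> str:
    # SQL equivalent of the DataFrame reader's versionAsOf option.
    return f"SELECT * FROM delta.`{table_path}` VERSION AS OF {version}"

# In a notebook:
# spark.sql(describe_history_sql("Tables/xyz")).show()
# spark.sql(version_as_of_sql("Tables/xyz", 9)).show()
```

Note that `DESCRIBE HISTORY` lists logged commits, which is not always the same as the versions that can actually be reconstructed, as the rest of this thread shows.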
Regards,
Yugandhar.
Hi @V-yubandi-msft,
thanks for your prompt response.
First, if I just run your code without the Spark ("sparkisch") context as below, I get weird behavior: Copilot starts executing and performs some tasks. I'm pretty sure this is not intended behavior 😉
If I run:
Thank you for checking and confirming that DESCRIBE HISTORY still displays versions below 10.
This suggests the issue could be related to how Delta reconstructs the table state from the transaction logs. Although earlier versions are listed in the history, time travel needs a complete and continuous transaction log chain starting from a valid checkpoint. If the checkpoint starts at version 10, or if some earlier JSON log files are missing or unreadable, Spark may not be able to reconstruct version 9.
It could be helpful to look at the _delta_log folder to see if a checkpoint file starts at version 10, and to confirm that the JSON log files before version 10 are available and form an unbroken sequence.
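(Editor's note: the check described above can be sketched in plain Python. This is a hedged approximation of Delta's reconstruction rule, not the engine's actual algorithm: a version is reconstructable from a complete JSON chain starting at 0, or from a checkpoint at or below it plus every later JSON up to it. The helper names are hypothetical; the 20-digit zero-padded file names follow the Delta `_delta_log` layout:)

```python
import re

def parse_log_names(names):
    """Split a _delta_log listing into commit and checkpoint versions."""
    jsons = sorted(int(m.group(1)) for n in names
                   if (m := re.fullmatch(r"(\d{20})\.json", n)))
    checkpoints = sorted(int(m.group(1)) for n in names
                         if (m := re.fullmatch(r"(\d{20})\.checkpoint\.parquet", n)))
    return jsons, checkpoints

def is_reconstructable(version, jsons, checkpoints):
    # Rebuildable from a full JSON chain 0..version, or from a
    # checkpoint c <= version plus every JSON in c+1..version.
    js = set(jsons)
    if all(v in js for v in range(0, version + 1)):
        return True
    return any(c <= version and all(v in js for v in range(c + 1, version + 1))
               for c in checkpoints)

def earliest_time_travel_version(jsons, checkpoints):
    # Smallest version that satisfies the rule above.
    for v in sorted(set(jsons) | set(checkpoints)):
        if is_reconstructable(v, jsons, checkpoints):
            return v
    return None
```

With JSON files 3 through 15 and a checkpoint at 10 (the situation in this thread), version 9 is not reconstructable and the earliest usable version is 10.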
Thank You.
Hi @V-yubandi-msft,
thanks for your input. And indeed there is a checkpoint at 10.
When you say "form an unbroken sequence", what do you mean by that?
Is the sequence shown in the image meant to be "broken", given there is no JSON 1 and 2?
Thanks!
Thank you for sharing the screenshot. When I refer to an unbroken sequence, I mean that the transaction log versions should increase consecutively without any gaps, such as 3, 4, 5, 6, etc. The sequence in your screenshot appears consistent.
Missing files like 00000000000000000000.json, 00000000000000000001.json, and 00000000000000000002.json are not necessarily a problem, since Delta can remove older log files based on log retention settings.
With a checkpoint at version 10, Spark may only be able to reconstruct the table starting from that checkpoint if earlier logs needed for version 9 are no longer available. In this case, time travel would start from version 10.
Thanks....