I'm working on a notebook with shortcuts to Dynamics 365 data stored in an ADLS Gen2 container (as a result of Synapse Link + Spark). It seems the parquet files written by Synapse Link contain legacy timestamps - at least that's how I interpret this.
Anyhow, I'm trying out the Native Execution Engine on Fabric Runtime 1.3, and according to this link, I should just be able to enable the timestamp rebase for Gluten. But even though I've enabled this in my Environment and restarted my Spark sessions in Fabric, and subsequently tried adding `SET spark.gluten.legacy.timestamp.rebase.enabled = true;` directly in the notebook in question - it doesn't work.
I'm still getting the error:
`Error Source: USER`
`Error Code: UNSUPPORTED`
`Reason: Reading legacy timestamp is not supported.`
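For reference, here is the PySpark equivalent of that `SET` statement (a sketch; `spark` is the session object Fabric notebooks provide):

```python
# PySpark equivalent of the SQL SET statement above.
spark.conf.set("spark.gluten.legacy.timestamp.rebase.enabled", "true")

# Confirm what the running session actually sees:
print(spark.conf.get("spark.gluten.legacy.timestamp.rebase.enabled"))
```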
Anyone have any ideas here?
We are following up once again regarding your query. Could you please confirm if the issue has been resolved through the support ticket with Microsoft?
If the issue has been resolved, we kindly request you to share the resolution or key insights here to help others in the community. If we don’t hear back, we’ll go ahead and close this thread.
Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.
Thank you for your understanding and participation.
I did not file a support ticket for this, but I believe the issue has been resolved by subsequent updates/patches from the team. Unless my D365 source stopped producing legacy timestamps, it works if I run this now:
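Something along these lines (a minimal sketch; the shortcut path and column name are illustrative, not my real ones):

```python
# Enable the Gluten legacy timestamp rebase, then read one of the
# Synapse Link parquet outputs through the shortcut.
spark.conf.set("spark.gluten.legacy.timestamp.rebase.enabled", "true")

df = spark.read.parquet("Files/d365/account")  # illustrative shortcut path
df.select("createdon").show(5)  # legacy timestamp column reads without the error
```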
@dpollozhani, please do raise a support ticket; this will allow a dedicated team to investigate and provide a resolution. Please include details of your setup, test results, and the impact on your workflows when submitting the ticket.
How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn
Thanks,
Prashanth Are
MS Fabric Community Support
If this post helps, then please consider accepting it as the solution to help the other members find it more quickly, and give Kudos if it helped you resolve your query.
Hi @dpollozhani ,
Thank you for the helpful response @nilendraFabric .
In addition to Nilendra's suggestion, I kindly request you to update to the latest versions of the Fabric Runtime and Spark. These versions include support for the native execution engine and the necessary timestamp handling features.
If this post helps, please give us Kudos and consider accepting it as a solution to help the other members find it more quickly.
Thank you for being a part of the Microsoft Fabric Community Forum!
Regards,
Pallavi.
I'm already using the latest Fabric runtime, 1.3. According to the documentation (blog), the timestamp handling features are not yet set by default, which is also evident in my examples.
Hello @dpollozhani
Try these as well (see the sketch after the list for one way to apply them):
spark.sql.legacy.parquet.int96RebaseModeInRead=CORRECTED
spark.sql.legacy.parquet.int96RebaseModeInWrite=CORRECTED
spark.sql.legacy.parquet.datetimeRebaseModeInRead=CORRECTED
spark.sql.legacy.parquet.datetimeRebaseModeInWrite=CORRECTED
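One way to apply them is at session level from the notebook (a sketch, assuming the built-in `spark` session):

```python
# Apply the parquet rebase confs on the current session.
for key in (
    "spark.sql.legacy.parquet.int96RebaseModeInRead",
    "spark.sql.legacy.parquet.int96RebaseModeInWrite",
    "spark.sql.legacy.parquet.datetimeRebaseModeInRead",
    "spark.sql.legacy.parquet.datetimeRebaseModeInWrite",
):
    spark.conf.set(key, "CORRECTED")
```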
Please let me know if this works.
Thanks
Here is my current configuration in the Environment:
runtime_version: '1.3'
spark_conf:
- spark.sql.session.timeZone: Europe/Copenhagen
- spark.gluten.legacy.timestamp.rebase.enabled: 'true'
- spark.sql.legacy.parquet.datetimeRebaseModeInRead: CORRECTED
- spark.sql.legacy.parquet.datetimeRebaseModeInWrite: CORRECTED
- spark.sql.legacy.parquet.int96RebaseModeInRead: CORRECTED
- spark.sql.legacy.parquet.int96RebaseModeInWrite: CORRECTED
I should note that the following
- spark.gluten.legacy.timestamp.rebase.enabled: 'true'
- spark.sql.legacy.parquet.int96RebaseModeInRead
- spark.sql.legacy.parquet.int96RebaseModeInWrite
are not recognized by the Environment UI - if that has any implication at all.
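To check whether these actually land on the session, I can print them from the notebook (sketch):

```python
# Check which confs actually reached the running session.
for key in (
    "spark.gluten.legacy.timestamp.rebase.enabled",
    "spark.sql.legacy.parquet.int96RebaseModeInRead",
    "spark.sql.legacy.parquet.int96RebaseModeInWrite",
):
    print(key, "=", spark.conf.get(key, "<not set>"))
```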
I also tried with 'LEGACY' instead of 'CORRECTED' on all the settings.
Still, the same result:
Caused by: java.lang.RuntimeException: Exception: VeloxUserError
Error Source: USER
Error Code: UNSUPPORTED
Reason: Reading legacy timestamp is not supported.
%%configure
{"conf": {"spark.gluten.legacy.timestamp.rebase.enabled": "true"}}
Give it a try
Nope, unfortunately, still the same issue.
I recently encountered the same issue. In my scenario I tried setting the configuration on the Spark session multiple times, but I still got the legacy timestamp not supported error. Then I found out that my Spark session was attached to a specific Environment, which overruled my Spark session configurations.
So please be aware of this. In short, if you do not attach an Environment and instead make the following configuration inside the Spark session, it will work. The configurations are:
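(A sketch of the session-level setup, assuming the same confs discussed earlier in this thread:)

```python
# Session-level configuration, with no Environment attached.
# Set these before the first read in the session.
spark.conf.set("spark.gluten.legacy.timestamp.rebase.enabled", "true")
spark.conf.set("spark.sql.legacy.parquet.int96RebaseModeInRead", "CORRECTED")
spark.conf.set("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED")
spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "CORRECTED")
spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "CORRECTED")
```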
As I stated in previous comments, I am using an Environment, but that's where I'm primarily trying to set the configuration (which is what you're supposed to be able to do). Secondly, I tried overriding the presets from the Environment by making a forced configuration in the notebook. None of it works.
My guess is that sessions attached to Environments are currently not working as intended, as seems to have been the general rule with Environments so far (very unstable). Perhaps some settings are not applied at all and are ignored even when set in the notebook. It could be a case where you have to force an Environment rebuild by switching runtimes back and forth (which I've had to do when I encountered issues with custom library publishing).
Next I will have to try the rebuild, and after that try running the Native Execution Engine in a notebook without an Environment.
So running the native execution engine in a notebook that is not attached to an Environment, with the configurations as above, does not work either.
Changing everything from "CORRECTED" to "LEGACY" has no effect.
I don't know if it has anything to do with the fact that the data is accessed via shortcuts (it shouldn't), or if it's just generally buggy - but I think I might just give up on this for now.