Why does Dataflow Gen2 keep having inexplicable problems, and can we really use it with confidence?
In the Power Query editor, the column is a decimal number.
In the column mapping settings for writing to the Lakehouse, it is also a decimal number.
But after the write to the Lakehouse completes, the column is no longer a decimal number.
In other Dataflow Gen2 dataflows, the same kind of column is sometimes mapped to a decimal correctly. There is no discernible pattern; what should we do?
Below is another Dataflow Gen2 that behaves normally and produces the desired decimal number.
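For reference, the type that actually lands in the Lakehouse can be confirmed from a Fabric notebook. A minimal check, where the table and column names are placeholders for the real ones:

```python
# Inspect the schema of the table the dataflow wrote.
# Runs in a Fabric notebook attached to the Lakehouse, where a `spark`
# session is already available; "my_table" and "amount" are placeholders.
df = spark.read.table("my_table")
df.printSchema()                      # full schema, e.g. decimal(18,4) vs. double
print(dict(df.dtypes).get("amount"))  # or check just the suspect column
```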
Hi @yjh
Thanks for using Fabric Community. Our community thrives on open communication and collaboration, and for that to happen effectively, we encourage everyone to use English for their questions and discussions. This ensures everyone can understand and participate, regardless of their native language.
Would you be comfortable rephrasing your question in English? We will definitely try to help.
Thanks for understanding.
I have switched to English; please excuse my poor English.
Thank you!
Could you share some sample queries that could help us replicate this issue on our end? I haven't been able to replicate it with the sample queries that I use.
I simply fetch the data from the local database, keep some columns, and that's it.
I cannot replicate this within my environment. If you can reproduce it with any public data source, or with some sample data that you can share with us, please go ahead and do so.
If you are unable to provide this, please raise a support ticket so an engineer can take a closer look at your specific scenario and determine what is happening.
Thank you! We've identified the issue and have a fix for it that will be available in January of next year.
Meanwhile, is there a workaround for this issue? Decimals are pretty important for any calculations. Is this an issue with the dataflow itself? Are there other tools we could use in lieu of Dataflow Gen2 to load the data into Lakehouse tables, such as a PySpark notebook or a pipeline Copy data activity?
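Until the fix ships, a PySpark notebook is one plausible way to sidestep the Dataflow mapping, since the Delta table's schema is then pinned by the notebook itself. A minimal sketch, assuming a CSV staged under Files; the path, table name, column names, and the DecimalType(18, 4) precision/scale are all assumptions:

```python
from pyspark.sql.types import StructType, StructField, StringType, DecimalType

# Enforce the schema up front instead of letting type inference (or a
# dataflow column mapping) decide the types.
schema = StructType([
    StructField("id", StringType(), True),
    StructField("amount", DecimalType(18, 4), True),  # the column that must stay decimal
])

df = (spark.read
      .schema(schema)
      .option("header", "true")
      .csv("Files/staging/source.csv"))  # placeholder staging path

# Write the result as a Delta table in the Lakehouse.
df.write.mode("overwrite").format("delta").saveAsTable("my_table")
```

If the source is a database rather than a file, the same idea applies: cast the column explicitly (e.g. `col("amount").cast(DecimalType(18, 4))`) before `saveAsTable`, so the stored type comes from the notebook rather than from the Dataflow's column mapping.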