Hey Everyone,
I am following the end-to-end lakehouse tutorial (Lakehouse tutorial - Prepare and transform lakehouse data - Microsoft Fabric | Microsoft Learn).
When running the first notebook, '01 - Create Delta Tables', I get an AnalysisException while creating the dimension tables:
'AnalysisException: [DELTA_FAILED_TO_MERGE_FIELDS] Failed to merge fields 'CustomerKey' and 'CustomerKey''
How can I solve this issue?
Thanks for your help.
Hi,
the root cause of this error is a data type mismatch in the columns CustomerKey and LineageKey.
In step 3 of this tutorial (https://learn.microsoft.com/en-us/fabric/data-engineering/tutorial-build-lakehouse) you import a CSV file into the wwilakehouse via Dataflow Gen2 and create a dimension_customer table. The second transformation step converts the fields customer_key and LineageKey to bigint (int64).
Step 5 (https://learn.microsoft.com/en-us/fabric/data-engineering/tutorial-lakehouse-data-preparation) of this tutorial uses a PySpark notebook to merge CSV files into the dimension_customer table, where these columns end up as int (int32). You can compare the data types by renaming the destination table from step 3 to dimension_customer_csv and executing the dataflow again.
Resulting table from step 3 (screenshot): CustomerKey and LineageKey are bigint (int64).
Resulting table from step 5 (screenshot): CustomerKey and LineageKey are int (int32).
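If you want to verify the mismatch yourself without rerunning the dataflow twice, you can print both schemas from a notebook cell. A minimal sketch, assuming wwilakehouse is the attached default lakehouse and dimension_customer_csv is the renamed table from step 3 (spark is the session predefined in Fabric notebooks):

```python
# Compare the column types of the two tables.
df_dataflow = spark.read.table("dimension_customer_csv")  # created by the Dataflow Gen2
df_notebook = spark.read.table("dimension_customer")      # created by the PySpark notebook

df_dataflow.printSchema()  # CustomerKey / LineageKey: long (int64)
df_notebook.printSchema()  # CustomerKey / LineageKey: integer (int32)
```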
Solution:
In step 3 of the tutorial, adjust the last transformation step so that the column types match the ones the notebook writes (screenshot of the adjusted step).
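If you would rather fix this on the notebook side instead of in the dataflow, casting the two columns to long (int64) before writing should align the schemas as well. This is only a sketch under the tutorial's setup; the file path is illustrative and may differ in your lakehouse:

```python
from pyspark.sql.functions import col

# Read the raw customer CSV the way the tutorial's notebook does.
# NOTE: the path below is illustrative; point it at your raw files.
df = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("Files/wwi-raw-data/full/dimension_customer/*.csv")
)

# Cast the conflicting columns to long (int64) so they match the
# schema created by the Dataflow Gen2 in step 3.
df = (
    df.withColumn("CustomerKey", col("CustomerKey").cast("long"))
      .withColumn("LineageKey", col("LineageKey").cast("long"))
)

df.write.mode("overwrite").format("delta").saveAsTable("dimension_customer")
```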
Thanks, this fixed my problem! Microsoft Fabric needs to update their documentation for this.