
IHPROLAN
Regular Visitor

Lakehouse Tutorial issues creating Delta Tables with notebook

Hey Everyone, 

 

I am following the end-to-end Lakehouse tutorial (Lakehouse tutorial - Prepare and transform lakehouse data - Microsoft Fabric | Microsoft Learn).

When running the first notebook, '01 - Create Delta Tables', I get an AnalysisException while creating the dimension tables.

How can I solve this issue?

 

'AnalysisException: [DELTA_FAILED_TO_MERGE_FIELDS] Failed to merge fields 'CustomerKey' and 'CustomerKey''


Thanks for your help.

 

1 ACCEPTED SOLUTION
GeraldGast
New Member

Hi,

 

the root cause of this error is a data type mismatch in the columns CustomerKey and LineageKey.

 

In step 3 of this tutorial (https://learn.microsoft.com/en-us/fabric/data-engineering/tutorial-build-lakehouse) you import a CSV file into the wwilakehouse via Dataflow Gen2 and create a dimension_customer table. The second transformation step casts the customer_key and LineageKey fields to bigint (int64).

 

Step 5 (https://learn.microsoft.com/en-us/fabric/data-engineering/tutorial-lakehouse-data-preparation) of the tutorial uses PySpark notebooks to merge CSV files into the dimension_customer table, and there the data types for those columns are int (int32). You can compare the two schemas by renaming the step 3 target table to dimension_customer_csv and executing the dataflow again.
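To picture why this fails, here is a minimal Python sketch (a simplified illustration only, not Delta Lake's actual implementation): Delta refuses to merge two versions of a field whose primitive types differ, which is exactly what the bigint table column and the int source column trigger.

```python
# Simplified sketch of Delta's schema-merge rule for primitive fields.
# Illustration only -- not Delta Lake's real merge code.
def merge_field(name: str, target_type: str, source_type: str) -> str:
    """Merge a target and a source field; mismatched primitive types fail."""
    if target_type != source_type:
        # Mirrors the spirit of DELTA_FAILED_TO_MERGE_FIELDS:
        # "Failed to merge fields 'CustomerKey' and 'CustomerKey'"
        raise ValueError(
            f"Failed to merge fields '{name}' and '{name}': "
            f"{target_type} vs {source_type}"
        )
    return target_type

# Tutorial's situation: table column is bigint (from the dataflow),
# incoming notebook data is int -> the merge raises.
```

With matching types the merge succeeds; with bigint vs int it raises, naming the same column twice just like the error in the question.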

Resulting table from step 3: [screenshot]

 

 

Resulting table from step 5: [screenshot]

 

 

Solution:
In step 3 of the tutorial, adjust the column type list in the last transformation step as follows:

    {
      {"CustomerKey", Int32.Type},
      {"WWICustomerID", Int32.Type},
      {"Customer", type text},
      {"BillToCustomer", type text},
      {"Category", type text},
      {"BuyingGroup", type text},
      {"PrimaryContact", type text},
      {"PostalCode", type text},
      {"ValidFrom", type datetime},
      {"ValidTo", type datetime},
      {"LineageKey", Int32.Type}
    }
 
Save and execute.
 
dimension_customer should now use int32 data types for CustomerKey, WWICustomerID, and LineageKey, which ensures that the merge statement in the PySpark notebook no longer raises the error.
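If you would rather not edit and rerun the dataflow, an alternative (not part of the original tutorial, so treat the column list and helper below as assumptions) is to cast the key columns on the notebook side so the incoming DataFrame matches the bigint schema the dataflow created. A hypothetical helper that builds the selectExpr cast strings:

```python
# Hypothetical helper (not from the tutorial): build Spark SQL expressions that
# cast the key columns of the incoming DataFrame to bigint, matching the int64
# schema the dataflow created, while passing all other columns through as-is.
BIGINT_COLUMNS = ["CustomerKey", "LineageKey"]

def align_exprs(all_columns, to_bigint=BIGINT_COLUMNS):
    return [
        f"CAST({c} AS BIGINT) AS {c}" if c in to_bigint else c
        for c in all_columns
    ]

# In the notebook, assuming `df` holds the CSV data and a Spark session is active:
# df = df.selectExpr(*align_exprs(df.columns))
# ...then run the merge into dimension_customer as before.
```

Either direction works as long as both sides of the merge agree; the accepted fix above standardizes on int32 instead.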

 


3 REPLIES
Anonymous
Not applicable

Thanks, this fixed my problem! Microsoft Fabric needs to update their documentation for this.
