pbi_artisan
Frequent Visitor

Copying and renaming fields in a table from a Bronze Lakehouse to a Silver Lakehouse using a Notebook

Hi Fabric gurus,

 

I'm in the process of reading data from a Table in my Bronze Lakehouse.

I want to:

1) Read the data from Bronze table in Lakehouse:

delta_table_path = "abfss://FB-DEV@onelake.dfs.fabric.microsoft.com/Bronze_1.lakehouse/Tables/table"
df_delta_table = spark.read.format("delta").load(delta_table_path)
 
2) Then I want to write it to a table in my Silver layer Lakehouse, BUT this step fails with an error:

df_delta_table.write.format("delta").mode("overwrite").saveAsTable("Silver_2.dbo.tablenb")
 
I'm fairly new to Notebooks.
How can I copy this table to the Silver Lakehouse, selecting only the columns I want and renaming some of them first?
 
Thanks
1 ACCEPTED SOLUTION
v-karpurapud
Community Support

Hi @pbi_artisan 


Thanks for reaching out to the Microsoft Fabric Community Forum.

 

To copy and rename fields in a table from the Bronze Lakehouse to the Silver Lakehouse using a Notebook, follow the steps below:

1. The first step is to load the table from your Bronze Lakehouse into a DataFrame.

# Define the path to the Bronze table
delta_table_path = "Your Table Path Here"

# Read the data from the Bronze table using the Delta format
df_bronze = spark.read.format("delta").load(delta_table_path)
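For reference, a concrete OneLake table path looks like the one from the original question:

delta_table_path = "abfss://FB-DEV@onelake.dfs.fabric.microsoft.com/Bronze_1.lakehouse/Tables/table"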

2. To select specific columns and rename them, use select() with col().alias():

from pyspark.sql.functions import col

# Example: selecting and renaming columns
df_transformed = df_bronze.select(
    col("original_column1").alias("new_column1"),
    col("original_column2").alias("new_column2"),
)

# Display the transformed DataFrame
df_transformed.show()


3. Ensure the column names in the df_transformed DataFrame are valid (no spaces, special characters, or reserved keywords).

# Replace invalid characters in column names (the loop variable is named c
# so it doesn't shadow the imported col() function)
valid_columns = [c.replace(" ", "_").replace("(", "").replace(")", "").replace(",", "") for c in df_transformed.columns]
df_transformed = df_transformed.toDF(*valid_columns)

# Display the updated column names
print(df_transformed.columns)
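To make this concrete, here is a minimal, self-contained sketch of the same sanitization step (the sample column names are invented for illustration):

# Hypothetical sample data whose column names contain spaces and parentheses
df_demo = spark.createDataFrame([(1, 10.0)], ["Order ID", "Total (USD)"])

# Apply the same cleanup: spaces become underscores; parentheses and commas are stripped
clean = [c.replace(" ", "_").replace("(", "").replace(")", "").replace(",", "") for c in df_demo.columns]
df_demo = df_demo.toDF(*clean)

print(df_demo.columns)  # ['Order_ID', 'Total_USD']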

4. Now, write the transformed DataFrame into your Silver Lakehouse. For it to appear as a table (not just files), write it under the Tables area of the Lakehouse:

lakehouse_path = "abfss://<YourWorkspace>@onelake.dfs.fabric.microsoft.com/<YourSilverLakehouse>.lakehouse/Tables/<YourTableName>"

# Save data to the Lakehouse in Delta format
df_transformed.write.format("delta").mode("overwrite").save(lakehouse_path)
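Alternatively, saveAsTable (as in the original post) also works; here is a sketch assuming the Silver Lakehouse is attached to the notebook as its default lakehouse:

# Sketch: assumes the Silver Lakehouse is set as the notebook's default lakehouse
df_transformed.write.format("delta").mode("overwrite").saveAsTable("tablenb")

Note that the three-part name from the original post (Silver_2.dbo.tablenb) only resolves if the Lakehouse is schema-enabled; otherwise use the bare table name with the Silver Lakehouse as the default, or the abfss path shown above.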

This approach resolves the issue of copying data to the Silver Lakehouse.

If you have any further questions or need additional help with this, feel free to reach out to us!

If you find this post helpful, please mark it as an "Accept as Solution" and give a KUDOS.


5 REPLIES
v-karpurapud
Community Support

Hi @pbi_artisan 

We noticed we haven't received a response from you yet, so we wanted to follow up and ensure the solution we provided addressed your issue. If you require any further assistance or have additional questions, please let us know.

Your feedback is valuable to us, and we look forward to hearing from you soon.


Hi @pbi_artisan 
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.

Hi @pbi_artisan 

As we haven't heard back from you, we wanted to kindly follow up and check whether the solution we provided worked for you. Please let us know if you need any further assistance.

Your feedback is important to us; we look forward to your response.

spencer_sa
Super User

Assuming you're dead set on using Notebooks rather than Copy Data activities in pipelines...

This is one of the myriad ways of doing this.

 

target_lakehouse_uri = 'abfss://FB-DEV@onelake.dfs.fabric.microsoft.com/Silver.lakehouse'
delta_table_path = 'abfss://FB-DEV@onelake.dfs.fabric.microsoft.com/Bronze_1.lakehouse/Tables/table'
df_delta_table = spark.read.format('delta').load(delta_table_path)

# Rename columns (underscores rather than spaces, since Delta rejects
# spaces in column names by default)
column_mapping = {'col1': 'Column_1', 'col2': 'Column_2', 'col3': 'Column_3'}
df_delta_table = df_delta_table.withColumnsRenamed(column_mapping)

# Select the columns you want
columns = ['Column_1', 'Column_2', 'Column_3']
df_delta_table = df_delta_table.select(columns)

# Save that thing
df_delta_table.write.format('delta').mode('overwrite').save(target_lakehouse_uri + '/Tables/table')
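A quick optional sanity check (a sketch using the same path as above) is to read the table back from the Silver lakehouse and inspect the schema:

# Optional: read the table back from the Silver lakehouse to confirm the write
df_check = spark.read.format('delta').load(target_lakehouse_uri + '/Tables/table')
df_check.printSchema()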

 


If this helps, please consider Accepting as a Solution to help other people to find it.
