I have this simple notebook that loads data from an .xlsx file in a lakehouse into a Delta table in the same lakehouse.
How can I make it reusable/parameterized so that I can call it from another notebook and pass parameters, and how exactly would I call it from another notebook?
import pandas as pd
source_path = "/lakehouse/default/"
source_file_name = "Files/Test.xlsx"
sheet_name = "Test"
destination_table = "test"
# Read Excel data into a Pandas DataFrame
df = pd.read_excel(source_path + source_file_name, sheet_name=sheet_name)
# Convert Pandas DataFrame to Spark DataFrame
df_spark = spark.createDataFrame(df)
# Save Spark DataFrame as a Delta table
df_spark.write.mode("overwrite").format("delta").saveAsTable(destination_table)
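As a side note, `source_path + source_file_name` relies on the trailing slash being present in `source_path`. A parameterized version could use `os.path.join`, which tolerates either form. A small sketch (`build_source_path` is an illustrative helper, not part of the original code):

```python
import os

def build_source_path(source_path, source_file_name):
    # os.path.join inserts a separator only when one is missing,
    # so both "/lakehouse/default/" and "/lakehouse/default" work.
    return os.path.join(source_path, source_file_name)

print(build_source_path("/lakehouse/default/", "Files/Test.xlsx"))
print(build_source_path("/lakehouse/default", "Files/Test.xlsx"))
```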
I am aware of how to call a parameterized notebook from a pipeline (see "How to Pass Parameters from Pipelines to Notebooks in Microsoft Fabric!" on YouTube).
However, I want to know if we can call the notebook from another notebook.
Hi @VickyDev18
Thanks for using Fabric Community.
You can call one notebook from another notebook using the %run command. I have created a repro of the same, but with a CSV file instead. I have attached screenshots of the code.
Functions is my child notebook, where I define a function that creates a table inside the lakehouse from the data in a CSV file.
I am calling this notebook from another notebook:
The table was created in the lakehouse.
Hope this helps. Please let me know if you have any further questions.
Hello @VickyDev18,
It's me again. Have you tried using this approach?
I had tried this method before, but for some reason it wasn't working. However, now that I tried it again, it worked!
The only simplification I made was to call the function directly, without a second wrapper function.
So overall the approach was:
1. Create a Notebook called Functions with code below.
import pandas as pd

def load_excel(source_path, source_file_name, sheet_name, destination_table):
    # Read Excel data into a Pandas DataFrame
    df = pd.read_excel(source_path + source_file_name, sheet_name=sheet_name)
    # Convert the Pandas DataFrame to a Spark DataFrame
    df_spark = spark.createDataFrame(df)
    # Save the Spark DataFrame as a Delta table
    df_spark.write.mode("overwrite").format("delta").saveAsTable(destination_table)
2. Create a new Notebook and call the function above using the code below.
%run Functions
load_excel("/lakehouse/default/Files/", "Test.xlsx", "Test", "test")
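As an alternative to %run (which executes the child notebook's cells in the caller's session), Fabric also provides `notebookutils`/`mssparkutils`, whose `notebook.run` call runs the child notebook in its own session and passes parameters explicitly into its parameter cell. A minimal sketch, assuming a child notebook named LoadExcel whose parameter cell defines these four variables (the notebook name and the import guard are illustrative):

```python
# Parameters to pass to the child notebook's parameter cell.
params = {
    "source_path": "/lakehouse/default/Files/",
    "source_file_name": "Test.xlsx",
    "sheet_name": "Test",
    "destination_table": "test",
}

try:
    from notebookutils import mssparkutils
    # Runs the LoadExcel notebook with a 600-second timeout; the values in
    # params override the variables defined in its parameter cell.
    exit_value = mssparkutils.notebook.run("LoadExcel", 600, params)
except ImportError:
    # Outside the Fabric Spark runtime the utility is unavailable;
    # just show the call that would be made.
    print("notebookutils not available; would run LoadExcel with", params)
```

One trade-off to weigh: %run shares variables and the Spark session with the caller, while `notebook.run` isolates the child and returns only its exit value.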
Hi @VickyDev18
Thanks for sharing your approach here. Glad that your query got resolved. Please continue using Fabric Community for any help regarding your queries.
Hi @VickyDev18
We haven't heard from you since the last response and wanted to check whether you have a resolution yet. If not, please reply with more details and we will try to help.
Thanks