sudarshan5
New Member

Issue with Dynamically Assigning Workspace ID in Notebook Activity in Fabric Data Factory

I am facing an issue with the Notebook activity in Fabric Data Factory, where the workspace ID is not being resolved dynamically. I extracted the JSON definition of my pipeline from Azure DevOps and noticed that the workspaceId is not explicitly defined for the Notebook activity.

Issue Details:

  • I need both the workspace ID and notebook ID to be dynamically assigned in the pipeline.
  • I am retrieving the workspace ID using the system variable: @pipeline().DataFactory.
  • The notebook ID is assigned using an approach similar to the one described here: (https://community.fabric.microsoft.com/t5/Data-Pipeline/Dynamically-Change-Notebook-ID-for-each-Work...).
  • What is the recommended method to ensure the Notebook activity correctly resolves the workspace?
  • Do I need to create a config table storing all notebook IDs for different environments (Dev, Test, Prod)?

Any insights or best practices would be greatly appreciated!

Pipeline JSON sample:
{
    "properties": {
        "activities": [
            {
                "type": "TridentNotebook",
                "typeProperties": {
                    "notebookId": "c83f4cb0-d163-48cb-9a11-b4f029e0820f"
                },
                "policy": {
                    "timeout": "0.12:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureInput": false,
                    "secureOutput": false
                },
                "name": "Check new folders and files",
                "description": "Check if new folders and Ingest folder entry in config table and files present in the folder in file config table.",
                "dependsOn": [
                    {
                        "activity": "fetch env info from ctrl tbl",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ]
            },


 

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @sudarshan5 ,

Here is my personal opinion on your question:

 

To ensure that the Notebook activity correctly resolves the workspace, you can follow these steps:

1. When creating a pipeline, add a Notebook activity and configure it to point to the correct notebook within your workspace. Make sure to set the workspace and notebook parameters correctly in the activity settings.
2. You can dynamically retrieve the list of notebooks within a workspace using REST APIs. This can help in scenarios where you need to dynamically select notebooks based on certain conditions.
3. Enable high concurrency mode in your workspace settings to optimize the execution of multiple notebooks. This can help in minimizing the startup time by reusing existing Spark sessions.
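For step 1, the workspace and notebook references can be supplied as expressions instead of literals in the activity's typeProperties. A minimal sketch, assuming @pipeline().DataFactory resolves to the current workspace ID (as in the original question) and a hypothetical pipeline parameter notebookId set per environment; verify both against your own pipeline before relying on them:

```json
{
    "type": "TridentNotebook",
    "typeProperties": {
        "workspaceId": {
            "value": "@pipeline().DataFactory",
            "type": "Expression"
        },
        "notebookId": {
            "value": "@pipeline().parameters.notebookId",
            "type": "Expression"
        }
    }
}
```

With both values expressed this way, the same pipeline definition can be deployed unchanged across Dev, Test, and Prod workspaces.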

 

Creating a config table to store notebook IDs for different environments (Dev, Test, Prod) is a good practice. This approach helps in managing and dynamically selecting the correct notebook ID based on the environment.
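One way to wire up such a config table is a Lookup activity whose result feeds the Notebook activity. A sketch under stated assumptions: the table name (ctrl.notebook_config), column names (notebook_id, environment, notebook_name), and source type are illustrative, not from the original post:

```json
{
    "name": "Lookup notebook id",
    "type": "Lookup",
    "typeProperties": {
        "source": {
            "type": "DataWarehouseSource",
            "sqlReaderQuery": "SELECT notebook_id FROM ctrl.notebook_config WHERE environment = 'Dev' AND notebook_name = 'Check new folders and files'"
        }
    }
}
```

The downstream Notebook activity's notebookId would then be set to an expression such as @activity('Lookup notebook id').output.firstRow.notebook_id.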

 

Here is also a document that you can read:

Microsoft Fabric: Dynamically Provide Notebook ID in Data Factory Pipeline

 

 

Best Regards

Yilong Zhou

If this post helps, then please consider accepting it as the solution to help the other members find it more quickly.

