Hello,
I have a dev workspace with a notebook that reads data from a table in a lakehouse in the same dev workspace.
Later I will publish the objects to the test workspace and I want the notebook to reference the table in the lakehouse in the test workspace, automatically, without having to manually change a hard-coded path.
Here is how I can do it WITH Spark:
from notebookutils import mssparkutils

# Fetch the lakehouse metadata once and reuse the workspace and lakehouse IDs
lakehouse_info = mssparkutils.lakehouse.get('lakehouse')
this_workspace_id = lakehouse_info['workspaceId']
this_lakehouse_id = lakehouse_info['id']

table_path = f'abfss://{this_workspace_id}@onelake.dfs.fabric.microsoft.com/{this_lakehouse_id}/Tables/dbo/table'
df = spark.read.format("delta").option("startingVersion", "latest").load(table_path)
But I want to do it without starting a Spark session. Without a Spark session you can't import mssparkutils from notebookutils.
Hi @DCELL ,
One solution I’d recommend is leveraging Fabric Pipelines for orchestration to retrieve the current workspace and pass it as a parameter when calling the notebook. This allows your notebook to dynamically reference the appropriate Lakehouse without hardcoding any paths.
You can then deploy artifacts from dev to test using Fabric Deployment Pipelines. Since both your pipeline and notebook are parameterized, they’ll automatically adapt to the target environment during deployment.
Also, because the workspace context is retrieved by the pipeline, a Spark session will only be initiated when the notebook runs, not before. This avoids the need to import mssparkutils outside of a Spark session.
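As a rough sketch of what that could look like inside the notebook (the parameter names and placeholder values below are illustrative; the pipeline's notebook activity would override the defaults for each environment):

# Parameters cell (Fabric lets you tag a cell as a parameter cell so pipeline values override it);
# the values here are dev defaults that the pipeline replaces at runtime
workspace_id = "<dev-workspace-guid>"
lakehouse_id = "<dev-lakehouse-guid>"
table_name = "table"

# Build the OneLake path from the injected parameters instead of hardcoding it
table_path = f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/{lakehouse_id}/Tables/dbo/{table_name}"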
Here is a blog I posted that shows a similar use-case and tutorial, however with Warehouses and Stored Procedures: https://discoveringallthingsanalytics.com/fabric-deployment-pipelines-guide-dynamic-warehouse-connec...
If this helped, please mark it as the solution so others can benefit too. And if you found it useful, kudos are always appreciated.
Thanks,
Samson
It's half a solution, because I could read the data with Spark while developing, and when the code is ready switch to pd.read_parquet and add parameterization with the pipeline.
But ideally I want to be able to get the lakehouse and workspace reference within the notebook itself, because that also allows me to do some development in a non-Spark notebook without it being blocked (due to the Spark session limit) by another Spark-enabled notebook that is already running.
Hi @DCELL ,
Thanks for reaching out to the Microsoft Fabric community forum, and thanks for your prompt response.
@DCELL ,
You're right: getting the Lakehouse/workspace info inside a non-Spark notebook is currently not directly supported the way it is with mssparkutils in Spark.
However, you can still achieve dynamic, environment-aware notebooks by parameterizing them and using Fabric Pipelines to inject those values at runtime. This way you avoid hardcoding, and your notebook stays Spark-free.
As a lightweight alternative, you could also read a small config.json file from the Lakehouse Files/ area that contains workspace/Lakehouse metadata; this works fine in pandas notebooks.
So, while the feature isn't natively exposed in non-Spark notebooks (yet), it's still possible to design a dynamic, scalable workflow without requiring Spark sessions.
NotebookUtils (former MSSparkUtils) for Fabric - Microsoft Fabric | Microsoft Learn
The Microsoft Fabric deployment pipelines process - Microsoft Fabric | Microsoft Learn
If this post helped resolve your issue, please consider giving it Kudos and marking it as the Accepted Solution. This not only acknowledges the support provided but also helps other community members find relevant solutions more easily.
We appreciate your engagement and thank you for being an active part of the community.
Best regards,
LakshmiNarayana.
The .json config file could work. Do you have a guide I can follow?
Hi @DCELL ,
Thanks for the follow-up question
Here's a simple guide to help you set up and use a .json config file in your Fabric notebook (non-Spark) to make your workflows dynamic and environment-aware:
Step-by-Step: Using a config.json in Fabric (Pandas) Notebook
Create the config.json file
Place it in your Lakehouse Files/ area (e.g., Files/config/config.json). Example contents:
{
    "lakehouse_name": "SalesLakehouse",
    "environment": "dev",
    "data_path": "Tables/sales_data",
    "region": "East US"
}
Load the JSON in your notebook using Pandas or built-in file APIs
import json

# Path relative to the notebook's working directory; if the notebook does not
# run from the lakehouse root, use the mounted path shown in the next snippet
config_path = "Files/config/config.json"

with open(config_path, "r") as f:
    config = json.load(f)

print(config["lakehouse_name"])
If reading directly from the Lakehouse via Pandas:
import pandas as pd
import json

# The default lakehouse is mounted under /lakehouse/default/ inside the notebook
with open("/lakehouse/default/Files/config/config.json", "r") as f:
    config = json.load(f)

print(config["environment"])
Use config values in your logic
data_path = config["data_path"]
region = config["region"]
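Putting those values to work, here is a minimal sketch of a config-driven read in a non-Spark notebook. It assumes the deltalake Python package is available in the environment (pip install deltalake otherwise) and reads through the default lakehouse mount; the exact calls are illustrative rather than part of the original guide:

from deltalake import DeltaTable

# Build the mounted path from the config value, e.g. /lakehouse/default/Tables/sales_data
table_path = f"/lakehouse/default/{data_path}"

# Read the Delta table into a pandas DataFrame without starting a Spark session
df = DeltaTable(table_path).to_pandas()
print(f"Loaded {len(df)} rows for region {region}")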
Solved: Parameterizing a notebook - Microsoft Fabric Community
Develop, execute, and manage notebooks - Microsoft Fabric | Microsoft Learn
Best Regards,
LakshmiNarayana
As far as I can tell it's impossible to write data to the Tables section of a lakehouse without starting a Spark session, so this approach will not work.
@DCELL ,
Thanks for the clarification, I really appreciate the detailed explanation. That clears things up.
Best Regards
Lakshmi Narayana
The .json idea can work, since it just requires a one-time upload to the lakehouse in each workspace of a file containing that workspace's workspace id and lakehouse id.
Before I close this I'm checking whether the non-Spark read and write functions will work properly with Fabric lakehouses.
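For anyone checking the same thing, here is a minimal sketch of non-Spark reads and writes against the default lakehouse mount, assuming the deltalake package is installed (pip install deltalake otherwise); the table and file names are illustrative, not from this thread:

import pandas as pd
from deltalake import DeltaTable

# Non-Spark read: load a Delta table from the default lakehouse mount into pandas
df = DeltaTable("/lakehouse/default/Tables/dbo/table").to_pandas()

# Non-Spark write: pandas can write to the Files/ area of the mounted lakehouse.
# Per the conclusion above, writing to the Tables/ section still needs Spark.
df.to_parquet("/lakehouse/default/Files/table_copy.parquet")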
Hi @DCELL ,
If your issue has been resolved, please consider marking the most helpful reply as the accepted solution. This helps other community members who may encounter the same issue to find answers more efficiently.
If you're still facing challenges, feel free to let us know; we'll be glad to assist you further.
Looking forward to your response.
Best regards,
LakshmiNarayana.