<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Reference current workspace with notebook without Spark in Data Engineering</title>
    <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4719082#M9878</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/244291"&gt;@DCELL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;One solution I’d recommend is leveraging Fabric Pipelines for orchestration to retrieve the current workspace and pass it as a parameter when calling the notebook. This allows your notebook to dynamically reference the appropriate Lakehouse without hardcoding any paths.&lt;BR /&gt;&lt;BR /&gt;You can then deploy artifacts from dev to test using Fabric Deployment Pipelines. Since both your pipeline and notebook are parameterized, they’ll automatically adapt to the target environment during deployment.&lt;BR /&gt;&lt;BR /&gt;Also, because the workspace context is retrieved by the pipeline, a Spark session will only be initiated when the notebook runs, not before. This avoids the need to import mssparkutils outside of a Spark session.&lt;BR /&gt;&lt;BR /&gt;Here is a blog I posted that shows a similar use-case and tutorial, however with Warehouses and Stored Procedures:&amp;nbsp;&lt;A href="https://discoveringallthingsanalytics.com/fabric-deployment-pipelines-guide-dynamic-warehouse-connections-in-microsoft-fabric-pipelines/" target="_blank"&gt;https://discoveringallthingsanalytics.com/fabric-deployment-pipelines-guide-dynamic-warehouse-connections-in-microsoft-fabric-pipelines/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;If this helped, please mark it as the solution so others can benefit too. And if you found it useful, kudos are always appreciated.&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Thanks,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Samson&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 04 Jun 2025 04:57:42 GMT</pubDate>
    <dc:creator>SamsonTruong</dc:creator>
    <dc:date>2025-06-04T04:57:42Z</dc:date>
    <item>
      <title>Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4718609#M9863</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a dev workspace with a notebook which will read data from a table in a lakehouse from the same dev workspace.&lt;/P&gt;&lt;P&gt;Later I will publish the objects to the test workspace and I want the notebook to reference the table in the lakehouse in the test workspace, automatically, without having to manually change a hard-coded path.&lt;/P&gt;&lt;P&gt;Here is how I can do it WITH spark:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;from notebookutils import mssparkutils

this_workspace_id = mssparkutils.lakehouse.get('lakehouse')['workspaceId']
this_lakehouse_id = mssparkutils.lakehouse.get('lakehouse')['id']

table_path = f'abfss://{this_workspace_id}@onelake.dfs.fabric.microsoft.com/{this_lakehouse_id}/Tables/dbo/table'
spark.read.format("delta").option("startingVersion", "latest").load(table_path)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But I want to do it without starting a Spark session. Without a Spark session you can't import mssparkutils from notebookutils.&lt;/P&gt;</description>
      <pubDate>Tue, 03 Jun 2025 16:23:02 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4718609#M9863</guid>
      <dc:creator>DCELL</dc:creator>
      <dc:date>2025-06-03T16:23:02Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4719082#M9878</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/244291"&gt;@DCELL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;One solution I’d recommend is leveraging Fabric Pipelines for orchestration to retrieve the current workspace and pass it as a parameter when calling the notebook. This allows your notebook to dynamically reference the appropriate Lakehouse without hardcoding any paths.&lt;BR /&gt;&lt;BR /&gt;You can then deploy artifacts from dev to test using Fabric Deployment Pipelines. Since both your pipeline and notebook are parameterized, they’ll automatically adapt to the target environment during deployment.&lt;BR /&gt;&lt;BR /&gt;Also, because the workspace context is retrieved by the pipeline, a Spark session will only be initiated when the notebook runs, not before. This avoids the need to import mssparkutils outside of a Spark session.&lt;BR /&gt;&lt;BR /&gt;Here is a blog I posted that shows a similar use-case and tutorial, however with Warehouses and Stored Procedures:&amp;nbsp;&lt;A href="https://discoveringallthingsanalytics.com/fabric-deployment-pipelines-guide-dynamic-warehouse-connections-in-microsoft-fabric-pipelines/" target="_blank"&gt;https://discoveringallthingsanalytics.com/fabric-deployment-pipelines-guide-dynamic-warehouse-connections-in-microsoft-fabric-pipelines/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;If this helped, please mark it as the solution so others can benefit too. And if you found it useful, kudos are always appreciated.&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Thanks,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Samson&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 04:57:42 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4719082#M9878</guid>
      <dc:creator>SamsonTruong</dc:creator>
      <dc:date>2025-06-04T04:57:42Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4719926#M9904</link>
      <description>&lt;P&gt;It's half a solution, because I could read the data with Spark, write some code, and when it's ready switch to pd.read_parquet and add parameterization with the pipeline.&lt;/P&gt;&lt;P&gt;But ideally I want to be able to get the lakehouse &amp;amp; workspace reference within the notebook itself, because it also allows me to do some development in a non-Spark notebook and it won't be blocked (due to the Spark session limit) by another Spark-enabled notebook which is already running.&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 13:20:57 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4719926#M9904</guid>
      <dc:creator>DCELL</dc:creator>
      <dc:date>2025-06-04T13:20:57Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4722244#M9947</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/244291"&gt;@DCELL&lt;/a&gt;&amp;nbsp;, thanks for reaching out to the Microsoft Fabric community forum.&lt;/P&gt;
&lt;P&gt;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/927426"&gt;@SamsonTruong&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks for your prompt response.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/244291"&gt;@DCELL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;You're right: getting the Lakehouse/workspace info inside a non-Spark notebook is currently not directly supported the way it is with mssparkutils in Spark.&lt;/P&gt;
&lt;P&gt;However, you can still achieve dynamic, environment-aware notebooks by parameterizing them and using Fabric Pipelines to inject those values at runtime. This way you avoid hardcoding, and your notebook stays Spark-free.&lt;/P&gt;
&lt;P&gt;As a lightweight alternative, you could also read a small config.json file from the Lakehouse Files/ area that contains workspace/Lakehouse metadata; this works fine in pandas notebooks.&lt;/P&gt;
&lt;P&gt;So, while the feature isn’t natively exposed in non-Spark notebooks (yet), it’s still possible to design a dynamic, scalable workflow without requiring Spark sessions.&lt;/P&gt;
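&lt;P&gt;For illustration, a parameter cell could look like the sketch below. The variable names and GUIDs are placeholder assumptions; a pipeline's notebook activity would override the defaults at runtime, and building the OneLake path is plain string formatting that never starts a Spark session:&lt;/P&gt;

```python
# Hypothetical parameter cell: a Fabric pipeline's notebook activity can
# override these defaults at runtime via its base parameters.
workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder GUID
lakehouse_id = "11111111-1111-1111-1111-111111111111"  # placeholder GUID

# Build the OneLake table path from the injected values; this is plain
# string formatting and does not start a Spark session.
table_path = (
    f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/"
    f"{lakehouse_id}/Tables/dbo/table"
)
print(table_path)
```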
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities" target="_blank"&gt;NotebookUtils (former MSSparkUtils) for Fabric - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/cicd/deployment-pipelines/understand-the-deployment-process?tabs=new-ui" target="_blank"&gt;The Microsoft Fabric deployment pipelines process - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If this post helped resolve your issue, please consider giving it &lt;STRONG&gt;Kudos&lt;/STRONG&gt; and marking it as the &lt;STRONG&gt;Accepted Solution&lt;/STRONG&gt;. This not only acknowledges the support provided but also helps other community members find relevant solutions more easily.&lt;/P&gt;
&lt;P&gt;We appreciate your engagement and thank you for being an active part of the community.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Best regards,&lt;BR /&gt;LakshmiNarayana&lt;/STRONG&gt;.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Jun 2025 04:07:10 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4722244#M9947</guid>
      <dc:creator>v-lgarikapat</dc:creator>
      <dc:date>2025-06-06T04:07:10Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4724913#M10006</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/244291"&gt;@DCELL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;If your issue has been resolved, please consider marking the most helpful reply as the &lt;STRONG&gt;accepted solution&lt;/STRONG&gt;. This helps other community members who may encounter the same issue to find answers more efficiently.&lt;/P&gt;
&lt;P&gt;If you're still facing challenges, feel free to let us know we’ll be glad to assist you further.&lt;/P&gt;
&lt;P&gt;Looking forward to your response.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Best regards,&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;LakshmiNarayana.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 09 Jun 2025 05:20:48 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4724913#M10006</guid>
      <dc:creator>v-lgarikapat</dc:creator>
      <dc:date>2025-06-09T05:20:48Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4725582#M10024</link>
      <description>&lt;P&gt;The .json config file could work. Do you have a guide I can follow?&lt;/P&gt;</description>
      <pubDate>Mon, 09 Jun 2025 11:03:38 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4725582#M10024</guid>
      <dc:creator>DCELL</dc:creator>
      <dc:date>2025-06-09T11:03:38Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4727466#M10063</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/244291"&gt;@DCELL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;Thanks for the follow-up question&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here's a simple guide to help you set up and use a .json config file in your Fabric notebook (non-Spark) to make your workflows dynamic and environment-aware:&lt;/P&gt;
&lt;P&gt;Step-by-step: using a config.json in a Fabric (pandas) notebook&lt;/P&gt;
&lt;P&gt;1. Create the config.json file and place it in your Lakehouse Files/ area (e.g., Files/config/config.json). Example contents:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;{
    "lakehouse_name": "SalesLakehouse",
    "environment": "dev",
    "data_path": "Tables/sales_data",
    "region": "East US"
}&lt;/LI-CODE&gt;
&lt;P&gt;2. Load the JSON in your notebook using the built-in file APIs:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;import json

config_path = "Files/config/config.json"
with open(config_path, "r") as f:
    config = json.load(f)

print(config["lakehouse_name"])&lt;/LI-CODE&gt;
&lt;P&gt;If reading directly from the default Lakehouse mount:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;import json

with open("/lakehouse/default/Files/config/config.json", "r") as f:
    config = json.load(f)

print(config["environment"])&lt;/LI-CODE&gt;
&lt;P&gt;3. Use the config values in your logic:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;data_path = config["data_path"]
region = config["region"]&lt;/LI-CODE&gt;
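&lt;P&gt;Putting the steps together, here is a minimal self-contained sketch. The file location, key names, and the resolve_data_path helper are illustrative assumptions for this thread, not a fixed Fabric API:&lt;/P&gt;

```python
import json

# Sketch only: the mount point and config keys are assumptions.
def resolve_data_path(config: dict) -> str:
    """Build a local Lakehouse path from a loaded config dict."""
    return f"/lakehouse/default/{config['data_path']}"

# Stand-in for json.load(f) on Files/config/config.json:
config = json.loads(
    '{"lakehouse_name": "SalesLakehouse", "environment": "dev", '
    '"data_path": "Tables/sales_data", "region": "East US"}'
)
print(resolve_data_path(config))  # /lakehouse/default/Tables/sales_data
```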
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://community.fabric.microsoft.com/t5/Data-Engineering/Parameterizing-a-notebook/m-p/4025536" target="_blank"&gt;Solved: Parameterizing a notebook - Microsoft Fabric Community&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/author-execute-notebook#parameterized-session-configuration-from-a-pipeline" target="_blank"&gt;Develop, execute, and manage notebooks - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Best Regards,&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;LakshmiNarayana&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jun 2025 15:00:45 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4727466#M10063</guid>
      <dc:creator>v-lgarikapat</dc:creator>
      <dc:date>2025-06-10T15:00:45Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4732916#M10173</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/244291"&gt;@DCELL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;If your issue has been resolved, please consider marking the most helpful reply as the &lt;STRONG&gt;accepted solution&lt;/STRONG&gt;. This helps other community members who may encounter the same issue to find answers more efficiently.&lt;/P&gt;
&lt;P&gt;If you're still facing challenges, feel free to let us know we’ll be glad to assist you further.&lt;/P&gt;
&lt;P&gt;Looking forward to your response.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Best regards,&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;LakshmiNarayana.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 16 Jun 2025 06:42:56 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4732916#M10173</guid>
      <dc:creator>v-lgarikapat</dc:creator>
      <dc:date>2025-06-16T06:42:56Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4734040#M10198</link>
      <description>&lt;P&gt;The .json idea can work, since it will just require a one-time load to the datalake of each workspace showing the workspace id and lakehouse id.&lt;/P&gt;&lt;P&gt;Before I close this I'm checking if the non-Spark read and write functions will work properly with Fabric datalakes.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Jun 2025 22:00:52 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4734040#M10198</guid>
      <dc:creator>DCELL</dc:creator>
      <dc:date>2025-06-16T22:00:52Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4734053#M10199</link>
      <description>&lt;P&gt;As far as I can tell it's impossible to write data to the Tables section of a datalake without starting a Spark session, so this approach will not work.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Jun 2025 22:30:16 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4734053#M10199</guid>
      <dc:creator>DCELL</dc:creator>
      <dc:date>2025-06-16T22:30:16Z</dc:date>
    </item>
    <item>
      <title>Re: Reference current workspace with notebook without Spark</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4734800#M10219</link>
      <description>&lt;P&gt;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/244291"&gt;@DCELL&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;Thanks for the clarification; I really appreciate the detailed explanation. That clears things up.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Best Regards&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Lakshmi Narayana&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jun 2025 12:37:49 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reference-current-workspace-with-notebook-without-Spark/m-p/4734800#M10219</guid>
      <dc:creator>v-lgarikapat</dc:creator>
      <dc:date>2025-06-17T12:37:49Z</dc:date>
    </item>
  </channel>
</rss>

