mssparkutils.fabric.sql not working anymore. Has there been a change in notebookutils?
I am using this utility to write directly to a Fabric warehouse, skipping the extra step of first ingesting into a lakehouse and then writing to the warehouse.
Solved!
Thanks for the extra detail. The exact call you are using:
from notebookutils import mssparkutils
df = mssparkutils.fabric.sql("<DWH name>", sql_query)
is not part of the documented API. Microsoft has been moving from the old mssparkutils namespace to notebookutils. The rename is official and backward compatibility exists for many helpers, but there is no published fabric.sql method. See the rename notice and current surface here: NotebookUtils (former MSSparkUtils) for Fabric and the companion note on the MSSparkUtils page. Undocumented members can change between runtime updates, which likely explains why the call worked 2 weeks ago and now fails.
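Because undocumented members can disappear between runtime updates, it can pay to probe for an attribute path before calling it and fail with a clear message instead of an AttributeError mid-notebook. This is a generic Python sketch (not a Fabric API; the resolve_optional_api helper is a hypothetical name), shown here with a standard-library module so it runs anywhere:

```python
def resolve_optional_api(module, dotted_path):
    """Return the attribute at dotted_path (e.g. 'fabric.sql'),
    or None if any segment along the path is missing."""
    obj = module
    for part in dotted_path.split("."):
        obj = getattr(obj, part, None)
        if obj is None:
            return None
    return obj

# Demonstration with os: os.path.join exists, os.fabric.sql does not.
import os
assert resolve_optional_api(os, "path.join") is os.path.join
assert resolve_optional_api(os, "fabric.sql") is None
```

Applied to this thread, checking `resolve_optional_api(mssparkutils, "fabric.sql")` before use would have surfaced the removal as a clear "helper no longer present" condition rather than a runtime failure.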
Option 1 - Python notebook: connect and query with notebookutils.data
The official Python-notebook experience includes data utilities to connect to a Warehouse and run T-SQL. Microsoft documents this workflow here: Use Python experience on Notebook (see the sections on Warehouse interaction and notebookutils.data; they also show the notebookutils.data.help() output and the connect_to_artifact function).
import notebookutils as nbu

# Connect by name or ID (optionally pass workspace ID and artifact type)
conn = nbu.data.connect_to_artifact("<Warehouse name or ID>")
df = conn.query("SELECT TOP 10 * FROM dbo.YourTable;")
display(df)
Docs: Python notebook data utilities.
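Since conn.query hands the results back as a pandas DataFrame in the Python notebook experience (verify against your runtime's docs), you can post-process locally without another round trip to the warehouse. A minimal sketch with invented stand-in data in place of the query result:

```python
import pandas as pd

# Stand-in for the DataFrame that conn.query would return.
df = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 25.5, 7.25]})

# Typical local post-processing: filter, then aggregate.
big = df[df["amount"] > 9]
total = big["amount"].sum()
print(total)  # 35.5
```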
Option 2 - Python notebook: use the %%tsql cell magic
You can execute T-SQL directly in a Python notebook cell and bind to a specific Warehouse.
%%tsql --dw "<Warehouse name>"
SELECT COUNT(*) AS rows_in_table FROM dbo.YourTable;
Docs: Run T-SQL code in Fabric Python notebooks and Connect to Fabric Data Warehouse.
Option 3 - PySpark notebook: Fabric Spark connector for Warehouse
For Spark notebooks, use the Warehouse connector. It supports reading and, with current GA runtimes, writing via a two-phase process that uses COPY INTO under the hood.
# Read from a Warehouse table
df = spark.read.synapsesql("MyWarehouse.dbo.SourceTable")

# Write a Spark DataFrame to a Warehouse table
df_to_write.write.mode("append").synapsesql("MyWarehouse.dbo.TargetTable")
Docs: Spark connector for Microsoft Fabric Data Warehouse.
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, please mark this as the solution.
Thank you @tayloramy.
I was using PySpark mode; switching to Python mode got it working.
Glad to hear it!
Happy I could help.
Hi @Ayush05-gateway,
This library has been renamed to NotebookUtils. NotebookUtils (former MSSparkUtils) for Fabric - Microsoft Fabric | Microsoft Learn
Please try using NotebookUtils instead of mssparkutils.
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.
Hi @tayloramy
I am using the below statement:

from notebookutils import mssparkutils
df = mssparkutils.fabric.sql("<DWH name>", sql_query)

Adding to the above, I looked at notebookutils.data and notebookutils.warehouse, but neither has any function to query the warehouse.