Hi Team,
I have a requirement to insert/update and select data from the MS Fabric warehouse.
I am not finding relevant documentation on this; can someone help me with this?
Thanks in advance!
Aravind
Hi @AravindPeddola
To stay informed about the latest updates on this feature, please bookmark this URL:
https://learn.microsoft.com/en-us/fabric/data-engineering/spark-data-warehouse-connector
I can't commit to exact dates, but MS is working on this feature.
Please accept this solution and give kudos :)
It will help the community find the answer quickly.
Thanks
Thanks for the update.
But when can we expect this feature (using PySpark notebooks)?
Currently, there is no direct way to insert or update data in a Microsoft Fabric Warehouse using PySpark notebooks, although reading is supported. However, there are some workarounds and upcoming features that can help address your requirements:
1. Read data from Warehouse:
You can use the Spark connector for Fabric Data Warehouse to read data from a warehouse into a PySpark DataFrame:
import com.microsoft.spark.fabric  # registers the synapsesql method on the Spark session
from com.microsoft.spark.fabric.Constants import Constants
df = spark.read.synapsesql("<warehouse name>.<schema name>.<table or view name>")
https://learn.microsoft.com/en-us/fabric/data-engineering/spark-data-warehouse-connector
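Note that the Constants import is only needed for options such as reading across workspaces. A minimal sketch, assuming the warehouse sits in a different workspace than the notebook (the workspace ID is a placeholder, not a real value):
# Read from a warehouse in another workspace by passing its workspace ID as an option.
df = spark.read.option(Constants.WorkspaceId, "<workspace id>").synapsesql("<warehouse name>.<schema name>.<table or view name>")
df.show(5)  # quick check that the read succeeded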
2. Insert/Update data:
As of now, you cannot directly write to a Fabric Warehouse from a PySpark notebook. However, there are two approaches you can consider:
Write to a Lakehouse, then access from Warehouse: save your PySpark DataFrame to a Lakehouse table, then access it from the Warehouse (see the first sketch after this list).
Use T-SQL in a notebook: if you need to perform insert/update operations immediately, you can run T-SQL against the Warehouse (see the second sketch after this list). However, this doesn't use PySpark directly.
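For the Lakehouse route, a minimal sketch, assuming the notebook has a default Lakehouse attached (the table name is made up for illustration):
# Save the DataFrame as a Delta table in the attached Lakehouse.
# "staging_orders" is a hypothetical table name.
df.write.mode("overwrite").format("delta").saveAsTable("staging_orders")
Once saved, the table can be queried from the Warehouse side, for example through the Lakehouse's SQL analytics endpoint.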
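For the T-SQL route, the statements themselves are plain T-SQL run against the Warehouse; a sketch with made-up table and column names:
-- Hypothetical table dbo.Orders, for illustration only.
INSERT INTO dbo.Orders (OrderId, OrderStatus) VALUES (1, 'New');
UPDATE dbo.Orders SET OrderStatus = 'Shipped' WHERE OrderId = 1;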
Please mark this post as the solution and give kudos if this is helpful.
Thanks