When using the Data Warehouse feature of Fabric from PySpark, I can connect to the data warehouse and run a query like so...
Data querying within the SQL database (preview) from a notebook is feasible only when the default language of the notebook is set to T-SQL. Upon switching the language to PySpark or Python, querying capabilities are limited to the Lakehouse, and not the databases.
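For example, with the notebook's default language set to T-SQL, a cell can query the database directly (dbo.YourTable below is a placeholder table name, not one from this thread):

-- T-SQL notebook cell running directly against the SQL database (preview)
SELECT TOP (10) *
FROM dbo.YourTable;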
So currently, to run PySpark notebooks on the SQL database data, it is necessary to first ingest the data from the database into the Lakehouse using pipelines. PySpark notebooks can then be run on the ingested data.
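As a rough sketch of that second path, assuming a pipeline has already copied the table into the notebook's default Lakehouse under the hypothetical name dbo_YourTable, the PySpark side could look like this:

# Read the table that the pipeline landed in the attached Lakehouse
# ("dbo_YourTable" is an assumed name, not from the original post)
df = spark.read.table("dbo_YourTable")

# Normal PySpark operations apply from here on
display(df)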
That does appear to be the case, although the documentation and videos do not address this at all. Hopefully Microsoft will realize the importance of this.
If I am not mistaken, when you use SQL Database (preview) in Microsoft Fabric, the platform automatically replicates your data into OneLake and converts it to Parquet/Delta tables in an analytics-ready format.
Try this (note that Spark SQL quotes identifiers with backticks, not T-SQL square brackets):

# Query the replicated tables with Spark SQL
df = spark.sql("""
    SELECT *
    FROM `YourSQLDatabaseName`.`dbo`.`YourTable`
""")

Or

# Read the same table through the Spark catalog
df = spark.read.table("YourSQLDatabaseName.dbo.YourTable")
display(df)
I believe you are referring to the scenario where I have an Azure SQL database and select that Azure database to replicate into Fabric.
What I am referring to is that within Fabric you can now create a SQL database, which I did. Those tables are not reflected as Delta tables, so I am trying to figure out how to read this data in PySpark.