https://learn.microsoft.com/en-us/fabric/data-engineering/spark-data-warehouse-connector
At this point, is Scala the only language for this feature? Is there any PySpark example?
Regarding the Lakehouse SQL analytics endpoint: since a notebook can already use spark.sql() or spark.table() to query Lakehouse data, the new spark.read.synapsesql() is not very useful there. One real use case would be using Spark to automate creating and updating the SQL endpoint's views without any manual work.
Yes, I think the Spark connector will mainly be useful for the Fabric Warehouse.
But perhaps also some use for Lakehouse SQL Analytics Endpoint, as you mention.
In general, spark.sql() and %%sql cells are great ways to interact with the Lakehouse data using SQL.
If you want to use languages other than Scala, you could use Scala to create temporary views and then interact with the data from those other languages:
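A minimal sketch of that pattern: assuming a preceding Scala cell in the same notebook has already run something like `spark.read.synapsesql("MyWarehouse.dbo.Orders").createOrReplaceTempView("wh_orders")` (the warehouse, table, and view names here are hypothetical), a PySpark cell can then read the temp view, because all language cells share the same SparkSession:

```python
# Sketch: reading Warehouse data from PySpark via a temp view registered
# by a Scala cell. "wh_orders" is a hypothetical view name.

VIEW_NAME = "wh_orders"

def top_orders_query(view_name: str, limit: int = 10) -> str:
    # Build the SQL as a plain string; assumes an OrderDate column exists.
    return f"SELECT * FROM {view_name} ORDER BY OrderDate DESC LIMIT {limit}"

def read_view(spark):
    # In a Fabric notebook, `spark` is the pre-created SparkSession,
    # so this would simply be: spark.sql(top_orders_query(VIEW_NAME))
    return spark.sql(top_orders_query(VIEW_NAME))
```

The key point is that temp views are scoped to the SparkSession, not to a language, so Scala only needs to do the one connector call.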
I think pyodbc is an alternative if you want to work with Warehouse or SQL Analytics Endpoint from a Notebook.
I have zero experience with it, but you could try googling "fabric data warehouse pyodbc".
Perhaps this post can help: https://debruyn.dev/2023/connect-to-fabric-lakehouses-warehouses-from-python-code/
and
https://stackoverflow.com/questions/78285603/load-data-to-ms-fabric-warehouse-from-notebook
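Based on those posts, a rough pyodbc sketch might look like the following. The server host and the `ActiveDirectoryInteractive` authentication mode are assumptions; copy the real value from the endpoint's "SQL connection string" in the Fabric portal, and note the thread below reports that views cannot be created this way against the Lakehouse SQL endpoint.

```python
# Sketch: querying a Fabric Warehouse / SQL analytics endpoint with pyodbc.
# Requires the Microsoft ODBC Driver 18 for SQL Server to be installed.

def build_connection_string(server: str, database: str) -> str:
    # ActiveDirectoryInteractive suits interactive use; a service principal
    # would use Authentication=ActiveDirectoryServicePrincipal instead.
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={server};Database={database};"
        "Authentication=ActiveDirectoryInteractive;Encrypt=Yes;"
    )

def query_endpoint(server: str, database: str, sql: str):
    import pyodbc  # imported here so the sketch loads without the driver
    with pyodbc.connect(build_connection_string(server, database)) as conn:
        return conn.cursor().execute(sql).fetchall()

# Example (hypothetical endpoint host copied from the Fabric portal):
# rows = query_endpoint(
#     "myendpoint.datawarehouse.fabric.microsoft.com",
#     "MyLakehouse",
#     "SELECT TOP 10 * FROM dbo.Orders",
# )
```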
We have been trying to automate creating some views in the Lakehouse SQL endpoint, but so far the only way is a Script activity in a pipeline. With the Script activity we have to manually establish the connection to the Lakehouse SQL endpoint, which cannot be automated.
I tried the pyodbc approach: it could query the Lakehouse data, but it could not create any views in the Lakehouse SQL endpoint.
Did you manage to use the Spark Connector? I tried, but I just got an error: https://community.fabric.microsoft.com/t5/Data-Engineering/Spark-Connector-Issue/m-p/4052913#M3182
I tried Scala with this new connector but got an access error:
spark.read.synapsesql("<lakehouse name>.<schema name>.<table name>") cannot access my Lakehouse SQL endpoint.
Yes, it's only Scala for now
We are also looking for another feature: creating regular views in the Lakehouse SQL endpoint from a notebook, without using a Script activity in a pipeline.
Hi @yongshao ,
-- At this point, is Scala the only language for this feature?
You are correct. One of the current limitations is that only Scala is supported.
Spark connector for Microsoft Fabric Synapse Data Warehouse - Microsoft Fabric | Microsoft Learn
The notebook itself does not directly provide database management functionality, such as creating views.
-- With the script pipeline we have to manually establish the connection of lakehouse SQL endpoint, which cannot be automated.
Please try the following:
1. Create a new pipeline and call the notebook from it; the logic of the notebook code is to connect to the Lakehouse and store the view's data into a table.
2. After that, create a Copy data activity in the pipeline, with the Lakehouse table as the source and the Warehouse as the destination.
3. Finally, create a schedule for the pipeline.
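The notebook step above might look like this minimal PySpark sketch (the view SELECT and target table name are hypothetical placeholders, not values from this thread):

```python
# Sketch of the notebook logic called from the pipeline: run the view's
# SELECT against the Lakehouse and persist the result as a table, which
# the Copy data activity then moves to the Warehouse.

def materialize_sql(view_select: str, target_table: str) -> str:
    # CREATE OR REPLACE keeps the run idempotent, so the pipeline
    # schedule can safely re-run it on every trigger.
    return f"CREATE OR REPLACE TABLE {target_table} AS {view_select}"

def refresh_view_table(spark, view_select: str, target_table: str):
    # In a Fabric notebook, `spark` is the pre-created SparkSession.
    spark.sql(materialize_sql(view_select, target_table))

# Example call inside the notebook (names are illustrative):
# refresh_view_table(spark, "SELECT * FROM dbo.orders WHERE status = 'open'",
#                    "open_orders_snapshot")
```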
Best Regards,
Gao
Community Support Team
If any post helps, then please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
How to get your questions answered quickly -- How to provide sample data in the Power BI Forum