I’m working in a workspace in Microsoft Fabric where I’ve set up a notebook intended to connect to a SQL Mirror Database hosted in the same Fabric environment. However, when I try using the “Connect to Source” option in the notebook, the database doesn’t appear in the list—only Lakehouses from other workspaces show up.
Is there a recommended way to connect a Spark notebook to a SQL Mirror Database in Fabric?
Am I missing a configuration step, or is there a specific connection string or method I should use manually?
Appreciate any guidance or examples from those who’ve set this up successfully!
Careful though with Dataflow Gen 2 (DFg2): it consumes a lot more capacity units (CUs) than doing the same work with either a SQL script or a PySpark notebook in a pipeline, as in the sketch below.
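For reference, here is a minimal, hypothetical sketch of the notebook route: a PySpark cell (which a pipeline can then schedule) that reads a mirrored table exposed through a OneLake shortcut, as described further down the thread, and materializes it as a Lakehouse Delta table. The table names are placeholders.

```python
# Assumes the notebook has a default Lakehouse attached that contains a OneLake
# shortcut ("dbo_orders") pointing at a table in the mirrored SQL database.
src = spark.read.table("dbo_orders")           # read the mirrored table via the shortcut

(
    src.filter("order_status = 'OPEN'")        # example transformation
       .write.mode("overwrite")
       .format("delta")
       .saveAsTable("open_orders")             # materialize in the attached Lakehouse
)
```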
Hi @abhidotnet
Rather than writing Python or Scala code in a notebook, you could use the Dataflow Gen 2 UI with the Mirrored SQL Database as a source (click, click, click and you're done).
Hi @abhidotnet
Thank you for reaching out to the Microsoft Fabric Forum Community.
@ToddChitt Thanks for your valuable inputs.
It's also a good idea to consider the suggestions above; adding another point below.
The Connect to Data option in Fabric notebooks currently lists only Lakehouses and KQL databases, so a mirrored SQL database will not appear there even when it is in the same workspace. To work around this, you can create a Lakehouse and add a OneLake shortcut that points to the mirrored SQL database, then read the shortcut tables from the notebook. Please refer to the document below.
Explore Data in Your Mirrored Database With Notebooks - Microsoft Fabric | Microsoft Learn
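To illustrate, here is a minimal sketch of reading such a shortcut from a Fabric notebook, assuming the Lakehouse that holds the shortcut is attached as the notebook's default Lakehouse; the table name "dbo_customers" is a placeholder.

```python
# Tables surfaced through OneLake shortcuts appear as regular Lakehouse tables,
# so the standard Spark table APIs work against them.
df = spark.read.table("dbo_customers")
df.show(5)

# Spark SQL works against the same shortcut table as well.
spark.sql("SELECT COUNT(*) AS row_count FROM dbo_customers").show()
```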
Please feel free to reach out here if you need further assistance.
Thanks.
This may not be a real solution to your issue, but WHY do you want to connect a Notebook to a Mirrored SQL database? Typically, you use Notebooks to collect and process non-tabular data sources (Excel, JSON, XML, etc.) and from there dump it into tabular destinations (read: tables).
If data is ALREADY in tables in the mirrored database, why not use T-SQL to access and process it?
Proud to be a Super User!
Because you can do many things more easily, and in a saner way, in Python than with the antediluvian 4GL that SQL is. That would be my reason #1.
#2: Python gives you a wealth of data science and data engineering libraries that simply don't exist in the SQL world (see the sketch below).
And on and on it goes.
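As an illustration only (the table and column names are made up), once the mirrored data is readable from the notebook, the usual Python stack is right there:

```python
import pandas as pd

# Pull a (small) mirrored table into pandas for ad-hoc analysis.
pdf = spark.read.table("dbo_sales").toPandas()

monthly = (
    pdf.assign(month=pd.to_datetime(pdf["order_date"]).dt.to_period("M"))
       .groupby("month")["amount"]
       .sum()
)
print(monthly.head())
```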