Hello, please can I connect my Microsoft Fabric notebook directly to an Azure SQL database where I have only read-only access, so I can scan through the database and save only the information I need?
Hi @Stevox,
Thank you for the follow-up.
Given your current setup, using spark.read.jdbc() to connect directly from a Microsoft Fabric notebook won't work because you only have the SQL connection string with SQL authentication and no access to configure firewall rules or create a Service Principal.
The Spark engine in notebooks uses separate outbound IP addresses, which are not permitted by default in Azure SQL firewall settings, causing the connection failure. Although Copy Data and Dataflows succeed, they use Fabric's managed integration runtimes with internal network access. Unfortunately, workspace connections created in Fabric cannot be accessed directly from notebooks at this time and are only available to UI-based tools like Dataflows, Pipelines, and Copy Activity.
As a workaround, you can use Copy Data or Dataflows to load only the filtered, minimum required data into a staging Lakehouse table, then run the incremental logic from your notebook, as in the sketch below. This keeps your notebook-based workflow intact without re-ingesting full datasets each time. Until Microsoft enables native notebook access to workspace connections or shared credentials, this workaround remains the most reliable option at your current access level.
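For illustration, here is a minimal sketch of what the notebook side of that workaround could look like. The table names (`orders_staging`, `orders`) and the merge key (`order_id`) are hypothetical placeholders; substitute your own staging table and target table:

```python
from delta.tables import DeltaTable

# Read the staging table that Copy Data / Dataflows populated.
# "orders_staging" is a hypothetical name - replace with your staging table.
staging_df = spark.read.table("orders_staging")

# Merge the staged rows into the target Delta table incrementally,
# updating existing rows and inserting new ones (an upsert).
target = DeltaTable.forName(spark, "orders")  # hypothetical target table
(
    target.alias("t")
    .merge(staging_df.alias("s"), "t.order_id = s.order_id")  # hypothetical key
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

This way only the staged delta flows through the notebook, and the target table stays current without a full reload on each run.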
Hope my suggestion gives you a good idea. If you have any more questions, please feel free to ask; we are here to help you.
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Best regards,
Sahasra.
Hi @sahasra,
Thank you for the clarification.
In my case, I only have the Azure SQL connection string using SQL authentication (username and password), and I don’t have access to configure the firewall or set up a Service Principal. While I’m able to connect successfully using Copy Data and Dataflows within Fabric, any attempt to connect using Spark’s JDBC connector in a notebook results in a firewall error — even though I’ve added the connection under my workspace connections.
The notebook I’m working with performs custom incremental loading across multiple tables, so direct access via Spark would be ideal for maintaining that workflow.
Is there a way to use the registered workspace connections directly within the notebook without using the copy or dataflow activity to copy the entire dataset into a staging table every time? Or any alternative method that works with my current access level?
Appreciate your guidance on whether there’s a way to go about this within the current Fabric limitations.
Thanks,
Hi @Stevox,
Thank you for reaching out to the Microsoft Fabric Community forum!
Yes, you can connect your Microsoft Fabric notebook to an Azure SQL Database with read-only access, but not through pyodbc. Fabric notebooks operate in a managed Spark environment, which does not currently support direct ODBC-based connections like pyodbc or sqlalchemy.
The recommended approach is to use Spark’s JDBC connector with Azure Active Directory (AAD) authentication, typically via a Service Principal. With the correct permissions, this allows you to query the Azure SQL Database, scan the data, and store only what you need, such as saving filtered results to a file or OneLake.
Here’s the process:
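As a minimal sketch of that approach (every value below is a placeholder assumption, not a real server, database, table, filter, or credential; use your own, and note the Service Principal variant requires a recent Microsoft SQL JDBC driver with AAD support):

```python
# All names below are placeholders - substitute your own values.
server   = "myserver.database.windows.net"   # assumption: your SQL server
database = "mydb"                            # assumption: your database
table    = "dbo.MyTable"                     # assumption: table to scan

jdbc_url = f"jdbc:sqlserver://{server}:1433;databaseName={database};encrypt=true"

connection_properties = {
    "authentication": "ActiveDirectoryServicePrincipal",
    "user": "<service-principal-client-id>",   # placeholder
    "password": "<service-principal-secret>",  # placeholder
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
}

# Read-only scan of the source table via Spark's JDBC connector.
df = spark.read.jdbc(url=jdbc_url, table=table, properties=connection_properties)

# Keep only what you need, then persist it to the Lakehouse / OneLake.
filtered = df.filter("ModifiedDate >= '2024-01-01'")               # hypothetical filter
filtered.write.mode("overwrite").saveAsTable("my_filtered_table")  # hypothetical name
```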
Note: If you're using Managed Identity, support is limited but evolving, so Service Principal is the most stable choice for now.
If this solution worked for you, kindly mark it as the Accepted Solution and feel free to give a Kudos; it would be much appreciated!
Thank you,
Sahasra
Community Support Team.