Stevox
Regular Visitor

Notebook connect

Hello, can I connect my Microsoft Fabric notebook directly to an Azure SQL database where I have only read-only access, so that I can scan through the database and save only the information I need?


3 REPLIES
Stevox
Regular Visitor

Hi @sahasra,

Thank you for the clarification.

In my case, I only have the Azure SQL connection string using SQL authentication (username and password), and I don’t have access to configure the firewall or set up a Service Principal. While I’m able to connect successfully using Copy Data and Dataflows within Fabric, any attempt to connect using Spark’s JDBC connector in a notebook results in a firewall error — even though I’ve added the connection under my workspace connections.
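
For reference, the kind of call I'm attempting looks roughly like this (server, database, table, and credentials are placeholders):

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>;encrypt=true")
    .option("dbtable", "dbo.SomeTable")
    .option("user", "<sql-username>")      # SQL authentication
    .option("password", "<sql-password>")
    .load()  # this is where the firewall error is raised
)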

The notebook I’m working with performs custom incremental loading across multiple tables, so direct access via Spark would be ideal for maintaining that workflow.

Is there a way to use the registered workspace connections directly within the notebook, without using a Copy or Dataflow activity to copy the entire dataset into a staging table every time? Or is there an alternative method that works with my current access level?

Appreciate your guidance on whether there’s a way to go about this within the current Fabric limitations.

Thanks,

Stevox

Hi @Stevox,

Thank you for the follow-up.
Given your current setup, using spark.read.jdbc() to connect directly from a Microsoft Fabric notebook won't work because you only have the SQL connection string with SQL authentication and no access to configure firewall rules or create a Service Principal.
The Spark engine in notebooks uses separate outbound IP addresses, which are not permitted by default in Azure SQL firewall settings, causing the connection failure. Although Copy Data and Dataflows succeed, they use Fabric's managed integration runtimes with internal network access. Unfortunately, workspace connections created in Fabric cannot be accessed directly from notebooks at this time and are only available to UI-based tools like Dataflows, Pipelines, and Copy Activity.
As a workaround, you can use Copy Data or Dataflows to load filtered or minimal required data into a staging Lakehouse table, then run the incremental logic from your notebook. This keeps your notebook-based workflow intact without re-ingesting full datasets each time. Until Microsoft enables native notebook access to workspace connections or shared credentials, this workaround remains the most reliable with your current access level.
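
To illustrate the notebook half of this workaround, a minimal sketch could look like the following, assuming Copy Data or a Dataflow has already landed the new rows in a staging Lakehouse table (the table names and key column below are placeholders):

from delta.tables import DeltaTable

# New/changed rows landed by Copy Data or a Dataflow
staged = spark.read.table("staging_orders")

# Merge them incrementally into the existing target Delta table
target = DeltaTable.forName(spark, "orders")
(
    target.alias("t")
    .merge(staged.alias("s"), "t.OrderID = s.OrderID")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

Because the staging table holds only the filtered or changed rows, the merge stays cheap and your existing incremental logic carries over largely unchanged.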

Hope my suggestion gives you a good idea of the options. If you have any more questions, please feel free to ask; we are here to help you.
If this post helps, please consider accepting it as the solution, to help the other members find it more quickly.

 

Best regards,
Sahasra.

v-sgandrathi
Community Support

Hi @Stevox,

Thank you for reaching out to the Microsoft Fabric Community forum!

 

Yes, you can connect your Microsoft Fabric notebook to an Azure SQL Database with read-only access, but not through pyodbc. Fabric notebooks operate in a managed Spark environment, which does not currently support direct ODBC-based connections like pyodbc or sqlalchemy.

The recommended approach is to use Spark’s JDBC connector with Azure Active Directory (AAD) authentication, typically via a Service Principal. With the correct permissions, this allows you to query the Azure SQL Database, scan the data, and store only what you need, such as saving filtered results to a file or OneLake.

Here’s the process:

  • Register an Azure AD app and assign it read-only access to the database.
  • Use Spark’s read.jdbc() method with your client ID and secret to authenticate.
  • Save the output as needed within Fabric (e.g., as Parquet or Delta files in your workspace).
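
As a rough sketch (assuming the SQL Server JDBC driver bundled with the Fabric Spark runtime, and placeholder values for the server, database, and app registration), the notebook code could look like this:

jdbc_url = (
    "jdbc:sqlserver://<your-server>.database.windows.net:1433;"
    "database=<your-database>;encrypt=true;trustServerCertificate=false;"
)

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.YourTable")  # or "(SELECT ...) AS t" to push filtering down to SQL
    .option("user", "<app-client-id>")  # Service Principal (app) client ID
    .option("password", "<client-secret>")  # Service Principal secret
    .option("authentication", "ActiveDirectoryServicePrincipal")
    .load()
)

# Keep only what you need, then persist it in your workspace as Delta
df.filter("ModifiedDate >= '2025-01-01'").write.format("delta").mode("overwrite").saveAsTable("filtered_copy")

Note that the dbtable option also accepts a subquery, so the filtering can be pushed down to Azure SQL and only the rows you intend to keep are pulled across.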

Note: If you're using Managed Identity, support is limited but evolving, so Service Principal is the most stable choice for now.

 

If this solution worked for you, kindly mark it as the accepted solution and feel free to give a Kudos; it would be much appreciated!

 

Thank you, 
Sahasra

Community Support Team.

 
