vmattard
New Member

Service Principal Getting "HTTP request forbidden" Error When Reading from Data Warehouse via API-Triggered Notebook

 

Hi,

 

I'm experiencing a permissions issue when using a service principal to execute a Fabric notebook via API. The notebook successfully reads from a Lakehouse but fails when trying to read from a Data Warehouse in the same workspace, throwing an "HTTP request forbidden" error.

 

Environment Details
  • Fabric Workspace: Same workspace for notebook, lakehouse, and data warehouse
  • Execution Method: API-triggered notebook execution using service principal
  • User Roles: I'm admin/creator, colleagues have Contributor role
  • Service Principal Role: Contributor role in workspace

 

Working Scenarios:
  • Manual execution by me (admin/creator) - works perfectly
  • Manual execution by colleagues with Contributor role - works perfectly  
  • Service principal querying DW locally (outside of Fabric notebook) - works perfectly
  • Service principal reading from Lakehouse in the notebook - works perfectly
Failing Scenario:
  • Service principal executing notebook via API and trying to read from Data Warehouse - fails with error
Error Details
ERROR FabricSparkTDSImplicits$FabricSparkTDSRead [Thread-68]: Error processing read request - HTTP request forbidden.
 
Code snippet
import com.microsoft.spark.fabric
from com.microsoft.spark.fabric.Constants import Constants


view_name = "MyWarehouse.dbo.my_view"
table_name = "MyWarehouse.dbo.my_table"
database_name = "MyWarehouse"


# This line fails when executed by service principal via API
df_table = spark.read.option(Constants.DatabaseName, database_name).synapsesql(f"SELECT * FROM {table_name}")
# or
df_table = spark.read.synapsesql(table_name)
print(f"Table count is {df_table.count()}")


df_view = spark.read.option(Constants.DatabaseName, database_name).synapsesql(f"SELECT * FROM {view_name}")
# or
df_view = spark.read.synapsesql(view_name)
print(f"View count is {df_view.count()}")
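For completeness, this is roughly how the run itself is kicked off (a minimal stdlib-only sketch with placeholder IDs; the client-credentials token endpoint and the Job Scheduler `jobType=RunNotebook` route are the documented pieces, the helper names are my own):

```python
import json
import urllib.parse
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def job_url(workspace_id: str, notebook_id: str) -> str:
    # Fabric Job Scheduler route for running a notebook item on demand
    return (f"{FABRIC_API}/workspaces/{workspace_id}"
            f"/items/{notebook_id}/jobs/instances?jobType=RunNotebook")

def get_sp_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    # Client-credentials flow, scoped to the Fabric API
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://api.fabric.microsoft.com/.default",
    }).encode()
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    with urllib.request.urlopen(url, data=body) as resp:
        return json.load(resp)["access_token"]

def run_notebook(tenant_id, client_id, client_secret, workspace_id, notebook_id):
    token = get_sp_token(tenant_id, client_id, client_secret)
    req = urllib.request.Request(
        job_url(workspace_id, notebook_id),
        data=b"",  # POST with an empty body queues the run
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Expect 202 Accepted; the Location header points at the job instance
        return resp.headers.get("Location")
```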
 
About the Service Principal:
Permissions granted to the Azure service principal (app registration):
  • Capacity.Read.All
  • Lakehouse.ReadWrite.All
  • Tenant.Read.All
  • Warehouse.ReadWrite.All
  • Workspace.Read.All
In the Fabric Warehouse:
  • SQL Permissions: SELECT, INSERT, UPDATE granted
  • Database Roles: db_datareader, db_datawriter
  • Additional Access: "Manage Connections and Gateways" granted for the Data Warehouse

 

Questions

  1. Is there a specific permission or configuration missing for service principals to access Data Warehouse through Spark in API-triggered notebooks?
  2. Are there differences in authentication context between manual execution and API-triggered execution that could cause this issue?
  3. Could this be related to connection pooling or session management when using service principals in Fabric notebooks?

 

What I've Tried
  • Verified all standard permissions are in place
  • Confirmed service principal works locally for DW queries
  • Confirmed lakehouse access works in the same notebook
  • Verified workspace contributor role is assigned
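One more diagnostic I can think of: decoding the token the Spark session actually holds inside the notebook, to confirm which identity it carries (hypothetical check; it assumes `mssparkutils.credentials.getToken` is available in the Fabric notebook runtime):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT payload (no signature check) to inspect identity claims."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Inside the notebook (assumes mssparkutils is in scope in the Fabric runtime):
# claims = jwt_claims(mssparkutils.credentials.getToken("pbi"))
# # An "appid" claim with no "upn" suggests the run executes as the service principal
# print(claims.get("appid"), claims.get("upn"))
```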


Any insights or suggestions would be greatly appreciated!

 

Vincent

v-prasare
Community Support

Hi @vmattard,

We would like to confirm whether your query got resolved or if you need further help. If you still have any questions or need more support, please feel free to let us know. We are happy to help you.

Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support

v-prasare
Community Support

Hi @vmattard,

 

In Fabric, workspace roles (Admin, Member, Contributor, Viewer) govern high-level operations (create/delete/edit items). However, Spark synapsesql() queries check the item-level permission (Build/Read/Contributor) on the specific Data Warehouse.

That means:

  • A service principal with workspace Contributor can create/edit items in the workspace.

  • But unless you go into the Data Warehouse's Manage permissions pane and grant the service principal Build (or Contributor) on that warehouse item itself, Spark-to-DW queries from the notebook API will still get blocked.

That’s why you see:

  • Lakehouse access works (because Lakehouse Contributor is implied by workspace Contributor).

  • Warehouse access fails (because Spark uses the DW’s item ACL, not just workspace role).

Please refer to this doc for more detail:

https://learn.microsoft.com/en-us/fabric/data-warehouse/service-principals

Thanks,

Prashanth

MS Fabric community support
