aditridant
Regular Visitor

How to Handle Complex Stored Procedures with DML and Cross-References in Microsoft Fabric?

I'm working on a solution in Microsoft Fabric where I need to accomplish two main goals:

  1. Create tables with full SQL Server features such as primary keys, indexes, constraints, etc.

  2. Execute a complex stored procedure that:

    • Performs cross-referencing between tables.

    • Performs UPSERT (update or insert) operations based on the logic.

Here’s the architectural setup I’m working with:

What’s Working:

  • I can create fully featured tables in the Fabric Database, which supports standard SQL features like PKs, indexes, constraints — great for my data model.

  • For SPs that don’t need cross-reference logic or DML, these work fine inside the Fabric Database.

The Problem:

  • The stored procedure I need to run performs cross-references between tables (think joins across silver and gold) and includes DML (UPDATE, INSERT, etc.).

  • To perform cross-reference logic, I have to use the SQL Endpoint of the Fabric Warehouse.

  • However, SQL Endpoints in Fabric are read-only, so any DML operations fail.

What I'm Considering:

Since I can't run DML inside SQL Endpoints:

  • I could create SPs or views in the SQL Endpoint purely for reading / cross-reference logic.

  • Then use Data Pipelines to do the actual data copy (UPSERT) into the target Fabric table.
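As a sketch of that split, the read-only cross-reference logic could live in the SQL analytics endpoint as a view that a Data Pipeline then reads from. All object and column names here (silver.Customer, gold.DimCustomer, etc.) are hypothetical placeholders, not from the original post:

```sql
-- Hypothetical schema: silver.Customer (source) and gold.DimCustomer (target).
-- The view only reads, so it is valid on the read-only SQL analytics endpoint.
CREATE VIEW dbo.vw_CustomerChanges
AS
SELECT s.CustomerId,
       s.Name,
       s.UpdatedAt
FROM   silver.Customer AS s
LEFT JOIN gold.DimCustomer AS g
       ON g.CustomerId = s.CustomerId
WHERE  g.CustomerId IS NULL           -- new rows to insert
   OR  s.UpdatedAt > g.UpdatedAt;     -- changed rows to update
```

A Data Pipeline Copy activity (or a Notebook) can then read from this view and perform the actual write into the target table, keeping all DML outside the endpoint.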

My Questions:

 

  1. Is there a better way to handle this scenario in Microsoft Fabric?
    This is part of a migration effort, and I already have a large number of stored procedures that tightly couple transformation logic with DML operations.
    Because the SQL Endpoint is read-only, I'm now considering splitting logic — moving cross-reference queries to the SQL Endpoint and doing the actual data writes via Data Pipelines. However, this involves significant effort to refactor and decouple existing procedures.
    Is there a more efficient or recommended pattern in Fabric to avoid this level of rework?
  2. Can the SQL Endpoint be enabled to perform DML operations (writes)?
    Since I'm ultimately using Data Pipelines to perform writes, it feels like I'm already working around a limitation. Is there any plan or supported way to enable write access via the SQL Endpoint, even in a limited capacity, to simplify this architecture?

v-veshwara-msft
Community Support

Hi @aditridant ,

Following up to see if your query has been resolved. If any of the responses helped, please consider marking the helpful reply as the 'Accepted Solution' to assist others with similar questions.

If you're still facing issues, feel free to reach out.

Thank you.

v-veshwara-msft
Community Support

Hi @aditridant ,
Just wanted to check if the response provided has met your needs. If yes, please consider marking it as "Accepted Solution" to assist others with similar queries. If further assistance is needed, please reach out.
Thank you.

v-veshwara-msft
Community Support

Hi @aditridant ,

Thanks for posting in Microsoft Fabric Community.

Regarding your scenario - where the stored procedure needs to perform cross-referencing between tables (such as across Silver and Gold layers) along with DML operations - it’s worth noting that Microsoft Fabric does support cross-database and cross-warehouse queries within the same workspace.

This means you can:

  1. Join tables from different SQL Databases or Warehouses using three-part naming (DatabaseName.SchemaName.TableName)

  2. Perform DML operations like INSERT INTO ... SELECT FROM across these items

  3. Include this logic inside stored procedures, provided all referenced objects are part of the same workspace.

For example:

INSERT INTO ContosoWarehouse.dbo.Affiliation
SELECT *
FROM My_Lakehouse.dbo.Affiliation;
 

To enable this, ensure the required databases or warehouses are added to Object Explorer using the + Warehouses action. Once added, they can be referenced directly in your SQL queries and stored procedures.

Please follow this link for the step-by-step procedure: Query the SQL Analytics Endpoint or Warehouse - Microsoft Fabric | Microsoft Learn
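Putting the three points above together, the cross-reference plus DML logic can be wrapped in a stored procedure. This is only a sketch: the ContosoWarehouse and My_Lakehouse names come from the example above, while the AffiliationId key column and the dedup check are assumptions for illustration:

```sql
-- Sketch: cross-database read (lakehouse) and write (warehouse) inside one
-- stored procedure, valid when both items are in the same workspace.
CREATE PROCEDURE dbo.usp_LoadAffiliation
AS
BEGIN
    INSERT INTO ContosoWarehouse.dbo.Affiliation
    SELECT l.*
    FROM   My_Lakehouse.dbo.Affiliation AS l
    WHERE  NOT EXISTS (
        SELECT 1
        FROM   ContosoWarehouse.dbo.Affiliation AS w
        WHERE  w.AffiliationId = l.AffiliationId   -- assumed key column
    );
END;
```

The NOT EXISTS guard is one way to keep the load idempotent given that the warehouse does not enforce key constraints.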

 

If your current setup spans multiple workspaces, or if you're using the SQL Endpoint (read-only) for part of the logic, those limitations would still apply. In such cases, using Data Pipelines or Notebooks for the write operations remains a valid alternative.

 

Hope this helps. Please reach out for further assistance.
If this post helps, please consider giving it kudos and accepting it as the solution to help other members find it more quickly.


Thank you.

aditridant
Regular Visitor

Thanks @v-veshwara-msft. Yes, I'm aware of the Data Warehouse functionality, and it's part of my current design. However, it doesn't offer all the SQL features we need, which is why I've been exploring options with Fabric SQL Database.

That said, I’ve now got answers to the questions I had:

  1. Is there a better way to handle this scenario in Microsoft Fabric?
    Not at the moment. The Data Warehouse remains the preferred option, even though it doesn’t support traditional indexes. Internally, it still helps improve performance over time through the use of statistics. While statistics aren't the same as indexes, they do assist the query optimizer in execution planning, which can lead to performance improvements. For enforcing uniqueness (like primary keys), the current workaround is to adjust stored procedures to ensure only unique records are ingested. While not ideal, this approach works for our current use case. 
  2. Can the SQL Endpoint be enabled to perform DML operations (writes)?
    As of now, this isn't on the roadmap.
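The workaround described in point 1 (adjusting stored procedures so only unique records are ingested) is commonly expressed as a two-step UPDATE-then-INSERT upsert. The table and column names below are hypothetical, and this assumes your environment supports multi-statement transactions:

```sql
-- Hypothetical tables: silver.Customer (source), gold.DimCustomer (target).
-- Uniqueness is enforced procedurally, since warehouse constraints are
-- metadata-only and will not block duplicates.
BEGIN TRANSACTION;

-- 1) Update rows that already exist in the target.
UPDATE tgt
SET    tgt.Name      = src.Name,
       tgt.UpdatedAt = src.UpdatedAt
FROM   gold.DimCustomer AS tgt
JOIN   silver.Customer  AS src
       ON src.CustomerId = tgt.CustomerId;

-- 2) Insert only the rows that are not present yet.
INSERT INTO gold.DimCustomer (CustomerId, Name, UpdatedAt)
SELECT src.CustomerId, src.Name, src.UpdatedAt
FROM   silver.Customer AS src
WHERE  NOT EXISTS (
    SELECT 1
    FROM   gold.DimCustomer AS tgt
    WHERE  tgt.CustomerId = src.CustomerId
);

COMMIT;
```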

v-veshwara-msft
Community Support

Hi @aditridant ,

Thanks for the detailed response and for confirming how you're approaching it.

Just to add - while Fabric Warehouse is your current choice, it's good to be aware that it does support defining PRIMARY KEY, UNIQUE, and FOREIGN KEY constraints, but only with some conditions. Specifically, they must be declared as NONCLUSTERED and NOT ENFORCED. This means the constraints are available in metadata and can help with modeling (like relationships in Power BI), but the system doesn’t actually enforce them. So, duplicates can still be inserted unless handled manually.

Ref: Primary, Foreign, and Unique Keys - Microsoft Fabric | Microsoft Learn
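For reference, declaring such a constraint looks like the sketch below (table and column names are made up for illustration). Note the required NONCLUSTERED and NOT ENFORCED keywords:

```sql
-- Illustrative table. The constraint is metadata-only in Fabric Warehouse:
-- it helps with modeling but does NOT prevent duplicate CustomerId values.
CREATE TABLE dbo.DimCustomer
(
    CustomerId INT NOT NULL,
    Name       VARCHAR(100)
);

ALTER TABLE dbo.DimCustomer
ADD CONSTRAINT PK_DimCustomer
    PRIMARY KEY NONCLUSTERED (CustomerId) NOT ENFORCED;
```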

That aligns with your current approach of using stored procedures to make sure only unique data gets in. It’s a practical workaround for now given the platform’s behavior.

 

Also, as you noted, the SQL Endpoint remains read-only and can’t be used for DML operations. There’s no support for enabling writes through it at this point.

 

Thanks again for sharing, and please consider marking your reply as the accepted solution to help others with similar queries. Also, please continue using the Fabric Community for any further questions. We are happy to help.

 

Regards,
Vinay kumar.
