austworks19
New Member

T-SQL magic cell with parameters in for loop Fabric Python Notebook

Hello, I have a Python notebook in Fabric that assesses data quality errors on source tables across various workspaces, lakehouses, and warehouses. I have an input delta table whose rows contain workspace names, lakehouse names, and T-SQL queries. My goal is to use the Fabric REST API to map the workspace and lakehouse names to their IDs (done), then run the T-SQL against the SQL endpoint associated with the given workspace and lakehouse. I am unable to change workspaces using spark.sql(), and am hoping to use the T-SQL magic function described in this article: https://fabric.guru/querying-sql-endpoint-of-fabric-lakehousewarehouse-in-a-notebook-with-t-sql

The issue is that configuring the workspace and lakehouse requires individual cells (one per query), but I would like to do this in a for loop iterating through my input delta table row by row. Any ideas on how I can achieve this?

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @austworks19,

Thanks for reaching out to the Microsoft Fabric community forum. You're absolutely right: Fabric notebooks using spark.sql() are scoped to the default lakehouse of the Spark session, and unfortunately, you can't dynamically switch workspaces or lakehouses mid-session using Spark commands.

Your workaround of constructing the full abfss path and replacing table references with backtick-enclosed paths is spot on and probably the most flexible approach for your use case.

As for using the %%tsql magic function from the blog you linked, you're also correct that it requires separate cells for each query, which doesn't play well with loops or dynamic execution.

If you're looking to keep everything inside a loop, here's a suggestion: continue using spark.sql() but dynamically rewrite each query to use the full abfss path. You can store the rewritten queries in a list and iterate through them, executing each with spark.sql().

This way, you avoid the cell-per-query limitation and still get access to the correct lakehouse data.
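For example, a rough sketch of that loop (the input table name, column names, and the name-to-ID lookup dicts below are placeholders for whatever you already built from the REST API; the OneLake abfss URL pattern and Spark's delta.`path` syntax are the standard pieces):

# Assumed input: a delta table with columns workspace_name, lakehouse_name,
# table_name, and tsql_query, plus two dicts built earlier from the Fabric REST API.
rows = spark.read.table("dq_input_queries").collect()

rewritten_queries = []
for row in rows:
    workspace_id = workspace_name_to_id[row["workspace_name"]]
    lakehouse_id = lakehouse_name_to_id[row["lakehouse_name"]]

    # Full OneLake path to the table's delta folder
    abfss_path = (
        f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse_id}/Tables/{row['table_name']}"
    )

    # Point the query at the delta path instead of the session's default lakehouse
    rewritten_queries.append(
        row["tsql_query"].replace(row["table_name"], f"delta.`{abfss_path}`")
    )

# Execute each rewritten query with spark.sql()
for query in rewritten_queries:
    spark.sql(query).show()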

 

I would also like to take a moment to thank @tackytechtom for actively participating in the community forum and for the solutions you've been sharing. Your contributions make a real difference.

 

If I have misunderstood your needs or you still have problems, please feel free to let us know.

Best Regards,
Hammad.

View solution in original post

5 REPLIES
Nathan_Mosher
Advocate I

I'm doing something like the code below to loop through T-SQL commands in a Python notebook. I rewrote it slightly so it uses fictitious data, but I think you'll get the point from the example. Creating views has to be done one at a time, so what I did was for a different purpose, but it's a similar situation in that T-SQL needs to be iterated in a for loop.



from IPython import get_ipython
import sempy.fabric as fabric

ipython = get_ipython()

# Example test input
view_scripts = {
    "vw_SalesOrders": (
        "CREATE OR ALTER VIEW [gold_sales].[vw_SalesOrders] AS\n"
        "SELECT OrderID, CustomerID, OrderDate, TotalAmount\n"
        "FROM bronze_orders.Orders",
        "Sample Workspace",
        "Sample Lakehouse"
    ),
    "vw_Products": (
        "CREATE OR ALTER VIEW [gold_sales].[vw_Products] AS\n"
        "SELECT ProductID, ProductName, Category, Price\n"
        "FROM bronze_catalog.Products",
        "Sample Workspace",
        "Sample Lakehouse"
    ),
    "vw_Customers": (
        "CREATE OR ALTER VIEW [gold_sales].[vw_Customers] AS\n"
        "SELECT CustomerID, FirstName, LastName, Email\n"
        "FROM bronze_customers.Customers",
        "Sample Workspace",
        "Sample Lakehouse"
    )
}

# Create views
for view_name, (view_script, workspace_name, item_name) in view_scripts.items():
    print(f"\nCreating view: {view_name} in {item_name} of {workspace_name}")
    try:
        # Resolve the workspace name to its GUID via semantic-link (sempy)
        workspace_id = fabric.resolve_workspace_id(workspace_name)
        # Arguments for the %%tsql cell magic: target artifact, artifact type, and workspace
        run_info = f"-artifact {item_name} -type Lakehouse -workspace {workspace_id}"
        
        # Run the SQL via Fabric T-SQL magic
        result = ipython.run_cell_magic("tsql", run_info, view_script)
        
        print(f"Successfully created view: {view_name} on {item_name}")
    except Exception as e:
        print(f"Failed to create view {view_name} on {item_name}: {e}")




@Anonymous this is exactly what I ended up implementing, thank you for your reply!

tackytechtom
Most Valuable Professional

Hi @austworks19 ,

 

That's a cool use case!

 

Couldn't you do something like this? (note, this is pseudo code 🙂 )

for workspace in workspaces:
    for schema in schemas:
        for table in tables:
            table_name = f"{workspace.name}.{schema.name}.{table.name}"
            df = spark.sql(f"SELECT * FROM {table_name}")

 

Essentially, just use the full identifier of the table. If this does not work, you might want to try adding spark.sql(f"USE {catalog}.{schema}").

 

Let me know if this helps 🙂

 

/Tom
https://www.tackytech.blog/
https://www.instagram.com/tackytechtom/



Did I answer your question? ➡️ Please mark my post as a solution ✔️

Also happily accepting Kudos 🙂

Feel free to connect with me on LinkedIn!

#proudtobeasuperuser 

Hi Tackytechtom, for some reason Spark cannot access tables inside lakehouses that are not the Spark session's default lakehouse. However, I did find a workaround that works for my use case: dynamically creating the abfss paths from the workspace and lakehouse IDs and replacing table references with their full abfss paths in backticks in the T-SQL, modified to run with spark.sql().
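In case it helps anyone else, the rewriting step could look roughly like the sketch below (the helper names and the regex are illustrative, not the exact code; the OneLake abfss URL pattern and Spark's delta.`path` syntax are the standard parts):

import re

def to_abfss_path(workspace_id: str, lakehouse_id: str, table_name: str) -> str:
    # OneLake abfss path to a lakehouse table's delta folder
    return (
        f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse_id}/Tables/{table_name}"
    )

def rewrite_for_spark(tsql: str, workspace_id: str, lakehouse_id: str) -> str:
    # Replace bare table references after FROM/JOIN with backtick-enclosed
    # delta paths so spark.sql() can read them regardless of the session's
    # default lakehouse. Schema-qualified names would need extra handling.
    def repl(match):
        keyword, table = match.group(1), match.group(2)
        return f"{keyword} delta.`{to_abfss_path(workspace_id, lakehouse_id, table)}`"

    return re.sub(r"\b(FROM|JOIN)\s+(\w+)", repl, tsql, flags=re.IGNORECASE)

# Example:
# query = rewrite_for_spark("SELECT COUNT(*) FROM Orders WHERE OrderDate IS NULL",
#                           workspace_id, lakehouse_id)
# spark.sql(query).show()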
