Hi,
I'm encountering the error 'Max iterations (100) reached for batch Resolution, please set 'spark.sql.analyzer.maxIterations' to a larger value.' while executing a Spark SQL script from the notebook. The script is not complex; it queries 1K records from a delta table in Lakehouse A in workspace A and compares them with a delta table in Lakehouse B in workspace B, writing the differences to the delta table in Lakehouse B.
Hello @PraveenVeli
In Spark SQL, "iterations" refers to the number of passes the query analyzer makes over the logical query plan to resolve references, infer types, and apply optimizations.
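If you want to see what your session is currently using, here is a quick check from the notebook (just a sketch; the key may not be explicitly set, so the fallback below is the 100 from your error message):

# Sketch: inspect the current analyzer iteration limit, assuming `spark` is the
# active SparkSession in a Fabric notebook. Falls back to 100 if the key is unset.
print(spark.conf.get("spark.sql.analyzer.maxIterations", "100"))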
Why This Applies to Your Fabric Scenario
1. Workspace Boundary Resolution
Fabric treats Lakehouses in different workspaces as separate catalogs, forcing Spark to:
• Verify table existence in both environments
• Reconcile schemas across workspaces
• Handle potential credential handoffs
Even for a 1K-row comparison, the query implicitly creates nested plans for (see the sketch after this list):
1) Data fetch from Lakehouse A
2) Data fetch from Lakehouse B
3) Join operation
4) Delta transaction log checks
5) Insert operation
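To make that concrete, here is a rough PySpark sketch of what such a cross-workspace comparison can look like. All table names and the key column are hypothetical, and it assumes both Lakehouses are reachable from the notebook (for example, Lakehouse A attached or shortcut into the workspace):

# Hypothetical sketch only -- table names, key column, and write mode are assumptions.
df_a = spark.read.table("lakehouse_a.source_table")   # ~1K rows from workspace A
df_b = spark.read.table("lakehouse_b.target_table")   # comparison table in workspace B

# Rows present in A but missing from B (simple key-based difference)
diff = df_a.join(df_b, on="primary_key", how="left_anti")

# Write the differences to a Delta table in Lakehouse B
diff.write.format("delta").mode("append").saveAsTable("lakehouse_b.diff_table")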
Try raising the analyzer iteration limit:
spark.conf.set("spark.sql.analyzer.maxIterations", "200")
Then run df.explain(mode="extended") and look for Cartesian products or complex subquery patterns in the plan.
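Put together in one notebook cell it could look like this (df stands for whatever DataFrame your comparison produces; the trivial query below is only a placeholder):

# Sketch: raise the analyzer iteration limit, then inspect how the plan resolves.
spark.conf.set("spark.sql.analyzer.maxIterations", "200")

# Substitute your actual comparison query for this placeholder.
df = spark.sql("SELECT 1 AS id")
df.explain(mode="extended")   # look for Cartesian products or deeply nested subqueries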
You can also try compacting and Z-ordering the Delta table on its join key:
OPTIMIZE delta_table ZORDER BY primary_key;
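From the notebook that could be run as, for example (table and key names are placeholders for your own):

# Hypothetical table/column names -- substitute your own Delta table and join key.
spark.sql("OPTIMIZE lakehouse_b.target_table ZORDER BY (primary_key)")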
Please give it a try and let me know if it works.
Thank you @nilendraFabric, the information you provided helps greatly. In my case (along the same lines as what you said), the issue was that I used incorrect column names in my CTE, which caused this error. I'm a bit surprised it doesn't generate a more relevant error message. I have a CTE (that retrieves data from Lakehouse A in Workspace B) and then use it within the MERGE statement to integrate data into Lakehouse B in Workspace B. If a column name does not match the destination (Lakehouse B), it throws an actual error indicating that it can't find the field name. I'm running my notebook in Workspace B.
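For anyone hitting the same thing, here is a minimal sketch of the pattern described above (all table and column names are hypothetical, and the CTE is written as a subquery source for the MERGE):

# Hypothetical sketch of the CTE-feeds-MERGE pattern. If the column selected from
# Lakehouse A is misspelled here (e.g. customer_idd instead of customer_id), the
# analyzer max-iterations error above can appear instead of a clear
# column-not-found message.
spark.sql("""
    MERGE INTO lakehouse_b.target_table AS tgt
    USING (
        SELECT customer_id, amount
        FROM lakehouse_a.source_table
    ) AS src
    ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET tgt.amount = src.amount
    WHEN NOT MATCHED THEN INSERT (customer_id, amount) VALUES (src.customer_id, src.amount)
""")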
Hi @PraveenVeli
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the answer faster.
Thank you.