Scalkins1743
Frequent Visitor

Report returns an error in a visual after promotion in deployment pipeline that is not seen in the 1st stage

We are using deployment pipelines to promote content developed in our [Test] workspaces. Our semantic models are kept in a separate workspace from our reports for workflow purposes, so there are two separate deployment pipelines: SM_Workspace [Test] and Workspace [Test]. We use a single source Lakehouse, leveraging shortcuts, in another workspace called Lakehouse EDW HUB.

 

We have a semantic model and report that were recently deployed from our [Test] workspaces to the [QA] workspaces, using the same Lakehouse as the source for the semantic models. After promotion, most of the report works without issue, but our developers noticed some visuals were erroring out. They confirmed there is no difference in visual filters or configuration between [Test] and [QA].

 

As an example of one of the errors, a bar graph visual is erroring out with the message "The resultset of a query to external data source has exceeded the maximum allowed size of '1000000'". I confirmed the issue does not occur in the [Test] version of the report. I collected the DAX statement from the visual in TEST, collected a similar DAX statement from QA (I had to filter the date range down from 12 months to 2 months), and compared the two. The QA statement has some minor differences in the order of the filter clauses, and an odd exclusion of the field parameter statement, but otherwise looks the same. I then connected via DAX Studio and ran the DAX statement that was working in TEST against the QA semantic model, and received the same error message seen in the report. I reconnected to TEST and validated that the exact same DAX query runs without issue there.
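As an aside, the 12-months-to-2-months workaround above generalizes: if you need to pull results from a model that trips the resultset cap, you can split the date filter into smaller windows and query each window separately. A minimal sketch of the window-splitting step (the 2-month default mirrors what worked here; how you feed each window into a DAX filter is up to you):

```python
from datetime import date, timedelta

def month_windows(start, end, months_per_window=2):
    """Split the inclusive [start, end] date range into consecutive
    windows of at most `months_per_window` calendar months, so each
    query's filter covers a smaller slice of data."""
    windows = []
    y, m = start.year, start.month
    while date(y, m, 1) <= end:
        # Advance months_per_window months from (y, m).
        total = y * 12 + (m - 1) + months_per_window
        ny, rem = divmod(total, 12)
        nm = rem + 1
        win_start = max(start, date(y, m, 1))
        win_end = min(end, date(ny, nm, 1) - timedelta(days=1))
        windows.append((win_start, win_end))
        y, m = ny, nm
    return windows
```

For example, a 12-month range with the default window size yields six 2-month windows, each usable as a date filter in its own query.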

 

When I run server timings and look at the logs, the query is for some reason being handled differently by the QA model than by the TEST model. I was hoping someone may have encountered an issue like this, or has some ideas on other items to check, as I'm a little stumped.

 

Please let me know what additional details I can provide, thank you in advance for any help.

1 ACCEPTED SOLUTION

Yes, we had the issue resolved with Microsoft by initiating a manual refresh of the semantic model. This is even though continuous sync is turned on for automatic updates through Direct Lake.

 

The explanation is that calculations do not automatically sync in the new workspace after deployment, so a manual refresh is necessary for them to display correctly. We're adding this to our deployment procedure to ensure this is done going forward.
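If you are adding this refresh step to a deployment procedure, it can also be triggered programmatically via the Power BI REST API's "Refresh Dataset In Group" endpoint. A minimal sketch that only builds the request; the GUIDs are placeholders, and obtaining the Azure AD bearer token and sending the POST are left to your HTTP client of choice:

```python
def build_refresh_request(workspace_id, dataset_id):
    """Build the URL and JSON body for the Power BI REST API
    'Refresh Dataset In Group' call. Send it as a POST with an
    'Authorization: Bearer <token>' header."""
    url = (
        "https://api.powerbi.com/v1.0/myorg/"
        f"groups/{workspace_id}/datasets/{dataset_id}/refreshes"
    )
    # notifyOption controls completion emails; see the API docs
    # for other body options.
    body = {"notifyOption": "NoNotification"}
    return url, body

# Example with placeholder GUIDs:
url, body = build_refresh_request("<workspace-guid>", "<dataset-guid>")
```

Calling this right after the pipeline promotes the semantic model would make the manual-refresh step repeatable rather than something a developer has to remember.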


6 REPLIES
Shahid12523
Community Champion

The 1,000,000 row error in QA happens because the promoted semantic model isn’t identical to Test — field parameters, relationships, or RLS are being applied differently, so queries return bigger resultsets.

 

Fix: compare the Test and QA models in Tabular Editor or over the XMLA endpoint; check field parameters, RLS, and Lakehouse shortcuts. Re-deploy the model if they are mismatched.
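One lightweight way to do that comparison without Tabular Editor is to export each model's definition (e.g. a model.bim JSON via the XMLA endpoint) and diff the two documents. A minimal sketch over two already-exported definitions; the sample dicts below are illustrative stand-ins, not the real models from this thread:

```python
def diff_model(bim_a, bim_b):
    """Compare table and measure names between two Tabular model
    definitions (model.bim JSON); return the names present in only
    one model or the other."""
    def names(bim):
        out = set()
        for t in bim.get("model", {}).get("tables", []):
            out.add(("table", t["name"]))
            for m in t.get("measures", []):
                out.add(("measure", m["name"]))
        return out
    a, b = names(bim_a), names(bim_b)
    return sorted(a - b), sorted(b - a)

# Illustrative inputs standing in for the Test and QA exports.
test_bim = {"model": {"tables": [
    {"name": "Sales", "measures": [{"name": "Total Sales"}]},
    {"name": "FieldParams", "measures": []},
]}}
qa_bim = {"model": {"tables": [
    {"name": "Sales", "measures": [{"name": "Total Sales"}]},
]}}

only_test, only_qa = diff_model(test_bim, qa_bim)
```

A field-parameter table missing from the QA export, as in this toy example, would line up with the "odd exclusion of the field parameter statement" the original poster saw in the QA query.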

Shahed Shaikh
GeraldGEmerick
Responsive Resident

@Scalkins1743 It really sounds like you are extremely deep into the weeds here. I would suggest looking at the Known Issues list for Power BI and Fabric to see if anything fits. You can also open a support ticket there if necessary.

Microsoft Fabric Known Issues

 

A couple of questions, though. One, do any parameters change at all between Test and QA for either pipeline? Currently, it sounds like everything is exactly the same between Test and QA other than the workspace. Two, what happens in Prod?

@gerald there are no defined parameters for these semantic models. We have not promoted the semantic model to PROD due to this issue in QA. As this is a new solution, and the content has not been released to end users with a previous version, I can promote this to PROD, see if the behavior is consistent there as well, and reply with an update.

 

I've opened a support ticket, and will check out the known issues. Thank you for both suggestions!

Hi @Scalkins1743, hope you are doing well. Just wanted to know if your issue has been resolved through your support ticket with Microsoft? If so, we would greatly appreciate it if you could share the insights here, as they may benefit others with similar issues.

 

If you have any other queries, please feel free to raise a new post in the community. We are always happy to help. Thank you.


@Shahid12523 My pleasure! Would be interested to know if you get the same behavior in PROD. Hope you get it resolved!!
