Hi,
I am developing a custom table visual to create a writeback solution. I am using DirectQuery with a Databricks SQL warehouse as my backend. I keep running into the error "The resultset of a query to external data source has exceeded the maximum allowed size of '1000000' rows."
I have set the data reduction algorithm to 100 rows and still get the error. I can load a number of columns from my fact table, but as soon as I add a related column from my dim table the error appears.
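For reference, the data reduction setting in my capabilities.json looks roughly like this (trimmed down; the data role name is a placeholder):

```json
"dataViewMappings": [
    {
        "table": {
            "rows": {
                "for": { "in": "values" },
                "dataReductionAlgorithm": {
                    "top": { "count": 100 }
                }
            }
        }
    }
]
```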
Any idea what can cause this?
@dm-p Thanks for the response. I checked both the DAX query generated by the visual and the query that is run in Databricks.
The DAX TOPN query defines N as the count specified in the data reduction algorithm + 1, so in my case it's 101.
The strange thing is that the query run in Databricks has a limit of 1,000,001 rows no matter what my data reduction algorithm is set to, but when I remove the dimension field the Databricks query shows the limit set to 100.
What is also strange is that the cardinality between the fact and dim tables is only 46, so it's really small.
The same limit is hit with the standard table visual, so I am guessing this is a feature and not an issue with my code.
Hi @kpia,
The data reduction algorithm should apply a TOPN in the DAX query that Power BI generates, but it looks like adding the dimension causes an issue with either the query that Power BI generates on your visual's behalf, or the query that Power BI then issues from that DAX to the back-end. It may need to pull more information from the remote data source, and this could be breaching the row limit that DirectQuery has. As the data reduction algorithm is the only means you have to limit the outgoing query from a custom visual, this may be a "feature" of DirectQuery and/or how Power BI accesses your data in this scenario.
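To illustrate, the reduced query generated for a table-style visual typically has roughly this shape (a sketch only, not the exact query you'll see; the table and column names are placeholders), with N being your reduction count + 1:

```dax
// Illustrative only; the actual generated query will differ
EVALUATE
TOPN (
    101, // data reduction count + 1
    SUMMARIZECOLUMNS (
        'Fact'[OrderId],
        'Dim'[Category],
        "Amount", SUM ( 'Fact'[Amount] )
    )
)
```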
The easiest thing to do would be to profile both the DAX being generated by your visual (through Performance Analyzer) and the query being run against Databricks, and see what the resulting dataset looks like (I don't have much experience on the Databricks side to advise how to do that). If it is >1M rows, then it suggests that this is indeed a DirectQuery-related scenario. FWIW, if you can replicate this with the core table visual, it reinforces that it's a limitation, as that visual has its own data reduction algorithm and will paginate additional queries after the first query has returned its data.
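If you want to experiment with the same pattern in your own visual, the windowed fetch looks roughly like this (a sketch only; it assumes a "window" dataReductionAlgorithm in capabilities.json and uses the standard fetchMoreData host API):

```typescript
import powerbi from "powerbi-visuals-api";
import IVisualHost = powerbi.extensibility.visual.IVisualHost;
import VisualUpdateOptions = powerbi.extensibility.visual.VisualUpdateOptions;
import VisualConstructorOptions = powerbi.extensibility.visual.VisualConstructorOptions;

export class Visual implements powerbi.extensibility.visual.IVisual {
    private host: IVisualHost;

    constructor(options: VisualConstructorOptions) {
        this.host = options.host;
    }

    public update(options: VisualUpdateOptions): void {
        const dataView = options.dataViews && options.dataViews[0];
        if (!dataView || !dataView.table) {
            return;
        }

        // With the default (aggregating) fetch behaviour, table.rows already
        // contains every row retrieved so far, so it can be rendered directly.
        const rows = dataView.table.rows;

        // metadata.segment is only present while more windows remain;
        // fetchMoreData() asks the host for the next one and triggers another update().
        if (dataView.metadata.segment) {
            this.host.fetchMoreData();
        }

        // ... render rows ...
    }
}
```

That pages the results in the same way the core table does, although it may not get you past the DirectQuery limit if the underlying query itself still has to return more than 1M rows.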