Hi all
I have a composite model with mixed storage modes: my larger fact tables are set to Import and my smaller tables to DirectQuery (to reduce the overall PBIX size while avoiding slow visual refreshes on the larger tables when users interact with the attached reports).
My customer dimension is a large table (14M rows), but my PBIX was becoming bloated (nearly 4 GB in size), so I switched it to DirectQuery, as it's currently not being used a whole lot in modelling (yet). When I switched the customer table to DirectQuery, a number of measures on my (imported) sales fact table returned a 'resultset query exceeded 1m rows' error.
None of the affected measures directly reference the customer dimension. They are simple 'count distinct number of sales orders' type measures, relating only to the sales fact table. When I switched the customer dimension back to Dual storage mode, the measures all returned correctly. So my question is: why would the storage mode of the customer table (which is unrelated to the measures being calculated) affect the resultset of those measures, and how can I avoid the error message?
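For context, the affected measures are all of this general shape (a rough sketch only; 'Sales' and SalesOrderNumber are illustrative names, not the actual ones in my model):

// Illustrative only: a simple distinct count over the imported fact table
Distinct Sales Orders = DISTINCTCOUNT ( 'Sales'[SalesOrderNumber] )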
I add new data sources all the time in Tabular Editor. (I use the paid version 3, but I think it would also work in the free version 2.)
I can send more detailed instructions when I get to a computer (away camping at the moment). Roughly, though:
1) Right-click and add a table.
2) Expand the partitions tree and there will be one with the same name as the table.
3) Click on it and the M expression window should open.
4) In Power BI Desktop, add the data source as normal and do any transforms you need.
5) Copy the M code over into Tabular Editor from the Advanced Editor.
6) Right-click the table and click "Update table schema".
7) Save back to the service (I save to a dev workspace and then deploy it along from there).
Thanks so much! Legend.
Hi @SamMiddleton,
Based on your description, the limit can occur in cases where Power BI does not fully optimize the queries it sends, so that some intermediate resultset being requested exceeds the limit. It can also occur while building a visual, on the path to a more reasonable final state: for example, including Customer and TotalSalesQuantity would hit this limit if there were more than 1 million customers, until some filter were applied.
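To illustrate the shape of the problem (a sketch only; the table, column, and measure names here are assumed, not taken from your model), a visual with a customer column on an axis effectively asks the engine for one row per customer, so a DirectQuery customer table of more than 1 million rows breaches the limit before any filter reduces it:

// Roughly the query such a visual generates: one row per customer
EVALUATE
SUMMARIZECOLUMNS (
    'Customer'[CustomerKey],
    "Total Sales Quantity", SUM ( 'Sales'[Quantity] )
)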
Best regards.
@SamMiddleton, in the case of Import, you can configure incremental refresh so that the PBIX file loads only a selective slice of the data, and load the rest of the data in the Power BI service.
If you are on Premium, you can load the rest of the data into the test instance using a deployment pipeline:
Premium - Deployment Pipeline, Load More Data on Test/Prod: https://youtu.be/l69cnWkoGX0
In DirectQuery mode, you can use an aggregation table to minimize the rows queried:
https://docs.microsoft.com/en-us/power-bi/desktop-aggregations
You can also try to simplify the measure expression to reduce the rows queried.
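As a rough sketch of the aggregation-table idea (shown here as a DAX calculated table purely for illustration; the 'Sales' table and column names are assumed, and in practice you might instead define the aggregation as a database view or M query and map it via Manage aggregations):

// Illustration only: a pre-aggregated summary of the fact table,
// kept in Import (or Dual) mode so visuals can hit it instead of
// the large DirectQuery fact
Sales Agg =
SUMMARIZECOLUMNS (
    'Sales'[DateKey],
    "Total Quantity", SUM ( 'Sales'[Quantity] ),
    "Order Count", DISTINCTCOUNT ( 'Sales'[SalesOrderNumber] )
)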
Hi @amitchandak, unfortunately we are still at the development stage of our data model, where further data sources are being added, which means we need a local copy of the PBIX at all times. As I understand it, incremental refresh means that you cannot download the PBIX once it's published to the service. I don't believe there's currently any functionality (happy to be corrected) that allows you to add further data sources to a model as you would via the Power Query transform functionality in Desktop?
Are you using Premium capacity or Premium Per User?
If so, I'd look to publish the dataset once and from then on use Tabular Editor to develop it connected to the service.
@amitchandak's recommendation of making use of incremental refresh is a really good one.
Another thing to consider is a Dual-mode setup, with your customer dimension running in Dual (both Import and DirectQuery) and your fact table staying in DirectQuery mode. That should allow the engine to make best use of processing on the SQL end.
Lastly, look at building an aggregate table that also runs in Dual mode.
I'd highly recommend SQLBI's Mastering Tabular video course, which covers a lot of this stuff.
Thanks Ben, I'd love to be able to use TE, but what happens when I need to add additional data sources into the Power Query editor (e.g. views/tables)? As mentioned, incremental refresh would be ideal, but currently you can't download the PBIX once incremental refresh is set up (and if I need to add a data source, I need a local PBIX), UNLESS there's a way to do it in the service or in TE. I've not found a way... yet! 😄 Happy to hear any suggestions.

I have my sales fact on Import as it's currently 56M rows and the reports using it have many measures that make it super slow to load on DirectQuery. Our customer table is also 30M rows (I've cut it down to the bare minimum columns, but we have a large customer base), so I've had to switch that back to Dual mode as well; it was when it was on DirectQuery that I was getting the initial error. Unfortunately, it's the curse of the BI developer when your customer requires lots of KPI metrics to report on 😞