Good morning, everybody,
From Power Pivot (Excel 365) I query a SQL Server 2017 view with more than 10 million records containing product movements. Unfortunately all the records are necessary, since there are no annual totals I could use to reduce the number of records to download / refresh in Power Pivot. The table is already optimized, so I cannot remove columns or records, and since it is a movements table, all records from day one must be present.
When I download the records or refresh them, it ends with an out-of-memory error telling me to use the 64-bit version, but the version of Excel I have is already 64-bit, and I have 16 GB of RAM with a recent Core i5 processor and Office 365. The same thing happens on other PCs with similar specs.
Normally, the movements of years before the current one are never touched, and only those of the current year vary, so I thought of the following approach:
First: Have an invariant table with the "historical" data, for example all records from day one to the end of 2019 (data that is guaranteed not to change). I also considered generating several historical tables, each holding five years of records, so that the history table does not become a giant file over time.
Second: Have a second table with the data from the last two years, which would be the one refreshed from Power Pivot.
I know that in the Power Pivot model you can relate several tables, but that is for tables holding different data: for example, you could relate a customer table and a sales table via the customer code, a field common to both tables, and that relationship would let you work with the data from a PivotTable. In my case, what I need is for the records in the historical tables and the records of the recent years to be appended, so that all the records sit together.
My question is: would it be possible in the data model to do a kind of UNION between all the historical tables and the table of the last two years, so that the PivotTable has all the records available?
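To make it concrete, what I imagine is something like the Append Queries feature in Power Query. A minimal sketch in M, assuming two hypothetical staging queries called MovementsHistory and MovementsRecent that share the same columns:

```
let
    // Table.Combine stacks the tables vertically, like a SQL UNION ALL.
    // MovementsHistory and MovementsRecent are hypothetical staging
    // queries with an identical column layout.
    Combined = Table.Combine({MovementsHistory, MovementsRecent})
in
    Combined
```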
I don't know whether I am framing the problem wrongly, or whether there are other solutions or ways to approach it.
I'd really appreciate it if you'd help me.
Best regards.
10 million records is not a big model. Something else is wrong, it is just not clear what. How many columns in the table? What is the cardinality of the columns? Many columns and highly unique columns are your enemies.
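If it helps, here is a rough way to check the cardinality per column in Power Query (a sketch only; the server, database and view names are placeholders you would replace with your own):

```
let
    // Connect to the source view (placeholder names).
    Source = Sql.Database("MyServer", "MyDatabase"),
    Movements = Source{[Schema = "dbo", Item = "vProductMovements"]}[Data],
    // Count the distinct values in each column; very high counts flag
    // the columns that will bloat the model.
    Cardinality = Table.FromRecords(
        List.Transform(
            Table.ColumnNames(Movements),
            each [
                Column = _,
                DistinctValues = Table.RowCount(Table.Distinct(Table.SelectColumns(Movements, {_})))
            ]
        )
    )
in
    Cardinality
```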
Given that the table is defined that way so I can go into detail in the product analysis, and it comes straight from the ERP, what alternatives do I have?
I"m not 100% sure what you are asking. The table structure and needs of your source system are normally different to what is required for Power BI. Power Query is designed to help with that. Simply connect to the source, use Power Query to choose which columns you need to load, and only load those columns. If you just load the 4 columns you posted earlier, then I am sure it will be fine.
Excuse me, I don't think I explained myself well.
The four columns I sent you were just an example so you could see the cardinality of the table, but as I mentioned earlier, the table has 28 columns in total and all of them are necessary for the user to do a detailed analysis from Power Pivot.
At this point, seeing that no columns can be removed from the table because they are all necessary, and that Power Pivot will therefore keep throwing errors when downloading the 10 million records at once, what I need is an alternative solution. That is why I asked whether it is possible, in the Power Pivot data model, to do a kind of "UNION" of two tables.
Suppose the 10 million records are spread over 10 fiscal years.
I would create a first table in the Power Pivot model with the records from the first 7 years. This table would be downloaded once and never refreshed again, since those records belong to fiscal years that are already closed: downloading them once is enough, refreshing them is unnecessary, and skipping them keeps the data refresh from taking too long or failing.
Second, I would create a second table with the last 3 years of records, which can still change, so this table would be refreshed from Power Pivot on demand (see the sketch below).
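For the split itself, what I have in mind is something like this M sketch (hypothetical server, database and view names; the cutoff date is an assumption, and I am assuming the date column arrives typed as date):

```
let
    Source = Sql.Database("MyServer", "MyDatabase"),
    Movements = Source{[Schema = "dbo", Item = "vProductMovements"]}[Data],
    // Keep only the open fiscal years; the mirror-image historical query
    // would filter [date] < #date(2019, 1, 1) and never be refreshed.
    Recent = Table.SelectRows(Movements, each [date] >= #date(2019, 1, 1))
in
    Recent
```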
The part I need to know is: is it possible from a PivotTable to work with a kind of "union" of the two tables, so that the data for all 10 years, and therefore all 10 million records, is available?
In short, if only the movements of the last few years can change, why should I refresh all 10 years of data?
This alternative is just an idea in my head, and I don't know whether it can actually be done, so I am throwing the question out there. Maybe someone has had a problem similar or identical to mine and solved it in a different way.
Thanks a lot.
Hello, thank you very much for responding. The table has 28 columns and the cardinality is 1 to N: a product accumulates several records (inputs and outputs) per year and warehouse. For example:
| product | warehouse | date | quantity |
|---|---|---|---|
| product1 | 1 | 18/05/2021 | -1 |
| product1 | 2 | 18/05/2021 | 1 |
| product1 | 6 | 17/05/2021 | 2 |
| product1 | 3 | 16/05/2021 | -3 |
| product1 | 1 | 01/05/2021 | 10 |
| product1 | 4 | 15/04/2021 | -15 |
| product1 | 5 | 12/04/2021 | 23 |
| product2 | 1 | 17/05/2021 | -6 |
| product2 | 2 | 17/05/2021 | 3 |
| product2 | 6 | 14/05/2021 | 52 |
| product2 | 3 | 13/05/2021 | -16 |
| product2 | 1 | 06/04/2021 | -8 |
| product2 | 4 | 03/04/2021 | 15 |
| product2 | 5 | 01/04/2021 | 3 |