I have a data cube as my data source. The tables are large, both in number of rows and number of columns.
I want to import them and then create multiple tables (views) on top of the imported tables by filtering rows and columns, which are then used in visualizations.
I am not sure about the best way to go about this. Should I use the M language or DAX? Any suggestions?
Both can separate tables, but a table created with DAX is built entirely in RAM after the data is loaded, while a transformation written in M runs during data refresh and loading. If the task also requires cleaning and preparation, it may be better to use M, where separation and cleaning can be performed together. If you explain the reason for and type of separation you need, it will be easier to provide guidance.
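For illustration only, here is a minimal Power Query (M) sketch of the "filtered view" approach, assuming a previously imported query named Sales and hypothetical columns Region, Date, and Amount. It references the imported table once and produces a narrowed, filtered table from it:

    let
        // reference the already-imported base query (hypothetical name)
        Source = Sales,
        // keep only the rows needed for this view
        FilteredRows = Table.SelectRows(Source, each [Region] = "Europe"),
        // keep only the columns used in the visuals
        SelectedColumns = Table.SelectColumns(FilteredRows, {"Date", "Region", "Amount"})
    in
        SelectedColumns

A common pattern with this layout is to disable load on the base query so that only the filtered views end up in the data model, keeping it smaller.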
Hundreds of thousands of rows are generated by the system daily and stored in the system (as data cubes). I would like to know the fastest way to load the data into Power BI for displaying reports.
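As a hedged sketch only (assuming the hypothetical Sales query exposes a date column, here called LoadDate), filtering rows early in M keeps the import small; if the step folds back to the source, only the matching rows are transferred during refresh:

    let
        // hypothetical imported query
        Source = Sales,
        // keep only the last 30 days of data; adjust the window to your reporting needs
        RecentRows = Table.SelectRows(Source, each [LoadDate] >= Date.AddDays(Date.From(DateTime.LocalNow()), -30))
    in
        RecentRows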