I have a data cube as my data source. These are large tables, both in number of rows and in number of columns.
I wish to import them and then create multiple tables (views) on these imported tables by filtering rows and columns, which are then used in visualizations.
I am not sure which is the best way to go about it: using the M language or DAX? Any suggestions?
Both can perform table separation, but a task performed in M is executed entirely in RAM and runs during data refresh and loading. If your task requires cleaning and preparation at the same time, it may be better to use M, where separation and cleaning are performed together. If you explain the reason for and the type of separations, it will be easier to provide guidance.
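A minimal sketch of the M approach, assuming an already-imported base query named `BigTable` with hypothetical `Date`, `Region`, and `Sales` columns (your cube's column names will differ): a second query references the base query and filters rows and columns, so the reduced table is produced at refresh time rather than duplicated in the model.

```m
// Hypothetical reference query built on a base query named "BigTable".
// Referencing (not duplicating) the base query means the row and column
// filters are applied during data refresh, before the data lands in the model.
let
    Source = BigTable,
    // Keep only the rows needed for this view (example predicate)
    FilteredRows = Table.SelectRows(Source, each [Region] = "EMEA"),
    // Keep only the columns the visuals actually use
    SelectedColumns = Table.SelectColumns(FilteredRows, {"Date", "Region", "Sales"})
in
    SelectedColumns
```

A DAX calculated table could produce a similar view, but it is computed after load from data already in the model, so it consumes additional memory on top of the base table.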
Hundreds of thousands of rows are generated and stored by the system daily (as data cubes). I wish to know which would be the fastest way to load the data into Power BI for displaying reports.