I have a data cube as my data source. These are large tables, both in number of rows and in number of columns.
I wish to import them and then create multiple tables (views) on top of the imported tables by filtering rows and columns, which are then used in visualizations.
I am not sure which is the best way to go about it. Using the M language or DAX? Any suggestions?
Both can perform table separation, but a table created in DAX (a calculated table) is computed entirely in RAM after load, while a task performed in M runs during data refresh and loading. If your task requires cleaning and preparation at the same time, it may be better to use M, where separation and cleaning can be performed together in one step. If you explain the reason for the separations and what kind they are, it will be easier to provide guidance.
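As a minimal sketch of the M approach described above: a referenced query that filters rows and selects columns from the imported cube table during refresh. The table name `Cube_Data`, the column names, and the filter value are all hypothetical placeholders, not names from the original post.

```m
// Sketch in Power Query (M): derive a narrower view from the imported
// cube table. "Cube_Data", the column names, and "EMEA" are assumed
// placeholder names for illustration only.
let
    Source = Cube_Data,
    // Keep only the rows needed for this report view
    FilteredRows = Table.SelectRows(Source, each [Region] = "EMEA"),
    // Keep only the columns used in the visuals
    SelectedColumns = Table.SelectColumns(FilteredRows, {"Date", "Region", "Amount"})
in
    SelectedColumns
```

Because this runs during refresh, only the filtered result is loaded into the model, whereas a DAX calculated table would be computed in RAM from data that has already been imported.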
Daily, hundreds of thousands of rows are generated and stored by the system (as data cubes). I wish to know which would be the fastest way to load the data into Power BI for displaying reports.