I'm wondering if anyone else is struggling with this. My dataset has numeric values defined as decimal, with the default aggregation set to sum, and that works well in Power BI.
When I connect to the published dataset using Analyze in Excel, however, those same numbers can no longer be used in the Values area of the pivot table. The only workaround I've found is to redefine every numeric field as an explicit DAX SUM measure, which is problematic when you have a large number of numeric columns.
I know this is how Excel pivot tables normally behave against Analysis Services data sources for numeric, non-measure columns. Against SQL sources, Excel pivot tables treat numeric data differently and allow any numeric column to be summed.
I think in this case the dataset model could automatically treat numeric columns as measures, so that Excel could aggregate the data properly in pivot tables without any extra modelling work.
Thoughts? Anyone else not happy with this issue?
What you describe is the standard behaviour. Implicit measures do not work against a published dataset - you must define measures explicitly. Personally I think this is the way it should be, but I understand why others disagree. My view is that you should write explicit measures because:
1. You can give the measure a meaningful business name (e.g. Total Sales instead of Sum of Amt)
2. You can define number formatting that is appropriate for the business
3. You learn how to write DAX
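The first two points can be sketched with a simple explicit measure; the table name Sales and column Amt are hypothetical stand-ins for your own model:

```dax
-- An explicit measure with a business-friendly name,
-- replacing Excel's auto-generated "Sum of Amt".
-- (Sales table and Amt column are assumed names.)
Total Sales = SUM ( Sales[Amt] )
```

The number format (e.g. "$#,0") is then set once on the measure in the modelling pane, and every pivot table or visual that uses the measure inherits it.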
I describe it like this. Imagine the only formula you could write in Excel was the one produced by pressing the AutoSum button. In that scenario, you would be very limited in what Excel could do. This is analogous to relying on implicit measures in Power Pivot. Better to learn to write DAX and get some practice by writing measures that add value - my view anyway.
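As an illustration of a measure that "adds value" beyond what AutoSum could produce, here is a hedged sketch; the [Total Sales] and [Total Cost] measures are assumed to exist in the model:

```dax
-- A measure no implicit aggregation can express: margin as a
-- percentage of sales. DIVIDE handles the divide-by-zero case.
-- ([Total Sales] and [Total Cost] are hypothetical measures.)
Margin % = DIVIDE ( [Total Sales] - [Total Cost], [Total Sales] )
```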
I realize this is standard behavior from a data perspective. In this case, I disagree from a user experience perspective: implicit measures should work as well as explicit measures in Excel so that the user experience is consistent.
I'm building a Power BI model over an underlying product that lets users define new extended data fields through the product interface. The product automatically adds these fields to the data source, so I might not even know a new field exists and needs an explicit measure.
As currently implemented it's a bad user experience: it creates an inconsistent UX across data sources and across seemingly identical types of data, which in turn drives up support costs.
Also, in my case, the underlying table has one row per per. Technically there's nothing to sum, since there's only one value, yet I can't get the numeric columns treated correctly unless they're defined as measures. And once a field is defined as an explicit measure, the aggregation type is controlled within Excel anyway, so again I don't see the value of requiring the explicit measure.