Hi Everyone,
I'm looking for support on how to approach a challenge we are currently facing.
Across our business, our customer support function fields a lot of very specific questions, and we'd like to build something that enables people to self-serve.
The challenge is that we need to cater to a wide range of possibilities: very specific time periods, areas, and a number of other parameters, each split by a number of key metrics. A data model covering the "majority" of questions would be very large, and I'm not sure whether there is a feature (possibly Q&A) that allows users to answer specific questions themselves.
I'd love to hear everyone's thoughts on both challenges: the size of the data model, and how best to provide a solution that lets users answer questions themselves.
Thanks
@Anonymous Well, the self-service concept is largely why Power BI exists. It depends on your definition of "large": are we talking thousands, millions, billions, or trillions of rows? If it is truly large, you would likely get into DirectQuery and potentially aggregation tables. Q&A is useful, although it is often important to build out your semantic layer to make it easier on end users. Beyond that, make the data model easy for end users to understand: hide unnecessary fields, etc. You can then let them create their own reports very easily in the Service.
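As a rough sketch of the aggregation idea: you can pre-summarize a large fact table at a coarser grain so most questions never hit the detail rows. The table and column names below are assumptions for illustration, not from the original post; in a real DirectQuery setup the aggregation table is usually materialized at the source and mapped via Manage aggregations rather than built as a DAX calculated table.

```dax
-- Hypothetical aggregation table over a large 'Sales' fact,
-- summarized to Month x Region grain (all names are illustrative)
Sales Agg =
SUMMARIZECOLUMNS (
    'Date'[Month],
    'Geography'[Region],
    "Total Sales", SUM ( Sales[Amount] ),
    "Order Count", COUNTROWS ( Sales )
)
```

Queries at or above that grain can then be answered from the small aggregation table, while only truly detailed questions fall through to DirectQuery against the full fact.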
@Anonymous , I think creating a model that can answer most of the questions is important. I am not sure Q&A can do everything for you.
A good star schema model with date calendar and required measures should be a good start.
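A minimal sketch of the date calendar piece, assuming you build it as a DAX calculated table (names and columns are illustrative); after creating it you would mark it as a date table and relate it to your fact table's date key:

```dax
-- Basic date dimension for a star schema (illustrative columns)
Date =
ADDCOLUMNS (
    CALENDARAUTO (),            -- spans the date range found in the model
    "Year", YEAR ( [Date] ),
    "Month Number", MONTH ( [Date] ),
    "Month", FORMAT ( [Date], "MMM yyyy" )
)
```

With this in place, time-period questions (specific months, years, ranges) become simple slicer or Q&A filters rather than bespoke measures.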
https://www.sqlbi.com/blog/marco/2017/10/02/why-data-modeling-is-important-in-powerbi/
https://www.sqlbi.com/articles/the-importance-of-star-schemas-in-power-bi/
Hi @Anonymous,
I think Greg_Deckler and amitchandak already shared some tricks and tips for your scenario.
In my opinion, you may also want to consider creating dynamic measure formulas instead of static ones; they can interact with slicer/filter selections, which reduces the number of measures you need in your report.
You would write DAX formulas that respond to the current filter context, so the result changes dynamically with the user's selections.
Please check following links if they help:
Managing “all” functions in DAX: ALL, ALLSELECTED, ALLNOBLANKROW, ALLEXCEPT
DAX – The Many Faces of VALUES()
In addition, you can change your table structure to achieve the dynamic attribute visual based on slicers.
Dynamic Attributes In A Power BI Report
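The dynamic-measure pattern above can be sketched like this, assuming a disconnected 'Metric' table feeding a slicer and some existing base measures (all names here are hypothetical, not from the thread):

```dax
-- One measure that switches its result based on the slicer selection
-- from a disconnected 'Metric'[Name] table (assumed setup)
Selected Metric =
SWITCH (
    SELECTEDVALUE ( 'Metric'[Name], "Sales" ),   -- default when no selection
    "Sales",  [Total Sales],
    "Orders", [Order Count],
    "Margin", [Total Margin]
)
```

One visual bound to [Selected Metric] can then answer "show me X by period/area" for any metric in the list, instead of building a separate visual or measure per metric.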
Regards,
Xiaoxin Sheng