I'm facing a challenge in creating a visual representation of the attached clustered column chart. Currently I address this by generating multiple sheets in Excel, which I then import into Power BI Desktop. These sheets calculate value reductions of 10%, 15%, and 25%: in each sheet I add a column with the reduced value and recompute the LTV, e.g. for -10%, LTV = Loan / (Value * 0.9). I then use this LTV to derive a new LTV Interval column: Under 40%, 40%-60%, and so on.
Furthermore, I have a helper table linked to both sheets, wherein I've established categories like Under 40%, 40%-60%, etc.
Within the clustered column visual, I place the interval category from the helper table (Under 40%, 40%-60%, etc.) on the X-axis, and the Loan values from the various sheets on the Y-axis.
Original sheet
| Loan | Value | LTV | LTV Interval |
|---|---|---|---|
| 11 | 20 | 0.55 | 40%-60% |
| 5.5 | 15 | 0.37 | Under 40% |
| 10 | 10 | 1 | Over 100% |
Calculated sheet
| Loan | Value - 10% | LTV | LTV Interval |
|---|---|---|---|
| 11 | 20 * 0.9 = 18 | 0.61 | 60%-70% |
| 5.5 | 15 * 0.9 = 13.5 | 0.41 | 40%-60% |
| 10 | 10 * 0.9 = 9 | 1.11 | Over 100% |
The Value -15% and -25% sheets are built the same way, and together these generate the attached clustered column visual.
This workflow has proven to be intricate and resource-intensive, leading to an accumulation of extraneous data and Excel sheets. Could this procedure be streamlined through the use of measures instead? It's worth noting that I have attempted this approach, but encountered a challenge where the Loan value exclusively referred to categories like "Under 40%", resulting in unintended outcomes when applying a subtraction of 10%.
For the sake of precision, it's essential that the total Loan remains consistent across all three scenarios. These scenarios should only shift within the different intervals as the Value parameter changes. Consequently, summing up the columns for the diverse scenarios should yield a consistent total sum.
Is there a more efficient approach to resolve this matter, while preserving accuracy and optimizing resource utilization?
Maybe restructure your data so it's more like this:
| Loan | Value | LTV | LTV Interval | Reduction Type |
|---|---|---|---|---|
| 11 | 20 | 0.55 | 40%-60% | None |
| 11 | 20 * 0.9 | Calculated | Calculated | 10% |
| 11 | 20 * 0.85 | Calculated | Calculated | 15% |
| 11 | 20 * 0.75 | Calculated | Calculated | 25% |
| 5.5 | 15 | 0.37 | Under 40% | None |
| 5.5 | 15 * 0.9 | Calculated | Calculated | 10% |
| 5.5 | 15 * 0.85 | Calculated | Calculated | 15% |
| 5.5 | 15 * 0.75 | Calculated | Calculated | 25% |
You could do this in Power Query by pulling the same source multiple times and then appending them all together.
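For illustration, here is a minimal Power Query (M) sketch of that idea, assuming the data lives in an Excel table called Loans with [Loan] and [Value] columns (the file path, table name and connector are placeholders for your real source); the LTV Interval bucketing is omitted here and can be added as a further step:

```
let
    // Placeholder source - swap in your real connection (Excel file, SQL, etc.)
    Source = Excel.Workbook(File.Contents("C:\Data\Loans.xlsx"), true){[Item = "Loans", Kind = "Table"]}[Data],

    // Build one scenario: scale Value by a factor, recompute LTV, tag the scenario
    AddScenario = (factor as number, label as text) as table =>
        let
            Reduced  = Table.TransformColumns(Source, {{"Value", each _ * factor, type number}}),
            WithLTV  = Table.AddColumn(Reduced, "LTV", each [Loan] / [Value], type number),
            Labelled = Table.AddColumn(WithLTV, "Reduction Type", each label, type text)
        in
            Labelled,

    // Pull the same source four times and stack the results vertically
    Combined = Table.Combine({
        AddScenario(1.00, "None"),
        AddScenario(0.90, "10%"),
        AddScenario(0.85, "15%"),
        AddScenario(0.75, "25%")
    })
in
    Combined
```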
I'm not sure that I follow what you are thinking... I need to calculate all the values with the same percentage, so that I can see what happens if all the values decrease by 10%, 15%, or 25%. Hopefully you understand what I mean :) If you look at my column chart you can see that I have 4 different bars: the original data and then the stressed values.
Yes, so if your original table has 100 rows, then the table I'm proposing has 400 rows:
100 rows for no reduction
100 rows for 10%
100 rows for 15%
100 rows for 25%
All stacked vertically in one table
Then the values in your chart can just be SUM([Loan]), the legend is [Reduction Type], and the x-axis is [LTV Interval].
Ok, now I understand! However, my dataset consists of 200K rows and is growing, so it will soon be around 1 million rows... Will that be too large, making this a less than optimal approach given the number of rows?
No, PBI will be happy with 10's of millions of rows in a table. Let me know if you need more detail on how to make this happen in Power Query.
Hi
Yes, I would very much like some help with Power Query. Thanks!
I realise, and am sorry, that this is a very delayed response, however (a worked sketch of these steps follows the list):
Load your data into Power Query
Remove all columns except [Loan] and [Value]
Add a custom column called New_Value with formula =[Value] * 1
Remove the [Value] column
Rename the [New_Value] column to [Value]
Add a custom column called LTV, with formula =[Loan]/[Value]
Add a custom column called LTV Interval, with formula = if [LTV] < 0.4 then "Under 40%" else if [LTV] < 0.6 then "40% - 60%" else if [LTV] < 0.7 then "60% - 70%" else if [LTV] < 0.85 then "70% - 85%" else if [LTV] < 1 then "85% - 100%" else "Over 100%"
Add a custom column called "Reduction Type" with formula ="None"
Duplicate your query so you now have 4 identical copies (you should be able to right-click on the name of your query in the list on the left and choose 'duplicate')
Rename the 4 queries to "No reduction", "10% reduction", "15% reduction" and "25% reduction"
Edit your "10% reduction" query so the first calulated column is [Value] * 0.9 rather than [Value] * 1
Edit your "10% reduction" query so the last calulated column is ="10% Reduction" rather than ="None"
Edit your "15% reduction" query so the first calulated column is [Value] * 0.85 rather than [Value] * 1
Edit your "15% reduction" query so the last calulated column is ="15% Reduction" rather than ="None"
Edit your "25% reduction" query so the first calulated column is [Value] * 0.75 rather than [Value] * 1
Edit your "25% reduction" query so the last calulated column is ="25% Reduction" rather than ="None"
Use the 'append as new' option in the ribbon to stack the 4 queries you have so far on top of each other. This is the data you'll be using; name it something appropriate.
Finally, you might want to right-click on each of the 4 other queries and disable data-load so they don't clutter things when you drop out of Power Query
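To make that concrete, here is roughly what the "10% reduction" query could look like in the Advanced Editor after those edits - a sketch only, with the source line as a placeholder for your actual connection and step names loosely mirroring what the UI generates:

```
let
    // Placeholder source - your actual connection steps go here
    Source = Excel.Workbook(File.Contents("C:\Data\Loans.xlsx"), true){[Item = "Loans", Kind = "Table"]}[Data],
    #"Removed Other Columns" = Table.SelectColumns(Source, {"Loan", "Value"}),
    // [Value] * 1 in the "No reduction" query, * 0.85 for 15%, * 0.75 for 25%
    #"Added New_Value" = Table.AddColumn(#"Removed Other Columns", "New_Value", each [Value] * 0.9, type number),
    #"Removed Value" = Table.RemoveColumns(#"Added New_Value", {"Value"}),
    #"Renamed Value" = Table.RenameColumns(#"Removed Value", {{"New_Value", "Value"}}),
    #"Added LTV" = Table.AddColumn(#"Renamed Value", "LTV", each [Loan] / [Value], type number),
    #"Added LTV Interval" = Table.AddColumn(#"Added LTV", "LTV Interval", each
        if [LTV] < 0.4 then "Under 40%"
        else if [LTV] < 0.6 then "40% - 60%"
        else if [LTV] < 0.7 then "60% - 70%"
        else if [LTV] < 0.85 then "70% - 85%"
        else if [LTV] < 1 then "85% - 100%"
        else "Over 100%", type text),
    // ="None" in the "No reduction" query
    #"Added Reduction Type" = Table.AddColumn(#"Added LTV Interval", "Reduction Type", each "10% Reduction", type text)
in
    #"Added Reduction Type"
```

The 'Append as New' step then just generates something like `Table.Combine({#"No reduction", #"10% reduction", #"15% reduction", #"25% reduction"})`, which is the table you load into the model.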
Is there anything stopping you from having all the columns in the same table? I.e. one table with the following columns:
Loan
Value
LTV
LTV Interval
Value (10% reduction)
LTV (10% reduction)
LTV Interval (10% reduction)
Value (15% reduction)
LTV (15% reduction)
LTV Interval (15% reduction)
Value (25% reduction)
LTV (25% reduction)
LTV Interval (25% reduction)
At the very least this would keep everything on one Excel sheet. Even better, you could then have all the calculated columns set up as steps in the Power Query instead, so you're not calculating anything extra in Excel.
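If you do go the single-wide-table route, here is a rough Power Query sketch of how the extra columns could be set up as query steps - only the 10% scenario is spelled out, 15% and 25% would repeat the same three steps with 0.85 and 0.75, and the source line is a placeholder:

```
let
    // Placeholder source - point this at your real workbook/table
    Source = Excel.Workbook(File.Contents("C:\Data\Loans.xlsx"), true){[Item = "Loans", Kind = "Table"]}[Data],

    // Shared bucketing logic so the interval thresholds are defined only once
    LTVBucket = (ltv as number) as text =>
        if ltv < 0.4 then "Under 40%"
        else if ltv < 0.6 then "40% - 60%"
        else if ltv < 0.7 then "60% - 70%"
        else if ltv < 0.85 then "70% - 85%"
        else if ltv < 1 then "85% - 100%"
        else "Over 100%",

    // Base scenario
    AddLTV      = Table.AddColumn(Source, "LTV", each [Loan] / [Value], type number),
    AddInterval = Table.AddColumn(AddLTV, "LTV Interval", each LTVBucket([LTV]), type text),

    // 10% reduction scenario as extra columns on the same rows
    AddValue10    = Table.AddColumn(AddInterval, "Value (10% reduction)", each [Value] * 0.9, type number),
    AddLTV10      = Table.AddColumn(AddValue10, "LTV (10% reduction)", each [Loan] / [#"Value (10% reduction)"], type number),
    AddInterval10 = Table.AddColumn(AddLTV10, "LTV Interval (10% reduction)", each LTVBucket([#"LTV (10% reduction)"]), type text)

    // ...repeat the last three steps with 0.85 (15%) and 0.75 (25%)
in
    AddInterval10
```

Note, though, that for the clustered column chart described above the appended (stacked) layout is still the easier shape to chart, since the reduction scenario becomes a legend field rather than a set of separate columns.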
Hi
You're right, and I have tried this. The problem occurs when I then want to present this as a visual like the clustered column chart, which is based on the loan value. I have thought about making 3 identical loan value columns which are somehow linked to the different LTV Interval columns, but I still don't have it completely figured out. I also tried to make some measures, but the problem was that the measure only calculated on the total, so I got one answer for the whole dataset... So I'm still searching for the best solution.