Join us at FabCon Atlanta from March 16 - 20, 2026, for the ultimate Fabric, Power BI, AI and SQL community-led event. Save $200 with code FABCOMM.
Register now! The Power BI Data Visualization World Championships is back! Get ahead of the game and start preparing now!
I am working on improving my semantic model using the Best Practice Analyzer in Tabular Editor.
I get the "Do not use floating point data types" warning, but I don't understand how the data type impacts performance or results in this case.
My source is a SQL Server database.
This is a very good article to help:
Choosing Numeric Data Types in DAX - SQLBI
Imagine you have a table with a floating-point column named SalesAmount. The table has 2 million rows, stored in two segments of 1 million rows each (the default segment size in the VertiPaq engine).
- Segment 1: Contains rows 1 to 1,000,000
- Segment 2: Contains rows 1,000,001 to 2,000,000
Let’s assume the SalesAmount column in these segments has the following values:
- Segment 1: [1.1, 2.2, 3.3, ..., 1,000,000.0]
- Segment 2: [1,000,001.1, 1,000,002.2, 1,000,003.3, ..., 2,000,000.0]
When you run a query that sums the SalesAmount column, the storage engine may use multiple cores to process the segments in parallel.
1. Core 1 processes Segment 1 and calculates the sum of its values.
2. Core 2 processes Segment 2 and calculates the sum of its values.
Let’s say:
- Core 1 calculates the sum of Segment 1 as Sum1.
- Core 2 calculates the sum of Segment 2 as Sum2.
Finally, the database engine aggregates Sum1 and Sum2 to get the total sum.
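The per-segment pattern above can be sketched in Python. This is only an illustration of the idea: the segment contents, sizes, and the two-worker split are assumptions for the sketch, not the engine's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Two illustrative segments of floating-point values (much smaller
# than the engine's real 1-million-row segments).
segment1 = [i * 1.1 for i in range(1, 100_001)]
segment2 = [i * 1.1 for i in range(100_001, 200_001)]

def segment_sum(values):
    """Partial sum computed independently for one segment (one core)."""
    total = 0.0
    for v in values:
        total += v
    return total

# Each worker sums its own segment; the partial sums are then combined.
with ThreadPoolExecutor(max_workers=2) as pool:
    sum1, sum2 = pool.map(segment_sum, [segment1, segment2])

grand_total = sum1 + sum2
print(grand_total)
```

The key point is that the grand total is built from intermediate partial sums, and the order in which those are produced and combined is up to the scheduler.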
Floating-point arithmetic is not associative, meaning that the order in which operations are performed can affect the result. This is due to the way floating-point numbers are represented in memory, which can introduce small rounding errors.
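A minimal Python demonstration of this non-associativity:

```python
# The same three values summed with different grouping give
# different IEEE-754 double-precision results.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```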
Execution 1:
- Core 1 processes Segment 1 and calculates Sum1 = 500,000,500,000.0
- Core 2 processes Segment 2 and calculates Sum2 = 1,500,000,500,000.0
- The total sum is Sum1 + Sum2 = 2,000,001,000,000.0
Execution 2:
- Core 1 processes Segment 2 and calculates Sum1 = 1,500,000,500,000.0
- Core 2 processes Segment 1 and calculates Sum2 = 500,000,500,000.0
- The total sum is Sum1 + Sum2 = 2,000,001,000,000.0002
The result:
In Execution 1, the total sum is 2,000,001,000,000.0.
In Execution 2, the total sum is 2,000,001,000,000.0002.
Even though the difference is tiny (0.0002), it is still a discrepancy. This difference arises because the order in which the segments were processed and aggregated was different, leading to different intermediate rounding errors in the floating-point arithmetic.
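You can reproduce the order dependence directly in Python. The values below are deliberately extreme so the rounding is obvious in three additions (the real per-segment sums would differ only in the last digits), and the `Decimal` comparison is my own analogy for why a fixed-point type, such as Power BI's Fixed decimal number, avoids the problem:

```python
from decimal import Decimal

# Values chosen so the rounding is visible: once a running total reaches
# 1e16, adding 1.0 is lost entirely, because adjacent doubles there are
# 2 apart.
values = [1.0, 1.0, 1e16]

def running_sum(seq):
    total = seq[0] * 0   # 0.0 for floats, Decimal(0) for Decimals
    for v in seq:
        total += v
    return total

forward = running_sum(values)         # 1.0 + 1.0 happens first: both survive
backward = running_sum(values[::-1])  # 1e16 comes first: each 1.0 is absorbed
print(forward - backward)             # 2.0

# With a decimal type, every summation order gives the same total.
dvalues = [Decimal(1), Decimal(1), Decimal(10) ** 16]
assert running_sum(dvalues) == running_sum(dvalues[::-1])
```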
While the difference is small, it can be noticeable to end-users, especially in financial or scientific applications where precision is critical. This inconsistency can lead to a loss of trust in the system, as users might expect the same result every time they run the same query.