Hi,
I have data from multiple Excel files in a SharePoint folder, received weekly (one new file) with the data date as the file name. Each file contains data for three tables (Project, Profit, and Loss) in a single sheet, Sheet1. The tables can be identified via the Table column, and the Type column describes whether each row is a Table marker, a column-header row, or a data row of that table.
File Name: 2025-06-30
| Type | ||||||
| Table | Table | |||||
| Project | Project | Field | ||||
| P1 | Project | Row | ||||
| P2 | Project | Row | ||||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Profit | Profit | Field | ||
| P1 | P11 | 1 | Profit | Row | ||
| P2 | P21 | 2 | Profit | Row | ||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Location | Loss | Loss | Field | |
| P1 | P11 | L1 | 2 | Loss | Row | |
| P2 | P21 | L2 | 4 | Loss | Row |
File Name: 2025-07-03
| Type | ||||||
| Table | Table | |||||
| Project | Project | Field | ||||
| P3 | Project | Row | ||||
| P4 | Project | Row | ||||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Profit | Profit | Field | ||
| P3 | P31 | 3 | Profit | Row | ||
| P4 | P41 | 4 | Profit | Row | ||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Location | Loss | Loss | Field | |
| P3 | P31 | L4 | 5 | Loss | Row | |
| P4 | P41 | L5 | 6 | Loss | Row |
File Name: 2025-07-10
| Type | ||||||
| Table | Table | |||||
| Project | Project | Field | ||||
| P5 | Project | Row | ||||
| P6 | Project | Row | ||||
| Type ||||||
| Table | Table |||||
| Project | Sub Project | Profit | Profit | Field ||
| P5 | P51 | 8 | Profit | Row ||
| P6 | P61 | 9 | Profit | Row ||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Location | Loss | Loss | Field | |
| P5 | P51 | L6 | 10 | Loss | Row | |
| P6 | P61 | L6 | 11 | Loss | Row |
I first created the Master table, then the Profit table (referencing Master), and applied an incremental-refresh filter on both tables (Master and Profit) with RangeStart (5/1/2025 12:00:00 AM) and RangeEnd (7/20/2025 12:00:00 AM) at the Power Query level.
Master table - Power Query:
let
    Source = SharePoint.Contents("https://mysites.com/data", [ApiVersion = 15]),
    // Filter hidden files while the Attributes column is still present;
    // doing this after Table.SelectColumns would fail, since Attributes is removed there
    #"Filtered Hidden Files1" = Table.SelectRows(Source, each [Attributes]?[Hidden]? <> true),
    #"Removed Other Columns" = Table.SelectColumns(#"Filtered Hidden Files1", {"Content", "Name"}),
    #"Replaced Value" = Table.ReplaceValue(#"Removed Other Columns", ".xlsx", "", Replacer.ReplaceText, {"Name"}),
    #"Duplicated Column" = Table.DuplicateColumn(#"Replaced Value", "Name", "Name - Copy"),
    #"Renamed Columns" = Table.RenameColumns(#"Duplicated Column", {{"Name - Copy", "Date Increment"}}),
    // File names are data dates (e.g. 2025-06-30), so they convert directly to datetime
    #"Changed Type" = Table.TransformColumnTypes(#"Renamed Columns", {{"Date Increment", type datetime}}),
    #"Sorted Rows" = Table.Sort(#"Changed Type", {{"Date Increment", Order.Ascending}}),
    #"Filtered Increment" = Table.SelectRows(#"Sorted Rows", each [Date Increment] >= RangeStart and [Date Increment] < RangeEnd),
    #"Invoke Custom Function1" = Table.AddColumn(#"Filtered Increment", "Transform File", each #"Transform File"([Content])),
    #"Removed Other Columns1" = Table.SelectColumns(#"Invoke Custom Function1", {"Date Increment", "Transform File"}),
    #"Expanded Table Column1" = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File", Table.ColumnNames(#"Transform File"(#"Sample File")))
in
    #"Expanded Table Column1"
and the Profit table's Power Query:
let
    Source = Master,
    #"Filtered Rows" = Table.SelectRows(Source, each ([Column6] = "Profit")),
    #"Promoted Headers" = Table.PromoteHeaders(#"Filtered Rows", [PromoteAllScalars=true]),
    // Rename columns 0 and 7 back to their pre-promotion names
    #"Renamed Columns" = Table.RenameColumns(#"Promoted Headers", {
        {Table.ColumnNames(#"Promoted Headers"){0}, Table.ColumnNames(#"Filtered Rows"){0}},
        {Table.ColumnNames(#"Promoted Headers"){7}, Table.ColumnNames(#"Filtered Rows"){7}}
    }),
    #"Filtered Rows1" = Table.SelectRows(#"Renamed Columns", each ([Type] = "Row")),
    #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows1", {"Date Increment", "Project", "Sub Project", "Profit"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Removed Other Columns", {{"Profit", Int64.Type}}),
    #"Filtered Increment" = Table.SelectRows(#"Changed Type", each [Date Increment] >= RangeStart and [Date Increment] < RangeEnd)
in
    #"Filtered Increment"
I then configured incremental refresh at the report level for both tables, Master and Profit.
Whenever I run a refresh in the Power BI Service, I get an error.
Could you please help me fix this issue?
Hi @Jyaul1122 ,
The error message "Database consistency checks (DBCC) failed while checking the column statistics." indicates that Power BI's internal tabular model (used by the dataset) encountered corruption or an inconsistency during processing.
Please check the following things to fix the issue.
1. Stabilize the schema before refresh. Check that all files have consistent column structures. If headers vary, apply a standardized schema mapping before promoting headers.
Please try the below M code.
let
    Source = Master,
    FilteredTable = Table.SelectRows(Source, each ([Column6] = "Profit")),
    PromotedHeaders = Table.PromoteHeaders(FilteredTable, [PromoteAllScalars=true]),
    StandardizedColumns = Table.RenameColumns(PromotedHeaders, {
        {Table.ColumnNames(PromotedHeaders){0}, "Date Increment"},
        {Table.ColumnNames(PromotedHeaders){1}, "Project"},
        {Table.ColumnNames(PromotedHeaders){2}, "Sub Project"},
        {Table.ColumnNames(PromotedHeaders){3}, "Profit"}
    }),
    ChangedTypes = Table.TransformColumnTypes(StandardizedColumns, {
        {"Date Increment", type datetime},
        {"Project", type text},
        {"Sub Project", type text},
        {"Profit", Int64.Type}
    }),
    FilteredIncrement = Table.SelectRows(ChangedTypes, each [Date Increment] >= RangeStart and [Date Increment] < RangeEnd)
in
    FilteredIncrement
2. Before publishing, load all files locally in Power BI Desktop and verify there are no missing columns, no null headers, and no type mismatches.
3. If you have intermediate queries like Master or Transform File, disable load for them to reduce memory pressure and avoid schema conflicts.
4. Sometimes the dataset in the Service gets corrupted. Delete the dataset from the Power BI Service, re-publish the report from Power BI Desktop, and reconfigure incremental refresh.
5. While dynamic indexing (Table.ColumnNames(...){1}) works during development, it can break in the Service. Prefer static column names after schema stabilization.
6. You can create a function to normalize the schema across files before combining them. Please refer to the below M code.
(TableToNormalize as table) =>
let
    Renamed = Table.RenameColumns(TableToNormalize, {
        {Table.ColumnNames(TableToNormalize){0}, "Date Increment"},
        {Table.ColumnNames(TableToNormalize){1}, "Project"},
        {Table.ColumnNames(TableToNormalize){2}, "Sub Project"},
        {Table.ColumnNames(TableToNormalize){3}, "Profit"}
    }),
    Typed = Table.TransformColumnTypes(Renamed, {
        {"Date Increment", type datetime},
        {"Project", type text},
        {"Sub Project", type text},
        {"Profit", Int64.Type}
    })
in
    Typed
Note: Then apply this function to each file before combining.
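For illustration, assuming the function above is saved as a query named NormalizeSchema (a hypothetical name), it could be applied to each file's table inside the Master query, before the per-file tables are expanded. Step names here are illustrative, matching the Master query shown earlier:

```m
// Sketch: run the schema-normalizing function over every per-file table
// in the "Transform File" column before combining/expanding them.
Normalized = Table.TransformColumns(
    #"Invoke Custom Function1",
    {{"Transform File", each NormalizeSchema(_)}}
)
```

This way every file presents identical column names and types at the point where the tables are combined, regardless of per-file header drift.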
I hope this information helps. Please do let us know if you have any further queries.
Regards,
Dinesh
Check for trailing space on the "Project" column name.
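One hedged way to guard against stray whitespace in header rows is to trim every column name right after promoting headers, so "Project " and "Project" resolve to the same column. A minimal sketch, where PromotedHeaders stands for whatever your previous step is named:

```m
// Trim leading/trailing whitespace from all column names
TrimmedNames = Table.TransformColumnNames(PromotedHeaders, Text.Trim)
```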
Note 1: SharePoint Folder is a non-folding data source, so incremental refresh will read ALL files before applying the filter. This may not be an issue if the files are small.
Note 2: Your archival period is 30 days, which means you will end up with 30+ mostly empty partitions, since you only fill every seventh partition. Better to set the archival granularity to months or quarters.
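Since the data date comes from the file name, it is available before any workbook is opened, so the RangeStart/RangeEnd filter can be applied to the file list itself and only files inside the window ever get transformed. A sketch against the Master query above (step names are illustrative, not the exact query):

```m
let
    Source = SharePoint.Contents("https://mysites.com/data", [ApiVersion = 15]),
    // Derive the data date from the file name before touching any Content
    AddDate = Table.AddColumn(Source, "Date Increment",
        each DateTime.From(Text.Replace([Name], ".xlsx", ""))),
    // Filter the file list first: files outside the window are never opened
    InWindow = Table.SelectRows(AddDate,
        each [Date Increment] >= RangeStart and [Date Increment] < RangeEnd),
    Transformed = Table.AddColumn(InWindow, "Transform File",
        each #"Transform File"([Content]))
in
    Transformed
```

This doesn't make the source fold, but it avoids transforming workbooks that the partition filter would discard anyway.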
More data example:
File Name: 2025-07-17
| Type | ||||||
| Table | Table | |||||
| Project | Project | Field | ||||
| P7 | Project | Row | ||||
| P8 | Project | Row | ||||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Profit | Profit | Field | ||
| P7 | P71 | 8 | Profit | Row | ||
| P8 | P81 | 9 | Profit | Row | ||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Location | Loss | Loss | Field | |
| P7 | P71 | L8 | 10 | Loss | Row | |
| P8 | P81 | L8 | 11 | Loss | Row |
File Name: 2025-07-24
| Type | ||||||
| Table | Table | |||||
| Project | Project | Field | ||||
| P9 | Project | Row | ||||
| P10 | Project | Row | ||||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Profit | Profit | Field | ||
| P9 | P91 | 10 | Profit | Row | ||
| P10 | P101 | 11 | Profit | Row | ||
| Type | ||||||
| Table | Table | |||||
| Project | Sub Project | Location | Loss | Loss | Field | |
| P9 | P91 | L9 | 5 | Loss | Row | |
| P10 | P101 | L9 | 3 | Loss | Row |
Hi @Jyaul1122 ,
Thank you for reaching out to the Microsoft Community Forum.
Please check the following things.
1. The Promoted Headers step assumes column positions that may change during a Service refresh.
2. The Project column might be renamed conditionally or exist only in certain files.
3. Column headers like "Project", "Sub Project" and "Profit" are derived dynamically from the file, so they don't exist until after filtering and transforming.
4. Incremental refresh tries to fold the query to the source, but dynamic header logic can break query folding and trigger this error.
Please avoid direct column references like [Project] before promoting headers; use column-position references or rename safely.
Please replace your Profit query with the below query.
let
    Source = Master,
    FilteredTable = Table.SelectRows(Source, each ([Column6] = "Profit")),
    PromotedHeaders = Table.PromoteHeaders(FilteredTable, [PromoteAllScalars=true]),
    RenamedColumns = Table.RenameColumns(PromotedHeaders, {
        {Table.ColumnNames(PromotedHeaders){0}, "Date Increment"},
        {Table.ColumnNames(PromotedHeaders){1}, "Project"},
        {Table.ColumnNames(PromotedHeaders){2}, "Sub Project"},
        {Table.ColumnNames(PromotedHeaders){3}, "Profit"}
    }),
    ChangedTypes = Table.TransformColumnTypes(RenamedColumns, {
        {"Date Increment", type datetime},
        {"Project", type text},
        {"Sub Project", type text},
        {"Profit", Int64.Type}
    }),
    FilteredIncrement = Table.SelectRows(ChangedTypes, each [Date Increment] >= RangeStart and [Date Increment] < RangeEnd)
in
    FilteredIncrement
I hope this information helps. Please do let us know if you have any further queries.
Regards,
Dinesh
Hi @Jyaul1122 ,
We haven't heard from you since the last response and were just checking back to see if you have a resolution yet. If you have any further queries, do let us know.
Regards,
Dinesh
I tried your idea; when I run it with my actual data, I get a different error than before.
I do not know what it means.