Hi @PetterR ,
When sparse columns (e.g., <1% populated) contain only blanks in the first few rows, they are excluded from the CSV output because Power Automate doesn't "see" them as present.
The "Create CSV Table" action does not use the full schema from the dataset.
It only includes columns present in the first few data rows (likely based on internal sampling).
If a column is entirely null or BLANK() in the early rows, it may be omitted.
There’s no exposed “Include all columns” setting in this action.
1. Use a fixed schema via manual column mapping (recommended):
Add a "Select" action before "Create CSV Table" and explicitly define the 15 expected columns (even if a value is null or empty). This forces "Create CSV Table" to include all headers consistently, regardless of sparsity.
Steps:
Insert “Select” action before CSV creation.
Map each field like:
"Column1" → items('Apply_to_each')?['Column1']
(Or whatever your structure is.)
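As a sketch, the "Select" action's advanced-mode mapping could look like the following. This assumes the query result arrives as body('Run_a_query_against_a_dataset')?['firstTableRows'] and that the DAX output exposes columns with names like MyTable[Column1]; both are placeholders, so check the actual output of your "Run a query against a dataset" action and repeat the pattern for all 15 columns:

```json
{
  "from": "@body('Run_a_query_against_a_dataset')?['firstTableRows']",
  "map": {
    "Column1": "@item()?['MyTable[Column1]']",
    "Column2": "@item()?['MyTable[Column2]']",
    "Column15": "@item()?['MyTable[Column15]']"
  }
}
```

Because every key in "map" is emitted for every row, "Create CSV Table" always receives the same set of properties, no matter how sparse the data is.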
2. Sort your DAX output so populated rows come first
Already tested by you — this works, but it’s data-dependent and not as robust.
3. Modify DAX to replace blanks with placeholders:
IF(ISBLANK([MyColumn]), "No Data", [MyColumn])
Works, but requires editing the DAX query and changes data semantics slightly.
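For illustration, assuming a table MyTable with an Id column and a sparse column MyColumn (all names are placeholders), the replacement can be applied inside the query itself with SELECTCOLUMNS:

```dax
EVALUATE
SELECTCOLUMNS (
    MyTable,
    "Id", MyTable[Id],
    -- Replace blanks with a placeholder so the column is never all-blank
    "MyColumn", IF ( ISBLANK ( MyTable[MyColumn] ), "No Data", MyTable[MyColumn] )
)
```

Note that this turns blanks into the literal text "No Data", which changes the column's content (and possibly its data type) for any downstream consumers of the CSV.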
4. Export to Excel instead of CSV:
Use the “Create Excel File” action; it tends to preserve all columns more reliably.
Hi @PetterR
You're using a Power Automate flow that runs every two hours to export data from a Power BI semantic model to a CSV file in SharePoint. The flow uses the "Run a query against a dataset" action to execute a DAX query, and then passes the result to "Create CSV table" and "Create file." The dataset has 15 columns, but you've observed that the generated CSV sometimes contains only 13 or 14 columns—specifically, columns that are sparsely populated (with data in less than 1% of rows) are occasionally missing. This behavior suggests that there’s a form of implicit optimization or sampling occurring somewhere in the process, likely during the transformation of the DAX output to a CSV table. Based on your testing, sorting the data to bring non-blank values to the top ensures that the sparse columns are included in the output, which indicates that Power Automate or the underlying dataset action may be performing a shallow sample (possibly first 500 rows) to infer the schema and structure of the CSV. Unfortunately, this kind of optimization behavior is not explicitly documented by Microsoft, and there is no official setting to disable it directly in Power Automate.
To ensure consistent column inclusion, aside from sorting, your best workaround is to explicitly coerce each expected column to a non-null default, such as adding a dummy first row with all expected fields initialized (e.g., to 0 or blank text). Another option is to manually define a consistent schema by transforming the dataset into a fixed table structure before passing it to "Create CSV table," possibly using a Power Automate expression or an intermediate step (like a formatted JSON table or manually mapping the fields). This forces Power Automate to always recognize all columns regardless of content. In summary, while this appears to be a sampling-based optimization within the flow, it cannot currently be turned off, and your best alternatives are sorting, pre-filling, or schema enforcement.
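The dummy-first-row idea can be sketched in DAX with UNION and ROW. This is shown with three placeholder columns (Col1–Col3 and MyTable are assumptions); extend the pattern to all 15 columns of your model:

```dax
EVALUATE
UNION (
    -- Dummy first row: every expected column is populated, so any
    -- shallow schema sample always sees all columns.
    ROW ( "Col1", "N/A", "Col2", "N/A", "Col3", "N/A" ),
    SELECTCOLUMNS (
        MyTable,
        "Col1", MyTable[Col1],
        "Col2", MyTable[Col2],
        "Col3", MyTable[Col3]
    )
)
```

The dummy row then has to be filtered out of the CSV afterwards (or given a recognizable marker value), so sorting or explicit schema mapping may still be the simpler option.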
Hi @PetterR,
Handle Empty Columns in DAX:
One way to address this is to handle BLANK() values directly in your DAX query, so columns with empty rows are not omitted. You can use:
IF(ISBLANK([YourColumn]), "No Data", [YourColumn])
This ensures the column is included in the output, even if it contains no data.
Ensure All Columns Are Included in Create CSV Table:
Check the settings in the "Create CSV table" action and make sure the "Include headers" option is enabled. This will help ensure all column headers are present in the CSV, even for columns with empty or sparse data.
Use a Placeholder for Empty Columns:
If columns are being skipped due to empty values, setting a placeholder like "No Data" in your DAX query for empty rows may help keep those columns in the output.
Test with Excel File Export:
Alternatively, you can try exporting the data to an Excel file using the "Create an Excel file" action, which may handle empty columns more reliably.
Thank you.
Thank you for the quick reply.
As I understand it, you propose 4 (actually 3 distinct) options to address the issue, of which adapting the settings in the Power Automate action would be the most efficient, since it wouldn't require changing the data transformation ("No Data" if a cell is empty).
I've checked the "Create CSV Table" action in my flow and I couldn't find the setting "Include headers"; please see the screenshot of the action and its code:
Could you please give a hint, where the setting that you propose can be found?
And please don't tell the answer you shared was AI generated 😉
Hi @PetterR ,
Thank you for reaching out to the Microsoft Fabric Community.
You'll find experts in Microsoft Fabric and Power BI on this forum. However, for queries specifically related to Power Platform (such as Power Automate), I recommend posting your question in the dedicated Power Platform Community Forum for more targeted assistance.
If this post helps, then please give it Kudos and consider accepting it as a solution to help other members find it more quickly.
Thank you.
Hi @PetterR ,
Have you had a chance to open a thread in the Power Platform Community to get more insights on your issue? That way you can get the best possible help and more targeted guidance.