Hello,
I have a Power BI dashboard that pulls daily files from SharePoint to feed the report. It was working properly until, all of a sudden, it stopped loading the data from the CSV files it feeds from.
The CSV files contain the data, and Power BI is loading the files, but instead of the data it loads every column as null values. This is happening with files after July 2024; the earlier files load properly. Do you know how I can fix this?
Thanks in advance!
Assuming your data source is Excel files on SharePoint - are you connecting to Excel sheets, or to named tables?
I'm connecting to a SharePoint folder that holds all the CSV files it feeds from. Please see the source settings below:
Then I filter based on the file name:
Then I extract those files and expand them to see all the rows in them:
That's the data I use to load my report, but as of July this year, it started loading the tables with null values and I don't know why. When I check the files in SharePoint, the data is there; the problem is in Power BI.
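For reference, the folder pattern described above usually looks something like this in Power Query M. This is only a sketch of what those steps typically generate - the site URL, file-name filter, and column names are placeholders, not the actual ones in this report:

```m
// Hedged sketch of a SharePoint-folder ingestion; all names are placeholders.
let
    // Connect to the SharePoint site and list the files
    files = SharePoint.Files("https://contoso.sharepoint.com/sites/Reports", [ApiVersion = 15]),
    // Filter based on the file name
    filtered = Table.SelectRows(files, each Text.StartsWith([Name], "DailyExport")),
    // Parse each file's binary content as CSV
    withData = Table.AddColumn(filtered, "Data",
        each Csv.Document([Content], [Delimiter = ",", Encoding = 65001])),
    // Expand the parsed tables to see all the rows
    expanded = Table.ExpandTableColumn(withData, "Data", {"Column1", "Column2"})
in
    expanded
```

If any file in the folder uses a different delimiter than the one hard-coded in `Csv.Document`, every value in that file lands in a single column and the expand step returns nulls for the rest - which matches the symptom described here.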
Thank you for your help.
Please do not sort the table in Power Query - that is a very expensive operation that likely has no benefit (Power BI is ignoring the Power Query sort order).
Can you isolate the CSV file that causes the issue, or do you only see nulls across all files?
Ohh no. I sorted it just to see if the latest tables were being included correctly because it takes ages to scroll down to see the bottom, but then I removed the step. I know the issue is happening with all the files after July 16th this year. Everything before that loads properly.
When you visually inspect these CSV files, do you see a difference?
Mmm yes, there is a difference. I download the file daily using Power Automate to create the CSV table. Since the file is originally tab separated instead of comma separated, I have an action that replaces the tabs with commas to generate the CSV, and it seems that, for some reason, after that date it started changing the format. The correct format looks like this:
And the incorrect one is putting everything into a single column:
Perhaps I should take this question to the Power Automate forum instead, as based on this the problem is not in Power BI.
You can use Power Query to sense if the file is tab or comma separated, and then ingest it accordingly.
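One possible way to sense the delimiter, sketched below: read the first line of the file and check whether it contains a tab character. The file path and encoding here are assumptions for illustration; in the SharePoint query the binary would come from each row's `[Content]` column instead:

```m
// Sketch only - the file path is a placeholder for your own source step.
let
    raw = File.Contents("C:\data\sample.csv"),
    // Look at the first line of the file
    firstLine = Text.Split(Text.FromBinary(raw), "#(lf)"){0},
    // If the first line contains a tab, assume tab separated; otherwise commas
    delimiter = if Text.Contains(firstLine, "#(tab)") then "#(tab)" else ",",
    parsed = Csv.Document(raw, [Delimiter = delimiter, Encoding = 65001])
in
    parsed
```

This makes the query tolerant of the delimiter switching between files, so a change in the upstream Power Automate flow no longer breaks the load.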
Where do you download the file from?
Since the file is originally tab separated instead of comma separated, I have an action that replaces the tabs with commas to generate the CSV
That's not recommended. It is better to leave the raw files as they are and handle the transformation in a tool that is a bit more robust than Power Automate.
Ohh, I did not know that was possible from Power Query. The original file is downloaded from the company website, but some columns need to be removed for data privacy reasons, and we need to keep the rest. Most columns contain dates or binary flags indicating whether certain data is available in the file. I was able to do that in Power Automate, but when I tried to generate the CSV file for Power BI to consume, the Create CSV table action gave me issues because the source is tab separated rather than comma separated. That's why I needed to change the delimiter, but now I don't know why it started changing the format of the file. What other tool would you suggest for this instead? This is the flow I use to generate the file:
You would have to modify the CSVDecode step.
But - all of these steps could be part of the Power Query ETL process if you want. You can use things like
try ... otherwise ...
to ingest the raw CSV either with commas or with tabs and then you can delete/discard the columns with the PII.
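A minimal sketch of that approach in Power Query M: parse with commas first and fall back to tabs if the comma parse collapses everything into one column (a column-count check is more reliable than `try ... otherwise` alone here, since `Csv.Document` does not error on a wrong delimiter - it just returns a single column). The file path and the PII column names are hypothetical:

```m
// Sketch: ingest the raw CSV with either delimiter, then drop PII columns.
let
    raw = File.Contents("C:\data\sample.csv"),  // placeholder source
    // First attempt: comma separated
    byComma = Csv.Document(raw, [Delimiter = ",", Encoding = 65001]),
    // If commas produced only one column, re-parse as tab separated
    parsed = if Table.ColumnCount(byComma) > 1
        then byComma
        else Csv.Document(raw, [Delimiter = "#(tab)", Encoding = 65001]),
    // Delete/discard the PII columns in Power Query instead of upstream;
    // "Email" and "Phone" are hypothetical column names
    noPii = Table.RemoveColumns(parsed, {"Email", "Phone"}, MissingField.Ignore)
in
    noPii
```

With this in place, the Power Automate flow only needs to download the raw file as-is; all delimiter handling and column removal lives in one refreshable Power Query step.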