In a Dataflow Gen2 I have connected to a SharePoint folder and filtered on the Path column to get just the files I want.
Now I want to load the content of each of those CSV files and save it to a table in the Lakehouse.
By right-clicking the csv_list query and choosing "Reference", I made three other queries.
In each of these, I click '[Binary]' on one of the files and it opens it up and loads it as a table.
I set up the data destination for these three extra queries as the Lakehouse, and I thought I was done.
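In the Advanced editor, the csv_list query looks something like this (the site URL and folder path below are placeholders, not my real ones):

```
let
    // Placeholder site URL; the real query connects to our actual SharePoint site.
    Source = SharePoint.Files("https://contoso.sharepoint.com/sites/Example", [ApiVersion = 15]),
    // Keep only the CSV files we want by filtering on the folder path.
    FilteredRows = Table.SelectRows(Source, each Text.StartsWith(
        [Folder Path], "https://contoso.sharepoint.com/sites/Example/Shared Documents/csv/"))
in
    FilteredRows
```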
I tried saving the dataflow and at first got a notification that it failed to save. A couple of minutes later it said it saved successfully. But when I tried to run the flow, it failed because of errors on the three queries that reference the one pulling from SharePoint. When I opened the dataflow again, it had somehow added a "Remove columns" step that got rid of the Content column holding the binaries.
I removed that "Remove columns" step and saved again, but the same thing happened...
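In the Advanced editor, the injected step looked something like this (my reconstruction with placeholder names, not the exact generated code):

```
let
    Source = csv_list,
    // Step added automatically on save; it strips the Binary column.
    RemovedColumns = Table.RemoveColumns(Source, {"Content"})
in
    RemovedColumns
```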
Am I approaching this completely wrong? How can I get it to work?
Since you need to reference manually (Power Query does not support adding queries dynamically), you may as well create three separate queries without the reference.
The "Remove columns" step is "helpfully" making sure that only basic column types are returned. Binary is unsupported.
Hi @-_
Thank you for reaching out to the Microsoft Fabric Community Forum.
Thank you @lbendlin for your response.
Regarding the issue with reading multiple files from SharePoint into the Lakehouse, please follow the steps below:
If this response resolves your query, please mark it as the Accepted Solution to assist other community members. Additionally, a Kudos is appreciated if you found the response helpful.
Thank you!
This is exactly what I was already doing, but I was getting the issue I described. I think I will have to do what @lbendlin said and create separate flows for each CSV, unless there's another way?
Hi @-_
Thanks for the update. Hope your issue gets resolved soon. When it does, please share the insights here and mark it, or any other helpful answer, as 'Accept as Solution' to help others with similar queries.
Thank you.
Thank you @lbendlin, I was trying to avoid that because there will be many more CSVs than just these, but I think it will be the simplest solution for now. Would it work to put lots of these individual dataflows into one data pipeline, or will they each need their own pipeline? (The CSVs are updated daily, so I'll need to schedule a regular refresh.)
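One pattern that can avoid a query per file is combining the files in a single query: parse every binary in the filtered listing and stack the results into one table with a single Lakehouse destination. A rough sketch, assuming all the CSVs share the same schema (delimiter and encoding are assumptions):

```
let
    Source = csv_list,
    // Parse each file's binary into a table.
    Parsed = Table.AddColumn(Source, "Data", each
        Table.PromoteHeaders(
            Csv.Document([Content], [Delimiter = ",", Encoding = 65001, QuoteStyle = QuoteStyle.Csv]),
            [PromoteAllScalars = true])),
    // Stack all parsed tables into one; new files in the folder are
    // picked up automatically on each scheduled refresh.
    Combined = Table.Combine(Parsed[Data])
in
    Combined
```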