I ran a test with Dataflow Gen2 and was able to read the CSV file by position, but I could only load it into a Lakehouse (not a Warehouse). When I tested data pipelines, I did not find an option to read by position. Can you tell me whether such an option exists? Or, thinking about the architecture, is loading into the Lakehouse first the correct approach?
I found the solution for using Dataflow Gen2 with a Warehouse destination in this topic:
Solved: Dataflow (Gen2): Fabric Warehouse Destination...th... - Microsoft Fabric Community
In Fabric Data Pipelines, you can use the Copy activity to bring data directly into the Warehouse. Create a new data pipeline, click "Copy Data", and select your CSV file. Currently, pipelines require the file to be in Azure, but access to on-premises data is being added as well. You can then select "Data Warehouse" as your target; it is listed under the Workspace category of data destinations.
Hi marcosvin, as far as I know it is not required to load into the Lakehouse first and then into the Warehouse.
I don't understand what you mean by "read by position"; could you clarify that?
Hi! The file is fixed-width (delimited by position): the first column spans characters 1 to 10, the second characters 11 to 20, and so on. I didn't find functionality to read the file this way.
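For reference, in Dataflow Gen2 this kind of fixed-width split can be expressed in Power Query M with Splitter.SplitTextByPositions. Below is a minimal sketch, assuming the file is reachable via File.Contents (swap in your actual Lakehouse or Azure Blob connector) and using the example widths above plus a hypothetical third column:

```
let
    // Assumption: the fixed-width file is accessible through File.Contents;
    // replace the path/connector with your real source (e.g. Azure Blob or Lakehouse file).
    Source  = Lines.FromBinary(File.Contents("C:\data\fixed_width.csv"), null, null, 65001),
    // Turn the list of lines into a single-column table.
    AsTable = Table.FromList(Source, Splitter.SplitByNothing(), {"Line"}),
    // Cut each line at character positions 0, 10, 20 (i.e. columns 1-10, 11-20, 21-end).
    ByWidth = Table.SplitColumn(
        AsTable,
        "Line",
        Splitter.SplitTextByPositions({0, 10, 20}),
        {"Col1", "Col2", "Col3"}
    )
in
    ByWidth
```

Adjust the position list and column names to match your layout; the resulting query can then be pointed at a Warehouse destination once that is configured.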