I want to create a pipeline that uploads some CSV files to a lakehouse. After the first load, only new files in the folder should be uploaded. At the moment I'm running into a limitation: my files are local, and I couldn't find an option to read from a local folder with a pipeline. Is there any way to do this in Fabric?
Pipelines don't support connecting to on-premises data or data on your local machine. Currently the data must be available in the cloud, in one of the supported data stores (such as Azure Blob Storage or ADLS Gen2).
You can set up an AzCopy process on your desktop to upload the incremental data to an Azure blob container (see azcopy sync), and then load it into the lakehouse using a pipeline Copy activity.
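For reference, here is a minimal sketch of what the AzCopy step could look like (the local path, storage account, container, and SAS token are placeholders you would replace with your own; azcopy sync only uploads files that are new or changed since the last run, which gives you the incremental behavior):

azcopy sync "C:\data\csv" "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>" --include-pattern "*.csv"

You can schedule this command with Windows Task Scheduler (or cron) so the incremental upload runs automatically, then point the Copy activity's source at that blob container.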