I want to create a pipeline that uploads some CSV files to a lakehouse. After the first load, only new files in the folder should be uploaded. Right now I'm stuck because my files are local, and I couldn't find an option to read from a local folder with a pipeline. Is there any way to do this in Fabric?
Pipelines don't support connecting to on-premises data or files on your local machine. Currently the data must be available in the cloud, in one of the supported data stores (such as Azure Blob Storage or ADLS Gen2).

You can set up an AzCopy process on your desktop to upload the incremental data to an Azure blob container (see `azcopy sync`), and then load it into the lakehouse using a pipeline Copy activity.
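As a minimal sketch of that first step, the `azcopy sync` command below mirrors a local folder to a blob container, copying only files that are new or changed since the last run. The local path, storage account, container name, and SAS token are placeholders you would substitute with your own values:

```shell
# Sync a local CSV folder to Azure Blob Storage.
# azcopy sync only transfers files that are new or modified,
# which gives you the incremental behavior you're after.
# Replace <account>, <container>, and <SAS-token> with your own values.
azcopy sync "C:\data\csv-files" \
  "https://<account>.blob.core.windows.net/<container>?<SAS-token>" \
  --recursive
```

You could schedule this with Windows Task Scheduler (or cron on Linux) so the blob container stays up to date, and then have the Fabric pipeline's Copy activity pick up the files from there.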