I created a Microsoft Fabric data pipeline that includes Delete -> Copy -> Dataflow -> Semantic model refresh activities.
After running it a few times, I realised that I wanted to extract the filenames. So under Advanced section > Additional columns, I added a new row and selected $$FILENAME as the value (image below), then saved the pipeline.
I validated and ran the pipeline, but the filename column did not appear in the dataflow. When I opened the pipeline's Copy activity and checked the Advanced section, I found the value had changed to "Custom", with an adjacent custom field containing the entry $$FILENAME.
I tried many times, searched the forums, and checked with ChatGPT, Perplexity, etc., but couldn't find a solution. Is this a known bug, or is there a fix available?
This fix is important for me, as the filename is needed to map fields from other files to get the required output in Power BI.
(In case you wanted to know: the source is an Azure Blob Storage container folder, and the files are *.csv.)
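For reference, the additional column I configured should serialize in the Copy activity's source settings roughly like this (a sketch of the usual JSON shape, not an export from my actual pipeline; the column name FileName is just illustrative):

```json
"source": {
    "type": "DelimitedTextSource",
    "additionalColumns": [
        { "name": "FileName", "value": "$$FILENAME" }
    ]
}
```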
Thanks in advance for your support.
@burakkaragoz , @suparnababu8 , @nilendraFabric , @lbendlin , @andrewsommer
Hi @PrasoonSur,
I think you can't pass $$FILENAME like this.
A workaround is to use the Get Metadata activity to retrieve the filenames and pass them to the Copy data activity using variables or expressions.
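Roughly, the wiring looks like this (the activity names are placeholders, not from your pipeline):

```
Get Metadata1  ->  Field list: Child items
ForEach1       ->  Items: @activity('Get Metadata1').output.childItems
Copy data1     ->  file name (inside the loop): @item().name
```

The Get Metadata activity's Child items output is an array of { name, type } objects, so @item().name gives you each filename inside the ForEach.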
Hi @nilendraFabric,
Thanks for the quick reply. I tried the Get Metadata activity followed by a ForEach activity around the Copy data activity, but I have around 45,000 files in Azure Blob Storage, and the count grows by about 2,000 files per day. The ForEach activity is taking too long to complete the run; is there any workaround to make it faster?
Would a PySpark run be faster?
Thanks for your support.
Hi @PrasoonSur,
Thank you for reaching out to Microsoft Fabric Community.
Thank you @nilendraFabric for the prompt response.
The use of $$FILENAME in the Additional columns section of the Copy activity in Microsoft Fabric pipelines is currently not supported the same way it is in Azure Data Factory. This is why you are seeing the value change to Custom.
Fabric's pipeline UI blocks reserved tokens like $$FILENAME from being used in custom fields. Even if accepted, they are not passed correctly downstream, especially to Dataflows.
A PySpark notebook can also ingest a large number of files in parallel with the filenames included. If performance is critical and you are ingesting many files, use PySpark.
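For example, here is a minimal PySpark sketch for a Fabric notebook (the abfss path and table name below are placeholders you would replace with your own):

```python
from pyspark.sql.functions import input_file_name, split, element_at

# 'spark' is the session predefined in Fabric notebooks.
df = (
    spark.read
    .option("header", "true")
    .csv("abfss://<container>@<account>.dfs.core.windows.net/<folder>/*.csv")
    # input_file_name() returns the full source path; keep only the last segment.
    .withColumn("filename", element_at(split(input_file_name(), "/"), -1))
)

df.write.mode("overwrite").saveAsTable("staged_files")  # hypothetical Lakehouse table
```

Reading with a wildcard like this avoids the per-file ForEach overhead, because Spark lists and reads the files in parallel.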
If this post helps, please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I'd truly appreciate it!
Thanks and regards,
Anjan Kumar Chippa