PrasoonSur
New Member

MS Fabric Data Pipeline Copy Data Activity Additional Column Value automatic change error

I created an MS Fabric Data Pipeline that includes Delete -> Copy -> Dataflow -> Semantic Model refresh activities.

 

After running it a few times, I realised I also wanted to extract the filenames. So under the Advanced section > Additional columns, I added a new row and selected $$FILENAME as the value (image below), then saved the pipeline.

 

(Screenshot: PrasoonSur_0-1750175483209.png)

I then validated and ran the pipeline, but the filename column did not appear in the dataflow. When I opened the Copy activity in the pipeline and checked the Advanced section, I found that the value had changed to "Custom", with an adjacent custom field containing the entry $$FILENAME.

(Screenshot: PrasoonSur_1-1750175568216.png)

I tried many times, searched forums, and checked ChatGPT, Perplexity, etc., but couldn't find a solution. Is this a known bug, or is there a fix available?

 

This fix is important for me, as the filename is needed to map fields from other files to produce the required output in Power BI.

 

(In case it helps: the source is an Azure Blob Storage container folder, and the files are *.csv.)

Thanks in advance for your support.

@burakkaragoz , @suparnababu8 , @nilendraFabric , @lbendlin , @andrewsommer 

3 REPLIES
nilendraFabric
Community Champion

Hi @PrasoonSur,

 

I don't think you can pass $$FILENAME like this.

 

A workaround is to use a Get Metadata activity to retrieve the filenames and pass them to the Copy data activity using variables or expressions, along the lines of the sketch below.
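Something like this, as a minimal sketch (the activity name 'Get Metadata1' is only a placeholder, and the Get Metadata activity is assumed to have 'Child items' in its Field list):

ForEach > Items:                              @activity('Get Metadata1').output.childItems
Copy data (inside ForEach) > source file:     @item().name
Copy data > Additional columns > Value:       @item().name

Each iteration then copies a single file and can stamp that file's name into the additional column.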

Hi @nilendraFabric ,

 

Thanks for the quick reply. I tried the Get Metadata activity followed by a ForEach activity for the copy, as I have around 45,000 files in Azure Blob Storage, and they grow by about 2,000 files per day. The ForEach activity is taking too long to complete the run; is there any workaround to make it faster?

 

Would a PySpark run be faster?

Thanks for your support.

Hi @PrasoonSur,

 

Thank you for reaching out to Microsoft Fabric Community.

 

Thank you @nilendraFabric for the prompt response.

 

The use of $$FILENAME in the Additional columns section of the Copy activity in Microsoft Fabric pipelines is currently not supported the same way it is in Azure Data Factory, which is why you are seeing the value change to Custom with an error.

Fabric's pipeline UI blocks reserved tokens like $$FILENAME from being used in custom fields, and even when accepted, they are not passed correctly downstream, especially to Dataflows.

  • Since you have many files, a ForEach loop will slow down over time. Instead, use a single Copy activity that loads all the files into a staging Lakehouse table.
  • Add the filename dynamically by adjusting your source/file path logic.
  • After loading, use a Dataflow to read from the staging table and apply any mapping or transformations required using the filename column.

A PySpark notebook can also ingest a large number of files in parallel with the filenames included; if performance is critical and you are ingesting this many files, use PySpark, for example along the lines of the sketch below.
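A minimal sketch of that approach from a Fabric notebook (the abfss path, the header option, and the table name are assumptions, not details from this thread):

from pyspark.sql import functions as F

# Read every CSV in the folder in one parallel pass (path is a placeholder)
df = (
    spark.read
    .option("header", "true")
    .csv("abfss://<container>@<storageaccount>.dfs.core.windows.net/<folder>/*.csv")
)

# Record which file each row came from, keeping just the file name itself
df = df.withColumn("FileName", F.element_at(F.split(F.input_file_name(), "/"), -1))

# Land everything in a staging Lakehouse table for the Dataflow to pick up
df.write.mode("overwrite").saveAsTable("staging_files")

The Dataflow (or semantic model) can then read staging_files and use the FileName column for the mapping.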

 

 

If this post helps, please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I’d truly appreciate it!

 

Thanks and regards,

Anjan Kumar Chippa
