I am creating a metadata-driven pipeline with a control table that cycles through the pipeline one data object at a time. For ease of use, each data object is updated through an individual copy job, and the pipeline cycles through those copy jobs.
However, when I run the data objects through the for loop, after the first copy job completes, the second copy job (for another data object) fails with an error saying that a copy job was already running at the same time.
Is there a way to run different copy jobs in parallel? I wanted to use copy jobs over copy activities because the transformation is not intensive, and copy jobs are more low-code friendly than a copy activity in a pipeline. Copy jobs may be better architecturally, but I need to be able to split the pipeline up by individual data object rather than by a grouping of data objects, which is what I would get if I ran one copy job per domain.
Hi @BriefStop
Copy Jobs can't run in parallel inside a pipeline loop, but you can get the same per-object parallelism with a Copy activity inside a ForEach that has "isSequential" set to false:
"ForEach": {
  "items": "@activity('GetMetadata').output.value",
  "isSequential": false,
  "activities": [
    {
      "name": "CopyData",
      "type": "Copy",
      ...
    }
  ]
}
If this response was helpful in any way, I’d gladly accept a 👍, much like the joy of seeing a DAX measure work first time without needing another FILTER.
Please mark it as the correct solution. It helps other community members find their way faster (and saves them from another endless loop 🌀).
Thank you. For parameterizing a Copy Data activity, how are we supposed to set up the mappings dynamically? I currently build a separate pipeline for every single copy I want to run, with the mappings hard-coded in each, and that's definitely not best practice.
You can return the mapping array dynamically, for example from a Lookup activity (here called GetMapping) that reads it from your control table:
[
  { "source": "CustomerID", "sink": "Cust_ID" },
  { "source": "CustomerName", "sink": "Cust_Name" }
]
Then reference that output in the Copy activity's translator property:
"translator": {
  "type": "TabularTranslator",
  "mappings": "@activity('GetMapping').output.value"
}
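If the column pairs live in your control table as metadata rather than as ready-made JSON, you can also generate the mapping array yourself before handing it to the translator. A minimal plain-Python sketch, assuming a hypothetical `column_map` of source-to-sink renames (this is illustration, not a Fabric API):

```python
import json

# Hypothetical control-table metadata for one data object:
# source column name -> sink column name.
column_map = {
    "CustomerID": "Cust_ID",
    "CustomerName": "Cust_Name",
}

def build_translator(column_map: dict) -> dict:
    """Build a TabularTranslator block from a {source: sink} dict."""
    return {
        "type": "TabularTranslator",
        "mappings": [
            {"source": src, "sink": snk} for src, snk in column_map.items()
        ],
    }

translator = build_translator(column_map)
print(json.dumps(translator, indent=2))
```

The resulting JSON has the same shape as the snippet above, so one parameterized pipeline can serve every data object instead of one hard-coded pipeline per copy.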
If this response was helpful in any way, I’d gladly accept a kudo.
Please mark it as the correct solution. It helps other community members find their way faster.
Connect with me on LinkedIn
Hi @BriefStop,
Thank you for reaching out to Microsoft Fabric Community.
Thank you @Zanqueta and @nielsvdc for the prompt response.
As we haven’t heard back from you, we wanted to kindly follow up to check whether the solutions provided by the other users worked for you. Let us know if you need any further assistance.
Thanks and regards,
Anjan Kumar Chippa
Hi @BriefStop,
We wanted to kindly follow up to check whether the solutions provided by the other users worked for you. Let us know if you need any further assistance.
Thanks and regards,
Anjan Kumar Chippa
Hi @BriefStop, Copy Jobs are designed as standalone, scheduled tasks rather than pipeline activities, which means they don’t support orchestration features like parallel execution within a pipeline loop. That’s why you’re seeing the error: Fabric prevents multiple concurrent runs of the same Copy Job for consistency and resource management.
To use Copy Jobs effectively, you can specify multiple tables for a single source, and the Copy Job will run all of the copy processes in parallel with each other. But Copy Jobs are not designed to take parameters, so they can't be executed with dynamic input.
When you want to build a metadata-driven pipeline, your options are to use a Copy Data activity or a notebook with PySpark code. The latter is effectively the solution with the least compute overhead.
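To make the metadata-driven notebook option concrete, here is a minimal plain-Python sketch (not actual Fabric or PySpark API) that reads a control list and runs one copy per data object in parallel; `copy_table` and the `control_table` rows are hypothetical placeholders for your real copy logic and metadata store:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical control table: one row per data object to copy.
control_table = [
    {"source": "dbo.Customers", "sink": "lake.customers"},
    {"source": "dbo.Orders",    "sink": "lake.orders"},
    {"source": "dbo.Products",  "sink": "lake.products"},
]

def copy_table(row: dict) -> str:
    """Placeholder for the real copy logic (e.g. a Spark read/write per table)."""
    # In a notebook, this is where you would read row["source"]
    # and write it to row["sink"].
    return f"copied {row['source']} -> {row['sink']}"

# Run one copy per data object concurrently, analogous to a
# ForEach activity with isSequential set to false.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(copy_table, control_table))

for r in results:
    print(r)
```

`max_workers` caps the degree of parallelism, much like a ForEach activity's batch-count setting; the control table stays the single place where you add or remove data objects.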
Hope this helps. If so, please give kudos 👍 and mark as Accepted Solution ✔️ to help others.