Hello community,
I have a JSON file that contains copyActivity.translator mappings for several tables. I use a Filter activity to extract the translator for a particular table from this file. The problem: my Lookup activity returns two table names, which I iterate over with a ForEach activity. Inside the ForEach, I set a variable to the current table name and then use a Filter activity to get that table's translator, but I get the same translator for both tables even though I set the variable before filtering.
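For context, a unified mapping file like the one described might look roughly like this. The table and column names here are hypothetical; `TabularTranslator` is the translator type the Copy activity uses for explicit column mappings:

```json
[
  {
    "TableName": "Orders",
    "translator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "name": "OrderId" }, "sink": { "name": "order_id" } }
      ]
    }
  },
  {
    "TableName": "Customers",
    "translator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "name": "CustId" }, "sink": { "name": "customer_id" } }
      ]
    }
  }
]
```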
This is my Set Variable activity.
After this, a Lookup activity fetches the JSON mapping file, followed by the Filter activity.
Ideally, it should return a different mapping for each TableName, but when I run it I get the same output for both tables.
See, both Set Variable activities are setting the table names properly.
But when we look at the output of the Filter activity,
Can someone tell me why I am getting the same output? Does this mean my Set Variable activities run in parallel, so that before the Filter activity runs, the table name has been overwritten by the other Set Variable activity?
The scope of a variable is at the pipeline level, not within an iteration of a ForEach loop.
Thinking out loud (I haven't tested this), the absolute simplest fix would be to set the ForEach to sequential, but in most circumstances that's not the desired outcome.
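The pipeline-level variable scope can be simulated outside Data Factory. This is a hedged Python sketch, not pipeline code: each thread plays one ForEach iteration, `shared` plays the single pipeline variable, and the sleep stands in for the gap between Set Variable and Filter. Table and translator names are made up for illustration:

```python
import threading
import time

# One pipeline-scoped variable shared by every ForEach iteration,
# mirroring how Set Variable writes at the pipeline level.
shared = {"table_name": None}
results = {}

# Hypothetical table -> translator mapping.
mappings = {"Orders": "orders-translator", "Customers": "customers-translator"}

def iteration(table):
    # "Set Variable": every parallel branch writes the SAME variable.
    shared["table_name"] = table
    time.sleep(0.1)  # the Lookup/Filter activities run a few seconds later
    # "Filter": reads whatever the variable holds NOW, not the value
    # this branch wrote earlier.
    results[table] = mappings[shared["table_name"]]

threads = [threading.Thread(target=iteration, args=(t,)) for t in mappings]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # both iterations see the last value written
```

Both branches end up filtering on the last table name written, which is exactly the repeated-translator symptom described above.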
To achieve this properly in parallel, the approach I tend to take is to produce a payload for the ForEach that includes the translations, rather than retrieving them separately through Lookups within the ForEach. The ForEach then naturally partitions the set, leaving each iteration with the translations for its own table. You also have less contact with the database that way (i.e. you don't have lots of individual Lookups happening concurrently).
Without seeing the wider solution I'm making some assumptions, but if at some point you have a Lookup that retrieves the tables to process, extending it with the mappings would solve this.
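A rough sketch of that combined payload, with hypothetical table and column names: the Lookup returns an array where each item already carries its translator, so inside the ForEach you can reference `item().translator` directly with no per-iteration Lookup, Filter, or variable:

```json
[
  {
    "TableName": "Orders",
    "translator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "name": "OrderId" }, "sink": { "name": "order_id" } }
      ]
    }
  },
  {
    "TableName": "Customers",
    "translator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "name": "CustId" }, "sink": { "name": "customer_id" } }
      ]
    }
  }
]
```

Because each `item()` is scoped to its own iteration, there is no shared state for parallel branches to overwrite.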
Thanks, @justinjbird. The sequential-run solution is correct; I have already tested it, but since there are very large tables to pull, it is not an ideal solution. Looking at the timestamps, the Set Variable activities are evaluated at 11:08:26, but the Filter runs at 11:08:35. That means the Filter only evaluates whatever value the last Set Variable activity wrote, and will always give the same output for every ForEach iteration (unless a Set Variable activity runs again after the Filter and changes the table name). I had a unified mapping file for all tables; I divided it into separate files, one per table, and now instead of searching for a specific table's mapping in one large file, I pull the mapping directly from that table's own file using the table name, and it is working fine.
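For anyone hitting the same issue, the per-table-file approach can be expressed with an expression along these lines. This is an illustrative sketch, not the actual pipeline JSON; the dataset name, parameter name, and folder are hypothetical:

```json
{
  "dataset": {
    "referenceName": "TableMappingJson",
    "type": "DatasetReference",
    "parameters": {
      "fileName": "@concat(item().TableName, '.json')"
    }
  }
}
```

Since `item()` is iteration-scoped, each Lookup reads its own table's mapping file and no pipeline variable is needed at all.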
Hi @ketan5050 ,
I wanted to check in regarding your issue. Has it been resolved, or do you need any further information? Let me know if you'd like more details.
Thanks.
Hi @ketan5050 ,
Thank you for sharing your findings and explaining the timestamp analysis; it was an effective way to confirm the race condition caused by parallel execution.
You are correct: in parallel ForEach loops, pipeline variables can be overwritten before activities like Filter run, which explains the repeated translator output. I also appreciate your workaround of dividing the unified mapping file into separate files for each table and referencing them by table name. This approach prevents variable conflicts, simplifies the logic, and can improve performance by reducing concurrent lookups.
Thanks for your prompt response @justinjbird .
Regards,
Yugandhar.