I have created a copy job which reads a CSV file into a Lakehouse as a Delta table. I can edit the mappings when the lakehouse and workspace IDs are hardcoded. If I change them to come from the variable library and open the edit mapping page, I get this error:
Lakehouse operation failed for: Operation returned an invalid status code 'NotFound'. Workspace: 'a2e4fabe-c127-4f69-b7a5-6fd1acfc677f'. Path: 'a03b2b59-cfc5-3189-d41f-9be4fa2a40f7/Tables/dbo/project/projects.csv'. ErrorCode: 'PathNotFound'. Message: 'The specified path does not exist.'. RequestId: '3d5bec09-501f-0059-5ae9-6f9b03000000'. TimeStamp: 'Thu, 18 Dec 2025 06:40:50 GMT'. Operation returned an invalid status code 'NotFound' Activity ID: e23619c1-1e8e-4918-92bf-b2b6594f057b
The pipeline works, so the workspace ID and lakehouse ID are correctly set. I have default values and environment-specific values set in the variable library. All of these are working, but I cannot edit mappings after using the variable library in the copy job.
I have also defined a destination schema; I'm not sure if that affects this.
@PanuO What I see in your screenshot is a clear indication that some of the object IDs are incorrect. You need to double-check all the IDs you have in your configuration. Another possible scenario is that the IDs are correct but you don't have access to those objects. Sometimes the error messages are not well refined.
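If you want to rule out a bad ID or a permissions problem programmatically, something like the following rough sketch against the Fabric REST API "Get Lakehouse" endpoint can tell you which case you are in. Token acquisition is left out; ACCESS_TOKEN is a placeholder for a valid Microsoft Entra token with a Fabric API scope, and the IDs below are just the ones from your error message:

```python
# Rough sketch: check whether a workspace/lakehouse ID pair resolves and
# whether the caller can see it, via the Fabric REST API Get Lakehouse call.
import requests

WORKSPACE_ID = "a2e4fabe-c127-4f69-b7a5-6fd1acfc677f"  # from the error message
LAKEHOUSE_ID = "a03b2b59-cfc5-3189-d41f-9be4fa2a40f7"  # from the error message
ACCESS_TOKEN = "<your-entra-access-token>"  # placeholder, obtain one yourself

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/lakehouses/{LAKEHOUSE_ID}"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})

if resp.status_code == 200:
    print("Lakehouse found:", resp.json().get("displayName"))
elif resp.status_code == 404:
    print("NotFound: the workspace or lakehouse ID is wrong, or not visible to you.")
elif resp.status_code in (401, 403):
    print("Authorization problem: invalid token or no access to the object.")
else:
    print("Unexpected status:", resp.status_code, resp.text)
```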
Also, I just noticed that the screenshot in your original question references Tables/dbo/project/projects.csv, which I don't think is correct. If projects.csv is a file, it should be under the Files path, not Tables. Please double-check that. If you wrote those paths manually, you might have made a mistake. I suggest you go through the Copy Job creation again, use the browsing in the UI to find all the proper objects, and then note their locations if you need to store them in the variable library. This is a downside of parameterization: if you make a mistake in a parameter value, your workload will fail, so you need to double-check and test.
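One quick way to confirm where projects.csv actually lives is to list both areas of the Lakehouse from a Fabric notebook. A minimal sketch, assuming the Lakehouse in question is attached as the notebook's default lakehouse:

```python
# Minimal sketch, run in a Fabric Spark notebook where mssparkutils is a
# built-in. CSV source files live under Files/..., while Tables/ holds
# managed Delta tables, so a path like Tables/dbo/project/projects.csv
# would not be expected to resolve.
for area in ("Files", "Tables"):
    print(f"--- {area} ---")
    for entry in mssparkutils.fs.ls(area):  # relative paths resolve against the default lakehouse
        print(entry.path, "(dir)" if entry.isDir else "")
```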
If you find this answer useful or it solves your problem, please consider giving kudos and/or marking it as a solution.
Hi @PanuO ,
Thank you for reaching out to the Microsoft Community Forum.
Hi @apturlov , Thank you for your prompt response.
Hi @PanuO Could you please try the proposed solution shared by @apturlov? If you're still facing the same issue, let us know; we'll be happy to assist you further.
Regards,
Dinesh
Hi @PanuO ,
We haven't heard from you since the last response and were just checking back to see if you have a resolution yet. If you have any further queries, do let us know.
Regards,
Dinesh
It could be that some of the IDs are wrong, but after deployment it still works...
Hi @PanuO ,
Thank you for the update. Could you please confirm whether your issue is resolved? If you have any further queries, do let us know.
Regards,
Dinesh
Hi @PanuO, you mention that you created a Copy Job, but then later you say "the pipeline works". Are you calling your Copy Job from a pipeline?
Assuming that this is just a misphrasing, I did a simple experiment and created a Copy Job that loads from a CSV file into a Lakehouse table. When creating the Copy Job I chose "not to run immediately". Once the Copy Job was saved, I clicked on the destination settings and parameterized the Lakehouse ID using a variable from the library. My variable had a default value.
Then I verified the mapping that I had customized when creating the job. When I changed the destination Lakehouse ID to a variable, the mapping completely disappeared, so I recreated it using "import schemas".
I applied the changes to the Copy Job and clicked Run. The job completed without errors.
That said, I was not able to reproduce your error.
But I can confirm that the experience of editing Copy Job parameters, especially the mapping, is very confusing and not straightforward. It is possible that you skipped or missed a step in the configuration and might need to go through it again.
On the other hand, I personally don't see a scenario where I would need to parameterize a Copy Job, considering that it is not a fully featured ETL tool and I would only use it occasionally for individual data sets. It absolutely makes sense to parameterize a data pipeline, but I would not use a Copy Job in a pipeline; instead I would use a Copy Activity, which is easily and fully parameterizable. This official guide may help you decide which tool to use in which situation: Fabric decision guide - copy activity, dataflow, Eventstream, or Spark - Microsoft Fabric | Microsof...
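To illustrate that last point, here is a rough sketch of starting a parameterized pipeline run through the Fabric Job Scheduler REST API. The parameter names targetWorkspaceId and targetLakehouseId are hypothetical and must match parameters you actually define on your pipeline; the IDs and token are placeholders:

```python
# Rough sketch: run a Fabric data pipeline on demand and pass the workspace
# and lakehouse IDs as pipeline parameters via the Job Scheduler REST API.
# PIPELINE_ID and the parameter names are hypothetical placeholders.
import requests

WORKSPACE_ID = "<workspace-id>"
PIPELINE_ID = "<pipeline-item-id>"
ACCESS_TOKEN = "<your-entra-access-token>"

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline"
)
body = {
    "executionData": {
        "parameters": {
            "targetWorkspaceId": "<value from your variable library>",
            "targetLakehouseId": "<value from your variable library>",
        }
    }
}
resp = requests.post(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, json=body)

# 202 Accepted means the run was queued; the Location header points at the
# job instance you can poll for status.
print(resp.status_code, resp.headers.get("Location"))
```

Inside the pipeline, the Copy Activity's destination settings can then reference those pipeline parameters, which keeps the parameterization in one well-supported place.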
If you find this answer useful or it solves your problem, please consider giving kudos and/or marking it as a solution.
Thank you for the great answer. Did you also try to set the workspace ID? I am surprised that you are able to customize the mapping, as my UI just shows an error. My source is a CSV file as well.