
PanuO
Helper II

Copy Job: Using variable library prevents editing mappings

I have created a copy job which reads a CSV file into a Lakehouse as a Delta table. I can edit mappings when the lakehouse and workspace IDs are hardcoded. If I change them to come from a variable library and open the edit mapping page, I get this error:

 

Lakehouse operation failed for: Operation returned an invalid status code 'NotFound'. Workspace: 'a2e4fabe-c127-4f69-b7a5-6fd1acfc677f'. Path: 'a03b2b59-cfc5-3189-d41f-9be4fa2a40f7/Tables/dbo/project/projects.csv'. ErrorCode: 'PathNotFound'. Message: 'The specified path does not exist.'. RequestId: '3d5bec09-501f-0059-5ae9-6f9b03000000'. TimeStamp: 'Thu, 18 Dec 2025 06:40:50 GMT'. Operation returned an invalid status code 'NotFound' Activity ID: e23619c1-1e8e-4918-92bf-b2b6594f057b 

 

The pipeline runs successfully, so the workspace ID and lakehouse ID are set correctly. I have default values and environment-specific values set in the variable library. All of these work, but I cannot edit mappings after switching the copy job to the variable library.

14 Replies
apturlov
Responsive Resident

@PanuO I confirm your finding; it's reproducible. When I edit the mapping in an existing Copy Job whose source is a flat file and try to import the schema, I get the same error:

[screenshot: apturlov_0-1768252770718.png]

I agree that the path to the source file is incorrect, and that may be a bug in Fabric. You should report this to Microsoft if you can open a support request.

As I suggested before, I would use a Copy activity instead of a Copy Job in your case. That should eliminate this unfortunate issue affecting your work.

Hi @PanuO ,

As mentioned by @apturlov, please raise a support ticket. To raise a support ticket for Fabric and Power BI, follow the steps outlined in this guide:

How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn

 

Regards,

Dinesh

I have created a ticket about this and am waiting for confirmation from Microsoft about the fix date.

Hi @PanuO ,

Thank you for the update. You will receive updates from the Microsoft support team.

 

Regards,

Dinesh

PanuO
Helper II

Sorry, I barked up the wrong tree and thought that the problem was in the variable library, but it is not. The problem is that Copy Job does not support files in subfolders. I can now reproduce the error consistently.

Steps:
1. Create a new lakehouse (with schemas).
2. Create a subfolder under the Files section and upload a CSV file (let's say abc.csv) into the subfolder.
3. Create a new pipeline and add a Copy Job activity.
4. In the Copy Job, select the CSV file from the subfolder as the source data.
5. In the destination, type in a schema name (for example, def) and give any table name. I used def_table.

[screenshot: PanuO_0-1768223175986.png]

6. Import the schema and save the Copy Job. Everything seems to work at this point.
7. Next we need to edit the Copy Job. Open the same Copy Job and click "Edit mapping".
8. Click "Edit column mapping".
9. Fabric seems to cache the column mapping, so it appears to work at first. Click "Reset column mappings" and then click "Import schemas". This reproduces the error.

This produces the same error message as mine: Lakehouse operation failed for: Operation returned an invalid status code 'NotFound'. Workspace: 'd6dde882-24c5-4854-882a-7d44e6386811'. Path: '34148136-1ada-4da5-bb25-62a0a9b487f9/Tables/dbo/subfolder'. ErrorCode: 'PathNotFound'.

Notice that it tries to read data from Tables/dbo/subfolder, but it should read it from Files...
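The mix-up described above can be illustrated with a small sketch: the mapping editor builds a Tables/ path for what is actually a file under Files/. The `build_path` helper below is hypothetical, not part of any Fabric SDK; the lakehouse GUID is copied from the error message.

```python
# Illustration of the path mix-up: the editor requests a Tables/ path
# for a CSV source that actually lives under Files/.
# build_path is a hypothetical helper, not a Fabric API.
LAKEHOUSE_ID = "34148136-1ada-4da5-bb25-62a0a9b487f9"

def build_path(root: str, *segments: str) -> str:
    """Join a lakehouse-relative path under the given root (Files or Tables)."""
    return "/".join([LAKEHOUSE_ID, root, *segments])

# The path in the error (wrong: a CSV source is not a Delta table):
wrong = build_path("Tables", "dbo", "subfolder")

# The path the editor should request for a file in a subfolder:
right = build_path("Files", "subfolder", "abc.csv")

assert wrong.endswith("Tables/dbo/subfolder")
assert right.endswith("Files/subfolder/abc.csv")
```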

 

PanuO
Helper II

I also have the destination schema defined; I'm not sure if that affects this.

[screenshot: PanuO_0-1766142593048.png]

 

apturlov
Responsive Resident

@PanuO What I see in your screenshot is a clear indication that some of the object IDs are incorrect. Double-check all the IDs in your configuration. Another possibility is that the IDs are correct but you don't have access to those objects. Sometimes the error messages are not well refined.

In your original question, though, I just noticed that in a screenshot you reference Tables/dbo/project/projects.csv. I don't think this is correct: if projects.csv is a file, it should be under the Files path, not Tables. Please double-check that. If you wrote those paths manually, you might have made a mistake. I suggest you go through the Copy Job creation again and use browsing in the UI to find all the proper objects, then record their locations if you need to store them in the variable library. This is a downside of parameterization: if you make a mistake in a parameter value, your workload will fail, so double-check and test.

If you find this answer useful or solving your problem please consider giving kudos and/or marking as a solution.

Hi @PanuO ,

Thank you for reaching out to the Microsoft Community Forum.

 

Hi @apturlov , Thank you for your prompt response.

 

Hi @PanuO, could you please try the proposed solution shared by @apturlov? Let us know if you're still facing the same issue; we'll be happy to assist you further.

 

Regards,

Dinesh

Hi @PanuO ,

We haven't heard from you since the last response and were just checking back to see if you have a resolution yet. If you have any further questions, do let us know.

 

Regards,

Dinesh

It could be that some of the IDs are wrong, but after deployment it still works...

Hi @PanuO ,

Thank you for the update. Could you please confirm whether your issue is resolved? If you have any further questions, do let us know.

 

Regards,

Dinesh

The problem is not in the variable library. It is that Copy Job does not support files in subfolders.

apturlov
Responsive Resident

Hi @PanuO, you mention that you created a Copy Job, but then later you say "Pipeline works". Are you calling your Copy Job from a pipeline?
Assuming that this is just a misphrasing, I did a simple experiment and created a Copy Job that loads from a CSV file into a Lakehouse table. When creating the Copy Job I chose not to run it immediately. Once the Copy Job was saved, I clicked on the destination settings and parameterized the Lakehouse ID using a variable from the library. My variable had a default value.

[screenshot: apturlov_0-1766112305310.png]

[screenshot: apturlov_1-1766112364858.png]

 

Then I verified the mapping that I had customized when creating the job. When I changed the destination Lakehouse ID to a variable, the mapping completely disappeared, so I recreated it using "Import schemas".

[screenshot: apturlov_2-1766112587473.png]

I applied the changes to the Copy Job and clicked Run. The job completed without errors.

[screenshot: apturlov_3-1766112689858.png]

That said, I was not able to reproduce your error.

But I can confirm that the experience of editing Copy Job parameters, especially the mapping, is very confusing and not straightforward. It is possible that you skipped or missed a configuration step and might need to go through it again.

On the other hand, I personally don't see a scenario where I would need to parameterize a Copy Job, considering that it is not a fully featured ETL tool and I would only use it occasionally for individual data sets. It absolutely makes sense to parameterize a data pipeline, but I would not use a Copy Job in a pipeline; instead I would use a Copy activity, which is easily and fully parameterizable. This official guide may help you decide which tool to use in which situation: Fabric decision guide - copy activity, dataflow, Eventstream, or Spark - Microsoft Fabric | Microsof...
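To illustrate the shape of that Copy activity parameterization, here is a schematic only: the keys are simplified approximations, not the exact Fabric pipeline JSON schema, and the expression syntax is my understanding of how library variables are referenced at run time.

```python
# Schematic of a parameterized Copy activity destination.
# Keys are illustrative approximations of pipeline JSON, and the
# @pipeline().libraryVariables expressions are assumptions about
# the run-time reference syntax, not verified schema.
copy_activity = {
    "name": "CopyCsvToLakehouse",
    "type": "Copy",
    "sink": {
        "type": "LakehouseTable",
        # Resolved at run time from the variable library instead of
        # a hardcoded GUID:
        "workspaceId": "@pipeline().libraryVariables.workspaceId",
        "artifactId": "@pipeline().libraryVariables.lakehouseId",
        "table": "def_table",
    },
}

# Swapping GUIDs for expressions is what lets the same pipeline
# deploy across dev/test/prod workspaces.
for key in ("workspaceId", "artifactId"):
    assert copy_activity["sink"][key].startswith("@pipeline()")
```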

If you find this answer useful or solving your problem please consider giving kudos and/or marking as a solution.

Thank you for the great answer. Did you also try to set the workspace ID? I am surprised that you are able to customize the mapping, as my UI just shows an error. My source is also a CSV file.
