Hi there,
I have been having issues with a data pipeline Copy activity recently. Two weeks ago, my pipeline suddenly stopped working on the Copy activity where I pull in data from a REST API source and map it to my lakehouse. I keep receiving the vague error below:
There does not appear to be an issue with the lakehouse, as I have many other pipelines that move data from other API sources into the same lakehouse database. I tried researching this '2200' error, but to little avail. I also tried recreating the pipeline, but I receive the same error.
Would this be an issue with the source in this case? I thought it might have to do with a NULL value being written to the LH, but all columns are configured to allow NULLs. Any help would be much appreciated.
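For reference, here is roughly how I checked the nullability, using a quick PySpark cell in a Fabric notebook (the table name below is a placeholder for my actual destination table):

```python
# Run in a Fabric notebook attached to the lakehouse.
# "bronze_api_data" is a placeholder for the real destination table name.
df = spark.table("BronzeLakehouse.bronze_api_data")

# Print each column with its type and whether it accepts NULLs.
for field in df.schema.fields:
    print(f"{field.name}: {field.dataType.simpleString()}, nullable={field.nullable}")
```

Every column comes back nullable=True, so I don't think nullability constraints are the cause.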
Many thanks
Hi @AdamJennings-78 ,
I wanted to follow up to see if your issue has been resolved or if you require any further information. Please let us know if you need any additional assistance.
Thank You
Hi all, and thanks for your input on this issue.
I think I may have figured out what the issue is and how I came across it, but I am getting a different error now.
The error I am now getting is the following (some client info redacted):
It seems like it is trying to write the Parquet files to the lakehouse, but is returning a NullReferenceException? When I try to reconfigure the destination connection, it shows a different layout from the pipeline activities that actually work. See below for the instances where they work and don't work:
The screenshot above shows the configuration I use when getting the new error.
This is an example of a pipeline configured a few months ago that is working perfectly. I do not know why there is suddenly a new step when configuring a new lakehouse destination, where I now have to select the Lakehouse after selecting a connection.
If I configure the destination as in the 'error' screenshot, I am not able to select the 'BronzeLakehouse' database directly. It's almost like Fabric has updated something, causing my new pipelines to break.
Any input on this issue would be greatly appreciated.
Thanks
Hi @AdamJennings-78 ,
Thank you for your message. The error you're encountering, UserErrorWriteFailedFileOperation with a NullReferenceException, is likely due to a recent change in the Lakehouse connector flow within Microsoft Fabric.
In the updated UI, you must now explicitly select the Lakehouse after choosing the connection. If the Lakehouse isn't properly initialized, this can result in a null object reference during the Parquet file write.
To resolve this:
1. Ensure the Lakehouse is created in the same workspace and is visible in the dropdown (see the sketch after this list). If not, recreate it and confirm the semantic model sync completes.
2. Avoid reusing pipeline logic from older configurations. Build a fresh pipeline and explicitly select the Lakehouse after choosing the connection.
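If you want to double-check outside the UI that the Lakehouse exists and is visible, here is a minimal sketch against the Fabric REST API's List Lakehouses endpoint (the workspace ID and bearer token are placeholders you would supply yourself):

```python
import requests

# Placeholders: supply your workspace ID and a valid Microsoft Entra bearer
# token (for example, from `az account get-access-token` or an MSAL flow).
WORKSPACE_ID = "<your-workspace-id>"
TOKEN = "<bearer-token>"

# List the lakehouses the connector should be able to see in this workspace.
resp = requests.get(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/lakehouses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for lakehouse in resp.json().get("value", []):
    print(lakehouse["displayName"], lakehouse["id"])
```

If your Lakehouse is missing from this list, the connector cannot bind to it and the write will fail.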
Hope this resolves the issue. Please let us know how it goes.
Regards,
Yugandhar.
Hi Yugandhar,
I created a new Lakehouse in my Workspace and created a fresh pipeline pointing to said Lakehouse, but to no avail, unfortunately. I still receive the same error.
Is there a possibility that the Lakehouse doesn't like certain columns? I am using an HTTP web connection as my source for XML-type content. This was working before any Microsoft update. I may be forced to go through a Dataflow at this stage, but I am hesitant, as it can be quite heavy on the Capacity Units.
Thanks
Hi @AdamJennings-78 ,
Since the error is still appearing and you're working with XML content over HTTP, it's possible the issue is related to schema inference or how some columns are handled during the write process.
1. If the XML has nested or inconsistent structures, it could result in null-only or unsupported column types, leading to a NullReferenceException.
2. Try previewing the data before the destination step to check for columns that are completely null or have unexpected types (a quick way to do this is sketched after this list).
3. As a test, try switching the destination from Tables to Files and writing to a folder path. This can help determine if the issue is with table mapping or schema enforcement.
4. If possible, simplify the XML payload and run a test pipeline to see if specific fields are causing the error.
If the problem continues, you could use Dataflow Gen2 to flatten and clean the XML before writing to Lakehouse. While this may use more Capacity Units, it offers better control over the schema and may help avoid the null reference issue.
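As a quick way to do the preview in step 2, here is a minimal sketch (the URL is a placeholder, and it assumes the payload is flat enough for pandas' built-in XML reader):

```python
import io

import pandas as pd
import requests

# Placeholder endpoint; substitute the real HTTP/XML source.
resp = requests.get("https://example.com/api/data.xml", timeout=30)
resp.raise_for_status()

# pandas.read_xml flattens simple, repeating XML records into a DataFrame;
# deeply nested structures may need an explicit xpath or pre-flattening.
df = pd.read_xml(io.StringIO(resp.text))

# Columns that are entirely NULL are prime suspects for the write failure.
null_only = [col for col in df.columns if df[col].isna().all()]
print("Null-only columns:", null_only)
print(df.dtypes)
```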
-Yugandhar.
Hi @AdamJennings-78 ,
Please try re-authenticating your REST API linked service by updating the credentials or secret and saving the connection again.
If the problem continues after re-authorizing, could you let me know if there have been any recent changes to the REST API response or schema? Sometimes, changes in the API response structure can also cause this kind of error.
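Here is a rough sketch for spotting a response-structure change, assuming the API returns a JSON array of records (the URL and expected field list are placeholders):

```python
import requests

# Placeholders: the real endpoint and the column list from the pipeline mapping.
API_URL = "https://example.com/api/data"
EXPECTED_FIELDS = {"id", "name", "created_at"}

records = requests.get(API_URL, timeout=30).json()
actual_fields = set(records[0].keys()) if records else set()

# Fields that disappeared or appeared since the mapping was built.
print("Missing fields:", EXPECTED_FIELDS - actual_fields)
print("New fields:", actual_fields - EXPECTED_FIELDS)
```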
Thanks for your valuable response @AntoineW .
Regards,
Yugandhar.
Hello @AdamJennings-78,
I came across a very similar case on the Fabric Community where a pipeline started failing with the same 2200 – User configuration issue error.
In that case, the root cause was that the Service Principal secret had recently expired or been rotated. The pipeline was still using the old credentials, so the Copy Data activity started failing.
What fixed it:
They updated the connection in Fabric with the new Service Principal secret (or re-authenticated the REST API linked service).
After re-authorizing, the pipeline ran successfully again.
So I would recommend:
Double-checking whether your REST API credentials, token, or Service Principal secret have changed in the past couple of weeks.
Re-entering the credentials in the pipeline's connection settings and saving.
This is a common root cause when a pipeline suddenly stops working without any schema or table changes on the Lakehouse side.
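A quick way to confirm whether the secret is the culprit is to try acquiring a token with it directly; here is a minimal MSAL sketch (tenant, client ID, and secret are placeholders):

```python
import msal

# Placeholders: fill in your tenant and Service Principal details.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# A failure here (e.g. AADSTS7000222, expired client secret) confirms the
# credential is the problem rather than the pipeline or the Lakehouse.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" in result:
    print("Secret is valid; token acquired.")
else:
    print("Token request failed:", result.get("error"), result.get("error_description"))
```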
Another related thread: https://community.fabric.microsoft.com/t5/Data-Pipeline/Data-Pipeline-Copy-data-Error-User-configura...
Hope it helps!
Best regards,
Antoine