RoopanshGupta
New Member

RunTimeTransferContext Error in Fabric Pipeline When Copying Data from Lakehouse to Warehouse

 

CONTEXT:

I have a Fabric pipeline that processes data daily, performing the following steps:

  1. Retrieve data from APIs using Copy Data activities and store it in a Lakehouse table.

  2. Perform data transformations on the Lakehouse data.

  3. Append today’s transformed data from the Lakehouse to a Warehouse table using another Copy Data activity.

 

PROBLEM:

Starting today, the pipeline fails at the step where data is copied from the Lakehouse to the Warehouse, even though:

  • No changes were made to the pipeline or its configuration.

  • The rest of the pipeline works flawlessly (e.g., fetching data from APIs, storing it in the Lakehouse, and transformations).

  • Both the Lakehouse and Warehouse are operational. Queries can be run directly in both.

The error message from the failing activity is as follows:

 

ErrorCode=UserErrorInvalidValueInPayload, 'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException, Message=Failed to convert the value in 'transferContext' property to 'Microsoft.DataTransfer.Runtime.TransferContext' type. Please make sure the payload structure and value are correct., Source=Microsoft.DataTransfer.DataContracts, 'Type=System.InvalidCastException, Message=Object must implement IConvertible., Source=mscorlib,'

 

Here is a screenshot of the actual error:

[Screenshot: RoopanshGupta_0-1747803254893.png]

 

COPY DATA ACTIVITY'S SETUP:

  1. Source: Lakehouse table containing daily data.
  2. Destination: Fabric SQL Warehouse table.
  3. Mapping:

    • The Copy Data activity's column mapping is dynamically accessed using a Lookup activity.

    • The Lookup activity retrieves the mapping data from the Warehouse.

In the Copy Data activity's mapping section, the mapping is specified as:
@json(activity('07_Lookup_ColumnMapping').output.firstRow.ColumnMapping)
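For reference, the string that @json() expects from the Lookup activity should parse into a TabularTranslator object. A quick way to sanity-check the Lookup output outside the pipeline is to parse it yourself (the column names below are made up for illustration, not from the real table):

```python
import json

# Hypothetical ColumnMapping string, shaped the way the Lookup activity
# might return it from the Warehouse (column names are illustrative).
column_mapping = '''
{
  "type": "TabularTranslator",
  "mappings": [
    {"source": {"name": "OrderId"},   "sink": {"name": "OrderId"}},
    {"source": {"name": "OrderDate"}, "sink": {"name": "OrderDate"}}
  ]
}
'''

# If this raises, the Copy Data activity's @json() call would also fail,
# which can surface as an opaque payload/conversion error.
mapping = json.loads(column_mapping)
assert mapping["type"] == "TabularTranslator"
assert all("source" in m and "sink" in m for m in mapping["mappings"])
print(f"{len(mapping['mappings'])} column mappings parsed OK")
```

If the stored mapping parses cleanly like this, the payload itself is probably fine and the error is more likely on the service side.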

 

WHAT WE'VE ALREADY TRIED:

1. Verified Pipeline Configuration: Checked the Copy Data activity settings, including source and sink configurations, column mappings, and incremental load logic.

2. Ensured that no parameters or metadata are causing issues.

3. Validated Data:

  • Reviewed today’s data in the Lakehouse to ensure it matches the expected schema.

  • Confirmed that there are no null values, data type mismatches, or violations of constraints in the Warehouse.

4. Checked the Warehouse and Lakehouse: Both are accessible and operational, and queries can be executed without issues.

5. Re-tested the Copy Data activity: Copying a small subset of data manually between the Lakehouse and Warehouse works, but only when the Copy Data activity is re-created from scratch; a copy of the existing activity fails even when placed in a separate pipeline.

 

REQUEST FOR HELP

  • Has anyone encountered a similar RunTimeTransferContext error?

  • Could this be related to a backend issue or a recent service update?

  • Are there any additional logs or configurations I should check?

 

 

Any insights or suggestions would be greatly appreciated! Thank you! 

 

@v-pratak @v-prasare @eileen_iTalent @burakkaragoz @v-saia @v-satreh @v-sathishse @v-sathmakuri 

@v-kathullac 

1 ACCEPTED SOLUTION
v-shamiliv
Community Support

Hi @RoopanshGupta 
Thank you for reaching out to the Microsoft Fabric community forum.
This is an intermittent issue, and we have not been able to identify the root cause so far. If the issue persists in the future, we recommend raising a high-priority support ticket for immediate assistance.
How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn
If this solution helps, please consider giving us Kudos and accepting it as the solution so it can assist other community members.
Thank you.


5 REPLIES
v-shamiliv
Community Support

Hi @RoopanshGupta 

We are following up once again regarding your query. Could you please confirm if the issue has been resolved through the support ticket with Microsoft?

If the issue has been resolved, we kindly request you to share the resolution or key insights here to help others in the community. If we don’t hear back, we’ll go ahead and close this thread.

Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.

 

Thank you for your understanding and participation.


RoopanshGupta
New Member

Thanks for sharing your experience - it’s reassuring to know we’re not alone in encountering this issue.

For us, this error came out of the blue on 21 May. The pipeline had been working flawlessly for months, but it suddenly started failing, and only on specific tables within this particular pipeline, showing the exact error I described earlier. We spent the entire day trying everything: verifying data types, checking configurations, recreating activities, and so on, but nothing worked for those specific tables.

 

What's frustrating is that, today, on 22 May, everything is mysteriously back to normal, almost as if nothing had happened.

While I’m glad the issue resolved itself, this is not acceptable, especially for productionized pipelines. Such sudden disruptions can have significant downstream impacts, and the lack of visibility into the root cause or any acknowledgment from Microsoft makes it even more concerning. 

 

To add to the strangeness, today (22 May), we encountered a new Fabric-exclusive error during the retry mechanism. Here’s the screenshot of the error:

[Screenshot: RoopanshGupta_0-1747873288981.png]

 

 

Fortunately, our retry mechanism kicked in, and the error didn’t appear on the second attempt. But these intermittent and unexplained errors need to be addressed urgently by Microsoft.

 

At this point, it seems plausible that there are backend changes causing these issues, as nothing on our end has changed. This lack of stability undermines confidence in the platform for production workloads. I’m considering raising a support ticket for this, but seeing how others are already burdened with multiple ongoing tickets, it would be great to have some clarity or acknowledgment from Microsoft on whether they are aware of these issues.

 

 

All the best buddy! Let’s hope this gets resolved for good soon.

 

 

Thank you for sharing!

First of all, I'm happy that your issue seems to have resolved itself, although I fully agree with you that it's not really acceptable.

I'm not sure how long you've been running your production workload on Fabric capacity, but errors like this have not been uncommon for us during the last year (at one point there were backend issues with the Livy session, which left us unable to run our Notebooks at all for three days, for example).

 

 

That being said, the second error you're describing could actually be related to synchronization issues between your Lakehouse artifact and its SQL endpoint.

We have had similar issues when reading data from SQL endpoints; so far the cause has always been metadata inconsistencies when reading the parquet files from storage.

One such scenario, for us, has been if we modify a large number of records in a table in our Silver Lakehouse, and immediately try to read the changes downstream in our Gold Lakehouse (we use shortcuts between the two).

 

It usually solves itself after a couple of retries, as the SQL endpoints do automatically refresh at specific intervals, but we've added a permanent solution by always refreshing our SQL endpoints in a pipeline action (notebook invocation).

It could also be related to soft-delete policies (if you have that enabled) due to the retention period.

Overall the issue is very sporadic and only appeared randomly for us, but we've not experienced it since we implemented our solution.
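The retry approach described above can be sketched generically in Python. This is a minimal illustration of retry-with-backoff, not a Fabric API; the function names and delays are placeholder assumptions:

```python
import time

def with_retries(fn, attempts=4, base_delay=2.0):
    """Retry a flaky read with exponential backoff.

    Useful when a Lakehouse's SQL endpoint lags behind the underlying
    parquet files: transient metadata errors usually clear once the
    endpoint's automatic refresh catches up.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 2s, 4s, 8s, ...

# Example: a read that fails twice before the endpoint is consistent.
calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("metadata inconsistency")
    return "rows"

result = with_retries(flaky_read, base_delay=0.01)
print(result, calls["n"])  # → rows 3
```

In a pipeline, the equivalent is simply the Copy Data activity's own retry count and interval settings, or an explicit refresh step before the read.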


FYI: I just ran our pipeline and can confirm that it works now as well.

Most likely a transient error in Microsoft's backend, and I strongly doubt we'll get an official statement on this.

 

Best of luck to you!

HMSNemanja
Regular Visitor

Experiencing the same issue; it suddenly began on 2025-05-19 despite no changes. It had worked for at least 3 months prior to this without any issues.

Our Data pipeline does similar things to yours, collecting data from Lakehouse / Warehouse endpoints and copying data into an external Azure SQL database.

We're using TabularTranslator to dynamically work with JSON definitions of our tables /columns / keys in both source and sink.

We then iterate a list of tables with different sources (2 lakehouses, 1 warehouse) that all get upserted into the same target destination (Azure SQL).
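The per-table TabularTranslator pattern described above can be sketched roughly as follows (the table and column names here are made up for illustration; in the real pipeline the definitions come from JSON metadata, not hard-coded dicts):

```python
import json

# Hypothetical table definitions standing in for the JSON metadata
# that drives the pipeline's ForEach loop.
tables = {
    "dim_customer": ["CustomerId", "Name"],
    "fact_sales":   ["SaleId", "CustomerId", "Amount"],
}

def build_translator(columns):
    """Build a TabularTranslator mapping where source and sink columns match."""
    return {
        "type": "TabularTranslator",
        "mappings": [
            {"source": {"name": c}, "sink": {"name": c}} for c in columns
        ],
    }

translators = {t: build_translator(cols) for t, cols in tables.items()}
print(json.dumps(translators["fact_sales"], indent=2))
```

Each generated object is what the Copy Data activity's mapping property receives per iteration, regardless of whether the source is a Lakehouse or a Warehouse.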

 

Suddenly we're getting the exact same error message as you've posted.

 

ErrorCode=UserErrorInvalidValueInPayload,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to convert the value in 'transferContext' property to 'Microsoft.DataTransfer.Runtime.TransferContext' type. Please make sure the payload structure and value are correct.,Source=Microsoft.DataTransfer.DataContracts,''Type=System.InvalidCastException,Message=Object must implement IConvertible.,Source=mscorlib,'

 I have also verified that data types of source tables have not changed.

 

However, in our case, the entire flow still works for our Warehouse source tables. It's just our Lakehouse tables, from both Lakehouses, that get this error.

We're still able to extract and upsert data from all warehouse tables.

 

This really feels like backend changes in the platform, or transient errors.

 

I haven't raised any support with Microsoft over this yet (Got 3 other ongoing tickets that are exhausting me).

There is no official Service Status issue registered in the Admin Portal ServiceHealth either.
