
Fabric pipeline failing - Failure happened on 'Source' side. ErrorCode=UserErrorUnclassifiedError,'T

Hi everyone,


Data analyst here: experienced with Power BI Desktop, but new to pipelines.

I'm going through the Fabric trial and testing it by connecting to our Postgres replica DB. The goal is to query that database multiple times a day and provide users with reports that refresh via a Direct Lake connection.


I managed to set up a few tables, but when working on the most critical one (account invoices and sales) I get the error below, which I think relates to the source data being modified while I was querying it.


Failure happened on 'Source' side. ErrorCode=UserErrorUnclassifiedError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Odbc Operation Failed.,Source=Microsoft.DataTransfer.ClientLibrary.Odbc.OdbcConnector,''Type=System.Data.Odbc.OdbcException,Message=ERROR [40001] [Microsoft][ODBC PostgreSQL Wire Protocol driver][PostgreSQL]ERROR: VERROR; canceling statement due to conflict with recovery(Detail User query might have needed to see row versions that must be removed.; File postgres.c; Line 3143; Routine ProcessInterrupts; ),Source=mspsql27.dll,' 
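For context on what I've found so far: the underlying error (SQLSTATE 40001, "canceling statement due to conflict with recovery") is PostgreSQL canceling a long-running query on a streaming replica because WAL replay from the primary needed to remove row versions the query might still read. Assuming you (or your DBA) can edit the replica's postgresql.conf, a sketch of the commonly suggested mitigations would look like this (the exact values are placeholders, not recommendations for your workload):

```
# postgresql.conf on the replica (standby)

# Option A: let WAL replay wait longer before canceling
# conflicting queries (trades replication lag for query survival).
max_standby_streaming_delay = 300s

# Option B: have the standby report its oldest running query
# to the primary so VACUUM keeps the row versions it needs
# (can cause some table bloat on the primary).
hot_standby_feedback = on
```

Both settings take effect after a configuration reload on the replica.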


Can anyone help me understand how to fix this and get the invoice data into the lakehouse?


On a different note: how can I modify the selected columns to import after the pipeline has been created? Is there a way to change those columns and data types, or do I need to recreate the pipeline altogether?




EDIT: Just to clarify, in case there is a better way to do this.

I'm trying to query a live Postgres DB. I was hoping to set up the pipeline, have the data update automatically, and curate a dataset in the service for power users (which for now will also be me).

We have PPU licenses and a Fabric trial.

I thought the process would be: Pipeline from Postgres to Lakehouse, then a Dataflow from Lakehouse to Lakehouse for ETL, and publish the dataset.


Should I be doing something differently?


Thanks for any assistance.

