Hi there,
I've built a data pipeline that copies a table from SQL Server into a Parquet file in Azure Blob Storage, and I got the error message below:
ErrorCode=UserErrorInvalidValueInPayload,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to convert the value in 'compressionCodec' property to 'Microsoft.DataTransfer.Richfile.Core.ParquetCompressionCodec' type. Please make sure the payload structure and value are correct.,Source=Microsoft.DataTransfer.DataContracts,''Type=System.InvalidCastException,Message=Null object cannot be converted to a value type.,Source=mscorlib,'
How can I fix it, please?
Cheers,
Kylie
Hi @KylieF,
We would need to do some debugging to understand which values are actually causing the problem.
Please enable fault tolerance with 'Skip incompatible rows'. This will still copy all of the valid rows to the destination.
Also enable logging so that a log is created containing the rows that caused the problem. You can then check individual columns to find any special character or value that is triggering the failure (a JSON sketch of these settings follows below).
In addition to that, you can try mapping the columns manually, specifying the column names and data types. Note that Parquet has some restrictions on column names.
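For reference, here is a rough sketch of what the fault tolerance and logging settings look like in the Copy activity JSON. This is only an illustration: the activity's inputs/outputs and the full source/sink definitions are omitted, and the linked service name and log path are placeholders you would replace with your own.

{
    "name": "CopySqlToParquet",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "SqlServerSource" },
        "sink": { "type": "ParquetSink" },
        "enableSkipIncompatibleRow": true,
        "logSettings": {
            "enableCopyActivityLog": true,
            "copyActivityLogSettings": {
                "logLevel": "Warning",
                "enableReliableLogging": false
            },
            "logLocationSettings": {
                "linkedServiceName": {
                    "referenceName": "AzureBlobStorageLS",
                    "type": "LinkedServiceReference"
                },
                "path": "copyactivitylogs"
            }
        }
    }
}

After the run, the log in the path above lists the skipped rows, which is usually the quickest way to spot the offending column values.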
Hi @KylieF
It looks like your data pipeline is trying to write a Parquet file to Azure Blob Storage, but the compressionCodec property is either missing or set to an invalid value. Here’s how you can fix it:
Set a Valid Compression Codec
Ensure that the compressionCodec property is set to a valid value. The supported values for Parquet compression in Azure Data Factory are none, gzip, and snappy (the default).
Example JSON snippet for the dataset configuration:
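As an illustration only (the dataset name, linked service reference, container and file name below are placeholders), a minimal Parquet sink dataset with the codec set explicitly might look like this:

{
    "name": "ParquetSink",
    "properties": {
        "type": "Parquet",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLS",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "output",
                "fileName": "table.parquet"
            },
            "compressionCodec": "snappy"
        }
    }
}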
Check for Null or Missing Property
The error suggests that compressionCodec is missing or set to null. If you haven't explicitly defined it, set it to "none" (or another supported codec) rather than leaving it blank.
Validate JSON Payload in the Dataset Definition
If you are using Azure Data Factory (ADF) or Synapse Pipelines, go to the Sink dataset (Parquet file) settings and verify that the compression setting is defined.
Check Your Pipeline Parameters
If compressionCodec is set dynamically using pipeline parameters, ensure the parameter value is correctly assigned before execution.
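If you go the parameter route, one way to ensure the value can never arrive as null is to give the dataset parameter a default. A hedged sketch with placeholder names, assuming the usual expression syntax for dynamic content in the dataset JSON:

{
    "name": "ParquetSinkParameterized",
    "properties": {
        "type": "Parquet",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorageLS",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "codec": { "type": "string", "defaultValue": "snappy" }
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "output",
                "fileName": "table.parquet"
            },
            "compressionCodec": {
                "value": "@dataset().codec",
                "type": "Expression"
            }
        }
    }
}

A parameter without a default that is never assigned by the calling activity resolves to null, which matches the "Null object cannot be converted to a value type" part of your error.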
Restart the Pipeline After Fixing
Once you've corrected the issue, publish the changes and restart the pipeline.