Hi folks,
I have set up a pipeline to copy a single file from Azure Blob Storage to a new Lakehouse, into a new table created in the process. The setup is straightforward, but I cannot find a way to specify the destination table's column types. Everything defaults to String and there is no apparent way to override this.
Am I missing a step somewhere along the way?
Apologies, a relative noob with the MS stack.
Thanks in advance,
Andrew
Hi,
The copy activity in a pipeline brings the data over as it is in the original files. For a Parquet source, for example, it automatically creates the data type mapping from the file's schema.
If you need to convert the types, you need to use a dataflow. You can use a dataflow for the entire process, or you can drop your file in the "Files" area and use the dataflow to convert from the "Files" area to the "Tables" area.
The pipeline can control the execution, first copying the file and then triggering the dataflow for the data type transformations.
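For the conversion step itself, the dataflow work comes down to a Table.TransformColumnTypes step in the query's Power Query (M) script. Here is a minimal sketch, assuming a CSV file and hypothetical column names; in a Dataflow Gen2 the Source step would be whatever the Get Data wizard generates for your connection to the Lakehouse "Files" area, so the File.Contents call below is only a stand-in:

let
    // Stand-in source step: replace with the connection the dataflow's
    // Get Data wizard generates for the file in the Lakehouse "Files" area
    Source = Csv.Document(File.Contents("C:\landing\orders.csv"), [Delimiter = ",", Encoding = 65001]),
    // Use the first row as column headers
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Declare the destination column types explicitly instead of leaving everything as text
    Typed = Table.TransformColumnTypes(
        Promoted,
        {
            {"OrderId", Int64.Type},
            {"OrderDate", type date},
            {"Amount", type number}
        }
    )
in
    Typed

With the query's output destination set to your Lakehouse table, the types declared here become the table's column types rather than everything landing as String.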
Kind Regards,
Dennes