Hi folks,
I have set up a pipeline to copy a single file from Azure Blob Storage to a new Lakehouse, into a new table created in the process. The setup is straightforward, but I can't find a way to specify the data types of the destination table's columns. Everything defaults to String, with no apparent way to override it.
Am I missing a step somewhere along the way?
Apologies, a relative noob with the MS stack.
Thanks in advance,
Andrew
Hi,
The copy activity in a pipeline brings the data in exactly as it is in the source files. For a Parquet source, for example, it automatically creates the data type mapping from the file's schema.
If you need to convert the types, use a dataflow. You can use a dataflow for the entire process, or you can drop your file into the "Files" area and use the dataflow to convert it from the "Files" area into the "Tables" area.
The pipeline can orchestrate the execution: first copying the file, then triggering the dataflow for the data type transformations.
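If you prefer doing the conversion in code rather than a dataflow, a Fabric notebook can achieve the same Files-to-Tables step with PySpark. This is only a rough sketch, not the dataflow approach described above; the file path and column names (Files/raw/sales.csv, OrderId, OrderDate, Amount) are hypothetical placeholders for your own data.

```python
# Sketch: cast string columns to proper types while loading from the
# lakehouse "Files" area into a managed Delta table ("Tables" area).
# Run inside a Fabric notebook attached to the lakehouse, where the
# `spark` session is already available.
from pyspark.sql import functions as F

# CSV files are read with every column as string by default.
df = spark.read.option("header", True).csv("Files/raw/sales.csv")

# Cast the columns to the desired data types (adjust to your schema).
typed = (
    df.withColumn("OrderId", F.col("OrderId").cast("int"))
      .withColumn("OrderDate", F.col("OrderDate").cast("date"))
      .withColumn("Amount", F.col("Amount").cast("decimal(18,2)"))
)

# Write to the "Tables" area as a Delta table, preserving the cast types.
typed.write.mode("overwrite").format("delta").saveAsTable("sales")
```

A pipeline can call this notebook with a Notebook activity after the copy activity, the same way it would trigger a dataflow.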
Kind Regards,
Dennes