March 31 - April 2, 2025, in Las Vegas, Nevada. Use code MSCUST for a $150 discount! Early bird discount ends December 31.
Data Ingestion - Connecting to data source (HTTP)
Here we connect to HTTP as the data source, following the steps below:
-->First, create a folder in your workspace so that all items related to this work are stored in one place, then open the folder.
-->Create a new lakehouse: Data Engineering homepage - New item - search for "Lakehouse" - create a new Lakehouse with a name of your choice - Create.
-->On the Lake view tab in the pane on the left, in the … menu for the Files node, select New subfolder and create a subfolder named new_data.
-->Create a pipeline and choose Copy data. For the data source, select HTTP and connect to it: enter the URL you want to ingest from; Connection: create new connection; Connection name: as per your requirement; Data gateway: None; Authentication kind: Anonymous.
-->Select Next and configure the source as required. For example: Relative URL: leave blank; Request method: GET; Additional headers: leave blank; Binary copy: unselected; Request timeout: leave blank; Max concurrent connections: leave blank.
-->Select Next, wait for the data to be sampled, and then ensure that the following settings are selected:
(File format: DelimitedText; Column delimiter: Comma (,); Row delimiter: Line feed (\n); First row as header: selected; Compression type: None)
-->Select Preview data to see a sample of the data that will be ingested. Then close the data preview and select Next.
-->Set the following data destination options, and then select Next: Root folder: Files; Folder path name: new_data; File name: xxxxx.csv; Copy behavior: None.
-->Set the following file format options, and then select Next: File format: DelimitedText; Column delimiter: Comma (,); Row delimiter: Line feed (\n); Add header to file: selected; Compression type: None.
-->On the Copy summary page, review the details of your copy operation and then select Save + Run. A new pipeline containing a Copy Data activity is created, and the run starts with a status of Queued and then In progress.
-->After a successful run, check the data in the destination folder you selected.
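To make the source and destination settings above concrete, here is a minimal Python sketch of what the Copy Data activity effectively does: an anonymous GET request (no auth headers), the response parsed as comma-delimited text with the first row as header, and the rows written out to a new_data folder with a line-feed row delimiter. The URL and file name below are placeholders, not real endpoints; substitute the source you want to ingest.

```python
import csv
import io
import os
import urllib.request


def fetch_text(url: str, timeout: float = 30.0) -> str:
    """Anonymous authentication kind == a plain GET with no auth headers."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")


def write_delimited(text: str, dest_path: str) -> int:
    """Parse comma-delimited text (first row = header) and write it to
    dest_path with a line-feed row delimiter, keeping the header row.
    Returns the number of data rows copied."""
    rows = list(csv.reader(io.StringIO(text)))
    os.makedirs(os.path.dirname(dest_path) or ".", exist_ok=True)
    with open(dest_path, "w", newline="", encoding="utf-8") as f:
        # lineterminator="\n" mirrors "Row delimiter: Line feed (\n)"
        csv.writer(f, lineterminator="\n").writerows(rows)
    return len(rows) - 1


if __name__ == "__main__":
    # Placeholder URL -- replace with the HTTP source you want to ingest.
    text = fetch_text("https://example.com/data.csv")
    n = write_delimited(text, os.path.join("new_data", "ingested.csv"))
    print(f"Copied {n} data rows")
```

This is only an illustration of the settings, not the pipeline's implementation; the pipeline also handles retries, concurrency, and compression, which this sketch omits.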
Hi, @SuryaTejaK
Thanks for sharing on the forum how to import data using an HTTP connection as the data source; this will help a lot of people.
Best Regards,
Yang
Community Support Team
Thank you so much @v-yaningy-msft
I will be very happy if it is helpful to others.
Kindly give it a like if you find it useful.
Regards,
Suryateja K.
Hi, @SuryaTejaK
Of course, the kudos are well deserved. By the way, would you be able to post the steps again as a reply yourself, and then accept your own reply as the solution? An answered thread is found more easily than an open one, and others will learn more from your sharing.
Best Regards,
Yang
Community Support Team
Hi @v-yaningy-msft, you mean I can post the same content as a reply and then accept my own reply as the solution myself, right?
Hi, @SuryaTejaK
Yes, that's what I mean: an answered thread is found more easily than an open one, and others will learn more from your sharing. Thanks for your understanding.
Best Regards,
Yang
Community Support Team