Data Ingestion - Connecting to a data source (HTTP)
Here we are connecting to HTTP as a data source, following the steps below:
-->First, create a folder in your workspace so that everything related to this exercise is stored in one place, then open the folder.
-->Create a new lakehouse: Data Engineering homepage > New item > search for "Lakehouse" > create a new Lakehouse with a name of your choice > Create.
-->On the Lake view tab in the pane on the left, in the … menu for the Files node, select New subfolder and create a subfolder named new_data.
-->Create a pipeline > Copy data > Data source > HTTP > Connect data source. Enter the URL you want to ingest from, then set Connection: Create new connection, Connection name: as per your requirement, Data gateway: None, Authentication kind: Anonymous.
-->Select Next and fill in the remaining options as required, for example: Relative URL: leave blank, Request method: GET, Additional headers: leave blank, Binary copy: unselected, Request timeout: leave blank, Max concurrent connections: leave blank.
-->Select Next, wait for the data to be sampled, and then ensure that the following settings are selected: File format: DelimitedText, Column delimiter: Comma (,), Row delimiter: Line feed (\n), First row as header: selected, Compression type: None.
-->Select Preview data to see a sample of the data that will be ingested. Then close the data preview and select Next.
-->Set the following data destination options and select Next: Root folder: Files, Folder path: new_data, File name: xxxxx.csv, Copy behavior: None.
-->Set the following file format options and select Next: File format: DelimitedText, Column delimiter: Comma (,), Row delimiter: Line feed (\n), Add header to file: selected, Compression type: None.
-->On the Copy summary page, review the details of your copy operation and select Save + Run. A new pipeline containing a Copy Data activity is created, and the run starts with a status of Queued/In progress.
-->After a successful run, check the data in the destination folder you selected.
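Conceptually, the Copy Data activity configured above issues an anonymous HTTP GET and writes the delimited response into the Lakehouse Files area. As a rough plain-Python sketch of that flow (this is not the Fabric API; the URL and destination path are placeholders you would substitute yourself):

```python
import csv
import io
import urllib.request

def ingest_csv(url: str, dest_path: str, timeout: float = 30.0) -> int:
    """Fetch a delimited-text file over anonymous HTTP GET and save it,
    mirroring the pipeline settings above: GET request, no extra headers,
    comma delimiter, line-feed rows, first row as header, no compression.
    Returns the number of data rows (excluding the header)."""
    # Anonymous authentication kind: no credentials or headers attached.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        text = resp.read().decode("utf-8")

    # Write the file to the destination, e.g. a path under Files/new_data.
    with open(dest_path, "w", newline="") as f:
        f.write(text)

    # First row as header: DictReader treats row 1 as column names.
    reader = csv.DictReader(io.StringIO(text))
    return sum(1 for _ in reader)
```

In a real run you would pass the same HTTP URL you gave the pipeline and a destination path inside the lakehouse's Files/new_data folder; the pipeline UI does all of this for you, so the sketch is only to show what the source and destination settings mean.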
Solved! Go to Solution.
Hi, @SuryaTejaK
Thanks for sharing on the forum about importing data using an HTTP connection as the data source; this will help a lot of people.
Best Regards,
Yang
Community Support Team
Thank you so much @v-yaningy-msft
I will be very happy if it is helpful to others.
Kindly give it a like if you find it useful.
Regards,
Suryateja K.
Hi, @SuryaTejaK
Of course, the kudos are well deserved. By the way, would you be able to post the steps again as a reply and then accept your reply as the solution? An answered thread is found more easily than an open one, and others will learn more from your sharing.
Best Regards,
Yang
Community Support Team
Hi @v-yaningy-msft, you mean I can post the same content as a reply and accept my own reply as the solution myself, right?
Hi, @SuryaTejaK
Yes, that's what I mean: an answered thread is found more easily than an open one, and others will learn more from your sharing. Thanks for your understanding.
Best Regards,
Yang
Community Support Team