In modern data platforms, building efficient and reliable data pipelines is at the heart of every data engineering workflow. This is where Microsoft Fabric Data Factory comes into play. It provides a powerful, intuitive interface to design, orchestrate, and automate data movement across different systems. With its visual pipeline designer, rich set of activities, and built-in scheduling and monitoring capabilities, Data Factory enables data engineers to create scalable workflows—from simple data ingestion to complex transformations—without heavy coding.
As we move from understanding the interface to building real solutions, the next essential step is preparing a robust data source. In this article, we will create an Azure Data Lake Storage (ADLS) Gen2 account in Azure, upload sample data, and lay the foundation for our upcoming data pipelines.
We’ll begin by setting up an Azure Storage account, where we’ll upload some sample data that will later be used in our data pipeline. I’ll assume you already have access to a Microsoft Azure account. Start by opening a new browser tab and navigating to the Azure portal (portal.azure.com). From there, we’ll proceed to create the storage account within your Azure environment.
To set up the storage account, search for “Storage accounts” in the Azure portal and click Create. You’ll then be prompted to select your subscription and define a new resource group. Next, provide a name for your storage account; it must be globally unique and use 3–24 lowercase letters and numbers. You can leave the region and other settings at their defaults if they suit your needs and proceed by clicking Next. To configure the account as Azure Data Lake Storage Gen2, enable the Hierarchical namespace option on the Advanced tab.
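If you’d rather script this step than click through the portal, the same account can be created with the Azure SDK for Python. Here’s a minimal sketch, assuming the azure-identity and azure-mgmt-storage packages are installed, that the resource group already exists, and that the subscription ID, resource group, and account name below are placeholders you’d replace with your own:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

# Placeholders: substitute your own subscription, resource group, and account name.
SUBSCRIPTION_ID = "<your-subscription-id>"
RESOURCE_GROUP = "fabric-demo-rg"   # hypothetical, must already exist
ACCOUNT_NAME = "fabricdemoadls"     # globally unique, 3-24 lowercase letters/digits

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# begin_create returns a poller; .result() blocks until the deployment finishes,
# which is what the portal's deployment screen otherwise does for you.
poller = client.storage_accounts.begin_create(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_LRS"),
        kind="StorageV2",
        location="eastus",
        is_hns_enabled=True,  # hierarchical namespace = ADLS Gen2
    ),
)
account = poller.result()
print(f"Created {account.name} (HNS enabled: {account.is_hns_enabled})")
```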
After that, continue clicking Next through the remaining steps. On the final Review + create tab, you’ll see a summary of all your selected configurations. Take a moment to review everything, and once you’re satisfied, click Create to proceed.
The deployment may take a few moments to complete.
Once the Azure Storage account is successfully created, navigate to the resource to continue.
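If you created the account through the portal, you can optionally confirm from code that the deployment succeeded and that the hierarchical namespace is enabled. A small sketch, reusing the hypothetical subscription, resource group, and account names from above:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
props = client.storage_accounts.get_properties("fabric-demo-rg", "fabricdemoadls")
print(props.provisioning_state)  # should read Succeeded once the deployment completes
print(props.is_hns_enabled)      # True when the hierarchical namespace is enabled
```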
With the storage account ready, the next step is to create a container. Navigate to Data Lake Storage, add a new container, and give it a name such as building-container (container names must use only lowercase letters, numbers, and hyphens), then click Create.
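The container can also be created programmatically with the azure-storage-file-datalake package. A short sketch, assuming the hypothetical account name from earlier and that your identity holds a data-plane role such as Storage Blob Data Contributor on the account:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical account name from the earlier sketch.
service = DataLakeServiceClient(
    account_url="https://fabricdemoadls.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)

# In ADLS Gen2, a container is a "file system"; names must be lowercase,
# so "building-container" rather than "Building Container".
file_system = service.create_file_system(file_system="building-container")
print(f"Created container: {file_system.file_system_name}")
```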
Within this container, we can now upload our sample data. Simply select the required files—such as orders, products, and customers—and click Upload to add them.
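If you prefer to script the upload as well, here’s a minimal sketch using the same package and the same hypothetical account and container names, assuming local CSV files named orders.csv, products.csv, and customers.csv sit next to the script (adjust the names to whatever your sample files are actually called):

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://fabricdemoadls.dfs.core.windows.net",  # hypothetical account
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("building-container")

# Hypothetical local file names; change them to match your actual sample data.
for name in ["orders.csv", "products.csv", "customers.csv"]:
    with open(name, "rb") as data:
        # upload_data with overwrite=True creates the file or replaces an existing one.
        fs.get_file_client(name).upload_data(data, overwrite=True)
    print(f"Uploaded {name}")
```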
That’s it—your data has been successfully uploaded to the Azure Data Lake Storage account. In the next article, we’ll build a pipeline to read this data from ADLS and load it into a Fabric Lakehouse.