How to Build and Orchestrate a Scalable Data Pipeline in Microsoft Fabric: A Step-by-Step Guide
In today’s data-driven world, organizations handle massive volumes of information from multiple sources. The challenge is not just storing this data but ensuring it is organized, transformed, and analytics-ready—without manual intervention.
Public parameters are two small words that substantially boost the versatility and usability of Dataflow Gen2 in Microsoft Fabric for your data orchestrations.
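As a sketch of what public parameters enable, a pipeline's Dataflow activity can pass values into parameters the Dataflow Gen2 author has exposed. The JSON below is illustrative only, not the exact Fabric activity schema: the activity name, the `parameters` property, and the parameter names `RegionFilter` and `LoadDate` are all assumptions for this example.

```json
{
  "name": "Refresh sales dataflow",
  "type": "RefreshDataflow",
  "typeProperties": {
    "workspaceId": "<workspace-guid>",
    "dataflowId": "<dataflow-guid>",
    "parameters": {
      "RegionFilter": "Europe",
      "LoadDate": "@formatDateTime(utcNow(), 'yyyy-MM-dd')"
    }
  }
}
```

The idea is that one dataflow definition can be reused across many pipeline runs, with each run supplying its own parameter values instead of duplicating the dataflow.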
In this blog post, we'll explore how you can start using Fabric Data Pipelines as a Power BI user who wants to take full advantage of Microsoft Fabric. And if you have never used Power BI before but still want to start with pipelines, you are in the right place!
In this blog we discuss how to use the expression language to safely reference a field that may or may not exist at runtime, in other words a potentially non-existent property.
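For instance, the pipeline expression language's `?` operator dereferences a property without failing when it is absent, returning null instead, and `coalesce()` then substitutes a fallback. A minimal sketch, assuming a Lookup activity named `Lookup config` whose first row may or may not contain a field called `optionalSetting` (both names are hypothetical):

```
@coalesce(activity('Lookup config').output?.firstRow?['optionalSetting'], 'defaultValue')
```

Without the `?` after each segment, the expression would throw an error at runtime whenever `firstRow` or `optionalSetting` is missing; with it, the whole expression quietly evaluates to `'defaultValue'`.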
This short blog details a common scenario we saw in Azure Data Factory where we wanted to ignore zero-byte (empty) files landing in our storage accounts. In this blog we show you how to achieve this functionality with data pipeline storage event triggers (preview), and we provide references to the properties and schemas of the Event Grid topics so you can specify the filters you need.
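For context, a `Microsoft.Storage.BlobCreated` event delivered through Event Grid carries the blob size in `data.contentLength`, so a filter along the lines of `data.contentLength NumberGreaterThan 0` skips zero-byte files. A trimmed example payload (the account, container, and blob names are illustrative):

```json
{
  "eventType": "Microsoft.Storage.BlobCreated",
  "subject": "/blobServices/default/containers/landing/blobs/orders.csv",
  "data": {
    "api": "PutBlob",
    "contentType": "text/csv",
    "contentLength": 0,
    "blobType": "BlockBlob",
    "url": "https://mystorageaccount.blob.core.windows.net/landing/orders.csv"
  }
}
```

An event like this one, with `contentLength` of 0, would be filtered out and never start the pipeline.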
For the last few days, I have been working with the Contoso Sales data to create a Power BI report as part of my learning. Currently, I am using the default ready-to-go Power BI data model provided by Microsoft, which can be found here. Since Microsoft Fabric is the new tech buzz, I thought: why don't I get this data into the Fabric environment somehow?