
Poweraegg

Microsoft Fabric Tutorial - Deployment Pipelines Basics


In this video I explain how to dynamically reference the data source read and write locations inside a Fabric PySpark notebook across the deployment pipeline stages, without manually adjusting any variables or parameters. Depending on your needs, the dev or test stages can reference a subset of the production-stage data source to reduce data load.

The idea behind this scenario is that the data resides in a single location and is not duplicated across the workspaces of the pipeline stages: one version of the truth, no data silos. The goal is to reduce manual interaction and risk, so we can deploy between the pipeline stages without having to worry about the code.
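The pattern described above can be sketched as a small helper that maps the notebook's current workspace to the appropriate OneLake path. This is a minimal illustration, not the exact code from the video: the workspace names, lakehouse name, and folder layout below are hypothetical, and in a real Fabric notebook you would obtain the current workspace name from the notebook runtime context rather than pass it in by hand.

```python
# All stages point at the SAME production lakehouse (one version of truth).
# Workspace names, lakehouse name, and subset folders are hypothetical examples.
PROD_BASE = (
    "abfss://Sales-Prod@onelake.dfs.fabric.microsoft.com"
    "/Sales.Lakehouse/Files"
)

def resolve_source_path(workspace_name: str) -> str:
    """Return the data source path for the current deployment stage.

    Dev and test read a reduced subset of the production data to lower
    the data load; production reads the full dataset. Unknown workspaces
    default to the full path so a new stage still works after deployment.
    """
    stage_paths = {
        "Sales-Dev": f"{PROD_BASE}/sample",   # small subset for development
        "Sales-Test": f"{PROD_BASE}/sample",  # subset for testing
        "Sales-Prod": f"{PROD_BASE}/full",    # full data in production
    }
    return stage_paths.get(workspace_name, f"{PROD_BASE}/full")
```

In the notebook, the resolved path would then be handed to the reader, e.g. `spark.read.parquet(resolve_source_path(current_workspace))`, so the same notebook code runs unchanged in every pipeline stage.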