In this video I explain how to dynamically reference the data source read and write locations inside a Fabric PySpark notebook across the deployment pipeline stages, without manually adjusting any variables or parameters. Depending on your needs, you can reference a subset of the production-stage data source in the dev or test stages to reduce the data load. The idea behind this scenario is that the data resides in a single location and is not duplicated across the deployment pipeline workspaces for each stage: we keep one version of the truth and eliminate data silos. This reduces manual steps and deployment risk, so we can deploy between the pipeline stages without having to worry about the code.
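As a minimal sketch of the idea (not the exact code from the video), the notebook can detect which deployment pipeline workspace it is running in and resolve the read location from that, so the same code is promoted unchanged through dev, test, and prod. The workspace names, IDs, and table below are hypothetical placeholders, and I'm assuming mssparkutils.env.getWorkspaceName() is available in the Fabric runtime:

```python
# Minimal sketch, assuming mssparkutils.env.getWorkspaceName() is available
# in the Fabric runtime. Workspace names, IDs, and the table are placeholders.
from notebookutils import mssparkutils

# Hypothetical mapping of deployment pipeline workspaces to stages.
STAGE_BY_WORKSPACE = {
    "Sales [Dev]": "dev",
    "Sales [Test]": "test",
    "Sales": "prod",
}

workspace_name = mssparkutils.env.getWorkspaceName()
stage = STAGE_BY_WORKSPACE.get(workspace_name, "dev")

# The data lives in a single lakehouse; every stage reads the same location,
# so nothing is duplicated across the pipeline workspaces.
source_path = (
    "abfss://<prod-workspace-id>@onelake.dfs.fabric.microsoft.com/"
    "<lakehouse-id>/Tables/sales_orders"
)

# `spark` is predefined in Fabric notebooks.
df = spark.read.format("delta").load(source_path)

# In dev and test, optionally work on a subset to reduce the data load.
if stage != "prod":
    df = df.limit(10_000)
```

With a pattern like this, promoting the notebook through the pipeline requires no parameter changes; only the workspace-to-stage mapping needs to know which workspaces exist.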