In this video I explain how to dynamically reference the data source read and write locations inside a Fabric PySpark notebook across the deployment pipeline stages, without manually adjusting any variables or parameters. Depending on your needs, you can reference subsets of the data source from the production stage in the dev or test stages to reduce data load. The idea behind this scenario is that the data resides in a single location and is not duplicated across the deployment pipeline workspaces for each stage: we keep one version of the truth and eliminate data silos. The approach is meant to reduce manual interaction and risk, so we can deploy between the deployment pipeline stages without having to worry about the code.
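As a rough illustration of the pattern, the sketch below resolves the stage at runtime from the workspace the notebook is running in, then always reads from the single production lakehouse, trimming the data volume in dev and test. All GUIDs, the table name, and the exact `notebookutils` context key are placeholders or assumptions; adapt them to your own workspaces.

```python
# Minimal sketch: stage-aware read location in a Fabric PySpark notebook.
# Assumes a Fabric notebook session, where `spark` and `notebookutils`
# are predefined. The context key name may differ by runtime version.

# Hypothetical workspace IDs for each deployment pipeline stage
STAGE_BY_WORKSPACE = {
    "11111111-0000-0000-0000-000000000000": "dev",
    "22222222-0000-0000-0000-000000000000": "test",
    "33333333-0000-0000-0000-000000000000": "prod",
}

# Single source of truth: the data lives only in the prod workspace's lakehouse
PROD_WORKSPACE_ID = "33333333-0000-0000-0000-000000000000"  # placeholder
LAKEHOUSE_ID = "aaaaaaaa-0000-0000-0000-000000000000"        # placeholder
TABLE_PATH = (
    f"abfss://{PROD_WORKSPACE_ID}@onelake.dfs.fabric.microsoft.com/"
    f"{LAKEHOUSE_ID}/Tables/sales"  # 'sales' is a placeholder table name
)

# Detect which stage we are running in; key name is an assumption
current_ws = notebookutils.runtime.context.get("currentWorkspaceId")
stage = STAGE_BY_WORKSPACE.get(current_ws, "dev")

# Same read code in every stage; no parameters to adjust on deployment
df = spark.read.format("delta").load(TABLE_PATH)

# In dev/test, work on a subset to reduce data load
if stage != "prod":
    df = df.limit(10_000)
```

Because the stage is derived from the workspace itself, deploying the notebook through the pipeline requires no manual changes; only the non-production branch reduces the amount of data read.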