In this video I explain how to dynamically reference the data source's read and write location inside a Fabric PySpark notebook across the deployment pipeline stages, without manually adjusting any variables or parameters. Depending on your needs, you can reference subsets of the production data source in the dev or test stages to reduce data load. The idea behind this scenario is that the data resides in a single location and is not duplicated across the deployment pipeline workspaces for each stage: there is one version of the truth and no data silos. The concept reduces manual interaction and risk, so we can deploy between the deployment pipeline stages without having to worry about the code.
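As a minimal sketch of how such dynamic referencing could look, the snippet below detects which pipeline stage the notebook is running in and points every stage at the same production lakehouse via an absolute OneLake path. The workspace IDs, the workspace and lakehouse names, and the "trident.workspace.id" Spark config key are illustrative assumptions, not the exact setup from the video; adapt them to your tenant.

```python
# Hedged sketch: resolve the deployment stage from the current workspace
# and read from a single shared data location, so no data is duplicated
# across the dev/test/prod pipeline workspaces.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Map each deployment pipeline workspace to its stage.
# These GUIDs are placeholders; use your own workspace IDs.
STAGE_BY_WORKSPACE = {
    "11111111-1111-1111-1111-111111111111": "dev",
    "22222222-2222-2222-2222-222222222222": "test",
    "33333333-3333-3333-3333-333333333333": "prod",
}

# Assumption: Fabric exposes the current workspace ID via this Spark config key.
current_workspace = spark.conf.get("trident.workspace.id")
stage = STAGE_BY_WORKSPACE.get(current_workspace, "dev")

# One version of the truth: all stages read the same production table
# through an absolute OneLake path (hypothetical workspace/lakehouse names).
SOURCE = (
    "abfss://SalesProd@onelake.dfs.fabric.microsoft.com/"
    "SalesLakehouse.Lakehouse/Tables/orders"
)

df = spark.read.format("delta").load(SOURCE)

# Optionally reduce data load outside of production.
if stage != "prod":
    df = df.limit(10_000)

df.show(5)
```

Because the stage is derived at runtime, the same notebook can be deployed unchanged through all pipeline stages; only the lookup table above encodes the environment-specific details.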