Fabric pipelines already support array and object parameters without issue, and a Spark notebook within a pipeline can use pipeline parameters for configurable notebook execution. Given that, it would be nice for both to support the same range of data types for their parameters.
This would be a quality-of-life improvement over the current workaround of passing strings that are parsed into JSON objects inside the notebook (see the sketch below). It would also allow, in a more complex workflow, an array to be used cleanly in a ForEach loop in the pipeline and then handled in a separate fashion in a notebook.
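For reference, the workaround today looks roughly like this. This is a minimal sketch only; the parameter name tables_json and the table names are made up for illustration, and the parameter must be declared as a string in the notebook's parameters cell because that is what the Notebook activity can pass.

    # parameters cell of the notebook - the pipeline can only hand over a string
    tables_json = '["sales", "inventory"]'

    # body of the notebook - parse the string back into a Python list
    import json

    tables = json.loads(tables_json)
    for table in tables:
        df = spark.read.table(table)
        # ... per-table processing ...

With native array/object notebook parameters, the json.loads step would disappear and the same pipeline array parameter could drive both a ForEach activity and the notebook directly.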