Hi Team,
I have a notebook using PySpark.
In it I have written code that fetches the share market value every 5 minutes; my notebook code is in the screenshot below.
In the snap above I gave the company name as ABC, parameterized that cell, and I am getting the results below in my lakehouse as a table.
Now my question: I need to get data for more companies, e.g. the ABC, XYZ, and PQR companies, into that same table as rows one after another, meaning every 5 minutes 3 rows should be added (one row for each company).
I was able to do this with a pipeline using a config table, as shown below, but I need to achieve it using only a notebook, passing the parameter list in the notebook.
Can somebody look into it and help?
TIA
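One notebook-only approach matching the request above is to loop over a parameter list of companies and append one row per company in a single write. A minimal sketch, assuming a hypothetical `fetch_price()` in place of whatever market-data call the notebook already makes, and a hypothetical lakehouse table name `stock_prices`:

```python
from datetime import datetime, timezone

# Parameterized cell: override this list when scheduling the notebook.
companies = ["ABC", "XYZ", "PQR"]

def fetch_price(company: str) -> float:
    """Placeholder for the real market-data call used in the notebook."""
    raise NotImplementedError

def build_rows(companies, fetch=fetch_price):
    """Build one (company, price, fetched_at) tuple per company."""
    ts = datetime.now(timezone.utc)
    return [(c, fetch(c), ts) for c in companies]

# In the Fabric notebook, on each 5-minute scheduled run,
# append all companies' rows in one write:
# df = spark.createDataFrame(build_rows(companies),
#                            ["company", "price", "fetched_at"])
# df.write.mode("append").saveAsTable("stock_prices")
```

With three companies in the list, each run adds three rows to the same table, one per company, which matches the requirement above.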
Hi,
If you use a Spark Job Definition to schedule your jobs, it's possible to pass multiple command-line arguments to the job. Would this solve your problem?
Kind Regards,
Dennes
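To illustrate the suggestion above: a Spark Job Definition can be given command-line arguments, which the job's main file can read via `sys.argv`. A minimal sketch, assuming (as an example convention) that the company list is passed as one comma-separated argument:

```python
import sys

def parse_companies(argv):
    """Expects the company list as the first argument, e.g. 'ABC,XYZ,PQR'."""
    if len(argv) < 2:
        raise SystemExit("usage: main.py <comma-separated-companies>")
    return [c.strip() for c in argv[1].split(",") if c.strip()]

if __name__ == "__main__":
    companies = parse_companies(sys.argv)
    # ...then fetch each company's price and append the rows to the
    # lakehouse table, exactly as in the notebook version.
```

Changing which companies are processed then only requires editing the job's argument string, not the code.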
Hi, thank you for the reply.
My issue is not yet resolved. Could you please write that code and paste it here, or share any links or docs? That would be really helpful for me.
Thank you
Hi @sudhav , @DennesTorres ,
This is a duplicate of the similar thread available at the link: Duplicate Link
Hence I am closing this thread.