Hi Team,
I have a notebook using PySpark. In it I have written code that fetches the share market value every 5 minutes. Below is my notebook code.
In the above snapshot, I gave the company name as ABC and parameterized that cell, and I am getting the results below in my Lakehouse as a table.
My question: I need to get data for more companies, e.g. ABC, XYZ, and PQR, into that same table as rows, one after another. That means every 5 minutes, 3 rows should be added (one row per company).
I was able to achieve this using a pipeline with a config table, like below, but I need to achieve it using only the notebook, passing the list of companies as a notebook parameter.
Can somebody look into this and help?
TIA
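For reference, the pattern described above (one row per company per 5-minute run, all appended to the same table) can be sketched roughly as follows. `fetch_price`, the column names, and the table name `share_prices` are placeholders for whatever the original notebook actually uses; the Spark write is shown in comments since it only runs inside the Fabric notebook session:

```python
from datetime import datetime, timezone

# Placeholder for the real market-data call in the original notebook.
def fetch_price(company: str) -> float:
    sample = {"ABC": 101.5, "XYZ": 250.0, "PQR": 73.2}  # stand-in values
    return sample[company]

def build_rows(companies):
    """Build one (company, price, timestamp) row per company.
    Scheduled every 5 minutes, each run appends len(companies) rows."""
    ts = datetime.now(timezone.utc).isoformat()
    return [(c, fetch_price(c), ts) for c in companies]

# The notebook parameter: a list of companies instead of a single name.
companies = ["ABC", "XYZ", "PQR"]
rows = build_rows(companies)

# Inside the Fabric notebook, append the rows to the Lakehouse table:
# df = spark.createDataFrame(rows, ["company", "price", "fetched_at"])
# df.write.mode("append").saveAsTable("share_prices")
```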
Hi,
If you use a Spark Job Definition to schedule your jobs, it's possible to pass multiple command line arguments to the job. Would this solve your problem?
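As a rough sketch of that idea (the argument format here is an assumption, not a Fabric-specific API): the company list could be passed as a single comma-separated command line argument, which the script splits before looping over the companies:

```python
import sys

def parse_companies(argv):
    """Split a comma-separated company list passed as the first
    command line argument, e.g. a job invoked as: job.py ABC,XYZ,PQR"""
    if len(argv) < 2:
        return []
    return [c.strip() for c in argv[1].split(",") if c.strip()]

if __name__ == "__main__":
    for company in parse_companies(sys.argv):
        print(company)  # fetch and append this company's row here
```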
Kind Regards,
Dennes
Hi, thank you for the reply.
My issue is not yet resolved. Could you please write that code and paste it here, or share any links or docs? That would be really helpful for me.
Thank you
Hi @sudhav , @DennesTorres ,
This is a duplicate of a similar thread available at the link: Duplicate Link
Hence I am closing this thread.