Hi Team,
I have a notebook using PySpark.
In it I have written code that fetches the share market value every 5 minutes. Below is my notebook code.
In the snap above, I gave the company name as ABC, parameterized that cell, and I get the results below in my lakehouse as a table.
Now my question: I need to get data for more companies, e.g. ABC, XYZ, and PQR, into that same table as rows one after another, meaning every 5 minutes 3 rows should be added (one row for each company).
I was able to get this working with a pipeline, as below, using a config table, but I need to achieve it using only the notebook, passing the list of companies as a parameter.
Can somebody look into this?
TIA
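The notebook-only approach the question describes can be sketched as a loop over a parameterized company list, producing one row per company per run. This is a minimal sketch: `fetch_share_price` is a hypothetical stand-in for whatever market-data call the original cell makes, and the table name `share_prices` is an assumption.

```python
# Minimal sketch: parameterize the notebook with a LIST of companies
# instead of a single name, then build one row per company per run.
from datetime import datetime, timezone

# Parameterized cell: pass the whole list as the notebook parameter.
companies = ["ABC", "XYZ", "PQR"]

def fetch_share_price(company):
    # Hypothetical placeholder for the real market-data call
    # in the original notebook.
    dummy_prices = {"ABC": 101.5, "XYZ": 250.0, "PQR": 75.25}
    return dummy_prices.get(company)

def build_rows(companies):
    # One row per company; scheduling the notebook every 5 minutes
    # then appends len(companies) rows per run.
    ts = datetime.now(timezone.utc).isoformat()
    return [
        {"company": c, "price": fetch_share_price(c), "fetched_at": ts}
        for c in companies
    ]

rows = build_rows(companies)
# In Fabric, the rows would then be appended to the lakehouse table, e.g.:
# spark.createDataFrame(rows).write.mode("append").saveAsTable("share_prices")
```

The Spark write is left as a comment because the table and schema come from the original notebook; the key change is simply iterating over the list inside one run.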
Hi,
If you use a Spark Job definition to schedule your jobs, it's possible to pass multiple command line arguments to the job. Would this solve your problem?
Kind Regards,
Dennes
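As a sketch of how a Spark Job definition entry script could receive the company list via command line arguments (the `--companies` flag name is illustrative, not a Fabric requirement):

```python
# Sketch: parse a list of company symbols passed as command line
# arguments to a Spark Job definition entry script.
import argparse

def parse_companies(argv):
    parser = argparse.ArgumentParser(description="Fetch share prices")
    parser.add_argument(
        "--companies",
        nargs="+",           # accept one or more symbols
        required=True,
        help="Company symbols to fetch, e.g. ABC XYZ PQR",
    )
    args = parser.parse_args(argv)
    return args.companies

# The job would be configured with arguments like: --companies ABC XYZ PQR
companies = parse_companies(["--companies", "ABC", "XYZ", "PQR"])
```

The script would then run the same fetch loop over `companies` that the notebook version uses.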
Hi, thank you for the reply.
My issue is not yet resolved. Could you write that code and paste it here, or share any links or docs? That would be really helpful for me.
Thank you
Hi @sudhav , @DennesTorres ,
This thread is a duplicate of the similar one available at the link: Duplicate Link
Hence I am closing this thread.