Hi Team,
I have a notebook using PySpark. In it I have written code that fetches the share market value every 5 minutes. Below is my notebook code:
In the snapshot above, I have given the company name as ABC, parameterized that cell, and I am getting the results below in my lakehouse as a table.
Now my question: I need to get data for more companies (e.g. ABC, XYZ, and PQR) into that same table, as rows one after another. That is, every 5 minutes three rows should be added (one row per company).
I was able to get this working with a pipeline using a config table, like below, but I need to achieve it using only the notebook, by passing a list of parameters in the notebook.
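Something along these lines is what I am after. This is only a rough sketch: `fetch_share_value` is a placeholder for my actual price-fetching logic, and `share_values` is a placeholder table name.

```python
from datetime import datetime, timezone

# Placeholder: replace with the actual share-price lookup used in the notebook.
def fetch_share_value(company: str) -> float:
    ...

def build_rows(companies, fetch=fetch_share_value):
    """Build one row per company for a single run.

    Scheduling the notebook every 5 minutes then appends
    len(companies) rows per run to the lakehouse table.
    """
    ts = datetime.now(timezone.utc)
    return [
        {"company": c, "share_value": fetch(c), "fetched_at": ts}
        for c in companies
    ]

# In the notebook, append the rows to the lakehouse table, e.g.:
# spark.createDataFrame(build_rows(["ABC", "XYZ", "PQR"])) \
#      .write.mode("append").saveAsTable("share_values")
```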
Can somebody look into this and help?
TIA
Hi,
If you use a Spark Job definition to schedule your jobs, it's possible to pass multiple command-line arguments to the job. Would this solve your problem?
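For example, a minimal sketch of reading the company list from the job's command-line arguments (the script name `main.py` and the argument format are assumptions, not something from your notebook):

```python
import sys

def parse_companies(argv):
    """Read company symbols from command-line arguments.

    With a Spark Job definition you could pass them as, e.g.:
        main.py ABC XYZ PQR
    """
    companies = [a.strip().upper() for a in argv[1:] if a.strip()]
    if not companies:
        raise SystemExit("usage: main.py COMPANY [COMPANY ...]")
    return companies

# Inside the job script you would then loop over the symbols:
# for company in parse_companies(sys.argv):
#     ...fetch the value and append one row to the table...
```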
Kind Regards,
Dennes
Hi, thank you for the reply.
My issue is not yet resolved. Could you please write that code and paste it here, or share any links or docs? That would be really helpful for me.
Thank you
Hi @sudhav , @DennesTorres ,
This is a duplicate of a similar thread, available at the link: Duplicate Link
Hence I am closing this thread.