I suspect I know the answer to this but I'll ask anyway to see if anyone has a good idea.
I want to use Fabric to do my data processing, then I want to store my data and give access to external applications.
I know that I can use the SQL Server ODBC driver to query the data, but sometimes the data comes in formats that are supported in Spark but not by the ODBC driver. Is it possible to get access to the Spark context from outside of Fabric, so that an application can run queries on the cluster the way you would from a notebook?
Hi @Anonymous ,
I'm just following up to ask whether the problem has been solved.
If so, can you accept the correct answer as a solution or share your solution to help other members find it faster?
Thank you very much for your cooperation!
Best Regards,
Yang
Community Support Team
If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
Hi @Anonymous ,
In Fabric, you can do this with the Apache Livy API, which exposes REST endpoints for submitting Spark code to the cluster from outside Fabric.
The setup, configuration, and usage of Livy are described in detail here, which you can refer to:
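To make the idea concrete, here is a minimal sketch of what talking to the Livy endpoint from an external application could look like. The workspace/lakehouse IDs, the session name, and the exact endpoint path (including the API version segment) are assumptions for illustration; check the official documentation for the current URL format, and you will also need a valid Microsoft Entra ID token for the `Authorization` header.

```python
import json
from urllib.request import Request

# Hypothetical IDs -- replace with your own workspace and lakehouse GUIDs.
WORKSPACE_ID = "<workspace-id>"
LAKEHOUSE_ID = "<lakehouse-id>"

def livy_sessions_url(workspace_id: str, lakehouse_id: str) -> str:
    """Assumed shape of the Fabric Livy sessions endpoint for a lakehouse;
    verify the path and API version against the current docs."""
    return (
        "https://api.fabric.microsoft.com/v1"
        f"/workspaces/{workspace_id}/lakehouses/{lakehouse_id}"
        "/livyapi/versions/2023-12-01/sessions"
    )

def create_session_request(token: str) -> Request:
    """Build the POST request that starts a Livy session on the cluster.
    (Only constructs the request; sending it requires a real token.)"""
    body = json.dumps({"name": "external-app-session"}).encode()
    return Request(
        livy_sessions_url(WORKSPACE_ID, LAKEHOUSE_ID),
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # Entra ID access token
            "Content-Type": "application/json",
        },
        method="POST",
    )

def statement_payload(spark_code: str) -> dict:
    """Body for POST .../sessions/{id}/statements -- the code string is
    executed on the Spark cluster, just as a notebook cell would be."""
    return {"code": spark_code, "kind": "spark"}

# Example: a Spark statement an external app might submit.
payload = statement_payload(
    'spark.read.format("delta").load("Tables/mytable").count()'
)
```

Once the session is created, you poll it until it reaches the idle state, then POST statements like the payload above and read their results back as JSON, which sidesteps the ODBC driver's format limitations entirely.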
Best Regards,
Yang
Community Support Team