I suspect I know the answer to this but I'll ask anyway to see if anyone has a good idea.
I want to use Fabric to do my data processing, then I want to store my data and give access to external applications.
I know that I can use the SQL Server ODBC driver to query the data, but sometimes the data comes in formats that Spark supports and the ODBC driver does not. Is it possible to access the Spark context from outside of Fabric, so that an external application can run queries on the cluster the way a notebook does?
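For reference, a minimal sketch of the ODBC route mentioned above. The server name, database name, driver version, and auth mode are placeholders/assumptions; copy the actual SQL connection string from your Fabric item's settings.

```python
# Sketch: querying a Fabric SQL endpoint over ODBC (the approach
# described above). All identifiers here are placeholders.
def odbc_connection_string(server: str, database: str) -> str:
    """Assemble a connection string for the SQL Server ODBC driver,
    using Microsoft Entra interactive auth as one common option."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={server};Database={database};"
        "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
    )

if __name__ == "__main__":
    conn_str = odbc_connection_string(
        "<your-endpoint>.datawarehouse.fabric.microsoft.com", "<lakehouse>")
    # With pyodbc installed, the query itself would look like:
    # import pyodbc
    # with pyodbc.connect(conn_str) as conn:
    #     rows = conn.cursor().execute("SELECT TOP 5 * FROM my_table").fetchall()
    print(conn_str)
```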
Hi @Anonymous ,
I'm just following up to ask whether the problem has been solved.
If so, can you accept the correct answer as a solution or share your solution to help other members find it faster?
Thank you very much for your cooperation!
Best Regards,
Yang
Community Support Team
If any post helped, please consider accepting it as the solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
Hi @Anonymous ,
In Fabric, you can do this with the Apache Livy service.
The installation, configuration, and usage of Livy are described in detail here:
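To make the Livy route concrete, here is a rough sketch of submitting PySpark code to a Fabric Livy session endpoint from an external application. The endpoint URL shape, the 2023-12-01 API version, and the workspace/lakehouse IDs are assumptions taken as placeholders; please verify them and the token-acquisition flow against the official Fabric Livy API documentation.

```python
# Sketch: driving a Fabric Spark cluster from outside Fabric via the
# Livy REST API. Endpoint format and auth details are assumptions --
# check the Fabric Livy API docs before relying on them.
import json

def livy_session_payload(name: str = "external-app-session") -> dict:
    """JSON body for creating a Livy session."""
    return {"name": name, "conf": {"spark.executor.memory": "4g"}}

def livy_statement_payload(code: str) -> dict:
    """JSON body for running a Spark statement (PySpark kind)."""
    return {"code": code, "kind": "pyspark"}

if __name__ == "__main__":
    # Hypothetical IDs -- replace with your own workspace/lakehouse.
    base = ("https://api.fabric.microsoft.com/v1/workspaces/<workspaceId>"
            "/lakehouses/<lakehouseId>/livyapi/versions/2023-12-01")
    # With `requests` and a Microsoft Entra token (e.g. via azure-identity),
    # the calls would look roughly like:
    # s = requests.post(f"{base}/sessions", json=livy_session_payload(),
    #                   headers={"Authorization": f"Bearer {token}"})
    # sid = s.json()["id"]
    # requests.post(f"{base}/sessions/{sid}/statements",
    #               json=livy_statement_payload("spark.range(5).count()"),
    #               headers={"Authorization": f"Bearer {token}"})
    print(json.dumps(livy_statement_payload("spark.range(5).count()")))
```

Once the session is up, you poll the statement resource for its result, which gives external applications access to Spark-only formats without going through the ODBC driver.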
Best Regards,
Yang
Community Support Team