Hello,
Could you please help me understand how to use PySQL in a Fabric notebook?
I tried to find something in the Microsoft Fabric documentation, but with no result.
Thank you.
Solved! Go to Solution.
Hi. You can install libraries for notebooks. You can do something like this: https://blog.ladataweb.com.ar/post/796774242908766208/fabric-entornos-y-librerías-de-código-para-tus
However, installing a library doesn't mean you can do everything you want with it. If you are looking to connect your Fabric notebook to a MySQL database, that's not possible. To work with MySQL, you should use pipelines or dataflows to move the data into the lakehouse first. An alternative is to build a mirror of MySQL using open mirroring; I think I have seen users in the community saying they managed to make it work.
I hope that helps,
Happy to help!
Thank you for the answer. I have the SQL endpoint data source in my lakehouse; I need to retrieve the data and transform it, all using PySQL in my notebook.
Hi @SoufianeYazane,
If your data is sitting in a lakehouse, I recommend using PySpark instead of PySQL. PySpark natively interacts with the underlying Delta tables, which is faster and more efficient than going through the SQL endpoint.
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.
Hello, the customer requires us to use PySQL. That's why I'm looking to understand how to use it, and the main question is whether it's compatible with our case.
Thank you!
Well, I strongly agree with @tayloramy. The best-performing and most natural way to query the data is Spark: you query the lakehouse natively with PySpark notebooks. You can install Python libraries as I showed you before, but that doesn't mean you can use them for everything. If PySQL is like any other SQL library, you probably need to create a connection using the server and database names of the SQL endpoint that Fabric creates automatically for the lakehouse.
On the other hand, I looked for PySQL and it appears to be a very old, unmaintained library for Oracle and MySQL. Are you sure that is the library they requested (pip install pysql)?
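To illustrate the connect/cursor/execute pattern I mean: most SQL libraries for Python follow the DB-API style below. This sketch uses the standard-library sqlite3 module purely as a stand-in so it runs anywhere; with the real lakehouse SQL endpoint you would instead pass your endpoint's server and database names to whatever driver the library uses, and the table and column names here are made-up examples.

```python
import sqlite3

# sqlite3 is only a stand-in: any DB-API style library (which PySQL would be,
# if it is like other SQL libraries) uses this same connect/cursor/execute
# pattern against a server and database name.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Example table standing in for a lakehouse table exposed via the SQL endpoint.
cur.execute("CREATE TABLE sales (category TEXT, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)", [("a", 1), ("b", 2)])

# Run a query and fetch the results as plain Python tuples.
cur.execute(
    "SELECT category, SUM(amount) FROM sales GROUP BY category ORDER BY category"
)
rows = cur.fetchall()
print(rows)  # [('a', 1), ('b', 2)]
conn.close()
```

Whether PySQL itself can authenticate against the Fabric SQL endpoint is exactly the open question; the pattern above is only what a generic SQL library would expect.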
Regards
Happy to help!
Thank you guys. Finally we will use Spark SQL.