ltofanelli
Frequent Visitor

Trying to connect to an Oracle database

I'm trying to connect to an Oracle database from a notebook, but I'm receiving a driver error. With a Dataflow, the connection works.

%load_new_custom_jar {mssparkutils.nbResPath}/builtin/ojdbc11.jar

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("OracleIngest") \
    .config("spark.driver.extraClassPath", "/builtin/ojdbc11.jar") \
    .getOrCreate()

url = "jdbc:oracle:thin:@//host:1521/db"

properties = {
    "user": "user1",
    "password": "password123",
    "driver": "oracle.jdbc.driver.OracleDriver"
}

query = "SELECT * FROM table"
df = spark.read.jdbc(url = url, table = query, properties = properties)

df.show()

This is the error message:

Success to load new jar(s): /synfs/nb_resource/builtin/ojdbc11.jar
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
Cell In[158], line 20
     13 properties = {
     14     "user": "user1",
     15     "password": "password123",
     16     "driver": "oracle.jdbc.driver.OracleDriver"
     17 }
     19 query = "SELECT * FROM table"
---> 20 df = spark.read.jdbc(url = url, table = query, properties = properties)
     22 df.show()

File /opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py:927, in DataFrameReader.jdbc(self, url, table, column, lowerBound, upperBound, numPartitions, predicates, properties)
    925     jpredicates = utils.toJArray(gateway, gateway.jvm.java.lang.String, predicates)
    926     return self._df(self._jreader.jdbc(url, table, jpredicates, jprop))
--> 927 return self._df(self._jreader.jdbc(url, table, jprop))

File ~/cluster-env/trident_env/lib/python3.10/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1316 command = proto.CALL_COMMAND_NAME +\
   1317     self.command_header +\
   1318     args_command +\
   1319     proto.END_COMMAND_PART
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:
   1326     if hasattr(temp_arg, "_detach"):

File /opt/spark/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py:169, in capture_sql_exception.<locals>.deco(*a, **kw)
    167 def deco(*a: Any, **kw: Any) -> Any:
    168     try:
--> 169         return f(*a, **kw)
    170     except Py4JJavaError as e:
    171         converted = convert_exception(e.java_exception)

File ~/cluster-env/trident_env/lib/python3.10/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--> 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))

Py4JJavaError: An error occurred while calling o7687.jdbc.
: java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver
	at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:476)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:594)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:527)
	at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:46)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.$anonfun$driverClass$1(JDBCOptions.scala:103)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.$anonfun$driverClass$1$adapted(JDBCOptions.scala:103)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:103)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:41)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:34)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:346)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:236)
	at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:219)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:219)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
	at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:261)
	at jdk.internal.reflect.GeneratedMethodAccessor198.invoke(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.base/java.lang.Thread.run(Thread.java:829)

1 ACCEPTED SOLUTION
ltofanelli
Frequent Visitor

I solved the problem this way:

Upload the .jar to a folder in the lakehouse.

ltofanelli_1-1710504823886.png

Go to Workspace settings > Data Engineering/Science > Spark settings > Environment and create a default environment.

In the value field, enter the path to the .jar.

ltofanelli_2-1710505034541.png

And don't forget to switch the notebook to that environment.

ltofanelli_3-1710505190819.png
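With the jar attached through the environment, the notebook no longer needs the magic command or the extraClassPath setting. Here is a minimal sketch of the read, assuming the environment above is selected for the notebook; the host, service name, schema, table, and credentials are placeholders, not values from this thread.

from pyspark.sql import SparkSession

# Fabric notebooks already provide a `spark` session; getOrCreate() just reuses it.
spark = SparkSession.builder.getOrCreate()

# Placeholder connection details; replace with your own host, port, and service name.
jdbc_url = "jdbc:oracle:thin:@//myhost.example.com:1521/MYSERVICE"

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "(SELECT * FROM my_schema.my_table) t")  # a subquery passed as dbtable needs an alias
    .option("user", "user1")
    .option("password", "********")
    .option("driver", "oracle.jdbc.OracleDriver")  # current class name; the legacy oracle.jdbc.driver.OracleDriver also resolves
    .load()
)

df.show()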

8 REPLIES

Hello @ltofanelli

Are you using on-premises Oracle? I tried your solution but it is still failing.

Py4JJavaError: An error occurred while calling o4896.load.
: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
	at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:489)
	at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:553)
	at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:254)
	at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
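That SQLRecoverableException usually points to a network problem rather than a driver problem: the Spark session cannot reach the Oracle listener at all, which is common when the database is on-premises and not visible from Fabric. A rough check you can run in a notebook cell, with a placeholder host and port:

import socket

# Placeholder listener address; replace with your own host and port.
host, port = "myhost.example.com", 1521

try:
    # A timeout or refusal here means the cluster cannot reach the listener
    # (firewall / private network / on-premises), independent of the JDBC driver.
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as e:
    print(f"Could not reach {host}:{port}: {e}")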

Hi @ltofanelli

Glad that your query got resolved, and thank you for sharing the details with the community; they can be helpful to others. Much appreciated.

Please continue using the Fabric Community for any help with your queries.

Thanks.

ltofanelli
Frequent Visitor

It isn't possible to bring in the .jar with the magic commands. What's the alternative? Is anyone using a .jar uploaded to the Spark default environment?
ltofanelli_0-1710428133226.png

Manage Apache Spark libraries - Microsoft Fabric | Microsoft Learn

sushiat
Helper I

Hi, I'm running into the same issue. The Oracle database is version 19, and I downloaded the ojdbc driver version 10 (19.22.0.0). The JAR file loads fine, and the Oracle site lists it as compatible with JDK 11, but when trying to connect I'm getting this error:

Py4JJavaError: An error occurred while calling o6139.load. : java.lang.ClassNotFoundException: oracle.jdbc.driver.OracleDriver

Is there a full guide/example somewhere on how to load custom JAR files and add them to Spark's classpath?
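One quick sanity check, sketched here under the assumption that the environment's jars end up on the Spark driver classpath, is to ask the JVM whether it can resolve the driver class at all:

# Ask the Spark driver JVM to resolve the Oracle driver class via the thread
# context class loader. A Py4JJavaError wrapping ClassNotFoundException means
# the jar still isn't visible to Spark.
jvm = spark._jvm
loader = jvm.java.lang.Thread.currentThread().getContextClassLoader()
jvm.java.lang.Class.forName("oracle.jdbc.OracleDriver", True, loader)
print("oracle.jdbc.OracleDriver resolved on the Spark driver")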

v-cboorla-msft
Microsoft Employee

Hi @ltofanelli

Thanks for using Microsoft Fabric Community, and apologies for the inconvenience.

To ensure compatibility and troubleshoot the driver error you're encountering, could you please confirm the version of your Oracle database?

Version compatibility is a potential cause: we can verify the appropriate JDBC driver for your specific Oracle database version. Each Oracle JDBC driver release is compliant with the latest JDK version at the time of release, and some releases support multiple JDK versions. Use the table in the screenshot below to choose the correct JDBC driver for your JDK version.

vcboorlamsft_0-1710392304565.png

For more details, please refer to: What are the Oracle JDBC releases Vs JDK versions?

If the issue still persists, please do let us know. Glad to help.

I hope this information helps.

Thanks.
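To match a jar against that compatibility table, it can also help to confirm which JDK and Spark versions the Fabric runtime is actually running; a small sketch for a notebook cell:

# Print the Java version of the Spark driver JVM and the Spark version so the
# ojdbc jar can be matched against Oracle's JDBC-vs-JDK table.
print(spark._jvm.java.lang.System.getProperty("java.version"))
print(spark.version)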

I tried the ojdbc8.jar and ojdbc10.jar, but the error is the same.

The version is Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0.
