stan01
Frequent Visitor

Notebook Spark Custom JDBC error converting timestamp

Hi, 

 

I am trying to import data into a lakehouse via a custom JDBC driver (from Infor M3 ERP).

 

I am able to load the table with the same SQL and Python code on Azure Databricks, but running the same code on MS Fabric returns the following error: 'Unrecognized SQL type - name: TIMESTAMP WITH TIME ZONE'. Both environments use Spark 3.5.

 

Is it possible that a Spark setting enabled on Databricks is not enabled in Fabric?

 

I have also been able to query the data by converting the timestamps in the query itself, for example 'SELECT CAST(timestamp AS VARCHAR)'. However, I would then need to declare the schema on import (over 100 columns), and I don't know which columns are datetimes.
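
One way to discover which columns the driver reports as datetimes, without loading any rows, is to inspect the JDBC metadata through Spark's JVM gateway. A rough sketch, assuming the driver jar is visible to the driver JVM and api_key holds the connection string used in the script below:

jvm = spark._jvm
# Make sure the driver class is registered with java.sql.DriverManager
jvm.java.lang.Class.forName("com.infor.idl.jdbc.Driver")
conn = jvm.java.sql.DriverManager.getConnection(api_key)
try:
    stmt = conn.createStatement()
    # WHERE 1 = 0 returns an empty result set, so only metadata is fetched
    rs = stmt.executeQuery("SELECT * FROM FGLEDG WHERE 1 = 0")
    meta = rs.getMetaData()
    for i in range(1, meta.getColumnCount() + 1):
        print(meta.getColumnName(i), meta.getColumnTypeName(i))
finally:
    conn.close()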

 

This is the script I have:

df = (
    spark.read
         .format("jdbc")
         .option("url", api_key)                         # JDBC connection string
         .option("driver", "com.infor.idl.jdbc.Driver")  # Infor custom driver class
         .option("preferTimestampNTZ", True)             # read timestamps without time zone as TimestampNTZ (Spark 3.4+)
         .option("query", "SELECT * FROM FGLEDG LIMIT 10")
         .load()
)
display(df)

 

And I get the following error:

 

Py4JJavaError                             Traceback (most recent call last)
Cell In[10], line 8
      1 df = (
      2     spark.read
      3          .format("jdbc")
      4          .option("url", api_key)
      5          .option("driver", "com.infor.idl.jdbc.Driver")
      7          .option("query", "SELECT * FROM FGLEDG LIMIT 10")
----> 8          .load()
     9 )
     10 display(df)

File /opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py:314, in DataFrameReader.load(self, path, format, schema, **options)
    312     return self._df(self._jreader.load(self._spark._sc._jvm.PythonUtils.toSeq(path)))
    313 else:
--> 314     return self._df(self._jreader.load())

File ~/cluster-env/trident_env/lib/python3.11/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1316 command = proto.CALL_COMMAND_NAME +\
   1317     self.command_header +\
   1318     args_command +\
   1319     proto.END_COMMAND_PART
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:
   1326     if hasattr(temp_arg, "_detach"):

File /opt/spark/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py:179, in capture_sql_exception.<locals>.deco(*a, **kw)
    177 def deco(*a: Any, **kw: Any) -> Any:
    178     try:
--> 179         return f(*a, **kw)
    180     except Py4JJavaError as e:
    181         converted = convert_exception(e.java_exception)

File ~/cluster-env/trident_env/lib/python3.11/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--> 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))

Py4JJavaError: An error occurred while calling o7255.load.
: org.apache.spark.SparkSQLException: [UNRECOGNIZED_SQL_TYPE] Unrecognized SQL type - name: TIMESTAMP WITH TIME ZONE, id: TIMESTAMP_WITH_TIMEZONE.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.unrecognizedSqlTypeError(QueryExecutionErrors.scala:996)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getCatalystType(JdbcUtils.scala:228)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$getSchema$1(JdbcUtils.scala:308)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getSchema(JdbcUtils.scala:308)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.getQueryOutputSchema(JDBCRDD.scala:71)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:58)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:241)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:37)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:346)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:236)
	at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:219)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:219)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.base/java.lang.Thread.run(Thread.java:829)

 

 

1 ACCEPTED SOLUTION
Thomaslleblanc
Super User

It could be that the JDBC driver version in Fabric is different from the one on Databricks. Check the versions. You can load a different one in Fabric if needed, but it would have to be in a custom Spark environment.

 

You could also map the column to a string and convert it later.
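
For example, a minimal sketch of that approach, with a hypothetical timestamp column TSTMP (substitute the real column names; jdbc_url stands for the connection string):

from pyspark.sql.functions import to_timestamp

# Cast the TIMESTAMP WITH TIME ZONE column to VARCHAR in the pushed-down
# query so the driver reports a type Spark can map, then convert it back.
df = (
    spark.read
         .format("jdbc")
         .option("url", jdbc_url)  # placeholder for the actual connection string
         .option("driver", "com.infor.idl.jdbc.Driver")
         .option("query", "SELECT CAST(TSTMP AS VARCHAR) AS TSTMP FROM FGLEDG LIMIT 10")
         .load()
)
# to_timestamp may need an explicit format, depending on how the driver renders the value
df = df.withColumn("TSTMP", to_timestamp("TSTMP"))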


5 REPLIES
v-lgarikapat
Community Support

Hi @stan01 ,

 

Thanks for reaching out to the Microsoft Fabric community forum.

@Thomaslleblanc ,

Thanks for your prompt response.

@stan01 , 

I wanted to follow up and confirm whether you’ve had the opportunity to review the information provided by @Thomaslleblanc. Should you have any questions or require further clarification, please don't hesitate to reach out.

 

We appreciate your engagement and thank you for being an active part of the community.

Best regards,
Lakshmi

 

Hi @stan01 ,

We’d like to confirm whether your issue has been successfully resolved. If you still have any questions or need further assistance, please don’t hesitate to reach out. We’re more than happy to continue supporting you.

We appreciate your engagement and thank you for being an active part of the community.


Best Regards,
Lakshmi.



Hi Thomas,

 

It's a custom JDBC driver that was downloaded, so it's the same version on both platforms, just a different environment. My hypothesis is that some package differences between Fabric and Databricks are causing this.

 

I have kept this workload on Databricks for now. Mapping to a string and converting later also works, but doing it for every table would require too much work in the metadata-driven foreach loop.

 

Thanks anyway!
