Anonymous
Not applicable

Establishing Connection For Oracle EBS from Fabric

Hi team

 

I am working on establishing a connection from Oracle EBS to Fabric. I have set up an Oracle data gateway, and using this gateway I was able to connect to Oracle with Dataflow Gen2; however, when using a copy activity, not all tables are visible.

 

I want to establish the connection using Fabric notebooks. Can someone provide assistance/guidance on how to establish the connection and get data from Oracle EBS?

 

Thanks

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @Anonymous,

I'd suggest trying JDBC to connect to the data source: upload the driver file to the environment, then you can reference it in the Spark config.

# Import necessary libraries
from pyspark.sql import SparkSession

# Create a Spark session and point it at the uploaded JDBC driver
spark = SparkSession.builder \
    .appName("OracleTest") \
    .config("spark.driver.extraClassPath", "/path/to/your/uploaded/ojdbc11.jar") \
    .getOrCreate()

# JDBC connection URL and properties
jdbc_url = "jdbc:oracle:thin:@//host:1521/db"

connection_properties = {
    "user": "<your_username>",
    "password": "<your_password>",
    "driver": "oracle.jdbc.OracleDriver"  # non-deprecated driver class shipped in ojdbc11
}

# The `table` argument must be a table name or a parenthesized subquery with an alias
query = "(SELECT * FROM your_table) t"

# Read data from the JDBC source into a DataFrame
df = spark.read.jdbc(url=jdbc_url, table=query, properties=connection_properties)

# Show the data
df.show()

python - How to use JDBC source to write and read data in (Py)Spark? - Stack Overflow
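As a small standalone sketch (the host, port, and service names here are illustrative placeholders, not real EBS values), the thin-driver URL format and the subquery-wrapping rule used above can be captured as two helpers:

```python
def oracle_thin_url(host: str, port: int, service: str) -> str:
    """Build an Oracle thin-driver JDBC URL in service-name form."""
    return f"jdbc:oracle:thin:@//{host}:{port}/{service}"


def as_jdbc_table(query: str, alias: str = "q") -> str:
    """Wrap a SELECT statement so it is accepted as the `table` argument
    of spark.read.jdbc; Oracle expects the alias without the AS keyword."""
    return f"({query.rstrip(';')}) {alias}"


# Example (placeholder values):
# oracle_thin_url("ebs-host", 1521, "EBSDB")
#   -> "jdbc:oracle:thin:@//ebs-host:1521/EBSDB"
# as_jdbc_table("SELECT * FROM your_table;")
#   -> "(SELECT * FROM your_table) q"
```

This keeps the connection string and the query wrapping consistent if you read several EBS tables from the same notebook.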

Regards,

Xiaoxin Sheng


2 REPLIES 2
Someshn
Regular Visitor

Thank you for these insights.

