Hi team,
I was working on establishing a connection from Oracle EBS to Fabric. I have set up a data gateway, and through it I was able to connect to Oracle using Dataflow Gen2; however, when using the copy activity, not all tables are visible.
I want to establish the connection using Fabric notebooks instead. Can someone provide assistance/guidance on how to establish the connection and get data from Oracle EBS?
Thanks
Solved! Go to Solution.
Hi @Anonymous,
I'd like to suggest you try using JDBC to connect to the data source (upload the driver file to the environment, then you can reference it in the Spark config):
# Import necessary libraries
from pyspark.sql import SparkSession

# Create a Spark session and point the driver classpath at the uploaded JAR
spark = SparkSession.builder \
    .appName("OracleTest") \
    .config("spark.driver.extraClassPath", "/path/to/your/uploaded/ojdbc11.jar") \
    .getOrCreate()

# JDBC connection URL and properties
jdbc_url = "jdbc:oracle:thin:@//host:1521/db"
connection_properties = {
    "user": "<your_username>",
    "password": "<your_password>",
    "driver": "oracle.jdbc.driver.OracleDriver"
}

# The table argument must be a table name or a parenthesized
# subquery with an alias, not a bare SELECT statement
query = "(SELECT * FROM your_table) t"

# Read data from the JDBC source
df = spark.read.jdbc(url=jdbc_url, table=query, properties=connection_properties)

# Show the data
df.show()
python - How to use JDBC source to write and read data in (Py)Spark? - Stack Overflow
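One detail worth noting from the example above: Spark's JDBC reader accepts either a plain table name or a parenthesized subquery with an alias in its `table` argument, and Oracle does not allow the `AS` keyword before a table alias. A minimal helper for building that subquery form can be sketched as follows (the EBS table and column names here are only illustrative assumptions):

```python
def as_jdbc_table(query: str, alias: str = "q") -> str:
    """Wrap a SELECT statement into the subquery form Spark's JDBC
    reader accepts as its `table` argument. Oracle table aliases
    must not use the AS keyword, so the alias follows directly."""
    return f"({query}) {alias}"

# Example: push a column-pruned query down to Oracle instead of
# pulling the whole table (table/column names are illustrative)
dbtable = as_jdbc_table("SELECT segment1, description FROM apps.mtl_system_items_b")
print(dbtable)
# → (SELECT segment1, description FROM apps.mtl_system_items_b) q
```

The resulting string would then be passed as the `table=` argument of `spark.read.jdbc`, so that filtering and column selection happen on the Oracle side rather than in Spark.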
Regards,
Xiaoxin Sheng
Thank you for these insights.