BI_is_Fun
New Member

Challenges moving Oracle Fusion data into a data warehouse for reporting?

I’m fairly new to working with Oracle Fusion and want to learn from others who have had to integrate it with their data warehouse environment.

If you’ve extracted Fusion data (via OTBI, BIP, APIs, or other methods) into a data warehouse:

What are the biggest problems you’ve faced?

1. On the reporting side (gaps in OTBI/BIP subject areas, slow queries, performance bottlenecks, limited historical data).

2. On the ETL side (data volume/API limits, refresh windows, data quality issues, maintaining incremental loads, schema changes).

I’d love to hear what you’d do differently if you were setting up the pipeline again.

4 REPLIES
v-nmadadi-msft
Community Support

Hi @BI_is_Fun 

May I check if this issue has been resolved? If not, please feel free to contact us with any further questions.


Thank you

v-nmadadi-msft
Community Support

Hi @BI_is_Fun 

I wanted to check if you had the opportunity to review the information provided by @deborshi_nag and @ssrithar. Please feel free to contact us if you have any further questions.


Thank you.

deborshi_nag
Impactful Individual

Hello @BI_is_Fun 

 

The typical mechanisms for extracting data out of Oracle Fusion (ERP/HCM/SCM) are the following:

 - OTBI (ad‑hoc analytics over subject areas)
 - BI Publisher (BIP) (pixel-perfect/extract reports, schedule/burst to SFTP/OCI)
 - REST/SOAP APIs (ERP/HCM/SCM endpoints, UCM for files/attachments)
 - Extracts (HCM Extracts, ERP Extracts)
 - Fusion Analytics Warehouse (FAW) (Oracle’s managed DW/semantic model)
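For the REST API route above, Fusion's REST framework pages results with `limit`/`offset` query parameters and returns an `items` array plus a `hasMore` flag. A minimal sketch of a pagination driver, assuming `session` is an authenticated HTTP client (e.g., a `requests.Session`) and the resource path is illustrative:

```python
def paginate(get_page, page_size=500):
    """Generic limit/offset pagination driver.

    `get_page(limit, offset)` must return a dict shaped like a Fusion
    REST response: {"items": [...], "hasMore": bool}.
    """
    offset = 0
    while True:
        body = get_page(page_size, offset)
        yield from body.get("items", [])
        if not body.get("hasMore"):
            break
        offset += page_size


def fusion_get_page(session, base_url, resource, limit, offset):
    """Fetch one page from a Fusion REST resource.

    `session` is assumed to be an authenticated HTTP client; the
    resource path is an example, not a recommendation.
    """
    resp = session.get(
        base_url + resource,  # e.g. "/fscmRestApi/resources/11.13.18.05/invoices"
        params={"limit": limit, "offset": offset, "onlyData": "true"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```

Keeping the paging loop separate from the HTTP call makes it easy to unit-test the driver with a stub and to swap in retry/backoff logic around `fusion_get_page`.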
 
Regarding the other points you mentioned:
 
Reporting-side (OTBI/BIP) challenges:
- Subject area gaps and missing attributes
What works: Use REST APIs for entities that are absent in subject areas.
- Slow queries & performance governors
What works: In the warehouse, create summary tables (e.g., GL balances by period, BU, CC, product) and leverage Power BI aggregations.
- Limited or inconsistent history
What works: Treat Fusion as operational source, not the system of analytical history.
- Security context bleed-through
What works: Once data lands in your DW/Fabric Lakehouse, re-apply row-level security in the semantic model, not at extraction time.
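On the summary-table point above, a minimal pandas sketch of pre-aggregating GL detail to the grain reports actually query; the column names and grain are illustrative, not the Fusion schema:

```python
import pandas as pd

# Hypothetical GL journal-line detail already landed in the warehouse.
detail = pd.DataFrame({
    "period":        ["2025-01", "2025-01", "2025-02"],
    "business_unit": ["US1", "US1", "US1"],
    "cost_center":   ["100", "200", "100"],
    "amount":        [1500.0, 250.0, 900.0],
})

# Pre-aggregate to the grain most visuals query (period x BU x CC),
# then point Power BI aggregations at this table instead of the detail.
gl_summary = (
    detail.groupby(["period", "business_unit", "cost_center"], as_index=False)
          .agg(total_amount=("amount", "sum"), line_count=("amount", "size"))
)
```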
 
ETL/ELT-side challenges
- Volume, throttling & pagination (APIs)
What works: Partition and parallelize by business key/date windows (e.g., per BU, per period).
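The partition-and-parallelize pattern above can be sketched as follows; `fetch_window` stands in for whatever extract call you use (API page, BIP report, BICC job), and the window size is an assumption to tune against your API limits:

```python
from concurrent.futures import ThreadPoolExecutor
from datetime import date, timedelta

def date_windows(start, end, days=7):
    """Split [start, end) into contiguous, non-overlapping windows."""
    windows, cursor = [], start
    while cursor < end:
        upper = min(cursor + timedelta(days=days), end)
        windows.append((cursor, upper))
        cursor = upper
    return windows

def extract_in_parallel(fetch_window, start, end, days=7, workers=4):
    """One extract call per window keeps each request under API row
    limits, and a throttled or failed call only retries a small slice."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        batches = list(pool.map(lambda w: fetch_window(*w),
                                date_windows(start, end, days)))
    return [row for batch in batches for row in batch]
```

The same idea works with business-unit or ledger keys instead of dates; the point is that each unit of work is small, independent, and individually retryable.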

- Incremental loads & late-arriving data

What works: Maintain CDC watermarks per entity and plan for late-arriving rows (e.g., re-extract a small overlap window and upsert).
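A minimal sketch of per-entity watermarking with an overlap window for late-arriving data; the in-memory dict stands in for a control table in the warehouse, and `last_update_date` is assumed to be the change-tracking column your source exposes:

```python
from datetime import datetime, timedelta

# In production the watermarks live in a control table in the warehouse;
# a dict keeps this sketch self-contained.
watermarks = {}
OVERLAP = timedelta(hours=1)  # re-read a little history for late arrivals

def incremental_load(entity, fetch_since):
    """Pull rows changed since the stored watermark, then advance it.

    `fetch_since(ts)` is assumed to return rows carrying a
    `last_update_date`. Because of the overlap window, the same row can
    arrive twice, so the warehouse load must upsert (MERGE) on the key.
    """
    since = watermarks.get(entity, datetime.min)
    lower = since - OVERLAP if since != datetime.min else since
    rows = fetch_since(lower)
    if rows:
        watermarks[entity] = max(r["last_update_date"] for r in rows)
    return rows
```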

- Schema drift (quarterly updates)

What works: In the DW, stage → bronze → silver → gold (medallion) to isolate breakage and preserve raw captures.
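The bronze/silver split above is what isolates quarterly-update breakage. A tiny sketch of the idea, with illustrative column names: bronze keeps the raw payload verbatim, and silver projects only the agreed columns, so a new field lands harmlessly in bronze and a removed field degrades to a null instead of failing the load:

```python
import json

SILVER_COLUMNS = ["invoice_id", "amount", "status"]  # contract for downstream

def to_bronze(raw_record):
    """Bronze: keep the full payload verbatim so nothing is lost."""
    return {"payload": json.dumps(raw_record)}

def to_silver(bronze_row):
    """Silver: project only the agreed columns. Extra fields from a
    quarterly update are ignored; missing ones surface as None."""
    record = json.loads(bronze_row["payload"])
    return {col: record.get(col) for col in SILVER_COLUMNS}
```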
- Data quality, master/reference data
What works: Implement DQ rules (nulls, referential integrity, allowed value sets) and reconciliation checks
- Attachments & large binaries
What works: Separate metadata vs. binary pipelines.
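The data-quality bullet above can be sketched as a simple gate plus a control-total reconciliation; the field names and rule set are illustrative:

```python
def run_dq_checks(rows, valid_statuses, known_supplier_ids):
    """Minimal data-quality gate: null checks, allowed value sets,
    and referential integrity against a reference table."""
    failures = []
    for r in rows:
        if r.get("invoice_id") is None:
            failures.append((r, "null invoice_id"))
        if r.get("status") not in valid_statuses:
            failures.append((r, f"bad status {r.get('status')!r}"))
        if r.get("supplier_id") not in known_supplier_ids:
            failures.append((r, "unknown supplier_id"))
    return failures

def reconcile(source_total, warehouse_total, tolerance=0.01):
    """Control-total check: extracted vs. loaded amounts must agree."""
    return abs(source_total - warehouse_total) <= tolerance
```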
 
Hope this helps. If it does, please consider leaving a Kudos or accepting this reply as a Solution.
ssrithar
Resolver I
Resolver I

@BI_is_Fun 

 

Beyond the issues already mentioned in your post (reporting gaps, slow queries, API limits, schema changes, etc.), here are additional real-world challenges teams face when extracting Oracle Fusion data into a data warehouse:

 

  • Limited Joinability Across Subject Areas - OTBI and BIP subject areas are siloed
  • Inconsistent Business Definitions - the same KPI or field name may mean different things in different subject areas
  • Frequent Logic Changes in Upstream Systems - Fusion configurations or setups may change
  • Access & Security Model Limitations - OTBI/BIP reports reflect user security roles, which can limit access to full datasets during extraction
  • Lack of True CDC (Change Data Capture)
  • Version Skew Between Environments

What can be done differently

  • Build schema drift detection into the pipeline early on
  • Use REST APIs or BI Cloud Connector (BICC) wherever possible, and reserve OTBI for ad hoc needs or pre-built summaries
  • Design robust watermarking strategies (composite keys, last run logic)
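The schema-drift-detection point above can be as simple as comparing the landed extract's columns against a registered contract; added columns are usually benign, removed ones are breaking. A sketch, with the failure policy as an assumption to adapt:

```python
def detect_drift(expected_columns, landed_columns):
    """Compare a landed extract's columns against the registered contract."""
    expected, landed = set(expected_columns), set(landed_columns)
    return {"added": sorted(landed - expected),
            "removed": sorted(expected - landed)}

def assert_no_breaking_drift(expected_columns, landed_columns):
    """Fail the pipeline on removed columns; log-and-continue on added ones."""
    drift = detect_drift(expected_columns, landed_columns)
    if drift["removed"]:
        raise RuntimeError(f"breaking schema drift: {drift['removed']}")
    return drift
```

Running this as the first step of each load, right after a quarterly Fusion update window, turns silent breakage into a clear, early error.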

 

If this post helps, please consider giving a Kudos or accepting it as a Solution so other members can find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!

 

 

 
