Hello Community,
I'm building a Metadata-Driven Ingestion Framework using Notebooks in Microsoft Fabric, and I'm looking for guidance on handling a non-ideal file structure in ADLS.
We are ingesting data from an ADLS source, where a shortcut has been created in the Lakehouse. The objective is to populate the Bronze layer with three Delta tables, each corresponding to a specific category:
BB_End_User
End_User
MonthlyMobileActiveUsers
The challenge is that all source files are placed together in a single folder, without any subfolder organization. The files follow a naming pattern like:
BB_End_User_YYYYMM
End_User_YYYYMM
MobileActiveUsers_YYYYMM
Ideally, these files would be stored in category-specific folders, but restructuring is not an option at this time.
What we're trying to achieve:
Read each file and write it into its corresponding Delta table in the Bronze layer, based on the file name.
Use a metadata/config table to drive the ingestion logic for scalability.
Support both full and incremental loads, based on configuration.
Build a semantic model on top of the Bronze layer for reporting.
I’m looking for suggestions or best practices on:
Filtering and routing files by category when they are all in a single folder
Structuring the metadata/config table for flexible ingestion
Implementing full and incremental loads effectively in Fabric notebooks
If you've dealt with a similar scenario, I'd really appreciate your insights. Thank you.
Hi @Subhashsiva ,
1. Design a metadata/config table with fields like: category name, file-name prefix/pattern, target Bronze table name, load type (full or incremental), last-loaded watermark (YYYYMM), and an active flag.
This enables dynamic pipeline orchestration and tracking per category.
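As a minimal sketch of what such a config table could look like when created from a Fabric notebook (the table name `ingestion_config`, the column names, and the watermark values are all illustrative placeholders, not a prescribed schema):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already available as `spark` in a Fabric notebook

# Illustrative config rows: one per category, keyed on the file-name prefix.
config_rows = [
    # (category, file_prefix, target_table, load_type, watermark_yyyymm, is_active)
    ("BB_End_User", "BB_End_User_", "bronze_bb_end_user", "incremental", "202401", True),
    ("End_User", "End_User_", "bronze_end_user", "incremental", "202401", True),
    ("MonthlyMobileActiveUsers", "MobileActiveUsers_", "bronze_monthly_mobile_active_users", "full", None, True),
]

config_df = spark.createDataFrame(
    config_rows,
    "category string, file_prefix string, target_table string, "
    "load_type string, watermark_yyyymm string, is_active boolean",
)

# Persist the config as a Delta table so both pipelines and notebooks can read it.
config_df.write.format("delta").mode("overwrite").saveAsTable("ingestion_config")
```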
2. Use the metadata table to determine the load type: for a full load, overwrite the target Delta table; for an incremental load, pick up only the file periods (the YYYYMM suffix) newer than the stored watermark, append or merge them, and then update the watermark.
In Fabric, notebook execution can be triggered via a Data Pipeline Notebook activity, a schedule, or another notebook (notebookutils.notebook.run).
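A rough sketch of the routing and load logic is shown below. It assumes the shortcut surfaces the files under a Lakehouse Files path (`Files/landing` is a placeholder) and that the files are delimited text with a header row; adjust the path, reader, and merge logic to your actual source format.

```python
import re
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # `spark` is predefined in Fabric notebooks

LANDING_PATH = "Files/landing"  # placeholder path for the ADLS shortcut


def period_of(file_name: str) -> str:
    """Extract the YYYYMM suffix from a name, e.g. BB_End_User_202401 -> 202401."""
    match = re.search(r"(\d{6})", file_name)
    return match.group(1) if match else ""


# notebookutils is built into Fabric notebooks (no import needed).
for row in spark.table("ingestion_config").where("is_active = true").collect():
    # 1. Route: keep only files whose name starts with this category's prefix.
    files = [f for f in notebookutils.fs.ls(LANDING_PATH)
             if f.name.startswith(row["file_prefix"])]

    # 2. Incremental: keep only periods newer than the stored watermark.
    if row["load_type"] == "incremental" and row["watermark_yyyymm"]:
        files = [f for f in files if period_of(f.name) > row["watermark_yyyymm"]]

    if not files:
        continue  # nothing new for this category

    # Assumes CSV with a header row; swap the reader for your real file format.
    df = (spark.read.option("header", True)
          .csv([f.path for f in files])
          .withColumn("_source_file", F.input_file_name()))

    # 3. Write to Bronze: overwrite for full loads, append for incremental.
    mode = "overwrite" if row["load_type"] == "full" else "append"
    df.write.format("delta").mode(mode).saveAsTable(row["target_table"])
```

After a successful incremental run you would also update `watermark_yyyymm` in the config table (for example with a Delta MERGE) so the next run only picks up newer periods.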
3. Notebook Execution and Orchestration
Use Fabric Data Pipelines to orchestrate notebook execution: a Lookup activity reads the config table and a ForEach activity runs a Notebook activity once per active category, passing the config row in as parameters.
Refer to: Metadata Driven Pipelines for Microsoft Fabric
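On the notebook side, the pipeline's parameters land in the notebook's parameters cell. A small sketch of what that worker notebook could look like; the default values and the `ingestion_config` lookup are placeholders for your own naming:

```python
# --- Parameters cell (tag this cell as a "Parameters" cell in the Fabric UI) ---
# A pipeline Notebook activity inside a ForEach overrides these defaults
# with the values from each config row.
category = "BB_End_User"       # default used for interactive runs
load_type = "incremental"      # "full" or "incremental"

# --- Next cell: look up the rest of this category's configuration ---
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

cfg = spark.table("ingestion_config").where(F.col("category") == category).first()
target_table = cfg["target_table"]
watermark = cfg["watermark_yyyymm"]
```

If you prefer to stay entirely in notebooks instead of a pipeline ForEach, a thin driver notebook can loop over the config and call this worker with `notebookutils.notebook.run("Ingest_Category", 600, {"category": row["category"], "load_type": row["load_type"]})`, where "Ingest_Category" is a hypothetical worker notebook name.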
Hope this helps!
Hi @Subhashsiva ,
Since we didn't hear back, we will be closing this thread.
If you need any assistance, feel free to reach out by creating a new post.
Thank you for using the Microsoft Community Forum.
Hi @Subhashsiva ,
Just wanted to check whether you had a chance to review the suggestions provided and whether they helped you resolve your query.
If the answer has helped you resolve your query, please "Accept it as Solution" so that other members can also benefit from it.
Thank You