We are evaluating the Direct Lake option in Microsoft Fabric for a large dataset.
Currently, our data resides in Amazon Redshift: around 4 TB, comprising five years of information. Following the Fabric documentation, we offloaded the data into an S3 bucket and created an S3 shortcut in Fabric on top of the Lakehouse.
The S3 files are in Parquet format. When we tried to create a semantic model, the system would not let us connect to the Parquet files. We need to understand how we can build a semantic model on the S3 files without loading the data into the Lakehouse.
Hi @esabu ,
For guidance on connecting to Parquet files, please refer to:
Connect to Parquet files in dataflows - Microsoft Fabric | Microsoft Learn
Hope that helps.
Best Regards,
Yulia Yan
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.