My company is migrating files from one system to another. One of the requirements for migration is that every machine be scanned for malware. Each of these scans produces a log file. My job is to parse the log files and present the aggregate data.
I've written an M script that parses the files into 3 tables: runTable, runErrorsTable, runDetectionsTable. The latter two tables have a many-to-one relationship with runTable.
And I've created a report that does what the boss needs. But I'm concerned about scalability. There are thousands of these log files, with more created daily. The files range from about 10 KB to ~2 MB.
Since parsing these files is not trivial (they have multiple sections in various formats), does it make more sense to pre-parse them into a set of Dataverse tables and let Power BI use that as a data source? Or should I just let Power BI handle both parsing and presentation?
Can you convert the files into Parquet / Delta Lake format?
Delta Lake table optimization and V-Order - Microsoft Fabric | Microsoft Learn
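A minimal sketch of that idea, assuming the logs sit in a Fabric lakehouse Files folder and the parsing logic is re-expressed in Python inside a notebook (the folder path, the `parse_scan_log` helper, and the column names are all placeholders, not part of the original post). The point is to do the heavy parsing once, write the results as Delta tables, and let the Power BI report read those tables instead of re-parsing thousands of raw log files on every refresh:

```python
# Hypothetical Fabric notebook sketch: pre-parse scan logs into Delta tables once,
# so the report only queries the parsed tables.
import os
from pyspark.sql import Row

LOG_DIR = "/lakehouse/default/Files/scanlogs"   # assumed lakehouse Files path

def parse_scan_log(path):
    """Placeholder for the real multi-section parser: returns one run record
    plus lists of error and detection records for that run."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # ... real parsing of the log's sections goes here ...
    run = Row(file=os.path.basename(path), machine="unknown", status="clean")
    errors, detections = [], []
    return run, errors, detections

runs, errors, detections = [], [], []
for name in os.listdir(LOG_DIR):
    run, errs, dets = parse_scan_log(os.path.join(LOG_DIR, name))
    runs.append(run)
    errors.extend(errs)
    detections.extend(dets)

# Land the aggregate as a Delta table; `spark` is the session Fabric notebooks provide.
spark.createDataFrame(runs).write.format("delta").mode("overwrite").saveAsTable("runTable")
# runErrorsTable and runDetectionsTable would be written the same way once populated.
```

In practice you would process only new files on each run (append mode, or a watermark on file name/date) rather than rewriting everything, which keeps the daily cost proportional to the new logs.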