Hi there, within Fabric I am using a metadata-driven framework that populates multiple tables and writes log files (.txt) recording what each step is doing and where a failure has occurred (if any). The log file works fine up to 100 KB, but once the file reaches 100 KB nothing further is written. How can I increase the file size?
Hi @KrishnaMoola
We wanted to follow up to check if you’ve had an opportunity to review the previous responses. If you require further assistance, please don’t hesitate to let us know.
Hi @KrishnaMoola
Following up to confirm if the earlier responses addressed your query. If not, please share your questions and we’ll assist further.
Hello @KrishnaMoola,
This is expected behavior. In Microsoft Fabric, dbutils.fs.put() has a hard limit of 100 KB per write and is intended only for small files such as configuration or metadata. When the file size exceeds this limit, no further content is written. The limit cannot be increased.
Microsoft documentation
dbutils.fs.put() is designed for small files only:
https://learn.microsoft.com/azure/databricks/dev-tools/databricks-utils#dbutilsfs
Fabric Lakehouse file writing best practices:
https://learn.microsoft.com/fabric/data-engineering/lakehouse-overview
Recommended approach
Use Spark DataFrame or RDD writes (or log to a Lakehouse table) to handle larger or growing log files; these scale without a per-write file-size limit. A sketch is shown below.
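For example, here is a minimal sketch of writing a batch of log lines through a Spark DataFrame instead of dbutils.fs.put. The Files/logs path is a placeholder; adjust it to your own Lakehouse.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Log lines accumulated during the run (illustrative content only).
log_lines = [
    "2026-01-15 10:00:01 INFO  Starting load for dim_customer",
    "2026-01-15 10:00:12 INFO  Loaded 12345 rows",
]

# A Spark text write lands in a folder of part files, so there is no
# single-file size ceiling to hit. The Files/ path is an assumed example.
(spark.createDataFrame([(line,) for line in log_lines], ["value"])
    .coalesce(1)
    .write.mode("append")
    .text("Files/logs/pipeline_run_logs"))
```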
This behavior is not something that can be fixed by increasing a configurable file size limit. It is a limitation of how Microsoft Fabric handles file writes in OneLake, especially for append-style operations on text files.
In Fabric, continuously appending to a single .txt file is not a supported or reliable pattern. While small append operations may work initially, once the file grows beyond a certain size (around 100 KB in this case), further writes can silently fail without throwing an error. This is expected behavior due to the underlying storage and write semantics rather than a user-side configuration issue.
There is no setting in Fabric to increase this limit.
Recommended workarounds are:
Avoid a single growing log file
Instead of appending to one .txt file, generate multiple smaller log files (for example, per pipeline run, per batch, or per timestamp). This aligns with Fabric and OneLake design principles.
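A minimal sketch of the per-run pattern, assuming a Fabric notebook where mssparkutils (or notebookutils) is available; the folder name and run ID format are placeholders:

```python
from datetime import datetime
from notebookutils import mssparkutils  # built into Fabric notebooks

# One file per run keeps every individual file small.
run_id = datetime.utcnow().strftime("%Y%m%d_%H%M%S")
log_path = f"Files/logs/run_{run_id}.txt"  # assumed Lakehouse Files folder

log_text = "\n".join([
    "Step 1: staging load started",
    "Step 1: staging load completed",
])

# overwrite=True is safe because each run writes to a new file name.
mssparkutils.fs.put(log_path, log_text, True)
```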
Use overwrite instead of append
If a single file is required, read the existing content, append new log entries in memory, and overwrite the file entirely. Overwrite operations are more stable than append operations in Fabric.
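If a single file must be kept, one simple variation of this idea (it skips the read step by buffering the whole run's entries in memory) looks like the sketch below, again assuming mssparkutils and a placeholder path:

```python
from notebookutils import mssparkutils  # built into Fabric notebooks

log_path = "Files/logs/pipeline_log.txt"   # assumed single log file
log_entries = []                            # buffered in memory for this run

def log(message: str) -> None:
    """Buffer a log entry; nothing is written to storage yet."""
    log_entries.append(message)

log("Step 1: copy activity started")
log("Step 1: copy activity finished, 0 failures")

# One overwrite at the end of the run (or at checkpoints) replaces many
# small appends. Note that the per-write size limit discussed above would
# still apply to this final put() call if it holds in your environment.
mssparkutils.fs.put(log_path, "\n".join(log_entries), True)
```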
Use a table-based logging approach (best practice)
Store logs in a Lakehouse or Delta table with columns such as:
Timestamp
Pipeline / Process name
Step name
Log level
Message
Run ID
This approach removes file size limitations, supports querying and monitoring, and is the recommended enterprise logging pattern in Fabric.
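A minimal sketch of that table-based pattern, assuming a Lakehouse-attached Fabric notebook; the table name pipeline_log and the write_log helper are illustrative, not an official API:

```python
from datetime import datetime
from pyspark.sql import SparkSession, Row

spark = SparkSession.builder.getOrCreate()

def write_log(process: str, step: str, level: str, message: str, run_id: str) -> None:
    """Append one row to a Lakehouse Delta table (table name is an assumption)."""
    row = Row(
        timestamp=datetime.utcnow(),
        process_name=process,
        step_name=step,
        log_level=level,
        message=message,
        run_id=run_id,
    )
    spark.createDataFrame([row]).write.mode("append").saveAsTable("pipeline_log")

# Example call from inside the metadata-driven framework:
write_log("daily_load", "copy_dim_customer", "INFO", "12345 rows loaded", "run_20260115_1000")

# Logs can then be queried for monitoring, for example:
# spark.sql("SELECT * FROM pipeline_log WHERE log_level = 'ERROR' ORDER BY timestamp DESC")
```

Appending one row per call is simple but creates many small Delta commits; in practice the framework can buffer rows for a run and append them in a single write.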
In summary, Fabric is not designed for application-style, continuously growing text log files. The most stable and scalable solution is to switch to partitioned log files or table-based logging rather than trying to increase the file size of a single .txt log.
I have already done all the development and would now need to make a lot of changes to convert it into a table. It would have been good if there were an option to increase the file size.