dolphinantonym
Frequent Visitor

Writing JSON/List into Lakehouse's Files

I am collecting a list - AllResults - from a REST API, and trying to store it as a JSON file in my Lakehouse. I can't work out where I'm going wrong - the following command doesn't error, but I don't see anything appear when I refresh the files in my Lakehouse:

 

import json

FilePath = "abfss://MyWorkSpace@onelake.dfs.fabric.microsoft.com/MyLakehouse.Lakehouse/Files/APIResponse.json"

with open(FilePath, "w", encoding="utf-8") as f:
    json.dump(AllResults, f, indent=2)

 

 

1 ACCEPTED SOLUTION
Aala_Ali
Advocate IV

Hi @dolphinantonym 👋

open() can’t write to an abfss://… URL. In Fabric notebooks, either use the Lakehouse File API path that’s mounted into the notebook, or use NotebookUtils (mssparkutils) to write to OneLake.

Option 1 > Use the mounted Lakehouse path (works with plain Python)

Make sure your target Lakehouse is attached as Default (pin icon). Then write to the File API path:

import json, os

out_path = "/lakehouse/default/Files/APIResponse.json" # File API path
os.makedirs("/lakehouse/default/Files", exist_ok=True)

with open(out_path, "w", encoding="utf-8") as f:
    json.dump(AllResults, f, ensure_ascii=False, indent=2)

print("Wrote:", out_path)


Refresh the Files pane and you should see APIResponse.json. (The default Lakehouse mount point is /lakehouse/default. If you only provide a relative path like Files/..., Fabric will also resolve it to the default Lakehouse.)

Option 2 > Use NotebookUtils (mssparkutils)

This is a one-liner that writes text content into OneLake:

from notebookutils import mssparkutils
import json

mssparkutils.fs.put(
    "Files/APIResponse.json",
    json.dumps(AllResults, ensure_ascii=False, indent=2),
    True,  # overwrite=True
)


You can verify with:

mssparkutils.fs.ls("Files")


The fs.put and fs.ls helpers are documented in the NotebookUtils reference on Microsoft Learn.

Why your original code didn’t show a file
open("abfss://…", "w") doesn’t target OneLake; Python’s open() doesn’t understand the ABFSS scheme. Use the mounted File API path (/lakehouse/default/...) or mssparkutils instead.

If your end goal is to query the API data later, consider landing it as a Delta table (instead of a raw JSON file):

df = spark.createDataFrame(AllResults) # list[dict]
df.write.format("delta").mode("append").save("Tables/APIResponse")


That creates/updates a managed Delta table under Tables, which you can then query from the Lakehouse's SQL analytics endpoint (see the Lakehouse documentation on Microsoft Learn).
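
As a quick check, you can also read the table back in the same notebook; a sketch, assuming the Lakehouse is attached as the default so the relative Tables/ path resolves:

# Read the managed Delta table back via its relative path
df_check = spark.read.format("delta").load("Tables/APIResponse")
df_check.show(10)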

 

If this helps, please mark it as Solution and give it a kudos so others can find it too. 🙏

2 REPLIES 2
dolphinantonym
Frequent Visitor
EDIT: I have got this working using your suggested Option 2, but modified it for a plain Python notebook by using notebookutils.fs.put() rather than mssparkutils.fs.put() - thanks!
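
Roughly what that looks like in the plain Python notebook (a sketch - AllResults is the list from the API, and notebookutils is available without any extra install):

import json
import notebookutils

notebookutils.fs.put(
    "Files/APIResponse.json",
    json.dumps(AllResults, ensure_ascii=False, indent=2),
    True,  # overwrite
)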

 

Plain Python is the approach I prefer.

 

Are you aware of ways to update the mounted/pinned Lakehouse in a CI/CD environment? In plain Python I can dynamically construct the abfss://... path, so I can do things like use write_delta() to write to Tables in a branch's Workspace without manually changing which Lakehouse is pinned in that branch, and then changing it again when I merge the Notebook back into my main branch.
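
To illustrate, this is roughly the pattern I use (the workspace and lakehouse names here are placeholders - in practice they come from the branch workspace the notebook is running in):

import polars as pl

# Placeholder names for the branch's workspace and Lakehouse
workspace = "MyFeatureWorkspace"
lakehouse = "MyLakehouse"

table_uri = (
    f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
    f"{lakehouse}.Lakehouse/Tables/APIResponse"
)

# write_delta() appends the API results as a Delta table under Tables/
pl.DataFrame(AllResults).write_delta(table_uri, mode="append")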

 

I'm not aware of an equivalent to the parameter.yml file that works within Workspaces that have been branched out to via Fabric's source control, because there is a new Workspace per branch rather than a permanent Workspace with a known ID for deployed code.
