Anonymous
Not applicable

Storing a schema in the lakehouse file section

I want to store a table schema independently of the table it came from, and this should happen via a notebook. First problem: there seems to be no way to write from a notebook into the Files section of a lakehouse. Furthermore, once I manually uploaded the schema in JSON format to the lakehouse, I can't load it back into a JSON object; it always ends up as a DataFrame (which is useless to me, since I want a schema again). What can I do?

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @Anonymous ,

 

Direct writes from notebooks to Lakehouse file sections are not supported.

 

There is a workaround: write the JSON schema to a temporary location first, then copy it into the Lakehouse Files section from the same notebook.

 

Here's an example of how I tested this using Python.

 

Use the following code to write the JSON file to a temporary location:

import json
import os
import shutil

# Define an example table schema
schema = {
    "columns": [
        {"name": "id", "type": "integer"},
        {"name": "first_name", "type": "string"},
        {"name": "last_name", "type": "string"},
        {"name": "age", "type": "integer"},
        {"name": "email", "type": "string"},
        {"name": "phone_number", "type": "string"},
        {"name": "address", "type": "string"},
        {"name": "created_at", "type": "timestamp"}
    ]
}

# Write to temporary file
temp_path = "/tmp/schemaTest.json"
with open(temp_path, 'w') as f:
    json.dump(schema, f)

print(f"Schema temporarily saved to {temp_path}")
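As a quick sanity check, you can round-trip the dictionary through `json.dump` and `json.load` to confirm the file reloads as a plain dict rather than a DataFrame. This is a minimal stdlib sketch; `tempfile` stands in for the `/tmp` path above, and in a Fabric notebook the dict would typically come from the table itself (e.g. `json.loads(df.schema.json())` on a Spark DataFrame):

```python
import json
import tempfile

# Example schema dict, same shape as above (shortened for the sketch)
schema = {
    "columns": [
        {"name": "id", "type": "integer"},
        {"name": "email", "type": "string"},
    ]
}

# Round-trip: dump to a temp file, load it back, and compare
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(schema, f)
    temp_path = f.name

with open(temp_path) as f:
    reloaded = json.load(f)

assert reloaded == schema  # nothing is lost in serialization
print("Round-trip OK")
```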

 

Use this code to copy the JSON file under Files. Note that the path used here is the File API path.


# File API path
lakehouse_dir = "/lakehouse/default/Files/"
lakehouse_path = lakehouse_dir + "schemaTest.json"

if not os.path.exists(lakehouse_dir):
    os.makedirs(lakehouse_dir)

# Use shutil.copy to copy files to the target location
shutil.copy(temp_path, lakehouse_path)

print(f"Schema saved to {lakehouse_path}")
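If you store schemas for many tables, the write-then-copy steps above can be wrapped in a small helper. `save_schema` below is a hypothetical name of my own, not a Fabric API; it just repeats the same temp-file-then-`shutil.copy` pattern:

```python
import json
import os
import shutil
import tempfile

def save_schema(schema: dict, target_dir: str, filename: str) -> str:
    """Write `schema` as JSON to a temp file, then copy it into `target_dir`.

    Mirrors the temp-then-copy pattern above; returns the final path.
    """
    os.makedirs(target_dir, exist_ok=True)
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(schema, f)
        temp_path = f.name
    target_path = os.path.join(target_dir, filename)
    shutil.copy(temp_path, target_path)
    os.remove(temp_path)  # clean up the temporary file
    return target_path
```

Called as `save_schema(schema, "/lakehouse/default/Files", "schemaTest.json")`, it reproduces the two cells above in one step.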


 

Use this code to read the JSON file back into a Python object:

import json

# Lakehouse File API path
lakehouse_path = "/lakehouse/default/Files/schemaTest.json"

# Load the file directly into a Python dict -- no DataFrame involved.
# (pd.read_json would hand you back a DataFrame, which is exactly what
# you want to avoid here; the file is a single JSON object, not NDJSON.)
with open(lakehouse_path) as f:
    schema_loaded = json.load(f)

print("Loaded schema:")
print(json.dumps(schema_loaded, indent=4))
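Because `schema_loaded` is a plain dictionary again, you can render it into whatever form you need downstream. As one example, `to_ddl` below (my own helper, with an assumed mapping from the JSON type names used above to Spark SQL type names) turns it into a DDL column list you could feed to a CREATE TABLE statement:

```python
# Assumed mapping from the JSON type names above to Spark SQL type names
TYPE_MAP = {"integer": "INT", "string": "STRING", "timestamp": "TIMESTAMP"}

def to_ddl(schema: dict) -> str:
    """Render a {"columns": [...]} schema dict as a DDL column list."""
    return ", ".join(
        f"{col['name']} {TYPE_MAP[col['type']]}" for col in schema["columns"]
    )

schema_loaded = {
    "columns": [
        {"name": "id", "type": "integer"},
        {"name": "email", "type": "string"},
        {"name": "created_at", "type": "timestamp"},
    ]
}
print(to_ddl(schema_loaded))  # id INT, email STRING, created_at TIMESTAMP
```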


 

If you have any other questions please feel free to contact me.

 

Best Regards,
Yang
Community Support Team

 

If any post helps, please consider Accepting it as the solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
