shivani111
Frequent Visitor

call/import a notebook into another notebook in fabric

I have 5 custom Python library files and created a notebook that calls a function. The function definition is in a separate helper custom .py file; similarly, other functions inside these custom files call functions from the other custom files.

 

Tried: %run ./custom_file_name

It is not working.

What other alternative ways can I try?

I am also facing this error: "InvalidHttpRequestToLivy: from cannot be less than 0 HTTP status code: 400." I did not find much about this specific error. If anyone has any idea, please share.

1 ACCEPTED SOLUTION

Hi @shivani111 ,

 

You're right — %run doesn't work as expected in Microsoft Fabric notebooks, especially when you're working in a Spark environment. This is due to how Fabric handles notebook execution behind the scenes (via Livy), and %run isn't fully supported in that context.

If you're trying to reuse functions from other .py files, here’s a more reliable approach:

Use sys.path and import

  1. Upload your .py files to the Lakehouse or Workspace Files (e.g., under a folder like /Files/code).
  2. In your notebook, add the path and import the module:
import sys
sys.path.append('/lakehouse/default/Files/code')  # adjust path if needed

import my_utils  # assuming the file is my_utils.py
my_utils.my_function()

Make sure:

  • The file is a plain .py file (not a notebook).
  • The function is defined properly with def.
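
For reference, a minimal sketch of what such a plain .py file might contain (my_utils.py and my_function are just the placeholder names used above):

# my_utils.py -- a plain Python module, not a notebook
def my_function():
    print("hello from my_utils")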

Nested Imports

If my_utils.py imports another helper file, make sure both are in the same folder and that folder is added to sys.path.
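
A minimal sketch of that layout, using hypothetical file names (extra_utils.py is a placeholder, not one of the actual helper files from this thread):

# /lakehouse/default/Files/code/extra_utils.py
def add_one(x):
    return x + 1

# /lakehouse/default/Files/code/my_utils.py
# An absolute import of the sibling module works because the folder
# containing both files was appended to sys.path in the notebook.
from extra_utils import add_one

def my_function():
    return add_one(41)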


About the Livy Error

The error:

InvalidHttpRequestToLivy: from cannot be less than 0

usually means the notebook tried to send a malformed or unsupported command to the Spark backend. This often happens with %run, %load, or other magic commands that aren't fully supported in Fabric.


Alternative: Pipelines or Modular Design

If you're trying to chain notebooks (not just import .py files), consider:

  • Moving shared logic into .py files and importing them as shown above.
  • Using Data Factory Pipelines to orchestrate multiple notebooks.

Let me know if you need help structuring the files or debugging the import — happy to help.

If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.


10 REPLIES
v-pnaroju-msft
Community Support

Hi @shivani111,

Thank you for sharing your insights and approach in resolving the issue.

We kindly request you to mark your response as the accepted solution, as this will help other community members find answers to similar challenges more efficiently.
Please continue leveraging the Fabric Community for any further assistance with your queries.

Should you have any further queries, kindly feel free to contact the Microsoft Fabric community.

Thank you.

v-pnaroju-msft
Community Support

Thank you, @burakkaragoz, for your response.

Hi shivani111,

We would like to check if the solution provided by @burakkaragoz has resolved your issue. If you have found an alternative approach, we encourage you to share it with the community to assist others facing similar challenges.
If you found the response helpful, please mark it as the accepted solution and add kudos. This recognition benefits other members seeking solutions to similar queries.

Thank you.

Thanks for the help. The issue is resolved.

I changed my approach.

1. Uploaded the custom Python library files to the notebook's built-in resources instead of the Lakehouse, since files there open in an editable view. Imported them in the notebook using: from builtin import helper

2. If a custom file calls a function from another custom file, the same command works: from builtin import helper1

3. The logic uses the pydlm Python library, which by default prints logs like:

INFO:pydlm:Forward filtering completed.

INFO:pydlm:Starting backward smoothing...

INFO:pydlm:Backward smoothing completed.

After some time the process crashes and throws the error: "InvalidHttpRequestToLivy: from cannot be less than 0 HTTP status code: 400." The error is caused by these logs being printed below the cell.

Workaround: redirect all the logs to a file in the Lakehouse.

 

import os
import logging

directory_path = '/lakehouse/default/Files'
log_file = 'abc.log'
log_file_path = os.path.join(directory_path, log_file)

# Ensure the directory exists
if not os.path.exists(directory_path):
    print(f"The specified path does not exist: {directory_path}")
    os.makedirs(directory_path)
    print(f"The directory has been created: {directory_path}")

# Clean up existing handlers to prevent duplicate logs on re-runs
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

# Test writing to the file using 'with open'
try:
    with open(log_file_path, 'w') as f:
        f.write("This log file is created for logging testing.\n")
    print(f"Successfully wrote to {log_file_path}")
except Exception as e:
    print(f"Failed to write to {log_file_path}: {e}")
    raise

# Set up logging so INFO and above go to the specified file
logging.basicConfig(filename=log_file_path,
                    format='%(levelname)s - %(asctime)s - %(name)s - %(message)s',
                    filemode='a',  # use 'a' to append to the file
                    level=logging.INFO)
logger = logging.getLogger(__name__)

# Suppress pydlm INFO logs and show only warnings or errors
logging.getLogger('pydlm').setLevel(logging.WARNING)  # not working
 
**The error message I faced was completely unrelated to its actual cause.
The only remaining issue is that I still can't suppress the pydlm INFO logs. Any idea?
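
One possibility (an assumption, since pydlm's internal logger names aren't shown in this thread) is that the library registers per-module loggers such as pydlm.dlm, so setting the level on the plain 'pydlm' logger alone isn't enough. A minimal sketch of a broader suppression attempt:

import logging

# Raise the level on every already-registered logger whose name starts
# with 'pydlm', in case the library uses per-module loggers.
for name in list(logging.Logger.manager.loggerDict):
    if name.startswith('pydlm'):
        logging.getLogger(name).setLevel(logging.WARNING)

# If pydlm attaches its own handler, stopping propagation also keeps its
# records out of the handlers configured on the root logger.
logging.getLogger('pydlm').propagate = False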

 

 


How can I reference the Python file if I have it in my workspace? In the example you used the path to a Lakehouse file folder, but I am interested in your second suggestion about using workspace files.

 

 

  1. Upload your .py files to the Lakehouse or Workspace Files (e.g., under a folder like /Files/code).
  2. In your notebook, add the path and import the module:
import sys
sys.path.append('/lakehouse/default/Files/code')  # adjust path if needed

import my_utils  # assuming the file is my_utils.py
my_utils.my_function()

 

burakkaragoz
Community Champion

Hi @shivani111 ,

 

You're on the right track trying to modularize your code using helper .py files, but there are a few important considerations when working within Microsoft Fabric notebooks, especially when using Spark (via Livy) as the backend.

🔧 Why %run Might Not Work

The %run magic command is typically used in environments like Databricks or Jupyter, but in Fabric notebooks, especially when using Spark, it may not behave as expected due to how the execution context is managed.

Recommended Alternatives

1. Use import with Workspace Files

If your .py files are stored in the same workspace or Lakehouse file system:

import sys
sys.path.append('/lakehouse/default/Files/code')  # Adjust path as needed

import custom_file_name
custom_file_name.my_function()

Make sure the .py files are accessible and not in a markdown or notebook format.

2. Use Fabric Notebooks as Modules

If you're trying to call another notebook, not just a .py file, Fabric currently does not support %run-style notebook chaining natively. Instead, consider:

  • Refactoring shared logic into .py files.
  • Using pipelines to orchestrate multiple notebooks if needed.

3. Fixing the Livy Error

The error:

InvalidHttpRequestToLivy: from cannot be less than 0 HTTP status code: 400

suggests a malformed request to the Spark Livy endpoint. This could be caused by:

  • A syntax error or invalid cell execution.
  • A misconfigured %run or %load command.
  • A corrupted notebook state — try restarting the session and re-running.

🧪 Debugging Tips

  • Check the file path and ensure it’s relative to the notebook’s execution context.
  • Use os.listdir() to verify the file is visible from the notebook (a quick check is sketched after this list).
  • Restart the notebook kernel to clear any stale state.
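
A quick sanity check along those lines (assuming the same /lakehouse/default/Files/code path used above; adjust it to wherever your helpers actually live):

import os
import sys

code_dir = '/lakehouse/default/Files/code'  # placeholder path
print(os.listdir(code_dir))   # the helper .py files should appear here
print(code_dir in sys.path)   # confirm the folder was added to sys.path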

Let me know if you’d like help structuring your .py files or setting up a reusable module pattern in Fabric!

Thanks for sharing this; it was very useful for us, as I was having issues with the %run magic command. I have one question: is it possible to have the .py file in the workspace rather than in a Lakehouse? I wasn't sure how to write the path to a workspace rather than a Lakehouse. If so, how would the code below have to be modified?

 

import sys
sys.path.append('/lakehouse/default/Files/code')  # Adjust path as needed

import custom_file_name
custom_file_name.my_function()

 Any help much appreciated! 

Regarding the Livy error:
Let me explain the scenario. In my notebook I call a function whose definition is in one of the custom files, which I have currently loaded to the Lakehouse, with sys.path set so the files there are visible.
The function runs for about 4 minutes and then fails with the Livy error.
Let me know if you need any other details, such as the cluster config, to see where exactly it is failing.
**This code runs fine locally but shows this error in Fabric. I'm feeling helpless here.

 

🔍 1. Livy Error After 4 Minutes of Execution

The fact that your function runs for ~4 minutes and then fails with a Livy error (especially when it works locally) suggests a few possible causes:

  • Session Timeout or Resource Exhaustion: Fabric notebooks running on Spark via Livy may have execution timeouts or memory constraints. If your function is memory-intensive or involves long-running operations, it could be hitting a resource ceiling.
  • Cluster Configuration: If you're able to share your cluster specs (e.g., executor memory, cores, timeout settings), we can better pinpoint whether it's a resource issue.
  • Logging: Try wrapping your function with logging or try/except blocks to isolate the exact line where it fails.
import logging
logging.basicConfig(level=logging.INFO)

try:
    result = my_function()
except Exception as e:
    logging.error(f"Function failed: {e}")

📦 2. Importing Custom Python Files (Nested Imports)

You're absolutely right — importing a .py file into a notebook is one thing, but importing one custom file into another (i.e., nested imports) inside built-in resources requires careful path management.

Suggested Approach:

  1. Organize your files like a package:
   /Files/code/
     ├── __init__.py
     ├── helper_a.py
     └── helper_b.py  # imports from helper_a
  2. Set the path in your notebook:
   import sys
   sys.path.append('/lakehouse/default/Files/code')

   from helper_b import some_function
  3. Inside helper_b.py, use absolute imports:
   from helper_a import some_util

⚠️ Avoid relative imports like from .helper_a import ... — they won’t work unless the files are part of a proper Python package and executed as such.


🧪 Final Tips:

  • If you're using built-in resources, make sure the files are uploaded and accessible via the Lakehouse file browser.
  • Consider packaging your helpers into a .whl or .tar.gz and installing them via %pip install if you want to reuse them across notebooks more cleanly (a minimal packaging sketch follows after this list).
  • If the Livy error persists, try testing with a smaller dataset or shorter-running function to isolate whether it's a timeout or memory issue.
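
A minimal, hypothetical packaging sketch (the project name my_helpers, the folder layout, and the Lakehouse wheel path are placeholders, not details from this thread; whether %pip can install directly from a Lakehouse path may depend on your environment):

# Hypothetical project layout, built locally with `python -m build`:
#
#   my_helpers/
#   ├── pyproject.toml          # [project] name = "my_helpers", version = "0.1.0"
#   └── my_helpers/
#       ├── __init__.py
#       ├── helper_a.py
#       └── helper_b.py
#
# Upload the resulting .whl to the Lakehouse (or attach it to a Fabric
# environment), then install and import it in the notebook:

%pip install /lakehouse/default/Files/wheels/my_helpers-0.1.0-py3-none-any.whl

from my_helpers import helper_b
helper_b.some_function()  # some_function is a placeholder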

Let me know if you'd like help reviewing your cluster config or packaging your code for reuse!

If I put the custom Python files in the built-in resources and import them, will it work?
Another issue: importing a file into the notebook is fine, but how do I import one custom file into another custom file when both files are in the built-in resources?
