I have 5 custom Python library files and created a notebook that calls a function. The function definition lives in a separate helper .py file, and some functions inside these custom files call functions from other custom files.
Tried: %run ./custom_file_name
It is not working.
What other alternative ways can I try?
I am also facing an error: "InvalidHttpRequestToLivy: from cannot be less than 0. HTTP status code: 400." I did not find much that is specific to this error; if anyone has any idea, please share.
Hi @shivani111 ,
You're right — %run doesn't work as expected in Microsoft Fabric notebooks, especially when you're working in a Spark environment. This is due to how Fabric handles notebook execution behind the scenes (via Livy), and %run isn't fully supported in that context.
If you're trying to reuse functions from other .py files, here’s a more reliable approach:
import sys
sys.path.append('/lakehouse/default/Files/code')  # adjust path if needed

import my_utils  # assuming the file is my_utils.py
my_utils.my_function()
Make sure that if my_utils.py imports another helper file, both files are in the same folder and that folder is added to sys.path.
The error:
InvalidHttpRequestToLivy: from cannot be less than 0
usually means the notebook tried to send a malformed or unsupported command to the Spark backend. This often happens with %run, %load, or other magic commands that aren't fully supported in Fabric.
If you're trying to chain notebooks (not just import .py files), consider running the child notebook programmatically; a sketch follows below.
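As one possible approach (not spelled out in the original reply), Fabric notebooks ship a built-in notebook utility that can run another notebook from code. A minimal sketch, assuming a notebook named "ChildNotebook" exists in the same workspace:

# Run another workspace notebook and wait up to 90 seconds for it to finish.
# "ChildNotebook" and the timeout are placeholder values; adjust to your setup.
exit_value = notebookutils.notebook.run("ChildNotebook", 90)
print(exit_value)

notebookutils is preloaded in Fabric notebook sessions, so no import is needed; orchestrating the notebooks from a Data pipeline is another option.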
Let me know if you need help structuring the files or debugging the import — happy to help.
If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
Hi @shivani111,
Thank you for sharing your insights and approach in resolving the issue.
We kindly request you to mark your response as the accepted solution, as this will help other community members find answers to similar challenges more efficiently.
Should you have any further queries, please continue leveraging the Fabric Community for assistance.
Thank you.
Thank you, @burakkaragoz, for your response.
Hi shivani111,
We would like to check if the solution provided by @burakkaragoz has resolved your issue. If you have found an alternative approach, we encourage you to share it with the community to assist others facing similar challenges.
If you found the response helpful, please mark it as the accepted solution and add kudos. This recognition benefits other members seeking solutions to similar queries.
Thank you.
Thanks for the help. The issue is resolved; I changed my approach:
1. Uploaded the custom Python library files to the notebook's built-in resources instead of the lakehouse, since files there open in an editable view. Imported them in the notebook using: from builtin import helper
2. If a custom file calls a function from another custom file, the same command works inside that file as well: from builtin import helper1
3. The logic uses the PyDLM Python library, which by default prints log lines like:
INFO:pydlm:Forward filtering completed.
INFO:pydlm:Starting backward smoothing...
INFO:pydlm:Backward smoothing completed.
After some time the process crashes and throws the error: "InvalidHttpRequestToLivy: from cannot be less than 0. HTTP status code: 400." This error is caused by these logs printing below the cell.
Workaround: move all logs to a file in the lakehouse (a sketch follows below).
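A minimal sketch of that workaround (this code is not from the original post), assuming PyDLM logs through a standard Python logger named "pydlm" (which the INFO:pydlm: prefix suggests) and that a logs folder exists under the default lakehouse:

import logging

# Hypothetical log file path under the attached lakehouse; adjust as needed.
log_path = "/lakehouse/default/Files/logs/pydlm_run.log"

pydlm_logger = logging.getLogger("pydlm")
pydlm_logger.handlers.clear()    # drop handlers that write to the cell output
pydlm_logger.propagate = False   # keep messages from reaching the root logger
pydlm_logger.addHandler(logging.FileHandler(log_path))
pydlm_logger.setLevel(logging.INFO)

With the logger writing to a file, the cell output stays small and the notebook no longer floods the Livy session with log lines.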
How can I reference the Python file if I have it in my Workspace? In the example you used the path to a Lakehouse file folder, but I am interested in your second suggestion about using workspace files.
- Upload your .py files to the Lakehouse or Workspace Files (e.g., under a folder like /Files/code).
- In your notebook, add the path and import the module:
import sys
sys.path.append('/lakehouse/default/Files/code')  # adjust path if needed

import my_utils  # assuming the file is my_utils.py
my_utils.my_function()
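If the files instead live as notebook built-in resources rather than in a lakehouse, the import style the original poster later confirmed is the builtin import. A sketch, assuming a file named helper.py has been uploaded through the notebook's resources pane and defines a hypothetical my_function:

# helper.py sits in the notebook's built-in resources folder.
from builtin import helper

helper.my_function()  # hypothetical function defined in helper.py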
Hi @shivani111 ,
You're on the right track trying to modularize your code using helper .py files, but there are a few important considerations when working within Microsoft Fabric notebooks, especially when using Spark (via Livy) as the backend.
The %run magic command is typically used in environments like Databricks or Jupyter, but in Fabric notebooks, especially when using Spark, it may not behave as expected due to how the execution context is managed.
If your .py files are stored in the same workspace or Lakehouse file system:
import sys
sys.path.append('/lakehouse/default/Files/code')  # Adjust path as needed

import custom_file_name
custom_file_name.my_function()
Make sure the .py files are accessible and not in a markdown or notebook format.
If you're trying to call another notebook, not just a .py file, Fabric currently does not support %run-style notebook chaining natively. Instead, consider running the child notebook programmatically (for example with notebookutils.notebook.run(), as sketched earlier in this thread) or orchestrating the notebooks from a Data pipeline.
The error:
InvalidHttpRequestToLivy: from cannot be less than 0 HTTP status code: 400
suggests a malformed request to the Spark Livy endpoint. This could be caused by unsupported magic commands or, as it turned out in this thread, by a very large volume of log output printed below a cell.
Let me know if you’d like help structuring your .py files or setting up a reusable module pattern in Fabric!
Thanks for sharing this; it was very useful for us, as I was having issues with the %run magic command. One question: is it possible to have the .py file in the workspace rather than in a lakehouse? I wasn't sure how to write the path to a workspace rather than a lakehouse. If so, how would the below have to be modified?
import sys
sys.path.append('/lakehouse/default/Files/code')  # Adjust path as needed

import custom_file_name
custom_file_name.my_function()
Any help much appreciated!
For the Livy error:
Let me explain the scenario: in my notebook I call a function whose definition is in one of the custom files, which I have currently loaded to the lakehouse, with sys.path set so the notebook can see the files there.
The function runs for 4 minutes and then fails with the Livy error.
Let me know if you need any other details, like the cluster config, to see where exactly it is failing.
**This code runs fine locally but shows this error in Fabric. I am feeling helpless here.
The fact that your function runs for ~4 minutes and then fails with a Livy error (especially when it works locally) suggests a few possible causes, excessive output below the cell among them. To narrow down where it fails, wrap the call with logging:
import logging
logging.basicConfig(level=logging.INFO)

try:
    result = my_function()
except Exception as e:
    logging.error(f"Function failed: {e}")
You're absolutely right — importing a .py file into a notebook is one thing, but importing one custom file into another (i.e., nested imports) inside built-in resources requires careful path management.
Suppose the folder looks like this:

/Files/code/
├── __init__.py
├── helper_a.py
└── helper_b.py   # imports from helper_a

In the notebook:

import sys
sys.path.append('/lakehouse/default/Files/code')

from helper_b import some_function

Inside helper_b.py, use an absolute import for the sibling module:

from helper_a import some_util
⚠️ Avoid relative imports like from .helper_a import ... — they won’t work unless the files are part of a proper Python package and executed as such.
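To make the layout concrete, here is a minimal sketch of the two helper files (the names some_util and some_function come from the snippets above; the bodies are hypothetical placeholders):

# helper_a.py
def some_util(x):
    # Placeholder logic for illustration only.
    return x * 2

# helper_b.py
from helper_a import some_util  # absolute import; resolves once the folder is on sys.path

def some_function(x):
    return some_util(x) + 1

In the notebook, from helper_b import some_function followed by some_function(5) would return 11 under this sketch.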
Let me know if you'd like help reviewing your cluster config or packaging your code for reuse!
If I import custom Python files from the built-in resources, will it work?
Another issue: importing a file into the notebook is fine, but how do I import a custom file into another custom file when both files are in the built-in resources?