zakeer1517
Regular Visitor

InvalidHttpRequestToLivy: from cannot be less than 0 HTTP status code: 400

We are currently integrating data from the bronze to silver layer using a DAG that runs notebooks in parallel with a concurrency level of 5. However, the process consistently fails after approximately 28–29 minutes, even though the majority of the workflow executes successfully.

 

  • Total number of tables to process: 172

  • Failure point: After processing around 168 tables

  • Observed behavior: The job fails consistently after running for about 28–29 minutes.

Troubleshooting steps attempted:

  1. Increased session timeout: spark.conf.set("livy.server.session.timeout", "4h")

    → No impact observed.

  2. Scaled up Fabric capacity to F8 → The issue still persists.

Despite these changes, the job fails just before completing, and the timing of the failure is very consistent.

 

Does anyone have any insights or recommendations on what could be causing this failure—especially given the timing and repeatability? Could it be a session timeout, concurrency limit, DAG execution limit, or something else related to Microsoft Fabric or the underlying Livy server?

 

Below is the code snippet:

 

from notebookutils import notebook

# "metadata" is a Spark DataFrame defined earlier; it holds the load-group configuration.
load_group_properties = metadata.first()["load_group_properties"].asDict()
visualize_dag = load_group_properties["visualize_dag"]

# "activities" is the list of notebook activities built earlier from the 172 tables.
DAG = {
    "activities": activities,
    "timeoutInSeconds": load_group_properties["timeout_seconds"],
    "concurrency": load_group_properties["concurrency"]
}

notebook.runMultiple(DAG, {"displayDAGViaGraphviz": visualize_dag})
1 ACCEPTED SOLUTION

Hi @v-kpoloju-msft 

 

I haven't opened a support ticket. I was able to fix the issue by increasing the concurrency limit to 20, which brings the execution time within the session timeout limit (28–29 minutes).
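For reference, the fix amounts to raising the "concurrency" value passed in the DAG definition so the whole workload finishes before the roughly 30-minute window elapses. A minimal sketch (the activity entries here are placeholders, not the real notebook definitions):

```python
# Placeholder activities standing in for the 172 bronze-to-silver notebooks.
activities = [
    {"name": f"load_table_{i}", "path": "nb_load_table", "args": {"table_id": i}}
    for i in range(172)
]

DAG = {
    "activities": activities,
    "timeoutInSeconds": 7200,   # overall DAG timeout
    "concurrency": 20,          # was 5; 20 keeps total runtime under ~29 minutes
}

# In a Fabric notebook you would then run:
# notebookutils.notebook.runMultiple(DAG)
```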

 

Thanks for the support. 

 

 


8 REPLIES
v-kpoloju-msft
Community Support

Hi @zakeer1517,

Thank you for reaching out to the Microsoft Fabric Community, and thanks to @Vinodh247 for sharing valuable inputs on this thread. The solution provided by the super user is correct and aligns well with the issue you described.

Based on your detailed analysis and the consistent failure pattern, this does appear to be related to a possible Livy API limitation or bug, particularly given the error:
InvalidHttpRequestToLivy: from cannot be less than 0.

Since you've already tried increasing the session timeout and scaling up the capacity without success, I recommend raising a Microsoft Fabric support ticket. This will allow the engineering team to further investigate the backend Livy logs and session handling in detail. You can create a Microsoft support ticket using the link below: https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket

 

In the meantime, consider reducing the concurrency level or breaking the workload into smaller DAG batches as a temporary workaround to help avoid the failure.
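The batching workaround above can be sketched as follows. This is an illustrative example, not official guidance: the `chunk` helper and the placeholder activity list are assumptions, and the `runMultiple` call is left commented out because it only works inside a Fabric notebook.

```python
def chunk(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hypothetical activity list standing in for the 172 table-load notebooks.
activities = [{"name": f"load_table_{i}"} for i in range(172)]

# Two DAG runs of 86 tables each, instead of one run of 172.
for batch in chunk(activities, 86):
    dag = {"activities": batch, "timeoutInSeconds": 3600, "concurrency": 3}
    # notebookutils.notebook.runMultiple(dag)  # one call per smaller DAG
```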

If this post helps, then please give us ‘Kudos’ and consider accepting it as a solution to help other members find it more quickly.

Thank you for using Microsoft Community Forum.    

I have this same issue.  Is there any talk of a fix?

Hi @v-kpoloju-msft & @Vinodh247 

 

I tried reducing the concurrency level to two and ended up with the same error. 

Hi @zakeer1517,

Thanks for the follow up question. As I mentioned in my previous post, please consider raising a support ticket. This will allow the engineering team to investigate the backend Livy logs and session handling in detail.

Thank you for using the Microsoft Community Forum.

Hi @zakeer1517,

We are following up once again regarding your query. Could you please confirm if the issue has been resolved through the support ticket with Microsoft?

If the issue has been resolved, we kindly request you to share the resolution or key insights here to help others in the community. If we don’t hear back, we’ll go ahead and close this thread.

Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.

 

Thank you for your understanding and participation.


Hi @zakeer1517,

Thank you for the update, and I’m glad to hear that increasing the concurrency limit resolved the issue and helped keep the execution time within the session timeout window.

Appreciate you sharing the solution, it may help others facing a similar challenge. If you run into any further questions or need assistance in the future, feel free to reach out.

Please give us 'Kudos' and mark your post as the accepted solution so other members can find it more easily.

Thank you for using the Microsoft Community Forum.

Vinodh247
Resolver III

The error "from cannot be less than 0" hints at an internal pagination or offset issue in Livy's REST API as used by notebookutils.notebook.runMultiple. After prolonged concurrent polling (especially under parallel loads), it may be issuing an invalid request with a negative offset.

 

Why does it show up after 28–29 minutes so consistently? Consider the following possible reasons:

  • There could be an internal TTL or token/session refresh window in Livy or Spark (e.g., a 30-minute inactivity limit or API paging expiration).
  • Fabric's notebook orchestrator may be batching or chunking execution state (such as polling logs/outputs) and running into a paging math bug.

With a concurrency of 5 across 172 notebooks, there is a chance of creating:

  • Long-lived sessions lingering in memory (especially if each notebook creates a new Spark session instead of reusing one).
  • Driver memory or thread pool exhaustion, especially if Fabric’s backend queues and polls results via Livy and keeps buffers in memory.

Even though you scaled to F8, this does not change concurrency queues or polling behavior in Livy unless explicitly managed.

 

Common troubleshooting fixes you can try:

  • Instead of firing all 172 in one DAG, batch it into chunks. Try two DAG runs of 86 tables each, or
  • Use concurrency 3 with fewer total DAG nodes per batch. This avoids overwhelming the Livy poller or execution state tracker.
  • If possible, wrap notebook.runMultiple in retry logic to catch failures and re-initiate the run.
  • Use monitoring in Fabric to check Livy session limits, queued job durations, and driver memory usage.
  • Look for patterns like exactly 30 min duration, queued notebook executions, or polling failures.
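The retry suggestion above can be sketched as a generic wrapper. This is an assumption-laden example: `run_fn` stands in for notebookutils.notebook.runMultiple (which only exists inside a Fabric notebook), and the attempt count and backoff are arbitrary illustrative values.

```python
import time

def run_with_retry(run_fn, dag, max_attempts=3, backoff_seconds=60):
    """Call run_fn(dag), retrying on failure as a guard against transient
    Livy polling errors. In Fabric, pass notebookutils.notebook.runMultiple
    as run_fn; here it is any callable that accepts the DAG dict."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_fn(dag)
        except Exception as exc:
            if attempt == max_attempts:
                raise  # out of attempts; surface the last error
            print(f"Attempt {attempt} failed ({exc}); retrying in {backoff_seconds}s")
            time.sleep(backoff_seconds)
```

A wrapper like this will not fix a deterministic backend bug, but it keeps a single transient polling failure from killing an otherwise healthy 28-minute run.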

 

 

Please 'Kudos' and 'Accept as Solution' if this answered your query.
