Jessy_D
Helper I

Working with multiple nested notebooks

I have a PySpark (Python) notebook that imports other notebooks. When the notebooks are in the same folder, the %run command imports them without issue, but when I place some of the scripts in subfolders to create more structure, I can't seem to get the %run command to work.

 

Any tips for creating a more structured workspace and getting the %run command to work?

1 ACCEPTED SOLUTION
tayloramy
Community Champion

Hi @Jessy_D

 

In addition to what @ibarrau mentioned, you can also use the Scheduler API. Here's my code snippet to do that (self.client is a FabricRestClient):

from typing import Any, Dict, Optional

from sempy.fabric import FabricRestClient


def log_debug(msg: str) -> None:
    # Minimal stand-in logger; swap in your own logging setup.
    print(msg)


class FabricJobClient:
    def __init__(self) -> None:
        self.client = FabricRestClient()

    def start_job_instance(
        self,
        ws_id: str,
        item_id: str,
        job_type: str,
        parameters: Optional[Dict[str, Any]] = None,
    ) -> str:
        """
        Start a Fabric job instance and return the job instance ID.

        Args:
            ws_id: Workspace GUID
            item_id: Item GUID (notebook or pipeline)
            job_type: "RunNotebook", "Notebook", or "Pipeline"
            parameters: Optional dict of parameter name -> value

        Returns:
            Job instance ID (GUID)

        Raises:
            RuntimeError: If the job fails to start
        """
        url = f"v1/workspaces/{ws_id}/items/{item_id}/jobs/instances?jobType={job_type}"

        # Build the request body; parameters go under executionData.
        body = {}
        if parameters:
            body = {
                "executionData": {
                    "parameters": {
                        k: {"value": str(v), "type": "string"}
                        for k, v in parameters.items()
                    }
                }
            }

        log_debug(f"POST {url} params={list((parameters or {}).keys())}")
        resp = self.client.post(url, json=body)
        log_debug(f"POST response: {resp.status_code} Location={resp.headers.get('Location')}")

        # The API returns 202 Accepted when the job is queued successfully.
        if resp.status_code != 202:
            try:
                details = resp.json()
            except Exception:
                details = resp.text
            raise RuntimeError(
                f"Start job failed ({resp.status_code}) jobType={job_type}: {details}"
            )

        # The new job instance ID is the last path segment of the Location header.
        loc = resp.headers.get("Location") or ""
        if "/jobs/instances/" not in loc:
            raise RuntimeError(
                f"202 Accepted but missing job instance Location (jobType={job_type})"
            )

        return loc.rsplit("/jobs/instances/", 1)[1]
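For completeness, here is a minimal sketch of how the method above might be used, polling the Get Item Job Instance endpoint until the run finishes. The GUIDs and the parameter name are placeholders, and the terminal status values are my assumption based on the public Job Scheduler docs:

import time

WS_ID = "00000000-0000-0000-0000-000000000000"    # placeholder workspace GUID
ITEM_ID = "11111111-1111-1111-1111-111111111111"  # placeholder notebook GUID

client = FabricJobClient()
job_id = client.start_job_instance(
    WS_ID, ITEM_ID, "RunNotebook",
    parameters={"run_date": "2025-01-01"},  # hypothetical parameter
)

# Poll the job instance until it reaches a terminal state.
status_url = f"v1/workspaces/{WS_ID}/items/{ITEM_ID}/jobs/instances/{job_id}"
while True:
    status = client.client.get(status_url).json().get("status")
    if status in ("Completed", "Failed", "Cancelled", "Deduped"):
        break
    time.sleep(10)

print(f"Job {job_id} finished with status: {status}")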

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.


5 REPLIES

Thanks, it was indeed intended to become a job, but this solved my issue.

ibarrau
Super User

Hi. I haven't tried it, but thinking of how Databricks works, let me ask: are you using something like this?

%run "./helpers/transformations"

I remember that this wouldn't work; you need the absolute path when calling from another location. Alternatively, you can use Fabric's notebook utilities and do something like this:

notebookutils.notebook.run("notebook name", <timeoutSeconds>, <parameterMap>, <workspaceId>)
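For example, something like this (the notebook name, timeout, and parameter are placeholders; the call returns whatever the child notebook passes to notebookutils.notebook.exit):

# Run a notebook from anywhere in the workspace by name, with a
# 600-second timeout and one hypothetical parameter.
exit_value = notebookutils.notebook.run(
    "helper_notebook", 600, {"input_date": "2025-01-01"}
)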

You can find more info about what you can do with the notebook utilities at: https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities?wt.mc_id=DP-MVP-5004778

I hope that helps.

 


If this post helps, then please consider accepting it as the solution to help other members find it more quickly.

Happy to help!

LaDataWeb Blog

v-dineshya
Community Support

Hi @Jessy_D ,

Thank you for reaching out to the Microsoft Community Forum.

 

The %run command expects a relative path from the root of the workspace, not from the current notebook's location.

 

If your folder structure looks like the one below:

 

/Workspace
├── main_notebook.ipynb        <-- your main notebook
└── utils/                     <-- a subfolder named "utils"
    └── helper_notebook.ipynb  <-- this notebook is inside the "utils" folder

 

main_notebook.ipynb is at the top level of your workspace. In main_notebook.ipynb, you should use the command below.

 

%run ./utils/helper_notebook

 

Note:
- Always start with ./ to indicate the path is relative to the workspace root.
- Use folders for logical grouping.
- Avoid spaces in folder or notebook names.
- Use %run only for notebooks that define functions or reusable logic, as in the sketch below. For notebooks that produce outputs or visualizations, consider using modular Python files instead and import them with a standard Python import.
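For example, if helper_notebook defines a reusable function (clean_columns here is hypothetical), the main notebook can call it right after the %run cell:

# Cell 1 of main_notebook.ipynb: pull in the helper's definitions.
%run ./utils/helper_notebook

# Cell 2: functions defined in helper_notebook are now in scope.
# clean_columns and the "sales" table are placeholders for this sketch.
df = spark.read.table("sales")
df = clean_columns(df)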


Alternative solution: move reusable code into .py files; you can then install them as packages with %pip install or import them directly.

 

Python syntax:

 

from utils.helper_module import some_function
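As a minimal sketch, a hypothetical utils/helper_module.py backing that import could look like this:

# utils/helper_module.py -- hypothetical reusable module

def some_function(value: float, rate: float = 0.1) -> float:
    """Example helper: apply a rate to a value."""
    return value * (1 + rate)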

 

I hope this information helps. Please do let us know if you have any further queries.

 

Regards,

Dinesh

 

Hi @Jessy_D ,

We haven't heard back from you since the last response and were just checking to see if you have a resolution yet. If you have any further queries, do let us know.

 

Regards,

Dinesh
