angela_n
Frequent Visitor

Terraform + Azure DevOps Pipeline Issue with Fabric Notebooks

Hey everyone,

I’ve been using Terraform to create all the elements I need in Microsoft Fabric, and everything works fine when I run it locally under my user. Dev workspaces are created, and all elements are correctly assigned to my user.

However, when I try to execute the same process via an Azure DevOps pipeline (running under a Service Principal), most elements are created, but I keep running into this issue:

 

│ Error: Create operation
│
│   with fabric_notebook.sp_lakehouses,
│   on notebooks_with_depends_on.tf line 568, in resource "fabric_notebook" "xxxxxx":
│  568: resource "fabric_notebook" "sp_lakehouses" {
│
│ Could not create resource: Requested 'xxxxxx' is not
│ available yet and is expected to become available in the upcoming minutes.
│
│ Error Code: ItemDisplayNameNotAvailableYet

 

 

│ Error: Provider returned invalid result object after apply
│
│ After the apply operation, the provider still indicated an unknown value for
│ fabric_notebook.XXXXX.definition["notebook-content.ipynb"].source_content_sha256.
│ All values must be known after apply, so this is always a bug in the
│ provider and should be reported in the provider's own repository. Terraform
│ will still save the other known object values in the state.

 

I’m currently creating 25 notebooks and suspected the volume might be causing the issue, so I added a 30-second sleep dependency and created only five notebooks at a time. However, the notebooks that fail aren’t always the same, and some are created successfully by the pipeline.

This issue doesn’t happen when I run everything locally, and I’m sure I’m using the same Terraform version.

Has anyone else faced a similar problem?

Any insights or workarounds would be greatly appreciated!

Thanks in advance!

v-pnaroju-msft
Community Support

Hi angela_n,

We have not received a response from you regarding the query and were following up to check if you have found a resolution. If you have identified a solution, we kindly request you to share it with the community, as it may be helpful to others facing a similar issue.

If you find the response helpful, please mark it as the accepted solution, as this will help other members with similar queries.

Thank you.


v-pnaroju-msft
Community Support

Hi angela_n,

Thank you for the update. We understand the urgency of your production deadline.

Since the issue continues even after applying all the best practices, and the same Terraform configuration works in the development environment but fails in production, this matches a known issue in the Microsoft Fabric Terraform provider related to notebook deployments.

The current problem is similar to the one described in GitHub Issue #500, where bulk or sequential notebook creation intermittently fails to return source_content_sha256, especially in new environments.

Therefore, we kindly request you to file a Microsoft Support ticket at aka.ms/fabricsupport, including logs and a link to GitHub Issue #500. This will help escalate the matter to the engineering team, considering the impact on production.

As a workaround, please consider using the Microsoft Fabric REST API to deploy notebooks until the provider is fixed. Also, continue to monitor the official GitHub releases of the provider for updates.

If you find our response helpful, please mark it as the accepted solution. This will help other community members who are facing similar issues.

Should you have any further questions, please feel free to contact the Microsoft Fabric community.

Thank you.

v-pnaroju-msft
Community Support

Thank you, @burakkaragoz, for your response.

Hi angela_n,

We appreciate your inquiry on the Microsoft Fabric Community Forum.

Thank you for your detailed update and for applying the key best practices, such as using depends_on, sleep, and reducing parallelism.

Please follow the steps below which may help resolve the issue:

  1. Notebook deployments may fail if the dependent Lakehouses or Warehouses are not fully available, even if the workspace exists. Ensure you add explicit depends_on referencing the Lakehouse or Data resources, not only other notebooks. Also, include a check to wait until the Lakehouse API endpoint responds before proceeding.

  2. Use a null_resource with local-exec to call the Fabric REST API and poll for resource readiness instead of relying only on sleep.

  3. Verify that the Service Principal used by your DevOps pipeline has admin access to the Fabric workspace and has the necessary API permissions in Azure AD, such as Graph and Power BI.

  4. If notebooks are created but Terraform fails to track them due to asynchronous read issues, you can manually add the resources into the state by using terraform import:
    terraform import fabric_notebook.my_notebook <workspace_id>/<notebook_id>

  5. The error "source_content_sha256 unknown after apply" likely occurs because of how the provider handles post-creation reads. If this happens consistently, please report the issue with logs and a reproducible configuration.
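Steps 1 and 2 above can be combined in a single readiness gate. The sketch below is illustrative only: the resource names (fabric_workspace.main, fabric_lakehouse.main), the FABRIC_TOKEN environment variable, and the exact endpoint path are placeholders you should verify against the current Fabric REST API documentation.

```hcl
# Poll the Fabric REST API until the Lakehouse responds, instead of a fixed sleep.
# All names and the endpoint path are assumptions; adjust to your configuration.
resource "null_resource" "wait_for_lakehouse" {
  depends_on = [fabric_lakehouse.main]

  provisioner "local-exec" {
    # Retry for up to ~5 minutes (30 attempts x 10 s).
    command = <<-EOT
      for i in $(seq 1 30); do
        code=$(curl -s -o /dev/null -w "%%{http_code}" \
          -H "Authorization: Bearer $FABRIC_TOKEN" \
          "https://api.fabric.microsoft.com/v1/workspaces/${fabric_workspace.main.id}/lakehouses/${fabric_lakehouse.main.id}")
        [ "$code" = "200" ] && exit 0
        sleep 10
      done
      echo "Lakehouse not ready after 300s" >&2
      exit 1
    EOT
  }
}

resource "fabric_notebook" "example" {
  # ... notebook definition ...
  depends_on = [null_resource.wait_for_lakehouse]
}
```

Notebooks that depend on the null_resource will then only be created once the Lakehouse endpoint actually answers, rather than after an arbitrary delay.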

Running Terraform apply locally using a service principal is a valid temporary solution to unblock production deployment.

Additionally, you may refer to the following links:
Terraform Provider for Microsoft Fabric (Generally Available) | Microsoft Fabric Blog | Microsoft Fa...
GitHub - microsoft/terraform-provider-fabric: Terraform Provider for Microsoft Fabric
Releases · microsoft/terraform-provider-fabric
Microsoft Fabric REST API references - Microsoft Fabric REST APIs | Microsoft Learn

If you find this response helpful, kindly mark it as the accepted solution and provide kudos. This will assist other community members facing similar questions.

Should you have any further queries, please feel free to contact the Microsoft Fabric community.

Thank you.

Thanks @burakkaragoz, for all your help. 

 

Unfortunately, none of the proposed solutions have resolved the issue in my case.

 

I’ve tried the following:

  • Creating notebooks one at a time using my own user account (which has full access and works perfectly when running Terraform with dev as the environment).
  • Adding depends_on 
  • Reducing parallelism and sleep delays.
  • Using terraform import after a successful creation. On the next execution Terraform recreates the object (deleting the existing ones), or sometimes it returns:
│ Error: Resource already managed by Terraform
│
│ Terraform is already managing a remote object for fabric_notebook.pitchly_load_pitchly_from_dealcloud. To import to this address you must first remove the existing object from the state.
╵
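For context, clearing that conflict requires removing the stale state entry before re-importing, as the error message itself says (the resource address and IDs below are placeholders):

```shell
# Drop the stale state entry, then re-import the live object
terraform state rm fabric_notebook.my_notebook
terraform import fabric_notebook.my_notebook <workspace_id>/<notebook_id>
```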

 

Despite these efforts, the issue persists only in the new environments. What’s particularly puzzling is that the exact same Terraform configuration works flawlessly in dev. The only difference is the environment parameter (dev vs prd).

In prd, even when creating notebooks one by one, only two succeeded; the rest failed with the same error:

source_content_sha256 unknown after apply


So the issue is not related to user access or resource availability, but to a bug in the Fabric provider.

 

My deadline is already overdue, and resorting to a manual release process will negatively impact future deployments. Sadly, this is yet another indication that Microsoft Fabric may not yet be fully ready for production.

angela_n
Frequent Visitor

Hi @burakkaragoz 


Thanks so much for taking the time to share your insights—I really appreciate it! I went ahead and implemented the suggested solutions. Specifically, I set  -parallelism=1 in my Terraform apply and tested 3 approaches:

  • Explicit  depends_on: I added depends_on  for all notebooks, making each one dependent on the previous notebook.
  • Null resource wait mechanism: I batched the notebooks in groups of five and used a null resource with a local-exec provisioner to introduce a wait period of up to sleep_time = 300 seconds:
resource "null_resource" "wait_batch_25" {
  triggers = {
    always_run = timestamp()
  }
  depends_on = [null_resource.wait_batch_24]
  provisioner "local-exec" {
    command = "sleep ${local.sleep_time}"
  }
}
  • The same as the previous one, but with a null resource per notebook.

Unfortunately, all the approaches resulted in the same issue:

 

Error: Provider returned invalid result object after apply

After the apply operation, the provider still indicated an unknown value for
fabric_notebook.dealcloud_full_refresh_from_bronze.definition["notebook-content.ipynb"].source_content_sha256.
All values must be known after apply, so this is always a bug in the provider
and should be reported in the provider's own repository. Terraform will still
save the other known object values in the state

 


Would love to hear your thoughts on whether there's another workaround. For now, I am asking my DevOps team to create a service account so I can run the TF code locally from my machine and move to Prod.


Thanks again for your help!

@angela_n ,

 

Thank you for your detailed feedback and for testing the suggested approaches so thoroughly. I see you've already tried:

  • Setting -parallelism=1
  • Explicit depends_on for all notebooks
  • Using a null_resource with local-exec sleep, both per batch and per notebook

If the error still persists (“Provider returned invalid result object after apply... All values must be known after apply...”), even with strict sequencing and waiting, it suggests a deeper issue—most likely with the Terraform provider’s handling of async resource propagation in Fabric.

Additional Thoughts and Workarounds:

  1. Provider Bug Possibility:

    • This error message (unknown value for ... source_content_sha256) is a classic sign that the provider can’t get the final resource state from the Fabric API after creation.
    • If the resource is actually created in the portal/Fabric, but Terraform can’t read it back, it’s likely a provider-side bug with how it retrieves resource properties right after creation.
    • Recommend opening an issue in the provider’s GitHub repo with all your details (steps, config, logs). The provider maintainers might need to add a retry or polling mechanism to their resource “read” logic.
  2. Manual Import as Temporary Fix:

    • If the notebooks are successfully created in Fabric, you could try manually importing them into your Terraform state (using terraform import) after creation. This isn’t a long-term fix, but can unblock you.
  3. Service Principal Permissions/Context:

    • Double-check if the service principal used by your pipeline has exactly the same permissions and access as your user. Sometimes, subtle permission differences affect which objects can be read/returned by the API.
  4. API/Provider Version:

    • Make sure you are using the latest version of the Azure/Fabric Terraform provider.
    • If possible, try with a previous provider version in case this is a recent regression.
  5. Alternative: Run Locally as Interim Solution:

    • Your approach to run the pipeline locally with a service principal is valid as a temporary workaround, especially for Prod migrations.
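On point 4, pinning (or rolling back) the provider version is done in the required_providers block. This is a sketch only: the version constraints are placeholders, so check the provider's GitHub releases page for the actual latest or last-known-good version.

```hcl
# Pin the Fabric provider so local and pipeline runs use the same version,
# or temporarily pin an older release to test for a regression.
terraform {
  required_providers {
    fabric = {
      source  = "microsoft/fabric"
      version = "~> 1.0" # placeholder; e.g. use "= 0.1.x" to roll back
    }
  }
}
```

Pinning also rules out the case where the pipeline silently resolved a different provider version than your local machine.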

Summary:
You’ve already implemented the best-practice workarounds. If the problem remains, it’s likely a provider bug or a Fabric API issue that needs to be surfaced to the provider maintainers.
I recommend gathering your minimal reproducible example and opening a GitHub issue in the Terraform provider repo; also, check if others have reported similar issues.

Let me know if you need help drafting the GitHub issue, or if you want to test any other advanced workaround!

burakkaragoz
Community Champion

Hi @angela_n ,

This is a common issue when deploying resources in parallel with Terraform, especially via service principals in Azure DevOps. The root cause is usually eventual consistency in the Fabric API—after creating a workspace or parent resource, there’s a short delay before its dependent resources (like notebooks) become available for further operations.

Key recommendations:

  • Insert explicit depends_on between your resources, ensuring workspaces (and any other parent objects) are fully recognized before notebook creation starts.
  • Add a retry/wait loop in your pipeline (not just a sleep): Use something like terraform-null-resource with local-exec to poll/check for resource readiness before proceeding.
  • Reduce parallelism: In your Azure DevOps pipeline, run terraform apply with -parallelism=1 (this is a CLI flag, not a configuration setting) or limit parallel jobs to 1. Even batching by 5 can still hit race conditions.
  • Check for API throttling or propagation delay: This can randomly affect which notebooks fail each run.
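As a concrete example of the serial-deployment option: creation order can be forced in the configuration itself by chaining depends_on between notebooks, which has the same effect as -parallelism=1 but is version-controlled. The resource names below are placeholders.

```hcl
# Chained depends_on forces strictly sequential notebook creation,
# giving the Fabric API time to propagate each item before the next.
resource "fabric_notebook" "nb_1" {
  # ... notebook definition ...
}

resource "fabric_notebook" "nb_2" {
  # ... notebook definition ...
  depends_on = [fabric_notebook.nb_1]
}
```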

Summary:
This isn’t a bug in your code or Terraform version, but a timing/consistency lag with Fabric’s resource propagation. Sequential creation or readiness checks usually resolve it.

Let me know if you need an example of how to implement the readiness check or serial deployment in your pipeline!
