Hey everyone,
I’ve been using Terraform to create all the elements I need in Microsoft Fabric, and everything works fine when I run it locally under my user. Dev workspaces are created, and all elements are correctly assigned to my user.
However, when I try to execute the same process via an Azure DevOps pipeline (running under a Service Principal), most elements are created, but I keep running into this issue:
│ Error: Create operation
│
│ with fabric_notebook.sp_lakehouses,
│ on notebooks_with_depends_on.tf line 568, in resource "fabric_notebook" "xxxxxx":
│ 568: resource "fabric_notebook" "sp_lakehouses" {
│
│ Could not create resource: Requested 'xxxxxx' is not
│ available yet and is expected to become available in the upcoming minutes.
│
│ Error Code: ItemDisplayNameNotAvailableYet
│ Error: Provider returned invalid result object after apply
│
│ After the apply operation, the provider still indicated an unknown value
│ for
│ fabric_notebook.XXXXX.definition["notebook-content.ipynb"].source_content_sha256.
│ All values must be known after apply, so this is always a bug in the
│ provider and should be reported in the provider's own repository. Terraform
│ will still save the other known object values in the state
I’m currently creating 25 notebooks, and I suspected this might be causing the issue, so I added a dependency to sleep for 30 seconds and only created five notebooks at a time. However, the notebooks that fail aren’t always the same, and some do get created successfully with the pipeline.
This issue doesn’t happen when I run everything locally, and I’m sure I’m using the same Terraform version.
Has anyone else faced a similar problem?
Any insights or workarounds would be greatly appreciated!
Thanks in advance!
Hi angela_n,
We have not received a response from you regarding this query and are following up to check whether you have found a resolution. If you have identified a solution, we kindly request you to share it with the community, as it may help others facing a similar issue.
If you find the response helpful, please mark it as the accepted solution, as this will help other members with similar queries.
Thank you.
Hi angela_n,
Thank you for the update. We understand the urgency of your production deadline.
Since the issue continues even after applying all the best practices, and the same Terraform configuration works in the development environment but fails in production, this matches a known issue in the Microsoft Fabric Terraform provider related to notebook deployments.
The current problem is similar to the one described in GitHub Issue #500, where bulk or sequential notebook creation intermittently fails to return source_content_sha256, especially in new environments.
Therefore, we kindly request you to file a Microsoft Support ticket at aka.ms/fabricsupport, including logs and a link to GitHub Issue #500. This will help escalate the matter to the engineering team, considering the impact on production.
As a workaround, please consider using the Microsoft Fabric REST API to deploy notebooks until the provider is fixed. Also, continue to monitor the official GitHub releases of the provider for updates.
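As a rough sketch of that workaround, the Fabric Items "Create Item" endpoint accepts a notebook definition as an inline base64 part. The helper names and the notebook part path below are illustrative, not the author's code:

```python
import base64

FABRIC_API = "https://api.fabric.microsoft.com/v1"  # Fabric REST base URL


def build_notebook_payload(display_name: str, notebook_source: str) -> dict:
    """Build a request body for the Fabric 'Create Item' API.

    The definition part carries the notebook content base64-encoded,
    since the Items API expects InlineBase64 payloads.
    """
    encoded = base64.b64encode(notebook_source.encode("utf-8")).decode("ascii")
    return {
        "displayName": display_name,
        "type": "Notebook",
        "definition": {
            "parts": [
                {
                    "path": "notebook-content.py",
                    "payload": encoded,
                    "payloadType": "InlineBase64",
                }
            ]
        },
    }


def create_notebook(workspace_id: str, token: str, body: dict):
    # Requires the 'requests' package; the bearer token must carry
    # item-write permissions for the service principal.
    import requests

    url = f"{FABRIC_API}/workspaces/{workspace_id}/items"
    resp = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=body)
    resp.raise_for_status()
    return resp
```

Because the API call is explicit, you can poll the returned item yourself before moving on, which sidesteps the provider's post-creation read.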
If you find our response helpful, please mark it as the accepted solution. This will help other community members who are facing similar issues.
Should you have any further questions, please feel free to contact the Microsoft Fabric community.
Thank you.
Thank you, @burakkaragoz, for your response.
Hi angela_n,
We appreciate your inquiry on the Microsoft Fabric Community Forum.
Thank you for your detailed update and for applying the key best practices, such as using depends_on, sleep, and reducing parallelism.
Please follow the steps below which may help resolve the issue:
Notebook deployments may fail if dependent Lakehouses or Warehouses are not yet fully available, even when the workspace exists. Add explicit depends_on references to the Lakehouse or data resources, not only to other notebooks, and include a check that waits until the Lakehouse API endpoint responds before proceeding.
Use a null_resource with local-exec to call the Fabric REST API and poll for resource readiness instead of relying only on sleep.
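A minimal sketch of that polling approach, assuming a curl-based probe against the Lakehouse GET endpoint (the resource names, token variable, and retry counts are illustrative):

```hcl
# Poll the Lakehouse endpoint until it responds, instead of
# sleeping for a fixed amount of time.
resource "null_resource" "wait_for_lakehouse" {
  depends_on = [fabric_lakehouse.main]

  provisioner "local-exec" {
    command = <<-EOT
      for i in $(seq 1 30); do
        curl -sf -H "Authorization: Bearer $FABRIC_TOKEN" \
          "https://api.fabric.microsoft.com/v1/workspaces/${fabric_lakehouse.main.workspace_id}/lakehouses/${fabric_lakehouse.main.id}" \
          && exit 0
        sleep 10
      done
      echo "lakehouse never became available" >&2
      exit 1
    EOT
  }
}
```

Notebooks can then depend on `null_resource.wait_for_lakehouse` so they only start once the API actually serves the parent resource.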
Verify that the Service Principal used by your DevOps pipeline has admin access to the Fabric workspace and has the necessary API permissions in Azure AD, such as Graph and Power BI.
If notebooks are created but Terraform fails to track them due to asynchronous read issues, you can manually add the resources into the state by using terraform import:
terraform import fabric_notebook.my_notebook <workspace_id>/<notebook_id>
The error "source_content_sha256 unknown after apply" likely occurs because of how the provider handles post-creation reads. If this happens consistently, please report the issue with logs and a reproducible configuration.
Running Terraform apply locally using a service principal is a valid temporary solution to unblock production deployment.
Additionally, you may refer to the following links:
Terraform Provider for Microsoft Fabric (Generally Available) | Microsoft Fabric Blog | Microsoft Fa...
GitHub - microsoft/terraform-provider-fabric: Terraform Provider for Microsoft Fabric
Releases · microsoft/terraform-provider-fabric
Microsoft Fabric REST API references - Microsoft Fabric REST APIs | Microsoft Learn
If you find this response helpful, kindly mark it as the accepted solution and provide kudos. This will assist other community members facing similar questions.
Should you have any further queries, please feel free to contact the Microsoft Fabric community.
Thank you.
Thanks, @burakkaragoz, for all your help.
Unfortunately, none of the proposed solutions have resolved the issue in my case.
I’ve tried the following:
│ Error: Resource already managed by Terraform
│
│ Terraform is already managing a remote object for fabric_notebook.pitchly_load_pitchly_from_dealcloud. To import to this address you must first remove the existing object from the state.
╵
Despite these efforts, the issue persists only in the new environments. What’s particularly puzzling is that the exact same Terraform configuration works flawlessly using dev. The only difference is the environment parameter (dev vs prd).
In prd, even when creating notebooks one by one, only two succeeded — the rest failed with the same error:
source_content_sha256 unknown after apply
So the issue does not appear to be related to user access or resource availability, but rather to a bug in the Fabric provider.
My deadline is already overdue, and resorting to a manual release process will negatively impact future deployments. Sadly, this is yet another indication that Microsoft Fabric may not yet be fully ready for production.
Thanks so much for taking the time to share your insights; I really appreciate it! I went ahead and implemented the suggested solutions. Specifically, I set -parallelism=1 on terraform apply and tested three approaches:
resource "null_resource" "wait_batch_25" {
  triggers = {
    always_run = timestamp()
  }
  depends_on = [null_resource.wait_batch_24]
  provisioner "local-exec" {
    command = "sleep ${local.sleep_time}"
  }
}
Error: Provider returned invalid result object after apply
After the apply operation, the provider still indicated an unknown value for
fabric_notebook.dealcloud_full_refresh_from_bronze.definition["notebook-content.ipynb"].source_content_sha256.
All values must be known after apply, so this is always a bug in the provider
and should be reported in the provider's own repository. Terraform will still
save the other known object values in the state
Would love to hear your thoughts on whether there's another workaround. For now, I'm asking my DevOps team to create a service account so I can run the Terraform code locally from my machine and move to Prod.
Thanks again for your help!
Thank you for your detailed feedback and for testing the suggested approaches so thoroughly. I see you've already tried:
If the error still persists (“Provider returned invalid result object after apply... All values must be known after apply...”), even with strict sequencing and waiting, it suggests a deeper issue—most likely with the Terraform provider’s handling of async resource propagation in Fabric.
Additional thoughts and workarounds:
- Provider bug possibility
- Manual import as a temporary fix
- Service Principal permissions/context
- API/provider version
- Alternative: run locally as an interim solution
Summary:
You’ve already implemented the best-practice workarounds. If the problem remains, it’s likely a provider bug or a Fabric API issue that needs to be surfaced to the provider maintainers.
I recommend gathering your minimal reproducible example and opening a GitHub issue in the Terraform provider repo; also, check if others have reported similar issues.
Let me know if you need help drafting the GitHub issue, or if you want to test any other advanced workaround!
Hi @angela_n ,
This is a common issue when deploying resources in parallel with Terraform, especially via service principals in Azure DevOps. The root cause is usually eventual consistency in the Fabric API—after creating a workspace or parent resource, there’s a short delay before its dependent resources (like notebooks) become available for further operations.
Key recommendations:
Summary:
This isn’t a bug in your code or Terraform version, but a timing/consistency lag with Fabric’s resource propagation. Sequential creation or readiness checks usually resolve it.
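One way to sketch such a propagation delay is with the hashicorp/time provider's time_sleep resource, chaining notebooks through the delay rather than through each other (resource names and attributes below are illustrative and incomplete, e.g. the notebook definition block is omitted):

```hcl
# Give the Fabric API time to propagate the new workspace before
# any dependent notebooks are created.
resource "time_sleep" "workspace_propagation" {
  depends_on      = [fabric_workspace.dev]
  create_duration = "60s"
}

resource "fabric_notebook" "example" {
  workspace_id = fabric_workspace.dev.id
  display_name = "example"

  # Every notebook depends on the single delay, so combined with
  # -parallelism=1 they are created one at a time after the wait.
  depends_on = [time_sleep.workspace_propagation]
}
```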
Let me know if you need an example of how to implement the readiness check or serial deployment in your pipeline!