Please help, my Spark session fails to create in the East Asia region. When I try to start the session in a Fabric notebook I get this:
{
  "timestamp": "2025-02-27T08:54:22.549Z",
  "transientCorrelation": "781bb5f4-4bea-491f-888c-8f86c71a84ca",
  "aznb": {
    "version": "1.6.99"
  },
  "notebook": {
    "notebookName": "sil2gld_dataverse_bog_group_admission",
    "instanceId": "42ffeb44-66fa-43ab-a823-fda17ad62f04",
    "documentId": "trident-w-a4daea94-f87e-49f6-b4e9-89be9c66feb0-a-30105ac5-1d5e-4a8c-aa4b-45c470aa956e",
    "workspaceId": "a4daea94-f87e-49f6-b4e9-89be9c66feb0",
    "kernelId": "d91c6ddb-8b7b-4bd8-8379-690d59567fd7",
    "clientSessionId": "42215bcf-f211-45f3-bfe6-a6cbbf62ec5c",
    "kernelState": "not connected",
    "computeUrl": "https://71da5ef836724f32944f86413a065f32.pbidedicated.windows.net/webapi/capacities/71DA5EF8-3672-4F32-944F-86413A065F32/workloads/Notebook/Data/Direct/api/workspaces/a4daea94-f87e-49f6-b4e9-89be9c66feb0/artifacts/30105ac5-1d5e-4a8c-aa4b-45c470aa956e/jupyterApi/versions/1",
    "computeState": "connected",
    "collaborationStatus": "offline / joined",
    "isSaveLeader": true
  },
  "synapseController": {
    "id": "42ffeb44-66fa-43ab-a823-fda17ad62f04:snc1",
    "enabled": true,
    "activeKernelHandler": "sparkLivy",
    "kernelMetadata": {
      "kernel": "synapse_pyspark",
      "language": "python"
    },
    "state": "error",
    "sessionId": "ddc30162-7766-453b-a3d4-94ac9b073b58",
    "applicationId": null,
    "applicationName": "",
    "sessionErrors": [
      "[NoAvailableCoreService] An internal error occurred. HTTP status code: 500."
    ]
  }
}
And when I try to click into the Spark environment, I get an error there as well.
None of the usual fixes work, such as restarting the Fabric capacity or creating a new Spark environment. Please help; I am on a tight deadline and my boss is screaming at me.
Hi @dragonlobster,
Thinking out loud with you, here are some possible solutions you can try:
- Check for Regional Outages: Since the issue is specific to the East Asia region, check Azure Status to see if there's an outage affecting your Fabric capacity.
https://azure.status.microsoft/en-us/status
- Try a Different Region: If possible, create a Spark environment in another region (e.g., Southeast Asia or another nearby region) to see if the issue is region-specific.
- Manually Scale Your Capacity: If you're using a Fabric Premium Capacity, try increasing the capacity size temporarily to see if that resolves the issue.
- Check Quotas & Limits: Ensure you haven't exceeded your capacity’s limits. Check your workload settings in Fabric Admin Portal → Capacity Settings → Workload to see if Spark has enough resources.
- Contact Microsoft Support Immediately: Since this is a critical issue, raise a Severity A support ticket with Microsoft. Go to Microsoft Support → Support → New Support Request and describe the issue in detail.
If this response was helpful, please accept it as a solution or give kudos to support other community members.
@dragonlobster, as we haven't heard back from you, we wanted to kindly follow up to check whether the solution provided resolved your issue, or let us know if you need any further assistance.
If the issue still persists, I suggest you raise a support ticket so that they can assist you in addressing it. Please follow the link below on how to raise a support ticket:
How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn
Thanks,
Prashanth Are
MS Fabric community support
If this post helps, then please consider accepting it as the solution to help other members find it more quickly, and give kudos if it helped you resolve your query.
This is a forum where users help users, time permitting. For urgent requests, contact a Microsoft partner near you.