jaryszek
Memorable Member

We cannot process the request as we encountered a transient issue when trying to determine user permissions defined in OneLake

Hi Guys,

I am getting the following error after copying a semantic model from one workspace into another using TE3:

Underlying Error: {"code":"ASOperationExceptionError","pbi.error":{"code":"ASOperationExceptionError","parameters":{},"details":[{"code":"ModelingServiceError_Reason","detail":{"type":1,"value":"We cannot process the request as we encountered a transient issue when trying to determine user permissions defined in OneLake. Please wait a few minutes and try again.\r\n{\"RootActivityId\":\"F8E98186-BD57-4C34-AF5A-7A1E622709A4\"}\r\n"}},{"code":"ModelingServiceError_Location","detail":{"type":1,"value":"ModelingEngineHost"}},{"code":"ModelingServiceError_ExceptionType","detail":{"type":1,"value":"ModelingASOperationException"}},{"code":"ModelingServiceError_Message","detail":{"type":1,"value":"We cannot process the request as we encountered a transient issue when trying to determine user permissions defined in OneLake. Please wait a few minutes and try again.\r\n{\"RootActivityId\":\"F8E98186-BD57-4C34-AF5A-7A1E622709A4\"}\r\n"}},{"code":"ModelingServiceError_UserErrorCategory","detail":{"type":1,"value":"Unknown"}},{"code":"ModelingServiceError_AdditionalErrorCode","detail":{"type":1,"value":"PFE_UNIVERSAL_SECURITY_TRANSIENT_ERROR"}},{"code":"ModelingServiceError_AdditionalErrorCode","detail":{"type":1,"value":"PFE_EXTRA_INFORMATION"}}]}}
Activity ID: 214813cc-6569-460c-95b6-53b86b70206c
Correlation ID: cff3a314-f5ab-6958-f962-e9c0d1a12c3f
Request ID: 908782dc-96b5-8672-8e62-cb8b82eb9dde
Status code: 500
Time: Tue Nov 11 2025 16:37:22 GMT+0100 (Central European Standard Time)
Service version: 13.0.27046.36
Client version: 2511.1.26629-train
Cluster URI: https://wabi-us-central-b-primary-redirect.analysis.windows.net/


How can I move a semantic model using only Microsoft tools?


Does anybody know this error?

Best,
Jacek

Admin: please move this topic to the Developer forum section.

1 ACCEPTED SOLUTION
jaryszek
Memorable Member

Problem solved,

The issue was that the workspaces were in different regions. To move the model I used Tabular Editor 3; after deploying the model, I had to change the shared connection string to point to the lakehouse already created in the target workspace.

Best,
Jacek

View solution in original post

8 REPLIES
jaryszek
Memorable Member

Problem solved,

The issue was that the workspaces were in different regions. To move the model I used Tabular Editor 3; after deploying the model, I had to change the shared connection string to point to the lakehouse already created in the target workspace.

Best,
Jacek

Thanks for the update. It will definitely help others in the community as well. Please stay connected and continue participating in future discussions.

Shubham_rai955
Power Participant

The error you're seeing: "We cannot process the request as we encountered a transient issue when trying to determine user permissions defined in OneLake" with code PFE_UNIVERSAL_SECURITY_TRANSIENT_ERROR—typically means the Microsoft Fabric service had a temporary problem checking the OneLake permissions required by your semantic model or associated lakehouse/shortcut. This is a short-lived platform or security resolution issue, so simply retrying the operation after several minutes usually resolves it.​

Best Practices for Moving Semantic Models (Using Microsoft Tools Only)

  • Deployment Pipelines: Use Fabric deployment pipelines when moving semantic models between workspaces in the same tenant, provided both workspaces are on Premium or Fabric capacity. This is the official, supported method for migrating semantic models along with data and partitions.​

  • Power BI REST APIs: For large models or CI/CD scenarios, export from one workspace using the Export API, then import to the target workspace with the Import API (a sketch follows this list). This method preserves model data and is ideal for Fabric and large Direct Lake models.

  • SSMS/XMLA Endpoints: You can use SQL Server Management Studio (SSMS) to connect to the XMLA endpoint of the target workspace and restore the model using TMSL. This is recommended for exact copies within the same tenant for models too large for PBIX export/import.​

  • ALM Toolkit or PBIP Structure: For small models or metadata-only moves, you can use ALM Toolkit or PBIP deployments, though these won't copy lake or import model data. Avoid .pbix direct publishing for large Direct Lake models.​

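As a concrete illustration of the REST bullet above, here is a minimal sketch, assuming a Python/requests client, that downloads a report's PBIX from the source workspace and imports it into the target. The workspace/report IDs, file name, and token are placeholders, and some model types cannot be downloaded as PBIX, so check whether export applies to your model.

# Hedged sketch: export a PBIX from one workspace and import it into another
# via the Power BI REST API. IDs, file name, and the access token are placeholders.
import requests

BASE = "https://api.powerbi.com/v1.0/myorg"
TOKEN = "<access-token>"                     # acquire via Azure AD / MSAL (placeholder)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

SOURCE_WORKSPACE = "<source-workspace-id>"   # placeholder GUIDs
TARGET_WORKSPACE = "<target-workspace-id>"
REPORT_ID = "<report-id>"

# 1) Export the report (with its model, where supported) as a PBIX file.
export = requests.get(
    f"{BASE}/groups/{SOURCE_WORKSPACE}/reports/{REPORT_ID}/Export",
    headers=HEADERS,
)
export.raise_for_status()

# 2) Import the PBIX into the target workspace under a display name.
import_resp = requests.post(
    f"{BASE}/groups/{TARGET_WORKSPACE}/imports",
    headers=HEADERS,
    params={"datasetDisplayName": "SalesModel.pbix", "nameConflict": "CreateOrOverwrite"},
    files={"file": ("SalesModel.pbix", export.content)},
)
import_resp.raise_for_status()
print(import_resp.json())                    # import id; poll /imports/{id} until it succeeds
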
Troubleshooting Tips

  • Confirm both source and target workspaces are in the same region/capacity and that necessary permissions on OneLake items are granted and propagated for the service principal or user.​

  • If using TE3 or third-party tooling, transient errors are more likely; using Microsoft-native methods (pipelines, REST APIs, SSMS) makes failures less frequent.​

  • For repeated errors, review OneLake security settings, check for capacity assignment issues, and make sure no recent platform outages are impacting the region.​

Retry first, and if the problem persists, use deployment pipelines or REST API export/import for the most robust and supported migration path.

SolomonovAnton
Super User

1) First aid for the error you pasted

Message highlights: PFE_UNIVERSAL_SECURITY_TRANSIENT_ERROR, OneLake permissions, Status code 500, ModelingEngineHost. This typically points to a short-lived platform/security resolution issue when the service tries to evaluate OneLake permissions for the item your semantic model depends on (Lakehouse/Warehouse/Shortcut). Microsoft classes it as transient—so a retry often succeeds—but you can harden your setup:

  1. Confirm access on the underlying OneLake items (Lakehouse/Warehouse/Shortcut used by Direct Lake/DirectQuery): users/service principals running the query need at least Read on the item and Build on the semantic model. Also verify they have a role in the workspace (Viewer+). See OneLake integration & permissions overview.
  2. Capacity & region health: your cluster shows wabi-us-central-b. If your users are in EU but model runs in US, occasional authorization lookups can be chatty; ensure the workspace is in the capacity/region you expect and capacity isn’t throttled.
  3. Retry and capture the RootActivityId from the message (you have it); a simple retry wrapper is sketched after this list. If it recurs, raise a Microsoft support ticket with that ID—engineering can trace it.
  4. For Direct Lake models, consider enabling DirectQuery fallback temporarily to ride out transient metadata reads while you investigate (you can turn it back off once stable). Community reports align with transient security lookups on Direct Lake. 

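The retry recommended in step 3 can be wrapped in a small helper. A minimal sketch, assuming Python with the requests library and whichever Power BI/Fabric REST call you are repeating; the response header used for tracing is an assumption based on typical Power BI API responses.

# Hedged sketch: generic retry-with-backoff wrapper for a REST call that
# occasionally fails with a transient 5xx (like the 500 above).
import time
import requests

def call_with_retry(method, url, *, headers=None, json=None, attempts=5, base_delay=30):
    """Retry a REST call on 5xx responses, waiting base_delay, 2x, 4x, ... seconds."""
    for attempt in range(attempts):
        resp = requests.request(method, url, headers=headers, json=json)
        if resp.status_code < 500:          # success, or a non-transient client error
            return resp
        # Capture the request/activity id for a support ticket if it keeps failing.
        print(f"Transient {resp.status_code}, RequestId={resp.headers.get('RequestId')}")
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return resp
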
2) Ways to move a semantic model using only Microsoft tools

Below are four Microsoft-native approaches. Pick the one that best matches your environment (Premium/PPU vs Pro, need for CI/CD, cross-workspace/tenant, etc.).

A) Deployment Pipelines (no-code, click-ops CI/CD)

Best for: moving between Dev → Test → Prod workspaces inside the same tenant/capacity with rules to switch connections/parameters.

  1. Ensure your workspaces are on Fabric/Premium capacity.
  2. Create a deployment pipeline and assign the Dev workspace to Stage 1.
  3. Use Deployment rules to swap data source connections/parameters per stage.
  4. Deploy to Test, validate, then deploy to Prod. Reports can be rebound automatically to the target semantic model.

Docs: Get started with deployment pipelines. 
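
If you later want to script this promotion, the same deployment can be kicked off through the Pipelines REST API. A minimal sketch, assuming a Python/requests client; the pipeline ID, stage order, and token are placeholders, and you should verify the deployment options for your scenario.

# Hedged sketch: trigger a "deploy all" from one pipeline stage to the next
# via the Power BI REST API. Pipeline ID, stage order, and token are placeholders.
import requests

TOKEN = "<access-token>"
PIPELINE_ID = "<deployment-pipeline-id>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sourceStageOrder": 0,               # 0 = Dev -> Test, 1 = Test -> Prod
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # overwrite items that already exist
        },
    },
)
resp.raise_for_status()
print(resp.json())                           # deployment operation you can poll for completion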

B) Git integration + PBIP (TMDL) — reproducible & reviewable

Best for: source control and repeatable promotion; works with Azure DevOps or GitHub (both Microsoft). Uses the Power BI Project (.pbip) format with TMDL for the semantic model.

  1. In Desktop, enable Developer Mode and PBIP, then Save as PBIP. This writes .Report/ and .SemanticModel/ folders (TMDL). 
  2. Push to Azure DevOps or GitHub.
  3. In Fabric, connect your target workspace to that repo/branch and Update workspace from Git to materialize the semantic model in the new workspace, then refresh.

Why this is robust: it moves metadata (model + report) as text, supports code review/PRs, and plays nicely with pipelines. 
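
For completeness, the "Update workspace from Git" step can also be scripted through the Fabric REST API's Git operations. The sketch below is an assumption-heavy outline: the endpoints are the documented Git status/update operations, but the exact request-body field names should be verified against the current Fabric REST API reference; workspace ID and token are placeholders.

# Hedged sketch: script "update workspace from Git" via the Fabric REST API.
# Body field names are assumptions to verify against the API reference.
import requests

FABRIC = "https://api.fabric.microsoft.com/v1"
HEADERS = {"Authorization": "Bearer <access-token>"}
WORKSPACE_ID = "<target-workspace-id>"

# 1) Read the Git status to learn the workspace head and remote commit.
status = requests.get(f"{FABRIC}/workspaces/{WORKSPACE_ID}/git/status", headers=HEADERS)
status.raise_for_status()
state = status.json()

# 2) Ask the workspace to update itself from the connected branch.
update = requests.post(
    f"{FABRIC}/workspaces/{WORKSPACE_ID}/git/updateFromGit",
    headers=HEADERS,
    json={
        "workspaceHead": state.get("workspaceHead"),
        "remoteCommitHash": state.get("remoteCommitHash"),
        # Conflict handling shown here is an assumption; adjust per the API reference.
        "conflictResolution": {
            "conflictResolutionType": "Workspace",
            "conflictResolutionPolicy": "PreferRemote",
        },
    },
)
update.raise_for_status()
print(update.status_code)   # usually a long-running operation you can poll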

C) XMLA backup/restore (.abf) with SSMS (Premium/PPU required)

Best for: exact copy of a semantic model between workspaces/capacities in the same tenant, including large models.

  1. Enable XMLA read-write on the capacity and configure tenant/workspace-level Backup storage (ADLS Gen2). 
  2. In SSMS, connect to the workspace’s XMLA endpoint, right-click the database (semantic model) → Back Up… to produce an .abf in your storage account. 
  3. Connect to the target workspace’s XMLA endpoint and Restore… from that .abf. You can also script the restore via TMSL. 

TMSL restore example (run in SSMS XMLA query window against the target workspace):

{
  "restore": {
    "database": "SalesModel-Prod",
    "file": "https://<storage>.dfs.core.windows.net/power-bi-backup/SalesModel.abf",
    "allowOverwrite": true,
    "readWriteMode": "readWrite"
  }
}

D) REST APIs (automation) — rebind reports and orchestrate

Best for: scripting promotions where models already exist, or when reports must be pointed at a model in another workspace.

  • Rebind a report to a semantic model in a different workspace (creates a shared dataset reference if needed); see the sketch after this section.
  • Clone a report into the target workspace and bind it to the target semantic model ID. 

Note: There isn’t a single REST call that “clones a dataset” end-to-end; for creating new models via automation, combine PBIP/Git or XMLA (Create/Restore). The enhanced Refresh API can manage table/partition refreshes post-move. 
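
A minimal sketch of the Rebind and Clone calls mentioned above, assuming a Python/requests client; the workspace, report, and model IDs, the token, and the report name are placeholders.

# Hedged sketch: point an existing report at a semantic model in another
# workspace (Rebind), or clone it into the target workspace. IDs are placeholders.
import requests

BASE = "https://api.powerbi.com/v1.0/myorg"
HEADERS = {"Authorization": "Bearer <access-token>"}

SOURCE_WORKSPACE = "<source-workspace-id>"
REPORT_ID = "<report-id>"
TARGET_WORKSPACE = "<target-workspace-id>"
TARGET_MODEL_ID = "<target-semantic-model-id>"

# Rebind: the report stays where it is but now queries the target model.
requests.post(
    f"{BASE}/groups/{SOURCE_WORKSPACE}/reports/{REPORT_ID}/Rebind",
    headers=HEADERS,
    json={"datasetId": TARGET_MODEL_ID},
).raise_for_status()

# Clone: copy the report into the target workspace, bound to the target model.
requests.post(
    f"{BASE}/groups/{SOURCE_WORKSPACE}/reports/{REPORT_ID}/Clone",
    headers=HEADERS,
    json={
        "name": "Sales Report (copy)",
        "targetWorkspaceId": TARGET_WORKSPACE,
        "targetModelId": TARGET_MODEL_ID,
    },
).raise_for_status()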


Recommended path (quick decision guide)

  • Standard Dev–Test–Prod in same tenant → Deployment Pipelines (simple) or Git+Pipelines (governed). 
  • Exact copy / large models → XMLA Backup/Restore (.abf). 
  • Just need reports to point at a model in another workspace → REST Rebind Report

Quick checks to stabilize your current environment

  • Verify OneLake item permissions & Build permission on the semantic model for the caller.
  • Confirm workspace is on the intended capacity/region; check capacity health.
  • Retry with the RootActivityId handy; if frequent, open a Microsoft ticket.
  • For promotion: pick one method above and perform a full refresh in the target after move.

 

✔️ If my message helped solve your issue, please mark it as Resolved!

👍 If it was helpful, consider giving it a Kudos!

Thank you very much. Let me follow those steps and try!

OK, so the issue is that the region is not the same for the second capacity.

Next question: how do I build a semantic model from scratch in a different capacity with a different region, rather than copying it?

Best,
Jacek

A) PBIP + Git (most robust & repeatable) Recommended

  • In Power BI Desktop, enable Developer Mode and Save as PBIP (this writes your model and report to folders using TMDL).
  • Push the PBIP project to Azure DevOps or GitHub.
  • In the target workspace (created on the new capacity/region), connect the workspace to your repo/branch and Update from Git to materialize a new semantic model and report.
  • Adjust parameters/deployment rules for connections, then run a refresh.

B) Publish Directly From Desktop (fastest)

  • Create the new workspace on the desired capacity/region.
  • Open your PBIX (or build anew), point data sources to target-region Fabric items (Lakehouse/Warehouse).
  • Publish to the target workspace. This creates a fresh semantic model there.

C) Author via XMLA/TMSL in SSMS (Premium/PPU)

  • Ensure the target capacity has XMLA read-write enabled.
  • Use SSMS to connect to the workspace’s XMLA endpoint and run TMSL to create the model, then process/refresh.

Region & Capacity Prerequisites (avoid permission hiccups)

  • Keep data and compute in the same region, especially for Direct Lake. Create the Lakehouse/Warehouse in the target region (or use region-local shortcuts).
  • If you must go cross-region, expect extra latency and more frequent transient permission lookups. Ensure Readers on the underlying OneLake items and Build on the semantic model for the caller.
  • Re-create credentials/gateways in the target workspace; credentials do not move with metadata.

Minimal, Concrete Checklists

Option A — PBIP + Git

  • Power BI Desktop → enable PBIP → Save as PBIP.
  • Commit to Azure DevOps or GitHub.
  • Target workspace → Connect to Git → select repo/branch → Update from Git.
  • Set parameters (e.g., Lakehouse/Warehouse names) → Refresh.

Option B — Publish From Desktop

  • Create target workspace on the new capacity/region.
  • Point all connections to target-region Fabric items.
  • Publish PBIX to the target workspace → configure RLS/OLS and permissions → Refresh.

Option C — XMLA/TMSL (Premium/PPU)

  • Enable XMLA read-write on the capacity.
  • SSMS → connect to the target workspace XMLA endpoint.
  • Run TMSL to createOrReplace the model, then process or trigger a refresh.

Example TMSL Skeleton (XMLA)

Run in SSMS → XMLA query window against the target workspace. Adjust names, data sources, and partitions for your environment.

                { "createOrReplace": { "object": { "database": "SalesModel_Target" }, "database": { "name": "SalesModel_Target", "compatibilityLevel": 1600, "model": { "culture": "en-US", "dataSources": [ { "name": "Lakehouse_DS", "type": "structured", "connectionDetails": { "protocol": "tds", "address": { "server": "<your-warehouse-or-lakehouse-endpoint>", "database": "<db-name>" } }, "credential": { "AuthenticationKind": "OAuth2" } } ], "tables": [ { "name": "DimDate", "partitions": [ { "name": "DimDate", "source": { "type": "m", "expression": "let Source = ... in Source" } } ] }, { "name": "FactSales", "partitions": [ { "name": "FactSales_Full", "source": { "type": "m", "expression": "let Source = ... in Source" } } ] } ], "roles": [ { "name": "Readers", "modelPermission": "read", "members": [ { "memberName": "user@contoso.com" } ] } ] } } } }

After creation, issue a refresh (full or by table/partition) or trigger a dataset refresh from the Service.
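
If you prefer to trigger that refresh from a script, here is a minimal sketch using the datasets refresh endpoint, assuming a Python/requests client; supplying a body invokes the enhanced refresh, which can target specific tables such as FactSales from the skeleton above. IDs and the token are placeholders.

# Hedged sketch: trigger a refresh of the new semantic model via the
# Power BI REST API. Passing a body selects the enhanced refresh, which
# lets you scope the refresh to specific tables/partitions. IDs are placeholders.
import requests

BASE = "https://api.powerbi.com/v1.0/myorg"
HEADERS = {"Authorization": "Bearer <access-token>"}
WORKSPACE_ID = "<target-workspace-id>"
DATASET_ID = "<semantic-model-id>"

resp = requests.post(
    f"{BASE}/groups/{WORKSPACE_ID}/datasets/{DATASET_ID}/refreshes",
    headers=HEADERS,
    json={
        "type": "full",                         # or "clearValues", "calculate", ...
        "commitMode": "transactional",
        "objects": [{"table": "FactSales"}],    # omit "objects" to refresh everything
    },
)
resp.raise_for_status()
# 202 Accepted; poll GET .../refreshes to watch progress.
print(resp.status_code)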

After You Build

  • Rebind reports if any existing reports should now target the new semantic model (Power BI REST rebind operations help automate this).
  • Consider adding the workspaces to Deployment Pipelines later for governed Dev → Test → Prod with connection rules.
  • Validate security: OneLake item permissions (Reader at source), workspace role (Viewer+), and Build permission on the semantic model for all callers.

Tip for your case: Since the previous error was driven by cross-region permission resolution, prefer PBIP (Option A) or a clean Desktop publish (Option B) while ensuring your Lakehouse/Warehouse (and any Shortcuts you keep) live in the same region as the new capacity. Then perform a full refresh. This avoids the transient OneLake security lookups that led to the earlier failures.

Hi,

I am using Direct Lake over OneLake, so I cannot publish anything; it is just a remote semantic model.

And I am not using Git integration. Also, how would this switch the underlying capacities?

What else can I do?
Best,
Jacek
