Hi,
I'm trying to use the “Copy Data” activity in a Fabric pipeline to execute a Kusto query and write the results to a temporary Kusto table created on-the-fly. However, the pipeline fails with the error: “Table doesn’t exist.”
From my testing, it appears that the “Copy Data” activity uses the .ingest-from-storage command under the hood. I’m wondering if there’s a way to configure the activity so that ingestion to a new table is supported. I haven’t been able to find any documentation on this command, so I’m reaching out to domain experts in this group for guidance.
Context: I’m aiming to switch to managed identity (MI) based authentication for our Fabric pipelines. Unfortunately, the “KQL” activity doesn’t support MI, which means I can’t use commands like .set-or-replace directly within it. That limitation is what led me to explore the “Copy Data” activity.
Thanks in advance for your help!
Hi @Jing1018
I wanted to check if you’ve had a chance to review the information provided. If you have any further questions, please let us know. Has your issue been resolved? If not, please share more details so we can assist you further.
I ended up replacing the original KQL activity with the following sequence:
While this approach isn't fully compliant—since not all activities are workspace identity (WI) based—it’s still an improvement over the previous setup. At least the activities that interact with external databases, namely Lookup and Copy Data, are WI-based.
Hi @Jing1018
Thanks for sharing the update. We really appreciate the detailed insights, as they will help others with similar issues. Your new sequence makes the process much clearer, especially with the structured flow from schema retrieval to data population. Even if it’s not fully WI-based, it’s definitely a step forward in ensuring stronger alignment with best practices. Highlighting the trade-offs between compliance and practicality is also really valuable; it will give others a solid reference point when they’re evaluating similar setups.
If you have any other queries, please feel free to create a new post here in the community. We are always happy to help.
Hi @Jing1018
Apologies for the inconvenience, and thank you for raising these thoughtful follow-up questions.
Your concern regarding the undocumented nature of Web activity support for Managed Identity (MI) is valid. The official Microsoft Fabric documentation currently lists only Copy, Lookup, and GetMetadata as MI-supported activities and does not explicitly mention Web activity. In practice, however, Web activity can be configured to use MI if the authentication settings expose this option, the target endpoint accepts Azure AD tokens, and the Fabric workspace identity has been granted the necessary permissions. For example, the Kusto management endpoint (e.g., https://<cluster>.kusto.windows.net/v1/rest/mgmt) natively accepts Azure AD tokens, so when configured correctly, the Web activity can securely execute commands such as .create table under MI. Because this capability is not yet formally documented, we recommend validating it in a controlled environment before adopting it widely.
Regarding schema flexibility when creating a Kusto table via Web activity, particularly in cases where the incoming data has dynamic or unknown structure, named ingestion mappings are indeed the recommended approach. These mappings define how incoming data fields map to the target table’s columns, enabling ingestion even when the source schema is variable. For instance, if a table is initially created with a generic column (such as a single string or dynamic type), a JSON ingestion mapping can be registered using the .create table ingestion mapping command. This mapping is then referenced during ingestion through the ingestionMappingReference property in the Copy Data activity. This approach ensures that even with a placeholder schema, incoming data can be ingested consistently and extended later as needed.
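To make the mapping approach above concrete, here is a minimal KQL sketch. The table name (RawEvents), column name (Payload), and mapping name (RawEventsMapping) are all hypothetical placeholders; adjust them to your environment.

```kusto
// Hypothetical example: names below are placeholders, not from the original thread.

// 1. Create a landing table with a single dynamic column so any JSON shape fits.
.create table RawEvents (Payload: dynamic)

// 2. Register a named JSON ingestion mapping that routes the entire incoming
//    document into the Payload column ("$" is the JSON root path).
.create table RawEvents ingestion json mapping "RawEventsMapping"
'[{"column": "Payload", "path": "$", "datatype": "dynamic"}]'

// 3. Reference "RawEventsMapping" via the ingestionMappingReference property
//    in the Copy Data activity; fields can then be promoted at query time, e.g.:
// RawEvents | extend UserId = tostring(Payload.userId)
```

Because the whole document lands in one dynamic column, the source schema can vary freely; typed columns can be split out later with an update policy or at query time once the shape stabilizes.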
Regards,
Microsoft Fabric Community Support Team.
Hi @Jing1018
Thank you for reaching out to the Microsoft Fabric Community Forum.
The “Table doesn’t exist” error appears because the Copy Data activity in Fabric pipelines uses Kusto ingestion commands such as .ingest into or .ingest-from-storage, which require the target table to be present beforehand. These commands do not create tables automatically and are mainly intended for prototyping, not production. Additionally, the KQL activity does not currently support Managed Identity (MI), so control commands like .create table or .set-or-replace cannot be executed within it. To address this, the table should be created in advance. A recommended MI-compatible solution is to use a Web (REST) activity in the pipeline, authenticated via MI, to call the Kusto management endpoint and execute table creation commands. Once the table exists, named ingestion mappings can provide schema flexibility during ingestion. The most straightforward MI-based approach is to run a Web activity for table creation, followed by the Copy Data activity for ingestion.
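The recommended two-step flow above (Web activity for table creation, then Copy Data for ingestion) can be sketched as the following Web activity configuration. This is an illustrative fragment, not an official template: the database name and table definition are placeholders, and the URL format and MI authentication behavior should be validated in your own environment, as noted earlier.

```json
{
  "method": "POST",
  "url": "https://<cluster>.kusto.windows.net/v1/rest/mgmt",
  "authentication": "Managed Identity",
  "headers": { "Content-Type": "application/json" },
  "body": {
    "db": "MyDatabase",
    "csl": ".create table RawEvents (Payload: dynamic)"
  }
}
```

Once this call succeeds, the subsequent Copy Data activity can ingest into RawEvents without hitting the “Table doesn’t exist” error.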
I hope this information is helpful. If you have any further questions, please let us know and we can assist you further.
Regards,
Microsoft Fabric Community Support Team.
Hi @v-karpurapud ,
Thank you for getting back to me. I have two follow-up questions:
1. According to the official documentation: Authenticate with Microsoft Fabric workspace identity - Microsoft Fabric | Microsoft Learn, MI authentication is currently supported only in three activities: Copy, Lookup, and GetMetadata. Are you suggesting that Web activity also supports MI authentication, even though it's not explicitly mentioned in the documentation?
2. I assume that the Kusto table created via the Web activity won’t reflect the actual data schema, since we haven’t connected to the real data source yet. You mentioned that named ingestion mappings can help provide schema flexibility. I’m not very familiar with this approach—could you please share more insights on the best way to construct such mappings in this scenario? For example, if the Kusto table has a dummy column of type string, but the incoming data could contain any number of columns with varying types, how should we handle that?
Thanks in advance!