msturzen
Frequent Visitor

SQL Operator with multiple destinations

Hi all, 
a few weeks back I was happy to read that the SQL Operator for Eventstream is GA AND supports multiple destinations. 

I tried to add a second destination to an existing SQL Operator. That does not work (for me). 
I keep getting the error: "Failed to create data source: <output-alias> due to the following error: Long running operation failed with status 'Failed'. Additional Info:'Stream Analytics job has validation errors: The output <output-alias> used in the query was not defined.'"
The error occurs after publishing, when activating the source (works fine) and the destinations (fail).

I have run out of ideas about what it could be. I tried: 
* Creating a new Eventstream
* Using different kinds of destinations (EH, LH)
* Naming variations: with and without "-", "_", camelCase, lowercase
* Changing the order of when, where, and how the destinations are defined (in the SQL editor first, in the Eventstream "main" view first, ...)
* Ending each "SELECT INTO ..." with and without ";"

There are no problems shown in the SQL editor, and "Test query" didn't complain either.

One destination works perfectly fine; two don't work at all (for me).
What I observed: if I change the order of the "SELECT INTO ..." statements, the other output alias is the one failing. ==> The error always comes from the first "SELECT INTO ...". (It probably tries to connect to this one first; the second is probably not even tried after the first fails.) 
Maybe important: I use several CTEs to prepare the events before the final SELECTs. The two destinations do not use the same CTE as the basis for their tables.
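To illustrate the shape of the query (stream, CTE, and output names here are made up, not my production ones): one WITH block with shared CTEs, followed by two SELECT ... INTO statements targeting different output aliases:

```sql
-- Illustrative sketch only; all names are hypothetical.
WITH Prepared AS (
    SELECT * FROM [input-stream]
),
Filtered AS (
    SELECT * FROM Prepared WHERE deviceId IS NOT NULL
)

-- First destination: this is the one the error always points at.
SELECT * INTO [output-one] FROM Filtered;

-- Second destination, based on a different CTE.
SELECT * INTO [output-two] FROM Prepared;
```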

The fallback might be two SQL Operators, duplicating some code and setup. I'm not looking forward to that, especially since (as I read) this works in ASA. 

Any help, hints, fixes, or education on what I did wrong is highly appreciated. 

thx
Martin
 


4 REPLIES
v-agajavelly
Community Support

Hi @msturzen ,

Looking at your query, one thing stands out: inside MappedEvents you reference FilteredEvents, but the CTE is defined as FilterEvents (no "d"). That typo could be causing the backend validation to fail and surface as an output-binding error. It's worth fixing that first and republishing.
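Sketched from the snippet above (the `( ... )` bodies are elided), the mismatch is:

```sql
-- As posted: the CTE is defined without the 'd' ...
WITH FilterEvents AS ( ... )

-- ... but referenced with it:
FROM FilteredEvents AS filtered

-- Fix: use one spelling consistently, e.g.
WITH FilteredEvents AS ( ... )
```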

If it still fails after that, it may be a tenant rollout issue. You can check by raising a support ticket via this link:
Create a Fabric and Power BI Support Ticket - Power BI | Microsoft Learn. Support can confirm whether the multi-destination SQL Operator is fully enabled in your tenant.

Thanks,
Akhil.

 

Hi Akhil, 
good find! I double-checked against the prod code. --> It's a typo that must have sneaked in when I prepared the snippet. 
Will try the support ticket next....

Best

Martin
 

msturzen
Frequent Visitor

Hi Akhil, 
thanks for your reply and the hints to double-check. 

  • All output aliases used in the query must be explicitly added to the Outputs section of the SQL editor: I can verify this is the case.
  • CTEs should be defined cleanly (ideally within a single WITH block): Several CTEs, but only one WITH block.
  • Each SELECT INTO works independently from the other: Check.
  • Keep naming simple (no special characters): Check.

WRT "Current limitation or bug": bugs happen, of course; I hope it gets fixed soon. As for "current limitation": several Fabric Update Blog entries hint that this limitation has been removed since the preview. Maybe the feature has not been rolled out to my client's tenant yet? How can I check this?


Best
Martin

Stripped down SQL Operator: 

(edit 2026-05-04: Fixed a typo - renamed initial "WITH FilterEvents" to "WITH FilteredEvents" - as used in prod)

WITH FilteredEvents AS (
    -- Step 1: 
    -- Filter incoming events (json)
    -- convert unix-ms timestamp to event timestamp + use this event timestamp
    SELECT 
        *,
        DATEADD(MILLISECOND, CAST(metadata.timestamp AS BIGINT), '1970-01-01') AS [eventtime]
    FROM [machine-data-stream] TIMESTAMP BY DATEADD(MILLISECOND, CAST(metadata.timestamp AS BIGINT), '1970-01-01')
    WHERE
        -- some simple matches
), 
MappedEvents AS (
    -- Step 2: Extract some nested fields to "top-level"
    SELECT  
        filtered.eventtime                                              AS [eventtime],
        filtered.metadata.cell                                          AS [cellId],
        -- some more of this kind ...
        measureKV.PropertyName                                          AS [measure_name],
        TRY_CAST(GetRecordPropertyValue(
            GetRecordPropertyValue(filtered, measureKV.PropertyName),
            'value') AS NVARCHAR(MAX))                                  AS [measure_value_text], 
        filtered.metadata                                               AS [metadata_raw]
    FROM FilteredEvents AS filtered
    CROSS APPLY
        GetRecordProperties(filtered) AS measureKV
    WHERE
        -- Exclude ALL known structural top-level keys here: This is the ONE PLACE to update if new structural keys are added
        measureKV.PropertyName NOT IN ('metadata', 'topic', 'PartitionId', 'EventProcessedUtcTime', 'EventEnqueuedUtcTime', 'eventtime')
),
ValueMappedEvents AS (
    SELECT
        *,
        TRY_CAST(measure_value_text AS FLOAT)                           AS [measure_value]
    FROM MappedEvents
),
AggregatedEvents AS (
    -- Step 3: Aggregate over 60s Tumbling Windows
    SELECT
        System.Timestamp()                           AS [window_end],
        cellId                                       AS [cellId],
        -- some more fields ...
        measure_name                                 AS [measure],                 
        MIN(measure_value)                           AS [min_value],
        AVG(measure_value)                           AS [avg_value],
        MAX(measure_value)                           AS [max_value],
        COUNT(*)                                     AS [sample_count],
        TopOne() OVER (ORDER BY eventtime ASC)      AS [earliest_record],
        TopOne() OVER (ORDER BY eventtime DESC)     AS [latest_record]
    FROM ValueMappedEvents
    GROUP BY
        cellId, 
        -- more grouping ...
        measure_name,
        TumblingWindow(minute, 1)
)

-- DESTINATION #1: Output to aggregated EH table
SELECT
    -- tons of fields
INTO [aggregatedevents]
FROM AggregatedEvents AS aggregated


-- DESTINATION #2: Output of subset of kpis to LH table
SELECT
    System.Timestamp()                              AS [event_timestamp],
    --tons of fields
INTO [operationalevents]
FROM ValueMappedEvents AS mapped
WHERE mapped.measure_name IN ('a', 'b', 'c')

 

v-agajavelly
Community Support

Hi @msturzen ,

Your observation that the first SELECT INTO always fails is a really helpful clue.

A couple of things you could double-check:

  • Make sure all output aliases used in the query are explicitly added in the Outputs section of the SQL editor before publishing, and that the names match exactly.
  • If you're using CTEs, ensure they are defined cleanly (preferably within a single WITH block) and that each SELECT INTO works independently.
  • Keep naming simple (avoid special characters) to rule out edge-case issues.
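For instance, a clean minimal layout that satisfies all three checks could look like this (input and output names are illustrative; each alias must also exist in the Outputs section):

```sql
-- Minimal sketch: one WITH block, two independent SELECT INTO statements.
WITH Events AS (
    SELECT * FROM [my-input]
)

-- Each of these should also validate on its own.
SELECT * INTO [outputone] FROM Events;

SELECT * INTO [outputtwo] FROM Events;
```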

Since one destination works but multiple don't, and the failure always points to the first output, this could also be a current limitation or bug in the SQL Operator.

If it still fails, sharing a minimal repro or raising a support ticket would help confirm. As a workaround, using multiple SQL Operators is a practical option for now.

Thanks,
Akhil.
