Sania-F
Resolver I
Azure Resource Graph and KQL

Hi Everyone,

 

I am trying to find the right data for compute resources (for example, virtual machine allocated cores versus used cores across all subscriptions, plus various other allocation and usage figures).

I need to use Kusto Explorer/ADX to analyze this data; I cannot run these queries in the Azure portal. The KQL returns data in JSON format. The challenge is that ARG, Kusto, and KQL are all new to me. How do I start identifying allocated and used compute data for the infrastructure, and how do I write queries that could be used in reporting tools?

1 ACCEPTED SOLUTION
tayloramy
Community Champion

Hi @Sania-F,

 

If I understand correctly, you’re trying to report on "allocated vs used cores" across subscriptions with KQL, Kusto Explorer/ADX, and Azure Resource Graph (ARG). The key is that this spans two different data planes:

  • Allocation (inventory) = ARM metadata (what each VM is sized for). Use Azure Resource Graph.
  • Usage (telemetry over time) = Azure Monitor metrics/logs (what the VM actually consumed). Query Log Analytics / VM Insights with KQL (and optionally land it in ADX for analytics at scale).

Below is a practical starter path with copy-paste-able (though untested) queries.

 

  1. Inventory (allocated vCPUs) via Azure Resource Graph
    Run an ARG query across all subscriptions to list VMs and their sizes. Export to JSON/CSV.

    # Azure CLI (installs the ARG extension on first use)
    az graph query -q "
    resources
    | where type =~ 'microsoft.compute/virtualmachines'
    | project subscriptionId, resourceGroup, name, location,
              vmSize=tostring(properties.hardwareProfile.vmSize),
              powerState=tostring(properties.extended.instanceView.powerState.code)" \
      --output json > vm_inventory.json

    ARG won’t give you a vCPU count directly for each vmSize. Create a small mapping table of the VM sizes you actually use (e.g., D2s_v5 = 2 vCPUs). You can maintain this mapping from the official VM sizes pages and constrained-vCPU SKUs docs (VM sizes overview, constrained vCPUs).
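
    If you'd rather not maintain that mapping by hand, you can generate it from the CLI (a sketch, assuming the Azure CLI and jq are installed; az vm list-sizes reports numberOfCores per size for a region):

    # Sketch: emit datatable-style rows ("<size>", <cores>,) for one region.
    # Adjust --location as needed; available sizes vary by region.
    az vm list-sizes --location eastus --output json \
      | jq -r '.[] | "\"\(.name)\", \(.numberOfCores),"'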

  2. Usage (CPU actually used) via Azure Monitor Logs
    Enable VM Insights / data collection to your Log Analytics workspace (Perf/InsightsMetrics tables). Then query CPU utilization and estimate “vCPU used” as avg CPU% * allocated vCPUs.

    // Log Analytics or ADX (once logs are there)
    // Average CPU% per computer over 5-minute bins
    InsightsMetrics
    | where TimeGenerated > ago(1d)
    | where Origin == "vm.azm.ms" and Namespace == "Processor" and Name == "UtilizationPercentage"
    | summarize CPU_Pct = avg(Val) by Computer, bin(TimeGenerated, 5m)
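
    If your workspace uses the legacy Perf table rather than InsightsMetrics, an equivalent query is below (a sketch, assuming the standard Processor / % Processor Time counters):

    // Legacy Perf table equivalent (sketch)
    Perf
    | where TimeGenerated > ago(1d)
    | where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
    | summarize CPU_Pct = avg(CounterValue) by Computer, bin(TimeGenerated, 5m)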
  3. Join allocation + usage
    Load your ARG inventory (with vmSize->vCPU mapping) as a lookup and combine with CPU%.

    // Example: bring allocated vCPUs via a small inline mapping (replace with your real list)
    let VmSizeToVcpu = datatable(vmSize:string, vCPUs:int)
    [
      "Standard_D2s_v5", 2,
      "Standard_D4s_v5", 4,
      "Standard_B2ms", 2
      // add the sizes you use
    ];
    let VmInventory =
    externaldata(subscriptionId:string, resourceGroup:string, name:string, location:string, vmSize:string, powerState:string)
    [@"https://<your-storage-or-adx-ingestion-endpoint>/vm_inventory.json"]
    with(format="multijson")
    | project Computer=name, vmSize
    | join kind=leftouter VmSizeToVcpu on vmSize;
    InsightsMetrics
    | where TimeGenerated > ago(1d)
    | where Origin == "vm.azm.ms" and Namespace == "Processor" and Name == "UtilizationPercentage"
    | summarize CPU_Pct=avg(Val) by Computer, bin(TimeGenerated, 5m)
    | join kind=leftouter VmInventory on Computer
    | extend vCPU_Allocated = coalesce(vCPUs, 1)
    | extend vCPU_Used_Est = round((CPU_Pct/100.0) * vCPU_Allocated, 2)
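
    If you want subscription-level totals rather than per-VM rows, you can append a rollup to the query above (a sketch; it assumes you also keep subscriptionId in the VmInventory projection):

    // Optional rollup (sketch): total allocated vs estimated used cores per hour.
    | summarize vCPU_Allocated = take_any(vCPU_Allocated), vCPU_Used_Est = avg(vCPU_Used_Est)
        by subscriptionId, Computer, bin(TimeGenerated, 1h)
    | summarize TotalAllocated = sum(vCPU_Allocated), TotalUsedEst = sum(vCPU_Used_Est)
        by subscriptionId, bin(TimeGenerated, 1h)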
  4. Run it where you prefer
    For ad-hoc analysis, run these queries directly in Log Analytics. To work in Kusto Explorer/ADX instead, land the same data in an ADX database (for example, a one-off ingestion of the inventory JSON, as sketched below), run the same KQL there, and point your reporting tool at the results.
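
    // ADX management commands (a sketch; run in Kusto Explorer against your
    // database. Assumes an existing cluster/database and a blob URL readable
    // via SAS; the storage account/container names are placeholders.)
    .create table VmInventory (subscriptionId:string, resourceGroup:string, name:string, location:string, vmSize:string, powerState:string)

    .ingest into table VmInventory (h"https://<your-storage-account>.blob.core.windows.net/<container>/vm_inventory.json?<sas-token>") with (format="multijson")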

That’s enough to produce a tenant-wide table of Subscription, VM, vmSize, vCPU_Allocated, CPU_Pct, vCPU_Used_Est for your reporting tool.

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.

2 REPLIES

  • Thank you for your response; this is the first thing I am going to do in the morning, and I will let you know.
