Hi Everyone,
I am trying to find the right data for compute resources (for example, virtual machine allocated cores versus used cores across all subscriptions, plus other allocation and usage details).
I need to use Kusto Explorer / ADX to analyze this data; I cannot run the queries in the Azure portal. The KQL returns data in JSON format. The challenge is that ARG, Kusto, and KQL are all new to me. So how do I start identifying the allocated and used infrastructure compute data and write queries that could be used in reporting tools?
Hi @Sania-F,
If I understand correctly, you’re trying to report on "allocated vs used cores" across subscriptions with KQL, Kusto Explorer/ADX, and Azure Resource Graph (ARG). The key is that this spans two different data planes:
- Allocation (inventory): which VMs exist and what size they are. This comes from Azure Resource Graph.
- Usage (utilization): how much CPU those VMs actually consume. This comes from Azure Monitor Logs (Log Analytics), which you can also query with Kusto Explorer/ADX.
Below is a practical starter path with copy-paste-able (though untested) queries.
Inventory (allocated vCPUs) via Azure Resource Graph
Run an ARG query across all subscriptions to list VMs and their sizes. Export to JSON/CSV.
# Azure CLI (installs the ARG extension on first use)
# For large estates you may need --first / --skip-token to page through results.
az graph query -q "
resources
| where type =~ 'microsoft.compute/virtualmachines'
| project subscriptionId, resourceGroup, name, location,
    vmSize=tostring(properties.hardwareProfile.vmSize),
    powerState=tostring(properties.extended.instanceView.powerState.code)" \
  --output json > vm_inventory.json

ARG won’t give you a vCPU count directly for each vmSize. Create a small mapping table of the VM sizes you actually use (e.g., D2s_v5 = 2 vCPUs). You can maintain this mapping from the official VM sizes pages and the constrained-vCPU SKU docs (VM sizes overview, constrained vCPUs).
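Since you said you’ll be analyzing this in Kusto Explorer / ADX, one option is to land the exported JSON in blob storage and ingest it into your own ADX database. A minimal sketch, assuming a hypothetical VmInventory table and your own storage URL/SAS token; note that az graph query wraps the rows in a "data" array, so either extract that array first (for example with --query data) or add an explicit JSON ingestion mapping:

// Create a destination table matching the projected ARG columns (table name is illustrative).
.create table VmInventory (subscriptionId:string, resourceGroup:string, name:string, location:string, vmSize:string, powerState:string)

// One-off ingestion from blob storage; replace the URL and SAS token with your own.
.ingest into table VmInventory (
    h@"https://<yourstorageaccount>.blob.core.windows.net/exports/vm_inventory.json?<SAS-token>"
) with (format="multijson")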
Usage (CPU actually used) via Azure Monitor Logs
Enable VM Insights (or a data collection rule) so guest metrics flow into your Log Analytics workspace (Perf/InsightsMetrics tables). Then query CPU utilization and estimate “vCPU used” as avg CPU% * allocated vCPUs.
// Log Analytics or ADX (once the logs are there)
// Average CPU% per computer over 5-minute bins
InsightsMetrics
| where TimeGenerated > ago(1d)
| where Origin == "vm.azm.ms" and Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize CPU_Pct = avg(Val) by Computer, bin(TimeGenerated, 5m)
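If some of your machines still report to the legacy Perf table rather than InsightsMetrics (both are mentioned above), a roughly equivalent sketch of the same 5-minute CPU% query is:

// Legacy Perf-table equivalent of the query above
Perf
| where TimeGenerated > ago(1d)
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
| summarize CPU_Pct = avg(CounterValue) by Computer, bin(TimeGenerated, 5m)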
Join allocation + usage
Load your ARG inventory (with vmSize->vCPU mapping) as a lookup and combine with CPU%.
// Example: bring allocated vCPUs via a small inline mapping (replace with your real list)
let VmSizeToVcpu = datatable(vmSize:string, vCPUs:int) [
    "Standard_D2s_v5", 2,
    "Standard_D4s_v5", 4,
    "Standard_B2ms", 2
    // add the sizes you use
];
let VmInventory = externaldata(subscriptionId:string, resourceGroup:string, name:string, location:string, vmSize:string, powerState:string)
    [@"https://<your-storage-or-adx-ingestion-endpoint>/vm_inventory.json"]
    with(format="multijson")
    | project Computer=name, vmSize
    | join kind=leftouter VmSizeToVcpu on vmSize;
InsightsMetrics
| where TimeGenerated > ago(1d)
| where Origin == "vm.azm.ms" and Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize CPU_Pct=avg(Val) by Computer, bin(TimeGenerated, 5m)
| join kind=leftouter VmInventory on Computer
| extend vCPU_Allocated = coalesce(vCPUs, 1)
| extend vCPU_Used_Est = round((CPU_Pct/100.0) * vCPU_Allocated, 2)
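If you want the per-subscription totals the question asks for, you can wrap the joined query above in a let statement (the name VmCpuJoined below is hypothetical) and roll it up; this sketch also assumes you carry subscriptionId through the VmInventory projection:

// Rollup sketch: allocated vs estimated-used vCPUs per subscription over the last day.
// VmCpuJoined = the joined query above wrapped in a let statement (hypothetical name).
VmCpuJoined
| summarize vCPU_Allocated = sum(vCPU_Allocated),
            vCPU_Used_Est  = sum(vCPU_Used_Est)
          by subscriptionId, TimeGenerated        // one row per subscription per 5-minute bin
| summarize Avg_vCPU_Allocated = avg(vCPU_Allocated),  // averaged across bins; only VMs that
            Avg_vCPU_Used_Est  = avg(vCPU_Used_Est)    // reported metrics in a bin are counted
          by subscriptionId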
Run it where you prefer
You can run the log queries in the Log Analytics query editor, or connect Kusto Explorer / the Azure Data Explorer web UI to your Log Analytics workspace through its ADX-compatible endpoint and work there; a reporting tool such as Power BI can then consume the same KQL via the Azure Data Explorer (Kusto) connector or a Log Analytics export. That’s enough to produce a tenant-wide table of Subscription, VM, vmSize, vCPU_Allocated, CPU_Pct, vCPU_Used_Est for your reporting tool.
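One alternative the steps above don’t use: Log Analytics also supports a cross-service query into Azure Resource Graph, which can remove the CLI export step entirely where it is available. A minimal sketch, assuming the arg("") cross-service syntax is enabled for your workspace:

// Sketch: pull the VM inventory straight from Azure Resource Graph inside a Log Analytics query,
// then join it with the InsightsMetrics query on Computer instead of using externaldata.
arg("").Resources
| where type =~ "microsoft.compute/virtualmachines"
| project subscriptionId, Computer = name, vmSize = tostring(properties.hardwareProfile.vmSize)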
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.