Hello,
I currently have an F4 capacity where the only consumption at the moment is the VNET. Our primary use case for Fabric is the Power BI workload. We have about 5-7 reports currently.
Yesterday, we changed the storage modes from DirectQuery to Import.
The semantic model sizes are all fairly small, less than 50 MB.
The VNET currently has 5 members.
I was expecting to see a fall in the CU utilization since the change, but so far it looks unchanged.
At this point, I am not sure why the VNET consumption is eating up the entirety of the F4 capacity.
Any help would be greatly appreciated.
Hi @ssrinath,
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the answer faster.
If we don’t hear back, we’ll go ahead and close this thread. For any further discussions or questions, please start a new thread in the Microsoft Fabric Community Forum; we’ll be happy to assist.
@Rufyda & @wardy912 Thanks for your prompt response here.
Thank you for being part of the Microsoft Fabric Community.
Hi,
Thanks for sharing the details.
Switching from DirectQuery to Import mode usually reduces query processing load, but it might not immediately affect overall Capacity Unit (CU) utilization because:
VNET overhead is mostly fixed: The Virtual Network infrastructure consumes a baseline amount of capacity that does not fluctuate significantly with your report sizes or query modes.
CU utilization includes all components: Capacity utilization reflects all running Fabric services, including VNET, background tasks, and Power BI workloads.
Small semantic models mean query load may already be low: With models under 50 MB and only 5-7 reports, the query-related CU consumption was likely low even before the change, so any visible reduction is minimal.
Metrics may have delay or smoothing: Sometimes, CU metrics don’t immediately reflect workload changes, especially if cache warm-up or background refreshes continue.
Recommendations:
Use the Fabric capacity monitoring tools to analyze detailed consumption per workload and component.
Verify VNET configuration and traffic to rule out any unexpected network overhead.
If VNET consumption dominates and impacts your capacity needs, consider discussing options with Microsoft support or evaluating capacity sizing/scaling.
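As part of the first recommendation, it can help to pull refresh history per dataset and see how long refreshes actually run. A minimal sketch: in practice the payload would come from the Power BI REST API's refresh-history endpoint (authentication omitted); the `summarize_refreshes` helper and the sample data below are illustrative, not part of any official SDK.

```python
from datetime import datetime

def summarize_refreshes(refreshes):
    """Summarize completed refresh durations (in minutes) from a
    Power BI 'Get Refresh History'-style payload. Illustrative helper."""
    durations = []
    for r in refreshes:
        if r.get("status") != "Completed":
            continue  # skip failed or in-progress refreshes
        start = datetime.fromisoformat(r["startTime"].replace("Z", "+00:00"))
        end = datetime.fromisoformat(r["endTime"].replace("Z", "+00:00"))
        durations.append((end - start).total_seconds() / 60)
    return {
        "count": len(durations),
        "avg_minutes": round(sum(durations) / len(durations), 1) if durations else 0.0,
    }

# In practice the payload comes from the REST API, e.g.:
#   GET https://api.powerbi.com/v1.0/myorg/datasets/{datasetId}/refreshes
# (requires an Azure AD bearer token). Sample payload for illustration only:
sample = [
    {"status": "Completed", "startTime": "2025-11-01T08:00:00Z", "endTime": "2025-11-01T08:06:00Z"},
    {"status": "Completed", "startTime": "2025-11-01T09:00:00Z", "endTime": "2025-11-01T09:08:00Z"},
    {"status": "Failed",    "startTime": "2025-11-01T10:00:00Z", "endTime": "2025-11-01T10:01:00Z"},
]
print(summarize_refreshes(sample))  # {'count': 2, 'avg_minutes': 7.0}
```

Short, infrequent refreshes that still show flat CU usage point toward a fixed overhead (such as the VNET gateway) rather than the Power BI workload itself.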
If this answered your question, please consider clicking Accept Answer and Yes if you found it helpful.
If you have any other questions or need further assistance, feel free to let us know — we’re here to help.
As we haven’t heard back from you, we wanted to kindly follow up to check whether there has been any progress on the above-mentioned issue. Let me know if you still need any further help here.
Thanks,
Prashanth Are
MS Fabric community support
I'm hoping to test this out later this week to see if there is any change to the capacity utilization.
Hi @ssrinath
We have the same issue! VNET gateways continue to use CU the whole time it is up, so depending on how often your datasets refresh, it could be on indefinitely.
Virtual network data gateways capacity consumption | Microsoft Learn
Firstly, minimise the data refresh schedules, or coordinate them so they give the VNET gateway a chance to release CU (a minimum of 30 minutes of inactivity).
Next, make sure surge protection has been configured in the admin portal to prevent capacity limits being reached.
Then, check if you really need the VNET. I can only speak for our use case. We were using it to access storage accounts. We have swapped that out for shortcuts with a private connection.
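The first step above, spacing refreshes so the gateway gets at least 30 minutes of inactivity, can be sanity-checked with a quick sketch. The `idle_gaps` helper and the schedule times are illustrative assumptions; the 30-minute threshold comes from the gateway inactivity window described above.

```python
def idle_gaps(refresh_starts_min, refresh_duration_min):
    """Given scheduled refresh start times (minutes from midnight) and a
    typical refresh duration, return the idle gap before each next refresh.
    Illustrative helper, not part of any Fabric API."""
    starts = sorted(refresh_starts_min)
    return [
        nxt - (cur + refresh_duration_min)
        for cur, nxt in zip(starts, starts[1:])
    ]

# Hourly refreshes (08:00, 09:00, 10:00) that each take ~10 minutes
# leave 50-minute idle gaps, so the gateway can release CU between them:
gaps = idle_gaps([480, 540, 600], 10)
print(gaps)                         # [50, 50]
print(all(g >= 30 for g in gaps))   # True
```

If several models refresh on staggered schedules, the gap that matters is between the end of one refresh (on any model going through the gateway) and the start of the next.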
Hope this helps, please give a thumbs up and mark as solved if it does. Thanks
Thanks so much for your response!
We have about 2 or 3 reports that need to be refreshed hourly. All semantic model refreshes are under 10 minutes.
The issue for us is that our source is a data warehouse in Databricks, so we need the VNET unfortunately.
I'll definitely check out Surge Protection though!