## Overview of the Notebook
This notebook takes an Excel file with DAX queries, runs them under cold, warm, and hot cache states, and logs the results in a Lakehouse table.
**Key Features:**
- **Model Definitions**
  - The definitions for the models to be tested are stored in a dictionary that specifies each model's storage mode, workspace, and the cache states to test (an illustrative configuration sketch follows this list).
- **Query Input**
  - Queries are pulled from an Excel file in your Lakehouse. Each query needs an ID and at least one column whose name matches the runQueryType set in the model definition (a loading sketch follows this list).
- **Different Cache States**
  - Cold Cache: Clears all caches. For Import and DirectQuery models, this involves pausing the capacity, reassigning workspaces, and clearing the VertiPaq cache. For Direct Lake models, it triggers a clearValues refresh of the dataset followed by a full refresh (a refresh sketch follows this list). For DirectQuery, the cold-cache setup assumes that your data store lives in a Fabric workspace, e.g., a Lakehouse or Warehouse.
  - Warm Cache: Partial caching. For DirectQuery, we set a cold cache first, then run a query to "prime" the model. For Import and Direct Lake, we ensure all columns are framed by running the target query, then clear the VertiPaq cache.
  - Hot Cache: Runs the query twice before measuring the third run, so that columns are framed and all caches are populated.
- **Capacity Pause/Resume**
  - **Warning**: Pausing a capacity interrupts any workloads running on that capacity. Resuming takes time and resources and can affect other workspaces assigned to the same capacity.
  - For cold-cache queries on Import and DirectQuery models, the notebook pauses and resumes capacities to ensure truly cold-cache testing. Configure your workspace capacities in the config cell if using this feature (a hedged pause/resume sketch follows this list).
- **Logging to Lakehouse**
  - Query logs are captured with an Analysis Services trace and stored in your attached Lakehouse, including duration, CPU time, and success/failure status (a trace sketch follows this list).
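As a rough illustration of the model-definition dictionary, here is a minimal sketch; the key names are hypothetical and may not match the notebook's actual config cell:

```python
# Illustrative only: key names are hypothetical, not the notebook's actual schema.
model_definitions = {
    "Sales_Import": {
        "storage_mode": "Import",           # Import | DirectQuery | DirectLake
        "workspace": "PerfTest Workspace",  # workspace hosting the semantic model
        "cache_states": ["cold", "warm", "hot"],
        "runQueryType": "DAX",              # Excel column holding the query text
    },
    "Sales_DirectLake": {
        "storage_mode": "DirectLake",
        "workspace": "PerfTest Workspace",
        "cache_states": ["cold", "hot"],
        "runQueryType": "DAX",
    },
}
```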
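Loading the query workbook from the attached Lakehouse can look like the sketch below; the file name and column names are placeholders, and it assumes the attached Lakehouse is mounted at /lakehouse/default/ as in Fabric notebooks:

```python
import pandas as pd

# Placeholder file and column names; the attached Lakehouse is mounted at
# /lakehouse/default/ in Fabric notebooks. Reading .xlsx requires openpyxl.
queries = pd.read_excel("/lakehouse/default/Files/dax_queries.xlsx")

# Expect one ID column plus one column per query type (matching runQueryType).
for _, row in queries.iterrows():
    print(row["QueryID"], row["DAX"][:80])
```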
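For the Direct Lake cold-cache step, a minimal sketch of the clearValues-then-full refresh using semantic-link (sempy); the dataset and workspace names are placeholders, and depending on your sempy version you may need to poll for refresh completion before querying:

```python
import sempy.fabric as fabric

dataset = "Sales_DirectLake"      # placeholder semantic model name
workspace = "PerfTest Workspace"  # placeholder workspace name

# Drop Direct Lake column data from memory, then reframe the model so the
# measured query starts from a cold state.
fabric.refresh_dataset(dataset, workspace=workspace, refresh_type="clearValues")
fabric.refresh_dataset(dataset, workspace=workspace, refresh_type="full")

# Run the DAX query to be measured.
result = fabric.evaluate_dax(dataset, 'EVALUATE ROW("x", 1)', workspace=workspace)
```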
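A hedged sketch of pausing and resuming a Fabric capacity through the Azure management API; the Microsoft.Fabric/capacities resource-provider path and api-version are assumptions to verify against the current Azure REST reference, and the IDs are placeholders:

```python
import requests
from azure.identity import DefaultAzureCredential

# Assumed ARM endpoint for Fabric capacity suspend/resume; verify the resource
# provider path and api-version before relying on this.
SUB, RG, CAP = "<subscription-id>", "<resource-group>", "<capacity-name>"
BASE = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.Fabric/capacities/{CAP}")
API = "?api-version=2023-11-01"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
HEADERS = {"Authorization": f"Bearer {token}"}

def pause_capacity():
    requests.post(f"{BASE}/suspend{API}", headers=HEADERS).raise_for_status()

def resume_capacity():
    requests.post(f"{BASE}/resume{API}", headers=HEADERS).raise_for_status()
```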
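A sketch of capturing query events with a semantic-link Analysis Services trace and appending them to a Lakehouse table; the trace method names follow the sempy Trace API as I understand it, and the dataset, query, and table names are placeholders:

```python
import time
import sempy.fabric as fabric
from pyspark.sql import SparkSession

dataset, workspace = "Sales_Import", "PerfTest Workspace"  # placeholders

# Standard Analysis Services trace columns to capture per event class.
event_schema = {
    "QueryBegin": ["EventClass", "TextData", "StartTime"],
    "QueryEnd":   ["EventClass", "TextData", "Duration", "CpuTime", "Success"],
}

with fabric.create_trace_connection(dataset, workspace=workspace) as conn:
    with conn.create_trace(event_schema) as trace:
        trace.start()
        fabric.evaluate_dax(dataset, 'EVALUATE ROW("x", 1)', workspace=workspace)
        time.sleep(5)                  # give the trace time to receive QueryEnd
        trace.stop()
        logs = trace.get_trace_logs()  # pandas DataFrame of captured events

# Append the captured events to a Lakehouse table (placeholder table name).
spark = SparkSession.builder.getOrCreate()
spark.createDataFrame(logs).write.mode("append").saveAsTable("dax_query_logs")
```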
https://github.com/microsoft/fabric-toolbox/blob/main/tools/DAXPerformanceTesting/DAXPerformanceTesting.ipynb