Hello All,
I have been using Fabric for a couple of weeks now. From the perspective of someone who has used Power BI Desktop for many years, the Power BI view in Fabric seems to be missing a lot of features, such as field parameters and calculation groups, when using a Direct Lake semantic model.
Even the most basic feature, creating calculated columns, is also not available in a Direct Lake semantic model. Another thing I noticed is that Direct Lake models cannot be used in Import storage mode, even when creating a local model from them; the local model becomes a DirectQuery model only.
This feels like taking the biggest powers of Power BI Desktop away from developers when using Fabric.
I would like to hear from those of you who are already well into Fabric: what else should I expect to be unavailable in a Direct Lake semantic model that is available in Power BI Import or DirectQuery mode?
Thanks,
Hi @Thejeswar ,
There are a few things about Fabric that need to be taken into account before making this comparison, especially when referring to Direct Lake mode.
Fabric stores data in Parquet files, an open format that allows for better compression and faster querying.
Direct Lake takes advantage of those Parquet files: instead of converting DAX queries into SQL queries, as DirectQuery does, the Parquet files are read directly and the result is returned.
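For illustration, here is a typical DAX query of the kind a report visual sends; the table and column names are hypothetical. In DirectQuery mode a query like this is translated into SQL against the source, while in Direct Lake mode it is answered straight from the Parquet column data:

// Hypothetical 'Date' and Sales tables; adjust to your own model
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],
    "Total Revenue", SUM ( Sales[Revenue] )
)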
However, since the files are read directly, be aware that the only way to get data into the Parquet files is by doing the ETL process in the lakehouse/warehouse using the different options available.
Calculated columns and calculated tables use DAX, and they cannot be created in the Parquet files in OneLake; there is currently no option to "load" the newly computed data into the Parquet files. If you create an import model based on your OneLake data, you can create the columns, but that model will no longer be Direct Lake.
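To make this concrete, a calculated column like the one below (Sales, Revenue and Cost are hypothetical names) works in an import model but cannot be added to a Direct Lake table; in Direct Lake the equivalent column has to be derived upstream, during the ETL into the lakehouse:

// Import mode only; in Direct Lake, compute this in the lakehouse ETL instead
Margin = Sales[Revenue] - Sales[Cost]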
On the second point you raise, about the local model: a Direct Lake model is meant to be created and used online, because it takes advantage of OneLake and the Parquet files; it only reads the metadata of the tables and where the files are located. If you enable the XMLA endpoint you can edit Direct Lake models from the desktop, but be aware that all changes are written back to the service automatically, and you cannot build a report in that file and store it locally.
Concerning field parameters: you cannot create them directly in your model, but you can use an external tool such as Tabular Editor to do so. Although field parameters are written in DAX, they do not materialize any of the data values in your OneLake; they are only references to objects in your model (columns, measures, and so on).
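As a sketch, the calculated table behind a field parameter looks like the following in DAX (the referenced names are hypothetical); note that Power BI Desktop also sets extended metadata on the resulting column, which external tools such as Tabular Editor can replicate:

// Each tuple is (display name, referenced object, sort order)
Fields Parameter =
{
    ( "Revenue", NAMEOF ( Sales[Revenue] ), 0 ),
    ( "Quantity", NAMEOF ( Sales[Quantity] ), 1 )
}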
Direct Lake is not about taking away the "biggest powers"; it is a completely different way to look at your modelling. It is mainly aimed at very large datasets (billions of rows) where you need to be as fast as import mode but without the refresh latency.
Be aware that Direct Lake and Fabric are evolving, so a lot can change in the future.
See these links about Direct Lake mode for other ideas:
https://www.sqlbi.com/blog/marco/2024/04/06/direct-lake-vs-import-mode-in-power-bi/
https://www.youtube.com/watch?v=eJVYmDq5YCw
Regards
Miguel Félix
Proud to be a Super User!
Check out my blog: Power BI em Português

Hi @Thejeswar ,
As a supplement: the products are positioned differently. Delta Lake's architecture is designed to be interoperable with Delta tables created by other tools and vendors; this openness and compatibility makes Delta Lake a flexible and powerful data storage solution.
Its limitations compared to Power BI Desktop are largely due to the nature of the underlying storage system and Delta Lake's design choices. However, the functionality and features of the Direct Lake semantic model are constantly evolving toward better compatibility with Desktop.
For more details, including a comparison of Direct Lake storage mode with the Import and DirectQuery storage modes, you can refer to the documents below:
Direct Lake overview - Microsoft Fabric | Microsoft Learn
Use composite models in Power BI Desktop - Power BI | Microsoft Learn
Best Regards,
Adamk Kong
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.