Hi,
I am importing a table from Databricks and it was successful. To gain more performance, I am planning to bring in only 10 required columns out of 35 from the table. I know of these possibilities:
1) Write a SELECT statement and choose the 10 columns during import.
2) Import the table and exclude columns from the ribbon menu.
3) Delete the unwanted columns.
Apart from the above methods, I am wondering whether there is a smarter method through an M query or any other approach.
Appreciate your inputs.
Thanks,
Vicks
Hi @vicks123 ,
While writing an SQL SELECT statement will get you what you want in the short term, be mindful that any additional (simple) transformations done on the initial query in Power Query will be performed locally. Additionally, if you choose to write all of your transformations in the SQL SELECT, then you may find your query becomes difficult to maintain in the future, especially if passed to another PBI dev who doesn't 'speak' SQL as well.
The best method would be to query your source while maintaining query folding, so subsequent transformations are also sent back to the source to process.
To do this with Databricks, you can either use the Databricks.Query connector or, if you need to use Databricks.Catalogs, you can wrap it in a Value.NativeQuery and include the [EnableFolding=true] argument, something like this:
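A minimal sketch, assuming a SQL warehouse connection; the hostname, HTTP path, and the catalog/schema/table and column names are all placeholders:

let
    // Connect to Databricks (hostname and HTTP path are placeholders)
    Source = Databricks.Catalogs(
        "adb-0000000000000000.0.azuredatabricks.net",
        "/sql/1.0/warehouses/xxxxxxxxxxxxxxxx",
        null
    ),
    // Send a native query that selects only the required columns, and
    // allow Power Query to fold subsequent steps back into it
    SelectedColumns = Value.NativeQuery(
        Source,
        "SELECT col1, col2, col3 FROM my_catalog.my_schema.my_table",
        null,
        [EnableFolding = true]
    )
in
    SelectedColumns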
Pete
Proud to be a Datanaut!
Hi @vicks123 ,
I wanted to confirm that the solution provided by @BA_Pete is the best approach for your query.
Using query folding to select only the necessary columns at the source level will greatly improve performance by reducing the amount of data imported into Power BI and ensuring transformations are handled server-side in Databricks.
By applying the EnableFolding=true argument in your queries, you can maintain both performance and scalability. This method is efficient and easy to maintain long-term.
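For reference, a minimal sketch of the same idea without a native query: if you navigate through Databricks.Catalogs and then use Table.SelectColumns, the column selection should also fold back to Databricks (the hostname, HTTP path, and navigation names below are placeholders):

let
    Source = Databricks.Catalogs(
        "adb-0000000000000000.0.azuredatabricks.net",
        "/sql/1.0/warehouses/xxxxxxxxxxxxxxxx",
        null
    ),
    // Navigate to the table (catalog/schema/table names are placeholders)
    CatalogData = Source{[Name = "my_catalog", Kind = "Database"]}[Data],
    SchemaData = CatalogData{[Name = "my_schema", Kind = "Schema"]}[Data],
    TableData = SchemaData{[Name = "my_table", Kind = "Table"]}[Data],
    // Keep only the required columns; this step folds to the source
    KeepColumns = Table.SelectColumns(TableData, {"col1", "col2", "col3"})
in
    KeepColumns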
If this post helps, then please give it Kudos and consider accepting it as a solution to help other members find it more quickly.
Thank you.
Hi @vicks123,
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.
Hi @vicks123,
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the answer quickly.
Thank you.
Hi @vicks123,
I wanted to follow up on our previous suggestions regarding importing only a few columns from a table in the Power Query Editor. We would love to hear back from you to ensure we can assist you further.
If my response has addressed your query, please accept it as a solution and give a ‘Kudos’ so other members can easily find it. Please let us know if there’s anything else we can do to help.
Using a SQL SELECT query is the most efficient method to import only the required 10 columns from Databricks, minimizing data load and improving performance directly at the source.