Hi all,
I have an app with Power BI reports. My Azure SQL database has a field containing text extracted from PDF files. It is a lot of text, more than 4 million in total, from the PDF files. We have a Power BI Premium P1 capacity to support this. However, when we import the data into a Power BI dataset and apply a filter with a Power BI slicer or a SmartFilter, it is very slow, although it does work. Do you know if there is a way to process this filter outside of Power BI? For example, using Azure SQL, Azure Cognitive Search, or Azure Synapse to run the filter and deliver the result to Power BI dynamically?
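For reference, this is roughly the kind of server-side filtering we had in mind. It is only a sketch: the table dbo.PdfDocuments, the columns DocumentId, FileName and ExtractedText, the full-text index, and the connection details are placeholders for illustration, not our real schema.

```python
# Sketch only: push the text filter down to Azure SQL instead of filtering
# the multi-million-character column inside the Power BI dataset.
# Assumes a full-text index already exists on dbo.PdfDocuments(ExtractedText);
# table, column, and connection details are placeholders.
import pyodbc

CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;Uid=<user>;Pwd=<password>;"
    "Encrypt=yes;"
)


def search_pdf_text(search_term: str):
    """Return only the ids and file names of matching documents, not the huge text column."""
    query = """
        SELECT DocumentId, FileName
        FROM dbo.PdfDocuments
        WHERE CONTAINS(ExtractedText, ?);
    """
    with pyodbc.connect(CONNECTION_STRING) as conn:
        cursor = conn.cursor()
        cursor.execute(query, search_term)
        return cursor.fetchall()


if __name__ == "__main__":
    # CONTAINS uses the full-text index, so the heavy scan stays in Azure SQL;
    # Power BI would only receive this small result set.
    for row in search_pdf_text('"invoice"'):
        print(row.DocumentId, row.FileName)
```

The idea is that the heavy text scan happens inside Azure SQL, where a full-text index can be used, and only the small list of matching documents would be delivered to Power BI, for example through a DirectQuery table or a parameterized view.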
We don't know how to solve this. Does anybody have an idea of how to get the result instantly?
Details of the field in our Azure SQL database are shown below.
Many thanks,
Thanks for the advice @Anonymous. It would be great if we could do that. Do you know whether saving this field in Azure Synapse with a DirectQuery connection could solve this?
Hi @lawrenceabith,
According to your description, I think this scenario is more related to your source data. As far as I know, Power BI currently does not support customizations that would improve the performance of these filtering features.
In my opinion, I'd suggest you add a custom field on the data source side that analyzes the text and extracts keywords, to reduce the processing spent on such huge character fields.
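For example, a small pre-processing job on the source side could populate such a keyword field, so that Power BI slicers filter on a short Keywords column instead of the full extracted text. This is only a rough sketch under assumed names (dbo.PdfDocuments, DocumentId, ExtractedText, Keywords), using a naive frequency-based extraction just to illustrate the idea.

```python
# Sketch only: pre-compute a short Keywords column on the data source side so
# Power BI slicers filter on a few keywords instead of the full extracted text.
# Table and column names (dbo.PdfDocuments, DocumentId, ExtractedText, Keywords)
# and the connection string are assumptions for illustration.
import re
from collections import Counter

import pyodbc

CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;Uid=<user>;Pwd=<password>;"
)

STOP_WORDS = {"the", "and", "for", "with", "that", "this", "from", "are", "was", "were"}


def top_keywords(text: str, limit: int = 10) -> str:
    """Very naive frequency-based keyword extraction, just to illustrate the idea."""
    words = re.findall(r"[a-zA-Z]{4,}", text.lower())
    counts = Counter(word for word in words if word not in STOP_WORDS)
    return ", ".join(word for word, _ in counts.most_common(limit))


def populate_keywords() -> None:
    # For millions of rows this should be batched; kept simple here on purpose.
    with pyodbc.connect(CONNECTION_STRING) as conn:
        cursor = conn.cursor()
        rows = cursor.execute(
            "SELECT DocumentId, ExtractedText FROM dbo.PdfDocuments WHERE Keywords IS NULL"
        ).fetchall()
        for doc_id, text in rows:
            cursor.execute(
                "UPDATE dbo.PdfDocuments SET Keywords = ? WHERE DocumentId = ?",
                top_keywords(text or ""),
                doc_id,
            )
        conn.commit()


if __name__ == "__main__":
    populate_keywords()
```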
Regards,
Xiaoxin Sheng