Dear,
I have an app with Power BI reports. Our Azure SQL database has a field containing text extracted from PDF files. It is a lot of text, coming from more than 4 million PDF files. We have a Power BI Premium P1 capacity to support this. However, when we import this field into a Power BI dataset and filter it with a Power BI slicer or a Smart Filter, the filter works but is very slow. Do you know if there is a way to process this filter outside of Power BI? For example, could Azure SQL, Azure Cognitive Search, or Azure Synapse process the filter and deliver the result to Power BI dynamically?
We don't know how to solve this. Does anybody have an idea of what to do to get the result instantly?
Please see below for information about the field in our Azure SQL database.
Many thanks,
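For reference, the kind of source-side filtering the question describes could look roughly like the sketch below in Azure SQL. The table, column, and index names (dbo.PdfDocuments, PdfText, PK_PdfDocuments) are placeholders rather than the actual schema, and this is only one possible way to push the text search out of the Power BI model:

```sql
-- A full-text catalog and index let Azure SQL evaluate keyword searches itself
-- instead of Power BI filtering millions of long text values in memory.
CREATE FULLTEXT CATALOG PdfTextCatalog;

CREATE FULLTEXT INDEX ON dbo.PdfDocuments (PdfText LANGUAGE 1033)
    KEY INDEX PK_PdfDocuments          -- unique key index on the table (placeholder name)
    ON PdfTextCatalog
    WITH CHANGE_TRACKING AUTO;

-- A query like this, exposed through a view and used in DirectQuery mode,
-- returns only the matching document keys to Power BI.
SELECT DocumentId
FROM dbo.PdfDocuments
WHERE CONTAINS(PdfText, N'"invoice*"');   -- example search term
```

With DirectQuery over such a view, the CONTAINS predicate runs in Azure SQL, so Power BI only receives the matching rows instead of holding all of the PDF text in the model.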
Thanks for the advice @Anonymous. It would be great if we could do that. Do you know whether saving this field in Azure Synapse and using a DirectQuery connection could solve this?
Hi @lawrenceabith,
Based on your description, I think this scenario is more related to your source data. As far as I know, Power BI does not currently support any customization that would improve the performance of these filtering features on such a field.
In my opinion, I'd suggest adding a custom field on the data source side to analyze the text and extract keywords, which would reduce the processing spent on these huge character values.
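As one possible illustration of that idea, here is a minimal sketch of a pre-computed category column maintained on the SQL side; the table and column names (dbo.PdfDocuments, PdfText, DocCategory) and the keyword list are placeholders, not the actual schema:

```sql
-- Hypothetical example of the "custom field" idea: pre-compute a short
-- category/keyword column so slicers work on a few small values instead
-- of scanning the raw PDF text inside the Power BI model.
ALTER TABLE dbo.PdfDocuments ADD DocCategory NVARCHAR(50) NULL;

UPDATE dbo.PdfDocuments
SET DocCategory = CASE
        WHEN PdfText LIKE N'%invoice%'  THEN N'Invoice'
        WHEN PdfText LIKE N'%contract%' THEN N'Contract'
        ELSE N'Other'
    END;
```

The large text column would then stay out of the import; only the key and the small DocCategory column need to be loaded into the dataset, and the slicer is placed on DocCategory.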
Regards,
Xiaoxin Sheng