Is it still the case that no low-/no-code options for dealing with paginated APIs exist in Fabric? That is the issue reported in this old thread - https://community.fabric.microsoft.com/t5/Fabric-Ideas/Enable-pagination-and-parameterization-in-Dat...
Dataflow Gen2 supports ingesting from APIs, but it seems to assume that endpoints return their data without pagination. My research suggests the best way to deal with paginated endpoints is to create a notebook to do the work, but I wanted to check whether anyone knew differently.
Hi @dolphinantonym , Thank you for reaching out to the Microsoft Community Forum.
You're right to check, but nothing has changed. Dataflow Gen2 still doesn't support paginated APIs in a low- or no-code way. You can connect to APIs in Power Query, but only if the full dataset comes back in a single response. There's still no built-in support for handling offsets, page tokens, or request loops inside the dataflow.
So yes, the old thread still applies. The only real option is to use a notebook (Python or Spark) to handle pagination and then load the data into a Lakehouse.
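For anyone landing here later, here is a minimal sketch of the notebook approach, assuming a simple offset/limit pagination scheme. The endpoint URL, JSON field names, and Lakehouse table name are all placeholders, not a real API:

```python
# Minimal sketch of paginated API ingestion in a Fabric notebook.
# Assumptions: the API uses offset/limit pagination and returns JSON
# with an "items" list; the endpoint and field names are hypothetical.
import requests
import pandas as pd

BASE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint
PAGE_SIZE = 100

rows = []
offset = 0
while True:
    resp = requests.get(
        BASE_URL,
        params={"offset": offset, "limit": PAGE_SIZE},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    if not items:
        break  # empty page means we've read everything
    rows.extend(items)
    offset += PAGE_SIZE

df = pd.DataFrame(rows)

# In a Fabric notebook the `spark` session is pre-provided; write the
# result to a Lakehouse table (table name is illustrative).
spark.createDataFrame(df).write.mode("overwrite").saveAsTable("raw_api_records")
```

If your API uses next-page tokens instead of offsets, swap the offset loop for one that follows the token returned in each response until it comes back empty; the rest of the pattern stays the same.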
Thanks. I was fairly sure that notebooks were the answer, but I wanted final confirmation before setting that as our approach.
Hi @dolphinantonym , thanks for the update. If you have any other queries, please feel free to open a new thread in the community. We are always happy to help.
Hi @dolphinantonym , Good afternoon. I hope my reply answered your query. If you still have any doubts regarding the issue, please share the details.
Thank you.