So I'm working with the Asana API to pull in projects and having trouble setting up a refresh schedule. It works fine in Power BI Desktop, but I get the "Data Source Unsupported for Refresh" error when I publish and try to refresh the data.
I tried to follow the guidance from http://blog.datainspirations.com/2018/02/17/dynamic-web-contents-and-power-bi-refresh-errors/ , but I'm still getting the error even though I pulled everything out of the URL and put it in the RelativePath as part of the Web.Contents call.
Can anyone give me some guidance on where I'm going wrong?
let
    baseurl = "https://app.asana.com",
    RelPath = "/api/1.0/projects?limit=" & Project_Limit & "&workspace=xxxxxx",
    headers = [Headers = [#"Content-Type" = "application/json", Authorization = "Bearer xxxxxxx"], RelativePath = RelPath],
    initReq = Json.Document(Web.Contents(baseurl, headers)),
    initData = initReq[data],
    // We want to get data = {lastNPagesData, thisPageData}, where each list has the limit # of records,
    // then we can List.Combine() the two lists on each iteration to aggregate all the records. We can then
    // create a table from those records.
    gather = (data as list, uri) =>
        let
            // get the new offset from the active uri
            newOffset = Json.Document(Web.Contents(uri, headers))[next_page][offset],
            // build the new uri from the original uri so we don't keep appending offsets
            newUri = baseurl & "&offset=" & newOffset,
            // get the new request & data
            newReq = Json.Document(Web.Contents(newUri, headers)),
            newdata = newReq[data],
            // add that data to the rolling aggregate
            data = List.Combine({data, newdata}),
            // if there's no next page of data, return; if there is, call @gather again to get more data
            check = if newReq[next_page] = null then data else @gather(data, newUri)
        in
            check,
    // before we call gather(), we want to see if it's even necessary. First request returns only one page? Return.
    outputList = if initReq[next_page] = null then initData else gather(initData, baseurl),
    // then place records into a table. This will expand all columns available in the record.
    expand1 = Table.FromRecords(outputList),
    #"Filtered Rows" = Table.SelectRows(expand1, each true)
in
    #"Filtered Rows"
Someone did let me know. I was so focused on the first call that I forgot I was making a second call inside gather() with a dynamically built URL, and that is exactly what the refresh service can't validate. Got it working once the second call went through Web.Contents the same way as the first.
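For anyone hitting the same wall later, here is a minimal sketch of that pattern rather than the exact query from the report: every request passes the same static base URL to Web.Contents, and the path, limit, workspace, and paging offset all travel through the RelativePath and Query options so the service can validate the data source for scheduled refresh. Project_Limit is assumed to be a parameter defined elsewhere; the workspace ID and token are placeholders.

let
    baseurl = "https://app.asana.com",
    reqHeaders = [#"Content-Type" = "application/json", Authorization = "Bearer xxxxxxx"],
    // Every request hits the same static base URL; the dynamic pieces
    // (path, limit, workspace, offset) are supplied via RelativePath/Query.
    getPage = (pageOffset as nullable text) =>
        Json.Document(
            Web.Contents(
                baseurl,
                [
                    RelativePath = "/api/1.0/projects",
                    Query = [limit = Text.From(Project_Limit), workspace = "xxxxxx"]
                        & (if pageOffset = null then [] else [offset = pageOffset]),
                    Headers = reqHeaders
                ]
            )
        ),
    // Keep requesting pages until next_page comes back null,
    // combining the data lists as we go.
    gather = (data as list, response as record) =>
        let
            nextPage = response[next_page],
            result =
                if nextPage = null then
                    data
                else
                    let
                        newResponse = getPage(nextPage[offset]),
                        combined = List.Combine({data, newResponse[data]})
                    in
                        @gather(combined, newResponse)
        in
            result,
    initReq = getPage(null),
    allRecords = gather(initReq[data], initReq),
    output = Table.FromRecords(allRecords)
in
    output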
@dude95 I'm having the same issue. If you don't mind, can you point me in the right direction on what I should change to make it work? My goal is to extract only the selected fields defined in opt_fields.
RelPath = "/api/1.0/projects/xxxxxxxxxxxx/tasks?limit=100&opt_fields=name,created_at,modified_at,completed_at,due_on,assignee.name,notes,custom_fields.name,custom_fields.display_value,subtasks.name,subtasks.notes,permalink_url"
How do you supply the bearer token? Don't you need another call to fetch that first?
It's an internal file that I control. I've just got a key through my account, not full OAuth.