
Atoma
Frequent Visitor

Direct link between Power BI and the Dalux API

Good morning to you all,

I'm trying to connect Power BI to the open API of an application we use. However, I'm running into a problem: Power BI can't read the data correctly and returns null values instead. On top of that, Power BI only reads the data from a single page, whereas I would like it to read all the data.

 

I did a test with Python code and it was able to read all the data.

 

Have you ever done this? How did you solve the problem? Link to the Power BI file: https://www.swisstransfer.com/d/83b3ac12-a79c-47a3-8c73-c78782aa0d0c

Thank you in advance for your answers,

Have a nice day,

Quentin

Here is the Python code:

import requests
import json

headers = {
    'X-API-KEY': 'ENTER THE API KEY'
}

write = True
read = True

def process_page(url, headers, page_number):
    r = requests.get(url, headers=headers)
    r.raise_for_status()  # Raise an HTTPError for bad responses
    print(f"Reading page {page_number}", end='', flush=True)

    # Append the page data to the JSON file
    page_data = r.json()
    with open('data.json', 'a') as f:
        json.dump(page_data, f)
        f.write('\n')  # Add a newline to separate the entries

    # Confirm the page was read successfully
    print(" OK")
    return page_data

if write:
    # Starting URL of the paged endpoint
    url = 'https://field.dalux.com/service/api/4.0/projects/6663914611/tasks'

    try:
        page_number = 1
        while url:
            # Fetch and save the current page (one request per page)
            data = process_page(url, headers, page_number)
            page_number += 1

            # Get the next page URL, or None when there are no more pages
            url = next((link['href'] for link in data.get('links', []) if link.get('rel') == 'nextPage'), None)

        # Confirm that reading is finished and successful
        print("Reading finished successfully.")

    except requests.exceptions.RequestException as e:
        print(f"Error during request: {e}")
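As a side note, because the script appends one JSON document per line, the resulting data.json is in JSON Lines form and cannot be parsed with a single json.load. A minimal sketch of reading it back (the sample content here is made up to keep the example self-contained):

```python
import json

# Write a small sample in the same layout the script produces:
# one JSON document per line, one document per API page.
sample_pages = [{"items": [{"id": 1}]}, {"items": [{"id": 2}]}]
with open('data.json', 'w') as f:
    for page in sample_pages:
        json.dump(page, f)
        f.write('\n')

# Parse it back line by line, because the file as a whole
# is not a single valid JSON document.
pages = []
with open('data.json') as f:
    for line in f:
        line = line.strip()
        if line:  # skip blank separator lines
            pages.append(json.loads(line))

print(f"Loaded {len(pages)} pages")
```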

 

Anonymous
Not applicable

Hi @Atoma 

 

Based on my test of the Task API query you provided in the pbix, it works well to get data from the first page. After comparing the result with the transformation steps, I found that some records that you expand in the next step don't exist in the current result. As a result, the non-existing records display null values in those columns. That's why you see null values in some columns. I guess the API may have been modified so that it returns different records than it did when the PQ code was developed.

[screenshot: vjingzhanmsft_1-1705999992237.png]
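What Jing describes can be reproduced outside Power BI: when the records coming back from the API don't all carry the same fields, projecting them onto a fixed column list fills the gaps with nulls. A small illustration (the field names below are invented for the example, not taken from the Dalux schema):

```python
# Hypothetical task records: the second one lacks the 'assignee' field,
# as can happen when an API changes between versions.
records = [
    {"number": 1, "subject": "Fix door", "assignee": "QA"},
    {"number": 2, "subject": "Check wall"},
]

# Expanding all records onto a fixed set of columns (roughly what
# Power Query's "Expand" step does) leaves None where a field is missing.
columns = ["number", "subject", "assignee"]
table = [{col: rec.get(col) for col in columns} for rec in records]

print(table[1]["assignee"])  # the missing field becomes None (null in Power BI)
```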

 

So the main problem is how to get data from all pages through the next-page URL. I'm not very good at this, but I remember some old threads have dealt with a similar problem. I will post the links here once I find them.
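The page-combining logic itself is small, and it may help to see it with the HTTP calls stubbed out. This sketch follows the same 'nextPage' link pattern as the Python script above and flattens every page's rows into one list, which is the shape Power BI ultimately needs; the 'items', 'links', and 'nextPage' names are assumptions carried over from that script, not the documented Dalux response schema:

```python
# Fake two-page API response, standing in for the real endpoint.
fake_pages = {
    "page1": {"items": [{"id": 1}, {"id": 2}],
              "links": [{"rel": "nextPage", "href": "page2"}]},
    "page2": {"items": [{"id": 3}],
              "links": []},
}

def get_page(url):
    # Stand-in for requests.get(url, headers=headers).json()
    return fake_pages[url]

# Follow the 'nextPage' link and accumulate the rows of every page.
all_items = []
url = "page1"
while url:
    data = get_page(url)
    all_items.extend(data.get("items", []))
    url = next((link["href"] for link in data.get("links", [])
                if link.get("rel") == "nextPage"), None)

print(len(all_items))  # all rows from all pages, as one flat table
```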

 

Best Regards,
Jing
