Hi,
I have a really odd issue...
I have a Gen2 dataflow that copies a CSV file from an on-premises location into a Delta table. It's set to overwrite and in general appears to work fine. (I'd prefer to do that directly in Data Factory, but it can't read on-premises sources yet.)
If I query it with the SQL endpoint, it returns the expected data, matching the source CSV.
If I read it into a Spark dataframe, though, I get old versions of the same records.
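Roughly the kind of read described above (a minimal sketch; `Tables/my_table` is a hypothetical path standing in for the actual lakehouse table):

```python
from pyspark.sql import SparkSession

# In a Fabric notebook `spark` already exists; getOrCreate() is harmless there.
spark = SparkSession.builder.getOrCreate()

# Plain read of the Delta table the dataflow overwrites each run
# (path is a placeholder for the real lakehouse table).
df = spark.read.format("delta").load("Tables/my_table")
df.show()
```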
The problem is I'm then using that dataframe to create what should be a unique dimension.
I've tried specifying the latest time travel version explicitly, and it still does the same.
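Something along these lines (a sketch with the same placeholder path), pinning the read to the newest commit in the Delta log, still returns the stale rows:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Look up the most recent commit version in the Delta log
# ("Tables/my_table" is a placeholder for the real table path).
latest = (
    DeltaTable.forPath(spark, "Tables/my_table")
    .history(1)
    .select("version")
    .first()[0]
)

# Pin the read explicitly to that version via time travel.
df = (
    spark.read.format("delta")
    .option("versionAsOf", latest)
    .load("Tables/my_table")
)
df.show()
```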
The only thing I can think of is that there's some oddity in the Delta metadata being written by dataflows.
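If anyone wants to compare, one way to inspect what the dataflow actually commits to the log (placeholder path again) is Delta's history command, which shows one row per commit:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# DESCRIBE HISTORY shows one row per commit: version, timestamp,
# operation (e.g. WRITE), and operationParameters such as the write mode.
spark.sql("DESCRIBE HISTORY delta.`Tables/my_table`").show(truncate=False)
```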
Thanks @Joshrodgers123, at least that makes me feel like I'm not doing something silly!
I have the same issue, but reversed. I have a dataflow writing to a lakehouse. If I read the data in a notebook, I can see the correct and latest data. If I read the data through the SQL endpoint (or even the table preview in the lakehouse), it shows an older version of the Delta table.
I opened a support ticket (2312050040012594) and they are investigating. They said it was a sync issue.