Hello,
I've noticed that when I manually trigger my dataflow which fetches OData and appends it to a Fabric data lake, the dataflow succeeds and the append works.
But when I take that same dataflow, put it in a pipeline, and then trigger or schedule the pipeline, the dataflow reports success but doesn't actually append any data. The dataflow is not disabled in the pipeline.
This is not due to the SQL endpoint lagging, because I checked the max date and row count with a notebook.
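For context, the notebook check was roughly along these lines; the lakehouse table and date column names below are placeholders, not the real ones:

```python
# Quick sanity check run in a Fabric notebook after each trigger.
# "MyLakehouse.MyTable" and "LoadDate" are placeholders for the real table and column.
# `spark` is the SparkSession that Fabric notebooks provide by default.
from pyspark.sql import functions as F

df = spark.read.table("MyLakehouse.MyTable")
df.agg(
    F.max("LoadDate").alias("max_load_date"),  # newest date that made it into the table
    F.count("*").alias("row_count")            # total rows after the run
).show()
```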
Solved!
I figured out what the issue was: it comes down to the unclear behavior of Dataflows around closing the editor versus publishing.
When you close the Dataflow editor, it looks like your latest changes were saved, because when you reopen it you see the most recent state. But what is actually kept (and what the pipeline runs) is the previously published state, and you can't publish a Dataflow without running it.
This means that if you are doing a fetch-and-append on a weekly or monthly basis, as I am, you have to wait until the last day of your time window to publish the Dataflow, and only then can you enable the auto-run schedule.
Hello @DCELL ,
As per my understanding, you have a dataflow that successfully appends data to your Fabric Data Lake when you run it manually, but when you trigger the same dataflow through a pipeline (scheduled or manual), it reports success yet no data is actually appended. You have already confirmed this isn't due to SQL endpoint delays by checking the max date and row counts.
There might be a few possible reasons causing this:
- The source data was not yet ready at the time the pipeline ran.
- The dataflow's destination configuration is not set up as expected.
- The credentials used by the scheduled run are not up to date.
- A schema mismatch between the dataflow output and the destination table.
If this post helps, then please consider accepting it as the solution to help other members find it more quickly, and don't forget to give a "Kudos" – I'd truly appreciate it!
Regards,
B Manikanteswara Reddy
Hi @DCELL ,
We wanted to kindly follow up to check whether the solution provided for the issue worked, or let us know if you need any further assistance.
If our response addressed your issue, please mark it as the accepted solution and click Yes if you found it helpful.
Please don't forget to give a "Kudos".
Regards,
B Manikanteswara Reddy
Hi @DCELL ,
As we haven't heard back from you, we wanted to kindly follow up to check whether the solution provided for the issue worked, or let us know if you need any further assistance.
If our response addressed your issue, please mark it as the accepted solution and click Yes if you found it helpful.
Please don't forget to give a "Kudos".
Regards,
B Manikanteswara Reddy
Hello, I've gone through the list and the issue must have been something else:
- The data was already ready.
- The dataflow configuration is correct; running the dataflow outside the pipeline worked. Only when that exact same dataflow is inside a pipeline does it 'succeed' without any effect.
- Credentials: wouldn't the dataflow in the pipeline simply fail if the credentials weren't up to date?
- The schema is correct. The dataflow works outside of a pipeline, but not inside one (a rough sketch of how this can be checked is below).
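In case it helps anyone else, the schema check can be done from a Fabric notebook roughly like this; the table and column names here are hypothetical:

```python
# Rough sketch of a schema check against the destination lakehouse table.
# "MyLakehouse.MyTable" and the expected column list are placeholders.
# `spark` is the SparkSession that Fabric notebooks provide by default.
dest = spark.read.table("MyLakehouse.MyTable")
dest.printSchema()  # column names and types currently in the destination

# Compare against the columns the dataflow query is expected to produce.
expected_columns = ["OrderId", "Amount", "LoadDate"]  # hypothetical column list
missing = [c for c in expected_columns if c not in dest.columns]
print("Missing columns:", missing)
```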