Hi everyone,
I've been poking around Fabric, and it certainly looks promising. In the past we had an API pulling the data into an S3 bucket, a bunch of complex procedures in AWS Glue, more SQL, and then finally a Power BI-ready dataset created through a gateway. That solution is a bit clunky and unstable, so we are now looking at simplifying it: keep the raw data in S3 on AWS, but migrate the engineering and analytics parts entirely to the Power BI service.
I just had my first success setting up an S3 shortcut to a test bucket, which contains a folder with one JSON file of sales data. However, I'm not sure how to proceed from there, since the pipeline isn't picking up the file and I can't transform it. My plan was to create a dataflow to transform the sales data, then embed it in a pipeline that refreshes daily and produces a Power BI-ready dataset. Is this logic incorrect?
See the S3 shortcut in the lakehouse:
I tried creating a pipeline that references this lakehouse, but I seem to be getting an error.
I also tried a dataflow, but got another error.
Admittedly I'm a bit outside my comfort zone, and, understandably since this is a new feature, I struggled to find detailed guides on the web, so any assistance would be highly appreciated.
I work at a small company, which means I'm expected to both design the best solution and execute it. For that I have full access to Azure and AWS, but my background is more in data analysis than engineering, so I was hoping to manage everything through the web GUI. The current solution was set up by an offshore team that is no longer reachable, although the APIs continue to work flawlessly every day.
Thanks in advance!
Hi @GonzaloB ,
from your S3 bucket you can create linked files or linked tables in Fabric. Linked tables only support the Delta Parquet table format, so the linked-table path won't work for you. Instead, you need to start from the top-level Files folder. Then you can continue as you described: from the Files folder you can access the files in dataflows, pipelines, or notebooks.
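For example, once the shortcut sits under Files, a notebook attached to the lakehouse can read the JSON file directly through the shortcut path. A minimal sketch (the shortcut and file names below are only placeholders, adjust them to yours):

# mssparkutils and spark are available by default in a Fabric notebook.
# List what is visible under the shortcut to confirm it is mounted.
mssparkutils.fs.ls("Files/sales_s3_shortcut")

# Read the JSON file through the shortcut path and take a quick look
# (drop the multiline option if the file is line-delimited JSON).
df = spark.read.option("multiline", "true").json("Files/sales_s3_shortcut/sales.json")
df.printSchema()
df.show(5)

A dataflow or pipeline can point at the same Files path through the lakehouse connector instead, if you prefer to stay in the GUI.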
BR
Martin
Hi Martin,
Thanks for pointing that out. I wanted to let you know that I've now successfully reached the dataflows layer using an S3 shortcut stored under Files. Thank you!
For anyone else out there with a similar issue, I found this MS tutorial useful:
https://learn.microsoft.com/en-us/fabric/onelake/create-s3-shortcut
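And in case it helps, here is roughly what the transform step could look like in a notebook instead of a dataflow. This is just a sketch: the shortcut name, file name, and column names are placeholders for my actual sales schema, so adjust them to your data.

from pyspark.sql import functions as F

# Read the raw JSON through the S3 shortcut under Files (keep the multiline option
# if the file is a single JSON document rather than line-delimited records).
raw = spark.read.option("multiline", "true").json("Files/sales_s3_shortcut/sales.json")

# Light cleanup with placeholder column names -- adjust to the real schema.
clean = (
    raw
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("double"))
)

# Land it as a Delta table so Power BI can use it and a pipeline can refresh it daily.
clean.write.mode("overwrite").format("delta").saveAsTable("sales_clean")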