Here is what I want to do (ideally):
1) End user uploads a file to a specific SharePoint folder.
2) The Lakehouse gets access to this file (ideally through a shortcut to avoid data duplication, but duplication is not a deal breaker).
3) Spark Notebook processes the files in the Lakehouse File folder.
4) User gets access to the processed data through a shared semantic model.
The one thing I haven't figured out is how to do Step #2 other than registering an Azure App and doing API calls to move the data, but I'm trying to find a simpler solution.
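For reference, the Azure App route mentioned above can be sketched roughly as follows. This is a hedged illustration, not a recommended final design: the tenant/client IDs, site ID, and folder paths are placeholders, and it assumes an app registration with application-level `Sites.Read.All` permission plus a Fabric notebook where the default lakehouse is mounted at `/lakehouse/default/Files`.

```python
# Sketch of step 2 via an Azure AD app registration and the Microsoft
# Graph API. All IDs, secrets, and paths below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def get_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """Acquire an app-only token via the client-credentials flow."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "https://graph.microsoft.com/.default",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def content_url(site_id: str, file_path: str) -> str:
    """Graph URL that returns the raw bytes of a drive item by path."""
    return f"{GRAPH}/sites/{site_id}/drive/root:/{file_path}:/content"

def copy_to_lakehouse(token: str, site_id: str, file_path: str, dest: str) -> None:
    """Download one SharePoint file and write it under the Lakehouse
    Files mount (/lakehouse/default/Files/... in a Fabric notebook)."""
    resp = requests.get(
        content_url(site_id, file_path),
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    with open(dest, "wb") as f:
        f.write(resp.content)

# Example usage (placeholders):
# token = get_token("<tenant-id>", "<client-id>", "<client-secret>")
# copy_to_lakehouse(token, "<site-id>",
#                   "Shared Documents/uploads/data.csv",
#                   "/lakehouse/default/Files/uploads/data.csv")
```

This does duplicate the data into the Lakehouse rather than shortcutting to it, which is the trade-off the question already accepts.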
Hi @pcuriel
Currently, a Fabric lakehouse doesn't support direct access to SharePoint, so a shortcut for step #2 isn't possible. I'd suggest two options:
1) Use a Fabric data pipeline with the SharePoint Online or HTTP connector, and land the files in the Lakehouse as the destination.
2) Use Dataflow Gen2 to pull the file from SharePoint. Note that it supports only structured or semi-structured files, such as Excel and CSV.
Consider which option best fits your scenario.
Please let me know if this works.
Thank you!!
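Once the files land under the Lakehouse Files area by either route, step #3 is a plain notebook read. A minimal sketch, using pandas for a small CSV (the path and the cleanup logic are placeholders; in a Fabric Spark notebook you would more likely use `spark.read.csv("Files/uploads/...")` and write a Delta table for the semantic model in step #4):

```python
# Sketch of step 3: process an uploaded CSV from the Lakehouse Files
# area. Paths and the transformation are illustrative placeholders.
import pandas as pd

def process_upload(src: str, dest: str) -> pd.DataFrame:
    """Read the raw file, apply a trivial cleanup, and save the result."""
    df = pd.read_csv(src)
    df = df.dropna()  # placeholder transformation: drop incomplete rows
    df.columns = [c.strip().lower() for c in df.columns]  # tidy headers
    df.to_csv(dest, index=False)  # in Spark, write Delta instead
    return df

# Example usage (placeholder paths):
# process_upload("/lakehouse/default/Files/uploads/data.csv",
#                "/lakehouse/default/Files/processed/data.csv")
```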