I have a dashboard that was created by another department, and I need to pull some of its data into our lakehouse. We are currently doing this manually: we export the data from a visual and import it using a Dataflow Gen2.
I have been given Contributor access to the other department's workspace, but I have not found a way to pull that data into my lakehouse. Any suggestions? I tried the dataflow connector but don't see the dataset in question in my list (it is a regular Power BI dashboard importing data from a source and published to a Premium workspace).
You can use a notebook to query the data from the other workspace via the semantic model, and then load it into your lakehouse. Here is an example: Fabric Semantic Link and Use Cases
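If the semantic model does not show up where you expect, semantic link can also list what is visible to you. A minimal sketch, assuming the semantic-link library is installed and you have access to the other department's workspace (the workspace name below is a placeholder):
import sempy.fabric as fabric
# list the semantic models (datasets) visible in the given workspace;
# the returned DataFrame includes the dataset names you can pass to read_table
datasets = fabric.list_datasets(workspace="OtherDeptWorkspace")
print(datasets)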
@GilbertQ That seems promising; however, I am not experienced with notebooks/PySpark, so I will give it a try. Are there any more direct alternatives?
Adding what I did for reference:
1. Used a notebook with semantic link to read the table from the other workspace's semantic model and write it to my lakehouse:
# import the semantic link fabric module
import sempy.fabric as fabric
# read the table from the semantic model
df_tables_ReqInfo = fabric.read_table(workspace=".....", dataset=".....", table="...")
# drop any extra columns
df_tables_OffersHireData.drop(['....', '.....'], axis=1, inplace=True)
# write to the lakehouse
df_tables_OffersHireData.to_lakehouse_table(name="....", mode="overwrite")
2. Created an environment to preload the semantic-link library and set it as the default for the workspace. This is required if you want to call the notebook from a pipeline, because inline pip installs are not allowed in that context (see the note below).
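For interactive runs you can install the library inline instead, but that route is not available when a pipeline triggers the notebook, which is why the default environment matters. A minimal sketch of the interactive-only alternative (assuming the standard semantic-link package name):
# install semantic link for the current notebook session only;
# inline magics like this do not run when the notebook is invoked from a pipeline
%pip install semantic-link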
Does the dataframe name change from "df_tables_ReqInfo" to "df_tables_OffersHireData" have any effect?
That's just a typo. I had two tables I was processing and ended up removing the middle steps; I was trying to explain the solution without exposing my table/column details.
@PowerNewUser Thank you so much for posting the solution. I was trying to find a way and you did it! I'll add a link to your solution in the post I created.