I have a dashboard that was created by another department, and I need to pull some of its data into our lakehouse. We are currently doing this manually: we export the data from a visual and import it using a Dataflow Gen2.
I have been given Contributor access to the other department's workspace, but I have not found a way to pull that data into my lakehouse. Any suggestions? I tried the dataflow connector but don't see the dataset in question in my list (it is a regular Power BI dashboard importing data from a source and published to a Premium workspace).
You can use a notebook to query data from the other workspace via its semantic model and then load that into your lakehouse. Here is an example: Fabric Semantic Link and Use Cases
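If the semantic model does not show up in the dataflow connector's list, a notebook can at least confirm what you can reach with your Contributor access. A minimal sketch using the semantic-link (sempy) library; the workspace and dataset names below are placeholders:
# requires the semantic-link library
from sempy import fabric
# list the semantic models visible to you in the other department's workspace
print(fabric.list_datasets(workspace="OtherDepartmentWorkspace"))
# list the tables inside one of those semantic models
print(fabric.list_tables(dataset="TheirSemanticModel", workspace="OtherDepartmentWorkspace"))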
@GilbertQ This seems promising - however, I am not experienced with notebooks/PySpark, so I will try it. Are there any other, more direct alternatives?
Adding what I did for reference:
1. Read the table from the other workspace's semantic model in a notebook and wrote it to the lakehouse:
# import the fabric module from semantic link
from sempy import fabric
# read the table from the remote semantic model
df_tables_ReqInfo = fabric.read_table(workspace=".....", dataset=".....", table="...")
# drop any extra columns
df_tables_OffersHireData.drop(['....', '.....'], axis=1, inplace=True)
# write to the lakehouse
df_tables_OffersHireData.to_lakehouse_table(name="....", mode="overwrite")
2. Created an environment to preload the semantic-link library and set it as the default for the workspace - this is required if you want to call the notebook from a pipeline, since pip installs are not allowed at that point.
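One note on step 2: for interactive runs you can instead install the library inline at the top of the notebook (assuming the published package name, semantic-link, hasn't changed). It is this inline install that is unavailable when the notebook is triggered from a pipeline, which is why the environment is needed there:
%pip install semantic-link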
Does the dataframe name change from "df_tables_ReqInfo" to "df_tables_OffersHireData" have any effect?
That's just a typo. I had two tables I was processing and ended up removing the middle steps - I was trying to explain the solution without exposing my table/column details.
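For anyone copying the snippet above: a consistent single-table version keeps one dataframe name throughout. A sketch with hypothetical placeholder names:
from sempy import fabric
# read one table from the remote semantic model
df = fabric.read_table(workspace="SourceWorkspace", dataset="SourceModel", table="SourceTable")
# drop columns that aren't needed (hypothetical column names)
df.drop(['ColA', 'ColB'], axis=1, inplace=True)
# write the result to the notebook's default lakehouse
df.to_lakehouse_table(name="target_table", mode="overwrite")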
@PowerNewUser Thank you so much for posting the solution. I was trying to find a way and you did it! I'll add a link to your solution in the post I created.