Does the reading limitation on semantic model tables through Spark (the same limitation as Execute Queries) also apply when reading through SemPy?
I am building a production element, and whether I choose Spark or SemPy depends on which one is NOT subject to this limitation.
Spark current limitations
#spark vs sempy
from sempy import fabric

server = "<workspace>"        # Fabric workspace hosting the semantic model
db = "<semantic-model>"       # semantic model (dataset) name
tbl = "<table>"               # table inside the semantic model

#spark: read the semantic model table via the Power BI Spark connector
dataset = spark.sql(
    f"SELECT id, SUM(value) AS value FROM pbi.{db}.{tbl} GROUP BY id"
)

#sempy: evaluate a DAX query against the same semantic model
dataset = fabric.evaluate_dax(
    workspace=server,
    dataset=db,
    dax_string=query_string,
)
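For context, the `query_string` passed to `evaluate_dax` would be a DAX query equivalent to the Spark SQL aggregation above. A minimal sketch of building one (the table and column names here are illustrative placeholders, not from the original post):

```python
def build_dax_query(tbl: str) -> str:
    """Build a DAX query equivalent to the Spark SQL
    'SELECT id, SUM(value) AS value FROM ... GROUP BY id' aggregation."""
    return (
        "EVALUATE\n"
        "SUMMARIZECOLUMNS(\n"
        f"    '{tbl}'[id],\n"
        f"    \"value\", SUM('{tbl}'[value])\n"
        ")"
    )

query_string = build_dax_query("MyTable")
print(query_string)
```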
Hi @smpa01 ,
I am just following up to ask whether the problem has been solved.
If so, can you accept the correct answer as a solution or share your solution to help other members find it faster?
Thank you very much for your cooperation!
Best Regards,
Yang
Community Support Team
If any post helped, please consider accepting it as the solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems with it, please feel free to let us know. Thanks a lot!
Hi @smpa01 ,
Spark's documented limitations include, for example, ISNULL, IS_NOT_NULL, STARTS_WITH, ENDS_WITH, and CONTAINS.
I have not seen official documentation of equivalent limitations for SemPy, and it can bypass some of the limitations Spark faces. That said, SemPy may have its own limitations around data size and query complexity.
In summary, while both Spark and SemPy have limitations, SemPy may offer more flexibility for executing complex queries directly against Power BI semantic models without some of Spark's restrictions.
Best Regards,
Yang
Community Support Team
I am more interested in the number of rows read, for which Spark has clear limitations. Can you please confirm whether SemPy has any limitation in that respect? @Anonymous I am not interested in any other limitations SemPy might have at the moment; I only need to handle a row-count limitation if one exists.
Hi @smpa01 ,
I have not seen documentation indicating that SemPy limits the number of rows that can be read.
The only limit I see in the SemPy documentation is:
The amount of data that can be retrieved is limited by the per-query maximum memory of the capacity SKU hosting the semantic model and the Spark driver node running the notebook (see Node Size).
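If that memory ceiling ever becomes a problem, one possible workaround (my own suggestion, not documented guidance) is to split a large read into several smaller DAX queries, for example by key ranges, so that no single query exceeds the per-query memory limit. A rough sketch, where the table and column names are illustrative placeholders:

```python
def build_chunked_dax(tbl: str, lo: int, hi: int) -> str:
    """DAX for one chunk: rows whose id falls in the half-open range [lo, hi).
    Table and column names are illustrative placeholders."""
    return (
        "EVALUATE\n"
        f"FILTER('{tbl}', '{tbl}'[id] >= {lo} && '{tbl}'[id] < {hi})"
    )

def chunk_ranges(min_id: int, max_id: int, step: int):
    """Yield (lo, hi) half-open ranges covering [min_id, max_id]."""
    lo = min_id
    while lo <= max_id:
        yield lo, lo + step
        lo += step

# Usage sketch: evaluate each chunk separately and concatenate the results, e.g.
# frames = [fabric.evaluate_dax(workspace=server, dataset=db,
#                               dax_string=build_chunked_dax(tbl, lo, hi))
#           for lo, hi in chunk_ranges(0, 1_000_000, 100_000)]
```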
Please see this official documentation for information:
If you have any other questions please feel free to contact me.
Best Regards,
Yang
Community Support Team