I am connecting to a database in Databricks and creating a bar chart. I am counting accountids by year (Table A). However, when I change to COUNT(DISTINCT) and use a filter from Table B —
accountid (Table A) and accountid (Table B) have a many-to-one relationship —
I get the following error:
OLE DB or ODBC error: [DataSource.Error] ODBC: ERROR [42000] [Microsoft][Hardy] (80) Syntax or semantic analysis error thrown in server while executing query. Error message from server: Error running query: org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input '1000001' expecting {<EOF>, ';'}(line 1, pos 11)
== SQL ==
select top 1000001
-----------^^^
The error happens only when I add the filter from Table B and use COUNT(DISTINCT).
The SQL query runs fine directly on Databricks; only Power BI throws this error.
What could be wrong?
Thanks,
Samantha
I changed the accountid data type in Table A to decimal and the chart seems to have rendered correctly.
accountid in Table B is still whole number.
So weird...
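For context on the parse error itself: the `select top 1000001` in the trace is a row-limit query that Power BI generates, but Spark SQL has no `TOP` clause — its row-limiting clause is `LIMIT`, which is why the statement fails at the literal `1000001`. A minimal sketch of the difference, using hypothetical table and column names (`table_a`, `order_date`) for illustration:

```sql
-- T-SQL-style limit, as generated by Power BI; Spark SQL fails to parse this:
-- SELECT TOP 1000001 * FROM table_a;

-- Spark SQL equivalent of the chart's aggregation, with Spark's LIMIT clause
-- (table_a and order_date are hypothetical names for this example):
SELECT year(order_date) AS yr,
       COUNT(DISTINCT accountid) AS accounts
FROM table_a
GROUP BY year(order_date)
LIMIT 1000001;
```

This is only background on the error message; the fix reported in this thread was on the Power BI side (column data types), not in the SQL.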
Hi, I ran into the same error message and can confirm that changing the type of the columns used for grouping to a defined type, instead of the "ABC123" (any) state, solved my problem!
Hi,
I'm also getting a similar error:
OLE DB or ODBC error: [DataSource.Error] ODBC: ERROR [42000] [Microsoft][Hardy] (80) Syntax or semantic analysis error thrown in server while executing query. Error message from server: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.sql.AnalysisException: Table or view not found: u_sc_planning_preprod_frp.u_inventory_actuals_data; line 12 pos 0 at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:47) at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:435) at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:257) at
I'm trying to connect to a Databricks cluster. I was able to do it successfully before, but suddenly I am getting this error. These tables exist in the dataset, so I don't know why it's reporting "not found".
Hi Surabhi_p22, I am running into a similar issue. I can query the database directly, but I get an error in Power BI Desktop/Power BI Service. Were you able to resolve your issue? If yes, can you share your learnings? Thanks, Jhyaikuti
Hi @jhyaikuti, I am also facing the same error while connecting Power BI with MySQL. The query works fine in MySQL, but a similar error appears while fetching data from MySQL into Power BI. Any help from anyone who has solved this error?
Thanks,
Shivam
Hi, yes, my issue was resolved. We connect to Azure Databricks as our source, and the refreshes were failing because there was not enough memory in the cluster. The Azure admin had to increase the memory and restart the cluster; after that, the tables started refreshing on the Power BI end as well.
Hi @sam245gonsalves ,
Have you solved the problem? If so, you can mark your answer as the solution.
Note that if you change the accountid data type of Table A, the accountid data type of Table B will not change automatically.
Best Regards,
Liu Yang
If this post helps, then please consider Accept it as the solution to help the other members find it more quickly.