I am using a schema-enabled lakehouse. When I set my lakehouse as the default for the notebook, this works:
DESCRIBE HISTORY tableName LIMIT 1
But when I try this syntax:
DESCRIBE HISTORY lakehouseName.schemaName.tableName LIMIT 1
I get the following error:
Illegal table name lkh_bronze_dri_rawData.dbo.myTable(line 1, pos 17)

== SQL ==
DESCRIBE HISTORY lkh_bronze_dri_rawData.dbo.myTable LIMIT 1
-----------------^^^

io.delta.sql.parser.DeltaSqlAstBuilder.$anonfun$visitTableIdentifier$1(DeltaSqlParser.scala:433)
org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:160)
io.delta.sql.parser.DeltaSqlAstBuilder.visitTableIdentifier(DeltaSqlParser.scala:430)
io.delta.sql.parser.DeltaSqlAstBuilder.$anonfun$visitDescribeDeltaHistory$3(DeltaSqlParser.scala:388)
io.delta.sql.parser.DeltaSqlAstBuilder.visitDescribeDeltaHistory(DeltaSqlParser.scala:386)
io.delta.sql.parser.DeltaSqlParser.parsePlan(DeltaSqlParser.scala:77)
org.apache.spark.sql.SparkSession.sql(SparkSession.scala:671)
org.apache.livy.repl.SQLInterpreter.execute(SQLInterpreter.scala:163)
... (remaining Livy/executor frames omitted)
Any help is appreciated. Thanks!
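As a possible workaround while the three-part name fails (a sketch, not verified against this environment): DESCRIBE HISTORY can also address a Delta table directly by its storage path, which bypasses the table-name parsing that errors here. The ABFSS path below is a placeholder; the actual path can be copied from the table's Properties pane in the lakehouse:

```sql
-- Sketch: path-based history lookup (placeholder path; copy the real
-- ABFSS path from the table's Properties pane in the lakehouse)
DESCRIBE HISTORY delta.`abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Tables/dbo/myTable` LIMIT 1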
Hi @GammaRamma ,
I'm just following up to ask whether the problem has been solved.
If so, could you accept the helpful reply as the solution, or share your own solution, so that other members can find it faster?
Thank you very much for your cooperation!
Best Regards,
Yang
Community Support Team
If any post helped, please consider accepting it as the solution so that other members can find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
Hi @GammaRamma ,
I ran it successfully using your syntax.
Please check your syntax and make sure the lakehouse name, schema name, and table name are entered correctly.
If you are not sure, you can use "Load data" >> "Spark" on the table and paste the generated names into your syntax.
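Another way to double-check the identifiers from the notebook itself (a sketch using the names from this thread; multi-part namespace support may depend on the runtime):

```sql
-- List the tables under the schema to confirm the exact three-part name
SHOW TABLES IN lkh_bronze_dri_rawData.dbo;
```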
If you have any other questions please feel free to contact me.
Best Regards,
Yang
Community Support Team
Hello @Anonymous, thank you for your reply.
It is strange that it works for you and not for me. I just retried it using the Load data feature and copying the path, and I get the same error. My colleagues have also tried on their side and get the same error. When I perform other operations on the tables in my lakehouse using the lakehouse.schema.table notation, there is no problem.
So what configuration could be different between our environments?
Thank you!
Hi @GammaRamma ,
Here are my Runtime version, my Spark pool configuration (I turned on high concurrency mode), and my region and service version:
Given how little information we have at this time, I can't tell what's causing your problem. You can create a support ticket for free, and a dedicated Microsoft engineer will help solve it for you.
It would be great if you continue to share in this thread once you know the root cause or solution, to help others with similar problems.
The link of Power BI Support: https://powerbi.microsoft.com/en-us/support/
For how to create a support ticket, please refer to How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn
Thank you for your understanding.
Best Regards,
Yang
Community Support Team
OK, it was the Spark runtime version. I had 1.2; it started working when I set it to 1.3.
Thanks!
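This outcome is consistent with the underlying engine versions: Fabric Runtime 1.2 ships Spark 3.4, while Runtime 1.3 ships Spark 3.5 with a newer Delta Lake. One way to confirm which Spark version a session is running on:

```sql
-- Check the Spark version behind the current notebook session
SELECT version();
```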