Hello,
I am enjoying the new Data in Space feature in Power BI, but I have run into a challenge in my use case.
I want to project dashboards onto different storage units, each showing information about that specific unit. The units are visually very similar, however. I have added three pictures: two of what currently happens, and one of what I would like to happen. I only achieved the desired result by going to a corner of the complex where there were more geometric differences.
The complex also has quite a lot of long corridors, all with different units behind comparable doors. It would be great if I could map visuals to them and have them stick.
I have an idea for how to achieve this: if the app also used OCR (optical character recognition) while mapping the area where visuals are placed, I think this issue would be resolved.
Does anybody have another idea?
Hey @S-Croes ,
We are consulting with the Spatial Anchors team regarding your feedback. We do not have a clear answer at this point.
Maya
Hey @S-Croes ,
I'm so excited you are using and enjoying Data in Space!
It is recommended to capture as wide a view as possible when pinning a report to a space, since the engine is mapping the physical space: the wider the view of the space, the more accurate the map (and, later, the anchor finding) will be.
Thanks, Maya
In addition, maybe it would be a good idea to have a way to show the Data in Space Writer what area has already been mapped. Perhaps a dedicated "map area" function is possible, so we could, for example, first walk around a complex to map it, and then start pinning visuals once the map has been created?
Hello again Maya,
I tried making both vertical and horizontal movements before pinning visuals to a location. But because these doors all look alike, the engine seems to mistake one door for another. With something like OCR embedded, the engine could use the numbers or other text around these doors to validate that it is using the right door. At the moment, it is placing visuals at seemingly random doors...
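To make the idea a bit more concrete, here is a rough sketch of the kind of lookup I have in mind. It uses pytesseract as a stand-in for whatever OCR the engine would actually use, and the door labels and report names are made up for illustration:

```python
# Sketch only: read the text on or near a door with OCR and use it to pick
# the report for that unit. pytesseract and the mapping below are my own
# assumptions, not part of Data in Space.
import pytesseract
from PIL import Image

# Hypothetical mapping from a door label to the report pinned at that unit.
DOOR_TO_REPORT = {
    "UNIT 12A": "Storage unit 12A dashboard",
    "UNIT 12B": "Storage unit 12B dashboard",
}

def report_for_door(image_path: str) -> str | None:
    """Return the report belonging to the door visible in the image, if any."""
    text = pytesseract.image_to_string(Image.open(image_path)).upper()
    for door_label, report in DOOR_TO_REPORT.items():
        if door_label in text:
            return report
    return None

print(report_for_door("corridor_frame.jpg"))  # e.g. "Storage unit 12A dashboard"
```

If the engine did something like this on top of the spatial map, it could confirm it has found the right door before showing a visual.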
Thanks, Sebastiaan
Hi @mshenhav ,
Any idea whether this is something you will be working on? As it is, Data in Space would sadly not be usable at this customer of mine...
Thanks again,
Sebastiaan
Hey @S-Croes ,
I have talked to our partners on the Spatial Anchors team about your feedback, and they asked if you could post your suggestion here: https://feedback.azure.com/d365community/forum/f47d9b25-0725-ec11-b6e6-000d3a4f07b8 so they can look into it.
Thanks, Maya
Hi Maya @mshenhav ,
Do you have an update on this idea? I have not received any message from the Spatial Anchors team yet...
Kind regards,
Sebastiaan
Hi @mshenhav ,
I have not yet received a response from the Spatial Anchors team. Do you know if that is expected?
Thanks for looking into it,
Kind regards,
Sebastiaan.
Hi Maya,
Sorry for the late response; I was away on holiday.
I will repost this message in that community! Thanks for the help so far.
Edit: I posted it here: https://feedback.azure.com/d365community/idea/dc924c82-1d34-ed11-a81b-000d3ae3db6e
Thanks again,
Sebastiaan.
Hi @mshenhav
Could you have a look at this thread? It's an amazing feature! We'd like to learn more about it.
Best Regards,
Community Support Team _ Jing
@mshenhav , do you maybe have an idea about this, as you are already helping me with another question about Data in Space?