The system to be created consists of two parts: a mobile app (with two roles, user and author) and a remote web-based system providing the operator's control and support functions.
The following features need to be implemented:
• (Author) A function to place 2D/3D objects in a specific industrial-type space, anchored to recognized tags or machinery. These objects (arrows, pins, text labels) can be made clickable to open a menu of documents to view.
• (User) A navigation function in the workspace. The system recognizes the tags or machinery previously registered by the author and displays in AR the objects that have been placed in the space. Tapping an object opens a menu of documents; tapping an item in the list opens that document. A button to activate remote support is always visible on screen.
• (Remote operator) When the user calls the remote operator, the operator receives a notification; if the operator accepts the video call, their screen shows the same view the user sees on the AR device. The operator can then give voice instructions or use a pencil tool to highlight objects or points in the AR space on screen. The user sees the operator's drawings on their device and hears the audio in real time, as in a video call.
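The remote-support feature above implies a message protocol between the operator's browser and the user's AR device. As a non-binding illustration (the names, fields, and transport are our assumptions, not part of this brief), a pencil stroke could travel as a small JSON message with screen-normalized coordinates, so the drawing maps correctly onto the user's view regardless of either screen's resolution:

```python
import json
from dataclasses import dataclass, asdict
from typing import List, Tuple

# Hypothetical wire format for the operator's pencil annotations.
# Coordinates are normalized to [0, 1] so a stroke drawn on the
# operator's browser maps onto the user's AR view at any resolution.

@dataclass
class Stroke:
    color: str                          # e.g. "#FF0000"
    width: float                        # stroke width as a fraction of screen height
    points: List[Tuple[float, float]]   # normalized (x, y) samples along the stroke

def encode_stroke(stroke: Stroke) -> str:
    """Serialize a stroke for transport (e.g. over a WebRTC data channel)."""
    return json.dumps({"type": "stroke", **asdict(stroke)})

def decode_stroke(payload: str) -> Stroke:
    """Rebuild a stroke on the AR device from a received message."""
    msg = json.loads(payload)
    assert msg["type"] == "stroke"
    return Stroke(color=msg["color"], width=msg["width"],
                  points=[tuple(p) for p in msg["points"]])

def to_pixels(stroke: Stroke, screen_w: int, screen_h: int) -> List[Tuple[int, int]]:
    """Map normalized points to pixel coordinates on the user's screen."""
    return [(round(x * screen_w), round(y * screen_h)) for x, y in stroke.points]
```

Overlaying strokes in 2D screen space is the simplest design; anchoring them to points in the 3D scene would additionally require hit-testing each point against the AR session's world geometry (e.g. via ARCore hit tests), which the proposal should weigh against the five-week timeline.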
A working prototype must be delivered, with all technical documentation and source code, within 5 weeks from the start of the project.
We require a project proposal indicating the technology choices (preferably Google ARCore), a proposed release plan, and the corresponding cost estimate.
One last thing: this project is a trial for us. We are interested in getting in touch with developers who can then support us as we grow.