My project has a two-degrees-of-freedom arm with a projector-camera unit on top.
The projector is a 1080p device with manual focus and the camera is also 1080p. The device additionally mounts a depth sensor that can provide the distance to a specific point at any time, and a microphone for voice recognition.
They are all connected to a Jetson Nano. The focus motor, horizontal motor and vertical motor are connected through an Arduino.
In order to get a focused image at any time, the projector focus must be calibrated, so a motor is attached to the wheel that adjusts it.
The horizontal DoF covers 360 degrees and the vertical one 120 degrees, with a 60-degree offset from the bottom.
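Since the three motors hang off the Arduino while the sensors connect to the Jetson Nano, every tool below needs a serial link between the two boards. A minimal sketch in Python, assuming a simple ASCII protocol ("M1:&lt;deg&gt; M2:&lt;deg&gt; M3:&lt;pos&gt;"), the /dev/ttyUSB0 port and a 115200 baud rate; the actual firmware commands are not specified in this brief and would have to be matched:

```python
# Minimal sketch of the Jetson-to-Arduino motor link. The ASCII protocol,
# port and baud rate are assumptions, not the real firmware interface.
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200  # assumed Arduino port on the Jetson Nano

def move_to(ser: serial.Serial, m1: int, m2: int, m3: int) -> None:
    """Command the horizontal (M1), vertical (M2) and focus (M3) motors."""
    ser.write(f"M1:{m1} M2:{m2} M3:{m3}\n".encode("ascii"))

def read_positions(ser: serial.Serial) -> str:
    """Read one telemetry line, e.g. 'M1: 245, M2: 98, M3: 43' (deliverable g)."""
    return ser.readline().decode("ascii", errors="replace").strip()

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1) as ser:
        move_to(ser, 180, 60, 0)    # example pan, tilt and focus values
        print(read_positions(ser))  # echo the reported motor positions
```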
The deliverable products are:
a) A software tool that, using the camera and/or the depth sensor, can focus the projected image at any time (a minimal sketch follows this list)
b) A tool that, at any given position of the arm, can adapt the projected image to the surface so that it always appears upright. This must apply to the whole projection output, not only playback content or still images (a pre-warp sketch follows this list)
c) A tool that can detect whether the surface is projectable or not, meaning that if the surface is disrupted or not flat there is no way to get a clean image. Saving the position (a plane-fit sketch follows this list)
d) A tool that can recognize library objects. Saving the position
e) A tool that can recognize library objects and project a tag over them. Saving the position
f) A tool that, for every position of the arm, projects an image and the current angle of the arm.
g) A tool that prints the motor positions at all times. Example: “M1: 245, M2: 98, M3: 43”, where M1 is the horizontal motor, M2 the vertical one and M3 the focus motor.
h) A tool that prints all the possible positions for M1 and M2
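For deliverable a), one workable approach is to read the depth sensor at the center of the projection and map distance to an M3 position through a calibration table, optionally refined with a camera sharpness score. A minimal sketch, where the calibration pairs are assumed example values that would be measured once by sweeping M3:

```python
# Sketch for deliverable a): map the depth reading at the projection center
# to a focus-motor (M3) position. The calibration pairs below are assumed
# example values, measured once by sweeping M3 and scoring image sharpness.
import numpy as np

CAL_DIST_M = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])  # distance in meters
CAL_M3_POS = np.array([120,  95,  80,  70,  58,  50])   # matching M3 positions

def focus_position(distance_m: float) -> int:
    """Interpolate the M3 position that focuses the image at this distance."""
    return int(round(np.interp(distance_m, CAL_DIST_M, CAL_M3_POS)))

def sharpness_score(gray_frame) -> float:
    """Variance of Laplacian: a standard camera-based focus measure that can
    refine M3 in a small search around the interpolated position."""
    import cv2
    return cv2.Laplacian(gray_frame, cv2.CV_64F).var()
```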
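For deliverable b), the usual technique is to pre-warp each output frame with a homography so the projection lands as an upright rectangle on the surface; applied to the composited framebuffer it covers the whole projection rather than a single player. A sketch with OpenCV, where the corner coordinates are assumed example values that would in practice come from projecting a test pattern and detecting it with the camera:

```python
# Sketch for deliverable b): pre-warp each frame with a homography so the
# projection appears as an upright rectangle on the surface.
import cv2
import numpy as np

W, H = 1920, 1080  # projector resolution

# Projector pixels that land on the four corners of the desired upright
# rectangle on the surface (assumed example values, measured with the camera;
# order: top-left, top-right, bottom-right, bottom-left).
corner_px = np.float32([[210, 80], [1700, 140], [1640, 990], [260, 1010]])
frame_corners = np.float32([[0, 0], [W, 0], [W, H], [0, H]])

# Homography sending each content corner to the projector pixel that hits
# the matching surface corner; warping with it cancels the keystone.
M = cv2.getPerspectiveTransform(frame_corners, corner_px)

def prewarp(frame: np.ndarray) -> np.ndarray:
    """Apply to every output frame (ideally the composited framebuffer)."""
    return cv2.warpPerspective(frame, M, (W, H))
```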
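For deliverable c), fitting a plane to depth samples taken inside the projection area gives a direct projectability test: large residuals mean the surface is disrupted or not flat. A sketch, assuming the depth sensor has already been sampled into an (N, 3) point array and using an illustrative 1 cm tolerance:

```python
# Sketch for deliverable c): flag a surface as non-projectable when the depth
# samples deviate too far from the best-fit plane. The 1 cm tolerance is an
# illustrative default, to be tuned on the real hardware.
import numpy as np

def surface_is_projectable(points_xyz: np.ndarray, tol_m: float = 0.01) -> bool:
    """points_xyz: (N, 3) depth samples on the surface, in meters."""
    # Fit the plane z = a*x + b*y + c by least squares
    xy1 = np.column_stack([points_xyz[:, 0], points_xyz[:, 1],
                           np.ones(len(points_xyz))])
    coeffs, *_ = np.linalg.lstsq(xy1, points_xyz[:, 2], rcond=None)
    residuals = points_xyz[:, 2] - xy1 @ coeffs
    # Flat enough for a clean image only if every sample stays within tol_m
    return float(np.max(np.abs(residuals))) <= tol_m
```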
All these products can be tested individually, but a script must group them as follows:
A script that runs a), b), c), d), f), g) and h) in order to get a calibration of the whole scenario. If the position is non-projectable a red image will be shown, otherwise a green image.
The user can also use specific voice commands to tell the device to run this calibration script. Example: “Calibrate the space”.
While a), b), c), d) and g) are running, the user can use specific voice commands to tell the device to project over, below, to the right or to the left of a recognizable object. Example: “Project below the painting”.
While a), b), c), d) and g) are running, the user can use specific voice commands to tell the device to save a position with a name. Example: “Save Position 428 as Cabinet”.
While a), b), c), d) and g) are running, the user can use specific voice commands to tell the device to go to a saved position. Example: “Project on Cabinet”.
The user can use specific voice commands to tell the device to turn off, returning the motors to M1: 0 and M2: 0.
A script to recognize voice commands and print them on screen. Example: “Turn the light red”. This is for another development (a sketch of the voice-command layer follows).
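A sketch of the voice-command layer the scripts above share, assuming the Python speech_recognition package with its Google recognizer (an on-device engine such as Vosk would slot in the same way); the command phrases follow the examples in this brief, and the JSON store for named positions is an assumption:

```python
# Sketch of the shared voice-command layer. The positions.json store and the
# phrase handling are assumptions; the calibration and motor hooks are stubs.
import json
import speech_recognition as sr

POSITIONS_FILE = "positions.json"  # hypothetical store for named positions

def load_positions() -> dict:
    try:
        with open(POSITIONS_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def listen_once(rec: sr.Recognizer, mic: sr.Microphone) -> str:
    """Capture one utterance and return it as lower-case text ('' on failure)."""
    with mic as source:
        rec.adjust_for_ambient_noise(source)
        audio = rec.listen(source)
    try:
        return rec.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""

def dispatch(command: str) -> None:
    print(command)  # also satisfies the "print every recognized command" script
    if command.startswith("calibrate"):
        pass  # run the a), b), c), d), f), g), h) calibration sequence here
    elif command.startswith("save position"):
        # e.g. "save position 428 as cabinet"
        _, _, number, _, name = command.split(maxsplit=4)
        positions = load_positions()
        positions[name] = int(number)
        with open(POSITIONS_FILE, "w") as f:
            json.dump(positions, f)
    elif command.startswith("project on"):
        name = command[len("project on"):].strip()
        target = load_positions().get(name)
        print("go to", name, target)  # drive M1/M2 to this saved position

if __name__ == "__main__":
    rec, mic = sr.Recognizer(), sr.Microphone()
    command = ""
    while command != "turn off":  # "turn off" should also home M1 and M2
        command = listen_once(rec, mic)
        if command:
            dispatch(command)
```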
The hardware prototype is already built, so everything can be tested when finished.