On the screen we want to see our character inside a frame, listening to us. When we speak, the code should access the Android Speech API and return the recognized sentence. Depending on the words recognized, the character should move, animate, and speak a corresponding reply sentence.
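As a rough sketch of the recognition-to-response step, a simple keyword lookup could map the recognized sentence to a character action and a spoken reply. All class, method, and keyword names below are illustrative assumptions, not part of our existing code; in the real app the input string would come from Android's `SpeechRecognizer` results and the reply would be passed to TTS.

```java
import java.util.Locale;

// Hypothetical responder: maps a recognized sentence to an action + reply.
// Names and keywords are placeholders chosen for this sketch.
public class CharacterResponder {
    public enum Action { WAVE, WALK, TALK, IDLE }

    public static final class Response {
        public final Action action;
        public final String sentence;
        public Response(Action action, String sentence) {
            this.action = action;
            this.sentence = sentence;
        }
    }

    // In the app, `recognized` would be the top result from the Android Speech API.
    public static Response respondTo(String recognized) {
        String text = recognized.toLowerCase(Locale.ROOT);
        if (text.contains("hello")) {
            return new Response(Action.WAVE, "Hello! Nice to see you.");
        }
        if (text.contains("walk")) {
            return new Response(Action.WALK, "Okay, I'm walking.");
        }
        if (text.contains("story")) {
            return new Response(Action.TALK, "Once upon a time...");
        }
        // Unrecognized input: stay idle and ask the user to repeat.
        return new Response(Action.IDLE, "Sorry, I didn't catch that.");
    }
}
```

The chosen action would then select the matching .bip animation on the 3D character while the reply sentence is sent to the TTS engine.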
We already have Google API access, Delphi code for translation and TTS, the 3D models of the character and the environment, and the .bip animation files. We can provide all of these to whoever does the coding.
The work should be completed within 10 days.