When the phone scans the image target, the 3D model appears on the target image; when the phone is moved away from the target, the model should appear in the center of the camera screen. We use touch input (drag, rotate, etc.) for the model. I instantiate the 3D model on the screen when the image target is lost, and I use the target image model's transform for the drag. The problem: when I use that transform, the instantiated model is not shown in the center of the screen after the target is lost.
I have an idea for something similar to an existing website & app; however, I would like it built with added functionality. A full brief will be explained, but the areas of expertise required will be: creating Ethereum-based smart contracts; reading information from ledgers and importing it into Unity; ARKit / ARCore / Vuforia; and issuing ERC20 contracts on mainnet as well as testnet.
I have to record my Vuforia AR camera screen with audio and share it on social media.
For the non-profit GPLv3 open-source project [url removed, login to view] you will implement recorder functionality. It will be integrated into the app, to be posted soon on Freelancer. UI prototype: [url removed, login to view]. The app has to work at least on a Lenovo Phab 2 Pro device. Deliverables: a GLVideoRendererActivity (possibly with a fragment) with a record and a reset button. The activity shows: - the current video from the camera - a silhouette outline shape of a child - captured point clouds (only during/after recording). The mean distance to the object inside the head of the shape is measured. If this distance is around 1 meter, the outline is green and not moving. The further the distance is from 1 meter, the more the outline's color turns from green through yellow to red. If the distance is < 1 meter, the outline does a repeating animation, getting smaller; if the distance is > 1 meter, it does a repeating animation, getting bigger. When the user presses the record button, the captured point clouds appear in the 3D scene (first-person video), recording 10-15 (configurable) point clouds. The point clouds are identified over time by color: every captured point cloud has a different color, changing gradually from blue to white. The user can stop the capturing process by pressing the record button again. He can then examine the 3D scan on the screen and press repeat or next. The functionality is similar to the mesh builder example from the Google Tango Examples, but with simple dots layered in augmented reality on top of the video feed instead of a VR model.
During recording, the following data is written to storage (and optionally to Google Firebase Storage): - device poses with timestamps (absolute and transformation) - point clouds with timestamps - a video file (possibly frames with timestamps). The activity will be used for multiple/repeated scanning processes, with an instructional screen overlaid in between (see the UI prototype linked above). Before starting the project you have to briefly explain which parts you will develop in Java or C++ and why. Documentation has to be provided.
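The outline-color behavior described in this posting (green at ~1 m, shading through yellow to red as the measured distance deviates) and the blue-to-white point-cloud gradient can be sketched as simple linear interpolations. This is a minimal illustration, not from the brief: the 0.5 m full-red deviation and the exact RGB endpoints are assumptions.

```java
// Hypothetical sketch of the two color ramps from the posting.
// Colors are returned as {r, g, b} components in [0, 1].
public class OutlineColor {
    static final float TARGET_M = 1.0f;              // ideal scanning distance
    static final float FULL_RED_DEVIATION_M = 0.5f;  // assumed tuning constant

    /** Outline color for a given mean distance in meters:
     *  green at exactly 1 m, yellow halfway, red at the full deviation. */
    public static float[] rampFor(float distanceM) {
        float t = Math.min(Math.abs(distanceM - TARGET_M) / FULL_RED_DEVIATION_M, 1.0f);
        if (t <= 0.5f) {
            return new float[] {t * 2f, 1f, 0f};   // green -> yellow
        }
        return new float[] {1f, 2f - t * 2f, 0f};  // yellow -> red
    }

    /** Color for the i-th of n captured point clouds: gradually blue -> white. */
    public static float[] cloudColor(int i, int n) {
        float t = n > 1 ? (float) i / (n - 1) : 0f;
        return new float[] {t, t, 1f};             // (0,0,1) blue up to (1,1,1) white
    }
}
```

In a real renderer these values would feed the outline shader and the per-cloud point color; the linear ramp is just the simplest mapping that matches the described behavior.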
Hello, I have a potential client that wants to integrate AR code into their existing application (their developers will handle integrating the code you create, based on your instructions). The AR feature should just scan a custom marker and then, using an API, display one of two options (all details are in the attached presentation). We prefer keeping the code reliant on a server where we can manage the media content, so the client and their developers don't have too much control and will still need us to manage as admins. I'm available for any clarifications or inquiries. Also, this project will be a sort of trial that will help us keep developers in mind for future projects. Looking forward. Thanks!
I'm creating a book about women's empowerment, and I have 12 videos and 5 links onto which I want to implement AR. In the book, there are specific scan symbols which the reader should scan to view the specified video or be transferred to a link when they download the AR application you'll make. I want it compatible with iOS and Android. Very urgent project; I need it in a maximum of 3 days.
Hi, I work in a production house. We need to find a developer to create a mobile AR app for us: when we point at one of the image triggers, the app would play a specific video in AR. We need this app to have a back office where we can load several trigger images and AR videos.
Australian law firm rolling out new MFDs with new software. As part of this project we are looking to create an augmented reality training session to lead staff through the different functions as well as basic maintenance (largely refilling paper and toner). Training would be loaded onto iPads.
Hi. Experienced freelancers are needed for a project related to ARCore. The project includes many steps related to augmented-reality 3D object implementation, modelling, and animation, and each step has its own requirements. At the end, all features are joined in a single application. Freelancers' experience and expertise will be examined before approval. Thanks in advance.
Hello, we have a project component requirement in C# coding. We want the Vuforia camera screen to be captured as a video with external audio, and it has to be shareable to any of the device's social media apps. It has to work seamlessly. Once the APK is approved, payment will be released for the Unity C# code. Thanks.
For research purposes I would like an app that uses the Google Mobile Vision API and that can record the result. For example: the front-facing camera has to detect faces (like the standard Mobile Vision samples, which draw a rectangle over the face) and must be able to record at the same time. I must be able to walk the streets, press a button, and record how Mobile Vision detects my face (keeping Start / Pause / Resume / Stop in mind); the result should be a saved MP4 file that I can use later for analysis. I can record a CameraSource, but I can't record a CameraSource with Mobile Vision running at the same time. What I expect is either a concrete solution to the problem I am having right now with recording and running Vision at the same time (perhaps you have already dealt with this problem), or a small test/demo with the functionality mentioned above.
The goal is to see where teammates are on the mobile phone screen while the camera points forward (basically like first-person shooter games, e.g. PUBG). 1- Facebook register/login (and other register/login options) 2- a user can create a team with a password and join an existing team with the current password 3- a user can leave a team 4- a list of team members 5- a user can push an "action button" that shares their current location when pressed, shown to all team members in a different color. - a free month if you share on Facebook - a monthly pricing feature - for every member you invite who registers, you get an extra free week (like Dropbox: the more you invite, the more you get). Fixed price only, no milestones. All source code and IP materials come to me. If you miss the deadline you will get a penalty of 10% of the project sum for every week.
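Placing a teammate marker on a forward-pointing camera view requires knowing the compass bearing from the user's GPS fix to the teammate's, which is then compared against the device heading. The standard great-circle initial-bearing formula can be sketched in plain Java; the class and method names here are illustrative, not from the posting.

```java
// Hypothetical sketch: initial great-circle bearing between two GPS fixes.
// An AR overlay would compare this bearing with the device's compass heading
// to decide where (or whether) to draw the teammate marker on screen.
public class TeamBearing {
    /** Initial bearing in degrees [0, 360) from (lat1, lon1) to (lat2, lon2),
     *  all coordinates in degrees. 0 = north, 90 = east. */
    public static double bearingDeg(double lat1, double lon1, double lat2, double lon2) {
        double p1 = Math.toRadians(lat1);
        double p2 = Math.toRadians(lat2);
        double dl = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dl) * Math.cos(p2);
        double x = Math.cos(p1) * Math.sin(p2)
                 - Math.sin(p1) * Math.cos(p2) * Math.cos(dl);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }
}
```

On Android, the two fixes would typically come from the fused location provider, and the heading from the rotation-vector sensor; the formula itself is independent of either API.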
We have a project that displays a 3D model with animation when the target is detected; when the target is lost, the model goes off the screen. But we need to display the model in the center of the screen, with the proper position, when the target is lost (when the phone is moved away from the target). We have already tried extended tracking, but it does not display the model on the phone screen.
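A common workaround for this is to detach the model from the image target on tracking lost and place it at a fixed distance along the camera's forward vector; in Unity C# that would be roughly `transform.position = cam.position + cam.forward * d`. The placement math itself is just vector arithmetic, sketched here in plain Java with an assumed 0.5 m viewing distance:

```java
// Hypothetical sketch: world position centered in front of the camera,
// used when the image target is lost. Vectors are {x, y, z}.
public class CenterPlacement {
    static final float DISTANCE_M = 0.5f; // assumed viewing distance

    /** pos = cameraPos + cameraForward * DISTANCE_M
     *  (cameraForward must be a unit vector). */
    public static float[] placeInFront(float[] cameraPos, float[] cameraForward) {
        return new float[] {
            cameraPos[0] + cameraForward[0] * DISTANCE_M,
            cameraPos[1] + cameraForward[1] * DISTANCE_M,
            cameraPos[2] + cameraForward[2] * DISTANCE_M,
        };
    }
}
```

In Vuforia's Unity integration, the tracking-lost callback of the target's trackable event handler is the usual place to trigger this re-parenting; the exact hookup depends on the SDK version in use.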
We are about to embark on a 2-month rapid mobile application build for iOS and Android. The job is for a high-profile sporting event. We are looking for a developer who can help build the project out on iOS and Android. The application is still being scoped but will likely include some augmented reality features, along with face recognition features (think Snapchat). We'd love to hear from candidates who would like to help us bring this project to life. You must be based in Melbourne, Australia and be able to work in-house.