For test scenarios, I'll provide you with 3-4 recordings only; once you have the training algorithm working, you can download the full dataset here [login to view URL]
Download the dataset archive from that link: [login to view URL]
I have a dataset that was recorded in a room: loudspeakers played recordings while microphone arrays listened, and the MUSIC algorithm was used to estimate the distance and direction of the sound. I want to replace the MUSIC algorithm with a CRNN that learns from these recordings and predicts direction and distance as output.
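To make the goal concrete, here is a minimal CRNN sketch in PyTorch: convolutional layers over (frequency, time) features, a recurrent layer over time, and a regression head for direction and distance. The layer sizes, input shape (4 channels, 257 frequency bins), and the 3-value output (azimuth, elevation, distance) are my assumptions for illustration, not the project's actual architecture.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CRNN sketch: CNN over (freq, time) features, a GRU over
    time frames, and a linear head predicting (azimuth, elevation, distance).
    All sizes are placeholder assumptions; they assume 257 frequency bins."""
    def __init__(self, in_channels=4, n_outputs=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((4, 1)),                    # pool frequency only: 257 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((4, 1)),                    # 64 -> 16
        )
        self.gru = nn.GRU(input_size=32 * 16, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_outputs)

    def forward(self, x):                            # x: (batch, ch, freq, time)
        z = self.conv(x)                             # (batch, 32, 16, time)
        b, c, f, t = z.shape
        z = z.permute(0, 3, 1, 2).reshape(b, t, c * f)  # time becomes sequence axis
        out, _ = self.gru(z)
        return self.head(out[:, -1])                 # predict from last time step

# shape check on random features: batch of 2, 4 mics, 257 bins, 12 frames
y = CRNN()(torch.randn(2, 4, 257, 12))
```

The output here is a single (azimuth, elevation, distance) triple per clip; a frame-wise prediction (applying the head to every GRU step) would also be a reasonable design.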
Consider something playing on a TV in a room (that is the source) while you listen with your ears (consider them the array). Even with your eyes closed, you can tell the direction the sound comes from. Now treat the same room as a 3D cube: you are sitting at coordinates (0, 0, 0), just as an example, and the TV is playing from some other coordinates that we need to identify.
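The room-as-a-cube analogy boils down to simple geometry: given the source's coordinates and the listener's, the direction is an azimuth/elevation pair and the distance is the Euclidean norm. A small illustrative helper (the function name and conventions are mine, not from the project code):

```python
import math

def source_direction(src, listener=(0.0, 0.0, 0.0)):
    """Convert a source position in room coordinates to
    (azimuth_deg, elevation_deg, distance) relative to the listener.
    Hypothetical helper for illustration only."""
    dx = src[0] - listener[0]
    dy = src[1] - listener[1]
    dz = src[2] - listener[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dy, dx))                # angle in the floor plane
    elevation = math.degrees(math.asin(dz / dist)) if dist > 0 else 0.0
    return azimuth, elevation, dist

# a TV one metre ahead and one metre to the left, at ear height
az, el, d = source_direction((1.0, 1.0, 0.0))
```

This is exactly the kind of label (direction plus distance) the CRNN should learn to predict from the array recordings.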
There is already a non-DNN implementation that gives a clear picture of the results we need.
Take one recording and try to extract features from it; the labels are already known to you. Things are fairly simple: this has already been done with another algorithm, available in Python, and we just need to replace that algorithm with a CRNN-based one. You can find it here [login to view URL]
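As a starting point, feature extraction for a multichannel recording could look like the sketch below: a per-channel log-magnitude STFT stacked into a (channels, freq, time) tensor. This is an assumption on my part; the repo's actual pipeline may use different features (e.g. phase or intensity-vector features), so treat this only as a shape-compatible placeholder.

```python
import numpy as np
from scipy.signal import stft

def extract_features(audio, fs=24000, nperseg=512):
    """Log-magnitude STFT features per channel, stacked for a CRNN.
    audio: array of shape (n_channels, n_samples). Sketch only; the
    real feature pipeline in the repo may differ."""
    feats = []
    for ch in audio:
        _, _, Z = stft(ch, fs=fs, nperseg=nperseg)
        feats.append(np.log(np.abs(Z) + 1e-8))   # (freq, time), log for dynamic range
    return np.stack(feats, axis=0)               # (channels, freq, time)

# tiny synthetic example: 4-channel array, 1 second of noise
x = np.random.randn(4, 24000)
F = extract_features(x)
```

With `nperseg=512` this yields 257 frequency bins per channel, which matches the usual CNN input of stacked spectrograms.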
Run the main function with the following arguments (data dir, results dir, task number, array name) and it will output 3 graphs. Keep the task number as 1 and the array name as "eigenmike".
All the relevant arguments are listed at the start of the code. Just write one line of code to call this function with those arguments and you will get the 3 graphs as a result.
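That one-line driver call would look roughly like the sketch below. The `main` defined here is only a stand-in so the example runs on its own; the real entry point lives in the repo and its exact name and signature may differ from what I assume.

```python
# Stand-in for the repo's entry point; the real main() runs the full
# pipeline on the data directory and saves 3 graphs to the results dir.
def main(data_dir, results_dir, task, array_name):
    return f"task={task}, array={array_name}, in={data_dir}, out={results_dir}"

# the one-line call described above: task 1, eigenmike array
result = main("data/", "results/", 1, "eigenmike")
```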
Note: this task needs to be completed within three days; it should take an expert less than one day.