SVD features processing #9
Hi :) After running the prediction, you will be able to identify at which frames the prediction is 0 or 1. From here, you need to convert frames to seconds using a frame-to-time conversion function. I hope this helps. Let me know if there are more questions!
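To make the frame-to-seconds conversion concrete, here is a minimal sketch using the hop length and sample rate quoted later in this thread. The helper name is hypothetical; it implements the same formula as `librosa.frames_to_time`.

```python
SR = 22050        # sample rate used in this repo (see config below)
HOP_LENGTH = 315  # STFT hop length used in this repo

def frames_to_seconds(frame_idx, sr=SR, hop_length=HOP_LENGTH):
    # Same formula librosa.frames_to_time uses: t = frame * hop_length / sr
    return frame_idx * hop_length / sr

print(frames_to_seconds(70))     # 1.0 -> frame 70 lands at exactly 1 second
print(frames_to_seconds(13589))  # ~194.1 s, near the end of the example clip
```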
Hi @kyungyunlee, thanks for your response.
@simonefrancia Hi, yes it's a single binary label for 1.6 seconds, but there is overlap during training, so there will be more than 194/1.6 segments, for instance.
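The effect of overlap on the segment count can be sketched with the window size and hops quoted later in the thread (115 frames ≈ 1.6 s, hop of 5 frames for training, 1 for inference); the exact counts in the repo may differ by one depending on loop bounds.

```python
def n_segments(total_frames, window=115, hop=1):
    # Number of `window`-frame windows that fit when consecutive windows
    # start `hop` frames apart.
    return (total_frames - window) // hop + 1

total = 13590                    # frames in the ~194 s example clip
print(n_segments(total, hop=5))  # training hop of 5  -> 2696 segments
print(n_segments(total, hop=1))  # inference hop of 1 -> 13476 segments
print(194 / 1.6)                 # only ~121 segments without any overlap
```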
OK. But is it possible to be more precise? For example, if I want a prediction for every 100 ms?
@simonefrancia Sure, but I think 100 ms is way too short to determine whether the input contains singing voice or not. The big characteristic to detect is vibrato, and 100 ms doesn't seem long enough to detect vibrato. Typically the input is around 1 second, which makes sense from a human perspective as well. Feel free to try :)
@kyungyunlee In your view, what is the smallest segment duration we can use for training?
I am not sure, since I haven't tried using shorter input. I think at this point you have to define it in terms of your task goal.
Hi @kyungyunlee ,
thanks for your repo and your ideas.
I am new to the field of audio processing, and I would like to know how features are treated in the preprocessing phase, in the specific case of CNN training for SVD. I would also like to know how predictions can be associated with the input features in order to apply a mask (0 if NO_VOICE, 1 if VOICE) and recreate an audio signal of the same length, where voice is audible when the prediction is VOICE and there is silence when the prediction is NO_VOICE.
```python
SR = 22050
FRAME_LEN = 1024
HOP_LENGTH = 315
CNN_INPUT_SIZE = 115   # 1.6 sec
CNN_OVERLAP = 5        # hop size of 5 for training, 1 for inference
N_MELS = 80
CUTOFF = 8000          # fmax = 8 kHz
```
From this I get 4280832 samples. The size of `x` is (80, 13590). After this step, `total_x` has shape (13475, 80, 115, 1).
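The sample and frame counts above are consistent with each other; a quick arithmetic check, assuming librosa-style centered framing (which gives `1 + n_samples // hop_length` frames):

```python
SR = 22050
HOP_LENGTH = 315
n_samples = 4280832

duration_sec = n_samples / SR            # length of the clip in seconds
n_frames = 1 + n_samples // HOP_LENGTH   # centered STFT/mel frame count
print(duration_sec)  # ~194.1 seconds of audio
print(n_frames)      # 13590 -> the time axis of the (80, 13590) array
```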
So these are the main steps in order to get X that is fed to the network.
What is not so clear to me is the transition between Step 2 and Step 3: why does the (80, 13590) array become (13475, 80, 115), and what does that mean?
That is the key point, I think, for understanding how to go back to the audio, apply the network's SVD prediction, and build an audio signal with the SVD mask applied and the same original length.
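The Step 2 → Step 3 transition is a sliding window over the time axis: each 115-frame slice of the spectrogram becomes one CNN input, with a trailing channel axis added. A minimal NumPy sketch (not the repo's code; a small stand-in array is used to keep memory low, and note that this arithmetic yields 13590 − 115 + 1 = 13476 windows, while the thread reports 13475, presumably from a slightly different loop bound):

```python
import numpy as np

N_MELS = 80
CNN_INPUT_SIZE = 115
hop = 1  # window hop at inference time

# Small stand-in for the (80, 13590) mel spectrogram.
x = np.zeros((N_MELS, 300), dtype=np.float32)

# Slide a 115-frame window along the time axis, stack the windows,
# then add a trailing channel axis for the CNN.
starts = range(0, x.shape[1] - CNN_INPUT_SIZE + 1, hop)
total_x = np.stack([x[:, s:s + CNN_INPUT_SIZE] for s in starts])[..., np.newaxis]
print(total_x.shape)             # (186, 80, 115, 1)
print(1 + (13590 - 115) // hop)  # 13476 windows for the full spectrogram
```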
VAD prediction has shape (13475, 1).
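Going back from per-window predictions to audio can be done by building a sample-level mask: window i starts at frame `i * hop`, i.e. at sample `i * hop * HOP_LENGTH`, and covers 115 frames. A minimal sketch under those assumptions (`apply_vocal_mask` is a hypothetical helper, not part of the repo):

```python
import numpy as np

HOP_LENGTH = 315
CNN_INPUT_SIZE = 115  # 115 frames ≈ 1.6 s

def apply_vocal_mask(audio, preds, hop=1):
    """Silence samples whose window was predicted NO_VOICE (0)."""
    mask = np.zeros(len(audio), dtype=np.float32)
    for i, p in enumerate(np.asarray(preds).ravel()):
        if p >= 0.5:  # VOICE
            start = i * hop * HOP_LENGTH
            end = min(start + CNN_INPUT_SIZE * HOP_LENGTH, len(audio))
            mask[start:end] = 1.0
    return audio * mask

# Tiny demo on synthetic audio: a single NO_VOICE prediction silences it,
# a single VOICE prediction keeps it (one window covers the whole clip here).
silenced = apply_vocal_mask(np.ones(1000, dtype=np.float32), np.array([0.0]))
kept = apply_vocal_mask(np.ones(1000, dtype=np.float32), np.array([1.0]))
print(float(silenced.sum()), float(kept.sum()))  # 0.0 1000.0
```

Because the training windows overlap, a real reconstruction would also need to decide how to combine overlapping predictions (e.g. averaging them per frame) before thresholding.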
Thank you very much