How to adjust the speed of the generated speech #77

Open
MisakaMikoto-o opened this issue Dec 18, 2023 · 11 comments

Comments

@MisakaMikoto-o

I tried using "normal speed", "very fast" and "very slow" in the prompt, but it makes little difference, and I can't find anything speed-related elsewhere in the code. Is there any way to adjust the speed of the generated speech?

@Ccj0221

Ccj0221 commented Dec 19, 2023

I'd also like to know how to adjust the speed. The audio generated by default has to be sped up to about 225% to sound normal, but doing that introduces an electronic, robotic sound.

@MisakaMikoto-o
Author

I'm currently using the third-party Python library pyrubberband to adjust the speed; it basically doesn't change the pitch.

@syq163
Collaborator

syq163 commented Dec 20, 2023

> I'm currently using the third-party Python library pyrubberband to adjust the speed; it basically doesn't change the pitch.

Thank you for providing the information. Could you please help enhance the related functionality and contribute it to EmotiVoice?

@MisakaMikoto-o
Author

MisakaMikoto-o commented Dec 22, 2023

> Thank you for providing the information. Could you please help enhance the related functionality and contribute it to EmotiVoice?

Sure, it's my pleasure. I'll post my code here for reference.

# Read a local audio file and play it back at a different speed
import pyrubberband as pyrb
import soundfile as sf
import simpleaudio as sa
import numpy as np

def play_audio_at_speed(file_path, speed_factor=1.0):
    # Read the audio file
    data, samplerate = sf.read(file_path)

    # Use pyrubberband to change the speed without changing the pitch
    new_data = pyrb.time_stretch(data, samplerate, speed_factor)

    # Convert the audio data to a format simpleaudio accepts,
    # i.e. 16-bit integers (int16); assumes mono float samples in [-1.0, 1.0]
    if new_data.dtype != np.int16:
        new_data = (new_data * 32767).astype(np.int16)

    # Play the processed audio (1 channel, 2 bytes per sample)
    play_obj = sa.play_buffer(new_data, 1, 2, samplerate)
    #play_obj.wait_done()

# Play the audio at 1.5x speed without changing the pitch
play_audio_at_speed('/mnt/hgfs/reflection/EmotiVoice/outputs/prompt_tts_open_source_joint/test_audio/audio/g_00140000/16000.wav', speed_factor=1.5)
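
One note: pyrubberband shells out to the Rubber Band command-line tool, so the rubberband binary has to be installed on the system. If you would rather save the time-stretched audio than play it, a minimal sketch (the file names are just examples):

# Save the time-stretched audio to a new wav file instead of playing it
import pyrubberband as pyrb
import soundfile as sf

def save_audio_at_speed(in_path, out_path, speed_factor=1.0):
    # Read the source audio
    data, samplerate = sf.read(in_path)
    # Stretch in time without shifting the pitch
    new_data = pyrb.time_stretch(data, samplerate, speed_factor)
    # soundfile writes float samples directly, so no int16 conversion is needed
    sf.write(out_path, new_data, samplerate)

# Example: write a 1.5x-speed copy of the input
save_audio_at_speed('input.wav', 'output_1.5x.wav', speed_factor=1.5)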

@MisakaMikoto-o
Author

MisakaMikoto-o commented Dec 22, 2023

In actual use, it can be written like this:

from demo_page import get_models, tts
from frontend import g2p_cn_en, ROOT_DIR, read_lexicon, G2p
from config.joint.config import Config
import pyrubberband as pyrb
import soundfile as sf
import simpleaudio as sa
import numpy as np

config = Config()
speakers = config.speakers
models = get_models()
lexicon = read_lexicon(f"{ROOT_DIR}/lexicon/librispeech-lexicon.txt")
g2p = G2p()

content = "hello"
text = g2p_cn_en(content, g2p, lexicon)
# tts() returns the path of the generated wav file
path = tts(text, "开心", content, "8051", models)

# Read the generated audio back and slow it down to 0.75x speed without changing the pitch
data, samplerate = sf.read(path)
new_data = pyrb.time_stretch(data, samplerate, 0.75)
if new_data.dtype != np.int16:
    new_data = (new_data * 32767).astype(np.int16)
play_obj = sa.play_buffer(new_data, 1, 2, samplerate)
#play_obj.wait_done()
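
If this comes up often, it can be wrapped in a small helper. A minimal sketch that continues from the code above (tts_at_speed is just an illustrative name, and it assumes, as above, that tts() returns the path of the generated wav file):

def tts_at_speed(content, emotion, speaker, speed_factor):
    # Synthesize with EmotiVoice, then time-stretch the result without changing the pitch
    text = g2p_cn_en(content, g2p, lexicon)
    wav_path = tts(text, emotion, content, speaker, models)
    data, samplerate = sf.read(wav_path)
    return pyrb.time_stretch(data, samplerate, speed_factor), samplerate

# Example: the same speaker and emotion as above, at 0.75x speed
audio, sr = tts_at_speed("hello", "开心", "8051", 0.75)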

@SaltedSlark

@MisakaMikoto-o Hello, the synthesized speech sounds quite mechanical, with flat and stiff prosody. Is there any way to improve it?

@Ccj0221

Ccj0221 commented Dec 2, 2024 via email

@MisakaMikoto-o
Author

> @MisakaMikoto-o Hello, the synthesized speech sounds quite mechanical, with flat and stiff prosody. Is there any way to improve it?

You can try different speakers and emotion types. Following my code above, that means changing the "开心" and "8051" arguments in path = tts(text, "开心", content, "8051", models).
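
For instance, referring to the code above (the emotion prompt "悲伤" and speaker "9017" below are only placeholders; pick a speaker ID that actually appears in config.speakers):

# Same call with a different emotion prompt and speaker ID (placeholder values)
path = tts(text, "悲伤", content, "9017", models)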

@SaltedSlark

> @MisakaMikoto-o Hello, the synthesized speech sounds quite mechanical, with flat and stiff prosody. Is there any way to improve it?
>
> You can try different speakers and emotion types. Following my code above, that means changing the "开心" and "8051" arguments in path = tts(text, "开心", content, "8051", models).

Thanks for the reply. The voice I'm using comes from cloning my own timbre rather than from the base model's speakers. Have you tried fine-tuning? How did it turn out?

@MisakaMikoto-o
Author

> @MisakaMikoto-o Hello, the synthesized speech sounds quite mechanical, with flat and stiff prosody. Is there any way to improve it?
>
> You can try different speakers and emotion types. Following my code above, that means changing the "开心" and "8051" arguments in path = tts(text, "开心", content, "8051", models).
>
> Thanks for the reply. The voice I'm using comes from cloning my own timbre rather than from the base model's speakers. Have you tried fine-tuning? How did it turn out?

I haven't tried it. My use case doesn't need a very lifelike voice, so I just use the existing speakers. You could look through the other issues for the parts about voice cloning.

@SaltedSlark

> @MisakaMikoto-o Hello, the synthesized speech sounds quite mechanical, with flat and stiff prosody. Is there any way to improve it?
>
> You can try different speakers and emotion types. Following my code above, that means changing the "开心" and "8051" arguments in path = tts(text, "开心", content, "8051", models).
>
> Thanks for the reply. The voice I'm using comes from cloning my own timbre rather than from the base model's speakers. Have you tried fine-tuning? How did it turn out?
>
> I haven't tried it. My use case doesn't need a very lifelike voice, so I just use the existing speakers. You could look through the other issues for the parts about voice cloning.

OK, thanks.
