# FastSpeech2 with BZNSYP

## Dataset
### Download and Extract the dataset
Download BZNSYP from its official website.

### Get MFA result of BZNSYP and Extract it
We use MFA to get phoneme durations for fastspeech2. You can download the pre-computed alignment baker_alignment_tone.tar.gz, or train your own MFA model by referring to the use_mfa example in our repo.
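For example, after downloading both archives, the sketch below extracts the alignment tarball next to this example (the archive layout and extraction path are assumptions; adjust to wherever you keep the data):
```bash
# Assumed: BZNSYP has already been downloaded and extracted to ~/datasets/BZNSYP
# (the path used in the preprocessing step below), and baker_alignment_tone.tar.gz
# sits in the current directory.
tar xzvf baker_alignment_tone.tar.gz
# The alignments are TextGrid files consumed by gen_duration_from_textgrid.py;
# whether they extract to ./baker_alignment_tone depends on the archive layout.
ls ./baker_alignment_tone | head
```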
### Preprocess the dataset
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of BZNSYP is `./baker_alignment_tone`.
Run the command below to preprocess the dataset:
```bash
./preprocess.sh
```
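Before launching it, a quick sanity check that both assumed paths exist can save a failed run (a minimal sketch under the path assumptions above):
```bash
# Paths are the ones assumed above; adjust if yours differ.
test -d ~/datasets/BZNSYP      || echo "BZNSYP dataset not found at ~/datasets/BZNSYP"
test -d ./baker_alignment_tone || echo "MFA alignments not found at ./baker_alignment_tone"
```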
## Train the model
```bash
./run.sh
```
If you want to train fastspeech2 on CPU, add the `--device=cpu` argument to `python3 train.py` in `run.sh`.
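One way to make that change without editing the file by hand is shown below (a sketch; it assumes `run.sh` invokes training via a single `python3 train.py ...` line):
```bash
# Append --device=cpu to the train.py invocation inside run.sh, then start training.
sed -i 's/python3 train.py/python3 train.py --device=cpu/' run.sh
./run.sh
```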
## Synthesize
We use Parallel WaveGAN as the neural vocoder. Download the pretrained Parallel WaveGAN model parallel_wavegan_baker_ckpt_0.4.zip and unzip it:
```bash
unzip parallel_wavegan_baker_ckpt_0.4.zip
```
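After unzipping, the checkpoint directory should provide the vocoder config, generator parameters, and feature statistics that the synthesis command in the Pretrained Model section expects (file names taken from that command; the archive layout itself is an assumption):
```bash
# Quick check that the vocoder files referenced later by synthesize_e2e.py are in place.
ls parallel_wavegan_baker_ckpt_0.4/pwg_default.yaml \
   parallel_wavegan_baker_ckpt_0.4/pwg_generator.pdparams \
   parallel_wavegan_baker_ckpt_0.4/pwg_stats.npy
```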
`synthesize.sh` can synthesize waveforms from `metadata.jsonl`.
`synthesize_e2e.sh` can synthesize waveforms from a text list.
```bash
./synthesize.sh
```
or
```bash
./synthesize_e2e.sh
```
See the bash files for more details of the input parameters.
## Pretrained Model
A pretrained model, trained on audio with no silence at the edges, can be downloaded here: fastspeech2_nosil_baker_ckpt_0.4.zip
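Unzip it so the checkpoint paths used in the command below resolve (the extracted directory name is assumed to match the archive name):
```bash
unzip fastspeech2_nosil_baker_ckpt_0.4.zip
```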
Then you can use the following script to synthesize `sentences.txt` using the pretrained fastspeech2 model.
```bash
python3 synthesize_e2e.py \
  --fastspeech2-config=fastspeech2_nosil_baker_ckpt_0.4/default.yaml \
  --fastspeech2-checkpoint=fastspeech2_nosil_baker_ckpt_0.4/snapshot_iter_76000.pdz \
  --fastspeech2-stat=fastspeech2_nosil_baker_ckpt_0.4/speech_stats.npy \
  --pwg-config=parallel_wavegan_baker_ckpt_0.4/pwg_default.yaml \
  --pwg-params=parallel_wavegan_baker_ckpt_0.4/pwg_generator.pdparams \
  --pwg-stat=parallel_wavegan_baker_ckpt_0.4/pwg_stats.npy \
  --text=sentences.txt \
  --output-dir=exp/debug/test_e2e \
  --device="gpu" \
  --phones-dict=fastspeech2_nosil_baker_ckpt_0.4/phone_id_map.txt
```
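The synthesized audio is written to the directory passed via `--output-dir`; listing it is a quick way to confirm the run produced output (one audio file per line of `sentences.txt` is an assumption about the output layout):
```bash
# Expect one audio file per input sentence (assumption); adjust the path if you
# changed --output-dir above.
ls exp/debug/test_e2e
```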