# Tacotron2

PaddlePaddle dynamic graph implementation of Tacotron2, a neural network architecture for speech synthesis directly from text. The implementation is based on *Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions*.
## Project Structure

```text
├── config.py          # default configuration file
├── ljspeech.py        # dataset and dataloader settings for LJSpeech
├── preprocess.py      # script to preprocess the LJSpeech dataset
├── synthesize.py      # script to synthesize a spectrogram from text
├── train.py           # script for Tacotron2 model training
└── synthesize.ipynb   # notebook example for end-to-end TTS
```
## Dataset

We experiment with the LJSpeech dataset. Download and unzip LJSpeech:

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```
Then preprocess the data by running `preprocess.py`; the preprocessed data will be placed in the `--output` directory.

```bash
python preprocess.py \
    --input=${DATAPATH} \
    --output=${PREPROCESSEDDATAPATH} \
    -v
```

For more help on the arguments, run `python preprocess.py --help`.
## Train the model

The Tacotron2 model can be trained by running `train.py`:

```bash
python train.py \
    --data=${PREPROCESSEDDATAPATH} \
    --output=${OUTPUTPATH} \
    --device=gpu
```

To train on CPU, set `--device=cpu`. To train on multiple GPUs, set `--nprocs` to the number of GPUs.
By default, training resumes from the latest checkpoint in `--output`. To start a fresh training run, use a new `${OUTPUTPATH}` that contains no checkpoint. To resume from another existing model, set `--checkpoint_path` to the checkpoint path you want to load.

Note: the checkpoint path must not contain the file extension.
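Because `--checkpoint_path` is passed without the file extension, a small helper can locate the newest checkpoint in an output directory and strip the suffix before building the command line. This is only a sketch: the `checkpoints/` subdirectory name and the `.pdparams` suffix are assumptions about the on-disk layout, not something this README guarantees.

```python
from pathlib import Path

def latest_checkpoint(output_dir):
    """Return the newest checkpoint path without its extension, or None.

    Hypothetical helper: assumes parameter files live under
    ``<output_dir>/checkpoints`` with a ``.pdparams`` suffix.
    """
    ckpts = sorted(Path(output_dir).glob("checkpoints/*.pdparams"),
                   key=lambda p: p.stat().st_mtime)
    if not ckpts:
        return None
    # --checkpoint_path expects the path with no file extension
    return str(ckpts[-1].with_suffix(""))
```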
For more help on the arguments, run `python train.py --help`.
## Synthesis

After training Tacotron2, spectrograms can be synthesized by running `synthesize.py`:

```bash
python synthesize.py \
    --config=${CONFIGPATH} \
    --checkpoint_path=${CHECKPOINTPATH} \
    --input=${TEXTPATH} \
    --output=${OUTPUTPATH} \
    --device=gpu
```

The `${CONFIGPATH}` must match the `${CHECKPOINTPATH}`.

For more help on the arguments, run `python synthesize.py --help`.
You can then find the spectrogram files in `${OUTPUTPATH}`; they can be fed to a vocoder such as WaveFlow to produce audio files.
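Before handing the synthesized spectrograms to a vocoder, it can help to sanity-check them. The sketch below assumes the spectrograms are saved as `.npy` arrays of shape `(n_frames, n_mels)`; the actual output format of `synthesize.py` may differ, so treat this as illustrative only.

```python
import numpy as np

def load_mel(path):
    """Load a synthesized spectrogram for vocoding.

    Assumption (not guaranteed by this README): spectrograms are saved
    as ``.npy`` arrays of shape (n_frames, n_mels).
    """
    mel = np.load(path)
    if mel.ndim != 2:
        raise ValueError(f"expected a 2-D spectrogram, got shape {mel.shape}")
    return mel
```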
## Pretrained Models

Pretrained models can be downloaded from the links below. We provide two models with different configurations.

- This model uses a binary classifier to predict the stop token: `tacotron2_ljspeech_ckpt_0.3.zip`
- This model has no stop-token predictor; it uses the attention peak position to decide whether all the content has been uttered. Guided attention loss is also used to speed up training. This model is trained with `configs/alternative.yaml`: `tacotron2_ljspeech_ckpt_0.3_alternative.zip`
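The attention-peak stopping criterion of the alternative model can be sketched as follows. This is a hedged illustration of the idea, not the model's actual code: the function names, the `patience` parameter, and the exact condition are assumptions made for the example.

```python
import numpy as np

def should_stop(alignments, n_text, patience=5):
    """Attention-peak stopping sketch (illustrative, not the real model).

    ``alignments`` is a list of per-frame attention vectors over the
    ``n_text`` input positions. Decoding stops once the attention peak
    has rested on the final text position for ``patience`` frames.
    """
    if len(alignments) < patience:
        return False
    peaks = [int(np.argmax(a)) for a in alignments[-patience:]]
    return all(p == n_text - 1 for p in peaks)
```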
## Notebook: End-to-end TTS

See `synthesize.ipynb` for details about end-to-end TTS with Tacotron2 and WaveFlow.