WaveFlow with LJSpeech

Dataset

Download the dataset.

wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2

Extract the dataset.

tar xjvf LJSpeech-1.1.tar.bz2

Preprocess the dataset.

Assume the path to save the preprocessed dataset is ljspeech_waveflow. Run the command below to preprocess the dataset.

python preprocess.py --input=LJSpeech-1.1/  --output=ljspeech_waveflow
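The preprocessing step extracts the spectral features the vocoder is trained on. For intuition, here is a minimal sketch of computing a log-magnitude mel spectrogram with librosa; the exact parameters (n_fft, hop_length, n_mels and the 1e-5 floor) are illustrative assumptions, not necessarily what preprocess.py uses.

import numpy as np
import librosa

# LJSpeech audio is sampled at 22050 Hz.
wav, sr = librosa.load("LJSpeech-1.1/wavs/LJ001-0001.wav", sr=22050)

# Mel spectrogram; n_fft, hop_length and n_mels are assumed values.
mel = librosa.feature.melspectrogram(
    y=wav, sr=sr, n_fft=1024, hop_length=256, n_mels=80)

# Log magnitude, clipped at a small floor to avoid log(0).
log_mel = np.log(np.clip(mel, 1e-5, None))

np.save("example_mel.npy", log_mel.astype(np.float32))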

Train the model

The training script requires 4 command line arguments: --data is the path of the training dataset and --output is the path of the output directory (we recommend using a subdirectory of runs to manage different experiments).

--device should be "cpu" or "gpu", and --nprocs is the number of processes used to train the model in parallel.

python train.py --data=ljspeech_waveflow/ --output=runs/test --device="gpu" --nprocs=1

If you want distributed training, set --nprocs to a larger value (e.g. 4), as in the example below. Note that distributed training on CPU is not supported yet.
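For example, to train on 4 GPUs:

python train.py --data=ljspeech_waveflow/ --output=runs/test --device="gpu" --nprocs=4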

Synthesize

Synthesize waveforms. We assume --input is a directory containing several mel spectrograms (log magnitude) in .npy format. The output is saved in the --output directory as several .wav files, each with the same name as its corresponding mel spectrogram.

--checkpoint_path should be the path of the parameter file (.pdparams) to load. Note that the extension .pdparams is not included here.

--device specifies the device to run synthesis on.

python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2000000' --device="gpu" --verbose
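Before synthesis, a quick sanity check on the inputs can help: each .npy file should hold one 2-D log-magnitude mel spectrogram. A minimal sketch (the (n_mels, n_frames) layout shown here is an assumption):

from pathlib import Path
import numpy as np

for path in sorted(Path("mels").glob("*.npy")):
    mel = np.load(path)
    # Expect a 2-D float array; (n_mels, n_frames) layout is assumed.
    print(path.name, mel.shape, mel.dtype)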

Pretrained Model

A pretrained model with 128 residual channels can be downloaded: waveflow_ljspeech_ckpt_0.3.zip.
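Assuming the archive extracts to a directory waveflow_ljspeech_ckpt_0.3/ containing the parameter file step-2000000.pdparams (the layout inside the zip is an assumption), synthesis with the pretrained model would look like:

python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='waveflow_ljspeech_ckpt_0.3/step-2000000' --device="gpu"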