WaveNet

PaddlePaddle dynamic graph implementation of WaveNet, a convolutional-network-based vocoder. WaveNet was originally proposed in WaveNet: A Generative Model for Raw Audio. In this experiment, however, the implementation follows the teacher model in ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech.

Dataset

We experiment with the LJSpeech dataset. Download and extract LJSpeech.

wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
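
After extraction, you should see the layout below: metadata.csv holds the transcripts and wavs/ holds the audio clips.

ls LJSpeech-1.1
# README  metadata.csv  wavs/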

Project Structure

├── data.py          data processing
├── configs/         (example) configuration files
├── synthesis.py     script to synthesize waveforms from mel spectrograms
├── train.py         script to train a model
└── utils.py         utility functions

Saving & Loading

train.py and synthesis.py have 3 arguments in common: --checkpoint, --iteration, and output.

  1. output is the directory for saving results. During training, checkpoints are saved in checkpoints/ in output and the tensorboard log is saved in log/ in output. Other possible outputs are saved in states/ in output. During synthesis, audio files and other possible outputs are saved in synthesis/ in output. So after training and synthesizing with the same output directory, the file structure of the output directory looks like this.
├── checkpoints/      # checkpoint directory (including *.pdparams, *.pdopt and a text file `checkpoint` that records the latest checkpoint)
├── states/           # audio files generated at validation and other possible outputs
├── log/              # tensorboard log
└── synthesis/        # synthesized audio files and other possible outputs
  2. --checkpoint and --iteration are for loading from an existing checkpoint. Loading an existing checkpoint follows these rules: if --checkpoint is provided, the checkpoint specified by --checkpoint is loaded; if --checkpoint is not provided, we try to load the checkpoint specified by --iteration from the checkpoint directory; if --iteration is not provided either, we try to load the latest checkpoint from the checkpoint directory. The three modes are sketched below.
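
For illustration, the three loading modes look like this when resuming training. The checkpoint name and iteration number below are hypothetical; substitute the ones recorded in your own checkpoints/ directory.

# --checkpoint given: load exactly this checkpoint
python train.py --config=./configs/wavenet_single_gaussian.yaml --data=./LJSpeech-1.1/ --device=0 \
    --checkpoint="experiment/checkpoints/step-500000" experiment

# --iteration given: load the checkpoint saved at this iteration from experiment/checkpoints/
python train.py --config=./configs/wavenet_single_gaussian.yaml --data=./LJSpeech-1.1/ --device=0 \
    --iteration=500000 experiment

# neither given: load the latest checkpoint listed in experiment/checkpoints/checkpoint
python train.py --config=./configs/wavenet_single_gaussian.yaml --data=./LJSpeech-1.1/ --device=0 \
    experiment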

Train

Train the model using train.py. For help on usage, try python train.py --help.

usage: train.py [-h] [--data DATA] [--config CONFIG] [--device DEVICE]
                [--checkpoint CHECKPOINT | --iteration ITERATION]
                output

Train a WaveNet model with LJSpeech.

positional arguments:
  output                        path to save results

optional arguments:
  -h, --help                    show this help message and exit
  --data DATA                   path of the LJspeech dataset
  --config CONFIG               path of the config file
  --device DEVICE               device to use
  --checkpoint CHECKPOINT       checkpoint to resume from
  --iteration ITERATION         the iteration of the checkpoint to load from output directory

  • --data is the path of the LJSpeech dataset, the extracted folder from the downloaded archive (the folder which contains metadata.csv).

  • --config is the configuration file to use. The provided configurations can be used directly, or you can change some values in the configuration file and train the model with a different config.

  • --device is the device (gpu id) to use for training. -1 means CPU.

  • --checkpoint is the path of the checkpoint.

  • --iteration is the iteration of the checkpoint to load from output directory.

  • output is the directory to save results; all results are saved in this directory.

See Saving-&-Loading for details of checkpoint loading.

Example script:

python train.py \
    --config=./configs/wavenet_single_gaussian.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    experiment

You can monitor the training log via TensorBoard, using the commands below.

cd experiment/log
tensorboard --logdir=.
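
Equivalently, you can point --logdir at the log directory without changing into it:

tensorboard --logdir=experiment/log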

Synthesis

usage: synthesis.py [-h] [--data DATA] [--config CONFIG] [--device DEVICE]
                    [--checkpoint CHECKPOINT | --iteration ITERATION]
                    output

Synthesize valid data from LJspeech with a wavenet model.

positional arguments:
  output                        path to save the synthesized audio

optional arguments:
  -h, --help                    show this help message and exit
  --data DATA                   path of the LJspeech dataset
  --config CONFIG               path of the config file
  --device DEVICE               device to use
  --checkpoint CHECKPOINT       checkpoint to resume from
  --iteration ITERATION         the iteration of the checkpoint to load from output directory

  • --data is the path of the LJSpeech dataset. In principle, a dataset is not needed for synthesis, but since the input is mel spectrogram, we need to get mel spectrograms from the audio files of the dataset.
  • --config is the configuration file to use. You should use the same configuration with which you trained your model.
  • --device is the device (gpu id) to use for synthesis. -1 means CPU.
  • --checkpoint is the checkpoint to load.
  • --iteration is the iteration of the checkpoint to load from output directory.
  • output is the directory to save the synthesized audio. Audio files are saved in synthesis/ in the output directory. See Saving-&-Loading for details of checkpoint loading.

Example script:

python synthesis.py \
    --config=./configs/wavenet_single_gaussian.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --checkpoint="experiment/checkpoints/step-1000000" \
    experiment

or

python synthesis.py \
    --config=./configs/wavenet_single_gaussian.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --iteration=1000000 \
    experiment
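
In both cases, the synthesized audio files land in synthesis/ under the output directory. The listing below is only indicative; the actual file names depend on the validation utterances.

ls experiment/synthesis/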