ClariNet

PaddlePaddle dynamic graph implementation of ClariNet, a convolutional network based vocoder. The implementation is based on the paper ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech.

Dataset

We experiment with the LJSpeech dataset. Download and unzip LJSpeech.

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```
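Each record in LJSpeech's `metadata.csv` is pipe-delimited: a file id, the raw transcription, and the normalized transcription. A minimal parsing sketch (the helper name is ours, not part of this repo):

```python
def parse_metadata_line(line):
    """Split one LJSpeech metadata.csv record into its three fields."""
    file_id, text, normalized = line.rstrip("\n").split("|")
    return file_id, text, normalized

# Example record in the LJSpeech format:
fid, raw, norm = parse_metadata_line("LJ001-0001|Printing, in the only sense|printing, in the only sense")
```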

Project Structure

├── data.py          data_processing
├── configs/         (example) configuration file
├── synthesis.py     script to synthesize waveform from mel_spectrogram
├── train.py         script to train a model
└── utils.py         utility functions

Saving & Loading

`train.py` and `synthesis.py` have 3 arguments in common: `--checkpoint`, `--iteration` and `output`.

  1. `output` is the directory for saving results. During training, checkpoints are saved in `checkpoints/` in `output`, the tensorboard log is saved in `log/` in `output`, and other possible outputs are saved in `states/` in `output`. During synthesis, audio files and other possible outputs are saved in `synthesis/` in `output`. So after training and synthesizing with the same `output` directory, the file structure of the `output` directory looks like this.
├── checkpoints/      # checkpoint directory (including *.pdparams, *.pdopt and a text file `checkpoint` that records the latest checkpoint)
├── states/           # audio files generated at validation and other possible outputs
├── log/              # tensorboard log
└── synthesis/        # synthesized audio files and other possible outputs
  2. `--checkpoint` and `--iteration` are used for loading from an existing checkpoint. Loading an existing checkpoint follows this rule: if `--checkpoint` is provided, the checkpoint specified by `--checkpoint` is loaded; if `--checkpoint` is not provided, we try to load the checkpoint specified by `--iteration` from the checkpoint directory; if `--iteration` is not provided either, we try to load the latest checkpoint from the checkpoint directory.
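The loading rule above can be sketched as follows. This is only an illustration of the precedence, not the actual parakeet internals; the function name is ours, and the `step-<iteration>` naming follows the example checkpoint paths used later in this README.

```python
from pathlib import Path

def resolve_checkpoint(checkpoint_dir, checkpoint=None, iteration=None):
    """Illustrative sketch: explicit path first, then a specific
    iteration, then the latest checkpoint recorded on disk."""
    checkpoint_dir = Path(checkpoint_dir)
    if checkpoint is not None:               # --checkpoint wins if provided
        return Path(checkpoint)
    if iteration is not None:                # otherwise try --iteration
        return checkpoint_dir / f"step-{iteration}"
    record = checkpoint_dir / "checkpoint"   # text file recording the latest checkpoint
    if record.exists():
        latest = record.read_text().strip().splitlines()[-1]
        return checkpoint_dir / latest
    return None                              # nothing to load; train from scratch
```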

Train

Train the model using `train.py`, following the usage displayed by `python train.py --help`.

usage: train.py [-h] [--config CONFIG] [--device DEVICE] [--data DATA]
                [--checkpoint CHECKPOINT | --iteration ITERATION]
                [--wavenet WAVENET]
                output

Train a ClariNet model with LJspeech and a trained WaveNet model.

positional arguments:
  output                path to save experiment results

optional arguments:
  -h, --help                    show this help message and exit
  --config CONFIG               path of the config file
  --device DEVICE               device to use
  --data DATA                   path of LJspeech dataset
  --checkpoint CHECKPOINT       checkpoint to resume from
  --iteration ITERATION         the iteration of the checkpoint to load from output directory
  --wavenet WAVENET             wavenet checkpoint to use

- `--config` is the configuration file to use. The provided configuration can be used directly, or you can change some values in it and train the model with a different config.
- `--device` is the device (gpu id) to use for training. `-1` means CPU.
- `--data` is the path of the LJSpeech dataset, i.e. the folder extracted from the downloaded archive (the folder that contains `metadata.csv`).

- `--checkpoint` is the path of the checkpoint.
- `--iteration` is the iteration of the checkpoint to load from output directory.
- `output` is the directory to save results; all results are saved in this directory.

See [Saving & Loading](#saving--loading) for details of checkpoint loading.

- `--wavenet` is the path of the WaveNet checkpoint to load. When you start training a ClariNet model without loading from a ClariNet checkpoint, you should already have trained a WaveNet model with a single-Gaussian output distribution. Make sure the config of the teacher model matches that of the trained WaveNet model.
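Since the teacher WaveNet's hyperparameters must agree between the two configs, a quick consistency check before launching training can save a failed run. A sketch, assuming the configs have been loaded as plain dicts; the key names here are placeholders, not the actual config schema:

```python
def mismatched_teacher_keys(clarinet_cfg, wavenet_cfg,
                            keys=("n_loop", "n_layer", "filter_size")):
    """Return the teacher-related keys whose values differ between the configs."""
    return [k for k in keys if clarinet_cfg.get(k) != wavenet_cfg.get(k)]

# Usage: abort early if anything differs.
bad = mismatched_teacher_keys({"n_loop": 10, "n_layer": 3},
                              {"n_loop": 10, "n_layer": 4})
```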

Example script:

```bash
python train.py \
    --config=./configs/clarinet_ljspeech.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --wavenet="wavenet-step-2000000" \
    experiment
```

You can monitor the training log via tensorboard, using the script below.

```bash
cd experiment/log
tensorboard --logdir=.
```

Synthesis

Synthesize waveform using `synthesis.py`, following the usage displayed by `python synthesis.py --help`.

usage: synthesis.py [-h] [--config CONFIG] [--device DEVICE] [--data DATA]
                    [--checkpoint CHECKPOINT | --iteration ITERATION]
                    output

Synthesize audio files from mel spectrogram in the validation set.

positional arguments:
  output                        path to save the synthesized audio

optional arguments:
  -h, --help                    show this help message and exit
  --config CONFIG               path of the config file
  --device DEVICE               device to use.
  --data DATA                   path of LJspeech dataset
  --checkpoint CHECKPOINT       checkpoint to resume from
  --iteration ITERATION         the iteration of the checkpoint to load from output directory

- `--config` is the configuration file to use. You should use the same configuration with which you trained your model.
- `--device` is the device (gpu id) to use for synthesis. `-1` means CPU.
- `--data` is the path of the LJSpeech dataset. In principle, a dataset is not needed for synthesis, but since the input is mel spectrogram, we need to get the mel spectrogram from audio files.
- `--checkpoint` is the checkpoint to load.
- `--iteration` is the iteration of the checkpoint to load from the output directory.
- `output` is the directory to save the synthesized audio. Audio files are saved in `synthesis/` in the `output` directory. See [Saving & Loading](#saving--loading) for details of checkpoint loading.

Example script:

```bash
python synthesis.py \
    --config=./configs/clarinet_ljspeech.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --iteration=500000 \
    experiment
```

or

```bash
python synthesis.py \
    --config=./configs/clarinet_ljspeech.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --checkpoint="experiment/checkpoints/step-500000" \
    experiment
```