Deep Voice 3

PaddlePaddle dynamic-graph implementation of Deep Voice 3, a convolutional-network-based text-to-speech generative model. The implementation is based on the paper Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning.

We implement Deep Voice 3 with Paddle Fluid in dynamic-graph (dygraph) mode, which makes it convenient to build flexible network architectures.
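For illustration, a minimal dygraph snippet (this uses the paddle.fluid 1.x API that this example targets; the exact API is version-dependent):

import numpy as np
import paddle.fluid as fluid

# Dynamic graph mode: operations execute eagerly inside the guard,
# so networks can be built and debugged like ordinary Python code.
with fluid.dygraph.guard(fluid.CPUPlace()):
    x = fluid.dygraph.to_variable(np.ones((2, 3), dtype="float32"))
    y = x * 2  # runs immediately; no separate graph compilation step
    print(y.numpy())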

Dataset

We experiment with the LJSpeech dataset. Download and unzip LJSpeech.

wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
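After extraction, the folder contains a wavs/ directory and a pipe-delimited metadata.csv (file id, raw transcript, normalized transcript). A quick sanity check of the transcripts:

import csv

# Preview the first three LJSpeech transcripts.
# metadata.csv is pipe-delimited: file id | raw text | normalized text.
with open("LJSpeech-1.1/metadata.csv", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="|", quoting=csv.QUOTE_NONE)
    for i, row in enumerate(reader):
        file_id, normalized = row[0], row[-1]
        print(file_id, "->", normalized)
        if i >= 2:
            break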

Model Architecture

Figure: Deep Voice 3 model architecture

The model consists of an encoder, a decoder and a converter (and a speaker embedding for multispeaker models). The encoder and the decoder together form the seq2seq part of the model, and the converter forms the postnet part.
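As a structural sketch only (the class names below mirror the paper's components and are hypothetical, not this repo's exact API), the forward pass composes the three parts like this:

# Hypothetical component names; a sketch of the seq2seq + postnet split,
# not the actual classes defined in this repo.
class DeepVoice3Sketch:
    def __init__(self, encoder, decoder, converter):
        self.encoder = encoder      # text -> (keys, values)
        self.decoder = decoder      # attends over (keys, values), predicts mel frames
        self.converter = converter  # postnet: decoder output -> linear spectrogram

    def forward(self, text, mel_input):
        keys, values = self.encoder(text)
        mel_output, alignments = self.decoder((keys, values), mel_input)
        linear_output = self.converter(mel_output)
        return mel_output, linear_output, alignments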

Project Structure

├── data.py          data processing
├── configs/         (example) configuration files
├── sentences.txt    sample sentences
├── synthesis.py     script to synthesize waveform from text
├── train.py         script to train a model
└── utils.py         utility functions

Train

Train the model using train.py, following the usage displayed by python train.py --help.

usage: train.py [-h] [-c CONFIG] [-s DATA] [-r RESUME] [-o OUTPUT] [-g DEVICE]

Train a Deep Voice 3 model with LJSpeech dataset.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        experiment config
  -s DATA, --data DATA  The path of the LJSpeech dataset.
  -r RESUME, --resume RESUME
                        checkpoint to load
  -o OUTPUT, --output OUTPUT
                        The directory to save result.
  -g DEVICE, --device DEVICE
                        device to use
  • --config is the configuration file to use. The provided configs/ljspeech.yaml can be used directly, or you can edit values in it and train the model with a different configuration.
  • --data is the path of the LJSpeech dataset, i.e. the folder extracted from the downloaded archive (the one that contains metadata.csv).
  • --resume is the path of a checkpoint. If provided, the model loads the checkpoint before training.
  • --output is the directory where all results are saved. Its structure is shown below.
├── checkpoints      # saved model checkpoints
├── log              # TensorBoard logs
└── states           # training and evaluation results
    ├── alignments   # attention
    ├── lin_spec     # linear spectrogram
    ├── mel_spec     # mel spectrogram
    └── waveform     # waveform (.wav files)
  • --device is the device (GPU id) to use for training; -1 means CPU. A device-selection sketch follows below.
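As a sketch of how the -1 convention typically maps to a Paddle place (the actual logic in train.py may differ):

import paddle.fluid as fluid

def make_place(device):
    # -1 selects the CPU; a non-negative id selects that GPU.
    return fluid.CPUPlace() if device < 0 else fluid.CUDAPlace(device)

# Training then runs inside a dygraph guard on the chosen place:
# with fluid.dygraph.guard(make_place(args.device)):
#     ...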

Example script:

python train.py --config=configs/ljspeech.yaml --data=./LJSpeech-1.1/ --output=experiment --device=0
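To resume training from a saved checkpoint, add --resume; the step number in the checkpoint name below is illustrative:

python train.py --config=configs/ljspeech.yaml --data=./LJSpeech-1.1/ --output=experiment --device=0 --resume=experiment/checkpoints/model_step_005000000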

You can monitor the training log with TensorBoard, using the commands below.

cd experiment/log
tensorboard --logdir=.

Synthesis

Synthesize waveforms from a trained model using synthesis.py, following the usage displayed by python synthesis.py --help.

usage: synthesis.py [-h] [-c CONFIG] [-g DEVICE] checkpoint text output_path

Synthesize waveform from a checkpoint.

positional arguments:
  checkpoint            checkpoint to load.
  text                  text file to synthesize
  output_path           path to save results

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        experiment config.
  -g DEVICE, --device DEVICE
                        device to use
  • --config is the configuration file to use. You should use the same configuration with which you trained your model.
  • checkpoint is the checkpoint to load.
  • text is the text file to synthesize.
  • output_path is the directory where results are saved: generated audio files (*.wav) and attention plots (*.png) for each sentence.
  • --device is the device (GPU id) to use for synthesis; -1 means CPU.

Example script:

python synthesis.py --config=configs/ljspeech.yaml --device=0 experiment/checkpoints/model_step_005000000 sentences.txt generated
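To sanity-check a generated waveform, any wav reader works; here with soundfile (an assumption, not a dependency of this example), and the file name is illustrative:

import soundfile as sf

# Load one generated waveform and report its sample rate and duration.
audio, sample_rate = sf.read("generated/0.wav")
print(sample_rate, "Hz,", len(audio) / sample_rate, "seconds")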