Paddle implementation of Deep Voice 3 in dynamic graph, a convolutional-network-based text-to-speech synthesis model. The implementation is based on [Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning](https://arxiv.org/abs/1710.07654).
We implement Deep Voice 3 in Paddle Fluid with dynamic graph, which is convenient for building flexible network architectures.
The model consists of an encoder, a decoder and a converter (and a speaker embedding for multispeaker models). The encoder, together with the decoder, forms the seq2seq part of the model, and the converter forms the postnet part.
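To train on LJSpeech, first download and extract the dataset; `--data` below should point to the extracted folder. A minimal sketch, assuming you work from the repository root (the URL is the official LJSpeech distribution):

```bash
# Download the official LJSpeech-1.1 archive and extract it in place.
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```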
1. `--config` is the configuration file to use. The provided `ljspeech.yaml` can be used directly, and you can also change values in the configuration file to train the model with a different setup.
2. `--data` is the path of the LJSpeech dataset, i.e. the folder extracted from the downloaded archive (the folder which contains `metadata.csv`).
3. `--resume` is the path of a checkpoint. If it is provided, the model loads the checkpoint before training.
4. `--output` is the directory to save results; all results are saved in this directory. The structure of the output directory is shown below. An example training command is sketched right after this list.
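A sketch of a training invocation; the entry-point name `train.py` is an assumption, so adjust it and the paths to match this repository:

```bash
# Hypothetical training command: `train.py` and the paths below are
# placeholders; substitute the actual script and your local paths.
python train.py \
    --config=ljspeech.yaml \
    --data=./LJSpeech-1.1 \
    --output=experiment
# To continue from a saved checkpoint, add: --resume=<path-to-checkpoint>
```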
To synthesize audio with a trained model, use `synthesis.py`:

```
usage: synthesis.py [-h] [-c CONFIG] [-g DEVICE] checkpoint text output_path

Synthesize waveform with a checkpoint.

positional arguments:
  checkpoint            checkpoint to load.
  text                  text file to synthesize
  output_path           path to save results

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        experiment config.
  -g DEVICE, --device DEVICE
                        device to use
```
1. `--config` is the configuration file to use. You should use the same configuration with which you trained your model.
2. `checkpoint` is the checkpoint to load.
3. `text` is the text file to synthesize.
4. `output_path` is the directory to save results. The output path contains the generated audio files (`*.wav`) and attention plots (`*.png`) for each sentence.
5. `--device` is the device (GPU id) to use for synthesis. `-1` means CPU. An example synthesis command is sketched after this list.
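A sketch of a synthesis invocation; the checkpoint path, text file, and output directory below are placeholders, not names shipped with the repository:

```bash
# Synthesize each sentence in sentences.txt with a trained checkpoint
# on GPU 0; checkpoint path, text file, and output dir are placeholders.
python synthesis.py \
    --config=ljspeech.yaml \
    --device=0 \
    experiment/checkpoints/model_step_100000 \
    sentences.txt \
    generated
```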