From d1d6c206720963ae39b811e4e2c515985669f277 Mon Sep 17 00:00:00 2001
From: chenfeiyu
Date: Wed, 30 Dec 2020 14:36:23 +0800
Subject: [PATCH] add README for transformer_tts, waveflow and wavenet

---
 examples/transformer_tts/README.md | 48 ++++++++++++++++++++++++++++++
 examples/waveflow/README.md        | 48 ++++++++++++++++++++++++++++++
 examples/wavenet/README.md         | 48 ++++++++++++++++++++++++++++++
 3 files changed, 144 insertions(+)
 create mode 100644 examples/transformer_tts/README.md
 create mode 100644 examples/waveflow/README.md
 create mode 100644 examples/wavenet/README.md

diff --git a/examples/transformer_tts/README.md b/examples/transformer_tts/README.md
new file mode 100644
index 0000000..45f8816
--- /dev/null
+++ b/examples/transformer_tts/README.md
@@ -0,0 +1,48 @@
+# TransformerTTS with LJSpeech
+
+## Dataset
+
+### Download the dataset.
+
+```bash
+wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
+```
+
+### Extract the dataset.
+
+```bash
+tar xjvf LJSpeech-1.1.tar.bz2
+```
+
+### Preprocess the dataset.
+
+Assume the path to save the preprocessed dataset is `ljspeech_transformer_tts`. Run the command below to preprocess the dataset.
+
+```bash
+python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_transformer_tts
+```
+
+## Train the model
+
+The training script requires 4 command line arguments.
+`--data` is the path of the training dataset, and `--output` is the path of the output directory (we recommend using a subdirectory in `runs` to manage different experiments).
+
+`--device` should be "cpu" or "gpu", and `--nprocs` is the number of processes used to train the model in parallel.
+
+```bash
+python train.py --data=ljspeech_transformer_tts/ --output=runs/test --device="gpu" --nprocs=1
+```
+
+If you want distributed training, set a larger `--nprocs` (e.g. 4). Note that distributed training with cpu is not supported yet.
+
+## Synthesize
+
+Synthesize mel spectrograms from text. We assume `--input` is a text file with one sentence per line, and `--output` is a directory to save the synthesized mel spectrograms (log magnitude) in `.npy` format. The mel spectrograms can be used with `WaveFlow` to generate waveforms.
+
+`--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extension name `.pdparams` is not included here.
+
+`--device` specifies the device on which to run synthesis. Due to the autoregressive nature of the model, using cpu may be faster.
+
+```bash
+python synthesize.py --input=sentence.txt --output=mels/ --checkpoint_path='step-310000' --device="gpu" --verbose
+```
\ No newline at end of file
diff --git a/examples/waveflow/README.md b/examples/waveflow/README.md
new file mode 100644
index 0000000..ba39402
--- /dev/null
+++ b/examples/waveflow/README.md
@@ -0,0 +1,48 @@
+# WaveFlow with LJSpeech
+
+## Dataset
+
+### Download the dataset.
+
+```bash
+wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
+```
+
+### Extract the dataset.
+
+```bash
+tar xjvf LJSpeech-1.1.tar.bz2
+```
+
+### Preprocess the dataset.
+
+Assume the path to save the preprocessed dataset is `ljspeech_waveflow`. Run the command below to preprocess the dataset.
+
+```bash
+python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_waveflow
+```
+
+## Train the model
+
+The training script requires 4 command line arguments.
+`--data` is the path of the training dataset, and `--output` is the path of the output directory (we recommend using a subdirectory in `runs` to manage different experiments).
+
+`--device` should be "cpu" or "gpu", and `--nprocs` is the number of processes used to train the model in parallel.
+
+```bash
+python train.py --data=ljspeech_waveflow/ --output=runs/test --device="gpu" --nprocs=1
+```
+
+If you want distributed training, set a larger `--nprocs` (e.g. 4). Note that distributed training with cpu is not supported yet.
+
+## Synthesize
+
+Synthesize waveforms from mel spectrograms. We assume `--input` is a directory containing mel spectrograms (log magnitude) in `.npy` format. The outputs are saved in the `--output` directory as `.wav` files, each sharing its name with the corresponding mel spectrogram file.
+
+`--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extension name `.pdparams` is not included here.
+
+`--device` specifies the device on which to run synthesis.
+
+```bash
+python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2000000' --device="cpu" --verbose
+```
\ No newline at end of file
diff --git a/examples/wavenet/README.md b/examples/wavenet/README.md
new file mode 100644
index 0000000..0224742
--- /dev/null
+++ b/examples/wavenet/README.md
@@ -0,0 +1,48 @@
+# WaveNet with LJSpeech
+
+## Dataset
+
+### Download the dataset.
+
+```bash
+wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
+```
+
+### Extract the dataset.
+
+```bash
+tar xjvf LJSpeech-1.1.tar.bz2
+```
+
+### Preprocess the dataset.
+
+Assume the path to save the preprocessed dataset is `ljspeech_wavenet`. Run the command below to preprocess the dataset.
+
+```bash
+python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_wavenet
+```
+
+## Train the model
+
+The training script requires 4 command line arguments.
+`--data` is the path of the training dataset, and `--output` is the path of the output directory (we recommend using a subdirectory in `runs` to manage different experiments).
+
+`--device` should be "cpu" or "gpu", and `--nprocs` is the number of processes used to train the model in parallel.
+
+```bash
+python train.py --data=ljspeech_wavenet/ --output=runs/test --device="gpu" --nprocs=1
+```
+
+If you want distributed training, set a larger `--nprocs` (e.g. 4). Note that distributed training with cpu is not supported yet.
+
+## Synthesize
+
+Synthesize waveforms from mel spectrograms. We assume `--input` is a directory containing mel spectrograms (log magnitude) in `.npy` format. The outputs are saved in the `--output` directory as `.wav` files, each sharing its name with the corresponding mel spectrogram file.
+
+`--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extension name `.pdparams` is not included here.
+
+`--device` specifies the device on which to run synthesis. Due to the autoregressive nature of WaveNet, using cpu may be faster.
+
+```bash
+python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2450000' --device="cpu" --verbose
+```
\ No newline at end of file
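
Note on distributed training (applies to all three READMEs above): the text only says to "set a larger `--nprocs`", so as a concrete sketch, a multi-process run of the wavenet example would look like the command below. The flag values are the ones the README suggests as examples, not the only valid choices.

```bash
# A distributed training run for examples/wavenet (a sketch only;
# --nprocs=4 is the value the README gives as an example).
# Distributed training requires --device="gpu"; cpu is not supported yet.
python train.py --data=ljspeech_wavenet/ --output=runs/test --device="gpu" --nprocs=4
```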
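
Taken together, the three examples form a two-stage pipeline: TransformerTTS converts text into mel spectrograms, and WaveFlow (or WaveNet) converts those mel spectrograms into waveforms. A minimal end-to-end sketch, reusing the exact commands from the READMEs above; each command must be run from its own example directory, and the checkpoint names are the illustrative ones used in the READMEs.

```bash
# Stage 1: text -> mel spectrograms, run inside examples/transformer_tts.
# Reads sentence.txt (one sentence per line) and writes .npy files to mels/.
python synthesize.py --input=sentence.txt --output=mels/ --checkpoint_path='step-310000' --device="gpu" --verbose

# Stage 2: mel spectrograms -> waveforms, run inside examples/waveflow.
# Reads the .npy files from mels/ and writes matching .wav files to wavs/.
python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2000000' --device="cpu" --verbose
```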