fix typos
parent 3df4ecd455
commit d2dba13ab7
@@ -16,10 +16,10 @@ tar xjvf LJSpeech-1.1.tar.bz2

### Preprocess the dataset.

-Assume the path to save the preprocessed dataset is `ljspeech_wavenet`. Run the command below to preprocess the dataset.
+Assume the path to save the preprocessed dataset is `ljspeech_transformer_tts`. Run the command below to preprocess the dataset.

```bash
-python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_wavenet
+python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_transformer_tts
```

## Train the model
@@ -30,18 +30,18 @@ The training script requires 4 command line arguments.
`--device` should be "cpu" or "gpu", `--nprocs` is the number of processes to train the model in parallel.

```bash
-python train.py --data=ljspeech_wavenet/ --output=runs/test --device="gpu" --nprocs=1
+python train.py --data=ljspeech_transformer_tts/ --output=runs/test --device="gpu" --nprocs=1
```

If you want distributed training, set a larger `--nprocs` (e.g. 4). Note that distributed training with cpu is not supported yet.

## Synthesize

-Synthesize waveform. We assume the `--input` is text file, a sentence per line, and `--output` directory a directory to save the synthesized mel spectrogram(log magnitude) in `.npy` format. The mel spectrogram can be used with `Waveflow` to generate waveforms.
+Synthesize waveform. We assume the `--input` is a text file, a sentence per line, and `--output` is a directory to save the synthesized mel spectrogram (log magnitude) in `.npy` format. The mel spectrogram can be used with `Waveflow` to generate waveforms.

`--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extension name `.pdparams` is not included here.

-`--device` specifiies to device to run synthesis. Due to the autoregressiveness of wavenet, using cpu may be faster.
+`--device` specifies the device to run synthesis on.

```bash
python synthesize.py --input=sentence.txt --output=mels/ --checkpoint_path='step-310000' --device="gpu" --verbose
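The `--checkpoint_path` convention above drops the `.pdparams` extension from the path you pass. A minimal sketch, assuming the parameters were saved directly under `runs/test/` (an assumption for illustration; the actual layout depends on the training run):

```bash
# If training produced runs/test/step-310000.pdparams, load it by
# passing the path without the .pdparams extension:
python synthesize.py --input=sentence.txt --output=mels/ \
    --checkpoint_path=runs/test/step-310000 --device="gpu" --verbose
```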
@@ -16,21 +16,21 @@ tar xjvf LJSpeech-1.1.tar.bz2

### Preprocess the dataset.

-Assume the path to save the preprocessed dataset is `ljspeech_wavenet`. Run the command below to preprocess the dataset.
+Assume the path to save the preprocessed dataset is `ljspeech_waveflow`. Run the command below to preprocess the dataset.

```bash
-python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_wavenet
+python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_waveflow
```

## Train the model

The training script requires 4 command line arguments.
-`--data` is the path of the training dataset, `--output` is the path of the output direcctory (we recommend to use a subdirectory in `runs` to manage different experiments.)
+`--data` is the path of the training dataset, `--output` is the path of the output directory (we recommend using a subdirectory in `runs` to manage different experiments).

`--device` should be "cpu" or "gpu", `--nprocs` is the number of processes to train the model in parallel.

```bash
-python train.py --data=ljspeech_wavenet/ --output=runs/test --device="gpu" --nprocs=1
+python train.py --data=ljspeech_waveflow/ --output=runs/test --device="gpu" --nprocs=1
```

If you want distributed training, set a larger `--nprocs` (e.g. 4). Note that distributed training with cpu is not supported yet.
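Distributed training, per the note above, only needs a larger `--nprocs`. A minimal sketch of that invocation, reusing the README's example paths (the dataset and output paths are examples, not requirements):

```bash
# Same command as above, but with 4 parallel training processes.
# gpu only: distributed training with cpu is not supported yet.
python train.py --data=ljspeech_waveflow/ --output=runs/test --device="gpu" --nprocs=4
```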
@@ -41,8 +41,8 @@ Synthesize waveform. We assume the `--input` is a directory containing several m

`--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extension name `.pdparams` is not included here.

-`--device` specifiies to device to run synthesis. Due to the autoregressiveness of wavenet, using cpu may be faster.
+`--device` specifies the device to run synthesis on.

```bash
-python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2000000' --device="cpu" --verbose
+python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2000000' --device="gpu" --verbose
```
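The transformer_tts section above notes that its `.npy` mel spectrograms can be fed to `Waveflow` to generate waveforms. A hedged sketch of the two-step pipeline, assuming each `synthesize.py` is run from its own example directory (an assumption; the diff does not show the repository layout):

```bash
# Step 1, in the transformer_tts example: text -> mel spectrograms in mels/
python synthesize.py --input=sentence.txt --output=mels/ --checkpoint_path='step-310000' --device="gpu" --verbose
# Step 2, in the waveflow example: mel spectrograms -> waveforms in wavs/
python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2000000' --device="gpu" --verbose
```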
@@ -25,7 +25,7 @@ python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_wavenet
## Train the model

The training script requires 4 command line arguments.
-`--data` is the path of the training dataset, `--output` is the path of the output direcctory (we recommend to use a subdirectory in `runs` to manage different experiments.)
+`--data` is the path of the training dataset, `--output` is the path of the output directory (we recommend using a subdirectory in `runs` to manage different experiments).

`--device` should be "cpu" or "gpu", `--nprocs` is the number of processes to train the model in parallel.

@@ -41,7 +41,7 @@ Synthesize waveform. We assume the `--input` is a directory containing several m

`--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extension name `.pdparams` is not included here.

-`--device` specifiies to device to run synthesis. Due to the autoregressiveness of wavenet, using cpu may be faster.
+`--device` specifies the device to run synthesis on. Due to the autoregressiveness of wavenet, using cpu may be faster.

```bash
python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2450000' --device="cpu" --verbose
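The wavenet hunk keeps the note that autoregressive synthesis may be faster on cpu. To check on a given machine, one option is the shell's `time` builtin (checkpoint name taken from the example above):

```bash
# Time the same synthesis on cpu and gpu; wavenet emits samples one at a
# time, so cpu can come out ahead despite the gpu's raw throughput.
time python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2450000' --device="cpu" --verbose
time python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2450000' --device="gpu" --verbose
```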