From d2dba13ab7c3a5beb5dabd235f3beb3a176f7bce Mon Sep 17 00:00:00 2001
From: chenfeiyu
Date: Wed, 30 Dec 2020 15:34:24 +0800
Subject: [PATCH] fix typos

---
 examples/transformer_tts/README.md | 10 +++++-----
 examples/waveflow/README.md        | 12 ++++++------
 examples/wavenet/README.md         |  4 ++--
 3 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/examples/transformer_tts/README.md b/examples/transformer_tts/README.md
index 45f8816..d3da080 100644
--- a/examples/transformer_tts/README.md
+++ b/examples/transformer_tts/README.md
@@ -16,10 +16,10 @@ tar xjvf LJSpeech-1.1.tar.bz2
 
 ### Preprocess the dataset.
 
-Assume the path to save the preprocessed dataset is `ljspeech_wavenet`. Run the command below to preprocess the dataset.
+Assume the path to save the preprocessed dataset is `ljspeech_transformer_tts`. Run the command below to preprocess the dataset.
 
 ```bash
-python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_wavenet
+python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_transformer_tts
 ```
 
 ## Train the model
@@ -30,18 +30,18 @@ The training script requires 4 command line arguments.
 `--device` should be "cpu" or "gpu", `--nprocs` is the number of processes to train the model in parallel.
 
 ```bash
-python train.py --data=ljspeech_wavenet/ --output=runs/test --device="gpu" --nprocs=1
+python train.py --data=ljspeech_transformer_tts/ --output=runs/test --device="gpu" --nprocs=1
 ```
 
 If you want distributed training, set a larger `--nprocs` (e.g. 4). Note that distributed training with cpu is not supported yet.
 
 ## Synthesize
 
-Synthesize waveform. We assume the `--input` is text file, a sentence per line, and `--output` directory a directory to save the synthesized mel spectrogram(log magnitude) in `.npy` format. The mel spectrogram can be used with `Waveflow` to generate waveforms.
+Synthesize waveform. We assume the `--input` is a text file, a sentence per line, and `--output` is a directory to save the synthesized mel spectrograms (log magnitude) in `.npy` format. The mel spectrograms can be used with `Waveflow` to generate waveforms.
 
 `--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extention name `.pdparmas` is not included here.
 
-`--device` specifiies to device to run synthesis. Due to the autoregressiveness of wavenet, using cpu may be faster.
+`--device` specifies the device to run synthesis on.
 
 ```bash
 python synthesize.py --input=sentence.txt --output=mels/ --checkpoint_path='step-310000' --device="gpu" --verbose
diff --git a/examples/waveflow/README.md b/examples/waveflow/README.md
index ba39402..5124a9e 100644
--- a/examples/waveflow/README.md
+++ b/examples/waveflow/README.md
@@ -16,21 +16,21 @@ tar xjvf LJSpeech-1.1.tar.bz2
 
 ### Preprocess the dataset.
 
-Assume the path to save the preprocessed dataset is `ljspeech_wavenet`. Run the command below to preprocess the dataset.
+Assume the path to save the preprocessed dataset is `ljspeech_waveflow`. Run the command below to preprocess the dataset.
 
 ```bash
-python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_wavenet
+python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_waveflow
 ```
 
 ## Train the model
 
 The training script requires 4 command line arguments.
-`--data` is the path of the training dataset, `--output` is the path of the output direcctory (we recommend to use a subdirectory in `runs` to manage different experiments.)
+`--data` is the path of the training dataset, `--output` is the path of the output directory (we recommend using a subdirectory in `runs` to manage different experiments).
 
 `--device` should be "cpu" or "gpu", `--nprocs` is the number of processes to train the model in parallel.
 
 ```bash
-python train.py --data=ljspeech_wavenet/ --output=runs/test --device="gpu" --nprocs=1
+python train.py --data=ljspeech_waveflow/ --output=runs/test --device="gpu" --nprocs=1
 ```
 
 If you want distributed training, set a larger `--nprocs` (e.g. 4). Note that distributed training with cpu is not supported yet.
@@ -41,8 +41,8 @@ Synthesize waveform. We assume the `--input` is a directory containing several m
 
 `--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extention name `.pdparmas` is not included here.
 
-`--device` specifiies to device to run synthesis. Due to the autoregressiveness of wavenet, using cpu may be faster.
+`--device` specifies the device to run synthesis on.
 
 ```bash
-python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2000000' --device="cpu" --verbose
+python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2000000' --device="gpu" --verbose
 ```
\ No newline at end of file
diff --git a/examples/wavenet/README.md b/examples/wavenet/README.md
index 468c90a..154089b 100644
--- a/examples/wavenet/README.md
+++ b/examples/wavenet/README.md
@@ -25,7 +25,7 @@ python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_wavenet
 ## Train the model
 
 The training script requires 4 command line arguments.
-`--data` is the path of the training dataset, `--output` is the path of the output direcctory (we recommend to use a subdirectory in `runs` to manage different experiments.)
+`--data` is the path of the training dataset, `--output` is the path of the output directory (we recommend using a subdirectory in `runs` to manage different experiments).
 
 `--device` should be "cpu" or "gpu", `--nprocs` is the number of processes to train the model in parallel.
 
@@ -41,7 +41,7 @@ Synthesize waveform. We assume the `--input` is a directory containing several m
 
 `--checkpoint_path` should be the path of the parameter file (`.pdparams`) to load. Note that the extention name `.pdparmas` is not included here.
 
-`--device` specifiies to device to run synthesis. Due to the autoregressiveness of wavenet, using cpu may be faster.
+`--device` specifies the device to run synthesis on. Due to the autoregressiveness of wavenet, using cpu may be faster.
 
 ```bash
 python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2450000' --device="cpu" --verbose
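For context, the two synthesize steps touched by this patch chain together: the transformer_tts `synthesize.py` writes mel spectrograms that the waveflow `synthesize.py` consumes. A minimal sketch of the full text-to-waveform flow, assuming trained checkpoints named as in the README examples and that each script is run from its own example directory:

```bash
# Sketch of the end-to-end pipeline the two READMEs describe.
# Checkpoint names 'step-310000' and 'step-2000000' are the README's
# illustrative examples; substitute your own trained checkpoints.

# 1. Text -> mel spectrograms (.npy), one sentence per line in sentence.txt
cd examples/transformer_tts
python synthesize.py --input=sentence.txt --output=../waveflow/mels/ \
    --checkpoint_path='step-310000' --device="gpu" --verbose

# 2. Mel spectrograms -> waveforms (.wav) via the WaveFlow vocoder
cd ../waveflow
python synthesize.py --input=mels/ --output=wavs/ \
    --checkpoint_path='step-2000000' --device="gpu" --verbose
```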