
WaveNet with LJSpeech

Dataset

Download the dataset.

wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2

Extract the dataset.

tar xjvf LJSpeech-1.1.tar.bz2

Preprocess the dataset.

Assuming the path to save the preprocessed dataset is ljspeech_wavenet, run the command below to preprocess it.

python preprocess.py --input=LJSpeech-1.1/ --output=ljspeech_wavenet
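
To sanity-check the result, you can list what was written. Below is a minimal sketch that only walks the output directory; the exact file layout is defined by preprocess.py, so consult that script for the authoritative format.

# Quick look at the preprocessed output (a hypothetical helper, not part of the repo).
from pathlib import Path

root = Path("ljspeech_wavenet")
files = sorted(root.rglob("*"))
print(f"{len(files)} entries under {root}")
for path in files[:10]:  # peek at the first few entries
    print(path)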

Train the model

The training script requires 4 command line arguments: --data is the path of the training dataset, and --output is the path of the output directory (we recommend using a subdirectory of runs to keep different experiments apart).

--device should be "cpu" or "gpu", and --nprocs is the number of processes used to train the model in parallel.

python train.py --data=ljspeech_wavenet/ --output=runs/test --device="gpu" --nprocs=1

If you want distributed training, set --nprocs to a larger value (e.g. 4). Note that distributed training on CPU is not supported yet.
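
Before choosing --device and --nprocs, it can help to check what PaddlePaddle sees. A minimal sketch, assuming a recent Paddle 2.x installation (the framework Parakeet runs on):

# Report the devices visible to PaddlePaddle.
import paddle

print(paddle.device.get_device())         # current device, e.g. "gpu:0" or "cpu"
print(paddle.device.cuda.device_count())  # number of visible GPUs; an upper bound for --nprocs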

Synthesize

Synthesize waveforms. We assume --input is a directory containing several mel spectrograms (normalized into the range [0, 1)) in .npy format. The output is saved in the --output directory as several .wav files, each with the same base name as the corresponding mel spectrogram.
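
If you need to produce such an input yourself, the sketch below shows one way to write a min-max-normalized mel spectrogram with librosa. The STFT/mel parameters and the normalization here are placeholders, not the repository's actual pipeline; they must match what the model was trained with (see config.py and preprocess.py), including the array orientation synthesize.py expects.

# Sketch: write one normalized mel spectrogram as .npy (parameters are placeholders).
import librosa
import numpy as np

wav, sr = librosa.load("LJ001-0001.wav", sr=22050)  # sample rate is an assumption
mel = librosa.feature.melspectrogram(
    y=wav, sr=sr, n_fft=1024, hop_length=256, n_mels=80)
logmel = np.log(np.clip(mel, a_min=1e-5, a_max=None))
# min-max normalize into [0, 1), as required above
normalized = (logmel - logmel.min()) / (logmel.max() - logmel.min() + 1e-6)
np.save("mels/LJ001-0001.npy", normalized.astype(np.float32))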

--checkpoint_path should be the path of the parameter file (.pdparams) to load. Note that the extension .pdparams is not included here.
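
For example, to find the newest checkpoint and strip the extension (assuming checkpoints land in runs/test/checkpoints as step-*.pdparams; check your own output directory for the actual layout):

# Pick the newest .pdparams file and drop the extension for --checkpoint_path.
from pathlib import Path

ckpts = sorted(Path("runs/test/checkpoints").glob("*.pdparams"),
               key=lambda p: p.stat().st_mtime)
print(ckpts[-1].with_suffix(""))  # e.g. runs/test/checkpoints/step-2450000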

--device specifies the device to run synthesis on. Because WaveNet generates audio sample by sample (autoregressively), synthesis on CPU may actually be faster.

python synthesize.py --input=mels/ --output=wavs/ --checkpoint_path='step-2450000' --device="cpu" --verbose
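
To verify the synthesized files afterwards, one option is the soundfile package (an extra dependency, not required by this example):

# Print the sample rate and duration of each synthesized waveform.
from pathlib import Path
import soundfile as sf

for wav in sorted(Path("wavs").glob("*.wav")):
    data, sr = sf.read(str(wav))
    print(f"{wav.name}: {sr} Hz, {len(data) / sr:.2f} s")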