
Speaker Encoder

This experiment trains a speaker encoder with speaker verification as its task. It is done as part of the experiment on transfer learning from speaker verification to multispeaker text-to-speech synthesis, which can be found at tacotron2_aishell3. The trained speaker encoder is used to extract utterance embeddings from utterances.

Model

The model used in this experiment is the speaker encoder for the text-independent speaker verification task described in Generalized End-to-End Loss for Speaker Verification. The GE2E softmax loss is used.
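
For intuition, the GE2E softmax loss can be written down in a few lines. The sketch below is a minimal NumPy version (not the Paddle implementation used in this example): given a batch of embeddings from N speakers with M utterances each, every utterance is scored against every speaker centroid, and each utterance is pushed towards its own speaker's centroid (computed without that utterance) and away from the other speakers' centroids.

import numpy as np

def ge2e_softmax_loss(embeds, w=10.0, b=-5.0):
    # embeds: (N speakers, M utterances per speaker, D), assumed L2-normalized;
    # w and b play the role of the learnable scale and bias in the GE2E paper.
    N, M, _ = embeds.shape

    def cos(a, c):
        return (a * c).sum(-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(c, axis=-1) + 1e-8)

    # centroid of every speaker, and leave-one-out centroids for the positive term
    centroids = embeds.mean(axis=1)                                          # (N, D)
    centroids_excl = (embeds.sum(axis=1, keepdims=True) - embeds) / (M - 1)  # (N, M, D)

    # similarity of each utterance to every speaker centroid: (N, M, N)
    sim = cos(embeds[:, :, None, :], centroids[None, None, :, :])
    idx = np.arange(N)
    # for an utterance's own speaker, use the centroid that excludes it
    sim[idx, :, idx] = cos(embeds, centroids_excl)
    sim = w * sim + b

    # softmax over speakers; maximize the probability of the true speaker
    log_prob = sim - np.log(np.exp(sim).sum(axis=-1, keepdims=True))
    return -log_prob[idx, :, idx].mean()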

File Structure

ge2e
├── README.md
├── README_cn.md
├── audio_processor.py
├── config.py
├── dataset_processors.py
├── inference.py
├── preprocess.py
├── random_cycle.py
├── speaker_verification_dataset.py
└── train.py

Download Datasets

Currently supported datasets are Librispeech-other-500, VoxCeleb1, VoxCeleb2, aidatatang-200zh and magicdata, which can be downloaded from their corresponding webpages.

  1. Librispeech/train-other-500

    An English multispeaker dataset (URL); only the train-other-500 subset is used.

  2. VoxCeleb1

    An English multispeaker dataset (URL); Audio Files from Dev A to Dev D should be downloaded, combined and extracted.

  3. VoxCeleb2

    An English multispeaker dataset (URL); Audio Files from Dev A to Dev H should be downloaded, combined and extracted.

  4. Aidatatang-200zh

    A Mandarin Chinese multispeaker dataset (URL).

  5. magicdata

    A Mandarin Chinese multispeaker dataset (URL).

If you want to use other datasets, you can also download and preprocess them, as long as they meet the requirements described below.

Preprocess Datasets

Multispeaker datasets are used as training data, though the transcriptions are not used. To enlarge the amount of data used for training, several multispeaker datasets are combined. The preprocessed datasets are organized in the file structure described below. The mel spectrogram of each utterance is saved in .npy format. The dataset is 2-stratified (speaker-utterance). Since multiple datasets are combined, to avoid conflicts in speaker ids, the dataset name is prepended to the speaker ids.

dataset_root
├── dataset01_speaker01/
│   ├── utterance01.npy
│   ├── utterance02.npy
│   └── utterance03.npy
├── dataset01_speaker02/
│   ├── utterance01.npy
│   ├── utterance02.npy
│   └── utterance03.npy
├── dataset02_speaker01/
│   ├── utterance01.npy
│   ├── utterance02.npy
│   └── utterance03.npy
└── dataset02_speaker02/
    ├── utterance01.npy
    ├── utterance02.npy
    └── utterance03.npy
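
For illustration only (this snippet is not part of this example), a preprocessed dataset laid out as above can be traversed like this; the mel spectrogram shape in the comment is an assumption about the preprocessing settings:

from pathlib import Path
import numpy as np

def list_speakers(dataset_root):
    # each subdirectory is one "<dataset>_<speaker>" pair; each .npy file
    # inside it is the mel spectrogram of one utterance
    root = Path(dataset_root)
    return {
        d.name: sorted(d.glob("*.npy"))
        for d in sorted(root.iterdir()) if d.is_dir()
    }

# speakers = list_speakers("<output_dir>")                     # hypothetical path
# mel = np.load(speakers["dataset01_speaker01"][0])            # e.g. (num_frames, n_mels)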

Run the command below to preprocess the datasets.

python preprocess.py --datasets_root=<datasets_root> --output_dir=<output_dir> --dataset_names=<dataset_names>

Here --datasets_root is the directory that contains the extracted datasets; --output_dir is the directory to save the preprocessed dataset; --dataset_names is the name of the dataset(s) to preprocess. If there are multiple datasets in --datasets_root to preprocess, the names can be joined with commas. Currently supported dataset names are librispeech_other, voxceleb1, voxceleb2, aidatatang_200zh and magicdata.
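
For example, to preprocess two of the supported datasets in one run (the paths below are placeholders):

python preprocess.py --datasets_root=~/datasets --output_dir=~/datasets/GE2E_preprocessed --dataset_names=librispeech_other,aidatatang_200zh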

Training

When preprocessing is done, run the command below to train the model.

python train.py --data=<data_path> --output=<output> --device="gpu" --nprocs=1
  • --data is the path to the preprocessed dataset.
  • --output is the directory to save results, usually a subdirectory of runs. It contains visualdl log files, text log files, the config file and a checkpoints directory, which contains parameter files and optimizer state files. If --output already contains some training results, the most recent parameter file and optimizer state file are loaded before training.
  • --device is the device type to run the training on; 'cpu' and 'gpu' are supported.
  • --nprocs is the number of replicas to run in multiprocessing based parallel training. Currently multiprocessing based parallel training is only enabled when using 'gpu' as the device. CUDA_VISIBLE_DEVICES can be used to specify the visible devices with CUDA.
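
For example, to train on two GPUs (the paths are placeholders):

CUDA_VISIBLE_DEVICES=0,1 python train.py --data=<data_path> --output=runs/ge2e --device="gpu" --nprocs=2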

Other options are described below.

  • --config is a .yaml config file used to override the default config (which is coded in config.py).
  • --opts are command line options used to further override the config file. They should be the last command line options passed, given as multiple key-value pairs separated by spaces.
  • --checkpoint_path specifies the checkpoint to load before training, extension not included. A parameter file (.pdparams) and an optimizer state file (.pdopt) with the same name are used. This option has a higher priority than auto-resuming from the --output directory.
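
For example, to start training from a specific checkpoint instead of auto-resuming (the checkpoint name shown is illustrative; pass the common prefix of the .pdparams/.pdopt pair, without the extension):

python train.py --data=<data_path> --output=<output> --device="gpu" --nprocs=1 --checkpoint_path=<output>/checkpoints/<checkpoint_name>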

Pretrained Model

The pretrained model is first trained for 1560k steps on Librispeech-other-500 and VoxCeleb1, then further trained on aidatatang_200zh and magicdata up to 3000k steps.

Download URL: ge2e_ckpt_0.3.zip.

Inference

When training is done, run the command below to generate an utterance embedding for each utterance in a dataset.

python inference.py --input=<input> --output=<output> --checkpoint_path=<checkpoint_path> --device="gpu"

  • --input is the path of the dataset used for inference.
  • --output is the directory to save the processed results. It has the same file structure as the input dataset. Each utterance in the dataset has a corresponding utterance embedding file in *.npy format.
  • --checkpoint_path is the path of the checkpoint to use, extension not included.
  • --pattern is the wildcard pattern used to filter audio files for inference; it defaults to *.wav.
  • --device and --opts have the same meaning as in the training script.
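
The generated embeddings can be consumed directly with NumPy. As a minimal illustration (the file paths are hypothetical and this snippet is not part of this example), scoring two utterance embeddings with cosine similarity is the basic operation behind speaker verification, and the same embeddings are used as speaker conditions in tacotron2_aishell3:

import numpy as np

# hypothetical paths to two embeddings produced by inference.py
emb_a = np.load("embeds/speaker_a/utterance01.npy")
emb_b = np.load("embeds/speaker_b/utterance01.npy")

# cosine similarity; a higher score means the two utterances are more
# likely to come from the same speaker
score = float(np.dot(emb_a, emb_b) /
              (np.linalg.norm(emb_a) * np.linalg.norm(emb_b) + 1e-8))
print(f"cosine similarity: {score:.3f}")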

References

  1. Generalized End-to-End Loss for Speaker Verification
  2. Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis