Parakeet

Parakeet aims to provide a flexible, efficient, and state-of-the-art text-to-speech toolkit for the open-source community. It is built on the PaddlePaddle dynamic graph and includes many influential TTS models proposed by Baidu Research and other research groups.


In particular, it features the latest WaveFlow model proposed by Baidu Research.

  • WaveFlow can synthesize 22.05 kHz high-fidelity speech around 40x faster than real-time on an NVIDIA V100 GPU without engineered inference kernels, which is faster than WaveGlow and several orders of magnitude faster than WaveNet.
  • WaveFlow is a small-footprint flow-based model for raw audio. It has only 5.9M parameters, which is 15x smaller than WaveGlow (87.9M).
  • WaveFlow is trained directly with maximum likelihood, without the probability density distillation and auxiliary losses used in Parallel WaveNet and ClariNet, which simplifies the training pipeline and reduces development cost.

Overview

To make it easy both to use existing TTS models and to develop new ones, Parakeet selects typical models and provides their reference implementations in PaddlePaddle. Furthermore, Parakeet abstracts the TTS pipeline and standardizes data preprocessing, common module sharing, model configuration, and the training and synthesis workflow. The supported models include vocoders and end-to-end TTS models.

Updates

May-07-2021: Added an example for voice cloning in Chinese. See examples/tacotron2_aishell3.

Setup

Some of this repo's dependencies are difficult to install on Windows, so we recommend that you do NOT use Windows; please use Linux instead.

Make sure the library libsndfile1 is installed. On Ubuntu, for example:

sudo apt-get install libsndfile1
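Before installing the Python audio packages, you can confirm that the libsndfile shared library is discoverable. The snippet below is an illustrative check using only the standard library; it is not part of Parakeet.

```python
# Check whether the libsndfile shared library can be located on this system.
from ctypes.util import find_library

path = find_library("sndfile")  # e.g. "libsndfile.so.1" on Ubuntu, or None
print("libsndfile found:" if path else "libsndfile missing", path)
```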

Install PaddlePaddle

See install for more details. This repo requires PaddlePaddle 2.1.2 or above.
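If you want to verify the version requirement programmatically, a minimal sketch follows. The helper `meets_minimum` is illustrative only (it is not part of Parakeet or PaddlePaddle) and assumes plain dotted version strings; in practice you would pass it `paddle.__version__`.

```python
# Minimal sketch: check a dotted version string against the 2.1.2 minimum.
# `meets_minimum` is a hypothetical helper for illustration.

def meets_minimum(version: str, minimum: str = "2.1.2") -> bool:
    """Compare dotted version strings numerically, part by part."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(version) >= to_tuple(minimum)

print(meets_minimum("2.1.2"))  # True: exactly the minimum
print(meets_minimum("2.0.0"))  # False: below the minimum
```

Note that this simple parser does not handle pre-release suffixes such as "rc0"; for robust comparisons, use a dedicated version-parsing library.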

Install Parakeet

pip install -U paddle-parakeet

or

git clone https://github.com/PaddlePaddle/Parakeet
cd Parakeet
pip install -e .

If some Python dependencies cannot be installed successfully, you can run the following script first (replace python3.6 with your own Python version):

sudo apt install -y python3.6-dev

See install for more details.

Examples

See the examples folder for an introduction to each example model and instructions for launching training and synthesis.

Audio samples

TTS models (Acoustic Model + Neural Vocoder)

Check our website for audio samples.

Checkpoints

Tacotron2

  1. tacotron2_ljspeech_ckpt_0.3.zip
  2. tacotron2_ljspeech_ckpt_0.3_alternative.zip

Tacotron2_AISHELL3

  1. tacotron2_aishell3_ckpt_0.3.zip

TransformerTTS

  1. transformer_tts_ljspeech_ckpt_0.3.zip

WaveFlow

  1. waveflow_ljspeech_ckpt_0.3.zip

GE2E

  1. ge2e_ckpt_0.3.zip

Parakeet is provided under the Apache-2.0 license.