commit a29c74d036

@@ -1,3 +1,9 @@
# IDEs
*.wpr
*.wpu
*.udb
*.ann

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]

@@ -0,0 +1,233 @@

# Parakeet

Parakeet aims to provide the open-source community with a flexible, efficient, and state-of-the-art text-to-speech toolkit. Built on PaddlePaddle 2.0, Parakeet includes many influential TTS models from [Baidu Research](http://research.baidu.com) and other research institutions.

<img src="./images/logo.png" alt="parakeet-logo" style="zoom: 33%;" />

Among them is the recently proposed [WaveFlow](https://arxiv.org/abs/1912.01219) model from Baidu Research.

- WaveFlow synthesizes high-fidelity 22.05 kHz speech about 40x faster than real time on an Nvidia V100 GPU, without any engineered inference kernels. It is faster than [WaveGlow](https://github.com/NVIDIA/waveglow) and orders of magnitude faster than WaveNet.
- WaveFlow is a small-footprint, flow-based model for raw audio generation, with only 5.9M trainable parameters, about 1/15 of WaveGlow (87.9M parameters).
- WaveFlow can be trained directly by maximum likelihood, without probability density distillation or the auxiliary losses used in Parallel WaveNet and ClariNet. This simplifies the training pipeline and reduces development cost.

## Model Overview

To make it easy to use existing TTS models and to develop new ones, Parakeet selects classic models and provides reference implementations based on PaddlePaddle. Parakeet further abstracts the TTS pipeline and standardizes data preprocessing, module sharing, model configuration, and the training and synthesis workflows. The supported models currently include vocoders and acoustic models.

- Vocoders
  - [WaveFlow: A Compact Flow-based Model for Raw Audio](https://arxiv.org/abs/1912.01219)
  - [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](https://arxiv.org/abs/1807.07281)
  - [WaveNet: A Generative Model for Raw Audio](https://arxiv.org/abs/1609.03499)

- Acoustic models
  - [Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning](https://arxiv.org/abs/1710.07654)
  - [Neural Speech Synthesis with Transformer Network (Transformer TTS)](https://arxiv.org/abs/1809.08895)
  - [FastSpeech: Fast, Robust and Controllable Text to Speech](https://arxiv.org/abs/1905.09263)

More models will be added in the future.

If you want to build your own models and experiments based on Parakeet, see [How to prepare your own experiment](./docs/experiment_guide_cn.md).

## Installation

See [Installation](./docs/installation_cn.md).

## Examples

Parakeet provides several examples. Each example uses a model provided by Parakeet and gives a complete workflow for experimenting on a public dataset, including data processing, model training, and inference. They serve as starting points for experiments and further development.

- [>>> WaveFlow](./examples/waveflow)
- [>>> Clarinet](./examples/clarinet)
- [>>> WaveNet](./examples/wavenet)
- [>>> Deep Voice 3](./examples/deepvoice3)
- [>>> Transformer TTS](./examples/transformer_tts)
- [>>> FastSpeech](./examples/fastspeech)

## Pre-trained Models and Audio Samples

Parakeet also releases trained parameters for the example models, listed in the tables below. Each column lists the resources for one model: a download URL for the pre-trained checkpoint, the dataset the model was trained on, and audio samples synthesized with that checkpoint. Click a model name to download an archive that also contains the configuration file used to train that model.

#### Vocoders

We provide WaveFlow checkpoints with 64, 96, and 128 residual channels, as well as ClariNet and WaveNet checkpoints.

<div align="center">
<table>
<thead>
<tr>
<th style="width: 250px">
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res64_ljspeech_ckpt_1.0.zip">WaveFlow (res. channels 64)</a>
</th>
<th style="width: 250px">
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res96_ljspeech_ckpt_1.0.zip">WaveFlow (res. channels 96)</a>
</th>
<th style="width: 250px">
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res128_ljspeech_ckpt_1.0.zip">WaveFlow (res. channels 128)</a>
</th>
</tr>
</thead>
<tbody>
<tr>
<th>LJSpeech </th>
<th>LJSpeech </th>
<th>LJSpeech </th>
</tr>
<tr>
<th>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res64_ljspeech_samples_1.0/step_3020k_sentence_0.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res64_ljspeech_samples_1.0/step_3020k_sentence_1.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res64_ljspeech_samples_1.0/step_3020k_sentence_2.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res64_ljspeech_samples_1.0/step_3020k_sentence_3.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res64_ljspeech_samples_1.0/step_3020k_sentence_4.wav">
<img src="images/audio_icon.png" width=250 /></a>
</th>
<th>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res96_ljspeech_samples_1.0/step_2000k_sentence_0.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res96_ljspeech_samples_1.0/step_2000k_sentence_1.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res96_ljspeech_samples_1.0/step_2000k_sentence_2.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res96_ljspeech_samples_1.0/step_2000k_sentence_3.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res96_ljspeech_samples_1.0/step_2000k_sentence_4.wav">
<img src="images/audio_icon.png" width=250 /></a>
</th>
<th>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res128_ljspeech_samples_1.0/step_2000k_sentence_0.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res128_ljspeech_samples_1.0/step_2000k_sentence_1.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res128_ljspeech_samples_1.0/step_2000k_sentence_2.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res128_ljspeech_samples_1.0/step_2000k_sentence_3.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res128_ljspeech_samples_1.0/step_2000k_sentence_4.wav">
<img src="images/audio_icon.png" width=250 /></a>
</th>
</tr>
</tbody>
<thead>
<tr>
<th style="width: 250px">
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/clarinet_ljspeech_ckpt_1.0.zip">ClariNet</a>
</th>
<th style="width: 250px">
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/wavenet_ljspeech_ckpt_1.0.zip">WaveNet</a>
</th>
</tr>
</thead>
<tbody>
<tr>
<th>LJSpeech </th>
<th>LJSpeech </th>
</tr>
<tr>
<th>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/clarinet_ljspeech_samples_1.0/step_500000_sentence_0.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/clarinet_ljspeech_samples_1.0/step_500000_sentence_1.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/clarinet_ljspeech_samples_1.0/step_500000_sentence_2.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/clarinet_ljspeech_samples_1.0/step_500000_sentence_3.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/clarinet_ljspeech_samples_1.0/step_500000_sentence_4.wav">
<img src="images/audio_icon.png" width=250 /></a>
</th>
<th>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/wavenet_ljspeech_samples_1.0/step_2450k_sentence_0.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/wavenet_ljspeech_samples_1.0/step_2450k_sentence_1.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/wavenet_ljspeech_samples_1.0/step_2450k_sentence_2.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/wavenet_ljspeech_samples_1.0/step_2450k_sentence_3.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/wavenet_ljspeech_samples_1.0/step_2450k_sentence_4.wav">
<img src="images/audio_icon.png" width=250 /></a>
</th>
</tr>
</tbody>
</table>
</div>

**Note:** The input mel spectrograms are selected from the validation set; they are not used in training.

#### Acoustic Models

We also provide checkpoints for several end-to-end TTS models, along with speech synthesized from randomly selected famous quotes. The corresponding transcripts are listed below.

| |Text| From |
|:-:|:-- | :--: |
0|*Life was like a box of chocolates, you never know what you're gonna get.* | *Forrest Gump* |
1|*With great power there must come great responsibility.* | *Spider-Man*|
2|*To be or not to be, that’s a question.*|*Hamlet*|
3|*Death is just a part of life, something we're all destined to do.*| *Forrest Gump*|
4|*Don’t argue with the people of strong determination, because they may change the fact!*| *William Shakespeare* |

Users can convert the spectrograms produced by an acoustic model into raw audio with different vocoders. Below we show samples synthesized with the [Griffin-Lim](https://ieeexplore.ieee.org/document/1164317) vocoder as well as with neural vocoders.

##### 1) Griffin-Lim Vocoder

<div align="center">
<table>
<thead>
<tr>
<th style="width: 250px">
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/transformer_tts_ljspeech_ckpt_1.0.zip">Transformer TTS</a>
</th>
<th style="width: 250px">
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech_ljspeech_ckpt_1.0.zip">FastSpeech</a>
</th>
</tr>
</thead>
<tbody>
<tr>
<th>LJSpeech </th>
<th>LJSpeech </th>
</tr>
<tr>
<th>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/transformer_tts_ljspeech_griffin-lim_samples_1.0/step_120000_sentence_0.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/transformer_tts_ljspeech_griffin-lim_samples_1.0/step_120000_sentence_1.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/transformer_tts_ljspeech_griffin-lim_samples_1.0/step_120000_sentence_2.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/transformer_tts_ljspeech_griffin-lim_samples_1.0/step_120000_sentence_3.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/transformer_tts_ljspeech_griffin-lim_samples_1.0/step_120000_sentence_4.wav">
<img src="images/audio_icon.png" width=250 /></a>
</th>
<th>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech_ljspeech_griffin-lim_samples_1.0/step_162000_sentence_0.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech_ljspeech_griffin-lim_samples_1.0/step_162000_sentence_1.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech_ljspeech_griffin-lim_samples_1.0/step_162000_sentence_2.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech_ljspeech_griffin-lim_samples_1.0/step_162000_sentence_3.wav">
<img src="images/audio_icon.png" width=250 /></a><br>
<a href="https://paddlespeech.bj.bcebos.com/Parakeet/fastspeech_ljspeech_griffin-lim_samples_1.0/step_162000_sentence_4.wav">
<img src="images/audio_icon.png" width=250 /></a>
</th>
</tr>
</tbody>
</table>
</div>

##### 2) Neural Vocoders

Work in progress.

## Copyright and License

Parakeet is provided under the [Apache-2.0 license](LICENSE).

docs/data.md
@@ -1,341 +0,0 @@

# parakeet.data

This short guide describes the design of `parakeet.data` and how we use it in an experiment.

The most important concepts in `parakeet.data` are `DatasetMixin`, `DataCargo`, `Sampler`, batch functions, and `DataIterator`.

## Dataset

A dataset, as we assume here, is a list of examples. You can get its length with `len(dataset)` (so it must implement the `__len__()` method), access its items randomly with `dataset[i]` (so it must implement the `__getitem__()` method), and iterate over it with `iter(dataset)` or `for example in dataset` (so it must implement the `__iter__()` method).

### DatasetMixin

We provide a `DatasetMixin` class that supplies the methods above. To define your own dataset class, inherit from `DatasetMixin` and implement the `get_example()` method, which is called automatically by `__getitem__()`.

We also define several higher-order dataset classes, whose objects can be built from existing dataset objects.

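As a minimal illustration of this protocol (a plain-Python sketch of the idea, not the parakeet implementation), a dataset class only needs `get_example()` and `__len__()`:

```python
class DatasetMixin:
    """Sketch of the mixin: random access and iteration for free."""

    def __getitem__(self, i):
        # __getitem__ delegates to the subclass's get_example().
        return self.get_example(i)

    def __iter__(self):
        for i in range(len(self)):
            yield self[i]


class SquaresDataset(DatasetMixin):
    """A toy dataset whose i-th example is i squared."""

    def __init__(self, n):
        self.n = n

    def get_example(self, i):
        return i * i

    def __len__(self):
        return self.n


dataset = SquaresDataset(5)
print(len(dataset))    # 5
print(dataset[3])      # 9
print(list(dataset))   # [0, 1, 4, 9, 16]
```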
### TupleDataset

A dataset combining several datasets of the same length. An example of a `TupleDataset` is a tuple of the examples of its constituent datasets.

### DictDataset

A dataset combining several datasets of the same length. An example of a `DictDataset` is a dict of the examples of its constituent datasets.

### SliceDataset

A `SliceDataset` is a slice of the base dataset.

### SubsetDataset

A `SubsetDataset` is a subset of the base dataset.

### ChainDataset

A `ChainDataset` is the concatenation of several datasets with the same fields.

### TransformDataset

A `TransformDataset` is created by applying a `transform` to the examples of the base dataset. The `transform` is a callable that takes an example of the base dataset as its parameter and returns an example of the `TransformDataset`. The transformation is lazy: it is applied to an example only when that example is requested.

### FilterDataset

A `FilterDataset` is created by applying a `filter` to the base dataset. A `filter` is a predicate that takes an example of the base dataset as its parameter and returns a boolean. Only the examples that pass the filter are included in the `FilterDataset`.

Note that the filter is applied to all the examples in the base dataset when a `FilterDataset` is initialized.

### CacheDataset

By default, we preprocess a dataset lazily in `DatasetMixin.get_example()`: an example is preprocessed whenever it is requested. A `CacheDataset` caches the base dataset lazily, so each example is processed only once, when it is first requested. When preprocessing is slow, a `CacheDataset` can speed things up, but caching may consume a lot of RAM if the dataset is large.

Finally, if preprocessing is slow and the processed dataset is too large to cache, you can write your own code to save the processed examples into files or a database, and then define a dataset that loads them. `Dataset` is flexible, so you can create your own dataset painlessly.

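The lazy-transformation idea behind `TransformDataset` can be sketched in a few lines (an illustration of the semantics under the assumptions above, not parakeet's actual code):

```python
class TransformDataset:
    """Applies `transform` lazily: only when an example is requested."""

    def __init__(self, dataset, transform):
        self.dataset = dataset
        self.transform = transform

    def __getitem__(self, i):
        # The base example is fetched and transformed on demand.
        return self.transform(self.dataset[i])

    def __len__(self):
        return len(self.dataset)


base = [1, 2, 3]
doubled = TransformDataset(base, lambda x: 2 * x)
print([doubled[i] for i in range(len(doubled))])  # [2, 4, 6]
```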
## DataCargo

A `DataCargo`, like a `Dataset`, is an iterable object, but it is an iterable of batches. We need `DataCargo` because in deep learning, batching examples together exploits the computational resources of modern hardware. You can iterate over it with `iter(datacargo)` or `for batch in datacargo`. A `DataCargo` is an iterable object but not an iterator, in that it can be iterated over more than once.

### batch function

A batch is something transformed from a list of examples. Assume an example is a structure (a tuple in Python, or a struct in C and C++) consisting of several fields; then a list of examples is an array of structures (AOS; e.g., a dataset is an AOS), and a batch is a structure of arrays (SOA). Here is an example:

The table below represents 2 examples, each of which contains 5 fields.

| weight | height | width | depth | density |
| ------ | ------ | ----- | ----- | ------- |
| 1.2    | 1.1    | 1.3   | 1.4   | 0.8     |
| 1.6    | 1.4    | 1.2   | 0.6   | 1.4     |

The AOS representation and SOA representation of the table are shown below.

AOS:
```text
[(1.2, 1.1, 1.3, 1.4, 0.8),
 (1.6, 1.4, 1.2, 0.6, 1.4)]
```

SOA:
```text
([1.2, 1.6],
 [1.1, 1.4],
 [1.3, 1.2],
 [1.4, 0.6],
 [0.8, 1.4])
```

For the example above, converting an AOS to an SOA is trivial: just stack every field across all the examples. But it is not always the case. When a field contains a sequence, you may have to pad all the sequences to the greatest length and then stack them. In some other cases, we may want to add a field to the batch, for example, a `valid_length` for each example. So in general, a function that transforms an AOS into an SOA is needed to build a `DataCargo` from a dataset. We call this the batch function (`batch_fn`), but you can use any callable object if you need to.

Usually we define the batch function as a callable object that stores all the options and configurations as its members. Its `__call__()` method transforms a list of examples into a batch.

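As an illustration of such a batch function (a generic sketch using plain lists for clarity; the real batch functions such as `batch_wav` return numpy arrays), padding variable-length sequences and recording their valid lengths might look like this:

```python
class PadCollate:
    """Batch function: pads 1-D sequences to the longest length and
    records each sequence's valid length (turns an AOS into an SOA)."""

    def __init__(self, pad_value=0.0):
        self.pad_value = pad_value

    def __call__(self, examples):
        valid_lengths = [len(x) for x in examples]
        max_len = max(valid_lengths)
        # Pad every sequence on the right up to the longest one.
        padded = [list(x) + [self.pad_value] * (max_len - len(x))
                  for x in examples]
        return padded, valid_lengths


batch_fn = PadCollate()
batch, lengths = batch_fn([[1.0, 2.0, 3.0], [4.0]])
print(batch)    # [[1.0, 2.0, 3.0], [4.0, 0.0, 0.0]]
print(lengths)  # [3, 1]
```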
### Sampler

Equipped with a batch function (we know __how to batch__), here comes the next question: __what to batch?__ We need to decide which examples to pick when creating a batch. Since a dataset is a list of examples, we only need to pick indices of the corresponding examples. A sampler object is what we use to do this.

A `Sampler` is represented as an iterable object over integers. Assume the dataset has `N` examples; then an iterable object over integers in the range `[0, N)` is an appropriate sampler for this dataset when building a `DataCargo`.

We provide several ready-to-use samplers, for example, `SequentialSampler` and `RandomSampler`.

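Conceptually these samplers are tiny. A sketch of the idea (not parakeet's actual classes):

```python
import random


class SequentialSampler:
    """Yields 0, 1, ..., N-1 in order."""

    def __init__(self, data_source):
        self.n = len(data_source)

    def __iter__(self):
        return iter(range(self.n))


class RandomSampler:
    """Yields a fresh random permutation of [0, N) on each iteration."""

    def __init__(self, data_source, seed=None):
        self.n = len(data_source)
        self.rng = random.Random(seed)

    def __iter__(self):
        indices = list(range(self.n))
        self.rng.shuffle(indices)
        return iter(indices)


data = ["a", "b", "c", "d"]
print(list(SequentialSampler(data)))        # [0, 1, 2, 3]
print(sorted(RandomSampler(data, seed=0)))  # [0, 1, 2, 3]
```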
## DataIterator

A `DataIterator` is what `iter(data_cargo)` returns. It can be iterated over only once.

Here's the analogy:

```text
Dataset   --> Iterable[Example] | iter(Dataset)   -> Iterator[Example]
DataCargo --> Iterable[Batch]   | iter(DataCargo) -> Iterator[Batch]
```

To construct an iterable of batches from an iterable of examples, we construct a `DataCargo` from a `Dataset`.

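Putting the pieces together, a `DataCargo` is essentially "sampler indices, grouped by batch size, fed through the batch function". The following is a conceptual sketch of that composition, not the real implementation:

```python
class DataCargo:
    """Iterable of batches; re-iterable because __iter__ builds a
    fresh generator from the sampler each time."""

    def __init__(self, dataset, batch_fn, batch_size, sampler):
        self.dataset = dataset
        self.batch_fn = batch_fn
        self.batch_size = batch_size
        self.sampler = sampler

    def __iter__(self):
        buffer = []
        for idx in self.sampler:
            buffer.append(self.dataset[idx])
            if len(buffer) == self.batch_size:
                yield self.batch_fn(buffer)
                buffer = []
        if buffer:  # last, possibly smaller, batch
            yield self.batch_fn(buffer)


cargo = DataCargo([1, 2, 3, 4, 5], batch_fn=tuple, batch_size=2,
                  sampler=range(5))
print(list(cargo))  # [(1, 2), (3, 4), (5,)]
```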
## Code Example

Here's an example of how we use `parakeet.data` to process the `LJSpeech` dataset for a WaveNet model.

First, we define a class that represents the LJSpeech dataset and loads it as-is. We try not to apply any preprocessing here.

```python
import csv
import numpy as np
import librosa
from pathlib import Path
import pandas as pd

from parakeet.data import DatasetMixin
from parakeet.data import batch_spec, batch_wav


class LJSpeechMetaData(DatasetMixin):
    def __init__(self, root):
        self.root = Path(root)
        self._wav_dir = self.root.joinpath("wavs")
        csv_path = self.root.joinpath("metadata.csv")
        self._table = pd.read_csv(
            csv_path,
            sep="|",
            header=None,
            quoting=csv.QUOTE_NONE,
            names=["fname", "raw_text", "normalized_text"])

    def get_example(self, i):
        fname, raw_text, normalized_text = self._table.iloc[i]
        fname = str(self._wav_dir.joinpath(fname + ".wav"))
        return fname, raw_text, normalized_text

    def __len__(self):
        return len(self._table)
```
|
||||
|
||||
We make this dataset simple in purpose. It requires only the path of the dataset, nothing more. It only loads the `metadata.csv` in the dataset when it is initialized, which includes file names of the audio files, and the transcriptions. We do not even load the audio files at `get_example()`.
|
||||
|
||||
Then we define a `Transform` object to transform an example of `LJSpeechMetaData` into an example we want for the model.
|
||||
|
||||
```python
class Transform(object):
    def __init__(self, sample_rate, n_fft, win_length, hop_length, n_mels):
        self.sample_rate = sample_rate
        self.n_fft = n_fft
        self.win_length = win_length
        self.hop_length = hop_length
        self.n_mels = n_mels

    def __call__(self, example):
        wav_path, _, _ = example

        sr = self.sample_rate
        n_fft = self.n_fft
        win_length = self.win_length
        hop_length = self.hop_length
        n_mels = self.n_mels

        wav, loaded_sr = librosa.load(wav_path, sr=None)
        assert loaded_sr == sr, "sample rate does not match, resampling applied"

        # Pad audio to the right size.
        frames = int(np.ceil(float(wav.size) / hop_length))
        fft_padding = (n_fft - hop_length) // 2
        desired_length = frames * hop_length + fft_padding * 2
        pad_amount = (desired_length - wav.size) // 2

        if wav.size % 2 == 0:
            wav = np.pad(wav, (pad_amount, pad_amount), mode='reflect')
        else:
            wav = np.pad(wav, (pad_amount, pad_amount + 1), mode='reflect')

        # Normalize audio.
        wav = wav / np.abs(wav).max() * 0.999

        # Compute the magnitude spectrogram.
        # center=False prevents librosa's internal padding.
        spectrogram = librosa.core.stft(
            wav,
            hop_length=hop_length,
            win_length=win_length,
            n_fft=n_fft,
            center=False)
        spectrogram_magnitude = np.abs(spectrogram)

        # Compute the mel spectrogram.
        mel_filter_bank = librosa.filters.mel(sr=sr,
                                              n_fft=n_fft,
                                              n_mels=n_mels)
        mel_spectrogram = np.dot(mel_filter_bank, spectrogram_magnitude)

        # Rescale the mel spectrogram to [0, 1].
        min_level, ref_level = 1e-5, 20  # hard-coded
        mel_spectrogram = 20 * np.log10(np.maximum(min_level, mel_spectrogram))
        mel_spectrogram = mel_spectrogram - ref_level
        mel_spectrogram = np.clip((mel_spectrogram + 100) / 100, 0, 1)

        # Extract the center of the audio that corresponds to the mel spectrogram.
        audio = wav[fft_padding:-fft_padding]
        assert mel_spectrogram.shape[1] * hop_length == audio.size

        # There is no clipping here.
        return audio, mel_spectrogram
```
|
||||
|
||||
`Transform` loads the audio files, and extracts `mel_spectrogram` from them. This transformation actually needs a lot of options to specify, namely, the sample rate of the audio files, the `n_fft`, `win_length`, `hop_length` of `stft` transformation, and `n_mels` for transforming spectrogram into mel_spectrogram. So we define it as a callable class. You can also use a closure, or a `partial` if you want to.
|
||||
|
||||
Then we defines a functor to batch examples into a batch. Because the two fields ( `audio` and `mel_spectrogram`) are both sequences, batching them is not trivial. Also, because the wavenet model trains in audio clips of a fixed length(0.5 seconds, for example), we have to truncate the audio when creating batches. We want to crop audio randomly when creating batches, instead of truncating them when preprocessing each example, because it allows for an audio to be truncated at different positions.
|
||||
|
||||
```python
class DataCollector(object):
    def __init__(self,
                 context_size,
                 sample_rate,
                 hop_length,
                 train_clip_seconds,
                 valid=False):
        frames_per_second = sample_rate // hop_length
        train_clip_frames = int(
            np.ceil(train_clip_seconds * frames_per_second))
        context_frames = context_size // hop_length
        self.num_frames = train_clip_frames + context_frames

        self.sample_rate = sample_rate
        self.hop_length = hop_length
        self.valid = valid

    def random_crop(self, sample):
        audio, mel_spectrogram = sample
        audio_frames = int(audio.size) // self.hop_length
        max_start_frame = audio_frames - self.num_frames
        assert max_start_frame >= 0, "audio is too short to be cropped"

        frame_start = np.random.randint(0, max_start_frame)
        frame_end = frame_start + self.num_frames

        audio_start = frame_start * self.hop_length
        audio_end = frame_end * self.hop_length

        audio = audio[audio_start:audio_end]
        return audio, mel_spectrogram, audio_start

    def __call__(self, samples):
        # Crop the samples first (no cropping for validation).
        if self.valid:
            samples = [(audio, mel_spectrogram, 0)
                       for audio, mel_spectrogram in samples]
        else:
            samples = [self.random_crop(sample) for sample in samples]
        # Then batch them.
        audios = [sample[0] for sample in samples]
        audio_starts = [sample[2] for sample in samples]
        mels = [sample[1] for sample in samples]

        mels = batch_spec(mels)

        if self.valid:
            audios = batch_wav(audios, dtype=np.float32)
        else:
            audios = np.array(audios, dtype=np.float32)
        audio_starts = np.array(audio_starts, dtype=np.int64)
        return audios, mels, audio_starts
```
|
||||
|
||||
When these 3 components are defined, we can start building our dataset with them.
|
||||
|
||||
```python
# Build the LJSpeech dataset.
ljspeech_meta = LJSpeechMetaData(root)
transform = Transform(sample_rate, n_fft, win_length, hop_length, n_mels)
ljspeech = TransformDataset(ljspeech_meta, transform)

# Split it into train and valid datasets.
ljspeech_valid = SliceDataset(ljspeech, 0, valid_size)
ljspeech_train = SliceDataset(ljspeech, valid_size, len(ljspeech))

# Build batch functions (they can be different for training and validation if needed).
train_batch_fn = DataCollector(context_size, sample_rate, hop_length,
                               train_clip_seconds)
valid_batch_fn = DataCollector(
    context_size, sample_rate, hop_length, train_clip_seconds, valid=True)

# Build the data cargos.
train_cargo = DataCargo(
    ljspeech_train,
    train_batch_fn,
    batch_size,
    sampler=RandomSampler(ljspeech_train))

valid_cargo = DataCargo(
    ljspeech_valid,
    valid_batch_fn,
    batch_size=1,  # only batch_size=1 is enabled for validation
    sampler=SequentialSampler(ljspeech_valid))
```
|
||||
|
||||
Here comes the next question, how to bring batches into Paddle's computation. Do we need some adapter to transform numpy.ndarray into Paddle's native Variable type? Yes.
|
||||
|
||||
First we can use `var = dg.to_variable(array)` to transform ndarray into Variable.
|
||||
|
||||
```python
for batch in train_cargo:
    audios, mels, audio_starts = batch
    audios = dg.to_variable(audios)
    mels = dg.to_variable(mels)
    audio_starts = dg.to_variable(audio_starts)

    # your training code here
```

In the code above, data processing and model training run in the same process, so the next batch starts loading only after training on the current batch has finished. There is actually a better solution: data processing and model training can run asynchronously. To accomplish this, we use `DataLoader` from Paddle. It serves as an adapter that transforms one iterable of batches into another one that runs asynchronously and transforms each ndarray into a `Variable`.

```python
# Connect our data cargos with the corresponding DataLoaders;
# now the data cargos are connected with Paddle.
with dg.guard(place):
    train_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True).set_batch_generator(train_cargo, place)
    valid_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True).set_batch_generator(valid_cargo, place)

    # Iterate over the DataLoader.
    for batch in train_loader:
        audios, mels, audio_starts = batch
        # your training script here
```

@@ -0,0 +1,216 @@

# Data Preparation

This section describes the design of the `parakeet.data` submodule and how to use it in an experiment.

`parakeet.data` follows Paddle's customary data preparation workflow: Dataset, Sampler, batch function, DataLoader.

## Dataset

We assume a dataset is a list of examples. You can get its length via the `__len__` method and access its elements randomly via the `__getitem__` method. With these two conventions, `iter(dataset)` also gives us an iterator over the dataset. We generally create our own dataset by subclassing `paddle.io.Dataset` and implementing the `__len__` and `__getitem__` methods.

Depending on preprocessing cost, loading cost, and dataset size, several strategies can be used to decide whether a dataset is preprocessed lazily, loaded lazily, or kept resident in memory.

1. All data is preprocessed when the dataset is instantiated and kept resident in memory. This suits datasets that are small and fast to preprocess: because all preprocessing happens at instantiation time, preprocessing must be fast, or you will spend a long time waiting for the dataset to be built; and because the processed dataset stays in memory, the dataset must be small enough to fit.
2. Each example is preprocessed when it is requested, and the result is cached. This can be implemented by calling a per-example preprocessing method inside the dataset's `__getitem__`. It still requires that the data fit in memory, but you do not have to wait long for instantiation. With this strategy, accessing examples becomes noticeably faster once the dataset has been iterated through, since no reprocessing is needed; but the first pass still processes on the fly, so a fair measurement of iteration speed requires one full pass over the dataset first.
3. Preprocess the whole dataset once and save the results, then treat the saved results as another dataset whose `__getitem__` simply reads from storage. Data loading rarely bottlenecks model training, and this approach does not require the whole dataset to fit in memory, so it is quite flexible. The cost is a separate preprocessing script, plus a dataset class written for the processed data.

These three strategies are only a conceptual division; in practice we may mix them. For example:

1. Among the fields of an example, small ones such as text may stay resident in memory, while audio, spectrograms, or images may be preprocessed and stored in advance, loading only the processed results at access time.
2. For datasets that are large or slow to preprocess, we can load only a small metadata table containing features useful for sorting or filtering examples; we can then sort or filter the data using this metadata, without loading whole examples.

In general, we view a `Dataset` subclass as an adapter between the dataset and the concrete needs of an experiment.

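Strategy 2 above (process on first request, then cache) can be sketched as follows. This is an illustration of the idea in plain Python, not parakeet's `CacheDataset`:

```python
class CachedDataset:
    """Preprocess each example on first access and cache the result
    (strategy 2; assumes the processed data fits in memory)."""

    def __init__(self, raw, preprocess):
        self.raw = raw
        self.preprocess = preprocess
        self._cache = {}

    def __len__(self):
        return len(self.raw)

    def __getitem__(self, i):
        if i not in self._cache:
            self._cache[i] = self.preprocess(self.raw[i])
        return self._cache[i]


calls = []

def expensive(x):
    calls.append(x)       # record how many times we actually preprocess
    return x * 10

ds = CachedDataset([1, 2, 3], expensive)
print(ds[1], ds[1])  # 20 20
print(len(calls))    # 1  (the second access hits the cache)
```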
parakeet also provides several higher-order Dataset classes for deriving new Datasets from existing ones:

1. TupleDataset and DictDataset, for combining fields;
2. SliceDataset, SubsetDataset, and ChainDataset, for splitting and concatenating datasets;
3. CacheDataset, for caching a dataset;
4. FilterDataset, for filtering a dataset;
5. TransformDataset, for transforming a dataset.

These higher-order datasets can be combined to make data processing more flexible.

## DataLoader

Like a `Dataset`, a `DataLoader` is an iterable object, but it usually iterates over batches. We need a `DataLoader` in deep learning because grouping examples into batches makes full use of the computational resources of modern hardware. A `DataLoader` is built from a `Dataset` and can be iterated over multiple times.

Besides a `Dataset`, building a `DataLoader` requires two more ingredients:

1. how to form a batch;
2. how to select the examples that form a batch.

The next two subsections cover these two ingredients.

### batch function

A batch is the result of transforming a list of examples. Assume an example is a structure with several fields (implemented differently across programming languages: a tuple or dict in Python, a struct in C/C++). A list of examples is then an array of structures (AOS). For training neural networks, we want a batch to be, like an example, a single structure with several fields, so we need a way to turn an array of structures (AOS) into a structure of arrays (SOA).

Here is a simple example:

The table below represents two examples, each containing 5 fields.

| weight | height | width | depth | density |
| ------ | ------ | ----- | ----- | ------- |
| 1.2    | 1.1    | 1.3   | 1.4   | 0.8     |
| 1.6    | 1.4    | 1.2   | 0.6   | 1.4     |

The AOS and SOA representations of this table are shown below.

AOS:

```text
[(1.2, 1.1, 1.3, 1.4, 0.8),
 (1.6, 1.4, 1.2, 0.6, 1.4)]
```

SOA:

```text
([1.2, 1.6],
 [1.1, 1.4],
 [1.3, 1.2],
 [1.4, 0.6],
 [0.8, 1.4])
```

For the example above, converting AOS to SOA is trivial: just stack each field across all examples. But it is not always so simple. When a field contains a sequence, you may have to pad all sequences to the longest length before stacking them. In some cases a batch even has more fields than an example: for examples containing sequences, an extra field may be added after padding to record the valid length of each sequence. So in general a function is needed to do this, and it goes hand in hand with the dataset. Besides plain functions, any callable object can be used; we call these batch functions.
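The pad-then-stack conversion just described can be sketched in plain numpy; `batch_sequences` is an illustrative name, not a parakeet API.

```python
import numpy as np


def batch_sequences(examples, pad_value=0.0):
    """Turn a list of 1-D arrays of varying length (AOS) into a padded
    2-D array plus a field of valid lengths (SOA)."""
    max_len = max(len(x) for x in examples)
    # the extra field that records each sequence's valid length
    lengths = np.array([len(x) for x in examples])
    padded = np.stack([
        np.pad(x, (0, max_len - len(x)), mode="constant",
               constant_values=pad_value)
        for x in examples
    ])  # shape: (batch_size, max_len)
    return padded, lengths


batch, lengths = batch_sequences(
    [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0])])
```

The shorter sequence is padded with `pad_value` up to the longest length, and `lengths` preserves the information the padding would otherwise destroy.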
### Sampler

With a batch function we know how to form a batch; the next question is what goes into it. When assembling a batch we have to decide which examples to pick. Since we assume the dataset is randomly accessible, it suffices to pick the corresponding indices; a sampler performs this index selection.

A sampler is implemented as an iterable object that yields integers. If the dataset has `N` examples, any iterator yielding integers in `[0, N)` is suitable. The most commonly used samplers are `SequentialSampler` and `RandomSampler`.

When a `DataLoader` is iterated, the sampler first yields several indices, the corresponding examples are fetched, and the batch function combines them into a batch. Fetching examples can be parallelized, but the call to the batch function cannot.

An alternative is a batch sampler, an iterable object that yields lists of integers. With an ordinary sampler, `next` must be called on its iterator several times to produce several indices, while one `next` on a batch sampler's iterator yields several indices at once. With an ordinary sampler, the batch size is decided by the `DataLoader`; with a batch sampler, the batch sampler itself decides the `DataLoader`'s batch size, which makes it possible to implement special needs such as dynamic batch sizes.
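The sampler and batch-sampler protocols above can be sketched as plain iterables of indices; these are simplified stand-ins, not parakeet's actual classes.

```python
import random


class SequentialSampler:
    """Yields 0, 1, ..., n-1 in order."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return iter(range(self.n))


class RandomSampler:
    """Yields a random permutation of 0, 1, ..., n-1."""

    def __init__(self, n):
        self.n = n

    def __iter__(self):
        indices = list(range(self.n))
        random.shuffle(indices)
        return iter(indices)


class SimpleBatchSampler:
    """Wraps a sampler; one next() on its iterator yields a whole
    list of indices, so the batch size is decided here."""

    def __init__(self, sampler, batch_size):
        self.sampler = sampler
        self.batch_size = batch_size

    def __iter__(self):
        batch = []
        for idx in self.sampler:
            batch.append(idx)
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if batch:  # the last, possibly smaller batch
            yield batch


batches = list(SimpleBatchSampler(SequentialSampler(5), batch_size=2))
```

A dynamic-batch-size policy would only need a different grouping rule inside `__iter__`.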
## Example code

The following code uses `parakeet.data` to process the `LJSpeech` dataset.

First, we define a class representing the LJSpeech dataset. It loads the metadata as-is, i.e. the dataset's `metadata.csv` file, which records the audio file names and their transcriptions, but it loads no audio and does no preprocessing. We deliberately keep this dataset simple; it needs only the dataset path to be instantiated.
```python
import numpy as np
from pathlib import Path
from paddle.io import Dataset

from parakeet.data import batch_spec, batch_text_id


class LJSpeechMetaData(Dataset):
    def __init__(self, root):
        self.root = Path(root).expanduser()
        wav_dir = self.root / "wavs"
        csv_path = self.root / "metadata.csv"
        records = []
        speaker_name = "ljspeech"
        with open(str(csv_path), 'rt') as f:
            for line in f:
                filename, _, normalized_text = line.strip().split("|")
                filename = str(wav_dir / (filename + ".wav"))
                records.append([filename, normalized_text, speaker_name])
        self.records = records

    def __getitem__(self, i):
        return self.records[i]

    def __len__(self):
        return len(self.records)
```
Next we define a `Transform` class that processes examples from `LJSpeechMetaData` into the data the model needs. Different models can define different Transforms, so the `LJSpeechMetaData` code can be shared.
```python
import numpy as np

from parakeet.audio import AudioProcessor
from parakeet.audio import LogMagnitude
from parakeet.frontend import English


class Transform(object):
    def __init__(self):
        self.frontend = English()
        self.processor = AudioProcessor(
            sample_rate=22050,
            n_fft=1024,
            win_length=1024,
            hop_length=256,
            f_max=8000)
        self.normalizer = LogMagnitude()

    def __call__(self, record):
        fname, text, _ = record
        wav = self.processor.read_wav(fname)
        mel = self.processor.mel_spectrogram(wav)
        mel = self.normalizer.transform(mel)
        phonemes = self.frontend.phoneticize(text)
        ids = self.frontend.numericalize(phonemes)
        stop_probs = np.ones([mel.shape[1]], dtype=np.int64)
        stop_probs[-1] = 2
        return (ids, mel, stop_probs)
```
`Transform` loads the audio and extracts the spectrogram. Implementing `Transform` as a callable class makes it convenient to hold the many options involved, such as the parameters of the Fourier transform. An `LJSpeechMetaData` object and a `Transform` object can then be combined to create a `TransformDataset`.

```python
from parakeet.data import TransformDataset

meta = LJSpeechMetaData(data_path)
transform = Transform()
ljspeech = TransformDataset(meta, transform)
```

Alternatively, one can write a dedicated conversion script that saves the transformed dataset, plus an adapted Dataset subclass that loads the saved data. In practice this is more efficient.
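The preprocess-and-save approach can be sketched as follows; the "feature extraction" here is a trivial stand-in, and all names are illustrative.

```python
import tempfile
from pathlib import Path

import numpy as np


def preprocess(records, out_dir):
    """One-off script: compute features once and save them to disk."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for name, samples in records:
        # stand-in for real feature extraction (e.g. a mel spectrogram)
        feature = np.asarray(samples) * 2.0
        np.save(out_dir / (name + ".npy"), feature)


class PreprocessedDataset:
    """__getitem__ only reads a saved result back from storage."""

    def __init__(self, out_dir):
        self.files = sorted(Path(out_dir).glob("*.npy"))

    def __getitem__(self, i):
        return np.load(self.files[i])

    def __len__(self):
        return len(self.files)


work_dir = tempfile.mkdtemp()
preprocess([("a", [1.0, 2.0]), ("b", [3.0])], work_dir)
dataset = PreprocessedDataset(work_dir)
```

The expensive transform runs once in `preprocess`; afterwards `__getitem__` is pure I/O, which is why this variant trains faster.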
Next we write a callable object that combines several examples into a batch. Since the ids and the mel spectrograms are sequence data, padding is needed.
```python
class LJSpeechCollector(object):
    """A simple callable to batch LJSpeech examples."""

    def __init__(self, padding_idx=0, padding_value=0.):
        self.padding_idx = padding_idx
        self.padding_value = padding_value

    def __call__(self, examples):
        ids = [example[0] for example in examples]
        mels = [example[1] for example in examples]
        stop_probs = [example[2] for example in examples]

        ids = batch_text_id(ids, pad_id=self.padding_idx)
        mels = batch_spec(mels, pad_value=self.padding_value)
        stop_probs = batch_text_id(stop_probs, pad_id=self.padding_idx)
        return ids, np.transpose(mels, [0, 2, 1]), stop_probs
```
With the components above in place, we can set up the whole data pipeline.
```python
from paddle.io import DataLoader
from parakeet.data import dataset


def create_dataloader(source_path, valid_size, batch_size):
    lj = LJSpeechMetaData(source_path)
    transform = Transform()
    lj = TransformDataset(lj, transform)

    valid_set, train_set = dataset.split(lj, valid_size)
    train_loader = DataLoader(
        train_set,
        return_list=False,
        batch_size=batch_size,
        shuffle=True,
        drop_last=True,
        collate_fn=LJSpeechCollector())
    valid_loader = DataLoader(
        valid_set,
        return_list=False,
        batch_size=batch_size,
        shuffle=False,
        drop_last=False,
        collate_fn=LJSpeechCollector())
    return train_loader, valid_loader
```
train_loader and valid_loader can then be iterated over. Calling next on their iterators returns a list of `paddle.Tensor`s representing one batch, which can be used directly as inputs to a `paddle.nn.Layer`.

@ -1,87 +0,0 @@

# How to build your own model and experiment?

For a general deep learning experiment, there are 4 parts to care for.

1. Preprocess the dataset to meet the needs of model training, and iterate over it in batches;
2. Define the model and the optimizer;
3. Write the training process (including forward-backward computation, parameter updates, logging, evaluation, etc.);
4. Configure and launch the experiment.

## Data Processing

For processing data, `parakeet.data` provides `DatasetMixin`, `DataCargo` and `DataIterator`.

A dataset is an iterable object of examples. `DatasetMixin` provides the standard indexing interface, and other classes in [parakeet.data.dataset](../parakeet/data/dataset.py) provide flexible interfaces for building customized datasets.

`DataCargo` is an iterable object of batches. It differs from a dataset in that it is iterated over in batches. In addition to a dataset, a `Sampler` and a batch function are required to build a `DataCargo`: the `Sampler` specifies which examples to pick, and the batch function specifies how to create a batch from them. Commonly used `Sampler`s are provided by [parakeet.data](../parakeet/data/). Users should define a batch function for a dataset in order to batch its examples.

`DataIterator` is an iterator class for `DataCargo`. It is created when explicitly creating an iterator of a `DataCargo` by `iter(DataCargo)`, or when iterating over a `DataCargo` with a `for` loop.

Data processing is split into two phases: example-level processing and batching.

1. Example-level processing transforms one example into another. It can be defined as the `get_example()` method of a dataset, or as a `transform` (a callable object) used to build a `TransformDataset`.

2. Batching transforms a list of examples into a batch. The rationale is to transform an array of structures into a structure of arrays. We generally define a batch function (or a callable object) to do this.

To connect a `DataCargo` with PaddlePaddle's asynchronous data loading mechanism, create a `fluid.io.DataLoader` and connect it to the `DataCargo`.

The overview of data processing in an experiment with Parakeet is:

```text
Dataset --(transform)--> Dataset --+
                         sampler --+
                        batch_fn --+-> DataCargo --> DataLoader
```

The user needs to define a customized transform and a batch function to accomplish this process. See [data](./data.md) for more details.
## Model

Parakeet provides commonly used functions, modules and models for users to define their own models. Functions contain no trainable `Parameter`s and are used in modules and models. Modules and models are subclasses of `fluid.dygraph.Layer`. The distinction is that modules tend to be generic, simple and highly reusable, while models tend to be task-specific, complicated and not that reusable. Some models are so complicated that we extract building blocks from them as separate classes; if these building blocks are not common and reusable enough, they are considered submodels.

In the structure of the project, modules are placed in [parakeet.modules](../parakeet/modules/), while models are in [parakeet.models](../parakeet/models) and grouped into folders like `waveflow` and `wavenet`, which include the whole model and its submodels.

When developers want to add new models to `parakeet`, they can consider the distinctions described above and put the code in an appropriate place.
## Training Process

The training process is basically running a training loop multiple times. A typical training loop consists of the procedures below:

1. Iterating over the training dataset;
2. Preprocessing mini-batches;
3. Forward/backward computation of the neural networks;
4. Updating parameters;
5. Evaluating the model on the validation dataset;
6. Logging or saving intermediate results;
7. Saving checkpoints of the model and the optimizer.

The `Data Processing` section covers 1 and 2; `Model` and `Optimizer` cover 3 and 4.

To keep the training loop clear, it is a good idea to define functions for saving/loading checkpoints, evaluation on the validation set, logging and saving of intermediate results, etc. For complicated models it is also recommended to define a function that creates the model; this function can be used in both training and inference to ensure the model is identical in the two settings.
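The seven procedures above can be sketched as a framework-agnostic skeleton; every name below is illustrative, and in a real experiment the callbacks would call into the model, optimizer, logger and checkpoint helpers.

```python
def train(model_step, train_batches, valid_batches, max_steps,
          eval_interval, save_interval, evaluate, save_checkpoint, log):
    """Skeleton of the typical training loop; all behavior is injected
    through callables so the loop body itself stays clear."""
    step = 0
    while step < max_steps:
        for batch in train_batches:      # 1. iterate over the training set
            loss = model_step(batch)     # 2-4. preprocess, forward/backward, update
            step += 1
            log(step, loss)              # 6. logging / intermediate results
            if step % eval_interval == 0:
                evaluate(valid_batches)  # 5. evaluation on the validation set
            if step % save_interval == 0:
                save_checkpoint(step)    # 7. checkpointing
            if step >= max_steps:
                break


calls = {"log": 0, "eval": 0, "save": 0}
train(model_step=lambda batch: 0.0,
      train_batches=[1, 2, 3, 4],
      valid_batches=[],
      max_steps=4,
      eval_interval=2,
      save_interval=4,
      evaluate=lambda batches: calls.__setitem__("eval", calls["eval"] + 1),
      save_checkpoint=lambda step: calls.__setitem__("save", calls["save"] + 1),
      log=lambda step, loss: calls.__setitem__("log", calls["log"] + 1))
```

Passing the helpers in as functions is one way to keep the loop body readable, as recommended above.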
Code is typically organized in this way:

```text
├── configs/      (example configurations)
├── data.py       (definition of custom Dataset, transform and batch function)
├── README.md     (README for the experiment)
├── synthesis.py  (code for inference)
├── train.py      (code for training)
└── utils.py      (all other utility functions)
```
## Configuration

Deep learning experiments have many options to configure. These configurations can be roughly grouped into different types: configurations about the path of the dataset and the path to save results, configurations about how to process data, configurations about the model, and configurations about the training process.

Some configurations tend to change between runs, for example the path of the data, the path to save results, and whether to load a model before training. For these configurations, it is better to define them as command line arguments; we use `argparse` to handle them.

Other groups of configurations may overlap with each other. For example, data processing and the model may share some options. The recommended way is to save them as configuration files, for example in `yaml` or `json`. We prefer `yaml`, for it is more human-readable.
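A sketch of such an `argparse` front end; the argument names mirror the examples in this repo but are otherwise illustrative.

```python
import argparse


def parse_args(argv=None):
    """Frequently-changed options go on the command line; the rest
    would live in a yaml config file referenced by --config."""
    parser = argparse.ArgumentParser(description="launch an experiment")
    parser.add_argument("--config", type=str, default="configs/default.yaml",
                        help="path of the yaml config file (stable options)")
    parser.add_argument("--data", type=str, required=True,
                        help="path of the dataset")
    parser.add_argument("--checkpoint", type=str, default=None,
                        help="checkpoint to resume from, if any")
    parser.add_argument("output", type=str,
                        help="directory to save experiment results")
    return parser.parse_args(argv)


args = parse_args(["--data", "./LJSpeech-1.1", "experiment"])
```

At startup the script would then read the yaml file named by `args.config` and merge it with these command line options.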
There are several examples in this repo; check [Parakeet/examples](../examples) for more details. `Parakeet/examples` is where we place our experiments. Though experiments are not part of the package `parakeet`, they are part of the repo `Parakeet`. They are provided as examples so that users can run our experiments out of the box. Feel free to add new examples and contribute to `Parakeet`.

@ -0,0 +1,79 @@

# How to prepare your own experiments

For a general deep learning experiment, several parts need handling.

1. Preprocess the data as the model requires, and iterate over the dataset in batches;
2. Define the model, the optimizer and other components;
3. Write the training process (usually including forward/backward computation, parameter updates, logging, visualization, and periodic evaluation);
4. Configure and run the experiment.

## Data processing

For data processing, `parakeet.data` adopts the `Dataset -> DataLoader` workflow commonly used with paddlepaddle. An overview of the data pipeline:

```text
Dataset --(transform)--> Dataset --+
                         sampler --+
                        batch_fn --+-> DataLoader
```

Here transform stands for example-level preprocessing. `TransformDataset` from `parakeet.data` can be used to build one Dataset from another.

Once the desired Dataset is obtained, providing a sampler and a batch function is enough to build a DataLoader. The batches it produces can be used directly as model inputs.

See [data_cn](./data_cn.md) for detailed usage.
## Model

To strike a good balance between reusability and functionality, we group models by their characteristics.

Modules that are commonly used and can serve as parts of larger models are kept as simple and generic as possible, because they will be reused. Modules with trainable parameters are generally implemented as subclasses of `paddle.nn.Layer`, but they are not aimed directly at one task, so they do not carry the functionality of handling raw inputs and outputs. Modules without trainable parameters can be implemented directly as functions whose inputs and outputs are `paddle.Tensor`s or collections of them.

An out-of-the-box model for a specific task is generally implemented as a subclass of `paddle.nn.Layer` and is the core computation unit of that task. To handle inputs and outputs conveniently, it may additionally process raw inputs and outputs. For an NLP task, for instance, although the neural network consumes text ids, text preprocessing and the text-to-id vocabulary should also be seen as part of the model, so that the model can handle raw input.

When a model is complex enough, splitting it into modules is the better choice. Even if the extracted submodules are not particularly generic and may only ever be used by that one model, doing so is still recommended whenever it keeps the code clear and concise.

In parakeet's directory structure, highly reusable modules live in [parakeet.modules](../parakeet/modules/), while task-specific models live in [parakeet.models](../parakeet/models).

When developing a new model, developers should consider how feasible it is to split out modules and how generic those modules are, and place them in the appropriate directory.
## Training process

The training process is basically running a training loop multiple times. A typical loop body contains the following steps:

1. Iterating over the dataset;
2. Processing batch data;
3. Forward/backward computation of the neural network;
4. Parameter updates;
5. Evaluating the model on the validation set when certain conditions are met;
6. Logging, visualization, and saving intermediate results;
7. Saving the states of the model and the optimizer.

The `Data processing` section covers 1 and 2, and the model and optimizer cover 3 and 4, so 5, 6 and 7 are what the training process mainly has to accomplish. To keep the loop body clear and concise, it is recommended to implement checkpoint saving/loading, model evaluation, logging and visualization as functions, even though in many cases they need access to many local variables. We are also considering an Experiment or Trainer class to standardize how such training loops are written: variables accessed by many functions can become instance attributes, keeping the code concise without introducing too many globals.
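What such an Experiment/Trainer class might look like, as a minimal sketch; this is purely illustrative, and no such class exists in parakeet at the time of writing.

```python
class Trainer:
    """Sketch of the Experiment/Trainer idea: state shared by many helpers
    lives on the instance instead of in globals or long argument lists."""

    def __init__(self, model_step, batches, max_steps):
        self.model_step = model_step
        self.batches = batches
        self.max_steps = max_steps
        self.step = 0
        self.history = []

    def log(self, loss):
        # helpers read shared state (self.step) without extra arguments
        self.history.append((self.step, loss))

    def run(self):
        while self.step < self.max_steps:
            for batch in self.batches:
                loss = self.model_step(batch)
                self.step += 1
                self.log(loss)
                if self.step >= self.max_steps:
                    return


trainer = Trainer(model_step=lambda batch: float(batch),
                  batches=[1, 2], max_steps=3)
trainer.run()
```

Evaluation, checkpointing and visualization would become further methods that share the same instance state.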
Experiment code is generally organized as follows:

```text
├── configs/      (experiment configurations)
├── data.py       (definitions of Dataset, DataLoader, etc.)
├── README.md     (help information for the experiment)
├── synthesis.py  (code for synthesis)
├── train.py      (code for training)
└── utils.py      (other necessary utility functions)
```
## Configuring experiments

Deep learning experiments often have many configurable options, which roughly fall into several categories:

1. data source and data loading configuration;
2. paths for saving experiment results;
3. data preprocessing configuration;
4. model architecture and hyperparameters;
5. training process configuration.

Some of these categories may overlap: data preprocessing options may also be model options, for example the number of mel bands.

Some configurations change frequently, such as the data source, the path for saving results, or the path of a checkpoint to load. These are better implemented as command line arguments. The remaining, rarely changed options are best written in a configuration file; we recommend `yaml`, since it allows comments and is more human-readable.

This repo contains several examples; see [Parakeet/examples](../examples). These experiments are provided as samples that users can run directly. Users are also welcome to add new models and experiments and contribute code to `Parakeet`.

@ -0,0 +1,57 @@

# Installation

[TOC]

## Install PaddlePaddle

Parakeet uses PaddlePaddle as its backend and therefore depends on it; note that Parakeet requires PaddlePaddle 2.0 or above. You can install it via pip. To install a GPU-enabled PaddlePaddle, choose the wheel version matching the cuda and cudnn versions in your environment. For installation via conda or building from source, see [PaddlePaddle quick install](https://www.paddlepaddle.org.cn/install/quick/zh/2.0rc-linux-pip).

**PaddlePaddle, GPU version**

```bash
python -m pip install paddlepaddle-gpu==2.0.0rc0.post101 -f https://paddlepaddle.org.cn/whl/stable.html
python -m pip install paddlepaddle-gpu==2.0.0rc0.post100 -f https://paddlepaddle.org.cn/whl/stable.html
```

**PaddlePaddle, CPU version**

```bash
python -m pip install paddlepaddle==2.0.0rc0 -i https://mirror.baidu.com/pypi/simple
```

## Install libsndfile

Experiments in Parakeet often need audio and spectrogram processing, so we depend on librosa and soundfile. librosa and soundfile in turn depend on libsndfile, a C library rather than a Python package. For Windows and macOS users, libsndfile is installed along with soundfile when installing it via pip; see [SoundFile](https://pypi.org/project/SoundFile) if you run into problems.

Linux users need to install it with the system package manager; commands for common distributions are given below.

```bash
# ubuntu, debian
sudo apt-get install libsndfile1

# centos, fedora
sudo yum install libsndfile

# openSUSE
sudo zypper in libsndfile
```

## Install Parakeet

We provide two ways to use Parakeet.

1. Users who want to run the bundled experiments, or who plan further development, can clone the project from github, cd into the project directory, and perform an editable install (the package is not copied to site-packages, and changes to the project take effect immediately without reinstalling):

```bash
# -e means an editable install
pip install -e .
```

2. Users who only need inference with our pretrained models can install the wheel from pypi directly:

```bash
pip install paddle-parakeet
```

@ -0,0 +1,18 @@

# Parakeet overview

<img src="../images/logo.png" alt="parakeet-logo" style="zoom: 33%;" />

Parakeet aims to provide the open source community with a flexible, efficient and state-of-the-art speech synthesis toolkit. It is built on PaddlePaddle 2.0 and includes many influential TTS models from Baidu Research and other research institutions.

Parakeet provides users and developers with

1. reusable models and commonly used modules;
2. complete experiments covering the whole pipeline from data processing and model training to inference;
3. high-quality, out-of-the-box models.

@ -1,148 +0,0 @@

# ClariNet

PaddlePaddle dynamic graph implementation of ClariNet, a convolutional-network-based vocoder. The implementation is based on the paper [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](https://arxiv.org/abs/1807.07281).

## Dataset

We experiment with the LJSpeech dataset. Download and unzip [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```

## Project Structure

```text
├── data.py       data processing
├── configs/      (example) configuration files
├── synthesis.py  script to synthesize waveform from mel spectrogram
├── train.py      script to train a model
└── utils.py      utility functions
```
## Saving & Loading

`train.py` and `synthesis.py` have 3 arguments in common: `--checkpoint`, `--iteration` and `output`.

1. `output` is the directory for saving results.
   During training, checkpoints are saved in `checkpoints/` inside `output`, and the tensorboard log is saved in `log/` inside `output`. Other possible outputs are saved in `states/` inside `output`.
   During synthesis, audio files and other possible outputs are saved in `synthesis/` inside `output`.
   So after training and synthesizing with the same output directory, its file structure looks like this:

```text
├── checkpoints/  # checkpoint directory (including *.pdparams, *.pdopt and a text file `checkpoint` that records the latest checkpoint)
├── states/       # audio files generated at validation and other possible outputs
├── log/          # tensorboard log
└── synthesis/    # synthesized audio files and other possible outputs
```

2. `--checkpoint` and `--iteration` are for loading from an existing checkpoint. Loading an existing checkpoint follows this rule:
   If `--checkpoint` is provided, the checkpoint specified by `--checkpoint` is loaded.
   If `--checkpoint` is not provided, we try to load the checkpoint specified by `--iteration` from the checkpoint directory. If `--iteration` is not provided either, we try to load the latest checkpoint from the checkpoint directory.
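The loading rule can be summarized as the following resolution logic; this is an illustrative sketch, not the actual code in this repo, and the format of the latest-checkpoint record file is an assumption.

```python
from pathlib import Path


def resolve_checkpoint(checkpoint, iteration, output):
    """Sketch of the checkpoint-loading precedence described above."""
    checkpoint_dir = Path(output) / "checkpoints"
    if checkpoint is not None:
        # --checkpoint takes precedence when given
        return Path(checkpoint)
    if iteration is not None:
        # otherwise --iteration selects a checkpoint in the output directory
        return checkpoint_dir / "step-{}".format(iteration)
    # otherwise fall back to the latest checkpoint, read from the text
    # file `checkpoint` (assumed here to end with the newest file name)
    latest = (checkpoint_dir / "checkpoint").read_text().split()[-1]
    return checkpoint_dir / latest


path = resolve_checkpoint(None, 500000, "experiment")
```

The same three-way precedence applies to both `train.py` (resuming) and `synthesis.py` (loading for inference).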
## Train

Train the model using `train.py`; follow the usage displayed by `python train.py --help`.

```text
usage: train.py [-h] [--config CONFIG] [--device DEVICE] [--data DATA]
                [--checkpoint CHECKPOINT | --iteration ITERATION]
                [--wavenet WAVENET]
                output

Train a ClariNet model with LJspeech and a trained WaveNet model.

positional arguments:
  output                path to save experiment results

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       path of the config file
  --device DEVICE       device to use
  --data DATA           path of LJspeech dataset
  --checkpoint CHECKPOINT
                        checkpoint to resume from
  --iteration ITERATION
                        the iteration of the checkpoint to load from output directory
  --wavenet WAVENET     wavenet checkpoint to use
```

- `--config` is the configuration file to use. The provided configurations can be used directly, or you can change some values in the configuration file and train the model with a different config.
- `--device` is the device (gpu id) to use for training. `-1` means CPU.
- `--data` is the path of the LJSpeech dataset, the folder extracted from the downloaded archive (the folder which contains `metadata.csv`).
- `--checkpoint` is the path of the checkpoint.
- `--iteration` is the iteration of the checkpoint to load from the output directory.
- `output` is the directory to save results; all results are saved in this directory.
  See [Saving-&-Loading](#Saving-&-Loading) for details of checkpoint loading.
- `--wavenet` is the path of the wavenet checkpoint to load.
  When you start training a ClariNet model without loading from a ClariNet checkpoint, you should have trained a WaveNet model with a single Gaussian output distribution. Make sure the config of the teacher model matches that of the trained wavenet model.

Example script:

```bash
python train.py \
    --config=./configs/clarinet_ljspeech.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --wavenet="wavenet-step-2000000" \
    experiment
```

You can monitor the training log via tensorboard, using the script below.

```bash
cd experiment/log
tensorboard --logdir=.
```
## Synthesis

```text
usage: synthesis.py [-h] [--config CONFIG] [--device DEVICE] [--data DATA]
                    [--checkpoint CHECKPOINT | --iteration ITERATION]
                    output

Synthesize audio files from mel spectrogram in the validation set.

positional arguments:
  output                path to save the synthesized audio

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       path of the config file
  --device DEVICE       device to use
  --data DATA           path of LJspeech dataset
  --checkpoint CHECKPOINT
                        checkpoint to resume from
  --iteration ITERATION
                        the iteration of the checkpoint to load from output directory
```

- `--config` is the configuration file to use. You should use the same configuration with which you trained your model.
- `--device` is the device (gpu id) to use. `-1` means CPU.
- `--data` is the path of the LJSpeech dataset. In principle a dataset is not needed for synthesis, but since the input is mel spectrogram, we need to extract mel spectrograms from its audio files.
- `--checkpoint` is the checkpoint to load.
- `--iteration` is the iteration of the checkpoint to load from the output directory.
- `output` is the directory to save the synthesized audio. Audio files are saved in `synthesis/` inside `output`.
  See [Saving-&-Loading](#Saving-&-Loading) for details of checkpoint loading.

Example script:

```bash
python synthesis.py \
    --config=./configs/clarinet_ljspeech.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --iteration=500000 \
    experiment
```

or

```bash
python synthesis.py \
    --config=./configs/clarinet_ljspeech.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --checkpoint="experiment/checkpoints/step-500000" \
    experiment
```

@ -1,52 +0,0 @@

data:
  batch_size: 8
  train_clip_seconds: 0.5
  sample_rate: 22050
  hop_length: 256
  win_length: 1024
  n_fft: 2048
  n_mels: 80
  valid_size: 16

conditioner:
  upsampling_factors: [16, 16]

teacher:
  n_loop: 10
  n_layer: 3
  filter_size: 2
  residual_channels: 128
  loss_type: "mog"
  output_dim: 3
  log_scale_min: -9

student:
  n_loops: [10, 10, 10, 10, 10, 10]
  n_layers: [1, 1, 1, 1, 1, 1]
  filter_size: 3
  residual_channels: 64
  log_scale_min: -7

stft:
  n_fft: 2048
  win_length: 1024
  hop_length: 256

loss:
  lmd: 4

train:
  learning_rate: 0.0005
  anneal_rate: 0.5
  anneal_interval: 200000
  gradient_max_norm: 100.0
  checkpoint_interval: 1000
  eval_interval: 1000
  max_iterations: 2000000

@ -1,52 +0,0 @@

data:
  batch_size: 8
  train_clip_seconds: 0.5
  sample_rate: 22050
  hop_length: 256
  win_length: 1024
  n_fft: 2048
  n_mels: 80
  valid_size: 16

conditioner:
  upsampling_factors: [16, 16]

teacher:
  n_loop: 10
  n_layer: 3
  filter_size: 2
  residual_channels: 128
  loss_type: "mog"
  output_dim: 3
  log_scale_min: -9

student:
  n_loops: [10, 10, 10, 10, 10, 10]
  n_layers: [1, 1, 1, 1, 1, 1]
  filter_size: 3
  residual_channels: 64
  log_scale_min: -7

stft:
  n_fft: 2048
  win_length: 1024
  hop_length: 256

loss:
  lmd: 4

train:
  learning_rate: 0.0005
  anneal_rate: 0.5
  anneal_interval: 200000
  gradient_max_norm: 100.0
  checkpoint_interval: 1000
  eval_interval: 1000
  max_iterations: 2000000

@ -1,179 +0,0 @@

# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import os
import sys
import argparse
import ruamel.yaml
import random
from tqdm import tqdm
import pickle
import numpy as np

import paddle.fluid.dygraph as dg
from paddle import fluid
fluid.require_version('1.8.0')

from parakeet.modules.weight_norm import WeightNormWrapper
from parakeet.models.wavenet import WaveNet, UpsampleNet
from parakeet.models.clarinet import STFT, Clarinet, ParallelWaveNet
from parakeet.data import TransformDataset, SliceDataset, RandomSampler, SequentialSampler, DataCargo
from parakeet.utils.layer_tools import summary, freeze
from parakeet.utils import io

from utils import eval_model
sys.path.append("../wavenet")
from data import LJSpeechMetaData, Transform, DataCollector

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Synthesize audio files from mel spectrogram in the validation set."
    )
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument(
        "--device", type=int, default=-1, help="device to use.")
    parser.add_argument("--data", type=str, help="path of LJspeech dataset")

    g = parser.add_mutually_exclusive_group()
    g.add_argument("--checkpoint", type=str, help="checkpoint to resume from")
    g.add_argument(
        "--iteration",
        type=int,
        help="the iteration of the checkpoint to load from output directory")

    parser.add_argument(
        "output",
        type=str,
        default="experiment",
        help="path to save the synthesized audio")

    args = parser.parse_args()

    with open(args.config, 'rt') as f:
        config = ruamel.yaml.safe_load(f)

    if args.device == -1:
        place = fluid.CPUPlace()
    else:
        place = fluid.CUDAPlace(args.device)

    dg.enable_dygraph(place)

    ljspeech_meta = LJSpeechMetaData(args.data)

    data_config = config["data"]
    sample_rate = data_config["sample_rate"]
    n_fft = data_config["n_fft"]
    win_length = data_config["win_length"]
    hop_length = data_config["hop_length"]
    n_mels = data_config["n_mels"]
    train_clip_seconds = data_config["train_clip_seconds"]
    transform = Transform(sample_rate, n_fft, win_length, hop_length, n_mels)
    ljspeech = TransformDataset(ljspeech_meta, transform)

    valid_size = data_config["valid_size"]
    ljspeech_valid = SliceDataset(ljspeech, 0, valid_size)
    ljspeech_train = SliceDataset(ljspeech, valid_size, len(ljspeech))

    teacher_config = config["teacher"]
    n_loop = teacher_config["n_loop"]
    n_layer = teacher_config["n_layer"]
    filter_size = teacher_config["filter_size"]
    context_size = 1 + n_layer * sum([filter_size**i for i in range(n_loop)])
    print("context size is {} samples".format(context_size))
    train_batch_fn = DataCollector(context_size, sample_rate, hop_length,
                                   train_clip_seconds)
    valid_batch_fn = DataCollector(
        context_size, sample_rate, hop_length, train_clip_seconds, valid=True)

    batch_size = data_config["batch_size"]
    train_cargo = DataCargo(
        ljspeech_train,
        train_batch_fn,
        batch_size,
        sampler=RandomSampler(ljspeech_train))

    # only batch=1 for validation is enabled
    valid_cargo = DataCargo(
        ljspeech_valid,
        valid_batch_fn,
        batch_size=1,
        sampler=SequentialSampler(ljspeech_valid))

    # conditioner (upsampling net)
    conditioner_config = config["conditioner"]
    upsampling_factors = conditioner_config["upsampling_factors"]
    upsample_net = UpsampleNet(upscale_factors=upsampling_factors)
    freeze(upsample_net)

    residual_channels = teacher_config["residual_channels"]
    loss_type = teacher_config["loss_type"]
    output_dim = teacher_config["output_dim"]
    log_scale_min = teacher_config["log_scale_min"]
    assert loss_type == "mog" and output_dim == 3, \
        "the teacher wavenet should be a wavenet with single gaussian output"

    teacher = WaveNet(n_loop, n_layer, residual_channels, output_dim, n_mels,
                      filter_size, loss_type, log_scale_min)
    # load & freeze upsample_net & teacher
    freeze(teacher)

    student_config = config["student"]
    n_loops = student_config["n_loops"]
    n_layers = student_config["n_layers"]
    student_residual_channels = student_config["residual_channels"]
    student_filter_size = student_config["filter_size"]
    student_log_scale_min = student_config["log_scale_min"]
    student = ParallelWaveNet(n_loops, n_layers, student_residual_channels,
                              n_mels, student_filter_size)

    stft_config = config["stft"]
    stft = STFT(
        n_fft=stft_config["n_fft"],
        hop_length=stft_config["hop_length"],
        win_length=stft_config["win_length"])

    lmd = config["loss"]["lmd"]
    model = Clarinet(upsample_net, teacher, student, stft,
                     student_log_scale_min, lmd)
    summary(model)

    # load parameters
    if args.checkpoint is not None:
        # load from args.checkpoint
        iteration = io.load_parameters(model, checkpoint_path=args.checkpoint)
    else:
        # load from "args.output/checkpoints"
        checkpoint_dir = os.path.join(args.output, "checkpoints")
        iteration = io.load_parameters(
            model, checkpoint_dir=checkpoint_dir, iteration=args.iteration)
    assert iteration > 0, "A trained checkpoint is needed."

    # make generation fast
    for sublayer in model.sublayers():
        if isinstance(sublayer, WeightNormWrapper):
            sublayer.remove_weight_norm()

    # data loader
    valid_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True)
    valid_loader.set_batch_generator(valid_cargo, place)

    # the directory to save audio files
    synthesis_dir = os.path.join(args.output, "synthesis")
    if not os.path.exists(synthesis_dir):
        os.makedirs(synthesis_dir)

    eval_model(model, valid_loader, synthesis_dir, iteration, sample_rate)

@ -1,243 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import os
import sys
import argparse
import ruamel.yaml
import random
from tqdm import tqdm
import pickle
import numpy as np
from visualdl import LogWriter

import paddle.fluid.dygraph as dg
from paddle import fluid
fluid.require_version('1.8.0')

from parakeet.models.wavenet import WaveNet, UpsampleNet
from parakeet.models.clarinet import STFT, Clarinet, ParallelWaveNet
from parakeet.data import TransformDataset, SliceDataset, CacheDataset, RandomSampler, SequentialSampler, DataCargo
from parakeet.utils.layer_tools import summary, freeze
from parakeet.utils import io

from utils import make_output_tree, eval_model, load_wavenet

# import dataset from wavenet
sys.path.append("../wavenet")
from data import LJSpeechMetaData, Transform, DataCollector

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Train a ClariNet model with LJSpeech and a trained WaveNet model."
    )
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument("--device", type=int, default=-1, help="device to use")
    parser.add_argument("--data", type=str, help="path of LJSpeech dataset")

    g = parser.add_mutually_exclusive_group()
    g.add_argument("--checkpoint", type=str, help="checkpoint to resume from")
    g.add_argument(
        "--iteration",
        type=int,
        help="the iteration of the checkpoint to load from output directory")

    parser.add_argument(
        "--wavenet", type=str, help="wavenet checkpoint to use")

    parser.add_argument(
        "output",
        type=str,
        default="experiment",
        help="path to save experiment results")

    args = parser.parse_args()
    with open(args.config, 'rt') as f:
        config = ruamel.yaml.safe_load(f)

    if args.device == -1:
        place = fluid.CPUPlace()
    else:
        place = fluid.CUDAPlace(args.device)

    dg.enable_dygraph(place)

    print("Command Line args: ")
    for k, v in vars(args).items():
        print("{}: {}".format(k, v))

    ljspeech_meta = LJSpeechMetaData(args.data)

    data_config = config["data"]
    sample_rate = data_config["sample_rate"]
    n_fft = data_config["n_fft"]
    win_length = data_config["win_length"]
    hop_length = data_config["hop_length"]
    n_mels = data_config["n_mels"]
    train_clip_seconds = data_config["train_clip_seconds"]
    transform = Transform(sample_rate, n_fft, win_length, hop_length, n_mels)
    ljspeech = TransformDataset(ljspeech_meta, transform)

    valid_size = data_config["valid_size"]
    ljspeech_valid = CacheDataset(SliceDataset(ljspeech, 0, valid_size))
    ljspeech_train = CacheDataset(
        SliceDataset(ljspeech, valid_size, len(ljspeech)))

    teacher_config = config["teacher"]
    n_loop = teacher_config["n_loop"]
    n_layer = teacher_config["n_layer"]
    filter_size = teacher_config["filter_size"]
    context_size = 1 + n_layer * sum([filter_size**i for i in range(n_loop)])
    print("context size is {} samples".format(context_size))
    train_batch_fn = DataCollector(context_size, sample_rate, hop_length,
                                   train_clip_seconds)
    valid_batch_fn = DataCollector(
        context_size, sample_rate, hop_length, train_clip_seconds, valid=True)

    batch_size = data_config["batch_size"]
    train_cargo = DataCargo(
        ljspeech_train,
        train_batch_fn,
        batch_size,
        sampler=RandomSampler(ljspeech_train))

    # only batch=1 for validation is enabled
    valid_cargo = DataCargo(
        ljspeech_valid,
        valid_batch_fn,
        batch_size=1,
        sampler=SequentialSampler(ljspeech_valid))

    make_output_tree(args.output)

    # conditioner (upsampling net)
    conditioner_config = config["conditioner"]
    upsampling_factors = conditioner_config["upsampling_factors"]
    upsample_net = UpsampleNet(upscale_factors=upsampling_factors)
    freeze(upsample_net)

    residual_channels = teacher_config["residual_channels"]
    loss_type = teacher_config["loss_type"]
    output_dim = teacher_config["output_dim"]
    log_scale_min = teacher_config["log_scale_min"]
    assert loss_type == "mog" and output_dim == 3, \
        "the teacher wavenet should be a wavenet with single gaussian output"

    teacher = WaveNet(n_loop, n_layer, residual_channels, output_dim, n_mels,
                      filter_size, loss_type, log_scale_min)
    freeze(teacher)

    student_config = config["student"]
    n_loops = student_config["n_loops"]
    n_layers = student_config["n_layers"]
    student_residual_channels = student_config["residual_channels"]
    student_filter_size = student_config["filter_size"]
    student_log_scale_min = student_config["log_scale_min"]
    student = ParallelWaveNet(n_loops, n_layers, student_residual_channels,
                              n_mels, student_filter_size)

    stft_config = config["stft"]
    stft = STFT(
        n_fft=stft_config["n_fft"],
        hop_length=stft_config["hop_length"],
        win_length=stft_config["win_length"])

    lmd = config["loss"]["lmd"]
    model = Clarinet(upsample_net, teacher, student, stft,
                     student_log_scale_min, lmd)
    summary(model)

    # optim
    train_config = config["train"]
    learning_rate = train_config["learning_rate"]
    anneal_rate = train_config["anneal_rate"]
    anneal_interval = train_config["anneal_interval"]
    lr_scheduler = dg.ExponentialDecay(
        learning_rate, anneal_interval, anneal_rate, staircase=True)
    gradient_max_norm = train_config["gradient_max_norm"]
    optim = fluid.optimizer.Adam(
        lr_scheduler,
        parameter_list=model.parameters(),
        grad_clip=fluid.clip.ClipByGlobalNorm(gradient_max_norm))

    # train
    max_iterations = train_config["max_iterations"]
    checkpoint_interval = train_config["checkpoint_interval"]
    eval_interval = train_config["eval_interval"]
    checkpoint_dir = os.path.join(args.output, "checkpoints")
    state_dir = os.path.join(args.output, "states")
    log_dir = os.path.join(args.output, "log")
    writer = LogWriter(log_dir)

    if args.checkpoint is not None:
        iteration = io.load_parameters(
            model, optim, checkpoint_path=args.checkpoint)
    else:
        iteration = io.load_parameters(
            model,
            optim,
            checkpoint_dir=checkpoint_dir,
            iteration=args.iteration)

    if iteration == 0:
        assert args.wavenet is not None, "When training afresh, a trained wavenet model should be provided."
        load_wavenet(model, args.wavenet)

    # loader
    train_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True)
    train_loader.set_batch_generator(train_cargo, place)

    valid_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True)
    valid_loader.set_batch_generator(valid_cargo, place)

    # training loop
    global_step = iteration + 1
    iterator = iter(tqdm(train_loader))
    while global_step <= max_iterations:
        try:
            batch = next(iterator)
        except StopIteration:
            iterator = iter(tqdm(train_loader))
            batch = next(iterator)

        audios, mels, audio_starts = batch
        model.train()
        loss_dict = model(
            audios, mels, audio_starts, clip_kl=global_step > 500)

        writer.add_scalar("learning_rate",
                          optim._learning_rate.step().numpy()[0], global_step)
        for k, v in loss_dict.items():
            writer.add_scalar("loss/{}".format(k), v.numpy()[0], global_step)

        l = loss_dict["loss"]
        step_loss = l.numpy()[0]
        print("[train] global_step: {} loss: {:<8.6f}".format(global_step,
                                                              step_loss))

        l.backward()
        optim.minimize(l)
        optim.clear_gradients()

        if global_step % eval_interval == 0:
            # evaluate on valid dataset
            eval_model(model, valid_loader, state_dir, global_step,
                       sample_rate)
        if global_step % checkpoint_interval == 0:
            io.save_parameters(checkpoint_dir, global_step, model, optim)

        global_step += 1
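The receptive-field arithmetic in the training script above can be checked in isolation. This is a standalone sketch of the `context_size` expression; the `n_loop`/`n_layer`/`filter_size` values in the demo call are illustrative, not taken from a shipped config:

```python
# Standalone check of the context_size (receptive field) formula used above.
# The argument values in the demo call are illustrative, not from a shipped config.
def context_size(n_loop, n_layer, filter_size):
    # one current sample plus the dilated history visible to the stacked layers
    return 1 + n_layer * sum(filter_size**i for i in range(n_loop))

print(context_size(10, 3, 2))  # -> 3070
```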
@ -1,60 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import os
import soundfile as sf
from collections import OrderedDict

from paddle import fluid
import paddle.fluid.dygraph as dg


def make_output_tree(output_dir):
    checkpoint_dir = os.path.join(output_dir, "checkpoints")
    if not os.path.exists(checkpoint_dir):
        os.makedirs(checkpoint_dir)

    state_dir = os.path.join(output_dir, "states")
    if not os.path.exists(state_dir):
        os.makedirs(state_dir)


def eval_model(model, valid_loader, output_dir, iteration, sample_rate):
    model.eval()
    for i, batch in enumerate(valid_loader):
        path = os.path.join(output_dir,
                            "sentence_{}_step_{}.wav".format(i, iteration))
        audio_clips, mel_specs, audio_starts = batch
        wav_var = model.synthesis(mel_specs)
        wav_np = wav_var.numpy()[0]
        sf.write(path, wav_np, samplerate=sample_rate)
        print("generated {}".format(path))


def load_wavenet(model, path):
    wavenet_dict, _ = dg.load_dygraph(path)
    encoder_dict = OrderedDict()
    teacher_dict = OrderedDict()
    for k, v in wavenet_dict.items():
        if k.startswith("encoder."):
            encoder_dict[k.split('.', 1)[1]] = v
        else:
            # k starts with "decoder."
            teacher_dict[k.split('.', 1)[1]] = v

    model.encoder.set_dict(encoder_dict)
    model.teacher.set_dict(teacher_dict)
    print("loaded the encoder part and teacher part from wavenet model.")
@ -1,144 +0,0 @@
# Deep Voice 3

PaddlePaddle dynamic graph implementation of Deep Voice 3, a convolutional network based text-to-speech generative model. The implementation is based on [Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning](https://arxiv.org/abs/1710.07654).

We implement Deep Voice 3 using Paddle Fluid with dynamic graph, which is convenient for building flexible network architectures.

## Dataset

We experiment with the LJSpeech dataset. Download and unzip [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```

## Model Architecture

![Deep Voice 3 model architecture](./images/model_architecture.png)

The model consists of an encoder, a decoder and a converter (and a speaker embedding for multispeaker models). The encoder and the decoder together form the seq2seq part of the model, and the converter forms the postnet part.

## Project Structure

```text
├── config/
├── synthesize.py
├── data.py
├── preprocess.py
├── clip.py
├── train.py
└── vocoder.py
```

## Preprocess

Preprocess the dataset with `preprocess.py`.

```text
usage: preprocess.py [-h] --config CONFIG --input INPUT --output OUTPUT

preprocess ljspeech dataset and save it.

optional arguments:
  -h, --help       show this help message and exit
  --config CONFIG  config file
  --input INPUT    data path of the original data
  --output OUTPUT  path to save the preprocessed dataset
```

example code:

```bash
python preprocess.py --config=configs/ljspeech.yaml --input=LJSpeech-1.1/ --output=data/ljspeech
```

## Train

Train the model with `train.py`; follow the usage displayed by `python train.py --help`.

```text
usage: train.py [-h] --config CONFIG --input INPUT

train a Deep Voice 3 model with LJSpeech

optional arguments:
  -h, --help       show this help message and exit
  --config CONFIG  config file
  --input INPUT    data path of the original data
```

example code:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --config=configs/ljspeech.yaml --input=data/ljspeech
```

Training creates a `runs` folder; the outputs of each run are saved in a separate subfolder of `runs`, named by the start time joined with the hostname. Inside this folder, the tensorboard log, parameters and optimizer states are saved. Parameters (`*.pdparams`) and optimizer states (`*.pdopt`) are named by the step at which they are saved.

```text
runs/Jul07_09-39-34_instance-mqcyj27y-4/
├── checkpoint
├── events.out.tfevents.1594085974.instance-mqcyj27y-4
├── step-1000000.pdopt
├── step-1000000.pdparams
├── step-100000.pdopt
├── step-100000.pdparams
...
```

Since we use WaveFlow to synthesize audio while training, download the trained WaveFlow model and extract it in the current directory before training.

```bash
wget https://paddlespeech.bj.bcebos.com/Parakeet/waveflow_res128_ljspeech_ckpt_1.0.zip
unzip waveflow_res128_ljspeech_ckpt_1.0.zip
```

## Visualization

You can visualize training losses, check the attention and listen to the synthesized audio when training with teacher forcing.

example code:

```bash
tensorboard --logdir=runs/ --host=$HOSTNAME --port=8000
```

## Synthesis

```text
usage: synthesize from a checkpoint [-h] --config CONFIG --input INPUT
                                    --output OUTPUT --checkpoint CHECKPOINT
                                    --monotonic_layers MONOTONIC_LAYERS
                                    [--vocoder {griffin-lim,waveflow}]

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       config file
  --input INPUT         text file to synthesize
  --output OUTPUT       path to save audio
  --checkpoint CHECKPOINT
                        data path of the checkpoint
  --monotonic_layers MONOTONIC_LAYERS
                        monotonic decoder layers' indices (start from 1)
  --vocoder {griffin-lim,waveflow}
                        vocoder to use
```

`synthesize.py` is used to synthesize several sentences in a text file.
`--monotonic_layers` gives the indices of the decoder layers that manifest monotonic diagonal attention. You can find monotonic layers by inspecting the tensorboard logs; mind that the indices start from 1. Which layers manifest monotonic diagonal attention is stable for a given model across training and synthesis, but differs between runs, so once you have identified the monotonic layers from the tensorboard log, you can reuse them at synthesis time. Note that only decoder layers that show strong diagonal attention should be considered.
`--vocoder` is the vocoder to use. Currently supported values are "waveflow" and "griffin-lim". The default value is "waveflow".

example code:

```bash
CUDA_VISIBLE_DEVICES=2 python synthesize.py \
    --config configs/ljspeech.yaml \
    --input sentences.txt \
    --output outputs/ \
    --checkpoint runs/Jul07_09-39-34_instance-mqcyj27y-4/step-1320000 \
    --monotonic_layers "5,6" \
    --vocoder waveflow
```
@ -1,84 +0,0 @@
from __future__ import print_function

import copy
import six
import warnings

import functools
from paddle.fluid import layers
from paddle.fluid import framework
from paddle.fluid import core
from paddle.fluid import name_scope
from paddle.fluid.dygraph import base as imperative_base
from paddle.fluid.clip import GradientClipBase, _correct_clip_op_role_var


class DoubleClip(GradientClipBase):
    """Clip gradients by value first, then rescale them by global norm."""

    def __init__(self, clip_value, clip_norm, group_name="default_group", need_clip=None):
        super(DoubleClip, self).__init__(need_clip)
        self.clip_value = float(clip_value)
        self.clip_norm = float(clip_norm)
        self.group_name = group_name

    def __str__(self):
        return "Gradient Clip By Value and GlobalNorm, value={}, global_norm={}".format(
            self.clip_value, self.clip_norm)

    @imperative_base.no_grad
    def _dygraph_clip(self, params_grads):
        params_grads = self._dygraph_clip_by_value(params_grads)
        params_grads = self._dygraph_clip_by_global_norm(params_grads)
        return params_grads

    @imperative_base.no_grad
    def _dygraph_clip_by_value(self, params_grads):
        params_and_grads = []
        for p, g in params_grads:
            if g is None:
                continue
            if self._need_clip_func is not None and not self._need_clip_func(p):
                params_and_grads.append((p, g))
                continue
            new_grad = layers.clip(x=g, min=-self.clip_value, max=self.clip_value)
            params_and_grads.append((p, new_grad))
        return params_and_grads

    @imperative_base.no_grad
    def _dygraph_clip_by_global_norm(self, params_grads):
        params_and_grads = []
        sum_square_list = []
        for p, g in params_grads:
            if g is None:
                continue
            if self._need_clip_func is not None and not self._need_clip_func(p):
                continue
            merge_grad = g
            if g.type == core.VarDesc.VarType.SELECTED_ROWS:
                merge_grad = layers.merge_selected_rows(g)
                merge_grad = layers.get_tensor_from_selected_rows(merge_grad)
            square = layers.square(merge_grad)
            sum_square = layers.reduce_sum(square)
            sum_square_list.append(sum_square)

        # all parameters have been filtered out
        if len(sum_square_list) == 0:
            return params_grads

        global_norm_var = layers.concat(sum_square_list)
        global_norm_var = layers.reduce_sum(global_norm_var)
        global_norm_var = layers.sqrt(global_norm_var)
        max_global_norm = layers.fill_constant(
            shape=[1], dtype='float32', value=self.clip_norm)
        clip_var = layers.elementwise_div(
            x=max_global_norm,
            y=layers.elementwise_max(
                x=global_norm_var, y=max_global_norm))
        for p, g in params_grads:
            if g is None:
                continue
            if self._need_clip_func is not None and not self._need_clip_func(p):
                params_and_grads.append((p, g))
                continue
            new_grad = layers.elementwise_mul(x=g, y=clip_var)
            params_and_grads.append((p, new_grad))

        return params_and_grads
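The behaviour of `DoubleClip` above can be illustrated without Paddle. This is a framework-free NumPy sketch of the same two-stage rule (element-wise value clip, then joint rescaling to cap the global norm); it mirrors the logic, not the Paddle API:

```python
import numpy as np

def double_clip(grads, clip_value, clip_norm):
    # stage 1: clip each gradient element to [-clip_value, clip_value]
    grads = [np.clip(g, -clip_value, clip_value) for g in grads]
    # stage 2: rescale all gradients jointly so the global norm is at most clip_norm
    global_norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads]

grads = [np.array([3.0, -7.0]), np.array([10.0])]
clipped = double_clip(grads, clip_value=5.0, clip_norm=2.0)
print(np.sqrt(sum((g ** 2).sum() for g in clipped)))  # -> 2.0 (up to rounding)
```

Gradients with a global norm already below `clip_norm` are left unscaled, matching the `elementwise_max` guard in the Paddle version.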
@ -1,46 +0,0 @@
# data processing
p_pronunciation: 0.99
sample_rate: 22050 # Hz
n_fft: 1024
win_length: 1024
hop_length: 256
n_mels: 80
reduction_factor: 4

# model-s2s
n_speakers: 1
speaker_dim: 16
char_dim: 256
encoder_dim: 64
kernel_size: 5
encoder_layers: 7
decoder_layers: 8
prenet_sizes: [128]
attention_dim: 128

# model-postnet
postnet_layers: 5
postnet_dim: 256

# position embedding
position_weight: 1.0
position_rate: 5.54
forward_step: 4
backward_step: 0

dropout: 0.05

# output-griffinlim
sharpening_factor: 1.4

# optimizer
learning_rate: 0.001
clip_value: 5.0
clip_norm: 100.0

# training
max_iteration: 1000000
batch_size: 16
report_interval: 10000
save_interval: 10000
valid_size: 5
@ -1,108 +0,0 @@
import numpy as np
import os
import csv
import pandas as pd

import paddle
from paddle import fluid
from paddle.fluid import dygraph as dg
from paddle.fluid.dataloader import Dataset, BatchSampler
from paddle.fluid.io import DataLoader

from parakeet.data import DatasetMixin, DataCargo, PartialyRandomizedSimilarTimeLengthSampler
from parakeet.g2p import en


class LJSpeech(DatasetMixin):
    def __init__(self, root):
        self._root = root
        self._table = pd.read_csv(
            os.path.join(root, "metadata.csv"),
            sep="|",
            encoding="utf-8",
            quoting=csv.QUOTE_NONE,
            header=None,
            names=["num_frames", "spec_name", "mel_name", "text"],
            dtype={"num_frames": np.int64, "spec_name": str, "mel_name": str, "text": str})

    def num_frames(self):
        return self._table["num_frames"].to_list()

    def get_example(self, i):
        """
        spec (T_frame, C_spec)
        mel (T_frame, C_mel)
        """
        num_frames, spec_name, mel_name, text = self._table.iloc[i]
        spec = np.load(os.path.join(self._root, spec_name))
        mel = np.load(os.path.join(self._root, mel_name))
        return (text, spec, mel, num_frames)

    def __len__(self):
        return len(self._table)


class DataCollector(object):
    def __init__(self, p_pronunciation):
        self.p_pronunciation = p_pronunciation

    def __call__(self, examples):
        """
        output shape and dtype
        (B, T_text) int64
        (B,) int64
        (B, T_frame, C_spec) float32
        (B, T_frame, C_mel) float32
        (B,) int64
        """
        text_seqs = []
        specs = []
        mels = []
        num_frames = np.array([example[3] for example in examples], dtype=np.int64)
        max_frames = np.max(num_frames)

        for example in examples:
            text, spec, mel, _ = example
            text_seqs.append(en.text_to_sequence(text, self.p_pronunciation))
            specs.append(np.pad(spec, [(0, max_frames - spec.shape[0]), (0, 0)], mode="constant"))
            mels.append(np.pad(mel, [(0, max_frames - mel.shape[0]), (0, 0)], mode="constant"))

        specs = np.stack(specs)
        mels = np.stack(mels)

        text_lengths = np.array([len(seq) for seq in text_seqs], dtype=np.int64)
        max_length = np.max(text_lengths)
        text_seqs = np.array([seq + [0] * (max_length - len(seq)) for seq in text_seqs], dtype=np.int64)
        return text_seqs, text_lengths, specs, mels, num_frames


if __name__ == "__main__":
    import argparse
    import tqdm
    import time
    from ruamel import yaml

    parser = argparse.ArgumentParser(description="load the preprocessed ljspeech dataset")
    parser.add_argument("--config", type=str, required=True, help="config file")
    parser.add_argument("--input", type=str, required=True, help="data path of the original data")
    args = parser.parse_args()
    with open(args.config, 'rt') as f:
        config = yaml.safe_load(f)

    print("========= Command Line Arguments ========")
    for k, v in vars(args).items():
        print("{}: {}".format(k, v))
    print("=========== Configurations ==============")
    for k in ["p_pronunciation", "batch_size"]:
        print("{}: {}".format(k, config[k]))

    ljspeech = LJSpeech(args.input)
    collate_fn = DataCollector(config["p_pronunciation"])

    dg.enable_dygraph(fluid.CPUPlace())
    sampler = PartialyRandomizedSimilarTimeLengthSampler(ljspeech.num_frames())
    cargo = DataCargo(ljspeech, collate_fn,
                      batch_size=config["batch_size"], sampler=sampler)
    loader = DataLoader \
        .from_generator(capacity=5, return_list=True) \
        .set_batch_generator(cargo)

    for i, batch in tqdm.tqdm(enumerate(loader)):
        continue
Binary file not shown.
@ -1,122 +0,0 @@
from __future__ import division
import os
import argparse
from ruamel import yaml
import tqdm
from os.path import join
import csv
import numpy as np
import pandas as pd
import librosa
import logging

from parakeet.data import DatasetMixin


class LJSpeechMetaData(DatasetMixin):
    def __init__(self, root):
        self.root = root
        self._wav_dir = join(root, "wavs")
        csv_path = join(root, "metadata.csv")
        self._table = pd.read_csv(
            csv_path,
            sep="|",
            encoding="utf-8",
            header=None,
            quoting=csv.QUOTE_NONE,
            names=["fname", "raw_text", "normalized_text"])

    def get_example(self, i):
        fname, raw_text, normalized_text = self._table.iloc[i]
        abs_fname = join(self._wav_dir, fname + ".wav")
        return fname, abs_fname, raw_text, normalized_text

    def __len__(self):
        return len(self._table)


class Transform(object):
    def __init__(self, sample_rate, n_fft, hop_length, win_length, n_mels, reduction_factor):
        self.sample_rate = sample_rate
        self.n_fft = n_fft
        self.win_length = win_length
        self.hop_length = hop_length
        self.n_mels = n_mels
        self.reduction_factor = reduction_factor

    def __call__(self, fname):
        # wave processing
        audio, _ = librosa.load(fname, sr=self.sample_rate)

        # Pad the data to the right size to have a whole number of timesteps,
        # accounting properly for the model reduction factor.
        frames = audio.size // (self.reduction_factor * self.hop_length) + 1
        # librosa's stft extracts frames of n_fft size, so we should pad n_fft // 2 on both sides
        desired_length = (frames * self.reduction_factor - 1) * self.hop_length + self.n_fft
        pad_amount = (desired_length - audio.size) // 2

        # we pad manually to control the number of generated frames
        if audio.size % 2 == 0:
            audio = np.pad(audio, (pad_amount, pad_amount), mode='reflect')
        else:
            audio = np.pad(audio, (pad_amount, pad_amount + 1), mode='reflect')

        # STFT
        D = librosa.stft(audio, self.n_fft, self.hop_length, self.win_length, center=False)
        S = np.abs(D)
        S_mel = librosa.feature.melspectrogram(sr=self.sample_rate, S=S, n_mels=self.n_mels, fmax=8000.0)

        # log magnitude
        log_spectrogram = np.log(np.clip(S, a_min=1e-5, a_max=None))
        log_mel_spectrogram = np.log(np.clip(S_mel, a_min=1e-5, a_max=None))
        num_frames = log_spectrogram.shape[-1]
        assert num_frames % self.reduction_factor == 0, "num_frames is wrong"
        return (log_spectrogram.T, log_mel_spectrogram.T, num_frames)


def save(output_path, dataset, transform):
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    records = []
    for example in tqdm.tqdm(dataset):
        fname, abs_fname, _, normalized_text = example
        log_spec, log_mel_spec, num_frames = transform(abs_fname)
        records.append((num_frames,
                        fname + "_spec.npy",
                        fname + "_mel.npy",
                        normalized_text))
        np.save(join(output_path, fname + "_spec"), log_spec)
        np.save(join(output_path, fname + "_mel"), log_mel_spec)
    meta_data = pd.DataFrame.from_records(records)
    meta_data.to_csv(join(output_path, "metadata.csv"),
                     quoting=csv.QUOTE_NONE, sep="|", encoding="utf-8",
                     header=False, index=False)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="preprocess ljspeech dataset and save it.")
    parser.add_argument("--config", type=str, required=True, help="config file")
    parser.add_argument("--input", type=str, required=True, help="data path of the original data")
    parser.add_argument("--output", type=str, required=True, help="path to save the preprocessed dataset")

    args = parser.parse_args()
    with open(args.config, 'rt') as f:
        config = yaml.safe_load(f)

    print("========= Command Line Arguments ========")
    for k, v in vars(args).items():
        print("{}: {}".format(k, v))
    print("=========== Configurations ==============")
    for k in ["sample_rate", "n_fft", "win_length",
              "hop_length", "n_mels", "reduction_factor"]:
        print("{}: {}".format(k, config[k]))

    ljspeech_meta = LJSpeechMetaData(args.input)
    transform = Transform(config["sample_rate"],
                          config["n_fft"],
                          config["hop_length"],
                          config["win_length"],
                          config["n_mels"],
                          config["reduction_factor"])
    save(args.output, ljspeech_meta, transform)
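The padding arithmetic in `Transform.__call__` above can be verified standalone: with `center=False`, an STFT over the padded signal yields a frame count that is a whole multiple of `reduction_factor`. The helper below re-derives the frame count from lengths alone; the default sizes mirror config values seen elsewhere in this repo, and the test lengths are arbitrary:

```python
# Re-derives the frame count produced by the padding arithmetic in
# Transform.__call__, using only signal lengths (no audio needed).
# Defaults mirror config values used elsewhere in this repo.
def padded_frames(audio_size, reduction_factor=4, hop_length=256, n_fft=1024):
    frames = audio_size // (reduction_factor * hop_length) + 1
    desired_length = (frames * reduction_factor - 1) * hop_length + n_fft
    pad_amount = (desired_length - audio_size) // 2
    if audio_size % 2 == 0:
        padded = audio_size + 2 * pad_amount
    else:
        padded = audio_size + 2 * pad_amount + 1
    # frame count of an STFT with center=False on the padded signal
    return (padded - n_fft) // hop_length + 1

# every input length yields a whole number of reduction groups
assert all(padded_frames(n) % 4 == 0 for n in (22050, 22051, 100001))
```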
@@ -1,101 +0,0 @@

import numpy as np
from matplotlib import cm
import librosa
import os
import time
import tqdm
import argparse
from ruamel import yaml
import paddle
from paddle import fluid
from paddle.fluid import layers as F
from paddle.fluid import dygraph as dg
from paddle.fluid.io import DataLoader
import soundfile as sf

from parakeet.data import SliceDataset, DataCargo, PartialyRandomizedSimilarTimeLengthSampler, SequentialSampler
from parakeet.utils.io import save_parameters, load_parameters, add_yaml_config_to_args
from parakeet.g2p import en
from parakeet.models.deepvoice3.weight_norm_hook import remove_weight_norm
from vocoder import WaveflowVocoder, GriffinLimVocoder
from train import create_model


def main(args, config):
    model = create_model(config)
    loaded_step = load_parameters(model, checkpoint_path=args.checkpoint)
    for name, layer in model.named_sublayers():
        try:
            remove_weight_norm(layer)
        except ValueError:
            # this layer has no weight norm hook
            pass
    model.eval()
    if args.vocoder == "waveflow":
        vocoder = WaveflowVocoder()
        vocoder.model.eval()
    elif args.vocoder == "griffin-lim":
        vocoder = GriffinLimVocoder(
            sharpening_factor=config["sharpening_factor"],
            sample_rate=config["sample_rate"],
            n_fft=config["n_fft"],
            win_length=config["win_length"],
            hop_length=config["hop_length"])
    else:
        raise ValueError("Other vocoders are not supported.")

    if not os.path.exists(args.output):
        os.makedirs(args.output)
    monotonic_layers = [int(item.strip()) - 1 for item in args.monotonic_layers.split(',')]
    with open(args.input, 'rt') as f:
        sentences = [line.strip() for line in f.readlines()]
    for i, sentence in enumerate(sentences):
        wav = synthesize(args, config, model, vocoder, sentence, monotonic_layers)
        sf.write(os.path.join(args.output, "sentence{}.wav".format(i)),
                 wav, samplerate=config["sample_rate"])


def synthesize(args, config, model, vocoder, sentence, monotonic_layers):
    print("[synthesize] {}".format(sentence))
    text = en.text_to_sequence(sentence, p=1.0)
    text = np.expand_dims(np.array(text, dtype="int64"), 0)
    lengths = np.array([text.size], dtype=np.int64)
    text_seqs = dg.to_variable(text)
    text_lengths = dg.to_variable(lengths)

    decoder_layers = config["decoder_layers"]
    force_monotonic_attention = [False] * decoder_layers
    for i in monotonic_layers:
        force_monotonic_attention[i] = True

    with dg.no_grad():
        outputs = model(text_seqs, text_lengths, speakers=None,
                        force_monotonic_attention=force_monotonic_attention,
                        window=(config["backward_step"], config["forward_step"]))
        decoded, refined, attentions = outputs
        if args.vocoder == "griffin-lim":
            wav_np = vocoder(refined.numpy()[0].T)
        else:
            wav = vocoder(F.transpose(refined, (0, 2, 1)))
            wav_np = wav.numpy()[0]
    return wav_np


if __name__ == "__main__":
    parser = argparse.ArgumentParser("synthesize from a checkpoint")
    parser.add_argument("--config", type=str, required=True, help="config file")
    parser.add_argument("--input", type=str, required=True, help="text file to synthesize")
    parser.add_argument("--output", type=str, required=True, help="path to save audio")
    parser.add_argument("--checkpoint", type=str, required=True, help="data path of the checkpoint")
    parser.add_argument("--monotonic_layers", type=str, required=True, help="monotonic decoder layers' indices (start from 1)")
    parser.add_argument("--vocoder", type=str, default="waveflow", choices=['griffin-lim', 'waveflow'], help="vocoder to use")
    args = parser.parse_args()
    with open(args.config, 'rt') as f:
        config = yaml.safe_load(f)

    dg.enable_dygraph(fluid.CUDAPlace(0))
    main(args, config)
@@ -1,187 +0,0 @@

import numpy as np
from matplotlib import cm
import librosa
import os
import time
import tqdm
import paddle
from paddle import fluid
from paddle.fluid import layers as F
from paddle.fluid import initializer as I
from paddle.fluid import dygraph as dg
from paddle.fluid.io import DataLoader
from visualdl import LogWriter

from parakeet.models.deepvoice3 import Encoder, Decoder, PostNet, SpectraNet
from parakeet.data import SliceDataset, DataCargo, SequentialSampler, RandomSampler
from parakeet.utils.io import save_parameters, load_parameters
from parakeet.g2p import en

from data import LJSpeech, DataCollector
from vocoder import WaveflowVocoder, GriffinLimVocoder
from clip import DoubleClip


def create_model(config):
    char_embedding = dg.Embedding((en.n_vocab, config["char_dim"]), param_attr=I.Normal(scale=0.1))
    multi_speaker = config["n_speakers"] > 1
    speaker_embedding = dg.Embedding((config["n_speakers"], config["speaker_dim"]), param_attr=I.Normal(scale=0.1)) \
        if multi_speaker else None
    encoder = Encoder(config["encoder_layers"], config["char_dim"],
                      config["encoder_dim"], config["kernel_size"],
                      has_bias=multi_speaker, bias_dim=config["speaker_dim"],
                      keep_prob=1.0 - config["dropout"])
    decoder = Decoder(config["n_mels"], config["reduction_factor"],
                      list(config["prenet_sizes"]) + [config["char_dim"]],
                      config["decoder_layers"], config["kernel_size"],
                      config["attention_dim"],
                      position_encoding_weight=config["position_weight"],
                      omega=config["position_rate"],
                      has_bias=multi_speaker, bias_dim=config["speaker_dim"],
                      keep_prob=1.0 - config["dropout"])
    postnet = PostNet(config["postnet_layers"], config["char_dim"],
                      config["postnet_dim"], config["kernel_size"],
                      config["n_mels"], config["reduction_factor"],
                      has_bias=multi_speaker, bias_dim=config["speaker_dim"],
                      keep_prob=1.0 - config["dropout"])
    spectranet = SpectraNet(char_embedding, speaker_embedding, encoder, decoder, postnet)
    return spectranet


def create_data(config, data_path):
    dataset = LJSpeech(data_path)

    train_dataset = SliceDataset(dataset, config["valid_size"], len(dataset))
    train_collator = DataCollector(config["p_pronunciation"])
    train_sampler = RandomSampler(train_dataset)
    train_cargo = DataCargo(train_dataset, train_collator,
                            batch_size=config["batch_size"], sampler=train_sampler)
    train_loader = DataLoader\
        .from_generator(capacity=10, return_list=True)\
        .set_batch_generator(train_cargo)

    valid_dataset = SliceDataset(dataset, 0, config["valid_size"])
    valid_collector = DataCollector(1.)
    valid_sampler = SequentialSampler(valid_dataset)
    valid_cargo = DataCargo(valid_dataset, valid_collector,
                            batch_size=1, sampler=valid_sampler)
    valid_loader = DataLoader\
        .from_generator(capacity=2, return_list=True)\
        .set_batch_generator(valid_cargo)
    return train_loader, valid_loader


def create_optimizer(model, config):
    optim = fluid.optimizer.Adam(config["learning_rate"],
                                 parameter_list=model.parameters(),
                                 grad_clip=DoubleClip(config["clip_value"], config["clip_norm"]))
    return optim


def train(args, config):
    model = create_model(config)
    train_loader, valid_loader = create_data(config, args.input)
    optim = create_optimizer(model, config)

    global global_step
    max_iteration = config["max_iteration"]

    iterator = iter(tqdm.tqdm(train_loader))
    while global_step <= max_iteration:
        # get inputs
        try:
            batch = next(iterator)
        except StopIteration:
            iterator = iter(tqdm.tqdm(train_loader))
            batch = next(iterator)

        # unpack it
        text_seqs, text_lengths, specs, mels, num_frames = batch

        # forward & backward
        model.train()
        outputs = model(text_seqs, text_lengths, speakers=None, mel=mels)
        decoded, refined, attentions, final_state = outputs

        causal_mel_loss = model.spec_loss(decoded, mels, num_frames)
        non_causal_mel_loss = model.spec_loss(refined, mels, num_frames)
        loss = causal_mel_loss + non_causal_mel_loss
        loss.backward()

        # update
        optim.minimize(loss)

        # logging
        tqdm.tqdm.write("[train] step: {}\tloss: {:.6f}\tcausal: {:.6f}\tnon_causal: {:.6f}".format(
            global_step,
            loss.numpy()[0],
            causal_mel_loss.numpy()[0],
            non_causal_mel_loss.numpy()[0]))
        writer.add_scalar("loss/causal_mel_loss", causal_mel_loss.numpy()[0], step=global_step)
        writer.add_scalar("loss/non_causal_mel_loss", non_causal_mel_loss.numpy()[0], step=global_step)
        writer.add_scalar("loss/loss", loss.numpy()[0], step=global_step)

        if global_step % config["report_interval"] == 0:
            text_length = int(text_lengths.numpy()[0])
            num_frame = int(num_frames.numpy()[0])

            tag = "train_mel/ground-truth"
            img = cm.viridis(normalize(mels.numpy()[0, :num_frame].T))
            writer.add_image(tag, img, step=global_step)

            tag = "train_mel/decoded"
            img = cm.viridis(normalize(decoded.numpy()[0, :num_frame].T))
            writer.add_image(tag, img, step=global_step)

            tag = "train_mel/refined"
            img = cm.viridis(normalize(refined.numpy()[0, :num_frame].T))
            writer.add_image(tag, img, step=global_step)

            vocoder = WaveflowVocoder()
            vocoder.model.eval()

            tag = "train_audio/ground-truth-waveflow"
            wav = vocoder(F.transpose(mels[0:1, :num_frame, :], (0, 2, 1)))
            writer.add_audio(tag, wav.numpy()[0], step=global_step, sample_rate=22050)

            tag = "train_audio/decoded-waveflow"
            wav = vocoder(F.transpose(decoded[0:1, :num_frame, :], (0, 2, 1)))
            writer.add_audio(tag, wav.numpy()[0], step=global_step, sample_rate=22050)

            tag = "train_audio/refined-waveflow"
            wav = vocoder(F.transpose(refined[0:1, :num_frame, :], (0, 2, 1)))
            writer.add_audio(tag, wav.numpy()[0], step=global_step, sample_rate=22050)

            attentions_np = attentions.numpy()
            attentions_np = attentions_np[:, 0, :num_frame // 4, :text_length]
            for i, attention_layer in enumerate(np.rot90(attentions_np, axes=(1, 2))):
                tag = "train_attention/layer_{}".format(i)
                img = cm.viridis(normalize(attention_layer))
                writer.add_image(tag, img, step=global_step, dataformats="HWC")

        if global_step % config["save_interval"] == 0:
            save_parameters(writer.logdir, global_step, model, optim)

        # advance the global step
        global_step += 1


def normalize(arr):
    return (arr - arr.min()) / (arr.max() - arr.min())


if __name__ == "__main__":
    import argparse
    from ruamel import yaml

    parser = argparse.ArgumentParser(description="train a Deep Voice 3 model with LJSpeech")
    parser.add_argument("--config", type=str, required=True, help="config file")
    parser.add_argument("--input", type=str, required=True, help="data path of the original data")

    args = parser.parse_args()
    with open(args.config, 'rt') as f:
        config = yaml.safe_load(f)

    dg.enable_dygraph(fluid.CUDAPlace(0))
    global_step = 1
    writer = LogWriter()
    print("[Training] logs and checkpoints are saved in {}".format(
        writer.logdir))
    train(args, config)
@@ -1,51 +0,0 @@

import argparse
from ruamel import yaml
import numpy as np
import librosa
import paddle
from paddle import fluid
from paddle.fluid import layers as F
from paddle.fluid import dygraph as dg
from parakeet.utils.io import load_parameters
from parakeet.models.waveflow.waveflow_modules import WaveFlowModule


class WaveflowVocoder(object):
    def __init__(self):
        config_path = "waveflow_res128_ljspeech_ckpt_1.0/waveflow_ljspeech.yaml"
        with open(config_path, 'rt') as f:
            config = yaml.safe_load(f)
        ns = argparse.Namespace()
        for k, v in config.items():
            setattr(ns, k, v)
        ns.use_fp16 = False

        self.model = WaveFlowModule(ns)
        checkpoint_path = "waveflow_res128_ljspeech_ckpt_1.0/step-2000000"
        load_parameters(self.model, checkpoint_path=checkpoint_path)

    def __call__(self, mel):
        with dg.no_grad():
            self.model.eval()
            audio = self.model.synthesize(mel)
        self.model.train()
        return audio


class GriffinLimVocoder(object):
    def __init__(self, sharpening_factor=1.4, sample_rate=22050, n_fft=1024,
                 win_length=1024, hop_length=256):
        self.sample_rate = sample_rate
        self.n_fft = n_fft
        self.sharpening_factor = sharpening_factor
        self.win_length = win_length
        self.hop_length = hop_length

    def __call__(self, mel):
        spec = librosa.feature.inverse.mel_to_stft(
            np.exp(mel),
            sr=self.sample_rate,
            n_fft=self.n_fft,
            fmin=0, fmax=8000.0, power=1.0)
        audio = librosa.core.griffinlim(spec ** self.sharpening_factor,
                                        win_length=self.win_length, hop_length=self.hop_length)
        return audio
@@ -1,144 +0,0 @@

# FastSpeech

PaddlePaddle dynamic graph implementation of FastSpeech, a feed-forward network based on Transformer. The implementation is based on [FastSpeech: Fast, Robust and Controllable Text to Speech](https://arxiv.org/abs/1905.09263).

## Dataset

We experiment with the LJSpeech dataset. Download and unzip [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```

## Model Architecture

![FastSpeech model architecture](./images/model_architecture.png)

FastSpeech is a feed-forward structure based on Transformer, instead of the encoder-attention-decoder based architecture. It extracts attention alignments from an encoder-decoder based teacher model for phoneme duration prediction; a length regulator then expands the source phoneme sequence to match the length of the target mel-spectrogram sequence, which enables parallel mel-spectrogram generation. We use Transformer TTS as the teacher model. The model consists of three parts: an encoder, a decoder and a length regulator.
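The length regulator mentioned above can be sketched in a few lines (a minimal NumPy illustration, not the code from this repository): each phoneme's hidden vector is repeated by its predicted duration, so the expanded sequence has the length of the target mel spectrogram.

```python
import numpy as np

def length_regulate(phoneme_hiddens, durations):
    """Expand phoneme hidden states to frame level.

    phoneme_hiddens: (T_text, hidden_size) array of encoder outputs.
    durations: (T_text,) integer array, number of frames per phoneme.
    Returns a (sum(durations), hidden_size) array.
    """
    return np.repeat(phoneme_hiddens, durations, axis=0)

hiddens = np.arange(6, dtype=np.float32).reshape(3, 2)  # 3 phonemes, hidden size 2
durations = np.array([2, 1, 3])                         # frames per phoneme
expanded = length_regulate(hiddens, durations)
print(expanded.shape)  # (6, 2)
```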

## Project Structure

```text
├── config                 # yaml configuration files
├── synthesis.py           # script to synthesize waveform from text
├── train.py               # script for model training
```

## Saving & Loading

`train.py` and `synthesis.py` have 3 arguments in common: `--checkpoint`, `--iteration` and `--output`.

1. `--output` is the directory for saving results.
During training, checkpoints are saved in `${output}/checkpoints` and tensorboard logs are saved in `${output}/log`.
During synthesis, results are saved in `${output}/samples` and tensorboard logs are saved in `${output}/log`.

2. `--checkpoint` is the path of a checkpoint and `--iteration` is the target step. They are used to load checkpoints in the following way.

    - If `--checkpoint` is provided, the checkpoint specified by `--checkpoint` is loaded.

    - If `--checkpoint` is not provided, we try to load the checkpoint of the target step specified by `--iteration` from the `${output}/checkpoints/` directory, e.g. if given `--iteration 120000`, the checkpoint `${output}/checkpoints/step-120000.*` will be loaded.

    - If neither `--checkpoint` nor `--iteration` is provided, we try to load the latest checkpoint from the `${output}/checkpoints/` directory.
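This precedence can be sketched as a small resolution helper (an illustrative sketch only, not the repository's loader; the `step-*` file naming is taken from the example above):

```python
import os
import re

def resolve_checkpoint(output_dir, checkpoint=None, iteration=None):
    """Pick a checkpoint path following the precedence described above."""
    if checkpoint is not None:          # an explicit path always wins
        return checkpoint
    ckpt_dir = os.path.join(output_dir, "checkpoints")
    if iteration is not None:           # a target step was requested
        return os.path.join(ckpt_dir, "step-{}".format(iteration))
    # otherwise: the latest step found in the directory, if any
    steps = [int(m.group(1)) for name in os.listdir(ckpt_dir)
             for m in [re.match(r"step-(\d+)", name)] if m]
    return os.path.join(ckpt_dir, "step-{}".format(max(steps))) if steps else None
```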

## Compute Phoneme Duration

A ground truth duration for each phoneme (the number of frames in the spectrogram that correspond to that phoneme) should be provided when training a FastSpeech model.

We compute the ground truth duration of each phoneme in the following way: we extract the encoder-decoder attention alignment from a trained Transformer TTS model, and consider each frame to correspond to the phoneme that receives the most attention.
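As an illustration, this frame-to-phoneme assignment amounts to an argmax over the attention matrix followed by a per-phoneme count (a minimal NumPy sketch assuming a single merged attention matrix; the repository's `get_alignment` utility additionally handles multiple attention heads):

```python
import numpy as np

def durations_from_attention(attn, n_phonemes):
    """attn: (T_mel, T_text) attention weights from a teacher model.
    Returns an (n_phonemes,) array of frame counts per phoneme."""
    # each frame is assigned to the phoneme it attends to the most
    frame_to_phoneme = attn.argmax(axis=1)
    return np.bincount(frame_to_phoneme, minlength=n_phonemes)

# toy attention matrix: 5 frames over 3 phonemes
attn = np.array([[0.9, 0.1, 0.0],
                 [0.8, 0.2, 0.0],
                 [0.1, 0.8, 0.1],
                 [0.0, 0.2, 0.8],
                 [0.0, 0.1, 0.9]])
print(durations_from_attention(attn, 3))  # [2 1 2]
```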

You can run `alignments/get_alignments.py` to get it.

```bash
cd alignments
python get_alignments.py \
--use_gpu=1 \
--output='./alignments' \
--data=${DATAPATH} \
--config=${CONFIG} \
--checkpoint_transformer=${CHECKPOINT} \
```

where `${DATAPATH}` is the path of the saved LJSpeech data, `${CHECKPOINT}` is the path of a pretrained Transformer TTS model, and `${CONFIG}` is the config yaml file of the Transformer TTS checkpoint. You need to prepare a pretrained Transformer TTS checkpoint in advance.

For more help on arguments, run ``python get_alignments.py --help``.

Or you can use your own phoneme durations; you just need to process the data into the following format.

```python
{'fname1': alignment1,
 'fname2': alignment2,
 ...}
```
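The alignments file in that format is a pickled dict mapping utterance ids to duration arrays; a minimal sketch of producing and reading one (file name and array values are illustrative only):

```python
import pickle
import numpy as np

# map each utterance id to its per-phoneme durations (illustrative values)
alignments = {
    "LJ001-0001": np.array([3, 5, 2], dtype=np.int64),
    "LJ001-0002": np.array([4, 4], dtype=np.int64),
}
with open("alignments.pkl", "wb") as f:
    pickle.dump(alignments, f)

with open("alignments.pkl", "rb") as f:
    loaded = pickle.load(f)
print(sorted(loaded.keys()))
```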

## Train FastSpeech

A FastSpeech model can be trained by running ``train.py``.

```bash
python train.py \
--use_gpu=1 \
--data=${DATAPATH} \
--alignments_path=${ALIGNMENTS_PATH} \
--output=${OUTPUTPATH} \
--config='configs/ljspeech.yaml' \
```

Or you can run the script file directly.

```bash
sh train.sh
```

If you want to train on multiple GPUs, start training in the following way.

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch --selected_gpus=0,1,2,3 --log_dir ./mylog train.py \
--use_gpu=1 \
--data=${DATAPATH} \
--alignments_path=${ALIGNMENTS_PATH} \
--output=${OUTPUTPATH} \
--config='configs/ljspeech.yaml' \
```

If you wish to resume from an existing model, see [Saving & Loading](#saving--loading) for details of checkpoint loading.

For more help on arguments, run ``python train.py --help``.

## Synthesis

After training FastSpeech, audio can be synthesized by running ``synthesis.py``.

```bash
python synthesis.py \
--use_gpu=1 \
--alpha=1.0 \
--checkpoint=${CHECKPOINTPATH} \
--config='configs/ljspeech.yaml' \
--output=${OUTPUTPATH} \
--vocoder='griffin-lim' \
```

We currently support two vocoders, the Griffin-Lim algorithm and WaveFlow. You can set ``--vocoder`` to use one of them. If you want to use WaveFlow as your vocoder, you need to set ``--config_vocoder`` and ``--checkpoint_vocoder``, which are the paths of the config and the checkpoint of the vocoder. You can download a pretrained WaveFlow model from [here](https://github.com/PaddlePaddle/Parakeet#vocoders).

Or you can run the script file directly.

```bash
sh synthesis.sh
```

For more help on arguments, run ``python synthesis.py --help``.

Then you can find the synthesized audio files in ``${OUTPUTPATH}/samples``.
@@ -1,132 +0,0 @@

# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import librosa
from scipy.io.wavfile import write
from parakeet.g2p.en import text_to_sequence
import numpy as np
import pandas as pd
import csv
from tqdm import tqdm
from ruamel import yaml
import pickle
from pathlib import Path
import argparse
from pprint import pprint
from collections import OrderedDict
import paddle.fluid as fluid
import paddle.fluid.dygraph as dg
from parakeet.models.transformer_tts.utils import *
from parakeet.models.transformer_tts import TransformerTTS
from parakeet.models.fastspeech.utils import get_alignment
from parakeet.utils import io


def add_config_options_to_parser(parser):
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument("--use_gpu", type=int, default=0, help="device to use")
    parser.add_argument("--data", type=str, help="path of LJSpeech dataset")

    parser.add_argument(
        "--checkpoint_transformer",
        type=str,
        help="transformer_tts checkpoint for synthesis")

    parser.add_argument(
        "--output",
        type=str,
        default="./alignments",
        help="path to save experiment results")


def alignments(args):
    local_rank = dg.parallel.Env().local_rank
    place = (fluid.CUDAPlace(local_rank) if args.use_gpu else fluid.CPUPlace())

    with open(args.config) as f:
        cfg = yaml.load(f, Loader=yaml.Loader)

    with dg.guard(place):
        network_cfg = cfg['network']
        model = TransformerTTS(
            network_cfg['embedding_size'], network_cfg['hidden_size'],
            network_cfg['encoder_num_head'], network_cfg['encoder_n_layers'],
            cfg['audio']['num_mels'], network_cfg['outputs_per_step'],
            network_cfg['decoder_num_head'], network_cfg['decoder_n_layers'])
        # Load parameters.
        global_step = io.load_parameters(
            model=model, checkpoint_path=args.checkpoint_transformer)
        model.eval()

        # get text data
        root = Path(args.data)
        csv_path = root.joinpath("metadata.csv")
        table = pd.read_csv(
            csv_path,
            sep="|",
            header=None,
            quoting=csv.QUOTE_NONE,
            names=["fname", "raw_text", "normalized_text"])

        pbar = tqdm(range(len(table)))
        alignments = OrderedDict()
        for i in pbar:
            fname, raw_text, normalized_text = table.iloc[i]
            # init input
            text = np.asarray(text_to_sequence(normalized_text))
            text = fluid.layers.unsqueeze(dg.to_variable(text), [0])
            pos_text = np.arange(1, text.shape[1] + 1)
            pos_text = fluid.layers.unsqueeze(dg.to_variable(pos_text), [0])

            # load the audio
            wav, _ = librosa.load(
                str(os.path.join(args.data, 'wavs', fname + ".wav")))

            spec = librosa.stft(
                y=wav,
                n_fft=cfg['audio']['n_fft'],
                win_length=cfg['audio']['win_length'],
                hop_length=cfg['audio']['hop_length'])
            mag = np.abs(spec)
            mel = librosa.filters.mel(sr=cfg['audio']['sr'],
                                      n_fft=cfg['audio']['n_fft'],
                                      n_mels=cfg['audio']['num_mels'],
                                      fmin=cfg['audio']['fmin'],
                                      fmax=cfg['audio']['fmax'])
            mel = np.matmul(mel, mag)
            mel = np.log(np.maximum(mel, 1e-5))

            mel_input = np.transpose(mel, axes=(1, 0))
            mel_input = fluid.layers.unsqueeze(dg.to_variable(mel_input), [0])
            mel_lens = mel_input.shape[1]

            pos_mel = np.arange(1, mel_input.shape[1] + 1)
            pos_mel = fluid.layers.unsqueeze(dg.to_variable(pos_mel), [0])
            mel_pred, postnet_pred, attn_probs, stop_preds, attn_enc, attn_dec = model(
                text, mel_input, pos_text, pos_mel)
            mel_input = fluid.layers.concat(
                [mel_input, postnet_pred[:, -1:, :]], axis=1)

            alignment, _ = get_alignment(attn_probs, mel_lens,
                                         network_cfg['decoder_num_head'])
            alignments[fname] = alignment
        with open(args.output + '.pkl', "wb") as f:
            pickle.dump(alignments, f)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description="Get alignments from TransformerTTS model")
    add_config_options_to_parser(parser)
    args = parser.parse_args()
    alignments(args)
@@ -1,14 +0,0 @@

CUDA_VISIBLE_DEVICES=0 \
python -u get_alignments.py \
--use_gpu=1 \
--output='./alignments' \
--data='../../../dataset/LJSpeech-1.1' \
--config='../../transformer_tts/configs/ljspeech.yaml' \
--checkpoint_transformer='../../transformer_tts/checkpoint/transformer/step-120000'

if [ $? -ne 0 ]; then
    echo "Failed in getting alignments!"
    exit 1
fi
exit 0
@@ -1,36 +0,0 @@

audio:
  num_mels: 80                         # the number of mel bands when calculating mel spectrograms.
  n_fft: 1024                          # the number of fft components.
  sr: 22050                            # the sampling rate of the audio data files.
  hop_length: 256                      # the number of samples to advance between frames.
  win_length: 1024                     # the length (width) of the window function.
  preemphasis: 0.97
  power: 1.2                           # the power to raise before griffin-lim.
  fmin: 0
  fmax: 8000

network:
  encoder_n_layer: 6                   # the number of FFT blocks in the encoder.
  encoder_head: 2                      # the number of attention heads in the encoder.
  encoder_conv1d_filter_size: 1536     # the filter size of conv1d in the encoder.
  max_seq_len: 2048                    # the max length of a sequence.
  decoder_n_layer: 6                   # the number of FFT blocks in the decoder.
  decoder_head: 2                      # the number of attention heads in the decoder.
  decoder_conv1d_filter_size: 1536     # the filter size of conv1d in the decoder.
  hidden_size: 384                     # the hidden size of the fastspeech model.
  duration_predictor_output_size: 256  # the output size of the duration predictor.
  duration_predictor_filter_size: 3    # the filter size of conv1d in the duration predictor.
  fft_conv1d_filter: 3                 # the filter size of conv1d in the FFT blocks.
  fft_conv1d_padding: 1                # the padding size of conv1d in the FFT blocks.
  dropout: 0.1                         # the dropout rate in the network.
  outputs_per_step: 1

train:
  batch_size: 32
  learning_rate: 0.001
  warm_up_step: 4000                   # the warm-up step of the learning rate.
  grad_clip_thresh: 0.1                # the threshold of gradient clipping.

  checkpoint_interval: 1000
  max_iteration: 500000
@ -1,186 +0,0 @@
|
|||
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
from pathlib import Path
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
import librosa
|
||||
import csv
|
||||
import pickle
|
||||
|
||||
from paddle import fluid
|
||||
from parakeet import g2p
|
||||
from parakeet import audio
|
||||
from parakeet.data.sampler import *
|
||||
from parakeet.data.datacargo import DataCargo
|
||||
from parakeet.data.batch import TextIDBatcher, SpecBatcher
|
||||
from parakeet.data.dataset import DatasetMixin, TransformDataset, CacheDataset, SliceDataset
|
||||
from parakeet.models.transformer_tts.utils import *
|
||||
|
||||
|
||||
class LJSpeechLoader:
|
||||
def __init__(self,
|
||||
config,
|
||||
place,
|
||||
data_path,
|
||||
alignments_path,
|
||||
batch_size,
|
||||
nranks,
|
||||
rank,
|
||||
is_vocoder=False,
|
||||
shuffle=True):
|
||||
|
||||
LJSPEECH_ROOT = Path(data_path)
|
||||
metadata = LJSpeechMetaData(LJSPEECH_ROOT, alignments_path)
|
||||
transformer = LJSpeech(config)
|
||||
dataset = TransformDataset(metadata, transformer)
|
||||
dataset = CacheDataset(dataset)
|
||||
|
||||
sampler = DistributedSampler(
|
||||
len(dataset), nranks, rank, shuffle=shuffle)
|
||||
|
||||
assert batch_size % nranks == 0
|
||||
each_bs = batch_size // nranks
|
||||
dataloader = DataCargo(
|
||||
dataset,
|
||||
sampler=sampler,
|
||||
batch_size=each_bs,
|
||||
shuffle=shuffle,
|
||||
batch_fn=batch_examples,
|
||||
drop_last=True)
|
||||
self.reader = fluid.io.DataLoader.from_generator(
|
||||
capacity=32,
|
||||
iterable=True,
|
||||
use_double_buffer=True,
|
||||
return_list=True)
|
||||
self.reader.set_batch_generator(dataloader, place)
|
||||
|
||||
|
||||
class LJSpeechMetaData(DatasetMixin):
|
||||
def __init__(self, root, alignments_path):
|
||||
self.root = Path(root)
|
||||
self._wav_dir = self.root.joinpath("wavs")
|
||||
csv_path = self.root.joinpath("metadata.csv")
|
||||
self._table = pd.read_csv(
|
||||
csv_path,
|
||||
sep="|",
|
||||
header=None,
|
||||
quoting=csv.QUOTE_NONE,
|
||||
names=["fname", "raw_text", "normalized_text"])
|
||||
with open(alignments_path, "rb") as f:
|
||||
self._alignments = pickle.load(f)
|
||||
|
||||
def get_example(self, i):
|
||||
fname, raw_text, normalized_text = self._table.iloc[i]
|
||||
alignment = self._alignments[fname]
|
||||
fname = str(self._wav_dir.joinpath(fname + ".wav"))
|
||||
return fname, normalized_text, alignment
|
||||
|
||||
def __len__(self):
|
||||
return len(self._table)
|
||||
|
||||
|
||||
class LJSpeech(object):
|
||||
def __init__(self, cfg):
|
||||
super(LJSpeech, self).__init__()
|
||||
self.sr = cfg['sr']
|
||||
self.n_fft = cfg['n_fft']
|
||||
self.num_mels = cfg['num_mels']
|
||||
self.win_length = cfg['win_length']
|
||||
self.hop_length = cfg['hop_length']
|
||||
self.preemphasis = cfg['preemphasis']
|
||||
self.fmin = cfg['fmin']
|
||||
self.fmax = cfg['fmax']
|
||||
|
||||
def __call__(self, metadatum):
|
||||
"""All the code for generating an Example from a metadatum. If you want a
|
||||
different preprocessing pipeline, you can override this method.
|
||||
This method may require several processor, each of which has a lot of options.
|
||||
In this case, you'd better pass a composed transform and pass it to the init
|
||||
method.
|
||||
"""
|
||||
fname, normalized_text, alignment = metadatum
|
||||
|
||||
wav, _ = librosa.load(str(fname))
|
||||
spec = librosa.stft(
|
||||
y=wav,
|
||||
n_fft=self.n_fft,
|
||||
win_length=self.win_length,
|
||||
hop_length=self.hop_length)
|
||||
mag = np.abs(spec)
|
||||
mel = librosa.filters.mel(self.sr,
|
||||
self.n_fft,
|
||||
n_mels=self.num_mels,
|
||||
fmin=self.fmin,
|
||||
fmax=self.fmax)
|
||||
mel = np.matmul(mel, mag)
|
||||
mel = np.log(np.maximum(mel, 1e-5))
|
||||
phonemes = np.array(
|
||||
g2p.en.text_to_sequence(normalized_text), dtype=np.int64)
|
||||
return (mel, phonemes, alignment
|
||||
) # maybe we need to implement it as a map in the future
|
||||
|
||||
|
||||
def batch_examples(batch):
    texts = []
    mels = []
    text_lens = []
    pos_texts = []
    pos_mels = []
    alignments = []
    for data in batch:
        mel, text, alignment = data
        text_lens.append(len(text))
        pos_texts.append(np.arange(1, len(text) + 1))
        pos_mels.append(np.arange(1, mel.shape[1] + 1))
        mels.append(mel)
        texts.append(text)
        alignments.append(alignment)

    # Sort by text_len in descending order
    texts = [
        i for i, _ in sorted(
            zip(texts, text_lens), key=lambda x: x[1], reverse=True)
    ]
    mels = [
        i for i, _ in sorted(
            zip(mels, text_lens), key=lambda x: x[1], reverse=True)
    ]
    pos_texts = [
        i for i, _ in sorted(
            zip(pos_texts, text_lens), key=lambda x: x[1], reverse=True)
    ]
    pos_mels = [
        i for i, _ in sorted(
            zip(pos_mels, text_lens), key=lambda x: x[1], reverse=True)
    ]
    alignments = [
        i for i, _ in sorted(
            zip(alignments, text_lens), key=lambda x: x[1], reverse=True)
    ]
    #text_lens = sorted(text_lens, reverse=True)

    # Pad each sequence to the largest length in the batch
    texts = TextIDBatcher(pad_id=0)(texts)  #(B, T)
    pos_texts = TextIDBatcher(pad_id=0)(pos_texts)  #(B, T)
    pos_mels = TextIDBatcher(pad_id=0)(pos_mels)  #(B, T)
    alignments = TextIDBatcher(pad_id=0)(alignments).astype(np.float32)
    mels = np.transpose(
        SpecBatcher(pad_value=0.)(mels), axes=(0, 2, 1))  #(B, T, num_mels)

    return (texts, mels, pos_texts, pos_mels, alignments)
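The sort-then-pad pattern above keeps sequences of similar length adjacent, so padding waste in each batch stays small. It can be sketched in isolation; `pad_batch` below is an illustrative helper, not Parakeet's `TextIDBatcher`:

```python
import numpy as np

def pad_batch(seqs, pad_id=0):
    """Sort sequences by descending length, then pad to the longest one."""
    seqs = sorted(seqs, key=len, reverse=True)
    max_len = len(seqs[0])
    out = np.full((len(seqs), max_len), pad_id, dtype=np.int64)
    for i, s in enumerate(seqs):
        out[i, :len(s)] = s
    return out

batch = pad_batch([[1, 2], [3, 4, 5], [6]])
print(batch)  # rows ordered by length: [3 4 5], then [1 2 0], then [6 0 0]
```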
@ -1,170 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from visualdl import LogWriter
from scipy.io.wavfile import write
from collections import OrderedDict
import argparse
from pprint import pprint
from ruamel import yaml
from matplotlib import cm
import numpy as np
import librosa
import paddle.fluid as fluid
import paddle.fluid.dygraph as dg
from parakeet.g2p.en import text_to_sequence
from parakeet import audio
from parakeet.models.fastspeech.fastspeech import FastSpeech
from parakeet.models.transformer_tts.utils import *
from parakeet.models.wavenet import WaveNet, UpsampleNet
from parakeet.models.clarinet import STFT, Clarinet, ParallelWaveNet
from parakeet.modules import weight_norm
from parakeet.models.waveflow import WaveFlowModule
from parakeet.utils.layer_tools import freeze
from parakeet.utils import io


def add_config_options_to_parser(parser):
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument(
        "--vocoder",
        type=str,
        default="griffin-lim",
        choices=['griffin-lim', 'waveflow'],
        help="vocoder method")
    parser.add_argument(
        "--config_vocoder", type=str, help="path of the vocoder config file")
    parser.add_argument("--use_gpu", type=int, default=0, help="device to use")
    parser.add_argument(
        "--alpha",
        type=float,
        default=1,
        help="determine the length of the expanded sequence mel, controlling the voice speed."
    )

    parser.add_argument(
        "--checkpoint", type=str, help="fastspeech checkpoint for synthesis")
    parser.add_argument(
        "--checkpoint_vocoder",
        type=str,
        help="vocoder checkpoint for synthesis")

    parser.add_argument(
        "--output",
        type=str,
        default="synthesis",
        help="path to save experiment results")


def synthesis(text_input, args):
    local_rank = dg.parallel.Env().local_rank
    place = (fluid.CUDAPlace(local_rank) if args.use_gpu else fluid.CPUPlace())
    fluid.enable_dygraph(place)

    with open(args.config) as f:
        cfg = yaml.load(f, Loader=yaml.Loader)

    # visualdl log writer
    if not os.path.exists(args.output):
        os.mkdir(args.output)

    writer = LogWriter(os.path.join(args.output, 'log'))

    model = FastSpeech(cfg['network'], num_mels=cfg['audio']['num_mels'])
    # Load parameters.
    global_step = io.load_parameters(
        model=model, checkpoint_path=args.checkpoint)
    model.eval()

    text = np.asarray(text_to_sequence(text_input))
    text = np.expand_dims(text, axis=0)
    pos_text = np.arange(1, text.shape[1] + 1)
    pos_text = np.expand_dims(pos_text, axis=0)

    text = dg.to_variable(text).astype(np.int64)
    pos_text = dg.to_variable(pos_text).astype(np.int64)

    _, mel_output_postnet = model(text, pos_text, alpha=args.alpha)

    if args.vocoder == 'griffin-lim':
        # synthesize with griffin-lim
        wav = synthesis_with_griffinlim(mel_output_postnet, cfg['audio'])
    elif args.vocoder == 'waveflow':
        wav = synthesis_with_waveflow(mel_output_postnet, args,
                                      args.checkpoint_vocoder, place)
    else:
        print(
            'vocoder error, we only support griffin-lim and waveflow, but received %s.'
            % args.vocoder)

    writer.add_audio(text_input + '(' + args.vocoder + ')', wav, 0,
                     cfg['audio']['sr'])
    if not os.path.exists(os.path.join(args.output, 'samples')):
        os.mkdir(os.path.join(args.output, 'samples'))
    write(
        os.path.join(
            os.path.join(args.output, 'samples'), args.vocoder + '.wav'),
        cfg['audio']['sr'], wav)
    print("Synthesis completed !!!")
    writer.close()


def synthesis_with_griffinlim(mel_output, cfg):
    mel_output = fluid.layers.transpose(
        fluid.layers.squeeze(mel_output, [0]), [1, 0])
    mel_output = np.exp(mel_output.numpy())
    basis = librosa.filters.mel(sr=cfg['sr'],
                                n_fft=cfg['n_fft'],
                                n_mels=cfg['num_mels'],
                                fmin=cfg['fmin'],
                                fmax=cfg['fmax'])
    inv_basis = np.linalg.pinv(basis)
    spec = np.maximum(1e-10, np.dot(inv_basis, mel_output))

    wav = librosa.core.griffinlim(
        spec**cfg['power'],
        hop_length=cfg['hop_length'],
        win_length=cfg['win_length'])

    return wav


def synthesis_with_waveflow(mel_output, args, checkpoint, place):

    fluid.enable_dygraph(place)
    args.config = args.config_vocoder
    args.use_fp16 = False
    config = io.add_yaml_config_to_args(args)

    mel_spectrogram = fluid.layers.transpose(mel_output, [0, 2, 1])

    # Build model.
    waveflow = WaveFlowModule(config)
    io.load_parameters(model=waveflow, checkpoint_path=checkpoint)
    for layer in waveflow.sublayers():
        if isinstance(layer, weight_norm.WeightNormWrapper):
            layer.remove_weight_norm()

    # Run model inference.
    wav = waveflow.synthesize(mel_spectrogram, sigma=config.sigma)
    return wav.numpy()[0]


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Synthesis model")
    add_config_options_to_parser(parser)
    args = parser.parse_args()
    pprint(vars(args))
    synthesis(
        "Don't argue with the people of strong determination, because they may change the fact!",
        args)

@ -1,20 +0,0 @@
# synthesize from a trained model

CUDA_VISIBLE_DEVICES=0 \
python -u synthesis.py \
--use_gpu=1 \
--alpha=1.0 \
--checkpoint='./fastspeech_ljspeech_ckpt_1.0/fastspeech/step-162000' \
--config='fastspeech_ljspeech_ckpt_1.0/ljspeech.yaml' \
--output='./synthesis' \
--vocoder='waveflow' \
--config_vocoder='./waveflow_res128_ljspeech_ckpt_1.0/waveflow_ljspeech.yaml' \
--checkpoint_vocoder='./waveflow_res128_ljspeech_ckpt_1.0/step-2000000'

if [ $? -ne 0 ]; then
    echo "Failed in synthesis!"
    exit 1
fi
exit 0

@ -1,166 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import argparse
import os
import time
import math
from pathlib import Path
from pprint import pprint
from ruamel import yaml
from tqdm import tqdm
from matplotlib import cm
from collections import OrderedDict
from visualdl import LogWriter
import paddle.fluid.dygraph as dg
import paddle.fluid.layers as layers
import paddle.fluid as fluid
from parakeet.models.fastspeech.fastspeech import FastSpeech
from parakeet.models.fastspeech.utils import get_alignment
from data import LJSpeechLoader
from parakeet.utils import io


def add_config_options_to_parser(parser):
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument("--use_gpu", type=int, default=0, help="device to use")
    parser.add_argument("--data", type=str, help="path of LJspeech dataset")
    parser.add_argument(
        "--alignments_path", type=str, help="path of alignments")

    g = parser.add_mutually_exclusive_group()
    g.add_argument("--checkpoint", type=str, help="checkpoint to resume from")
    g.add_argument(
        "--iteration",
        type=int,
        help="the iteration of the checkpoint to load from output directory")

    parser.add_argument(
        "--output",
        type=str,
        default="experiment",
        help="path to save experiment results")


def main(args):
    local_rank = dg.parallel.Env().local_rank
    nranks = dg.parallel.Env().nranks
    parallel = nranks > 1

    with open(args.config) as f:
        cfg = yaml.load(f, Loader=yaml.Loader)

    global_step = 0
    place = fluid.CUDAPlace(dg.parallel.Env()
                            .dev_id) if args.use_gpu else fluid.CPUPlace()
    fluid.enable_dygraph(place)

    if not os.path.exists(args.output):
        os.mkdir(args.output)

    writer = LogWriter(os.path.join(args.output,
                                    'log')) if local_rank == 0 else None

    model = FastSpeech(cfg['network'], num_mels=cfg['audio']['num_mels'])
    model.train()
    optimizer = fluid.optimizer.AdamOptimizer(
        learning_rate=dg.NoamDecay(1 / (cfg['train']['warm_up_step'] *
                                        (cfg['train']['learning_rate']**2)),
                                   cfg['train']['warm_up_step']),
        parameter_list=model.parameters(),
        grad_clip=fluid.clip.GradientClipByGlobalNorm(cfg['train'][
            'grad_clip_thresh']))
    reader = LJSpeechLoader(
        cfg['audio'],
        place,
        args.data,
        args.alignments_path,
        cfg['train']['batch_size'],
        nranks,
        local_rank,
        shuffle=True).reader
    iterator = iter(tqdm(reader))

    # Load parameters.
    global_step = io.load_parameters(
        model=model,
        optimizer=optimizer,
        checkpoint_dir=os.path.join(args.output, 'checkpoints'),
        iteration=args.iteration,
        checkpoint_path=args.checkpoint)
    print("Rank {}: checkpoint loaded.".format(local_rank))

    if parallel:
        strategy = dg.parallel.prepare_context()
        model = fluid.dygraph.parallel.DataParallel(model, strategy)

    while global_step <= cfg['train']['max_iteration']:
        try:
            batch = next(iterator)
        except StopIteration:
            iterator = iter(tqdm(reader))
            batch = next(iterator)

        (character, mel, pos_text, pos_mel, alignment) = batch

        global_step += 1

        # Forward
        result = model(
            character, pos_text, mel_pos=pos_mel, length_target=alignment)
        mel_output, mel_output_postnet, duration_predictor_output, _, _ = result
        mel_loss = layers.mse_loss(mel_output, mel)
        mel_postnet_loss = layers.mse_loss(mel_output_postnet, mel)
        duration_loss = layers.mean(
            layers.abs(
                layers.elementwise_sub(duration_predictor_output, alignment)))
        total_loss = mel_loss + mel_postnet_loss + duration_loss

        if local_rank == 0:
            writer.add_scalar('mel_loss', mel_loss.numpy(), global_step)
            writer.add_scalar('post_mel_loss',
                              mel_postnet_loss.numpy(), global_step)
            writer.add_scalar('duration_loss',
                              duration_loss.numpy(), global_step)
            writer.add_scalar('learning_rate',
                              optimizer._learning_rate.step().numpy(),
                              global_step)

        if parallel:
            total_loss = model.scale_loss(total_loss)
            total_loss.backward()
            model.apply_collective_grads()
        else:
            total_loss.backward()
        optimizer.minimize(total_loss)
        model.clear_gradients()

        # save checkpoint
        if local_rank == 0 and global_step % cfg['train'][
                'checkpoint_interval'] == 0:
            io.save_parameters(
                os.path.join(args.output, 'checkpoints'), global_step, model,
                optimizer)

    if local_rank == 0:
        writer.close()


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Train Fastspeech model")
    add_config_options_to_parser(parser)
    args = parser.parse_args()
    # Print the whole config setting.
    pprint(vars(args))
    main(args)

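The `dg.NoamDecay(1 / (warm_up_step * learning_rate**2), warm_up_step)` call in the training script above builds a Noam schedule: the learning rate grows linearly during warmup, then decays as the inverse square root of the step. A NumPy-free sketch of the standard Noam formula (assuming Fluid's `NoamDecay` follows it), showing why that particular first argument makes the schedule peak at exactly the configured base learning rate:

```python
def noam_lr(step, d_model, warmup):
    """Noam schedule: linear warmup, then inverse-sqrt decay."""
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# Choosing d_model = 1 / (warmup * base_lr**2), as train.py does,
# makes the schedule reach base_lr exactly when step == warmup:
# d_model**-0.5 * warmup**-0.5 = sqrt(warmup) * base_lr / sqrt(warmup) ... = base_lr
base_lr, warmup = 0.001, 4000
d_model = 1 / (warmup * base_lr ** 2)   # = 250 for these values
peak = noam_lr(warmup, d_model, warmup)  # equals base_lr up to float rounding
```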
@ -1,15 +0,0 @@
# train model
export CUDA_VISIBLE_DEVICES=0
python -u train.py \
--use_gpu=1 \
--data='../../dataset/LJSpeech-1.1' \
--alignments_path='./alignments/alignments.pkl' \
--output='./experiment' \
--config='configs/ljspeech.yaml'
# To resume training, add: --checkpoint='./checkpoint/fastspeech/step-120000'

if [ $? -ne 0 ]; then
    echo "Failed in training!"
    exit 1
fi
exit 0

@ -1,112 +0,0 @@
# TransformerTTS

PaddlePaddle dynamic graph implementation of TransformerTTS, a neural TTS model with Transformer. The implementation is based on [Neural Speech Synthesis with Transformer Network](https://arxiv.org/abs/1809.08895).

## Dataset

We experiment with the LJSpeech dataset. Download and unzip [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```

## Model Architecture

<div align="center" name="TransformerTTS model architecture">
  <img src="./images/model_architecture.jpg" width=400 height=600 /> <br>
</div>
<div align="center" >
TransformerTTS model architecture
</div>

The model adopts multi-head attention to replace both the RNN structures and the original attention mechanism of [Tacotron2](https://arxiv.org/abs/1712.05884). It consists of two main parts, an encoder and a decoder. We also implement the CBHG model of Tacotron as the vocoder part and convert the spectrogram into a raw waveform using the Griffin-Lim algorithm.

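Griffin-Lim reconstructs a waveform from magnitudes alone by repeatedly enforcing the known STFT magnitudes while refining the phase. A pedagogical NumPy sketch of the iteration (Parakeet itself calls `librosa.core.griffinlim`; the window, FFT size, and hop below are illustrative):

```python
import numpy as np

def stft(x, n_fft=256, hop=64):
    """Windowed FFT frames of a 1-D signal."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

def istft(S, n_fft=256, hop=64):
    """Overlap-add inverse of the stft above, with window normalization."""
    win = np.hanning(n_fft)
    out = np.zeros(hop * (len(S) - 1) + n_fft)
    norm = np.zeros_like(out)
    for k, spec in enumerate(S):
        out[k * hop:k * hop + n_fft] += np.fft.irfft(spec, n_fft) * win
        norm[k * hop:k * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

def griffin_lim(mag, n_iter=50, n_fft=256, hop=64):
    """Alternate time/frequency projections, keeping the known magnitudes."""
    angles = np.exp(2j * np.pi * np.random.RandomState(0).rand(*mag.shape))
    for _ in range(n_iter):
        x = istft(mag * angles, n_fft, hop)
        S = stft(x, n_fft, hop)
        angles = S / np.maximum(np.abs(S), 1e-8)
    return istft(mag * angles, n_fft, hop)

# Reconstruct a pure tone from its magnitude spectrogram alone.
t = np.arange(4096) / 8000.0
x = np.sin(2 * np.pi * 440 * t)
x_hat = griffin_lim(np.abs(stft(x)))
```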
## Project Structure

```text
├── config                 # yaml configuration files
├── data.py                # dataset and dataloader settings for LJSpeech
├── synthesis.py           # script to synthesize waveform from text
├── train_transformer.py   # script for transformer model training
├── train_vocoder.py       # script for vocoder model training
```

## Saving & Loading

`train_transformer.py` and `train_vocoder.py` have 3 arguments in common, `--checkpoint`, `--iteration` and `--output`.

1. `--output` is the directory for saving results.
During training, checkpoints are saved in `${output}/checkpoints` and visualdl logs are saved in `${output}/log`.
During synthesis, results are saved in `${output}/samples` and the visualdl log is saved in `${output}/log`.

2. `--checkpoint` is the path of a checkpoint and `--iteration` is the target step. They are used to load checkpoints in the following way.

   - If `--checkpoint` is provided, the checkpoint specified by `--checkpoint` is loaded.

   - If `--checkpoint` is not provided, we try to load the checkpoint of the target step specified by `--iteration` from the `${output}/checkpoints/` directory, e.g. given `--iteration 120000`, the checkpoint `${output}/checkpoints/step-120000.*` will be loaded.

   - If neither `--checkpoint` nor `--iteration` is provided, we try to load the latest checkpoint from the `${output}/checkpoints/` directory.

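The precedence rules above amount to a small resolver. A sketch mirroring the documented behavior (`resolve_checkpoint` is a hypothetical helper, not Parakeet's `io.load_parameters`, and the `step-*` file naming is taken from the examples above):

```python
import os
import re

def resolve_checkpoint(output, checkpoint=None, iteration=None):
    """Pick a checkpoint path by the documented precedence rules."""
    ckpt_dir = os.path.join(output, "checkpoints")
    if checkpoint is not None:          # 1. an explicit path wins
        return checkpoint
    if iteration is not None:           # 2. then an explicit target step
        return os.path.join(ckpt_dir, "step-{}".format(iteration))
    # 3. otherwise fall back to the latest step-* checkpoint, if any
    steps = [int(m.group(1)) for f in os.listdir(ckpt_dir)
             if (m := re.match(r"step-(\d+)", f))]
    return os.path.join(ckpt_dir, "step-{}".format(max(steps))) if steps else None
```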
## Train Transformer

The TransformerTTS model can be trained by running ``train_transformer.py``.

```bash
python train_transformer.py \
--use_gpu=1 \
--data=${DATAPATH} \
--output=${OUTPUTPATH} \
--config='configs/ljspeech.yaml' \
```

Or you can run the script file directly.

```bash
sh train_transformer.sh
```

If you want to train on multiple GPUs, start training in the following way.

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch --selected_gpus=0,1,2,3 --log_dir ./mylog train_transformer.py \
--use_gpu=1 \
--data=${DATAPATH} \
--output=${OUTPUTPATH} \
--config='configs/ljspeech.yaml' \
```

If you wish to resume from an existing model, see [Saving-&-Loading](#Saving-&-Loading) for details of checkpoint loading.

**Note: to ensure the training effect, we recommend using multi-GPU training to enlarge the batch size, with at least 16 samples per GPU in a single batch.**

For more help on arguments, run ``python train_transformer.py --help``.

## Synthesis

After training TransformerTTS, audio can be synthesized by running ``synthesis.py``.

```bash
python synthesis.py \
--use_gpu=0 \
--output=${OUTPUTPATH} \
--config='configs/ljspeech.yaml' \
--checkpoint_transformer=${CHECKPOINTPATH} \
--vocoder='griffin-lim' \
```

We currently support two vocoders, the Griffin-Lim algorithm and WaveFlow. You can set ``--vocoder`` to use one of them. If you want to use WaveFlow as your vocoder, you need to set ``--config_vocoder`` and ``--checkpoint_vocoder``, the paths of the vocoder's config file and checkpoint. You can download the pre-trained WaveFlow model from [here](https://github.com/PaddlePaddle/Parakeet#vocoders).

Or you can run the script file directly.

```bash
sh synthesis.sh
```

For more help on arguments, run ``python synthesis.py --help``.

Then you can find the synthesized audio files in ``${OUTPUTPATH}/samples``.

@ -1,38 +0,0 @@
audio:
  num_mels: 80
  n_fft: 1024
  sr: 22050
  preemphasis: 0.97
  hop_length: 256
  win_length: 1024
  power: 1.2
  fmin: 0
  fmax: 8000

network:
  hidden_size: 256
  embedding_size: 512
  encoder_num_head: 4
  encoder_n_layers: 3
  decoder_num_head: 4
  decoder_n_layers: 3
  outputs_per_step: 1
  stop_loss_weight: 8

vocoder:
  hidden_size: 256

train:
  batch_size: 32
  learning_rate: 0.001
  warm_up_step: 4000
  grad_clip_thresh: 1.0

  checkpoint_interval: 1000
  image_interval: 2000

  max_iteration: 500000

@ -1,219 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pathlib import Path
import numpy as np
import pandas as pd
import librosa
import csv

from paddle import fluid
from parakeet import g2p
from parakeet.data.sampler import *
from parakeet.data.datacargo import DataCargo
from parakeet.data.batch import TextIDBatcher, SpecBatcher
from parakeet.data.dataset import DatasetMixin, TransformDataset, CacheDataset, SliceDataset
from parakeet.models.transformer_tts.utils import *


class LJSpeechLoader:
    def __init__(self,
                 config,
                 place,
                 data_path,
                 batch_size,
                 nranks,
                 rank,
                 is_vocoder=False,
                 shuffle=True):

        LJSPEECH_ROOT = Path(data_path)
        metadata = LJSpeechMetaData(LJSPEECH_ROOT)
        transformer = LJSpeech(config)
        dataset = TransformDataset(metadata, transformer)
        dataset = CacheDataset(dataset)

        sampler = DistributedSampler(
            len(dataset), nranks, rank, shuffle=shuffle)

        assert batch_size % nranks == 0
        each_bs = batch_size // nranks
        if is_vocoder:
            dataloader = DataCargo(
                dataset,
                sampler=sampler,
                batch_size=each_bs,
                shuffle=shuffle,
                batch_fn=batch_examples_vocoder,
                drop_last=True)
        else:
            dataloader = DataCargo(
                dataset,
                sampler=sampler,
                batch_size=each_bs,
                shuffle=shuffle,
                batch_fn=batch_examples,
                drop_last=True)
        self.reader = fluid.io.DataLoader.from_generator(
            capacity=32,
            iterable=True,
            use_double_buffer=True,
            return_list=True)
        self.reader.set_batch_generator(dataloader, place)


class LJSpeechMetaData(DatasetMixin):
    def __init__(self, root):
        self.root = Path(root)
        self._wav_dir = self.root.joinpath("wavs")
        csv_path = self.root.joinpath("metadata.csv")
        self._table = pd.read_csv(
            csv_path,
            sep="|",
            header=None,
            quoting=csv.QUOTE_NONE,
            names=["fname", "raw_text", "normalized_text"])

    def get_example(self, i):
        fname, raw_text, normalized_text = self._table.iloc[i]
        fname = str(self._wav_dir.joinpath(fname + ".wav"))
        return fname, raw_text, normalized_text

    def __len__(self):
        return len(self._table)


class LJSpeech(object):
    def __init__(self, config):
        super(LJSpeech, self).__init__()
        self.config = config
        self.sr = config['sr']
        self.n_mels = config['num_mels']
        self.preemphasis = config['preemphasis']
        self.n_fft = config['n_fft']
        self.win_length = config['win_length']
        self.hop_length = config['hop_length']
        self.fmin = config['fmin']
        self.fmax = config['fmax']

    def __call__(self, metadatum):
        """Generate an Example from a metadatum. Override this method if you
        want a different preprocessing pipeline.

        This method may require several processors, each of which has many
        options. In that case, it is better to compose the transforms and
        pass the composed transform to the __init__ method.
        """
        fname, raw_text, normalized_text = metadatum

        # load
        wav, _ = librosa.load(str(fname))

        spec = librosa.stft(
            y=wav,
            n_fft=self.n_fft,
            win_length=self.win_length,
            hop_length=self.hop_length)
        mag = np.abs(spec)
        mel = librosa.filters.mel(sr=self.sr,
                                  n_fft=self.n_fft,
                                  n_mels=self.n_mels,
                                  fmin=self.fmin,
                                  fmax=self.fmax)
        mel = np.matmul(mel, mag)
        mel = np.log(np.maximum(mel, 1e-5))

        characters = np.array(
            g2p.en.text_to_sequence(normalized_text), dtype=np.int64)
        return (mag, mel, characters)


def batch_examples(batch):
    texts = []
    mels = []
    mel_inputs = []
    text_lens = []
    pos_texts = []
    pos_mels = []
    stop_tokens = []
    for data in batch:
        _, mel, text = data
        # Decoder input for teacher forcing: mel frames shifted right by one,
        # with a zero frame prepended.
        mel_inputs.append(
            np.concatenate(
                [np.zeros([mel.shape[0], 1], np.float32), mel[:, :-1]],
                axis=-1))
        text_lens.append(len(text))
        pos_texts.append(np.arange(1, len(text) + 1))
        pos_mels.append(np.arange(1, mel.shape[1] + 1))
        mels.append(mel)
        texts.append(text)
        stop_token = np.append(np.zeros([mel.shape[1] - 1], np.float32), 1.0)
        stop_tokens.append(stop_token)

    # Sort by text_len in descending order
    texts = [
        i for i, _ in sorted(
            zip(texts, text_lens), key=lambda x: x[1], reverse=True)
    ]
    mels = [
        i for i, _ in sorted(
            zip(mels, text_lens), key=lambda x: x[1], reverse=True)
    ]
    mel_inputs = [
        i for i, _ in sorted(
            zip(mel_inputs, text_lens), key=lambda x: x[1], reverse=True)
    ]
    pos_texts = [
        i for i, _ in sorted(
            zip(pos_texts, text_lens), key=lambda x: x[1], reverse=True)
    ]
    pos_mels = [
        i for i, _ in sorted(
            zip(pos_mels, text_lens), key=lambda x: x[1], reverse=True)
    ]
    stop_tokens = [
        i for i, _ in sorted(
            zip(stop_tokens, text_lens), key=lambda x: x[1], reverse=True)
    ]
    text_lens = sorted(text_lens, reverse=True)

    # Pad each sequence to the largest length in the batch; padded steps of
    # the stop targets are filled with 1 (already stopped).
    texts = TextIDBatcher(pad_id=0)(texts)  #(B, T)
    pos_texts = TextIDBatcher(pad_id=0)(pos_texts)  #(B, T)
    pos_mels = TextIDBatcher(pad_id=0)(pos_mels)  #(B, T)
    stop_tokens = TextIDBatcher(pad_id=1, dtype=np.float32)(stop_tokens)
    mels = np.transpose(
        SpecBatcher(pad_value=0.)(mels), axes=(0, 2, 1))  #(B, T, num_mels)
    mel_inputs = np.transpose(
        SpecBatcher(pad_value=0.)(mel_inputs), axes=(0, 2, 1))  #(B, T, num_mels)

    return (texts, mels, mel_inputs, pos_texts, pos_mels, stop_tokens)


def batch_examples_vocoder(batch):
    mels = []
    mags = []
    for data in batch:
        mag, mel, _ = data
        mels.append(mel)
        mags.append(mag)

    mels = np.transpose(SpecBatcher(pad_value=0.)(mels), axes=(0, 2, 1))
    mags = np.transpose(SpecBatcher(pad_value=0.)(mags), axes=(0, 2, 1))

    return (mels, mags)
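The per-utterance decoder inputs and stop targets built in `batch_examples` above are easy to check in isolation; the shapes here are illustrative:

```python
import numpy as np

num_mels, T = 4, 5
mel = np.arange(num_mels * T, dtype=np.float32).reshape(num_mels, T)

# Teacher-forcing decoder input: shift frames right, prepend a zero <go> frame.
mel_input = np.concatenate(
    [np.zeros([num_mels, 1], np.float32), mel[:, :-1]], axis=-1)

# Stop target: 0 for every frame except the last, which is 1.
stop_token = np.append(np.zeros(T - 1, np.float32), 1.0)

print(stop_token)  # [0. 0. 0. 0. 1.]
```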
@ -1,202 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from scipy.io.wavfile import write
import numpy as np
from tqdm import tqdm
from matplotlib import cm
from visualdl import LogWriter
from ruamel import yaml
from pathlib import Path
import argparse
from pprint import pprint
import paddle.fluid as fluid
import paddle.fluid.dygraph as dg
from parakeet.g2p.en import text_to_sequence
from parakeet.models.transformer_tts.utils import *
from parakeet.models.transformer_tts import TransformerTTS
from parakeet.models.waveflow import WaveFlowModule
from parakeet.modules.weight_norm import WeightNormWrapper
from parakeet.utils import io


def add_config_options_to_parser(parser):
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument("--use_gpu", type=int, default=0, help="device to use")
    parser.add_argument(
        "--stop_threshold",
        type=float,
        default=0.5,
        help="threshold of the stop token which indicates whether generation should stop at the current time step."
    )
    parser.add_argument(
        "--max_len",
        type=int,
        default=1000,
        help="max length of the spectrogram at synthesis. If the synthesized spectrogram is longer than max_len, it is truncated."
    )

    parser.add_argument(
        "--checkpoint_transformer",
        type=str,
        help="transformer_tts checkpoint for synthesis")
    parser.add_argument(
        "--vocoder",
        type=str,
        default="griffin-lim",
        choices=['griffin-lim', 'waveflow'],
        help="vocoder method")
    parser.add_argument(
        "--config_vocoder", type=str, help="path of the vocoder config file")
    parser.add_argument(
        "--checkpoint_vocoder",
        type=str,
        help="vocoder checkpoint for synthesis")

    parser.add_argument(
        "--output",
        type=str,
        default="synthesis",
        help="path to save experiment results")


def synthesis(text_input, args):
    local_rank = dg.parallel.Env().local_rank
    place = (fluid.CUDAPlace(local_rank) if args.use_gpu else fluid.CPUPlace())

    with open(args.config) as f:
        cfg = yaml.load(f, Loader=yaml.Loader)

    # visualdl log writer
    if not os.path.exists(args.output):
        os.mkdir(args.output)

    writer = LogWriter(os.path.join(args.output, 'log'))

    fluid.enable_dygraph(place)
    with fluid.unique_name.guard():
        network_cfg = cfg['network']
        model = TransformerTTS(
            network_cfg['embedding_size'], network_cfg['hidden_size'],
            network_cfg['encoder_num_head'], network_cfg['encoder_n_layers'],
            cfg['audio']['num_mels'], network_cfg['outputs_per_step'],
            network_cfg['decoder_num_head'], network_cfg['decoder_n_layers'])
        # Load parameters.
        global_step = io.load_parameters(
            model=model, checkpoint_path=args.checkpoint_transformer)
        model.eval()

    # init input
    text = np.asarray(text_to_sequence(text_input))
    text = fluid.layers.unsqueeze(dg.to_variable(text).astype(np.int64), [0])
    mel_input = dg.to_variable(np.zeros([1, 1, 80])).astype(np.float32)
    pos_text = np.arange(1, text.shape[1] + 1)
    pos_text = fluid.layers.unsqueeze(
        dg.to_variable(pos_text).astype(np.int64), [0])

    # Autoregressive decoding: feed the frames predicted so far back in until
    # the stop token fires or max_len is reached.
    for i in range(args.max_len):
        pos_mel = np.arange(1, mel_input.shape[1] + 1)
        pos_mel = fluid.layers.unsqueeze(
            dg.to_variable(pos_mel).astype(np.int64), [0])
        mel_pred, postnet_pred, attn_probs, stop_preds, attn_enc, attn_dec = model(
            text, mel_input, pos_text, pos_mel)
        if stop_preds.numpy()[0, -1] > args.stop_threshold:
            break
        mel_input = fluid.layers.concat(
            [mel_input, postnet_pred[:, -1:, :]], axis=1)
    global_step = 0
    for i, prob in enumerate(attn_probs):
        for j in range(4):
            x = np.uint8(cm.viridis(prob.numpy()[j]) * 255)
            writer.add_image(
                'Attention_%d_0' % global_step,
                x,
                i * 4 + j)

    if args.vocoder == 'griffin-lim':
        # synthesize with griffin-lim
        wav = synthesis_with_griffinlim(postnet_pred, cfg['audio'])
    elif args.vocoder == 'waveflow':
        # synthesize with waveflow
        wav = synthesis_with_waveflow(postnet_pred, args,
                                      args.checkpoint_vocoder, place)
    else:
        print(
            'vocoder error, we only support griffin-lim and waveflow, but received %s.'
            % args.vocoder)

    writer.add_audio(text_input + '(' + args.vocoder + ')', wav, 0,
                     cfg['audio']['sr'])
    if not os.path.exists(os.path.join(args.output, 'samples')):
        os.mkdir(os.path.join(args.output, 'samples'))
    write(
        os.path.join(
            os.path.join(args.output, 'samples'), args.vocoder + '.wav'),
        cfg['audio']['sr'], wav)
    print("Synthesis completed !!!")
    writer.close()


def synthesis_with_griffinlim(mel_output, cfg):
    # synthesis with griffin-lim
    mel_output = fluid.layers.transpose(
        fluid.layers.squeeze(mel_output, [0]), [1, 0])
    mel_output = np.exp(mel_output.numpy())
basis = librosa.filters.mel(cfg['sr'],
|
||||
cfg['n_fft'],
|
||||
cfg['num_mels'],
|
||||
fmin=cfg['fmin'],
|
||||
fmax=cfg['fmax'])
|
||||
inv_basis = np.linalg.pinv(basis)
|
||||
spec = np.maximum(1e-10, np.dot(inv_basis, mel_output))
|
||||
|
||||
wav = librosa.core.griffinlim(
|
||||
spec**cfg['power'],
|
||||
hop_length=cfg['hop_length'],
|
||||
win_length=cfg['win_length'])
|
||||
|
||||
return wav
|
||||
|
||||
|
||||
def synthesis_with_waveflow(mel_output, args, checkpoint, place):
|
||||
fluid.enable_dygraph(place)
|
||||
args.config = args.config_vocoder
|
||||
args.use_fp16 = False
|
||||
config = io.add_yaml_config_to_args(args)
|
||||
|
||||
mel_spectrogram = fluid.layers.transpose(
|
||||
fluid.layers.squeeze(mel_output, [0]), [1, 0])
|
||||
mel_spectrogram = fluid.layers.unsqueeze(mel_spectrogram, [0])
|
||||
|
||||
# Build model.
|
||||
waveflow = WaveFlowModule(config)
|
||||
io.load_parameters(model=waveflow, checkpoint_path=checkpoint)
|
||||
for layer in waveflow.sublayers():
|
||||
if isinstance(layer, WeightNormWrapper):
|
||||
layer.remove_weight_norm()
|
||||
|
||||
# Run model inference.
|
||||
wav = waveflow.synthesize(mel_spectrogram, sigma=config.sigma)
|
||||
return wav.numpy()[0]
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
parser = argparse.ArgumentParser(description="Synthesis model")
|
||||
add_config_options_to_parser(parser)
|
||||
args = parser.parse_args()
|
||||
# Print the whole config setting.
|
||||
pprint(vars(args))
|
||||
synthesis(
|
||||
"Life was like a box of chocolates, you never know what you're gonna get.",
|
||||
args)
@ -1,17 +0,0 @@

# synthesize from a trained model
CUDA_VISIBLE_DEVICES=0 \
python -u synthesis.py \
    --use_gpu=0 \
    --output='./synthesis' \
    --config='transformer_tts_ljspeech_ckpt_1.0/ljspeech.yaml' \
    --checkpoint_transformer='./transformer_tts_ljspeech_ckpt_1.0/step-120000' \
    --vocoder='waveflow' \
    --config_vocoder='./waveflow_res128_ljspeech_ckpt_1.0/waveflow_ljspeech.yaml' \
    --checkpoint_vocoder='./waveflow_res128_ljspeech_ckpt_1.0/step-2000000'

if [ $? -ne 0 ]; then
    echo "Failed in synthesis!"
    exit 1
fi
exit 0
@ -1,219 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from tqdm import tqdm
from visualdl import LogWriter
from collections import OrderedDict
import argparse
from pprint import pprint
from ruamel import yaml
from matplotlib import cm
import numpy as np
import paddle.fluid as fluid
import paddle.fluid.dygraph as dg
import paddle.fluid.layers as layers
from parakeet.models.transformer_tts.utils import cross_entropy
from data import LJSpeechLoader
from parakeet.models.transformer_tts import TransformerTTS
from parakeet.utils import io


def add_config_options_to_parser(parser):
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument("--use_gpu", type=int, default=0, help="device to use")
    parser.add_argument("--data", type=str, help="path of LJspeech dataset")

    g = parser.add_mutually_exclusive_group()
    g.add_argument("--checkpoint", type=str, help="checkpoint to resume from")
    g.add_argument(
        "--iteration",
        type=int,
        help="the iteration of the checkpoint to load from output directory")

    parser.add_argument(
        "--output",
        type=str,
        default="experiment",
        help="path to save experiment results")


def main(args):
    local_rank = dg.parallel.Env().local_rank
    nranks = dg.parallel.Env().nranks
    parallel = nranks > 1

    with open(args.config) as f:
        cfg = yaml.load(f, Loader=yaml.Loader)

    global_step = 0
    place = fluid.CUDAPlace(local_rank) if args.use_gpu else fluid.CPUPlace()

    if not os.path.exists(args.output):
        os.mkdir(args.output)

    writer = LogWriter(os.path.join(args.output,
                                    'log')) if local_rank == 0 else None

    fluid.enable_dygraph(place)
    network_cfg = cfg['network']
    model = TransformerTTS(
        network_cfg['embedding_size'], network_cfg['hidden_size'],
        network_cfg['encoder_num_head'], network_cfg['encoder_n_layers'],
        cfg['audio']['num_mels'], network_cfg['outputs_per_step'],
        network_cfg['decoder_num_head'], network_cfg['decoder_n_layers'])

    model.train()
    optimizer = fluid.optimizer.AdamOptimizer(
        learning_rate=dg.NoamDecay(1 / (cfg['train']['warm_up_step'] *
                                        (cfg['train']['learning_rate']**2)),
                                   cfg['train']['warm_up_step']),
        parameter_list=model.parameters(),
        grad_clip=fluid.clip.GradientClipByGlobalNorm(cfg['train'][
            'grad_clip_thresh']))

    # Load parameters.
    global_step = io.load_parameters(
        model=model,
        optimizer=optimizer,
        checkpoint_dir=os.path.join(args.output, 'checkpoints'),
        iteration=args.iteration,
        checkpoint_path=args.checkpoint)
    print("Rank {}: checkpoint loaded.".format(local_rank))

    if parallel:
        strategy = dg.parallel.prepare_context()
        model = fluid.dygraph.parallel.DataParallel(model, strategy)

    reader = LJSpeechLoader(
        cfg['audio'],
        place,
        args.data,
        cfg['train']['batch_size'],
        nranks,
        local_rank,
        shuffle=True).reader

    iterator = iter(tqdm(reader))

    global_step += 1

    while global_step <= cfg['train']['max_iteration']:
        try:
            batch = next(iterator)
        except StopIteration as e:
            iterator = iter(tqdm(reader))
            batch = next(iterator)

        character, mel, mel_input, pos_text, pos_mel, stop_tokens = batch

        mel_pred, postnet_pred, attn_probs, stop_preds, attn_enc, attn_dec = model(
            character, mel_input, pos_text, pos_mel)

        mel_loss = layers.mean(
            layers.abs(layers.elementwise_sub(mel_pred, mel)))
        post_mel_loss = layers.mean(
            layers.abs(layers.elementwise_sub(postnet_pred, mel)))
        loss = mel_loss + post_mel_loss

        stop_loss = cross_entropy(
            stop_preds, stop_tokens, weight=cfg['network']['stop_loss_weight'])
        loss = loss + stop_loss

        if local_rank == 0:
            writer.add_scalar('training_loss/mel_loss',
                              mel_loss.numpy(),
                              global_step)
            writer.add_scalar('training_loss/post_mel_loss',
                              post_mel_loss.numpy(),
                              global_step)
            writer.add_scalar('stop_loss', stop_loss.numpy(), global_step)

            if parallel:
                writer.add_scalar('alphas/encoder_alpha',
                                  model._layers.encoder.alpha.numpy(),
                                  global_step)
                writer.add_scalar('alphas/decoder_alpha',
                                  model._layers.decoder.alpha.numpy(),
                                  global_step)
            else:
                writer.add_scalar('alphas/encoder_alpha',
                                  model.encoder.alpha.numpy(),
                                  global_step)
                writer.add_scalar('alphas/decoder_alpha',
                                  model.decoder.alpha.numpy(),
                                  global_step)

            writer.add_scalar('learning_rate',
                              optimizer._learning_rate.step().numpy(),
                              global_step)

            if global_step % cfg['train']['image_interval'] == 1:
                for i, prob in enumerate(attn_probs):
                    for j in range(cfg['network']['decoder_num_head']):
                        x = np.uint8(
                            cm.viridis(prob.numpy()[j * cfg['train'][
                                'batch_size'] // nranks]) * 255)
                        writer.add_image(
                            'Attention_%d_0' % global_step,
                            x,
                            i * 4 + j)

                for i, prob in enumerate(attn_enc):
                    for j in range(cfg['network']['encoder_num_head']):
                        x = np.uint8(
                            cm.viridis(prob.numpy()[j * cfg['train'][
                                'batch_size'] // nranks]) * 255)
                        writer.add_image(
                            'Attention_enc_%d_0' % global_step,
                            x,
                            i * 4 + j)

                for i, prob in enumerate(attn_dec):
                    for j in range(cfg['network']['decoder_num_head']):
                        x = np.uint8(
                            cm.viridis(prob.numpy()[j * cfg['train'][
                                'batch_size'] // nranks]) * 255)
                        writer.add_image(
                            'Attention_dec_%d_0' % global_step,
                            x,
                            i * 4 + j)

        if parallel:
            loss = model.scale_loss(loss)
            loss.backward()
            model.apply_collective_grads()
        else:
            loss.backward()
        optimizer.minimize(loss)
        model.clear_gradients()

        # save checkpoint
        if local_rank == 0 and global_step % cfg['train'][
                'checkpoint_interval'] == 0:
            io.save_parameters(
                os.path.join(args.output, 'checkpoints'), global_step, model,
                optimizer)
        global_step += 1

    if local_rank == 0:
        writer.close()


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Train TransformerTTS model")
    add_config_options_to_parser(parser)
    args = parser.parse_args()
    # Print the whole config setting.
    pprint(vars(args))
    main(args)
@ -1,15 +0,0 @@

# train model
export CUDA_VISIBLE_DEVICES=0
python -u train_transformer.py \
    --use_gpu=1 \
    --data='../../dataset/LJSpeech-1.1' \
    --output='./experiment' \
    --config='configs/ljspeech.yaml'
    #--checkpoint='./checkpoint/transformer/step-120000' \

if [ $? -ne 0 ]; then
    echo "Failed in training!"
    exit 1
fi
exit 0
@ -1,144 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from visualdl import LogWriter
import os
from tqdm import tqdm
from pathlib import Path
from collections import OrderedDict
import argparse
from ruamel import yaml
from pprint import pprint
import paddle.fluid as fluid
import paddle.fluid.dygraph as dg
import paddle.fluid.layers as layers
from data import LJSpeechLoader
from parakeet.models.transformer_tts import Vocoder
from parakeet.utils import io


def add_config_options_to_parser(parser):
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument("--use_gpu", type=int, default=0, help="device to use")
    parser.add_argument("--data", type=str, help="path of LJspeech dataset")

    g = parser.add_mutually_exclusive_group()
    g.add_argument("--checkpoint", type=str, help="checkpoint to resume from")
    g.add_argument(
        "--iteration",
        type=int,
        help="the iteration of the checkpoint to load from output directory")

    parser.add_argument(
        "--output",
        type=str,
        default="vocoder",
        help="path to save experiment results")


def main(args):
    local_rank = dg.parallel.Env().local_rank
    nranks = dg.parallel.Env().nranks
    parallel = nranks > 1

    with open(args.config) as f:
        cfg = yaml.load(f, Loader=yaml.Loader)

    global_step = 0
    place = fluid.CUDAPlace(local_rank) if args.use_gpu else fluid.CPUPlace()

    if not os.path.exists(args.output):
        os.mkdir(args.output)

    writer = LogWriter(os.path.join(args.output,
                                    'log')) if local_rank == 0 else None

    fluid.enable_dygraph(place)
    model = Vocoder(cfg['train']['batch_size'], cfg['vocoder']['hidden_size'],
                    cfg['audio']['num_mels'], cfg['audio']['n_fft'])

    model.train()
    optimizer = fluid.optimizer.AdamOptimizer(
        learning_rate=dg.NoamDecay(1 / (cfg['train']['warm_up_step'] *
                                        (cfg['train']['learning_rate']**2)),
                                   cfg['train']['warm_up_step']),
        parameter_list=model.parameters(),
        grad_clip=fluid.clip.GradientClipByGlobalNorm(cfg['train'][
            'grad_clip_thresh']))

    # Load parameters.
    global_step = io.load_parameters(
        model=model,
        optimizer=optimizer,
        checkpoint_dir=os.path.join(args.output, 'checkpoints'),
        iteration=args.iteration,
        checkpoint_path=args.checkpoint)
    print("Rank {}: checkpoint loaded.".format(local_rank))

    if parallel:
        strategy = dg.parallel.prepare_context()
        model = fluid.dygraph.parallel.DataParallel(model, strategy)

    reader = LJSpeechLoader(
        cfg['audio'],
        place,
        args.data,
        cfg['train']['batch_size'],
        nranks,
        local_rank,
        is_vocoder=True).reader()

    for epoch in range(cfg['train']['max_iteration']):
        pbar = tqdm(reader)
        for i, data in enumerate(pbar):
            pbar.set_description('Processing at epoch %d' % epoch)
            mel, mag = data
            mag = dg.to_variable(mag.numpy())
            mel = dg.to_variable(mel.numpy())
            global_step += 1

            mag_pred = model(mel)
            loss = layers.mean(
                layers.abs(layers.elementwise_sub(mag_pred, mag)))

            if parallel:
                loss = model.scale_loss(loss)
                loss.backward()
                model.apply_collective_grads()
            else:
                loss.backward()
            optimizer.minimize(loss)
            model.clear_gradients()

            if local_rank == 0:
                writer.add_scalar('training_loss/loss', loss.numpy(),
                                  global_step)

            # save checkpoint
            if local_rank == 0 and global_step % cfg['train'][
                    'checkpoint_interval'] == 0:
                io.save_parameters(
                    os.path.join(args.output, 'checkpoints'), global_step,
                    model, optimizer)

    if local_rank == 0:
        writer.close()


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Train vocoder model")
    add_config_options_to_parser(parser)
    args = parser.parse_args()
    # Print the whole config setting.
    pprint(vars(args))
    main(args)
@ -1,16 +0,0 @@

# train model
CUDA_VISIBLE_DEVICES=0 \
python -u train_vocoder.py \
    --use_gpu=1 \
    --data='../../dataset/LJSpeech-1.1' \
    --output='./vocoder' \
    --config='configs/ljspeech.yaml'
    #--checkpoint='./checkpoint/vocoder/step-100000' \

if [ $? -ne 0 ]; then
    echo "Failed in training!"
    exit 1
fi
exit 0
@ -1,122 +0,0 @@
# WaveFlow

PaddlePaddle dynamic graph implementation of [WaveFlow: A Compact Flow-based Model for Raw Audio](https://arxiv.org/abs/1912.01219).

- WaveFlow can synthesize 22.05 kHz high-fidelity speech around 40x faster than real-time on an Nvidia V100 GPU without engineered inference kernels, which is faster than [WaveGlow](https://github.com/NVIDIA/waveglow) and several orders of magnitude faster than WaveNet.
- WaveFlow is a small-footprint flow-based model for raw audio. It has only 5.9M parameters, which is 15x smaller than WaveGlow (87.9M).
- WaveFlow is directly trained with maximum likelihood without the probability density distillation and auxiliary losses used in Parallel WaveNet and ClariNet, which simplifies the training pipeline and reduces the cost of development.

## Project Structure

```text
├── configs # yaml configuration files of preset model hyperparameters
├── benchmark.py # benchmark code to test the speed of batched speech synthesis
├── synthesis.py # script for speech synthesis
├── train.py # script for model training
├── utils.py # helper functions for e.g., model checkpointing
├── data.py # dataset and dataloader settings for LJSpeech
├── waveflow.py # WaveFlow model high level APIs
└── parakeet/models/waveflow/waveflow_modules.py # WaveFlow model implementation
```

## Usage

There are many hyperparameters to be tuned depending on the specification of the model and the dataset you are working on.
We provide `waveflow_ljspeech.yaml` as a hyperparameter set that works well on the LJSpeech dataset.
Note that we use a [convolutional queue](https://arxiv.org/abs/1611.09482) at audio synthesis time to cache intermediate hidden states, which speeds up autoregressive inference over the height dimension. The current implementation only supports a height dimension of 8 or 16, i.e., with no dilation on the height dimension. Therefore, the `n_group` key in the yaml config file can only be set to 8 or 16.
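The idea behind the convolutional queue is a fixed-length FIFO cache: each layer keeps its last `kernel_size - 1` inputs, so every autoregressive step convolves only against the cached window instead of recomputing the whole receptive field. A minimal illustrative sketch (not the repository's actual implementation) for a 1D causal convolution:

```python
from collections import deque

import numpy as np


class ConvQueue:
    """FIFO cache of the last (kernel_size - 1) inputs of a causal 1D conv.

    Each autoregressive step pushes one new sample and convolves only
    against the cached window, instead of re-running the convolution
    over the whole history.
    """

    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=np.float64)
        # Left-pad with zeros, as a causal convolution would.
        self.buffer = deque([0.0] * (len(kernel) - 1), maxlen=len(kernel) - 1)

    def step(self, x):
        window = np.array(list(self.buffer) + [x])
        y = float(np.dot(self.kernel, window))
        self.buffer.append(x)
        return y


# Incremental inference matches a full causal convolution over the sequence.
kernel = [0.25, 0.5, 0.25]
samples = [1.0, 2.0, 3.0, 4.0]

q = ConvQueue(kernel)
incremental = [q.step(x) for x in samples]

padded = np.concatenate([np.zeros(len(kernel) - 1), samples])
full = [float(np.dot(kernel, padded[i:i + len(kernel)]))
        for i in range(len(samples))]

assert np.allclose(incremental, full)
```

The real model applies the same caching per flow and per layer over the height dimension, which is why the absence of height dilation (hence `n_group` of 8 or 16) matters.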
Also note that `train.py`, `synthesis.py`, and `benchmark.py` all accept a `--config` parameter. To ensure consistency, you should use the same config yaml file for training, synthesis, and benchmarking. You can also override these preset hyperparameters on the command line by adding parameters after `--config`.
For example, `--config=${yaml} --batch_size=8` overrides the corresponding hyperparameters in the `${yaml}` config file. For more details about these hyperparameters, check `utils.add_config_options_to_parser`.
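This kind of merge boils down to loading the YAML and filling in every field the user did not already set on the command line. A minimal sketch of the pattern (the exact behavior of the repo's `io.add_yaml_config_to_args` may differ, and the field names below are hypothetical):

```python
import argparse

import yaml  # third-party PyYAML; the repo itself uses ruamel.yaml


def merge_yaml_into_args(args, yaml_text):
    """Fill in args fields from a YAML config, keeping any value the
    user already set on the command line (i.e. any non-None field)."""
    config = yaml.safe_load(yaml_text)
    for key, value in config.items():
        if getattr(args, key, None) is None:
            setattr(args, key, value)
    return args


parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=None)
parser.add_argument("--sample_rate", type=int, default=None)

# Simulate: the user overrides batch_size; sample_rate comes from the YAML.
args = parser.parse_args(["--batch_size", "8"])
args = merge_yaml_into_args(args, "batch_size: 4\nsample_rate: 22050\n")

assert args.batch_size == 8       # command line wins
assert args.sample_rate == 22050  # filled from YAML
```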
Additionally, you need to specify some additional parameters for `train.py`, `synthesis.py`, and `benchmark.py`; the details can be found in `train.add_options_to_parser`, `synthesis.add_options_to_parser`, and `benchmark.add_options_to_parser`, respectively.

### Dataset

Download and unzip [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```

In this example, assume that the path of the unzipped LJSpeech dataset is `./data/LJSpeech-1.1`.

### Train on single GPU

```bash
export CUDA_VISIBLE_DEVICES=0
python -u train.py \
    --config=./configs/waveflow_ljspeech.yaml \
    --root=./data/LJSpeech-1.1 \
    --name=${ModelName} --batch_size=4 \
    --use_gpu=true
```

#### Save and Load checkpoints

By default, the model saves its parameters as checkpoints in `./runs/waveflow/${ModelName}/checkpoint/` every 10000 iterations, where `${ModelName}` is the name of one single experiment and can be whatever you like.
A saved checkpoint consists of `step-${iteration_number}.pdparams` for model parameters and `step-${iteration_number}.pdopt` for optimizer state.

There are three ways to load a checkpoint and resume training (suppose you want to load a 500000-iteration checkpoint):
1. Use `--checkpoint=./runs/waveflow/${ModelName}/checkpoint/step-500000` to provide a specific path to load. Note that you only need to provide the base name of the parameter file, `step-500000`; the extensions `.pdparams` and `.pdopt` are not needed.
2. Use `--iteration=500000`.
3. If you specify neither `--checkpoint` nor `--iteration`, the model automatically loads the latest checkpoint in `./runs/waveflow/${ModelName}/checkpoint`.
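The "latest checkpoint" lookup in option 3 can be sketched as follows (a simplified illustration, not the repository's `utils.py`): scan the checkpoint directory for `step-*.pdparams` files and pick the highest iteration number.

```python
import os
import re
import tempfile


def latest_checkpoint(checkpoint_dir):
    """Return the base name (e.g. 'step-500000') of the checkpoint with
    the highest iteration number, or None if none exists."""
    pattern = re.compile(r"^step-(\d+)\.pdparams$")
    iterations = []
    for name in os.listdir(checkpoint_dir):
        m = pattern.match(name)
        if m:
            iterations.append(int(m.group(1)))
    if not iterations:
        return None
    return "step-{}".format(max(iterations))


# Demo on a scratch directory with a few fake checkpoint files.
with tempfile.TemporaryDirectory() as d:
    for step in (10000, 500000, 20000):
        open(os.path.join(d, "step-{}.pdparams".format(step)), "w").close()
        open(os.path.join(d, "step-{}.pdopt".format(step)), "w").close()
    assert latest_checkpoint(d) == "step-500000"
```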
### Train on multiple GPUs

```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
python -u -m paddle.distributed.launch train.py \
    --config=./configs/waveflow_ljspeech.yaml \
    --root=./data/LJSpeech-1.1 \
    --name=${ModelName} --use_gpu=true
```

Use `export CUDA_VISIBLE_DEVICES=0,1,2,3` to set which GPUs are visible. The `paddle.distributed.launch` module will then use these visible GPUs to do data-parallel training in multiprocessing mode.

### Monitor with TensorBoard

By default, the logs are saved in `./runs/waveflow/${ModelName}/logs/`. You can monitor them using TensorBoard.

```bash
tensorboard --logdir=${log_dir} --port=8888
```

### Synthesize from a checkpoint

Check the [Save and load checkpoint](#save-and-load-checkpoints) section on how to load a specific checkpoint.
The following example will automatically load the latest checkpoint:

```bash
export CUDA_VISIBLE_DEVICES=0
python -u synthesis.py \
    --config=./configs/waveflow_ljspeech.yaml \
    --root=./data/LJSpeech-1.1 \
    --name=${ModelName} --use_gpu=true \
    --output=./syn_audios \
    --sample=${SAMPLE} \
    --sigma=1.0
```

In this example, `--output` specifies where to save the synthesized audios and `--sample` (<16) specifies which sample in the valid dataset (a split from the whole LJSpeech dataset that by default contains the first 16 audio samples) to synthesize, based on the mel-spectrograms computed from the ground-truth audio of that sample; e.g., `--sample=0` means synthesizing the first audio in the valid dataset.

### Benchmarking

Use the following example to benchmark the speed of batched speech synthesis, which reports how many times faster than real-time the synthesis runs:

```bash
export CUDA_VISIBLE_DEVICES=0
python -u benchmark.py \
    --config=./configs/waveflow_ljspeech.yaml \
    --root=./data/LJSpeech-1.1 \
    --name=${ModelName} --use_gpu=true
```
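"Times faster than real-time" is simply the ratio of generated audio duration to wall-clock synthesis time. A sketch of that computation (the numbers below are hypothetical, not this benchmark's output):

```python
def realtime_speedup(num_samples, sample_rate, synthesis_seconds):
    """Ratio of audio duration to wall-clock synthesis time.

    A value of 40.0 means synthesis runs 40x faster than real-time.
    """
    audio_seconds = num_samples / sample_rate
    return audio_seconds / synthesis_seconds


# e.g. 10 s of 22.05 kHz audio generated in 0.25 s of wall-clock time.
speedup = realtime_speedup(220500, 22050, 0.25)
assert speedup == 40.0
```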
### Low-precision inference

This model supports float16 low-precision inference. Append the argument

```bash
--use_fp16=true
```

to the synthesis or benchmarking command to enable fast low-precision inference.
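As a reminder of what float16 trades away (a generic illustration, unrelated to the repository's kernels): half the memory per value, but only about three decimal digits of precision.

```python
import numpy as np

x = np.float32(1.0001)  # representable (approximately) in float32
y = np.float16(x)       # rounds to the nearest float16, which is 1.0 here

assert x.nbytes == 4 and y.nbytes == 2  # half the storage per value
assert float(y) != float(x)             # some precision is lost
```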

@ -1,103 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import random
from pprint import pprint

import argparse
import numpy as np
import paddle.fluid.dygraph as dg
from paddle import fluid

import utils
from parakeet.utils import io
from waveflow import WaveFlow


def add_options_to_parser(parser):
    parser.add_argument(
        '--model',
        type=str,
        default='waveflow',
        help="general name of the model")
    parser.add_argument(
        '--name', type=str, help="specific name of the training model")
    parser.add_argument(
        '--root', type=str, help="root path of the LJSpeech dataset")

    parser.add_argument(
        '--use_gpu',
        type=utils.str2bool,
        default=True,
        help="option to use gpu training")
    parser.add_argument(
        '--use_fp16',
        type=utils.str2bool,
        default=True,
        help="option to use fp16 for inference")

    parser.add_argument(
        '--iteration',
        type=int,
        default=None,
        help=("which iteration of checkpoint to load, "
              "default to load the latest checkpoint"))
    parser.add_argument(
        '--checkpoint',
        type=str,
        default=None,
        help="path of the checkpoint to load")


def benchmark(config):
    pprint(vars(config))

    # Get checkpoint directory path.
    run_dir = os.path.join("runs", config.model, config.name)
    checkpoint_dir = os.path.join(run_dir, "checkpoint")

    # Configure device.
    place = fluid.CUDAPlace(0) if config.use_gpu else fluid.CPUPlace()

    with dg.guard(place):
        # Fix random seed.
        seed = config.seed
        random.seed(seed)
        np.random.seed(seed)
        fluid.default_startup_program().random_seed = seed
        fluid.default_main_program().random_seed = seed
        print("Random Seed: ", seed)

        # Build model.
        model = WaveFlow(config, checkpoint_dir)
        model.build(training=False)

        # Run model inference.
        model.benchmark()


if __name__ == "__main__":
    # Create parser.
    parser = argparse.ArgumentParser(
        description="Benchmark speech synthesis speed of the WaveFlow model")
    add_options_to_parser(parser)
    utils.add_config_options_to_parser(parser)

    # Parse arguments from both command line and yaml config file.
    # For conflicting updates to the same field,
    # the preceding update will be overwritten by the following one.
    config = parser.parse_args()
    config = io.add_yaml_config_to_args(config)
    benchmark(config)
@ -1,24 +0,0 @@
valid_size: 16
segment_length: 16000
sample_rate: 22050
fft_window_shift: 256
fft_window_size: 1024
fft_size: 1024
mel_bands: 80
mel_fmin: 0.0
mel_fmax: 8000.0

seed: 1234
learning_rate: 0.0002
batch_size: 8
test_every: 2000
save_every: 10000
max_iterations: 3000000

sigma: 1.0
n_flows: 8
n_group: 16
n_layers: 8
n_channels: 64
kernel_h: 3
kernel_w: 3
@ -1,144 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import random

import librosa
import numpy as np
from paddle import fluid

from parakeet.datasets import ljspeech
from parakeet.data import SpecBatcher, WavBatcher
from parakeet.data import DataCargo, DatasetMixin
from parakeet.data import DistributedSampler, BatchSampler
from scipy.io.wavfile import read


class Dataset(ljspeech.LJSpeech):
    def __init__(self, config):
        super(Dataset, self).__init__(config.root)
        self.config = config

    def _get_example(self, metadatum):
        fname, _, _ = metadatum
        wav_path = os.path.join(self.root, "wavs", fname + ".wav")

        audio, loaded_sr = librosa.load(wav_path, sr=self.config.sample_rate)

        return audio


class Subset(DatasetMixin):
    def __init__(self, dataset, indices, valid):
        self.dataset = dataset
        self.indices = indices
        self.valid = valid
        self.config = dataset.config

    def get_mel(self, audio):
        spectrogram = librosa.core.stft(
            audio,
            n_fft=self.config.fft_size,
            hop_length=self.config.fft_window_shift,
            win_length=self.config.fft_window_size)
        spectrogram_magnitude = np.abs(spectrogram)

        # mel_filter_bank shape: [n_mels, 1 + n_fft/2]
        mel_filter_bank = librosa.filters.mel(sr=self.config.sample_rate,
                                              n_fft=self.config.fft_size,
                                              n_mels=self.config.mel_bands,
                                              fmin=self.config.mel_fmin,
                                              fmax=self.config.mel_fmax)
        # mel shape: [n_mels, num_frames]
        mel = np.dot(mel_filter_bank, spectrogram_magnitude)

        # Normalize mel.
        clip_val = 1e-5
        ref_constant = 1
        mel = np.log(np.clip(mel, a_min=clip_val, a_max=None) * ref_constant)

        return mel

    def __getitem__(self, idx):
        audio = self.dataset[self.indices[idx]]
        segment_length = self.config.segment_length

        if self.valid:
            # whole audio for valid set
            pass
        else:
            # Randomly crop segment_length from audios in the training set.
            # audio shape: [len]
            if audio.shape[0] >= segment_length:
                max_audio_start = audio.shape[0] - segment_length
                audio_start = random.randint(0, max_audio_start)
                audio = audio[audio_start:(audio_start + segment_length)]
            else:
                audio = np.pad(audio, (0, segment_length - audio.shape[0]),
                               mode='constant',
                               constant_values=0)

        mel = self.get_mel(audio)

        return audio, mel

    def _batch_examples(self, batch):
        audios = [sample[0] for sample in batch]
        mels = [sample[1] for sample in batch]

        audios = WavBatcher(pad_value=0.0)(audios)
        mels = SpecBatcher(pad_value=0.0)(mels)

        return audios, mels

    def __len__(self):
        return len(self.indices)


class LJSpeech:
    def __init__(self, config, nranks, rank):
        place = fluid.CUDAPlace(rank) if config.use_gpu else fluid.CPUPlace()

        # Whole LJSpeech dataset.
        ds = Dataset(config)

        # Split into train and valid dataset.
        indices = list(range(len(ds)))
        train_indices = indices[config.valid_size:]
        valid_indices = indices[:config.valid_size]
        random.shuffle(train_indices)

        # Train dataset.
        trainset = Subset(ds, train_indices, valid=False)
        sampler = DistributedSampler(len(trainset), nranks, rank)
        total_bs = config.batch_size
        assert total_bs % nranks == 0
        train_sampler = BatchSampler(
            sampler, total_bs // nranks, drop_last=True)
        trainloader = DataCargo(trainset, batch_sampler=train_sampler)

        trainreader = fluid.io.PyReader(capacity=50, return_list=True)
        trainreader.decorate_batch_generator(trainloader, place)
        self.trainloader = (data for _ in iter(int, 1)
                            for data in trainreader())

        # Valid dataset.
        validset = Subset(ds, valid_indices, valid=True)
        # Currently only support batch_size = 1 for valid loader.
        validloader = DataCargo(validset, batch_size=1, shuffle=False)

        validreader = fluid.io.PyReader(capacity=20, return_list=True)
        validreader.decorate_batch_generator(validloader, place)
|
||||
self.validloader = validreader
|
|
@ -1,113 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import random
from pprint import pprint

import argparse
import numpy as np
import paddle.fluid.dygraph as dg
from paddle import fluid

from parakeet.utils import io
import utils
from waveflow import WaveFlow


def add_options_to_parser(parser):
    parser.add_argument(
        '--model',
        type=str,
        default='waveflow',
        help="general name of the model")
    parser.add_argument(
        '--name', type=str, help="specific name of the training model")
    parser.add_argument(
        '--root', type=str, help="root path of the LJSpeech dataset")

    parser.add_argument(
        '--use_gpu',
        type=utils.str2bool,
        default=True,
        help="option to use gpu training")
    parser.add_argument(
        '--use_fp16',
        type=utils.str2bool,
        default=True,
        help="option to use fp16 for inference")

    parser.add_argument(
        '--iteration',
        type=int,
        default=None,
        help=("which iteration of checkpoint to load, "
              "default to load the latest checkpoint"))
    parser.add_argument(
        '--checkpoint',
        type=str,
        default=None,
        help="path of the checkpoint to load")

    parser.add_argument(
        '--output',
        type=str,
        default="./syn_audios",
        help="path to write synthesized audio files")
    parser.add_argument(
        '--sample',
        type=int,
        default=None,
        help="which of the valid samples to synthesize audio")


def synthesize(config):
    pprint(vars(config))

    # Get checkpoint directory path.
    run_dir = os.path.join("runs", config.model, config.name)
    checkpoint_dir = os.path.join(run_dir, "checkpoint")

    # Configure device.
    place = fluid.CUDAPlace(0) if config.use_gpu else fluid.CPUPlace()

    with dg.guard(place):
        # Fix random seed.
        seed = config.seed
        random.seed(seed)
        np.random.seed(seed)
        fluid.default_startup_program().random_seed = seed
        fluid.default_main_program().random_seed = seed
        print("Random Seed: ", seed)

        # Build model.
        model = WaveFlow(config, checkpoint_dir)
        iteration = model.build(training=False)
        # Run model inference.
        model.infer(iteration)


if __name__ == "__main__":
    # Create parser.
    parser = argparse.ArgumentParser(
        description="Synthesize audio using WaveFlow model")
    add_options_to_parser(parser)
    utils.add_config_options_to_parser(parser)

    # Parse arguments from both the command line and the yaml config file.
    # For conflicting updates to the same field,
    # the preceding update will be overwritten by the following one.
    config = parser.parse_args()
    config = io.add_yaml_config_to_args(config)
    synthesize(config)
@ -1,134 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import random
import subprocess
import time
from pprint import pprint

import argparse
import numpy as np
import paddle.fluid.dygraph as dg
from paddle import fluid
from visualdl import LogWriter

import utils
from parakeet.utils import io
from waveflow import WaveFlow


def add_options_to_parser(parser):
    parser.add_argument(
        '--model',
        type=str,
        default='waveflow',
        help="general name of the model")
    parser.add_argument(
        '--name', type=str, help="specific name of the training model")
    parser.add_argument(
        '--root', type=str, help="root path of the LJSpeech dataset")

    parser.add_argument(
        '--use_gpu',
        type=utils.str2bool,
        default=True,
        help="option to use gpu training")

    parser.add_argument(
        '--iteration',
        type=int,
        default=None,
        help=("which iteration of checkpoint to load, "
              "default to load the latest checkpoint"))
    parser.add_argument(
        '--checkpoint',
        type=str,
        default=None,
        help="path of the checkpoint to load")


def train(config):
    use_gpu = config.use_gpu

    # Get the rank of the current training process.
    rank = dg.parallel.Env().local_rank
    nranks = dg.parallel.Env().nranks
    parallel = nranks > 1

    if rank == 0:
        # Print the whole config setting.
        pprint(vars(config))

    # Make checkpoint directory.
    run_dir = os.path.join("runs", config.model, config.name)
    checkpoint_dir = os.path.join(run_dir, "checkpoint")
    if not os.path.exists(checkpoint_dir):
        os.makedirs(checkpoint_dir)

    # Create VisualDL logger.
    vdl = LogWriter(os.path.join(run_dir, "logs")) \
        if rank == 0 else None

    # Configure device.
    place = fluid.CUDAPlace(rank) if use_gpu else fluid.CPUPlace()

    with dg.guard(place):
        # Fix random seed.
        seed = config.seed
        random.seed(seed)
        np.random.seed(seed)
        fluid.default_startup_program().random_seed = seed
        fluid.default_main_program().random_seed = seed
        print("Random Seed: ", seed)

        # Build model.
        model = WaveFlow(config, checkpoint_dir, parallel, rank, nranks, vdl)
        iteration = model.build()

        while iteration < config.max_iterations:
            # Run one single training step.
            model.train_step(iteration)

            iteration += 1

            if iteration % config.test_every == 0:
                # Run validation step.
                model.valid_step(iteration)

            if rank == 0 and iteration % config.save_every == 0:
                # Save parameters.
                model.save(iteration)

    # Close the VisualDL logger.
    if rank == 0:
        vdl.close()


if __name__ == "__main__":
    # Create parser.
    parser = argparse.ArgumentParser(description="Train WaveFlow model")
    add_options_to_parser(parser)
    utils.add_config_options_to_parser(parser)

    # Parse arguments from both the command line and the yaml config file.
    # For conflicting updates to the same field,
    # the preceding update will be overwritten by the following one.
    config = parser.parse_args()
    config = io.add_yaml_config_to_args(config)
    # Force fp32 in model training.
    vars(config)["use_fp16"] = False
    train(config)
@ -1,90 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse


def str2bool(v):
    return v.lower() in ("true", "t", "1")


def add_config_options_to_parser(parser):
    parser.add_argument(
        '--valid_size', type=int, help="size of the valid dataset")
    parser.add_argument(
        '--segment_length',
        type=int,
        help="the length of audio clip for training")
    parser.add_argument(
        '--sample_rate', type=int, help="sampling rate of audio data file")
    parser.add_argument(
        '--fft_window_shift',
        type=int,
        help="the shift of fft window for each frame")
    parser.add_argument(
        '--fft_window_size',
        type=int,
        help="the size of fft window for each frame")
    parser.add_argument(
        '--fft_size', type=int, help="the size of fft filter on each frame")
    parser.add_argument(
        '--mel_bands',
        type=int,
        help="the number of mel bands when calculating mel spectrograms")
    parser.add_argument(
        '--mel_fmin',
        type=float,
        help="lowest frequency in calculating mel spectrograms")
    parser.add_argument(
        '--mel_fmax',
        type=float,
        help="highest frequency in calculating mel spectrograms")

    parser.add_argument(
        '--seed', type=int, help="seed of random initialization for the model")
    parser.add_argument('--learning_rate', type=float)
    parser.add_argument(
        '--batch_size', type=int, help="batch size for training")
    parser.add_argument(
        '--test_every', type=int, help="test interval during training")
    parser.add_argument(
        '--save_every',
        type=int,
        help="checkpointing interval during training")
    parser.add_argument(
        '--max_iterations', type=int, help="maximum training iterations")

    parser.add_argument(
        '--sigma',
        type=float,
        help="standard deviation of the latent Gaussian variable")
    parser.add_argument('--n_flows', type=int, help="number of flows")
    parser.add_argument(
        '--n_group',
        type=int,
        help="number of adjacent audio samples to squeeze into one column")
    parser.add_argument(
        '--n_layers',
        type=int,
        help="number of conv2d layers in one wavenet-like flow architecture")
    parser.add_argument(
        '--n_channels', type=int, help="number of residual channels in flow")
    parser.add_argument(
        '--kernel_h',
        type=int,
        help="height of the kernel in the conv2d layer")
    parser.add_argument(
        '--kernel_w', type=int, help="width of the kernel in the conv2d layer")

    parser.add_argument('--config', type=str, help="Path to the config file.")
@ -1,292 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import itertools
import os
import time

import numpy as np
import paddle.fluid.dygraph as dg
from paddle import fluid
from scipy.io.wavfile import write

from parakeet.utils import io
from parakeet.modules import weight_norm
from parakeet.models.waveflow import WaveFlowLoss, WaveFlowModule
from data import LJSpeech
import utils


class WaveFlow():
    """Wrapper class of the WaveFlow model that supports multiple APIs.

    This module provides APIs for model building, training, validation,
    inference, benchmarking, and saving.

    Args:
        config (obj): config info.
        checkpoint_dir (str): path for checkpointing.
        parallel (bool, optional): whether to use multiple GPUs for training.
            Defaults to False.
        rank (int, optional): the rank of the process in a multi-process
            scenario. Defaults to 0.
        nranks (int, optional): the total number of processes. Defaults to 1.
        vdl_logger (obj, optional): logger to visualize metrics.
            Defaults to None.

    Returns:
        WaveFlow
    """

    def __init__(self,
                 config,
                 checkpoint_dir,
                 parallel=False,
                 rank=0,
                 nranks=1,
                 vdl_logger=None):
        self.config = config
        self.checkpoint_dir = checkpoint_dir
        self.parallel = parallel
        self.rank = rank
        self.nranks = nranks
        self.vdl_logger = vdl_logger
        self.dtype = "float16" if config.use_fp16 else "float32"

    def build(self, training=True):
        """Initialize the model.

        Args:
            training (bool, optional): Whether the model is built for training or inference.
                Defaults to True.

        Returns:
            int: the iteration of the loaded checkpoint.
        """
        config = self.config
        dataset = LJSpeech(config, self.nranks, self.rank)
        self.trainloader = dataset.trainloader
        self.validloader = dataset.validloader

        waveflow = WaveFlowModule(config)

        if training:
            optimizer = fluid.optimizer.AdamOptimizer(
                learning_rate=config.learning_rate,
                parameter_list=waveflow.parameters())

            # Load parameters.
            iteration = io.load_parameters(
                model=waveflow,
                optimizer=optimizer,
                checkpoint_dir=self.checkpoint_dir,
                iteration=config.iteration,
                checkpoint_path=config.checkpoint)
            print("Rank {}: checkpoint loaded.".format(self.rank))

            # Data parallelism.
            if self.parallel:
                strategy = dg.parallel.prepare_context()
                waveflow = dg.parallel.DataParallel(waveflow, strategy)

            self.waveflow = waveflow
            self.optimizer = optimizer
            self.criterion = WaveFlowLoss(config.sigma)

        else:
            # Load parameters.
            iteration = io.load_parameters(
                model=waveflow,
                checkpoint_dir=self.checkpoint_dir,
                iteration=config.iteration,
                checkpoint_path=config.checkpoint)
            print("Rank {}: checkpoint loaded.".format(self.rank))

            for layer in waveflow.sublayers():
                if isinstance(layer, weight_norm.WeightNormWrapper):
                    layer.remove_weight_norm()

            self.waveflow = waveflow

        return iteration

    def train_step(self, iteration):
        """Train the model for one step.

        Args:
            iteration (int): current iteration number.

        Returns:
            None
        """
        self.waveflow.train()

        start_time = time.time()
        audios, mels = next(self.trainloader)
        load_time = time.time()

        outputs = self.waveflow(audios, mels)
        loss = self.criterion(outputs)

        if self.parallel:
            # loss = loss / num_trainers
            loss = self.waveflow.scale_loss(loss)
            loss.backward()
            self.waveflow.apply_collective_grads()
        else:
            loss.backward()

        self.optimizer.minimize(
            loss, parameter_list=self.waveflow.parameters())
        self.waveflow.clear_gradients()

        graph_time = time.time()

        if self.rank == 0:
            loss_val = float(loss.numpy()) * self.nranks
            log = "Rank: {} Step: {:^8d} Loss: {:<8.3f} " \
                  "Time: {:.3f}/{:.3f}".format(
                      self.rank, iteration, loss_val,
                      load_time - start_time, graph_time - load_time)
            print(log)

            vdl_writer = self.vdl_logger
            vdl_writer.add_scalar("Train-Loss-Rank-0", loss_val, iteration)

    @dg.no_grad
    def valid_step(self, iteration):
        """Run the model on the validation dataset.

        Args:
            iteration (int): current iteration number.

        Returns:
            None
        """
        self.waveflow.eval()
        vdl_writer = self.vdl_logger

        total_loss = []
        sample_audios = []
        start_time = time.time()

        for i, batch in enumerate(self.validloader()):
            audios, mels = batch
            valid_outputs = self.waveflow(audios, mels)
            valid_z, valid_log_s_list = valid_outputs

            # Visualize latent z and scale log_s.
            if self.rank == 0 and i == 0:
                vdl_writer.add_histogram("Valid-Latent_z", valid_z.numpy(),
                                         iteration)
                for j, valid_log_s in enumerate(valid_log_s_list):
                    hist_name = "Valid-{}th-Flow-Log_s".format(j)
                    vdl_writer.add_histogram(hist_name, valid_log_s.numpy(),
                                             iteration)

            valid_loss = self.criterion(valid_outputs)
            total_loss.append(float(valid_loss.numpy()))

        total_time = time.time() - start_time
        if self.rank == 0:
            loss_val = np.mean(total_loss)
            log = "Test | Rank: {} AvgLoss: {:<8.3f} Time {:<8.3f}".format(
                self.rank, loss_val, total_time)
            print(log)
            vdl_writer.add_scalar("Valid-Avg-Loss", loss_val, iteration)

    @dg.no_grad
    def infer(self, iteration):
        """Run the model to synthesize audios.

        Args:
            iteration (int): iteration number of the loaded checkpoint.

        Returns:
            None
        """
        self.waveflow.eval()

        config = self.config
        sample = config.sample

        output = "{}/{}/iter-{}".format(config.output, config.name, iteration)
        if not os.path.exists(output):
            os.makedirs(output)

        mels_list = [mels for _, mels in self.validloader()]
        if sample is not None:
            mels_list = [mels_list[sample]]
        else:
            sample = 0

        for idx, mel in enumerate(mels_list):
            abs_idx = sample + idx
            filename = "{}/valid_{}.wav".format(output, abs_idx)
            print("Synthesize sample {}, save as {}".format(abs_idx, filename))

            start_time = time.time()
            audio = self.waveflow.synthesize(mel, sigma=self.config.sigma)
            syn_time = time.time() - start_time

            audio = audio[0]
            audio_time = audio.shape[0] / self.config.sample_rate
            print("audio time {:.4f}, synthesis time {:.4f}".format(audio_time,
                                                                    syn_time))

            # Denormalize audio from [-1, 1] to the int16 range [-32768, 32767].
            audio = audio.numpy().astype("float32") * 32768.0
            audio = audio.astype('int16')
            write(filename, config.sample_rate, audio)

    @dg.no_grad
    def benchmark(self):
        """Run the model to benchmark synthesis speed.

        Args:
            None

        Returns:
            None
        """
        self.waveflow.eval()

        mels_list = [mels for _, mels in self.validloader()]
        mel = fluid.layers.concat(mels_list, axis=2)
        mel = mel[:, :, :864]
        batch_size = 8
        mel = fluid.layers.expand(mel, [batch_size, 1, 1])

        for i in range(10):
            start_time = time.time()
            audio = self.waveflow.synthesize(mel, sigma=self.config.sigma)
            print("audio.shape = ", audio.shape)
            syn_time = time.time() - start_time

            audio_time = audio.shape[1] * batch_size / self.config.sample_rate
            print("audio time {:.4f}, synthesis time {:.4f}".format(audio_time,
                                                                    syn_time))
            print("{} X real-time".format(audio_time / syn_time))

    def save(self, iteration):
        """Save model checkpoint.

        Args:
            iteration (int): iteration number of the model to be saved.

        Returns:
            None
        """
        io.save_parameters(self.checkpoint_dir, iteration, self.waveflow,
                           self.optimizer)
@ -1,144 +0,0 @@
# WaveNet

PaddlePaddle dynamic graph implementation of WaveNet, a convolutional network based vocoder. WaveNet was originally proposed in [WaveNet: A Generative Model for Raw Audio](https://arxiv.org/abs/1609.03499). However, in this experiment, the implementation follows the teacher model in [ClariNet: Parallel Wave Generation in End-to-End Text-to-Speech](https://arxiv.org/abs/1807.07281).


## Dataset

We experiment with the LJSpeech dataset. Download and unzip [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

```bash
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar xjvf LJSpeech-1.1.tar.bz2
```

## Project Structure

```text
├── data.py          data processing
├── configs/         (example) configuration files
├── synthesis.py     script to synthesize waveform from mel spectrogram
├── train.py         script to train a model
└── utils.py         utility functions
```

## Saving & Loading

`train.py` and `synthesis.py` have 3 arguments in common: `--checkpoint`, `--iteration` and `output`.

1. `output` is the directory for saving results.
During training, checkpoints are saved in `checkpoints/` in `output`, and the tensorboard log is saved in `log/` in `output`. Other possible outputs are saved in `states/` in `output`.
During synthesis, audio files and other possible outputs are saved in `synthesis/` in `output`.
So after training and synthesizing with the same output directory, the file structure of the output directory looks like this:

```text
├── checkpoints/      # checkpoint directory (including *.pdparams, *.pdopt and a text file `checkpoint` that records the latest checkpoint)
├── states/           # audio files generated at validation and other possible outputs
├── log/              # tensorboard log
└── synthesis/        # synthesized audio files and other possible outputs
```

2. `--checkpoint` and `--iteration` are used to load an existing checkpoint. Loading an existing checkpoint follows these rules:
If `--checkpoint` is provided, the checkpoint specified by `--checkpoint` is loaded.
If `--checkpoint` is not provided, we try to load the checkpoint specified by `--iteration` from the checkpoint directory. If `--iteration` is not provided either, we try to load the latest checkpoint from the checkpoint directory.

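The loading rules above boil down to a simple precedence check. The sketch below only illustrates that precedence; `resolve_checkpoint`, its arguments, and the `step-<iteration>` naming are assumptions for this sketch, not helpers that exist in this project:

```python
def resolve_checkpoint(checkpoint_dir, available_iterations,
                       checkpoint=None, iteration=None):
    """Hypothetical helper mirroring the checkpoint loading precedence."""
    # Rule 1: an explicit --checkpoint path always wins.
    if checkpoint is not None:
        return checkpoint
    # Rule 2: otherwise, load the --iteration checkpoint from the directory.
    if iteration is None:
        # Rule 3: with neither flag given, fall back to the latest iteration.
        iteration = max(available_iterations)
    return "{}/step-{}".format(checkpoint_dir, iteration)
```

For example, with checkpoints recorded at iterations 500000 and 1000000 and neither flag set, the helper would pick the `step-1000000` checkpoint.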
## Train

Train the model using `train.py`. For help on usage, try `python train.py --help`.

```text
usage: train.py [-h] [--data DATA] [--config CONFIG] [--device DEVICE]
                [--checkpoint CHECKPOINT | --iteration ITERATION]
                output

Train a WaveNet model with LJSpeech.

positional arguments:
  output                path to save results

optional arguments:
  -h, --help            show this help message and exit
  --data DATA           path of the LJSpeech dataset
  --config CONFIG       path of the config file
  --device DEVICE       device to use
  --checkpoint CHECKPOINT
                        checkpoint to resume from
  --iteration ITERATION
                        the iteration of the checkpoint to load from output directory
```

- `--data` is the path of the LJSpeech dataset, the folder extracted from the downloaded archive (the folder which contains `metadata.csv`).
- `--config` is the configuration file to use. The provided configurations can be used directly, or you can change some values in a configuration file and train the model with a different config.
- `--device` is the device (gpu id) to use for training. `-1` means CPU.
- `--checkpoint` is the path of the checkpoint.
- `--iteration` is the iteration of the checkpoint to load from the output directory.
- `output` is the directory to save results; all results are saved in this directory.

See [Saving-&-Loading](#Saving-&-Loading) for details of checkpoint loading.


Example script:

```bash
python train.py \
    --config=./configs/wavenet_single_gaussian.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    experiment
```

You can monitor the training log via TensorBoard, using the script below.

```bash
cd experiment/log
tensorboard --logdir=.
```

## Synthesis

```text
usage: synthesis.py [-h] [--data DATA] [--config CONFIG] [--device DEVICE]
                    [--checkpoint CHECKPOINT | --iteration ITERATION]
                    output

Synthesize valid data from LJSpeech with a WaveNet model.

positional arguments:
  output                path to save the synthesized audio

optional arguments:
  -h, --help            show this help message and exit
  --data DATA           path of the LJSpeech dataset
  --config CONFIG       path of the config file
  --device DEVICE       device to use
  --checkpoint CHECKPOINT
                        checkpoint to resume from
  --iteration ITERATION
                        the iteration of the checkpoint to load from output directory
```

- `--data` is the path of the LJSpeech dataset. In principle, a dataset is not needed for synthesis, but since the input is a mel spectrogram, we need to get the mel spectrogram from audio files.
- `--config` is the configuration file to use. You should use the same configuration with which you trained your model.
- `--device` is the device (gpu id) to use for synthesis. `-1` means CPU.
- `--checkpoint` is the checkpoint to load.
- `--iteration` is the iteration of the checkpoint to load from the output directory.
- `output` is the directory to save the synthesized audio. Audio files are saved in `synthesis/` in the `output` directory.

See [Saving-&-Loading](#Saving-&-Loading) for details of checkpoint loading.


Example script:

```bash
python synthesis.py \
    --config=./configs/wavenet_single_gaussian.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --checkpoint="experiment/checkpoints/step-1000000" \
    experiment
```

or

```bash
python synthesis.py \
    --config=./configs/wavenet_single_gaussian.yaml \
    --data=./LJSpeech-1.1/ \
    --device=0 \
    --iteration=1000000 \
    experiment
```
@ -1,36 +0,0 @@
data:
  batch_size: 16
  train_clip_seconds: 0.5
  sample_rate: 22050
  hop_length: 256
  win_length: 1024
  n_fft: 2048
  n_mels: 80
  valid_size: 16

model:
  upsampling_factors: [16, 16]
  n_loop: 10
  n_layer: 3
  filter_size: 2
  residual_channels: 128
  loss_type: "mog"
  output_dim: 30
  log_scale_min: -9

train:
  learning_rate: 0.001
  anneal_rate: 0.5
  anneal_interval: 200000
  gradient_max_norm: 100.0

  checkpoint_interval: 10000
  snap_interval: 10000
  eval_interval: 10000

  max_iterations: 2000000
@ -1,36 +0,0 @@
data:
  batch_size: 16
  train_clip_seconds: 0.5
  sample_rate: 22050
  hop_length: 256
  win_length: 1024
  n_fft: 2048
  n_mels: 80
  valid_size: 16

model:
  upsampling_factors: [16, 16]
  n_loop: 10
  n_layer: 3
  filter_size: 2
  residual_channels: 128
  loss_type: "mog"
  output_dim: 3
  log_scale_min: -9

train:
  learning_rate: 0.001
  anneal_rate: 0.5
  anneal_interval: 200000
  gradient_max_norm: 100.0

  checkpoint_interval: 10000
  snap_interval: 10000
  eval_interval: 10000

  max_iterations: 2000000
@ -1,36 +0,0 @@
data:
  batch_size: 16
  train_clip_seconds: 0.5
  sample_rate: 22050
  hop_length: 256
  win_length: 1024
  n_fft: 2048
  n_mels: 80
  valid_size: 16

model:
  upsampling_factors: [16, 16]
  n_loop: 10
  n_layer: 3
  filter_size: 2
  residual_channels: 128
  loss_type: "softmax"
  output_dim: 2048
  log_scale_min: -9

train:
  learning_rate: 0.001
  anneal_rate: 0.5
  anneal_interval: 200000
  gradient_max_norm: 100.0

  checkpoint_interval: 10000
  snap_interval: 10000
  eval_interval: 10000

  max_iterations: 2000000
@ -1,164 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import csv
import numpy as np
import librosa
from pathlib import Path
import pandas as pd

from parakeet.data import batch_spec, batch_wav
from parakeet.data import DatasetMixin


class LJSpeechMetaData(DatasetMixin):
    def __init__(self, root):
        self.root = Path(root)
        self._wav_dir = self.root.joinpath("wavs")
        csv_path = self.root.joinpath("metadata.csv")
        self._table = pd.read_csv(
            csv_path,
            sep="|",
            header=None,
            quoting=csv.QUOTE_NONE,
            names=["fname", "raw_text", "normalized_text"])

    def get_example(self, i):
        fname, raw_text, normalized_text = self._table.iloc[i]
        fname = str(self._wav_dir.joinpath(fname + ".wav"))
        return fname, raw_text, normalized_text

    def __len__(self):
        return len(self._table)


class Transform(object):
    def __init__(self, sample_rate, n_fft, win_length, hop_length, n_mels):
        self.sample_rate = sample_rate
        self.n_fft = n_fft
        self.win_length = win_length
        self.hop_length = hop_length
        self.n_mels = n_mels

    def __call__(self, example):
        wav_path, _, _ = example

        sr = self.sample_rate
        n_fft = self.n_fft
        win_length = self.win_length
        hop_length = self.hop_length
        n_mels = self.n_mels

        wav, loaded_sr = librosa.load(wav_path, sr=None)
        assert loaded_sr == sr, "sample rate does not match, resampling applied"

        # Pad audio to the right size.
        frames = int(np.ceil(float(wav.size) / hop_length))
        fft_padding = (n_fft - hop_length) // 2  # sound
        desired_length = frames * hop_length + fft_padding * 2
        pad_amount = (desired_length - wav.size) // 2

        if wav.size % 2 == 0:
            wav = np.pad(wav, (pad_amount, pad_amount), mode='reflect')
        else:
            wav = np.pad(wav, (pad_amount, pad_amount + 1), mode='reflect')

        # Normalize audio.
        wav = wav / np.abs(wav).max() * 0.999

        # Compute mel-spectrogram.
        # Turn center to False to prevent internal padding.
        spectrogram = librosa.core.stft(
            wav,
            hop_length=hop_length,
            win_length=win_length,
            n_fft=n_fft,
            center=False)
        spectrogram_magnitude = np.abs(spectrogram)

        # Compute mel-spectrograms.
        mel_filter_bank = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
        mel_spectrogram = np.dot(mel_filter_bank, spectrogram_magnitude)

        # Rescale mel_spectrogram.
        min_level, ref_level = 1e-5, 20  # hard code it
        mel_spectrogram = 20 * np.log10(np.maximum(min_level, mel_spectrogram))
        mel_spectrogram = mel_spectrogram - ref_level
        mel_spectrogram = np.clip((mel_spectrogram + 100) / 100, 0, 1)

        # Extract the center of audio that corresponds to mel spectrograms.
        audio = wav[fft_padding:-fft_padding]
        assert mel_spectrogram.shape[1] * hop_length == audio.size

        # there is no clipping here
        return audio, mel_spectrogram


class DataCollector(object):
    def __init__(self,
                 context_size,
                 sample_rate,
                 hop_length,
                 train_clip_seconds,
                 valid=False):
        frames_per_second = sample_rate // hop_length
        train_clip_frames = int(
            np.ceil(train_clip_seconds * frames_per_second))
        context_frames = context_size // hop_length
        self.num_frames = train_clip_frames + context_frames

        self.sample_rate = sample_rate
        self.hop_length = hop_length
        self.valid = valid

    def random_crop(self, sample):
        audio, mel_spectrogram = sample
        audio_frames = int(audio.size) // self.hop_length
        max_start_frame = audio_frames - self.num_frames
        assert max_start_frame >= 0, "audio is too short to be cropped"

        frame_start = np.random.randint(0, max_start_frame)
        # frame_start = 0  # norandom
        frame_end = frame_start + self.num_frames

        audio_start = frame_start * self.hop_length
        audio_end = frame_end * self.hop_length

        audio = audio[audio_start:audio_end]
        return audio, mel_spectrogram, audio_start

    def __call__(self, samples):
        # transform them first
        if self.valid:
            samples = [(audio, mel_spectrogram, 0)
                       for audio, mel_spectrogram in samples]
        else:
            samples = [self.random_crop(sample) for sample in samples]
        # batch them
        audios = [sample[0] for sample in samples]
        audio_starts = [sample[2] for sample in samples]
        mels = [sample[1] for sample in samples]

        mels = batch_spec(mels)

        if self.valid:
            audios = batch_wav(audios, dtype=np.float32)
        else:
            audios = np.array(audios, dtype=np.float32)
        audio_starts = np.array(audio_starts, dtype=np.int64)
        return audios, mels, audio_starts
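The padding arithmetic in `Transform.__call__` above can be checked in isolation. A minimal sketch, using the `n_fft=2048`, `hop_length=256` defaults from the configs: the waveform is padded so that an STFT with `center=False` yields exactly `ceil(len(wav) / hop_length)` frames.

```python
from math import ceil

# Sketch of the padding arithmetic in Transform.__call__: compute the
# padded waveform length and the resulting number of STFT frames.
def padded_length(num_samples, n_fft=2048, hop_length=256):
    frames = ceil(num_samples / hop_length)
    fft_padding = (n_fft - hop_length) // 2
    return frames * hop_length + 2 * fft_padding, frames

length, frames = padded_length(22050)  # one second at 22050 Hz
# An STFT with center=False then produces 1 + (length - n_fft) // hop frames,
# which matches the frame count the mel spectrogram is expected to have.
assert 1 + (length - 2048) // 256 == frames
```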
@ -1,152 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import os
import ruamel.yaml
import argparse
from tqdm import tqdm
from paddle import fluid
fluid.require_version('1.8.0')
import paddle.fluid.dygraph as dg

from parakeet.modules.weight_norm import WeightNormWrapper
from parakeet.data import SliceDataset, TransformDataset, DataCargo, SequentialSampler, RandomSampler
from parakeet.models.wavenet import UpsampleNet, WaveNet, ConditionalWavenet
from parakeet.utils.layer_tools import summary
from parakeet.utils import io

from data import LJSpeechMetaData, Transform, DataCollector
from utils import make_output_tree, valid_model, eval_model

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Synthesize valid data from LJspeech with a wavenet model.")
    parser.add_argument(
        "--data", type=str, help="path of the LJspeech dataset")
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument("--device", type=int, default=-1, help="device to use")

    g = parser.add_mutually_exclusive_group()
    g.add_argument("--checkpoint", type=str, help="checkpoint to resume from")
    g.add_argument(
        "--iteration",
        type=int,
        help="the iteration of the checkpoint to load from output directory")

    parser.add_argument(
        "output",
        type=str,
        default="experiment",
        help="path to save the synthesized audio")

    args = parser.parse_args()
    with open(args.config, 'rt') as f:
        config = ruamel.yaml.safe_load(f)

    if args.device == -1:
        place = fluid.CPUPlace()
    else:
        place = fluid.CUDAPlace(args.device)

    dg.enable_dygraph(place)

    ljspeech_meta = LJSpeechMetaData(args.data)

    data_config = config["data"]
    sample_rate = data_config["sample_rate"]
    n_fft = data_config["n_fft"]
    win_length = data_config["win_length"]
    hop_length = data_config["hop_length"]
    n_mels = data_config["n_mels"]
    train_clip_seconds = data_config["train_clip_seconds"]
    transform = Transform(sample_rate, n_fft, win_length, hop_length, n_mels)
    ljspeech = TransformDataset(ljspeech_meta, transform)

    valid_size = data_config["valid_size"]
    ljspeech_valid = SliceDataset(ljspeech, 0, valid_size)
    ljspeech_train = SliceDataset(ljspeech, valid_size, len(ljspeech))

    model_config = config["model"]
    n_loop = model_config["n_loop"]
    n_layer = model_config["n_layer"]
    filter_size = model_config["filter_size"]
    context_size = 1 + n_layer * sum([filter_size**i for i in range(n_loop)])
    print("context size is {} samples".format(context_size))
    train_batch_fn = DataCollector(context_size, sample_rate, hop_length,
                                   train_clip_seconds)
    valid_batch_fn = DataCollector(
        context_size, sample_rate, hop_length, train_clip_seconds, valid=True)

    batch_size = data_config["batch_size"]
    train_cargo = DataCargo(
        ljspeech_train,
        train_batch_fn,
        batch_size,
        sampler=RandomSampler(ljspeech_train))

    # only batch=1 for validation is enabled
    valid_cargo = DataCargo(
        ljspeech_valid,
        valid_batch_fn,
        batch_size=1,
        sampler=SequentialSampler(ljspeech_valid))

    if not os.path.exists(args.output):
        os.makedirs(args.output)

    model_config = config["model"]
    upsampling_factors = model_config["upsampling_factors"]
    encoder = UpsampleNet(upsampling_factors)

    n_loop = model_config["n_loop"]
    n_layer = model_config["n_layer"]
    residual_channels = model_config["residual_channels"]
    output_dim = model_config["output_dim"]
    loss_type = model_config["loss_type"]
    log_scale_min = model_config["log_scale_min"]
    decoder = WaveNet(n_loop, n_layer, residual_channels, output_dim, n_mels,
                      filter_size, loss_type, log_scale_min)

    model = ConditionalWavenet(encoder, decoder)
    summary(model)

    # load model parameters
    checkpoint_dir = os.path.join(args.output, "checkpoints")
    if args.checkpoint:
        iteration = io.load_parameters(model, checkpoint_path=args.checkpoint)
    else:
        iteration = io.load_parameters(
            model, checkpoint_dir=checkpoint_dir, iteration=args.iteration)
    assert iteration > 0, "A trained model is needed."

    # WARNING: don't forget to remove weight norm to re-compute each wrapped layer's weight
    # removing weight norm also speeds up computation
    for layer in model.sublayers():
        if isinstance(layer, WeightNormWrapper):
            layer.remove_weight_norm()

    train_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True)
    train_loader.set_batch_generator(train_cargo, place)

    valid_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True)
    valid_loader.set_batch_generator(valid_cargo, place)

    synthesis_dir = os.path.join(args.output, "synthesis")
    if not os.path.exists(synthesis_dir):
        os.makedirs(synthesis_dir)

    eval_model(model, valid_loader, synthesis_dir, iteration, sample_rate)
@ -1,201 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import os
import ruamel.yaml
import argparse
import tqdm
from visualdl import LogWriter
from paddle import fluid
fluid.require_version('1.8.0')
import paddle.fluid.dygraph as dg

from parakeet.data import SliceDataset, TransformDataset, CacheDataset, DataCargo, SequentialSampler, RandomSampler
from parakeet.models.wavenet import UpsampleNet, WaveNet, ConditionalWavenet
from parakeet.utils.layer_tools import summary
from parakeet.utils import io

from data import LJSpeechMetaData, Transform, DataCollector
from utils import make_output_tree, valid_model

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Train a WaveNet model with LJSpeech.")
    parser.add_argument(
        "--data", type=str, help="path of the LJspeech dataset")
    parser.add_argument("--config", type=str, help="path of the config file")
    parser.add_argument("--device", type=int, default=-1, help="device to use")

    g = parser.add_mutually_exclusive_group()
    g.add_argument("--checkpoint", type=str, help="checkpoint to resume from")
    g.add_argument(
        "--iteration",
        type=int,
        help="the iteration of the checkpoint to load from output directory")

    parser.add_argument(
        "output", type=str, default="experiment", help="path to save results")

    args = parser.parse_args()
    with open(args.config, 'rt') as f:
        config = ruamel.yaml.safe_load(f)

    if args.device == -1:
        place = fluid.CPUPlace()
    else:
        place = fluid.CUDAPlace(args.device)

    dg.enable_dygraph(place)

    print("Command Line Args: ")
    for k, v in vars(args).items():
        print("{}: {}".format(k, v))

    ljspeech_meta = LJSpeechMetaData(args.data)

    data_config = config["data"]
    sample_rate = data_config["sample_rate"]
    n_fft = data_config["n_fft"]
    win_length = data_config["win_length"]
    hop_length = data_config["hop_length"]
    n_mels = data_config["n_mels"]
    train_clip_seconds = data_config["train_clip_seconds"]
    transform = Transform(sample_rate, n_fft, win_length, hop_length, n_mels)
    ljspeech = TransformDataset(ljspeech_meta, transform)

    valid_size = data_config["valid_size"]
    ljspeech_valid = CacheDataset(SliceDataset(ljspeech, 0, valid_size))
    ljspeech_train = CacheDataset(
        SliceDataset(ljspeech, valid_size, len(ljspeech)))

    model_config = config["model"]
    n_loop = model_config["n_loop"]
    n_layer = model_config["n_layer"]
    filter_size = model_config["filter_size"]
    context_size = 1 + n_layer * sum([filter_size**i for i in range(n_loop)])
    print("context size is {} samples".format(context_size))
    train_batch_fn = DataCollector(context_size, sample_rate, hop_length,
                                   train_clip_seconds)
    valid_batch_fn = DataCollector(
        context_size, sample_rate, hop_length, train_clip_seconds, valid=True)

    batch_size = data_config["batch_size"]
    train_cargo = DataCargo(
        ljspeech_train,
        train_batch_fn,
        batch_size,
        sampler=RandomSampler(ljspeech_train))

    # only batch=1 for validation is enabled
    valid_cargo = DataCargo(
        ljspeech_valid,
        valid_batch_fn,
        batch_size=1,
        sampler=SequentialSampler(ljspeech_valid))

    make_output_tree(args.output)

    if args.device == -1:
        place = fluid.CPUPlace()
    else:
        place = fluid.CUDAPlace(args.device)

    model_config = config["model"]
    upsampling_factors = model_config["upsampling_factors"]
    encoder = UpsampleNet(upsampling_factors)

    n_loop = model_config["n_loop"]
    n_layer = model_config["n_layer"]
    residual_channels = model_config["residual_channels"]
    output_dim = model_config["output_dim"]
    loss_type = model_config["loss_type"]
    log_scale_min = model_config["log_scale_min"]
    decoder = WaveNet(n_loop, n_layer, residual_channels, output_dim, n_mels,
                      filter_size, loss_type, log_scale_min)

    model = ConditionalWavenet(encoder, decoder)
    summary(model)

    train_config = config["train"]
    learning_rate = train_config["learning_rate"]
    anneal_rate = train_config["anneal_rate"]
    anneal_interval = train_config["anneal_interval"]
    lr_scheduler = dg.ExponentialDecay(
        learning_rate, anneal_interval, anneal_rate, staircase=True)
    gradient_max_norm = train_config["gradient_max_norm"]
    optim = fluid.optimizer.Adam(
        lr_scheduler,
        parameter_list=model.parameters(),
        grad_clip=fluid.clip.ClipByGlobalNorm(gradient_max_norm))

    train_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True)
    train_loader.set_batch_generator(train_cargo, place)

    valid_loader = fluid.io.DataLoader.from_generator(
        capacity=10, return_list=True)
    valid_loader.set_batch_generator(valid_cargo, place)

    max_iterations = train_config["max_iterations"]
    checkpoint_interval = train_config["checkpoint_interval"]
    snap_interval = train_config["snap_interval"]
    eval_interval = train_config["eval_interval"]
    checkpoint_dir = os.path.join(args.output, "checkpoints")
    log_dir = os.path.join(args.output, "log")
    writer = LogWriter(log_dir)

    # load parameters and optimizer, and update iterations done so far
    if args.checkpoint is not None:
        iteration = io.load_parameters(
            model, optim, checkpoint_path=args.checkpoint)
    else:
        iteration = io.load_parameters(
            model,
            optim,
            checkpoint_dir=checkpoint_dir,
            iteration=args.iteration)

    global_step = iteration + 1
    iterator = iter(tqdm.tqdm(train_loader))
    while global_step <= max_iterations:
        try:
            batch = next(iterator)
        except StopIteration:
            iterator = iter(tqdm.tqdm(train_loader))
            batch = next(iterator)

        audio_clips, mel_specs, audio_starts = batch

        model.train()
        y_var = model(audio_clips, mel_specs, audio_starts)
        loss_var = model.loss(y_var, audio_clips)
        loss_var.backward()
        loss_np = loss_var.numpy()

        writer.add_scalar("loss", loss_np[0], global_step)
        writer.add_scalar("learning_rate",
                          optim._learning_rate.step().numpy()[0], global_step)
        optim.minimize(loss_var)
        optim.clear_gradients()
        print("global_step: {}\tloss: {:<8.6f}".format(global_step, loss_np[0]))

        if global_step % snap_interval == 0:
            valid_model(model, valid_loader, writer, global_step, sample_rate)

        if global_step % checkpoint_interval == 0:
            io.save_parameters(checkpoint_dir, global_step, model, optim)

        global_step += 1
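The learning-rate schedule configured in the training script above (`learning_rate=0.001`, `anneal_rate=0.5`, `anneal_interval=200000`, `staircase=True`) halves the rate every 200k steps. A plain sketch of what a staircase exponential decay computes:

```python
# Staircase exponential decay: the exponent is the integer number of
# completed anneal intervals, so the rate drops in discrete steps.
def staircase_lr(step, base_lr=1e-3, anneal_rate=0.5, anneal_interval=200000):
    return base_lr * anneal_rate ** (step // anneal_interval)

print(staircase_lr(0), staircase_lr(200000), staircase_lr(500000))
```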
@ -1,62 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import os
import numpy as np
import soundfile as sf
import paddle.fluid.dygraph as dg


def make_output_tree(output_dir):
    checkpoint_dir = os.path.join(output_dir, "checkpoints")
    if not os.path.exists(checkpoint_dir):
        os.makedirs(checkpoint_dir)

    state_dir = os.path.join(output_dir, "states")
    if not os.path.exists(state_dir):
        os.makedirs(state_dir)


def valid_model(model, valid_loader, writer, global_step, sample_rate):
    loss = []
    wavs = []
    model.eval()
    for i, batch in enumerate(valid_loader):
        # print("sentence {}".format(i))
        audio_clips, mel_specs, audio_starts = batch
        y_var = model(audio_clips, mel_specs, audio_starts)
        wav_var = model.sample(y_var)
        loss_var = model.loss(y_var, audio_clips)
        loss.append(loss_var.numpy()[0])
        wavs.append(wav_var.numpy()[0])

    average_loss = np.mean(loss)
    writer.add_scalar("valid_loss", average_loss, global_step)
    for i, wav in enumerate(wavs):
        writer.add_audio("valid/sample_{}".format(i), wav, global_step,
                         sample_rate)


def eval_model(model, valid_loader, output_dir, global_step, sample_rate):
    model.eval()
    for i, batch in enumerate(valid_loader):
        # print("sentence {}".format(i))
        path = os.path.join(output_dir,
                            "sentence_{}_step_{}.wav".format(i, global_step))
        audio_clips, mel_specs, audio_starts = batch
        wav_var = model.synthesis(mel_specs)
        wav_np = wav_var.numpy()[0]
        sf.write(path, wav_np, samplerate=sample_rate)
        print("generated {}".format(path))
@ -14,4 +14,4 @@
__version__ = "0.0.0"

from . import data, g2p, models, modules
from parakeet import data, frontend, models, modules
@ -0,0 +1,36 @@
import parakeet

if __name__ == '__main__':
    import argparse
    import os
    import shutil
    from pathlib import Path

    package_path = Path(__file__).parent
    print(package_path)

    parser = argparse.ArgumentParser()
    subparser = parser.add_subparsers(dest="cmd")

    list_exp_parser = subparser.add_parser("list-examples")
    clone = subparser.add_parser("clone-example")
    clone.add_argument("experiment_name", type=str, help="experiment name")

    args = parser.parse_args()

    if args.cmd == "list-examples":
        print(os.listdir(package_path / "examples"))
        exit(0)

    if args.cmd == "clone-example":
        source = package_path / "examples" / (args.experiment_name)
        target = Path(os.getcwd()) / (args.experiment_name)
        if not os.path.exists(str(source)):
            raise ValueError("{} does not exist".format(str(source)))

        if os.path.exists(str(target)):
            raise FileExistsError("{} already exists".format(str(target)))

        shutil.copytree(str(source), str(target))
        print("{} copied!".format(args.experiment_name))
        exit(0)
@ -12,4 +12,5 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from .audio import AudioProcessor
from .spec_normalizer import NormalizerBase, LogMagnitude
@ -15,278 +15,80 @@
|
|||
import librosa
|
||||
import soundfile as sf
|
||||
import numpy as np
|
||||
import scipy.io
|
||||
import scipy.signal
|
||||
|
||||
|
||||
class AudioProcessor(object):
|
||||
def __init__(
|
||||
self,
|
||||
sample_rate=None, # int, sampling rate
|
||||
num_mels=None, # int, bands of mel spectrogram
|
||||
min_level_db=None, # float, minimum level db
|
||||
ref_level_db=None, # float, reference level db
|
||||
n_fft=None, # int: number of samples in a frame for stft
|
||||
win_length=None, # int: the same meaning with n_fft
|
||||
hop_length=None, # int: number of samples between neighboring frame
|
||||
power=None, # float:power to raise before griffin-lim
|
||||
preemphasis=None, # float: preemphasis coefficident
|
||||
signal_norm=None, #
|
||||
symmetric_norm=False, # bool, apply clip norm in [-max_norm, max_form]
|
||||
max_norm=None, # float, max norm
|
||||
mel_fmin=None, # int: mel spectrogram's minimum frequency
|
||||
mel_fmax=None, # int: mel spectrogram's maximum frequency
|
||||
clip_norm=True, # bool: clip spectrogram's norm
|
||||
griffin_lim_iters=None, # int:
|
||||
do_trim_silence=False, # bool: trim silence
|
||||
sound_norm=False,
|
||||
**kwargs):
|
||||
def __init__(self,
|
||||
sample_rate:int,
|
||||
n_fft:int,
|
||||
win_length:int,
|
||||
hop_length:int,
|
||||
n_mels:int=80,
|
||||
f_min:int=0,
|
||||
f_max:int=None,
|
||||
window="hann",
|
||||
center="True",
|
||||
pad_mode="reflect"):
|
||||
# read & write
|
||||
self.sample_rate = sample_rate
|
||||
self.num_mels = num_mels
|
||||
self.min_level_db = min_level_db
|
||||
self.ref_level_db = ref_level_db
|
||||
|
||||
# stft related
|
||||
# stft
|
||||
self.n_fft = n_fft
|
||||
self.win_length = win_length or n_fft
|
||||
# hop length defaults to 1/4 window_length
|
||||
self.hop_length = hop_length or 0.25 * self.win_length
|
||||
self.win_length = win_length
|
||||
self.hop_length = hop_length
|
||||
self.window = window
|
||||
self.center = center
|
||||
self.pad_mode = pad_mode
|
||||
|
||||
# mel
|
||||
self.n_mels = n_mels
|
||||
self.f_min = f_min
|
||||
self.f_max = f_max
|
||||
|
||||
self.power = power
|
||||
self.preemphasis = float(preemphasis)
|
||||
self.mel_filter = self._create_mel_filter()
|
||||
self.inv_mel_filter = np.linalg.pinv(self.mel_filter)
|
||||
|
||||
def _create_mel_filter(self):
|
||||
mel_filter = librosa.filters.mel(
|
||||
self.sample_rate,
|
||||
self.n_fft,
|
||||
n_mels=self.n_mels,
|
||||
fmin=self.f_min,
|
||||
fmax=self.f_max)
|
||||
return mel_filter
|
||||
|
||||
self.griffin_lim_iters = griffin_lim_iters
|
||||
self.signal_norm = signal_norm
|
||||
self.symmetric_norm = symmetric_norm
|
||||
def read_wav(self, filename):
|
||||
# resampling may occur
|
||||
wav, _ = librosa.load(filename, sr=self.sample_rate)
|
||||
return wav
|
||||
|
||||
# mel transform related
|
||||
self.mel_fmin = mel_fmin
|
||||
self.mel_fmax = mel_fmax
|
||||
def write_wav(self, path, wav):
|
||||
sf.write(path, wav, samplerate=self.sample_rate)
|
||||
|
||||
self.max_norm = 1.0 if max_norm is None else float(max_norm)
|
||||
self.clip_norm = clip_norm
|
||||
self.do_trim_silence = do_trim_silence
|
||||
|
||||
self.sound_norm = sound_norm
|
||||
self.num_freq, self.frame_length_ms, self.frame_shift_ms = self._stft_parameters(
|
||||
)
|
||||
|
||||
def _stft_parameters(self):
|
||||
"""compute frame length and hop length in ms"""
|
||||
frame_length_ms = self.win_length * 1. / self.sample_rate
|
||||
frame_shift_ms = self.hop_length * 1. / self.sample_rate
|
||||
num_freq = 1 + self.n_fft // 2
|
||||
return num_freq, frame_length_ms, frame_shift_ms
|
||||
|
||||
def __repr__(self):
|
||||
"""object repr"""
|
||||
cls_name_str = self.__class__.__name__
|
||||
members = vars(self)
|
||||
dict_str = "\n".join(
|
||||
[" {}: {},".format(k, v) for k, v in members.items()])
|
||||
repr_str = "{}(\n{})\n".format(cls_name_str, dict_str)
|
||||
return repr_str
|
||||
|
||||
def save_wav(self, path, wav):
|
||||
"""save audio with scipy.io.wavfile in 16bit integers"""
|
||||
wav_norm = wav * (32767 / max(0.01, np.max(np.abs(wav))))
|
||||
scipy.io.wavfile.write(path, self.sample_rate,
|
||||
wav_norm.as_type(np.int16))
|
||||
|
||||
def load_wav(self, path, sr=None):
|
||||
"""load wav -> trim_silence -> rescale"""
|
||||
|
||||
x, sr = librosa.load(path, sr=None)
|
||||
assert self.sample_rate == sr, "audio sample rate: {}Hz != processor sample rate: {}Hz".format(
|
||||
sr, self.sample_rate)
|
||||
if self.do_trim_silence:
|
||||
try:
|
||||
x = self.trim_silence(x)
|
||||
except ValueError:
|
||||
print(" [!] File cannot be trimmed for silence - {}".format(
|
||||
path))
|
||||
if self.sound_norm:
|
||||
x = x / x.max() * 0.9 # why 0.9 ?
|
||||
return x
|
||||
|
||||
def trim_silence(self, wav):
|
||||
"""Trim soilent parts with a threshold and 0.01s margin"""
|
||||
margin = int(self.sample_rate * 0.01)
|
||||
wav = wav[margin:-margin]
|
||||
trimed_wav = librosa.effects.trim(
|
||||
def stft(self, wav):
|
||||
D = librosa.core.stft(
|
||||
wav,
|
||||
top_db=60,
|
||||
frame_length=self.win_length,
|
||||
hop_length=self.hop_length)[0]
|
||||
return trimed_wav
|
||||
|
||||
def apply_preemphasis(self, x):
|
||||
if self.preemphasis == 0.:
|
||||
raise RuntimeError(
|
||||
" !! Preemphasis coefficient should be positive. ")
|
||||
return scipy.signal.lfilter([1., -self.preemphasis], [1.], x)
|
||||
|
||||
def apply_inv_preemphasis(self, x):
|
||||
if self.preemphasis == 0.:
|
||||
raise RuntimeError(
|
||||
" !! Preemphasis coefficient should be positive. ")
|
||||
return scipy.signal.lfilter([1.], [1., -self.preemphasis], x)
|
||||
|
||||
def _amplitude_to_db(self, x):
|
||||
amplitude_min = np.exp(self.min_level_db / 20 * np.log(10))
|
||||
return 20 * np.log10(np.maximum(amplitude_min, x))
|
||||
|
||||
@staticmethod
|
||||
def _db_to_amplitude(x):
|
||||
return np.power(10., 0.05 * x)
|
||||
|
||||
def _linear_to_mel(self, spectrogram):
|
||||
_mel_basis = self._build_mel_basis()
|
||||
return np.dot(_mel_basis, spectrogram)
|
||||
|
||||
def _mel_to_linear(self, mel_spectrogram):
|
||||
inv_mel_basis = np.linalg.pinv(self._build_mel_basis())
|
||||
return np.maximum(1e-10, np.dot(inv_mel_basis, mel_spectrogram))
|
||||
|
||||
def _build_mel_basis(self):
|
||||
"""return mel basis for mel scale"""
|
||||
if self.mel_fmax is not None:
|
||||
assert self.mel_fmax <= self.sample_rate // 2
|
||||
return librosa.filters.mel(self.sample_rate,
|
||||
self.n_fft,
|
||||
n_mels=self.num_mels,
|
||||
fmin=self.mel_fmin,
|
||||
fmax=self.mel_fmax)
|
||||
|
||||
def _normalize(self, S):
|
||||
"""put values in [0, self.max_norm] or [-self.max_norm, self,max_norm]"""
|
||||
if self.signal_norm:
|
||||
S_norm = (S - self.min_level_db) / (-self.min_level_db)
|
||||
if self.symmetric_norm:
|
||||
S_norm = ((2 * self.max_norm) * S_norm) - self.max_norm
|
||||
if self.clip_norm:
|
||||
S_norm = np.clip(S_norm, -self.max_norm, self.max_norm)
|
||||
return S_norm
|
||||
else:
|
||||
S_norm = self.max_norm * S_norm
|
||||
if self.clip_norm:
|
||||
S_norm = np.clip(S_norm, 0, self.max_norm)
|
||||
return S_norm
|
||||
else:
|
||||
return S
|
||||
|
||||
def _denormalize(self, S):
|
||||
"""denormalize values"""
|
||||
S_denorm = S
|
||||
if self.signal_norm:
|
||||
if self.symmetric_norm:
|
||||
if self.clip_norm:
|
||||
S_denorm = np.clip(S_denorm, -self.max_norm, self.max_norm)
|
||||
S_denorm = (S_denorm + self.max_norm) * (
|
||||
-self.min_level_db) / (2 * self.max_norm
|
||||
) + self.min_level_db
|
||||
return S_denorm
|
||||
else:
|
||||
if self.clip_norm:
|
||||
S_denorm = np.clip(S_denorm, 0, self.max_norm)
|
||||
S_denorm = S_denorm * (-self.min_level_db
|
||||
) / self.max_norm + self.min_level_db
|
||||
return S_denorm
|
||||
else:
|
||||
return S
|
||||
|
||||
def _stft(self, y):
|
||||
return librosa.stft(
|
||||
y=y,
|
||||
n_fft=self.n_fft,
|
||||
n_fft = self.n_fft,
|
||||
hop_length=self.hop_length,
|
||||
win_length=self.win_length,
|
||||
hop_length=self.hop_length)
|
||||
window=self.window,
|
||||
center=self.center,
|
||||
pad_mode=self.pad_mode)
|
||||
return D
|
||||
|
||||
    def _istft(self, S):
        return librosa.istft(
            S, hop_length=self.hop_length, win_length=self.win_length)

    def istft(self, D):
        wav = librosa.core.istft(
            D,
            hop_length=self.hop_length,
            win_length=self.win_length,
            window=self.window,
            center=self.center)
        return wav
    def spectrogram(self, y):
        """compute linear spectrogram (amplitude)

        preemphasis -> stft -> mag -> amplitude_to_db -> minus_ref_level_db -> normalize
        """
        if self.preemphasis:
            D = self._stft(self.apply_preemphasis(y))
        else:
            D = self._stft(y)
        S = self._amplitude_to_db(np.abs(D)) - self.ref_level_db
        return self._normalize(S)
    def melspectrogram(self, y):
        """compute mel spectrogram (amplitude)

        preemphasis -> stft -> mag -> mel_scale -> amplitude_to_db -> minus_ref_level_db -> normalize
        """
        if self.preemphasis:
            D = self._stft(self.apply_preemphasis(y))
        else:
            D = self._stft(y)
        S = self._amplitude_to_db(
            self._linear_to_mel(np.abs(D))) - self.ref_level_db
        return self._normalize(S)
    def inv_spectrogram(self, spectrogram):
        """convert spectrogram back to waveform using griffin_lim in librosa"""
        S = self._denormalize(spectrogram)
        S = self._db_to_amplitude(S + self.ref_level_db)
        if self.preemphasis:
            return self.apply_inv_preemphasis(self._griffin_lim(S**self.power))
        return self._griffin_lim(S**self.power)

    def inv_melspectrogram(self, mel_spectrogram):
        S = self._denormalize(mel_spectrogram)
        S = self._db_to_amplitude(S + self.ref_level_db)
        S = self._mel_to_linear(np.abs(S))
        if self.preemphasis:
            return self.apply_inv_preemphasis(self._griffin_lim(S**self.power))
        return self._griffin_lim(S**self.power)
    def out_linear_to_mel(self, linear_spec):
        """convert output linear spec to mel spec"""
        S = self._denormalize(linear_spec)
        S = self._db_to_amplitude(S + self.ref_level_db)
        S = self._linear_to_mel(np.abs(S))
        S = self._amplitude_to_db(S) - self.ref_level_db
        mel = self._normalize(S)
        return mel
    def _griffin_lim(self, S):
        angles = np.exp(2j * np.pi * np.random.rand(*S.shape))
        S_complex = np.abs(S).astype(np.complex128)
        y = self._istft(S_complex * angles)
        for _ in range(self.griffin_lim_iters):
            angles = np.exp(1j * np.angle(self._stft(y)))
            y = self._istft(S_complex * angles)
        return y
    @staticmethod
    def mulaw_encode(wav, qc):
        mu = 2**qc - 1
        # wav_abs = np.minimum(np.abs(wav), 1.0)
        signal = np.sign(wav) * np.log(1 + mu * np.abs(wav)) / np.log(1. + mu)
        # Quantize signal to the specified number of levels.
        signal = (signal + 1) / 2 * mu + 0.5
        return np.floor(signal)
    @staticmethod
    def mulaw_decode(wav, qc):
        """Recovers waveform from quantized values."""
        mu = 2**qc - 1
        x = np.sign(wav) / mu * ((1 + mu)**np.abs(wav) - 1)
        return x

    @staticmethod
    def encode_16bits(x):
        return np.clip(x * 2**15, -2**15, 2**15 - 1).astype(np.int16)

    @staticmethod
    def quantize(x, bits):
        return (x + 1.) * (2**bits - 1) / 2

    @staticmethod
    def dequantize(x, bits):
        return 2 * x / (2**bits - 1) - 1
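The mu-law companding implemented by `mulaw_encode`/`mulaw_decode` is invertible before the quantization step. A minimal round-trip sketch (assuming `qc = 8`, so `mu = 255`):

```python
import numpy as np

# Round trip of the mu-law companding used above, before quantization.
mu = 2**8 - 1  # qc = 8
wav = np.linspace(-1.0, 1.0, 5)
encoded = np.sign(wav) * np.log1p(mu * np.abs(wav)) / np.log1p(mu)
decoded = np.sign(encoded) / mu * ((1 + mu)**np.abs(encoded) - 1)
assert np.allclose(decoded, wav)
```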
@ -0,0 +1,56 @@
"""
This module contains normalizers for spectrogram magnitude.
Normalizers are invertible transformations. They can be used to process the
magnitude of a spectrogram before training, and to recover from the generated
spectrogram so that it can be used with vocoders like Griffin-Lim.

The base class describes the interface. `transform` is used to perform the
transformation and `inverse` is used to perform the inverse transformation.

related issue:
https://github.com/mozilla/TTS/issues/377
"""
import numpy as np


class NormalizerBase(object):
    def transform(self, spec):
        raise NotImplementedError("transform must be implemented")

    def inverse(self, normalized):
        raise NotImplementedError("inverse must be implemented")
class LogMagnitude(NormalizerBase):
    """
    This is a simple normalizer used in WaveGlow, WaveFlow, Tacotron 2...
    """
    def __init__(self, min=1e-7):
        self.min = min

    def transform(self, x):
        x = np.maximum(x, self.min)
        x = np.log(x)
        return x

    def inverse(self, x):
        return np.exp(x)
class UnitMagnitude(NormalizerBase):
    # dB scale and (0, 1) normalization
    """
    This is the normalizer used in mozilla/TTS.
    """
    def __init__(self, min=1e-5):
        self.min = min

    def transform(self, x):
        db_scale = 20 * np.log10(np.maximum(self.min, x)) - 20
        normalized = (db_scale + 100) / 100
        clipped = np.clip(normalized, 0, 1)
        return clipped

    def inverse(self, x):
        denormalized = np.clip(x, 0, 1) * 100 - 100
        out = np.exp((denormalized + 20) / 20 * np.log(10))
        return out
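The invertibility contract of these normalizers can be checked with a quick round trip. A sketch of `LogMagnitude`'s behavior, with its math inlined (`min` floors small values, so the recovery is exact only above the floor):

```python
import numpy as np

# Round-trip sketch of LogMagnitude: inverse(transform(x)) recovers x,
# up to the floor `min` applied inside transform.
min_val = 1e-7
x = np.array([0.0, 1e-3, 0.5, 2.0])
transformed = np.log(np.maximum(x, min_val))
recovered = np.exp(transformed)
assert np.allclose(recovered, np.maximum(x, min_val))
```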
@ -1,3 +0,0 @@
{
    "python.pythonPath": "/Users/chenfeiyu/miniconda3/envs/paddle/bin/python"
}
@ -13,6 +13,5 @@
|
|||
# limitations under the License.
|
||||
|
||||
from .dataset import *
|
||||
from .datacargo import *
|
||||
from .sampler import *
|
||||
from .batch import *
|
||||
|
|
|
@ -75,19 +75,16 @@ def batch_wav(minibatch, pad_value=0., dtype=np.float32):
    """pad audios to the largest length and batch them.

    Args:
        minibatch (List[np.ndarray]): list of rank-1 float arrays (mono-channel audio, shape(T,)), dtype float.
        pad_value (float, optional): the pad value. Defaults to 0.
        dtype (np.dtype, optional): the data type of the output. Defaults to np.float32.

    Returns:
        np.ndarray: shape(B, T), the output batch.
    """
    peek_example = minibatch[0]
    assert len(peek_example.shape) == 1, "we only handle mono-channel wav"

    # assume (n_samples, )
    lengths = [example.shape[-1] for example in minibatch]
@ -96,33 +93,27 @@ def batch_wav(minibatch, pad_value=0., dtype=np.float32):
    batch = []
    for example in minibatch:
        pad_len = max_len - example.shape[-1]
        batch.append(
            np.pad(example, [(0, pad_len)],
                   mode='constant',
                   constant_values=pad_value))
    return np.array(batch, dtype=dtype)
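The padding strategy of `batch_wav` can be sketched with plain NumPy (toy waveforms, purely illustrative): each mono waveform is padded at the end to the longest length, then the results are stacked into a (B, T) array.

```python
import numpy as np

# Sketch of batch_wav's strategy: pad each mono waveform to the
# longest length in the minibatch, then stack into (B, T).
wavs = [np.ones(3), np.ones(5)]
max_len = max(w.shape[-1] for w in wavs)
batch = np.array([
    np.pad(w, [(0, max_len - w.shape[-1])],
           mode='constant', constant_values=0.)
    for w in wavs
])
assert batch.shape == (2, 5)
```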


class SpecBatcher(object):
    """A wrapper class for `batch_spec`"""

    def __init__(self, pad_value=0., time_major=False, dtype=np.float32):
        self.pad_value = pad_value
        self.dtype = dtype
        self.time_major = time_major

    def __call__(self, minibatch):
        out = batch_spec(minibatch, pad_value=self.pad_value,
                         time_major=self.time_major, dtype=self.dtype)
        return out


def batch_spec(minibatch, pad_value=0., time_major=False, dtype=np.float32):
    """Pad spectra to the largest length and batch them.

    Args:
@ -131,31 +122,28 @@ def batch_spec(minibatch, pad_value=0., dtype=np.float32):
        dtype (np.dtype, optional): data type of the output. Defaults to np.float32.

    Returns:
        np.ndarray: a rank-3 array of shape(B, F, T) or (B, T, F).
    """
    # assume (F, T) or (T, F)
    peek_example = minibatch[0]
    assert len(peek_example.shape) == 2, "we only handle mono-channel spectrogram"

    # assume (F, n_frame) or (n_frame, F)
    time_idx = 0 if time_major else -1
    lengths = [example.shape[time_idx] for example in minibatch]
    max_len = np.max(lengths)

    batch = []
    for example in minibatch:
        pad_len = max_len - example.shape[time_idx]
        if time_major:
            batch.append(
                np.pad(example, [(0, pad_len), (0, 0)],
                       mode='constant',
                       constant_values=pad_value))
        else:
            batch.append(
                np.pad(example, [(0, 0), (0, pad_len)],
                       mode='constant',
                       constant_values=pad_value))
    return np.array(batch, dtype=dtype)
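The `time_major` handling above can be sketched in isolation (toy spectrograms, purely illustrative): with `time_major=True` the time axis is axis 0 of a (T, F) array, so padding is applied to the first axis instead of the last.

```python
import numpy as np

# Sketch of batch_spec's time_major case: (T, F) spectrograms are
# padded along axis 0, then stacked into (B, T, F).
specs = [np.zeros((4, 80)), np.zeros((6, 80))]  # (T, F)
max_len = max(s.shape[0] for s in specs)
batch = np.array([
    np.pad(s, [(0, max_len - s.shape[0]), (0, 0)],
           mode='constant', constant_values=0.)
    for s in specs
])
assert batch.shape == (2, 6, 80)
```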
@ -1,126 +0,0 @@
|
|||
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import six
|
||||
from .sampler import SequentialSampler, RandomSampler, BatchSampler
|
||||
|
||||
|
||||
class DataCargo(object):
    def __init__(self,
                 dataset,
                 batch_fn=None,
                 batch_size=1,
                 sampler=None,
                 shuffle=False,
                 batch_sampler=None,
                 drop_last=False):
        """An iterable of batches. It requires a dataset, a batch function and a sampler. The sampler yields the example ids, then the corresponding examples in the dataset are collected and transformed into a batch with the batch function.

        Args:
            dataset (Dataset): the dataset used to build a data cargo.
            batch_fn (callable, optional): a callable that takes a list of examples of `dataset` and returns a batch. It can be None if the dataset has a `_batch_examples` method which satisfies the requirement. Defaults to None.
            batch_size (int, optional): number of examples in a batch. Defaults to 1.
            sampler (Sampler, optional): an iterable of example ids (integers); the example ids are used to pick examples. Defaults to None.
            shuffle (bool, optional): when sampler is not provided, shuffle=True creates a RandomSampler and shuffle=False creates a SequentialSampler internally. Defaults to False.
            batch_sampler (BatchSampler, optional): an iterable of lists of example ids (integers); each list is used to pick examples. The `batch_sampler` option is mutually exclusive with `batch_size`, `shuffle`, `sampler`, and `drop_last`. Defaults to None.
            drop_last (bool, optional): whether to drop the last minibatch. Defaults to False.
        """
        self.dataset = dataset
        self.batch_fn = batch_fn or self.dataset._batch_examples

        if batch_sampler is not None:
            # auto_collation with custom batch_sampler
            if batch_size != 1 or shuffle or sampler is not None or drop_last:
                raise ValueError('batch_sampler option is mutually exclusive '
                                 'with batch_size, shuffle, sampler, and '
                                 'drop_last')
            batch_size = None
            drop_last = False
            shuffle = False
        elif batch_size is None:
            raise ValueError(
                'batch_sampler is None, so batch_size must not be None.')
        elif sampler is None:
            if shuffle:
                sampler = RandomSampler(dataset)
            else:
                sampler = SequentialSampler(dataset)
            batch_sampler = BatchSampler(sampler, batch_size, drop_last)
        else:
            batch_sampler = BatchSampler(sampler, batch_size, drop_last)

        self.batch_size = batch_size
        self.drop_last = drop_last
        self.sampler = sampler
        self.batch_sampler = batch_sampler

    def __iter__(self):
        return DataIterator(self)

    def __call__(self):
        # protocol for paddle's DataLoader
        return DataIterator(self)

    @property
    def _auto_collation(self):
        # use auto batching
        return self.batch_sampler is not None

    @property
    def _index_sampler(self):
        if self._auto_collation:
            return self.batch_sampler
        else:
            return self.sampler

    def __len__(self):
        return len(self._index_sampler)


class DataIterator(object):
    def __init__(self, loader):
        """Iterator object of DataCargo.

        Args:
            loader (DataCargo): the data cargo to iterate.
        """
        self.loader = loader
        self._dataset = loader.dataset
        self._batch_fn = loader.batch_fn
        self._index_sampler = loader._index_sampler
        self._sampler_iter = iter(self._index_sampler)

    def __iter__(self):
        return self

    def __next__(self):
        # TODO(chenfeiyu): use dynamic batch size
        index = self._next_index()
        minibatch = [self._dataset[i] for i in index]
        minibatch = self._batch_fn(minibatch)  # list[Example] -> Batch
        return minibatch

    next = __next__  # Python 2 compatibility

    def _next_index(self):
        if six.PY3:
            return next(self._sampler_iter)
        else:
            # six.PY2
            return self._sampler_iter.next()

    def __len__(self):
        return len(self._index_sampler)
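The sampler-to-batch pipeline that `DataCargo` and `DataIterator` implement can be sketched with plain lists (toy dataset and a stand-in batch function, purely illustrative): a batch sampler yields lists of example ids, the corresponding examples are gathered, and `batch_fn` turns each group into a batch.

```python
# Toy sketch of the pipeline DataCargo implements.
dataset = [10, 20, 30, 40, 50]
batch_sampler = [[0, 1], [2, 3], [4]]       # lists of example ids
batch_fn = lambda examples: list(examples)  # stand-in batch function

batches = [batch_fn([dataset[i] for i in ids]) for ids in batch_sampler]
assert batches == [[10, 20], [30, 40], [50]]
```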
@ -13,62 +13,22 @@
# limitations under the License.

import six
import numpy as np
import paddle
from paddle.io import Dataset


def split(dataset, first_size):
    """A utility function to split a dataset into two datasets."""
    first = SliceDataset(dataset, 0, first_size)
    second = SliceDataset(dataset, first_size, len(dataset))
    return first, second
class TransformDataset(Dataset):
    def __init__(self, dataset, transform):
        """Dataset which is transformed from another with a transform.

        Args:
            dataset (Dataset): the base dataset.
            transform (callable): the transform which takes an example of the base dataset as parameter and returns a new example.
        """
        self._dataset = dataset
@ -77,17 +37,17 @@ class TransformDataset(DatasetMixin):
    def __len__(self):
        return len(self._dataset)

    def __getitem__(self, i):
        in_data = self._dataset[i]
        return self._transform(in_data)


class CacheDataset(Dataset):
    def __init__(self, dataset):
        """A lazy cache of the base dataset.

        Args:
            dataset (Dataset): the base dataset to cache.
        """
        self._dataset = dataset
        self._cache = dict()
@ -95,24 +55,24 @@ class CacheDataset(DatasetMixin):
    def __len__(self):
        return len(self._dataset)

    def __getitem__(self, i):
        if i not in self._cache:
            self._cache[i] = self._dataset[i]
        return self._cache[i]


class TupleDataset(Dataset):
    def __init__(self, *datasets):
        """A compound dataset made from several datasets of the same length. An example of the `TupleDataset` is a tuple of examples from the constituent datasets.

        Args:
            datasets: tuple[Dataset], the constituent datasets.
        """
        if not datasets:
            raise ValueError("no datasets are given")
        length = len(datasets[0])
        for i, dataset in enumerate(datasets):
            if len(dataset) != length:
                raise ValueError(
                    "all the datasets should have the same length. "
                    "dataset {} has a different length".format(i))
@ -136,12 +96,20 @@ class TupleDataset(object):
        return self._length


class DictDataset(Dataset):
    def __init__(self, **datasets):
        """
        A compound dataset made from several datasets of the same length. An
        example of the `DictDataset` is a dict of examples from the constituent
        datasets.

        WARNING: paddle does not have good support for DictDataset, because
        every batch yielded from a DataLoader is a list, and it cannot be a
        dict. So you have to provide a collate function, because you cannot
        use the default one.

        Args:
            datasets: Dict[Dataset], the constituent datasets.
        """
        if not datasets:
            raise ValueError("no datasets are given")
@ -149,7 +117,7 @@ class DictDataset(object):
        for key, dataset in six.iteritems(datasets):
            if length is None:
                length = len(dataset)
            elif len(dataset) != length:
                raise ValueError(
                    "all the datasets should have the same length. "
                    "dataset {} has a different length".format(key))
@ -168,14 +136,17 @@ class DictDataset(object):
                    for i in six.moves.range(length)]
        else:
            return batches

    def __len__(self):
        return self._length


class SliceDataset(Dataset):
    def __init__(self, dataset, start, finish, order=None):
        """A Dataset which is a slice of the base dataset.

        Args:
            dataset (Dataset): the base dataset.
            start (int): the start of the slice.
            finish (int): the end of the slice, not inclusive.
            order (List[int], optional): the order, it is a permutation of the valid example ids of the base dataset. If `order` is provided, the slice is taken in `order`. Defaults to None.
|
|||
def __len__(self):
|
||||
return self._size
|
||||
|
||||
def get_example(self, i):
|
||||
def __getitem__(self, i):
|
||||
if i >= 0:
|
||||
if i >= self._size:
|
||||
raise IndexError('dataset index out of range')
|
||||
|
@ -212,12 +183,12 @@ class SliceDataset(DatasetMixin):
        return self._dataset[index]


class SubsetDataset(Dataset):
    def __init__(self, dataset, indices):
        """A Dataset which is a subset of the base dataset.

        Args:
            dataset (Dataset): the base dataset.
            indices (Iterable[int]): the indices of the examples to pick.
        """
        self._dataset = dataset
@ -229,17 +200,17 @@ class SubsetDataset(DatasetMixin):
    def __len__(self):
        return self._size

    def __getitem__(self, i):
        index = self._indices[i]
        return self._dataset[index]


class FilterDataset(Dataset):
    def __init__(self, dataset, filter_fn):
        """A filtered dataset.

        Args:
            dataset (Dataset): the base dataset.
            filter_fn (callable): a callable which takes an example of the base dataset and returns a boolean.
        """
        self._dataset = dataset
@ -251,24 +222,24 @@ class FilterDataset(DatasetMixin):
    def __len__(self):
        return self._size

    def __getitem__(self, i):
        index = self._indices[i]
        return self._dataset[index]


class ChainDataset(Dataset):
    def __init__(self, *datasets):
        """A concatenation of several datasets with the same structure.

        Args:
            datasets (Iterable[Dataset]): the datasets to concatenate.
        """
        self._datasets = datasets

    def __len__(self):
        return sum(len(dataset) for dataset in self._datasets)

    def __getitem__(self, i):
        if i < 0:
            raise IndexError("ChainDataset does not support negative indexing.")
@ -21,95 +21,8 @@ So the sampler is only responsible for generating valid indices.

import numpy as np
import random

import paddle
from paddle.io import Sampler


class PartialyRandomizedSimilarTimeLengthSampler(Sampler):
@ -285,92 +198,3 @@ class WeightedRandomSampler(Sampler):

    def __len__(self):
        return self.num_samples


class DistributedSampler(Sampler):
    def __init__(self, dataset_size, num_trainers, rank, shuffle=True):
        """Sampler used for data parallel training. Indices are divided into num_trainers parts. Each trainer gets a subset and iterates over that subset. If the dataset has 16 examples and there are 4 trainers:

        Trainer 0 gets [0, 4, 8, 12];
        Trainer 1 gets [1, 5, 9, 13];
        Trainer 2 gets [2, 6, 10, 14];
        Trainer 3 gets [3, 7, 11, 15].

        It ensures that trainers get different parts of the dataset. If the dataset's length cannot be evenly divided by num_trainers, some examples are appended to the dataset to ensure that every trainer gets the same amount of examples.

        Args:
            dataset_size (int): the length of the dataset.
            num_trainers (int): number of trainers (training processes).
            rank (int): local rank of the trainer.
            shuffle (bool, optional): whether to shuffle the indices before iteration. Defaults to True.
        """
        self.dataset_size = dataset_size
        self.num_trainers = num_trainers
        self.rank = rank
        self.num_samples = int(np.ceil(dataset_size / num_trainers))
        self.total_size = self.num_samples * num_trainers
        assert self.total_size >= self.dataset_size
        self.shuffle = shuffle

    def __iter__(self):
        indices = list(range(self.dataset_size))
        if self.shuffle:
            random.shuffle(indices)

        # Append extra samples to make it evenly distributed on all trainers.
        indices += indices[:(self.total_size - self.dataset_size)]
        assert len(indices) == self.total_size

        # Subset samples for each trainer.
        indices = indices[self.rank:self.total_size:self.num_trainers]
        assert len(indices) == self.num_samples

        return iter(indices)

    def __len__(self):
        return self.num_samples
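The interleaved partitioning described in the docstring above can be sketched with a plain slice (16 examples, 4 trainers, no shuffling): trainer `rank` takes `indices[rank::num_trainers]`.

```python
# Sketch of DistributedSampler's partitioning for 16 examples, 4 trainers.
dataset_size, num_trainers = 16, 4
indices = list(range(dataset_size))
parts = [indices[rank::num_trainers] for rank in range(num_trainers)]
assert parts[0] == [0, 4, 8, 12]
assert parts[3] == [3, 7, 11, 15]
```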
class BatchSampler(Sampler):
    """Wraps another sampler to yield a mini-batch of indices."""

    def __init__(self, sampler, batch_size, drop_last):
        """
        Args:
            sampler (Sampler): Base sampler.
            batch_size (int): Size of mini-batch.
            drop_last (bool): If True, the sampler will drop the last batch if its size is less than batch_size.
        Example:
            >>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False))
            [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
            >>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=True))
            [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
        """
        if not isinstance(sampler, Sampler):
            raise ValueError("sampler should be an instance of "
                             "Sampler, but got sampler={}".format(sampler))
        if not isinstance(batch_size, int) or batch_size <= 0:
            raise ValueError("batch_size should be a positive integer value, "
                             "but got batch_size={}".format(batch_size))
        if not isinstance(drop_last, bool):
            raise ValueError("drop_last should be a boolean value, but got "
                             "drop_last={}".format(drop_last))
        self.sampler = sampler
        self.batch_size = batch_size
        self.drop_last = drop_last

    def __iter__(self):
        batch = []
        for idx in self.sampler:
            batch.append(idx)
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if len(batch) > 0 and not self.drop_last:
            yield batch

    def __len__(self):
        if self.drop_last:
            return len(self.sampler) // self.batch_size
        else:
            return (len(self.sampler) + self.batch_size - 1) // self.batch_size
@ -1,17 +0,0 @@
|
|||
# The Design of Dataset in Parakeet
|
||||
|
||||
## data & metadata
|
||||
A Dataset in Parakeet is basically a list of Records (or examples, instances if you prefer this glossary.) By being a list, we mean it can be indexed by `__getitem__`, and we can get the size of the dataset by `__len__`.
|
||||
|
||||
This might mean we should have load the whole dataset before hand. But in practice, we do not do this due to time, computation and memory of storage limits. We actually load some metadata instead, which gives us the size of the dataset, and metadata of each record. In this case, the metadata itself is a small dataset which helps us to load a larger dataset. We made `_load_metadata` a method for all datasets.
|
||||
|
||||
In most cases, metadata is provided with the data, so we can load it trivially. In other cases, we need to scan the whole dataset to compute the metadata: for example, the lengths of the sentences, the vocabulary, or the statistics of the dataset. In these cases, we had better save the metadata, so we do not need to generate it again and again. When implementing a dataset, we do this work in `_prepare_metadata`.
|
||||
|
||||
In our initial datasets, a record is implemented as a tuple for simplicity. It could equally be implemented as a dict or a namespace.
|
||||
|
||||
## preprocessing & batching
|
||||
One of the reasons we choose to load data lazily (only load metadata beforehand, and load data only when needed) is computation overhead. For a large dataset with complicated preprocessing, preprocessing may take several days, so we choose to do it lazily. In practice, we implement preprocessing in `_get_example`, which is called by `__getitem__`. This method preprocesses only one record.
|
||||
|
||||
For deep learning practice, we typically batch examples, so the dataset should come with a method to batch them. Assume a record is implemented as a tuple of several items. When an item is a fixed-size array, batching is trivial: `np.stack` suffices. For arrays with dynamic sizes, padding is needed. We decided to implement a batching method for each item type; batching a record can then be composed from these methods. Each dataset should implement `_batch_examples`, but in most cases you can pick one from `batching.py`.
|
||||
|
||||
That is it!
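The conventions above (`metadata`, lazy `_get_example`, `_batch_examples`) can be sketched as a tiny self-contained dataset. The names follow the text; the "preprocessing" here is just squaring, purely for illustration:

```python
import numpy as np

class ToyDataset:
    def __init__(self, size):
        # stands in for the metadata loaded by _load_metadata
        self.metadata = list(range(size))

    def _get_example(self, metadatum):
        # preprocessing happens lazily, one record at a time
        return np.array([metadatum, metadatum ** 2])

    def _batch_examples(self, minibatch):
        # every item is fixed-size here, so np.stack suffices
        return np.stack(minibatch)

    def __getitem__(self, i):
        return self._get_example(self.metadata[i])

    def __len__(self):
        return len(self.metadata)

ds = ToyDataset(4)
batch = ds._batch_examples([ds[i] for i in range(len(ds))])
print(batch.shape)  # → (4, 2)
```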
|
|
@ -1,13 +1,2 @@
|
|||
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
from parakeet.datasets.common import *
|
||||
from parakeet.datasets.ljspeech import *
|
|
@ -0,0 +1,21 @@
|
|||
from paddle.io import Dataset
|
||||
import os
|
||||
import librosa
|
||||
|
||||
class AudioFolderDataset(Dataset):
|
||||
def __init__(self, path, sample_rate, extension="wav"):
|
||||
self.root = os.path.expanduser(path)
|
||||
self.sample_rate = sample_rate
|
||||
self.extension = extension
|
||||
self.file_names = [
|
||||
os.path.join(self.root, x) for x in os.listdir(self.root) \
|
||||
if os.path.splitext(x)[-1] == "." + self.extension]  # splitext keeps the leading dot
|
||||
self.length = len(self.file_names)
|
||||
|
||||
def __len__(self):
|
||||
return self.length
|
||||
|
||||
def __getitem__(self, i):
|
||||
file_name = self.file_names[i]
|
||||
y, _ = librosa.load(file_name, sr=self.sample_rate) # pylint: disable=unused-variable
|
||||
return y
|
|
@ -1,101 +1,23 @@
|
|||
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
from paddle.io import Dataset
|
||||
from pathlib import Path
|
||||
|
||||
import os
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
import librosa
|
||||
from .. import g2p
|
||||
|
||||
from ..data.sampler import SequentialSampler, RandomSampler, BatchSampler
|
||||
from ..data.dataset import DatasetMixin
|
||||
from ..data.datacargo import DataCargo
|
||||
from ..data.batch import TextIDBatcher, SpecBatcher
|
||||
|
||||
|
||||
class LJSpeech(DatasetMixin):
|
||||
class LJSpeechMetaData(Dataset):
|
||||
def __init__(self, root):
|
||||
super(LJSpeech, self).__init__()
|
||||
self.root = root
|
||||
self.metadata = self._prepare_metadata()
|
||||
self.root = Path(root).expanduser()
|
||||
wav_dir = self.root / "wavs"
|
||||
csv_path = self.root / "metadata.csv"
|
||||
records = []
|
||||
speaker_name = "ljspeech"
|
||||
with open(str(csv_path), 'rt') as f:
|
||||
for line in f:
|
||||
filename, _, normalized_text = line.strip().split("|")
|
||||
filename = str(wav_dir / (filename + ".wav"))
|
||||
records.append([filename, normalized_text, speaker_name])
|
||||
self.records = records
|
||||
|
||||
def _prepare_metadata(self):
|
||||
csv_path = os.path.join(self.root, "metadata.csv")
|
||||
metadata = pd.read_csv(
|
||||
csv_path,
|
||||
sep="|",
|
||||
header=None,
|
||||
quoting=3,
|
||||
names=["fname", "raw_text", "normalized_text"])
|
||||
return metadata
|
||||
|
||||
def _get_example(self, metadatum):
|
||||
"""All the code for generating an Example from a metadatum. If you want a
|
||||
different preprocessing pipeline, you can override this method.
|
||||
This method may require several processors, each of which has many options.
|
||||
In that case, it is better to compose the transforms and pass the composed
|
||||
transform to the init method.
|
||||
"""
|
||||
|
||||
fname, raw_text, normalized_text = metadatum
|
||||
wav_path = os.path.join(self.root, "wavs", fname + ".wav")
|
||||
|
||||
# load -> trim -> preemphasis -> stft -> magnitude -> mel_scale -> logscale -> normalize
|
||||
wav, sample_rate = librosa.load(
|
||||
wav_path,
|
||||
sr=None)  # we would rather use a functor to hold its parameters
|
||||
trimed, _ = librosa.effects.trim(wav)
|
||||
preemphasized = librosa.effects.preemphasis(trimed)
|
||||
D = librosa.stft(preemphasized)
|
||||
mag, phase = librosa.magphase(D)
|
||||
mel = librosa.feature.melspectrogram(S=mag)
|
||||
|
||||
mag = librosa.amplitude_to_db(S=mag)
|
||||
mel = librosa.amplitude_to_db(S=mel)
|
||||
|
||||
ref_db = 20
|
||||
max_db = 100
|
||||
mel = np.clip((mel - ref_db + max_db) / max_db, 1e-8, 1)
|
||||
mag = np.clip((mag - ref_db + max_db) / max_db, 1e-8, 1)
|
||||
|
||||
phonemes = np.array(
|
||||
g2p.en.text_to_sequence(normalized_text), dtype=np.int64)
|
||||
return (mag, mel, phonemes
|
||||
) # maybe we need to implement it as a map in the future
|
||||
|
||||
def _batch_examples(self, minibatch):
|
||||
mag_batch = []
|
||||
mel_batch = []
|
||||
phoneme_batch = []
|
||||
for example in minibatch:
|
||||
mag, mel, phoneme = example
|
||||
mag_batch.append(mag)
|
||||
mel_batch.append(mel)
|
||||
phoneme_batch.append(phoneme)
|
||||
mag_batch = SpecBatcher(pad_value=0.)(mag_batch)
|
||||
mel_batch = SpecBatcher(pad_value=0.)(mel_batch)
|
||||
phoneme_batch = TextIDBatcher(pad_id=0)(phoneme_batch)
|
||||
return (mag_batch, mel_batch, phoneme_batch)
|
||||
|
||||
def __getitem__(self, index):
|
||||
metadatum = self.metadata.iloc[index]
|
||||
example = self._get_example(metadatum)
|
||||
return example
|
||||
|
||||
def __iter__(self):
|
||||
for i in range(len(self)):
|
||||
yield self[i]
|
||||
def __getitem__(self, i):
|
||||
return self.records[i]
|
||||
|
||||
def __len__(self):
|
||||
return len(self.metadata)
|
||||
return len(self.records)
|
||||
|
||||
|
|
|
@ -1,99 +0,0 @@
|
|||
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from pathlib import Path
|
||||
import pandas as pd
|
||||
from ruamel.yaml import YAML
|
||||
import io
|
||||
|
||||
import librosa
|
||||
import numpy as np
|
||||
|
||||
from parakeet.g2p.en import text_to_sequence
|
||||
from parakeet.data.dataset import Dataset
|
||||
from parakeet.data.datacargo import DataCargo
|
||||
from parakeet.data.batch import TextIDBatcher, WavBatcher
|
||||
|
||||
|
||||
class VCTK(Dataset):
|
||||
def __init__(self, root):
|
||||
assert isinstance(root, (
|
||||
str, Path)), "root should be a string or Path object"
|
||||
self.root = root if isinstance(root, Path) else Path(root)
|
||||
self.text_root = self.root.joinpath("txt")
|
||||
self.wav_root = self.root.joinpath("wav48")
|
||||
|
||||
if not (self.root.joinpath("metadata.csv").exists() and
|
||||
self.root.joinpath("speaker_indices.yaml").exists()):
|
||||
self._prepare_metadata()
|
||||
self.speaker_indices, self.metadata = self._load_metadata()
|
||||
|
||||
def _load_metadata(self):
|
||||
yaml = YAML(typ='safe')
|
||||
speaker_indices = yaml.load(self.root.joinpath("speaker_indices.yaml"))
|
||||
metadata = pd.read_csv(
|
||||
self.root.joinpath("metadata.csv"), sep="|", quoting=3, header=1)
|
||||
return speaker_indices, metadata
|
||||
|
||||
def _prepare_metadata(self):
|
||||
metadata = []
|
||||
speaker_to_index = {}
|
||||
for i, speaker_folder in enumerate(self.text_root.iterdir()):
|
||||
if speaker_folder.is_dir():
|
||||
speaker_to_index[speaker_folder.name] = i
|
||||
for text_file in speaker_folder.iterdir():
|
||||
if text_file.is_file():
|
||||
with io.open(str(text_file)) as f:
|
||||
transcription = f.read().strip()
|
||||
wav_file = text_file.with_suffix(".wav")
|
||||
metadata.append(
|
||||
(wav_file.name, speaker_folder.name, transcription))
|
||||
metadata = pd.DataFrame.from_records(
|
||||
metadata, columns=["wave_file", "speaker", "text"])
|
||||
|
||||
# save them
|
||||
yaml = YAML(typ='safe')
|
||||
yaml.dump(speaker_to_index, self.root.joinpath("speaker_indices.yaml"))
|
||||
metadata.to_csv(
|
||||
self.root.joinpath("metadata.csv"),
|
||||
sep="|",
|
||||
quoting=3,
|
||||
index=False)
|
||||
|
||||
def _get_example(self, metadatum):
|
||||
wave_file, speaker, text = metadatum
|
||||
wav_path = self.wav_root.joinpath(speaker, wave_file)
|
||||
wav, sr = librosa.load(str(wav_path), sr=None)
|
||||
phoneme_seq = np.array(text_to_sequence(text))
|
||||
return wav, self.speaker_indices[speaker], phoneme_seq
|
||||
|
||||
def __getitem__(self, index):
|
||||
metadatum = self.metadata.iloc[index]
|
||||
example = self._get_example(metadatum)
|
||||
return example
|
||||
|
||||
def __len__(self):
|
||||
return len(self.metadata)
|
||||
|
||||
def _batch_examples(self, minibatch):
|
||||
wav_batch, speaker_batch, phoneme_batch = [], [], []
|
||||
for example in minibatch:
|
||||
wav, speaker_id, phoneme_seq = example
|
||||
wav_batch.append(wav)
|
||||
speaker_batch.append(speaker_id)
|
||||
phoneme_batch.append(phoneme_seq)
|
||||
wav_batch = WavBatcher(pad_value=0.)(wav_batch)
|
||||
speaker_batch = np.array(speaker_batch)
|
||||
phoneme_batch = TextIDBatcher(pad_id=0)(phoneme_batch)
|
||||
return wav_batch, speaker_batch, phoneme_batch
|
|
@ -0,0 +1,3 @@
|
|||
from parakeet.frontend.vocab import *
|
||||
from parakeet.frontend.phonectic import *
|
||||
from parakeet.frontend.punctuation import *
|
|
@ -0,0 +1,3 @@
|
|||
# number expansion is not that easy
|
||||
import num2words
|
||||
import inflect
|
|
@ -0,0 +1,24 @@
|
|||
def full2half_width(ustr):
|
||||
half = []
|
||||
for u in ustr:
|
||||
num = ord(u)
|
||||
if num == 0x3000:  # full-width space to half-width
|
||||
num = 32
|
||||
elif 0xFF01 <= num <= 0xFF5E:
|
||||
num -= 0xfee0
|
||||
u = chr(num)
|
||||
half.append(u)
|
||||
return ''.join(half)
|
||||
|
||||
def half2full_width(ustr):
|
||||
full = []
|
||||
for u in ustr:
|
||||
num = ord(u)
|
||||
if num == 32:  # half-width space to full-width
|
||||
num = 0x3000
|
||||
elif 0x21 <= num <= 0x7E:
|
||||
num += 0xfee0
|
||||
u = chr(num) # to unicode
|
||||
full.append(u)
|
||||
|
||||
return ''.join(full)
|
|
@ -0,0 +1,97 @@
|
|||
from abc import ABC, abstractmethod
|
||||
from typing import Union
|
||||
from g2p_en import G2p
|
||||
from g2pM import G2pM
|
||||
from parakeet.frontend import Vocab
|
||||
from opencc import OpenCC
|
||||
from parakeet.frontend.punctuation import get_punctuations
|
||||
|
||||
class Phonetics(ABC):
|
||||
@abstractmethod
|
||||
def __call__(self, sentence):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def phoneticize(self, sentence):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def numericalize(self, phonemes):
|
||||
pass
|
||||
|
||||
class English(Phonetics):
|
||||
def __init__(self):
|
||||
self.backend = G2p()
|
||||
self.phonemes = list(self.backend.phonemes)
|
||||
self.punctuations = get_punctuations("en")
|
||||
self.vocab = Vocab(self.phonemes + self.punctuations)
|
||||
|
||||
def phoneticize(self, sentence):
|
||||
start = self.vocab.start_symbol
|
||||
end = self.vocab.end_symbol
|
||||
phonemes = ([] if start is None else [start]) \
|
||||
+ self.backend(sentence) \
|
||||
+ ([] if end is None else [end])
|
||||
return phonemes
|
||||
|
||||
def numericalize(self, phonemes):
|
||||
ids = [self.vocab.lookup(item) for item in phonemes if item in self.vocab.stoi]
|
||||
return ids
|
||||
|
||||
def reverse(self, ids):
|
||||
return [self.vocab.reverse(i) for i in ids]
|
||||
|
||||
def __call__(self, sentence):
|
||||
return self.numericalize(self.phoneticize(sentence))
|
||||
|
||||
@property
|
||||
def vocab_size(self):
|
||||
return len(self.vocab)
|
||||
|
||||
|
||||
class Chinese(Phonetics):
|
||||
def __init__(self):
|
||||
self.opencc_backend = OpenCC('t2s.json')
|
||||
self.backend = G2pM()
|
||||
self.phonemes = self._get_all_syllables()
|
||||
self.punctuations = get_punctuations("cn")
|
||||
self.vocab = Vocab(self.phonemes + self.punctuations)
|
||||
|
||||
def _get_all_syllables(self):
|
||||
all_syllables = set([syllable for k, v in self.backend.cedict.items() for syllable in v])
|
||||
return list(all_syllables)
|
||||
|
||||
def phoneticize(self, sentence):
|
||||
simplified = self.opencc_backend.convert(sentence)
|
||||
phonemes = self.backend(simplified)
|
||||
start = self.vocab.start_symbol
|
||||
end = self.vocab.end_symbol
|
||||
phonemes = ([] if start is None else [start]) \
|
||||
+ phonemes \
|
||||
+ ([] if end is None else [end])
|
||||
return self._filter_symbols(phonemes)
|
||||
|
||||
def _filter_symbols(self, phonemes):
|
||||
cleaned_phonemes = []
|
||||
for item in phonemes:
|
||||
if item in self.vocab.stoi:
|
||||
cleaned_phonemes.append(item)
|
||||
else:
|
||||
for char in item:
|
||||
if char in self.vocab.stoi:
|
||||
cleaned_phonemes.append(char)
|
||||
return cleaned_phonemes
|
||||
|
||||
def numericalize(self, phonemes):
|
||||
ids = [self.vocab.lookup(item) for item in phonemes]
|
||||
return ids
|
||||
|
||||
def __call__(self, sentence):
|
||||
return self.numericalize(self.phoneticize(sentence))
|
||||
|
||||
@property
|
||||
def vocab_size(self):
|
||||
return len(self.vocab)
|
||||
|
||||
def reverse(self, ids):
|
||||
return [self.vocab.reverse(i) for i in ids]
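The fall-back behaviour of `_filter_symbols` above (keep known symbols whole, otherwise keep only their known characters) can be checked without G2pM, using a plain set as a stand-in for `vocab.stoi`:

```python
def filter_symbols(phonemes, known):
    cleaned = []
    for item in phonemes:
        if item in known:
            cleaned.append(item)  # known symbol, keep as-is
        else:
            # unknown multi-char item: keep only its known characters
            cleaned.extend(c for c in item if c in known)
    return cleaned

known = {"ni3", "hao3", ",", "。"}
print(filter_symbols(["ni3", "hao3", ",x。"], known))  # → ['ni3', 'hao3', ',', '。']
```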
|
|
@ -0,0 +1,33 @@
|
|||
import abc
|
||||
import string
|
||||
|
||||
__all__ = ["get_punctuations"]
|
||||
|
||||
EN_PUNCT = [
|
||||
" ",
|
||||
"-",
|
||||
"...",
|
||||
",",
|
||||
".",
|
||||
"?",
|
||||
"!",
|
||||
]
|
||||
|
||||
CN_PUNCT = [
|
||||
"、",
|
||||
",",
|
||||
";",
|
||||
":",
|
||||
"。",
|
||||
"?",
|
||||
"!"
|
||||
]
|
||||
|
||||
def get_punctuations(lang):
|
||||
if lang == "en":
|
||||
return EN_PUNCT
|
||||
elif lang == "cn":
|
||||
return CN_PUNCT
|
||||
else:
|
||||
raise ValueError(f"language {lang} not supported")
|
||||
|
|
@ -0,0 +1,78 @@
|
|||
from typing import Dict, Iterable, List
|
||||
from ruamel import yaml
|
||||
from collections import OrderedDict
|
||||
|
||||
class Vocab(object):
|
||||
def __init__(self, symbols: Iterable[str],
|
||||
padding_symbol="<pad>",
|
||||
unk_symbol="<unk>",
|
||||
start_symbol="<s>",
|
||||
end_symbol="</s>"):
|
||||
self.special_symbols = OrderedDict()
|
||||
for i, item in enumerate(
|
||||
[padding_symbol, unk_symbol, start_symbol, end_symbol]):
|
||||
if item:
|
||||
self.special_symbols[item] = len(self.special_symbols)
|
||||
|
||||
self.padding_symbol = padding_symbol
|
||||
self.unk_symbol = unk_symbol
|
||||
self.start_symbol = start_symbol
|
||||
self.end_symbol = end_symbol
|
||||
|
||||
|
||||
self.stoi = OrderedDict()
|
||||
self.stoi.update(self.special_symbols)
|
||||
|
||||
for i, s in enumerate(symbols):
|
||||
if s not in self.stoi:
|
||||
self.stoi[s] = len(self.stoi)
|
||||
self.itos = {v: k for k, v in self.stoi.items()}
|
||||
|
||||
def __len__(self):
|
||||
return len(self.stoi)
|
||||
|
||||
@property
|
||||
def num_specials(self):
|
||||
return len(self.special_symbols)
|
||||
|
||||
# special tokens
|
||||
@property
|
||||
def padding_index(self):
|
||||
return self.stoi.get(self.padding_symbol, -1)
|
||||
|
||||
@property
|
||||
def unk_index(self):
|
||||
return self.stoi.get(self.unk_symbol, -1)
|
||||
|
||||
@property
|
||||
def start_index(self):
|
||||
return self.stoi.get(self.start_symbol, -1)
|
||||
|
||||
@property
|
||||
def end_index(self):
|
||||
return self.stoi.get(self.end_symbol, -1)
|
||||
|
||||
def __repr__(self):
|
||||
fmt = "Vocab(size: {},\nstoi:\n{})"
|
||||
return fmt.format(len(self), self.stoi)
|
||||
|
||||
def __str__(self):
|
||||
return self.__repr__()
|
||||
|
||||
def lookup(self, symbol):
|
||||
return self.stoi[symbol]
|
||||
|
||||
def reverse(self, index):
|
||||
return self.itos[index]
|
||||
|
||||
def add_symbol(self, symbol):
|
||||
if symbol in self.stoi:
|
||||
return
|
||||
N = len(self.stoi)
|
||||
self.stoi[symbol] = N
|
||||
self.itos[N] = symbol
|
||||
|
||||
def add_symbols(self, symbols):
|
||||
for symbol in symbols:
|
||||
self.add_symbol(symbol)
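The symbol-to-index construction above can be exercised with a minimal stand-in (hypothetical `build_stoi` helper): special symbols come first, then the given symbols in order, with duplicates skipped:

```python
from collections import OrderedDict

def build_stoi(symbols, specials=("<pad>", "<unk>", "<s>", "</s>")):
    # special symbols occupy the first indices
    stoi = OrderedDict((s, i) for i, s in enumerate(specials))
    for s in symbols:
        if s not in stoi:  # skip duplicates
            stoi[s] = len(stoi)
    return stoi

stoi = build_stoi(["a", "b", "a"])
print(dict(stoi))  # → {'<pad>': 0, '<unk>': 1, '<s>': 2, '</s>': 3, 'a': 4, 'b': 5}
```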
|
||||
|
|
@ -1,32 +0,0 @@
|
|||
# coding: utf-8
|
||||
"""Text processing frontend
|
||||
|
||||
All frontend module should have the following functions:
|
||||
|
||||
- text_to_sequence(text, p)
|
||||
- sequence_to_text(sequence)
|
||||
|
||||
and the property:
|
||||
|
||||
- n_vocab
|
||||
|
||||
"""
|
||||
from . import en
|
||||
|
||||
# optional Japanese frontend
|
||||
try:
|
||||
from . import jp
|
||||
except ImportError:
|
||||
jp = None
|
||||
|
||||
try:
|
||||
from . import ko
|
||||
except ImportError:
|
||||
ko = None
|
||||
|
||||
# if you are going to use the frontend, you need to modify _characters in symbol.py:
|
||||
# _characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!\'(),-.:;? ' + '¡¿ñáéíóúÁÉÍÓÚÑ'
|
||||
try:
|
||||
from . import es
|
||||
except ImportError:
|
||||
es = None
|
|
@ -1,34 +0,0 @@
|
|||
# coding: utf-8
|
||||
|
||||
from ..text.symbols import symbols
|
||||
from ..text import sequence_to_text
|
||||
|
||||
import nltk
|
||||
from random import random
|
||||
|
||||
n_vocab = len(symbols)
|
||||
|
||||
_arpabet = nltk.corpus.cmudict.dict()
|
||||
|
||||
|
||||
def _maybe_get_arpabet(word, p):
|
||||
try:
|
||||
phonemes = _arpabet[word][0]
|
||||
phonemes = " ".join(phonemes)
|
||||
except KeyError:
|
||||
return word
|
||||
|
||||
return '{%s}' % phonemes if random() < p else word
|
||||
|
||||
|
||||
def mix_pronunciation(text, p):
|
||||
text = ' '.join(_maybe_get_arpabet(word, p) for word in text.split(' '))
|
||||
return text
|
||||
|
||||
|
||||
def text_to_sequence(text, p=0.0):
|
||||
if p > 0:
|
||||
text = mix_pronunciation(text, p)
|
||||
from ..text import text_to_sequence
|
||||
text = text_to_sequence(text, ["english_cleaners"])
|
||||
return text
|
|
@ -1,14 +0,0 @@
|
|||
# coding: utf-8
|
||||
from ..text.symbols import symbols
|
||||
from ..text import sequence_to_text
|
||||
|
||||
import nltk
|
||||
from random import random
|
||||
|
||||
n_vocab = len(symbols)
|
||||
|
||||
|
||||
def text_to_sequence(text, p=0.0):
|
||||
from ..text import text_to_sequence
|
||||
text = text_to_sequence(text, ["basic_cleaners"])
|
||||
return text
|
|
@ -1,77 +0,0 @@
|
|||
# coding: utf-8
|
||||
|
||||
import MeCab
|
||||
import jaconv
|
||||
from random import random
|
||||
|
||||
n_vocab = 0xffff
|
||||
|
||||
_eos = 1
|
||||
_pad = 0
|
||||
_tagger = None
|
||||
|
||||
|
||||
def _yomi(mecab_result):
|
||||
tokens = []
|
||||
yomis = []
|
||||
for line in mecab_result.split("\n")[:-1]:
|
||||
s = line.split("\t")
|
||||
if len(s) == 1:
|
||||
break
|
||||
token, rest = s
|
||||
rest = rest.split(",")
|
||||
tokens.append(token)
|
||||
yomi = rest[7] if len(rest) > 7 else None
|
||||
yomi = None if yomi == "*" else yomi
|
||||
yomis.append(yomi)
|
||||
|
||||
return tokens, yomis
|
||||
|
||||
|
||||
def _mix_pronunciation(tokens, yomis, p):
|
||||
return "".join(yomis[idx]
|
||||
if yomis[idx] is not None and random() < p else tokens[idx]
|
||||
for idx in range(len(tokens)))
|
||||
|
||||
|
||||
def mix_pronunciation(text, p):
|
||||
global _tagger
|
||||
if _tagger is None:
|
||||
_tagger = MeCab.Tagger("")
|
||||
tokens, yomis = _yomi(_tagger.parse(text))
|
||||
return _mix_pronunciation(tokens, yomis, p)
|
||||
|
||||
|
||||
def add_punctuation(text):
|
||||
last = text[-1]
|
||||
if last not in [".", ",", "、", "。", "!", "?", "!", "?"]:
|
||||
text = text + "。"
|
||||
return text
|
||||
|
||||
|
||||
def normalize_delimitor(text):
|
||||
text = text.replace(",", "、")
|
||||
text = text.replace(".", "。")
|
||||
text = text.replace(",", "、")
|
||||
text = text.replace(".", "。")
|
||||
return text
|
||||
|
||||
|
||||
def text_to_sequence(text, p=0.0):
|
||||
for c in [" ", " ", "「", "」", "『", "』", "・", "【", "】", "(", ")", "(", ")"]:
|
||||
text = text.replace(c, "")
|
||||
text = text.replace("!", "!")
|
||||
text = text.replace("?", "?")
|
||||
|
||||
text = normalize_delimitor(text)
|
||||
text = jaconv.normalize(text)
|
||||
if p > 0:
|
||||
text = mix_pronunciation(text, p)
|
||||
text = jaconv.hira2kata(text)
|
||||
text = add_punctuation(text)
|
||||
|
||||
return [ord(c) for c in text] + [_eos] # EOS
|
||||
|
||||
|
||||
def sequence_to_text(seq):
|
||||
return "".join(chr(n) for n in seq)
|
|
@ -1,17 +0,0 @@
|
|||
# coding: utf-8
|
||||
|
||||
from random import random
|
||||
|
||||
n_vocab = 0xffff
|
||||
|
||||
_eos = 1
|
||||
_pad = 0
|
||||
_tagger = None
|
||||
|
||||
|
||||
def text_to_sequence(text, p=0.0):
|
||||
return [ord(c) for c in text] + [_eos] # EOS
|
||||
|
||||
|
||||
def sequence_to_text(seq):
|
||||
return "".join(chr(n) for n in seq)
|
|
@ -1,89 +0,0 @@
|
|||
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import re
|
||||
from . import cleaners
|
||||
from .symbols import symbols
|
||||
|
||||
# Mappings from symbol to numeric ID and vice versa:
|
||||
_symbol_to_id = {s: i for i, s in enumerate(symbols)}
|
||||
_id_to_symbol = {i: s for i, s in enumerate(symbols)}
|
||||
|
||||
# Regular expression matching text enclosed in curly braces:
|
||||
_curly_re = re.compile(r'(.*?)\{(.+?)\}(.*)')
|
||||
|
||||
|
||||
def text_to_sequence(text, cleaner_names):
|
||||
'''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
|
||||
|
||||
The text can optionally have ARPAbet sequences enclosed in curly braces embedded
|
||||
in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street."
|
||||
|
||||
Args:
|
||||
text: string to convert to a sequence
|
||||
cleaner_names: names of the cleaner functions to run the text through
|
||||
|
||||
Returns:
|
||||
List of integers corresponding to the symbols in the text
|
||||
'''
|
||||
sequence = []
|
||||
|
||||
# Check for curly braces and treat their contents as ARPAbet:
|
||||
while len(text):
|
||||
m = _curly_re.match(text)
|
||||
if not m:
|
||||
sequence += _symbols_to_sequence(_clean_text(text, cleaner_names))
|
||||
break
|
||||
sequence += _symbols_to_sequence(
|
||||
_clean_text(m.group(1), cleaner_names))
|
||||
sequence += _arpabet_to_sequence(m.group(2))
|
||||
text = m.group(3)
|
||||
|
||||
# Append EOS token
|
||||
sequence.append(_symbol_to_id['~'])
|
||||
return sequence
|
||||
|
||||
|
||||
def sequence_to_text(sequence):
|
||||
'''Converts a sequence of IDs back to a string'''
|
||||
result = ''
|
||||
for symbol_id in sequence:
|
||||
if symbol_id in _id_to_symbol:
|
||||
s = _id_to_symbol[symbol_id]
|
||||
# Enclose ARPAbet back in curly braces:
|
||||
if len(s) > 1 and s[0] == '@':
|
||||
s = '{%s}' % s[1:]
|
||||
result += s
|
||||
return result.replace('}{', ' ')
|
||||
|
||||
|
||||
def _clean_text(text, cleaner_names):
|
||||
for name in cleaner_names:
|
||||
cleaner = getattr(cleaners, name)
|
||||
if not cleaner:
|
||||
raise Exception('Unknown cleaner: %s' % name)
|
||||
text = cleaner(text)
|
||||
return text
|
||||
|
||||
|
||||
def _symbols_to_sequence(symbols):
|
||||
return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)]
|
||||
|
||||
|
||||
def _arpabet_to_sequence(text):
|
||||
return _symbols_to_sequence(['@' + s for s in text.split()])
|
||||
|
||||
|
||||
def _should_keep_symbol(s):
|
||||
return s in _symbol_to_id and s != '_' and s != '~'
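The curly-brace splitting used by `text_to_sequence` relies only on `_curly_re`; its behaviour on the docstring's own example can be seen in isolation:

```python
import re

_curly_re = re.compile(r'(.*?)\{(.+?)\}(.*)')

m = _curly_re.match("Turn left on {HH AW1 S S T AH0 N} Street.")
print(m.group(1))  # text before the braces → "Turn left on "
print(m.group(2))  # ARPAbet inside the braces → "HH AW1 S S T AH0 N"
print(m.group(3))  # remaining text → " Street."
```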
|
|
@ -1,110 +0,0 @@
|
|||
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
'''
|
||||
Cleaners are transformations that run over the input text at both training and eval time.
|
||||
|
||||
Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
|
||||
hyperparameter. Some cleaners are English-specific. You'll typically want to use:
|
||||
1. "english_cleaners" for English text
|
||||
2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
|
||||
the Unidecode library (https://pypi.python.org/pypi/Unidecode)
|
||||
3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
|
||||
the symbols in symbols.py to match your data).
|
||||
'''
|
||||
|
||||
import re
|
||||
from unidecode import unidecode
|
||||
from .numbers import normalize_numbers
|
||||
|
||||
# Regular expression matching whitespace:
|
||||
_whitespace_re = re.compile(r'\s+')
|
||||
|
||||
# List of (regular expression, replacement) pairs for abbreviations:
|
||||
_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1])
|
||||
for x in [
|
||||
('mrs', 'misess'),
|
||||
('mr', 'mister'),
|
||||
('dr', 'doctor'),
|
||||
('st', 'saint'),
|
||||
('co', 'company'),
|
||||
('jr', 'junior'),
|
||||
('maj', 'major'),
|
||||
('gen', 'general'),
|
||||
('drs', 'doctors'),
|
||||
('rev', 'reverend'),
|
||||
('lt', 'lieutenant'),
|
||||
('hon', 'honorable'),
|
||||
('sgt', 'sergeant'),
|
||||
('capt', 'captain'),
|
||||
('esq', 'esquire'),
|
||||
('ltd', 'limited'),
|
||||
('col', 'colonel'),
|
||||
('ft', 'fort'),
|
||||
]]
|
||||
|
||||
|
||||
def expand_abbreviations(text):
|
||||
for regex, replacement in _abbreviations:
|
||||
text = re.sub(regex, replacement, text)
|
||||
return text
|
||||
|
||||
|
||||
def expand_numbers(text):
|
||||
return normalize_numbers(text)
|
||||
|
||||
|
||||
def lowercase(text):
|
||||
return text.lower()
|
||||
|
||||
|
||||
def collapse_whitespace(text):
|
||||
return re.sub(_whitespace_re, ' ', text)
|
||||
|
||||
|
||||
def convert_to_ascii(text):
|
||||
return unidecode(text)
|
||||
|
||||
|
||||
def add_punctuation(text):
|
||||
if len(text) == 0:
|
||||
return text
|
||||
if text[-1] not in '!,.:;?':
|
||||
text = text + '.' # without this decoder is confused when to output EOS
|
||||
return text
|
||||
|
||||
|
||||
def basic_cleaners(text):
|
||||
'''Basic pipeline that lowercases and collapses whitespace without transliteration.'''
|
||||
text = lowercase(text)
|
||||
text = collapse_whitespace(text)
|
||||
return text
|
||||
|
||||
|
||||
def transliteration_cleaners(text):
|
||||
'''Pipeline for non-English text that transliterates to ASCII.'''
|
||||
text = convert_to_ascii(text)
|
||||
text = lowercase(text)
|
||||
text = collapse_whitespace(text)
|
||||
return text
|
||||
|
||||
|
||||
def english_cleaners(text):
|
||||
'''Pipeline for English text, including number and abbreviation expansion.'''
|
||||
text = convert_to_ascii(text)
|
||||
#text = add_punctuation(text)
|
||||
text = lowercase(text)
|
||||
text = expand_numbers(text)
|
||||
text = expand_abbreviations(text)
|
||||
text = collapse_whitespace(text)
|
||||
return text
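The lowercasing and whitespace-collapsing steps shared by every pipeline above are easy to check standalone:

```python
import re

_whitespace_re = re.compile(r'\s+')

def basic_clean(text):
    # lowercase, then collapse any run of whitespace to a single space
    return re.sub(_whitespace_re, ' ', text.lower())

print(basic_clean("Hello   WORLD\n\tagain"))  # → "hello world again"
```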
|
|
@ -1,78 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import re

valid_symbols = [
    'AA', 'AA0', 'AA1', 'AA2', 'AE', 'AE0', 'AE1', 'AE2', 'AH', 'AH0', 'AH1',
    'AH2', 'AO', 'AO0', 'AO1', 'AO2', 'AW', 'AW0', 'AW1', 'AW2', 'AY', 'AY0',
    'AY1', 'AY2', 'B', 'CH', 'D', 'DH', 'EH', 'EH0', 'EH1', 'EH2', 'ER', 'ER0',
    'ER1', 'ER2', 'EY', 'EY0', 'EY1', 'EY2', 'F', 'G', 'HH', 'IH', 'IH0',
    'IH1', 'IH2', 'IY', 'IY0', 'IY1', 'IY2', 'JH', 'K', 'L', 'M', 'N', 'NG',
    'OW', 'OW0', 'OW1', 'OW2', 'OY', 'OY0', 'OY1', 'OY2', 'P', 'R', 'S', 'SH',
    'T', 'TH', 'UH', 'UH0', 'UH1', 'UH2', 'UW', 'UW0', 'UW1', 'UW2', 'V', 'W',
    'Y', 'Z', 'ZH'
]

_valid_symbol_set = set(valid_symbols)


class CMUDict:
    '''Thin wrapper around CMUDict data. http://www.speech.cs.cmu.edu/cgi-bin/cmudict'''

    def __init__(self, file_or_path, keep_ambiguous=True):
        if isinstance(file_or_path, str):
            with open(file_or_path, encoding='latin-1') as f:
                entries = _parse_cmudict(f)
        else:
            entries = _parse_cmudict(file_or_path)
        if not keep_ambiguous:
            entries = {
                word: pron
                for word, pron in entries.items() if len(pron) == 1
            }
        self._entries = entries

    def __len__(self):
        return len(self._entries)

    def lookup(self, word):
        '''Returns list of ARPAbet pronunciations of the given word.'''
        return self._entries.get(word.upper())


_alt_re = re.compile(r'\([0-9]+\)')


def _parse_cmudict(file):
    cmudict = {}
    for line in file:
        if len(line) and (line[0] >= 'A' and line[0] <= 'Z' or line[0] == "'"):
            # CMUdict separates the word and its pronunciation with two spaces
            parts = line.split('  ')
            word = re.sub(_alt_re, '', parts[0])
            pronunciation = _get_pronunciation(parts[1])
            if pronunciation:
                if word in cmudict:
                    cmudict[word].append(pronunciation)
                else:
                    cmudict[word] = [pronunciation]
    return cmudict


def _get_pronunciation(s):
    parts = s.strip().split(' ')
    for part in parts:
        if part not in _valid_symbol_set:
            return None
    return ' '.join(parts)
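For illustration, the CMUdict parsing logic can be exercised against an in-memory file. The sketch below is a simplified, self-contained mirror of the parser (the two sample entries and the tiny phone set are hypothetical, standing in for the full ARPAbet symbol list):

```python
import io
import re

_alt_re = re.compile(r'\([0-9]+\)')
_valid = {'P', 'AY1', 'TH', 'AA0', 'AH0', 'N'}

def parse(file):
    # Simplified mirror of _parse_cmudict: word and pronunciation are
    # separated by two spaces; a "(n)" suffix marks alternate pronunciations.
    entries = {}
    for line in file:
        if line and ('A' <= line[0] <= 'Z' or line[0] == "'"):
            word_field, pron = line.split('  ', 1)
            word = _alt_re.sub('', word_field)
            phones = pron.strip().split(' ')
            if all(p in _valid for p in phones):
                entries.setdefault(word, []).append(' '.join(phones))
    return entries

sample = io.StringIO(
    "PYTHON  P AY1 TH AA0 N\n"
    "PYTHON(1)  P AY1 TH AH0 N\n")
entries = parse(sample)
print(entries['PYTHON'])  # ['P AY1 TH AA0 N', 'P AY1 TH AH0 N']
```

Both lines collapse to the same key, so a lookup returns every alternate pronunciation of the word.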
@ -1,71 +0,0 @@
# -*- coding: utf-8 -*-

import inflect
import re

_inflect = inflect.engine()
_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
_number_re = re.compile(r'[0-9]+')


def _remove_commas(m):
    return m.group(1).replace(',', '')


def _expand_decimal_point(m):
    return m.group(1).replace('.', ' point ')


def _expand_dollars(m):
    match = m.group(1)
    parts = match.split('.')
    if len(parts) > 2:
        return match + ' dollars'  # Unexpected format
    dollars = int(parts[0]) if parts[0] else 0
    cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
    if dollars and cents:
        dollar_unit = 'dollar' if dollars == 1 else 'dollars'
        cent_unit = 'cent' if cents == 1 else 'cents'
        return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
    elif dollars:
        dollar_unit = 'dollar' if dollars == 1 else 'dollars'
        return '%s %s' % (dollars, dollar_unit)
    elif cents:
        cent_unit = 'cent' if cents == 1 else 'cents'
        return '%s %s' % (cents, cent_unit)
    else:
        return 'zero dollars'


def _expand_ordinal(m):
    return _inflect.number_to_words(m.group(0))


def _expand_number(m):
    num = int(m.group(0))
    if num > 1000 and num < 3000:
        if num == 2000:
            return 'two thousand'
        elif num > 2000 and num < 2010:
            return 'two thousand ' + _inflect.number_to_words(num % 100)
        elif num % 100 == 0:
            return _inflect.number_to_words(num // 100) + ' hundred'
        else:
            return _inflect.number_to_words(
                num, andword='', zero='oh', group=2).replace(', ', ' ')
    else:
        return _inflect.number_to_words(num, andword='')


def normalize_numbers(text):
    text = re.sub(_comma_number_re, _remove_commas, text)
    text = re.sub(_pounds_re, r'\1 pounds', text)
    text = re.sub(_dollars_re, _expand_dollars, text)
    text = re.sub(_decimal_number_re, _expand_decimal_point, text)
    text = re.sub(_ordinal_re, _expand_ordinal, text)
    text = re.sub(_number_re, _expand_number, text)
    return text
@ -1,30 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''
Defines the set of symbols used in text input to the model.

The default is a set of ASCII characters that works well for English or text that has been run
through Unidecode. For other data, you can modify _characters. See TRAINING_DATA.md for details.
'''
from .cmudict import valid_symbols

_pad = '_'
_eos = '~'
_characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!\'(),-.:;? '

# Prepend "@" to ARPAbet symbols to ensure uniqueness (some are the same as uppercase letters):
_arpabet = ['@' + s for s in valid_symbols]

# Export all symbols:
symbols = [_pad, _eos] + list(_characters) + _arpabet
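A typical consumer of the exported `symbols` list is a text frontend that maps symbols to integer ids and back. The sketch below is hypothetical usage with a shortened inline symbol list standing in for the real one; the `@`-prefix trick keeps ARPAbet phones distinct from uppercase characters:

```python
# Shortened stand-in for the exported `symbols` list.
_pad = '_'
_eos = '~'
_characters = 'abc! '
_arpabet = ['@AA', '@AE']  # "@" prefix avoids clashes with plain letters
symbols = [_pad, _eos] + list(_characters) + _arpabet

# Bidirectional lookup tables, as a text frontend would build them.
symbol_to_id = {s: i for i, s in enumerate(symbols)}
id_to_symbol = {i: s for s, i in symbol_to_id.items()}

assert len(symbol_to_id) == len(symbols)  # all symbols are unique
assert symbol_to_id['_'] == 0             # padding symbol is index 0

ids = [symbol_to_id[s] for s in 'abc']
print(ids)  # [2, 3, 4]
```

Keeping the pad symbol at index 0 is the usual convention, since padded positions then encode to zeros.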
@ -11,3 +11,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from parakeet.models.clarinet import *
from parakeet.models.waveflow import *
from parakeet.models.wavenet import *

from parakeet.models.transformer_tts import *
from parakeet.models.deepvoice3 import *
# from parakeet.models.fastspeech import *
@ -0,0 +1,158 @@
import paddle
from paddle import nn
from paddle.nn import functional as F
from paddle import distribution as D

from parakeet.models.wavenet import WaveNet, UpsampleNet, crop

__all__ = ["Clarinet"]


class ParallelWaveNet(nn.LayerList):
    def __init__(self, n_loops, n_layers, residual_channels, condition_dim,
                 filter_size):
        """ParallelWaveNet, an inverse autoregressive flow model; it contains several flows (WaveNets).

        Args:
            n_loops (List[int]): `n_loop` for each flow.
            n_layers (List[int]): `n_layer` for each flow.
            residual_channels (int): `residual_channels` for every flow.
            condition_dim (int): `condition_dim` for every flow.
            filter_size (int): `filter_size` for every flow.
        """
        super(ParallelWaveNet, self).__init__()
        for n_loop, n_layer in zip(n_loops, n_layers):
            # The teacher's log_scale_min does not matter here, -100 is a dummy value.
            self.append(
                WaveNet(n_loop, n_layer, residual_channels, 3, condition_dim,
                        filter_size, "mog", -100.0))

    def forward(self, z, condition=None):
        """Transform random noise sampled from a standard Gaussian distribution into a sample from the target distribution, and output the mean and log standard deviation of the output distribution.

        Args:
            z (Variable): shape(B, T), random noise sampled from a standard Gaussian distribution.
            condition (Variable, optional): shape(B, F, T), dtype float, the upsampled condition. Defaults to None.

        Returns:
            (z, out_mu, out_log_std)
            z (Variable): shape(B, T), dtype float, transformed noise; it is the synthesized waveform.
            out_mu (Variable): shape(B, T), dtype float, means of the output distributions.
            out_log_std (Variable): shape(B, T), dtype float, log standard deviations of the output distributions.
        """
        for i, flow in enumerate(self):
            theta = flow(z, condition)  # w, mu, log_std [0: T]
            w, mu, log_std = paddle.chunk(theta, 3, axis=-1)  # (B, T, 1) for each
            mu = paddle.squeeze(mu, -1)  # [0: T]
            log_std = paddle.squeeze(log_std, -1)  # [0: T]
            z = z * paddle.exp(log_std) + mu  # [0: T]

            if i == 0:
                out_mu = mu
                out_log_std = log_std
            else:
                out_mu = out_mu * paddle.exp(log_std) + mu
                out_log_std += log_std

        return z, out_mu, out_log_std


# Gaussian IAF model
class Clarinet(nn.Layer):
    def __init__(self, encoder, teacher, student, stft,
                 min_log_scale=-6.0, lmd=4.0):
        """Clarinet model. Conditional Parallel WaveNet.

        Args:
            encoder (UpsampleNet): an UpsampleNet to upsample mel spectrogram.
            teacher (WaveNet): a WaveNet, the teacher.
            student (ParallelWaveNet): a ParallelWaveNet model, the student.
            stft (STFT): an STFT model to perform differentiable stft transform.
            min_log_scale (float, optional): used only for computing loss, the minimal value of the log standard deviation of the output distributions of both the teacher and the student. Defaults to -6.0.
            lmd (float, optional): weight for stft loss. Defaults to 4.0.
        """
        super(Clarinet, self).__init__()
        self.encoder = encoder
        self.teacher = teacher
        self.student = student
        self.stft = stft

        self.lmd = lmd
        self.min_log_scale = min_log_scale

    def forward(self, audio, mel, audio_start, clip_kl=True):
        """Compute loss of Clarinet model.

        Args:
            audio (Variable): shape(B, T_audio), dtype float32, ground truth waveform.
            mel (Variable): shape(B, F, T_mel), dtype float32, condition (mel spectrogram here).
            audio_start (Variable): shape(B, ), dtype int64, audio start positions.
            clip_kl (bool, optional): whether to clip kl_loss by maximum=100. Defaults to True.

        Returns:
            Dict(str, Variable)
            loss (Variable): shape(1, ), dtype float32, total loss.
            kl (Variable): shape(1, ), dtype float32, kl divergence between the teacher's output distribution and the student's output distribution.
            regularization (Variable): shape(1, ), dtype float32, a regularization term of the KL divergence.
            spectrogram_frame_loss (Variable): shape(1, ), dtype float32, stft loss, the L1-distance of the magnitudes of the spectrograms of the ground truth waveform and the synthesized waveform.
        """
        batch_size, audio_length = audio.shape  # audio clip's length

        z = paddle.randn(audio.shape)
        condition = self.encoder(mel)  # (B, C, T)
        condition_slice = crop(condition, audio_start, audio_length)

        x, s_means, s_scales = self.student(z, condition_slice)  # all [0: T]
        s_means = s_means[:, 1:]  # (B, T-1), time steps [1: T]
        s_scales = s_scales[:, 1:]  # (B, T-1), time steps [1: T]
        s_clipped_scales = paddle.clip(s_scales, self.min_log_scale, 100.)

        # teacher outputs single gaussian
        y = self.teacher(x[:, :-1], condition_slice[:, :, 1:])
        _, t_means, t_scales = paddle.chunk(y, 3, axis=-1)  # time steps [1: T]
        t_means = paddle.squeeze(t_means, [-1])  # (B, T-1), time steps [1: T]
        t_scales = paddle.squeeze(t_scales, [-1])  # (B, T-1), time steps [1: T]
        t_clipped_scales = paddle.clip(t_scales, self.min_log_scale, 100.)

        s_distribution = D.Normal(s_means, paddle.exp(s_clipped_scales))
        t_distribution = D.Normal(t_means, paddle.exp(t_clipped_scales))

        # the kl divergence between two Gaussians has a closed form,
        # so no Monte Carlo sampling is needed
        kl = s_distribution.kl_divergence(t_distribution)
        if clip_kl:
            kl = paddle.clip(kl, -100., 10.)
        # context size dropped
        kl = paddle.reduce_mean(kl[:, self.teacher.context_size:])
        # major diff here
        regularization = F.mse_loss(t_scales[:, self.teacher.context_size:],
                                    s_scales[:, self.teacher.context_size:])

        # introduce information from real target
        spectrogram_frame_loss = F.mse_loss(
            self.stft.magnitude(audio), self.stft.magnitude(x))
        loss = kl + self.lmd * regularization + spectrogram_frame_loss
        loss_dict = {
            "loss": loss,
            "kl_divergence": kl,
            "regularization": regularization,
            "stft_loss": spectrogram_frame_loss
        }
        return loss_dict

    @paddle.no_grad()
    def synthesis(self, mel):
        """Synthesize waveform using the encoder and the student network.

        Args:
            mel (Variable): shape(B, F, T_mel), the condition (mel spectrogram here).

        Returns:
            Variable: shape(B, T_audio), the synthesized waveform. (T_audio = T_mel * upscale_factor, where upscale_factor is the `upscale_factor` of the encoder.)
        """
        condition = self.encoder(mel)
        samples_shape = (condition.shape[0], condition.shape[-1])
        z = paddle.randn(samples_shape)
        x, s_means, s_scales = self.student(z, condition)
        return x


# TODO(chenfeiyu): ClariNetLoss
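The mean/log-std bookkeeping in `ParallelWaveNet.forward` relies on the fact that composing affine flows gives another affine flow: if each flow maps z to z * exp(log_std_i) + mu_i, the composition has an accumulated mean and log-scale. A minimal pure-Python sketch (scalar z, hypothetical per-flow parameters) checks that the accumulated (out_mu, out_log_std) reproduces the composed transform:

```python
import math

# Hypothetical per-flow (mu, log_std) parameters for three flows.
flows = [(0.5, 0.1), (-0.3, -0.2), (1.0, 0.3)]

def compose(z, flows):
    # Mirrors ParallelWaveNet.forward's accumulation of the output
    # distribution's mean and log standard deviation.
    out_mu, out_log_std = 0.0, 0.0
    for i, (mu, log_std) in enumerate(flows):
        z = z * math.exp(log_std) + mu
        if i == 0:
            out_mu, out_log_std = mu, log_std
        else:
            out_mu = out_mu * math.exp(log_std) + mu
            out_log_std += log_std
    return z, out_mu, out_log_std

z0 = 0.7
z, out_mu, out_log_std = compose(z0, flows)
# The composition is itself affine: z == z0 * exp(out_log_std) + out_mu.
assert abs(z - (z0 * math.exp(out_log_std) + out_mu)) < 1e-12
```

This is why the student can report a single Gaussian (out_mu, out_log_std) to compare against the teacher, even though the sample passed through several flows.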
@ -1,16 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from .net import *
from .parallel_wavenet import *
@ -1,221 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import itertools
import numpy as np
from scipy import signal
from tqdm import trange

import paddle.fluid.layers as F
import paddle.fluid.dygraph as dg
import paddle.fluid.initializer as I
import paddle.fluid.layers.distributions as D

from parakeet.modules.weight_norm import Conv2DTranspose
from parakeet.models.wavenet import crop, WaveNet, UpsampleNet
from parakeet.models.clarinet.parallel_wavenet import ParallelWaveNet
from parakeet.models.clarinet.utils import conv2d


# Gaussian IAF model
class Clarinet(dg.Layer):
    def __init__(self,
                 encoder,
                 teacher,
                 student,
                 stft,
                 min_log_scale=-6.0,
                 lmd=4.0):
        """Clarinet model.

        Args:
            encoder (UpsampleNet): an UpsampleNet to upsample mel spectrogram.
            teacher (WaveNet): a WaveNet, the teacher.
            student (ParallelWaveNet): a ParallelWaveNet model, the student.
            stft (STFT): an STFT model to perform differentiable stft transform.
            min_log_scale (float, optional): used only for computing loss, the minimal value of the log standard deviation of the output distributions of both the teacher and the student. Defaults to -6.0.
            lmd (float, optional): weight for stft loss. Defaults to 4.0.
        """
        super(Clarinet, self).__init__()
        self.encoder = encoder
        self.teacher = teacher
        self.student = student
        self.stft = stft

        self.lmd = lmd
        self.min_log_scale = min_log_scale

    def forward(self, audio, mel, audio_start, clip_kl=True):
        """Compute loss of Clarinet model.

        Args:
            audio (Variable): shape(B, T_audio), dtype float32, ground truth waveform.
            mel (Variable): shape(B, F, T_mel), dtype float32, condition (mel spectrogram here).
            audio_start (Variable): shape(B, ), dtype int64, audio start positions.
            clip_kl (bool, optional): whether to clip kl_loss by maximum=100. Defaults to True.

        Returns:
            Dict(str, Variable)
            loss (Variable): shape(1, ), dtype float32, total loss.
            kl (Variable): shape(1, ), dtype float32, kl divergence between the teacher's output distribution and the student's output distribution.
            regularization (Variable): shape(1, ), dtype float32, a regularization term of the KL divergence.
            spectrogram_frame_loss (Variable): shape(1, ), dtype float32, stft loss, the L1-distance of the magnitudes of the spectrograms of the ground truth waveform and the synthesized waveform.
        """
        batch_size, audio_length = audio.shape  # audio clip's length

        z = F.gaussian_random(audio.shape)
        condition = self.encoder(mel)  # (B, C, T)
        condition_slice = crop(condition, audio_start, audio_length)

        x, s_means, s_scales = self.student(z, condition_slice)  # all [0: T]
        s_means = s_means[:, 1:]  # (B, T-1), time steps [1: T]
        s_scales = s_scales[:, 1:]  # (B, T-1), time steps [1: T]
        s_clipped_scales = F.clip(s_scales, self.min_log_scale, 100.)

        # teacher outputs single gaussian
        y = self.teacher(x[:, :-1], condition_slice[:, :, 1:])
        _, t_means, t_scales = F.split(y, 3, -1)  # time steps [1: T]
        t_means = F.squeeze(t_means, [-1])  # (B, T-1), time steps [1: T]
        t_scales = F.squeeze(t_scales, [-1])  # (B, T-1), time steps [1: T]
        t_clipped_scales = F.clip(t_scales, self.min_log_scale, 100.)

        s_distribution = D.Normal(s_means, F.exp(s_clipped_scales))
        t_distribution = D.Normal(t_means, F.exp(t_clipped_scales))

        # the kl divergence between two Gaussians has a closed form,
        # so no Monte Carlo sampling is needed
        kl = s_distribution.kl_divergence(t_distribution)
        if clip_kl:
            kl = F.clip(kl, -100., 10.)
        # context size dropped
        kl = F.reduce_mean(kl[:, self.teacher.context_size:])
        # major diff here
        regularization = F.mse_loss(t_scales[:, self.teacher.context_size:],
                                    s_scales[:, self.teacher.context_size:])

        # introduce information from real target
        spectrogram_frame_loss = F.mse_loss(
            self.stft.magnitude(audio), self.stft.magnitude(x))
        loss = kl + self.lmd * regularization + spectrogram_frame_loss
        loss_dict = {
            "loss": loss,
            "kl_divergence": kl,
            "regularization": regularization,
            "stft_loss": spectrogram_frame_loss
        }
        return loss_dict

    @dg.no_grad
    def synthesis(self, mel):
        """Synthesize waveform using the encoder and the student network.

        Args:
            mel (Variable): shape(B, F, T_mel), the condition (mel spectrogram here).

        Returns:
            Variable: shape(B, T_audio), the synthesized waveform. (T_audio = T_mel * upscale_factor, where upscale_factor is the `upscale_factor` of the encoder.)
        """
        condition = self.encoder(mel)
        samples_shape = (condition.shape[0], condition.shape[-1])
        z = F.gaussian_random(samples_shape)
        x, s_means, s_scales = self.student(z, condition)
        return x


class STFT(dg.Layer):
    def __init__(self, n_fft, hop_length, win_length, window="hanning"):
        """A module for computing differentiable stft transform. See `librosa.stft` for more details.

        Args:
            n_fft (int): number of samples in a frame.
            hop_length (int): number of samples shifted between adjacent frames.
            win_length (int): length of the window function.
            window (str, optional): name of window function, see `scipy.signal.get_window` for more details. Defaults to "hanning".
        """
        super(STFT, self).__init__()
        self.hop_length = hop_length
        self.n_bin = 1 + n_fft // 2
        self.n_fft = n_fft

        # calculate window
        window = signal.get_window(window, win_length)
        if n_fft != win_length:
            pad = (n_fft - win_length) // 2
            window = np.pad(window, ((pad, pad), ), 'constant')

        # calculate weights
        r = np.arange(0, n_fft)
        M = np.expand_dims(r, -1) * np.expand_dims(r, 0)
        w_real = np.reshape(window *
                            np.cos(2 * np.pi * M / n_fft)[:self.n_bin],
                            (self.n_bin, 1, 1, self.n_fft)).astype("float32")
        w_imag = np.reshape(window *
                            np.sin(-2 * np.pi * M / n_fft)[:self.n_bin],
                            (self.n_bin, 1, 1, self.n_fft)).astype("float32")

        w = np.concatenate([w_real, w_imag], axis=0)
        self.weight = dg.to_variable(w)

    def forward(self, x):
        """Compute the stft transform.

        Args:
            x (Variable): shape(B, T), dtype float32, the input waveform.

        Returns:
            (real, imag)
            real (Variable): shape(B, C, 1, T), dtype float32, the real part of the spectrogram. (C = 1 + n_fft // 2)
            imag (Variable): shape(B, C, 1, T), dtype float32, the imaginary part of the spectrogram. (C = 1 + n_fft // 2)
        """
        # x(batch_size, time_steps)
        # pad it first with reflect mode
        pad_start = F.reverse(x[:, 1:1 + self.n_fft // 2], axis=1)
        pad_stop = F.reverse(x[:, -(1 + self.n_fft // 2):-1], axis=1)
        x = F.concat([pad_start, x, pad_stop], axis=-1)

        # to BC1T, C=1
        x = F.unsqueeze(x, axes=[1, 2])
        out = conv2d(x, self.weight, stride=(1, self.hop_length))
        real, imag = F.split(out, 2, dim=1)  # BC1T
        return real, imag

    def power(self, x):
        """Compute the power spectrogram.

        Args:
            x (Variable): shape(B, T), dtype float32, the input waveform.

        Returns:
            Variable: shape(B, C, 1, T), dtype float32, the power spectrogram.
        """
        real, imag = self(x)
        power = real**2 + imag**2
        return power

    def magnitude(self, x):
        """Compute the magnitude spectrogram.

        Args:
            x (Variable): shape(B, T), dtype float32, the input waveform.

        Returns:
            Variable: shape(B, C, 1, T), dtype float32, the magnitude spectrogram. It is the square root of the power spectrogram.
        """
        power = self.power(x)
        magnitude = F.sqrt(power)
        return magnitude
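The STFT layer above implements the DFT as a convolution with fixed cosine/sine weights, keeping only the first `1 + n_fft // 2` bins. A pure-Python sketch (tiny `n_fft`, rectangular window, a hypothetical windowed frame) checks that those weights reproduce the reference DFT computed with `cmath` for one frame:

```python
import cmath
import math

n_fft = 4
n_bin = 1 + n_fft // 2
frame = [0.5, -1.0, 0.25, 0.75]  # hypothetical windowed frame

# Fixed DFT weights, as built in STFT.__init__ (rectangular window):
# the real part uses cos(2*pi*k*n/n_fft), the imaginary part sin(-2*pi*k*n/n_fft).
real = [sum(x * math.cos(2 * math.pi * k * n / n_fft)
            for n, x in enumerate(frame)) for k in range(n_bin)]
imag = [sum(x * math.sin(-2 * math.pi * k * n / n_fft)
            for n, x in enumerate(frame)) for k in range(n_bin)]

# Reference DFT for the kept bins: X_k = sum_n x_n * exp(-2j*pi*k*n/n_fft).
ref = [sum(x * cmath.exp(-2j * math.pi * k * n / n_fft)
           for n, x in enumerate(frame)) for k in range(n_bin)]

for k in range(n_bin):
    assert abs(complex(real[k], imag[k]) - ref[k]) < 1e-9

# Magnitude per bin, as STFT.magnitude computes via sqrt(real^2 + imag^2).
magnitude = [math.hypot(r, i) for r, i in zip(real, imag)]
```

Expressing the DFT this way keeps the whole spectrogram computation differentiable, since it is just a strided convolution with constant weights.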
@ -1,77 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division
import math
import time
import itertools
import numpy as np

import paddle.fluid.layers as F
import paddle.fluid.dygraph as dg
import paddle.fluid.initializer as I
import paddle.fluid.layers.distributions as D

from parakeet.modules.weight_norm import Linear, Conv1D, Conv1DCell, Conv2DTranspose
from parakeet.models.wavenet import WaveNet


class ParallelWaveNet(dg.Layer):
    def __init__(self, n_loops, n_layers, residual_channels, condition_dim,
                 filter_size):
        """ParallelWaveNet, an inverse autoregressive flow model; it contains several flows (WaveNets).

        Args:
            n_loops (List[int]): `n_loop` for each flow.
            n_layers (List[int]): `n_layer` for each flow.
            residual_channels (int): `residual_channels` for every flow.
            condition_dim (int): `condition_dim` for every flow.
            filter_size (int): `filter_size` for every flow.
        """
        super(ParallelWaveNet, self).__init__()
        self.flows = dg.LayerList()
        for n_loop, n_layer in zip(n_loops, n_layers):
            # The teacher's log_scale_min does not matter here, -100 is a dummy value.
            self.flows.append(
                WaveNet(n_loop, n_layer, residual_channels, 3, condition_dim,
                        filter_size, "mog", -100.0))

    def forward(self, z, condition=None):
        """Transform random noise sampled from a standard Gaussian distribution into a sample from the target distribution, and output the mean and log standard deviation of the output distribution.

        Args:
            z (Variable): shape(B, T), random noise sampled from a standard Gaussian distribution.
            condition (Variable, optional): shape(B, F, T), dtype float, the upsampled condition. Defaults to None.

        Returns:
            (z, out_mu, out_log_std)
            z (Variable): shape(B, T), dtype float, transformed noise; it is the synthesized waveform.
            out_mu (Variable): shape(B, T), dtype float, means of the output distributions.
            out_log_std (Variable): shape(B, T), dtype float, log standard deviations of the output distributions.
        """
        for i, flow in enumerate(self.flows):
            theta = flow(z, condition)  # w, mu, log_std [0: T]
            w, mu, log_std = F.split(theta, 3, dim=-1)  # (B, T, 1) for each
            mu = F.squeeze(mu, [-1])  # [0: T]
            log_std = F.squeeze(log_std, [-1])  # [0: T]
            z = z * F.exp(log_std) + mu  # [0: T]

            if i == 0:
                out_mu = mu
                out_log_std = log_std
            else:
                out_mu = out_mu * F.exp(log_std) + mu
                out_log_std += log_std

        return z, out_mu, out_log_std
@ -1,38 +0,0 @@
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division

from paddle import fluid
from paddle.fluid.core import ops


@fluid.framework.dygraph_only
def conv2d(input,
           weight,
           stride=(1, 1),
           padding=((0, 0), (0, 0)),
           dilation=(1, 1),
           groups=1,
           use_cudnn=True,
           data_format="NCHW"):
    padding = tuple(pad for pad_dim in padding for pad in pad_dim)

    attrs = ('strides', stride, 'paddings', padding, 'dilations', dilation,
             'groups', groups, 'use_cudnn', use_cudnn, 'use_mkldnn', False,
             'fuse_relu_before_depthwise_conv', False, "padding_algorithm",
             "EXPLICIT", "data_format", data_format)

    out = ops.conv2d(input, weight, *attrs)
    return out