update experiment guide

iclementine 2020-11-17 16:33:13 +08:00
parent e470cda881
commit c8622b4699
3 changed files with 82 additions and 90 deletions


@@ -167,7 +167,7 @@ ljspeech = TransformDataset(meta, transform)
Of course, you can also write a dedicated conversion script that saves the transformed dataset to disk, and then write a matching Dataset subclass to load the saved data. In practice, doing so is more efficient.
- Next, we need to write a function that collects multiple examples into a batch. Since the mel spectrograms are sequential data, we need to apply padding.
+ Next, we need to write a callable object that collects multiple examples into a batch. Since both the ids and the mel spectrograms are sequential data, we need to apply padding.
```python
class LJSpeechCollector(object):
@@ -187,7 +187,7 @@ class LJSpeechCollector(object):
return ids, np.transpose(mels, [0, 2, 1]), stop_probs
```
With the components above ready, we can set up the whole data flow.
```python
def create_dataloader(source_path, valid_size, batch_size):
@@ -213,4 +213,4 @@ def create_dataloader(source_path, valid_size, batch_size):
return train_loader, valid_loader
```
- train_loader and valid_loader can be iterated over. Calling next on their iterators returns a list of paddle.Tensor representing one batch; these can be used directly as inputs to a `paddle.nn.Layer`.
+ train_loader and valid_loader can be iterated over. Calling next on their iterators returns a list of `paddle.Tensor` representing one batch; these can be used directly as inputs to a `paddle.nn.Layer`.
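For reference, a minimal collector along the lines of the snippet above might look as follows. The field names (`ids`, `mels`, `stop_probs`) and the final transpose follow the excerpt; the padding logic and padding values are a sketch, not the exact implementation in the repo.
```python
import numpy as np

class LJSpeechCollector(object):
    """Collect examples into a batch, padding ids and mel
    spectrograms to the longest example in the batch."""

    def __init__(self, padding_idx=0, padding_value=0.0):
        self.padding_idx = padding_idx
        self.padding_value = padding_value

    def __call__(self, examples):
        ids = [ex[0] for ex in examples]         # int arrays, shape (T_text,)
        mels = [ex[1] for ex in examples]        # float arrays, shape (n_mels, T_mel)
        stop_probs = [ex[2] for ex in examples]  # float arrays, shape (T_mel,)

        max_text = max(len(x) for x in ids)
        max_mel = max(m.shape[1] for m in mels)

        ids = np.stack(
            [np.pad(x, (0, max_text - len(x)), constant_values=self.padding_idx)
             for x in ids])
        mels = np.stack(
            [np.pad(m, ((0, 0), (0, max_mel - m.shape[1])),
                    constant_values=self.padding_value)
             for m in mels])
        # assumption: the stop probability is 1 after the last real frame
        stop_probs = np.stack(
            [np.pad(s, (0, max_mel - len(s)), constant_values=1.0)
             for s in stop_probs])

        # (batch, n_mels, T_mel) -> (batch, T_mel, n_mels), as in the excerpt
        return ids, np.transpose(mels, [0, 2, 1]), stop_probs
```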


@@ -1,87 +0,0 @@
# How to build your own model and experiment?
For a general deep learning experiment, there are four parts to take care of.
1. Preprocess the dataset to meet the needs of model training, and iterate over it in batches;
2. Define the model and the optimizer;
3. Write the training process (including forward-backward computation, parameter update, logging, evaluation, etc.);
4. Configure and launch the experiment.
## Data Processing
For processing data, `parakeet.data` provides `DatasetMixin`, `DataCargo` and `DataIterator`.
A dataset is an iterable collection of examples. `DatasetMixin` provides the standard indexing interface, and other classes in [parakeet.data.dataset](../parakeet/data/dataset.py) provide flexible interfaces for building customized datasets.
`DataCargo` is an iterable object of batches. It differs from a dataset in that it is iterated over in batches. In addition to a dataset, a `Sampler` and a `batch function` are required to build a `DataCargo`. The `Sampler` specifies which examples to pick, and the `batch function` specifies how to create a batch from them. Commonly used `Sampler`s are provided by [parakeet.data](../parakeet/data/). Users should define a `batch function` for a dataset in order to batch its examples.
`DataIterator` is an iterator class for `DataCargo`. It is created when an iterator of a `DataCargo` is created explicitly via `iter()`, or when a `DataCargo` is iterated over with a `for` loop.
Data processing is split into two phases: sample-level processing and batching.
1. Sample-level processing, which transforms one example into another. It can be defined as the `get_example()` method of a dataset, or as a `transform` (a callable object) from which a `TransformDataset` is built.
2. Batching, which transforms a list of examples into a batch. The rationale is to transform an array of structures into a structure of arrays. We generally define a batch function (a callable object) to do this.
To connect a `DataCargo` with PaddlePaddle's asynchronous data loading mechanism, we need to create a `fluid.io.DataLoader` and connect it to the `DataCargo`.
The overview of data processing in an experiment with Parakeet is:
```text
Dataset --(transform)--> Dataset --+
sampler --+
batch_fn --+-> DataCargo --> DataLoader
```
The user needs to define a customized transform and a batch function to accomplish this process, as sketched below. See [data](./data.md) for more details.
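A minimal sketch of this pipeline, assuming `TransformDataset`, `DataCargo` and `SequentialSampler` are importable from `parakeet.data` as described above; the toy dataset and the exact keyword arguments of `DataCargo` are illustrative assumptions, not the real signatures:
```python
import numpy as np
from parakeet.data import TransformDataset, DataCargo, SequentialSampler

# a toy base dataset: any indexable collection of examples (here, 1-D waveforms)
raw_dataset = [np.random.randn(np.random.randint(80, 120)) for _ in range(32)]

def transform(wav):
    # sample-level processing: one example in, one example out
    return wav / (np.abs(wav).max() + 1e-6)

def batch_fn(examples):
    # batching: an array of structures -> a structure of arrays
    max_len = max(len(x) for x in examples)
    return np.stack([np.pad(x, (0, max_len - len(x))) for x in examples])

dataset = TransformDataset(raw_dataset, transform)
cargo = DataCargo(dataset, batch_fn=batch_fn, batch_size=8,
                  sampler=SequentialSampler(dataset))  # assumed kwargs

for batch in cargo:      # iterating creates a DataIterator under the hood
    print(batch.shape)   # (8, max_len)
```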
## Model
Parakeet provides commonly used functions, modules and models for users to define their own models. Functions contain no trainable `Parameter`s and are used in modules and models. Modules and models are subclasses of `fluid.dygraph.Layer`. The distinction is that `module`s tend to be generic, simple and highly reusable, while `model`s tend to be task-specific, complicated and not that reusable. Some models are so complicated that we extract building blocks from them as separate classes; if these building blocks are not common and reusable enough, they are considered submodels.
In the structure of the project, modules are placed in [parakeet.modules](../parakeet/modules/), while models are in [parakeet.models](../parakeet/models) and grouped into folders like `waveflow` and `wavenet`, each of which includes a whole model and its submodels.
When developers want to add new models to `parakeet`, they can consider the distinctions described above and put the code in an appropriate place.
## Training Process
The training process basically runs a training loop multiple times. A typical training loop consists of the procedures below:
1. Iterating over training dataset;
2. Preprocessing mini-batches;
3. Forward/backward computations of the neural networks;
4. Updating Parameters;
5. Evaluating the model on validation dataset;
6. Logging or saving intermediate results;
7. Saving checkpoints of the model and the optimizer.
The section `Data Processing` covers steps 1 and 2; `Model` and `Optimizer` cover steps 3 and 4.
To keep the training loop clear, it's a good idea to define functions for saving/loading checkpoints, evaluation on the validation set, logging and saving of intermediate results, etc. For a complicated model, it is also recommended to define a function that creates the model. This function can be used in both training and inference, to ensure that the model is built identically in both, as sketched below.
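For instance, a sketch of such a factory; `MyModel` and the config keys are made up for illustration:
```python
from paddle import fluid

class MyModel(fluid.dygraph.Layer):
    """A stand-in model for illustration."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.linear = fluid.dygraph.Linear(d_in, d_hidden)

    def forward(self, x):
        return self.linear(x)

def create_model(config):
    """Build the model from a config dict so that train.py and
    synthesis.py construct exactly the same architecture."""
    return MyModel(config["d_in"], config["d_hidden"])
```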
Code is typically organized in this way:
```text
├── configs/ (example configuration)
├── data.py (definition of custom Dataset, transform and batch function)
├── README.md (README for the experiment)
├── synthesis.py (code for inference)
├── train.py (code for training)
└── utils.py (all other utility functions)
```
## Configuration
Deep learning experiments have many options to configure. These configurations can be roughly grouped into different types: configurations about the path of the dataset and the path to save results, configurations about how to process data, configurations about the model, and configurations about the training process.
Some configurations tend to change between runs, for example, the path of the data, the path to save results, and whether to load a model before training. It's better to define these as command line arguments. We use `argparse` to handle them.
Different groups of configurations may overlap with each other. For example, data processing and the model may have some common options. The recommended way is to save them as configuration files, for example, `yaml` or `json`. We prefer `yaml`, for it is more human-readable.
There are several examples in this repo; check [Parakeet/examples](../examples) for more details. `Parakeet/examples` is where we place our experiments. Though the experiments are not part of the `parakeet` package, they are part of the `Parakeet` repo. They are provided as examples and allow users to run our experiments out of the box. Feel free to add new examples and contribute to `Parakeet`.


@@ -0,0 +1,79 @@
# How to prepare your own experiment
For a typical deep learning experiment, there are several parts to take care of.
1. Preprocess the data according to the model's needs, and iterate over the dataset in batches;
2. Define the model, the optimizer, and other components;
3. Write the training process (generally including forward/backward computation, parameter updates, logging, visualization, and periodic evaluation);
4. Configure and run the experiment.
## Data Processing
For data processing, `parakeet.data` adopts the `Dataset -> DataLoader` workflow commonly used with PaddlePaddle. An overview of the data processing pipeline:
```text
Dataset --(transform)--> Dataset --+
sampler --+
batch_fn --+-> DataLoader
```
Here, transform stands for per-example preprocessing; you can use `TransformDataset` from `parakeet.data` to build one Dataset from another.
Once you have the desired Dataset, provide a sampler and a batch function, and a DataLoader can be built from them. The batches produced by the DataLoader can be used directly as model inputs.
See [data_cn](./data_cn.md) for detailed usage, and the sketch below for a quick illustration.
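A minimal sketch of this flow; the toy dataset and the transform are made up, the `paddle.io.DataLoader` arguments follow the public paddle 2.0 API, and `TransformDataset` is assumed to be usable as a `paddle.io` dataset:
```python
import numpy as np
from paddle.io import Dataset, DataLoader
from parakeet.data import TransformDataset

class RawDataset(Dataset):
    """A toy dataset of raw examples (here, fixed-length waveforms)."""
    def __init__(self, n=32):
        self.data = [np.random.randn(100).astype("float32") for _ in range(n)]

    def __getitem__(self, i):
        return self.data[i]

    def __len__(self):
        return len(self.data)

def transform(wav):
    # per-example preprocessing: one example in, one example out
    return wav / (np.abs(wav).max() + 1e-6)

dataset = TransformDataset(RawDataset(), transform)

# fixed-length examples, so default batching is enough; for sequential
# data, a collate_fn that pads the examples would be supplied here
loader = DataLoader(dataset, batch_size=8, shuffle=True, return_list=True)

for batch in loader:
    # with return_list=True, each batch is a list of Tensor fields
    wav = batch[0]
    print(wav.shape)  # [8, 100]
    break
```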
## Model
To strike a good balance between model reusability and functionality, we divide models into several kinds according to their characteristics.
For modules that are commonly used and can serve as parts of larger models, we implement them as simply and generically as possible, because they will be reused. Modules with trainable parameters are generally implemented as subclasses of `paddle.nn.Layer`; they are not tied directly to one task, so they do not carry the functionality for handling raw inputs and outputs. Modules without trainable parameters can be implemented directly as functions whose inputs and outputs are `paddle.Tensor`s or collections of them.
An out-of-the-box model for a specific task is generally implemented as a subclass of `paddle.nn.Layer` and is the core computation unit of the task. To handle inputs and outputs conveniently, it may also carry the functionality for processing raw inputs and outputs. For an NLP task, for example, although the neural network takes text ids as input, the text preprocessing functionality and the text-to-id vocabulary should also be regarded as part of the model, so that the model can handle raw input.
When a model is complex enough, splitting it into modules is the better choice. Even if the resulting submodules are not very generic and may only be used in that one model, splitting is still recommended whenever it keeps the code clear and concise.
In parakeet's directory structure, highly reusable modules are placed in [parakeet.modules](../parakeet/modules/), while task-specific models are placed in [parakeet.models](../parakeet/models).
When developing a new model, developers should consider the feasibility of splitting out modules, and how generic those modules are, and place them in the appropriate directories.
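A small sketch of this distinction; `masked_mean` and `PreNet` are made-up names for illustration:
```python
from paddle import nn
import paddle.nn.functional as F

def masked_mean(x, mask):
    """A module without trainable parameters: a plain function on Tensors.
    x: (batch, time, channels); mask: (batch, time), 1 for valid steps."""
    mask = mask.unsqueeze(-1)                         # (batch, time, 1)
    return (x * mask).sum(axis=1) / mask.sum(axis=1)  # (batch, channels)

class PreNet(nn.Layer):
    """A module with trainable parameters: a paddle.nn.Layer subclass."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.linear = nn.Linear(d_in, d_hidden)

    def forward(self, x):
        return F.relu(self.linear(x))
```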
## Training Process
The training process basically runs a loop body many times. A typical loop body includes the following steps:
1. Iterate over the dataset;
2. Process the batch of data;
3. Forward/backward computation of the neural network;
4. Parameter updates;
5. Evaluate the model on the validation dataset when certain conditions are met;
6. Write logs, visualize, and save intermediate results;
7. Save the states of the model and the optimizer.
The `Data Processing` section covers steps 1 and 2, and the model and the optimizer cover steps 3 and 4, so steps 5, 6 and 7 are the main work of the training process. To keep the loop body clear and concise, it is recommended to implement saving/loading of models, model evaluation, logging and visualization as functions, even though in many cases they need access to many local variables. We are also considering an `Experiment` or `Trainer` class to standardize how these training loops are written: variables that many functions need to access can then become attributes of the class, which keeps the code concise without introducing too many global variables.
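A schematic skeleton of such a loop body; the model, the loaders and the `compute_loss` method are placeholders, while the optimizer and saving calls follow the paddle 2.0 API:
```python
import paddle

def evaluate(model, valid_loader):
    # step 5: evaluate on the validation set
    model.eval()
    with paddle.no_grad():
        losses = [float(model.compute_loss(*batch)) for batch in valid_loader]
    model.train()
    return sum(losses) / len(losses)

def train(model, optimizer, train_loader, valid_loader, max_epoch, output_dir):
    model.train()
    for epoch in range(max_epoch):
        for batch in train_loader:
            loss = model.compute_loss(*batch)  # steps 1-3 (hypothetical method)
            loss.backward()                    # step 3: backward computation
            optimizer.step()                   # step 4: parameter update
            optimizer.clear_grad()

        val_loss = evaluate(model, valid_loader)          # step 5
        print(f"epoch {epoch}: val_loss={val_loss:.4f}")  # step 6: logging
        # step 7: save the states of the model and the optimizer
        paddle.save(model.state_dict(), f"{output_dir}/epoch_{epoch}.pdparams")
        paddle.save(optimizer.state_dict(), f"{output_dir}/epoch_{epoch}.pdopt")
```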
The experiment code is generally organized as follows:
```text
├── configs/ (experiment configurations)
├── data.py (definitions of Dataset, DataLoader, etc.)
├── README.md (help information for the experiment)
├── synthesis.py (code for synthesis)
├── train.py (code for training)
└── utils.py (other necessary utility functions)
```
## Configuring the Experiment
Deep learning experiments often have many configurable options. These configurations can be roughly divided into several categories:
1. Configuration of the data source and how the data is processed;
2. Configuration of the path to save experiment results;
3. Configuration of the data preprocessing;
4. Configuration of the model structure and hyperparameters;
5. Configuration of the training process.
There may be some overlap among these categories; for example, part of the data preprocessing configuration, such as the number of mel bands of the spectrogram, is also related to the model configuration.
Some configurations change frequently, such as the data source, the path to save experiment results, or the path of a checkpoint to load. For these, it is better to implement them as command line arguments. The remaining, rarely changed parameters are better written in a configuration file; we recommend `yaml`, because it allows comments and is more human-readable.
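For example, a sketch of this split; the specific flags and config keys are made up for illustration:
```python
import argparse
import yaml

parser = argparse.ArgumentParser(description="Train a parakeet model.")
# frequently changed options go on the command line
parser.add_argument("--data", type=str, help="path of the dataset")
parser.add_argument("--output", type=str, help="path to save results")
parser.add_argument("--checkpoint", type=str, default=None,
                    help="checkpoint to load before training")
parser.add_argument("--config", type=str, help="path of the yaml config file")
args = parser.parse_args()

# rarely changed options live in a yaml file (comments allowed)
with open(args.config, "rt") as f:
    config = yaml.safe_load(f)

d_hidden = config["model"]["d_hidden"]  # e.g. a model hyperparameter
```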
This repo contains several examples; see [Parakeet/examples](../examples). These experiments are provided as samples for users and can be run out of the box. Users are also welcome to add new models and experiments and contribute code to `Parakeet`.