diff --git a/README.md b/README.md
index 07bf7bc..35e5669 100644
--- a/README.md
+++ b/README.md
@@ -19,9 +19,7 @@
An open source Chinese knowledge graph extraction framework based on deep learning
-DeepKE is a knowledge extraction toolkit that supports **low-resource** and **document-level** extraction, implementing named entity recognition, relation extraction and attribute extraction based on pytorch.
+DeepKE is a knowledge extraction toolkit that supports **low-resource** and **document-level** extraction, implementing **named entity recognition**, **relation extraction** and **attribute extraction** based on **PyTorch**.
+
+  简体中文 | English
+
+  An open source Chinese knowledge graph extraction framework based on deep learning
+
+  A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population
+
## Online Demo
-demo 's urls
-1.NER
+[demo](https://deepke.openkg.cn)
-```
-REGULAR
-```
+### Prediction
-2.RE
+Below is a demonstration of prediction:
+
- 1.REGULAR
-
- 2.FEW-SHOT
-
- 3.DOCUMENT
+
-3.AE
+## Model Framework
+
+  Figure 1: The framework of DeepKE
+
-## Model architectures
-Deepke contains these models:
+## Quickstart
-1.NER
+Take the fully supervised relation extraction task as an example (the steps below are also collected into a single command sketch at the end of this list).
-**[REGULAR](https://github.com/zjunlp/deepke/blob/test_new_deepke/example/ner/regular/README.md)**
+1. Download the source code: `git clone https://github.com/zjunlp/DeepKE.git`
+2. Create a virtual environment (`anaconda` is recommended): `conda create -n deepke python=3.8`
+3. Activate the environment: `conda activate deepke`
+4. Install the dependencies:
+   - To use deepke directly: `pip install deepke`
+   - To modify the source code before use:
+     run `python setup.py install` first;
+     after making your modifications, run `python setup.py develop`
-2.RE
+5. Enter the corresponding directory: `cd DeepKE/example/re/standard`
+6. Train: `python run.py` (training parameters can be changed in the `conf` folder)
+7. Predict: `python predict.py` (prediction parameters can be changed in the `conf` folder)
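+
+A minimal end-to-end sketch of the steps above, assuming the default paths and the `deepke` environment name used in this Quickstart:
+
+```bash
+# 1-3. Get the code and create an isolated environment
+git clone https://github.com/zjunlp/DeepKE.git
+conda create -n deepke python=3.8
+conda activate deepke
+
+# 4. Install DeepKE, either from PyPI or from source
+pip install deepke           # use the released package directly
+# python setup.py install    # or: install from source before modifying it
+# python setup.py develop    # and re-install after your modifications
+
+# 5-7. Train and predict with the standard relation extraction example
+cd DeepKE/example/re/standard
+python run.py                # training parameters live in the conf folder
+python predict.py            # prediction parameters live in the conf folder
+```
+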
-**[REGULAR](https://github.com/zjunlp/deepke/blob/test_new_deepke/example/re/regular/README.md)**
+### Requirements
+> python == 3.8
-FEW-SHOT
+- torch == 1.5
+- hydra-core == 1.0.6
+- tensorboard == 2.4.1
+- matplotlib == 3.4.1
+- transformers == 3.4.0
+- jieba == 0.42.1
+- scikit-learn == 0.24.1
+- pytorch-transformers == 1.2.0
+- seqeval == 1.2.2
+- tqdm == 4.60.0
+- opt-einsum == 3.3.0
+- ujson
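+
+If you install from source, the pinned versions above can be installed in a single `pip` call (versions taken verbatim from the list; adjust them if your platform needs different builds):
+
+```bash
+pip install torch==1.5 hydra-core==1.0.6 tensorboard==2.4.1 matplotlib==3.4.1 \
+    transformers==3.4.0 jieba==0.42.1 scikit-learn==0.24.1 \
+    pytorch-transformers==1.2.0 seqeval==1.2.2 tqdm==4.60.0 \
+    opt-einsum==3.3.0 ujson
+```
+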
-DOCUMENT
+### Introduction to the Three Functions
-3.AE
+#### 1. Named Entity Recognition
-## Citation
+- Named entity recognition seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, etc.
+
+- The data is stored in `.txt` files. Some instances are shown below:
+
+ | Sentence | Person | Location | Organization |
+ | :----------------------------------------------------------: | :------------------------: | :------------: | :----------------------------: |
+ | 本报北京9月4日讯记者杨涌报道:部分省区人民日报宣传发行工作座谈会9月3日在4日在京举行。 | 杨涌 | 北京 | 人民日报 |
+ | 《红楼梦》是中央电视台和中国电视剧制作中心根据中国古典文学名著《红楼梦》摄制于1987年的一部古装连续剧,由王扶林导演,周汝昌、王蒙、周岭等多位红学家参与制作。 | 王扶林,周汝昌,王蒙,周岭 | 中国 | 中央电视台,中国电视剧制作中心 |
+ | 秦始皇兵马俑位于陕西省西安市,1961年被国务院公布为第一批全国重点文物保护单位,是世界八大奇迹之一。 | 秦始皇 | 陕西省,西安市 | 国务院 |
+
+- Read the detailed process in the task-specific README files (the commands are also collected in a sketch after this list):
+ - **[STANDARD (Fully Supervised)](https://github.com/zjunlp/deepke/blob/test_new_deepke/example/ner/standard)**
+ - The standard module is implemented by the pretrained model *BERT*.
+ - Enter `DeepKE/example/ner/standard`.
+ - The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.
+ - **Train**: `python run.py`
+ - **Predict**: `python predict.py`
+ - **[FEW-SHOT](https://github.com/zjunlp/DeepKE/tree/test_new_deepke/example/ner/few-shot)**
+    - This module targets the low-resource scenario.
+ - Enter `DeepKE/example/ner/few-shot`.
+    - The directories for loading and saving the model, as well as the configuration parameters, can be customized in the `conf` folder.
+ - **Train with *CoNLL-2003***: `python run.py`
+    - **Train in the few-shot scenario**: `python run.py +train=few_shot`. Users can modify `load_path` in `conf/train/few_shot.yaml` to start from an existing trained model.
+    - **Predict**: add `- predict` to `conf/config.yaml`, set `load_path` in `conf/predict.yaml` to the model path and `write_path` to the path where the predicted results should be saved, and then run `python predict.py`.
+
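+The NER commands above, collected in one place (a sketch based on the paths and the Hydra override quoted in this subsection):
+
+```bash
+# Standard (fully supervised) NER, implemented with BERT
+cd DeepKE/example/ner/standard
+python run.py                    # dataset in data/, parameters in conf/
+python predict.py
+
+# Few-shot NER
+cd ../few-shot
+python run.py                    # train with CoNLL-2003
+python run.py +train=few_shot    # train in the few-shot scenario (see conf/train/few_shot.yaml)
+python predict.py                # after adding `- predict` to conf/config.yaml and setting
+                                 # load_path / write_path in conf/predict.yaml
+```
+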
+#### 2. Relation Extraction
+
+- Relation extraction is the task of extracting semantic relations between entities from unstructured text.
+
+- The data is stored in `.csv` files. Some instances are shown below:
+
+ | Sentence | Relation | Head | Head_offset | Tail | Tail_offset |
+ | :----------------------------------------------------: | :------: | :--------: | :---------: | :--------: | :---------: |
+ | 《岳父也是爹》是王军执导的电视剧,由马恩然、范明主演。 | 导演 | 岳父也是爹 | 1 | 王军 | 8 |
+ | 《九玄珠》是在纵横中文网连载的一部小说,作者是龙马。 | 连载网站 | 九玄珠 | 1 | 纵横中文网 | 7 |
+ | 提起杭州的美景,西湖总是第一个映入脑海的词语。 | 所在城市 | 西湖 | 8 | 杭州 | 2 |
+
+- Read the detailed process in the task-specific README files (the commands are also collected in a sketch after this list):
+
+ - **[STANDARD](https://github.com/zjunlp/deepke/blob/test_new_deepke/example/re/standard)**
+    - The standard module is implemented with common deep learning models, including CNN, RNN, Capsule, GCN, Transformer and pretrained language models.
+ - Enter the `DeepKE/example/re/standard` folder.
+ - The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.
+ - **Train**: `python run.py`
+ - **Predict**: `python predict.py`
+
+ - **[FEW-SHOT](https://github.com/zjunlp/deepke/blob/test_new_deepke/example/re/few-shot)**
+    - This module targets the low-resource scenario.
+    - Enter `DeepKE/example/re/few-shot`.
+ - **Train**: `python run.py`
+      To resume from a previously trained model, set `train_from_saved_model` in `conf/train.yaml` to the path where that model was saved.
+      The path for saving logs generated during training can be customized via `log_dir`.
+ - **Predict**: `python predict.py`
+
+ - **[DOCUMENT](https://github.com/zjunlp/deepke/blob/test_new_deepke/example/re/document)**
+    - Download the data file `train_distant.json` from [*Google Drive*](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw) and place it in `data/`.
+    - Enter `DeepKE/example/re/document`.
+ - **Train**: `python run.py`
+      To resume from a previously trained model, set `train_from_saved_model` in `conf/train.yaml` to the path where that model was saved.
+      The path for saving logs generated during training can be customized via `log_dir`.
+ - **Predict**: `python predict.py`
+
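+The RE commands above, collected in one place (a sketch based on the paths and config keys quoted in this subsection):
+
+```bash
+# Standard RE (CNN / RNN / Capsule / GCN / Transformer / pretrained LM)
+cd DeepKE/example/re/standard
+python run.py                    # dataset in data/, parameters in conf/
+python predict.py
+
+# Few-shot RE (set train_from_saved_model in conf/train.yaml to resume; logs go to log_dir)
+cd ../few-shot
+python run.py
+python predict.py
+
+# Document-level RE (put train_distant.json from the Google Drive link into data/ first)
+cd ../document
+python run.py
+python predict.py
+```
+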
+#### 3. Attribute Extraction
+
+- Attribute extraction aims to extract attributes of entities from unstructured text.
+
+- The data is stored in `.csv` files. Some instances are shown below:
+
+ | Sentence | Att | Ent | Ent_offset | Val | Val_offset |
+ | :----------------------------------------------------------: | :------: | :------: | :--------: | :-----------: | :--------: |
+ | 张冬梅,女,汉族,1968年2月生,河南淇县人 | 民族 | 张冬梅 | 0 | 汉族 | 6 |
+ | 杨缨,字绵公,号钓溪,松溪县人,祖籍将乐,是北宋理学家杨时的七世孙 | 朝代 | 杨缨 | 0 | 北宋 | 22 |
+ | 2014年10月1日许鞍华执导的电影《黄金时代》上映 | 上映时间 | 黄金时代 | 19 | 2014年10月1日 | 0 |
+
+- Read the detailed process in the task-specific README files (the commands are also collected in a sketch after this list):
+ - **[STANDARD](https://github.com/zjunlp/deepke/blob/test_new_deepke/example/ae/standard)**
+    - The standard module is implemented with common deep learning models, including CNN, RNN, Capsule, GCN, Transformer and pretrained language models.
+ - Enter the `DeepKE/example/ae/standard` folder.
+ - The dataset and parameters can be customized in the `data` folder and `conf` folder respectively.
+ - **Train**: `python run.py`
+ - **Predict**: `python predict.py`
+
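+A command sketch for the standard AE module (paths as quoted above):
+
+```bash
+# Standard (fully supervised) attribute extraction
+cd DeepKE/example/ae/standard
+python run.py                    # dataset in data/, parameters in conf/
+python predict.py
+```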
+
+
+## Tips
+
+1. Using the nearest mirror, e.g. [THU](https://mirrors.tuna.tsinghua.edu.cn/help/anaconda/) in China, will speed up the installation of *Anaconda*.
+2. Using the nearest mirror, e.g. [aliyun](http://mirrors.aliyun.com/pypi/simple/) in China, will speed up `pip install XXX`.
+3. When you encounter `ModuleNotFoundError: No module named 'past'`, run `pip install future`.
+4. Installing pretrained language models online is slow; we recommend downloading them in advance and saving them in the `pretrained` folder. Read the `README.md` in each task directory for the specific requirements on saving pretrained models.
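+
+For example, tips 2 and 3 translate directly into commands (the index URL is the aliyun mirror mentioned above; `--trusted-host` is needed because it is served over plain HTTP):
+
+```bash
+# install DeepKE through the aliyun PyPI mirror
+pip install deepke -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
+
+# fix: ModuleNotFoundError: No module named 'past'
+pip install future
+```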
+
+
+
+## Developers
+
+Zhejiang University: Ningyu Zhang, Liankuan Tao, Haiyang Yu, Xiang Chen, Xin Xu, Xi Tian, Lei Li, Zhoubo Li, Shumin Deng, Yunzhi Yao, Hongbin Ye, Xin Xie, Guozhou Zheng, Huajun Chen
+
+Alibaba DAMO: Chuanqi Tan, Fei Huang