From 31329c8348ad2f998a20247fc3008727a93525f6 Mon Sep 17 00:00:00 2001
From: Shenzhi Wang
Date: Thu, 9 May 2024 14:28:54 +0000
Subject: [PATCH] update readme

Signed-off-by: Shenzhi Wang
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 5a5444e34..23d172017 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # Model Summary
 
-Llama3-70B-Chinese-Chat is **one of the first instruction-tuned LLM for Chinese & English users with various abilities** such as roleplaying, tool-using, and math, built upon the [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model.
+Llama3-70B-Chinese-Chat is **one of the first instruction-tuned LLMs for Chinese & English users with various abilities** such as roleplaying, tool-using, and math, built upon the [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model.
 
 Developed by: [Shenzhi Wang](https://shenzhi-wang.netlify.app) (王慎执) and [Yaowei Zheng](https://github.com/hiyouga) (郑耀威)
 
@@ -25,7 +25,7 @@ Training framework: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
 
 Training details:
 
-- epochs: 3 (we also provide a 2-epoch model version at the [`epoch_2` branch](https://e.gitee.com/wang-shenzhi/repos/wang-shenzhi/llama3-70b-chinese-chat/tree/epoch_2).)
+- epochs: 3 (We also provide a 2-epoch model version at the [`epoch_2` branch](https://e.gitee.com/wang-shenzhi/repos/wang-shenzhi/llama3-70b-chinese-chat/tree/epoch_2))
 - learning rate: 1.5e-6
 - learning rate scheduler type: cosine
 - Warmup ratio: 0.1
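
The training details listed in the patched README (3 epochs, learning rate 1.5e-6, cosine schedule, warmup ratio 0.1) correspond to a standard warmup-plus-cosine-decay setup. The sketch below is illustrative only: it is not the authors' LLaMA-Factory configuration, and the model, dataset size, and step counts are placeholders chosen to show how the listed hyperparameters would be wired into a PyTorch/`transformers` optimizer and scheduler.

```python
# Minimal sketch: warmup + cosine LR schedule with the README's hyperparameters.
# The model and steps_per_epoch below are hypothetical stand-ins.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)           # stand-in for the fine-tuned model
steps_per_epoch = 1000                   # hypothetical; depends on dataset and batch size
num_epochs = 3                           # epochs: 3
total_steps = steps_per_epoch * num_epochs
warmup_steps = int(0.1 * total_steps)    # warmup ratio: 0.1

optimizer = torch.optim.AdamW(model.parameters(), lr=1.5e-6)  # learning rate: 1.5e-6
scheduler = get_cosine_schedule_with_warmup(                  # scheduler type: cosine
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)

for step in range(total_steps):
    # ... forward/backward pass would go here ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```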