docs: add all untracked content

This commit is contained in:
delong1998 2021-11-09 09:02:57 +00:00
parent 6c3a62d0eb
commit 78b4d6695e
94 changed files with 5661 additions and 0 deletions

37
SIG/SIG.md Normal file
View File

@ -0,0 +1,37 @@
---
title: SIG
description:
published: true
date: 2021-10-26T04:33:27.792Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:36.375Z
---
## SIG
All SIG groups in the Ubuntu Kylin community are open, and anyone can participate.
The README.md file of a SIG team project contains the SIG the project belongs to, communication channels, members, contact information, and so on. We welcome everyone to actively join the discussions within a SIG through the contact channels mentioned in the README.md file, including mailing lists and public regular meetings.
A SIG may have its own mailing list, community groups, and so on, and may also have its own contribution policy.
## Establishing a SIG
An application to establish a new SIG is submitted by the proposer at a regular meeting of the Technical Committee and is reviewed collectively by the committee members. If the application is approved, the proposer needs to follow the process to submit a PR in the community to create the related SIG pages, etc. The PR is merged after review by the committee members.
In the early stage of a new SIG's operation, the Technical Committee may designate one of its members as the SIG's mentor to provide guidance and ensure that the SIG gets on track quickly.
## Dissolving a SIG
The Technical Committee may, after discussion, dissolve a SIG based on the following principles:
* The SIG's work blocks the release of Ubuntu Kylin community versions because it cannot meet the requirements of the community release.
* The SIG cannot operate normally, including having no regular meetings, failing to respond to community issues in time, not keeping the software it is responsible for up to date, etc.
### Dissolution process
* A member of the Technical Committee submits an application to dissolve the SIG.
* The application is discussed and voted on at a regular meeting of the Technical Committee. Voting follows the simple-majority principle.
After a SIG is dissolved, the software packages under that SIG are reassigned to other SIGs according to their appropriate ownership.

View File

@ -0,0 +1,60 @@
---
title: Ubuntu Kylin Community SIG Usage Guidelines
description:
published: true
date: 2021-10-26T04:33:29.206Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:38.317Z
---
## Background
To make community work more open and transparent, deepen communication among official developers, community developers, maintainers of various Linux distributions, translation contributors, documentation contributors, and experienced enthusiasts, expand the community's influence, and attract more enthusiasts to join the Ubuntu Kylin community, we plan to set up SIGs (Special Interest Groups) to coordinate and organize the various affairs of community operation and develop Ubuntu Kylin into an active and influential community.
## Principles
1. All materials, rules, and processes of a SIG are transparent and open; any individual or team is welcome to join and contribute.
2. The type of work undertaken by each group is strictly separated, avoiding overlap of responsibilities between groups for the same type of work.
3. A SIG is led by its core members (Maintainers); other participants are Contributors. Major decisions require approval by more than 2/3 of all group members and must be reported to the Technical Committee for approval.
4. Members communicate through mailing lists, IRC, WeChat groups, and Telegram.
5. Group meetings are held regularly, at least once a month; meeting minutes are published on the mailing list and kept as a record.
## Organizational structure
Under the Technical Committee, the following SIGs are recommended to be established first:
1. Ubuntu Kylin Developer SIG
2. Ubuntu Kylin Release SIG
3. UKUI Porting SIG
4. Internationalization/Localization SIG
5. Documentation SIG
6. Testing SIG
7. Security Audit SIG
8. Software Ecosystem SIG
9. Application Compatibility SIG
10. ... an open interface for community enthusiasts to create new SIGs
## Responsibilities
**Ubuntu Kylin Developer SIG:** participates in the development of the community's main releases; if needed, it can be subdivided into application development, desktop environment development, kernel development, and so on.
**Ubuntu Kylin Release SIG:** responsible for version integration, building, release, and related matters of Ubuntu Kylin.
**UKUI Porting SIG:** responsible for coordinating, supporting, and carrying out the porting of the UKUI desktop environment to other distributions; it can be subdivided into UKUI for OpenEuler SIG, UKUI for Debian SIG, UKUI for OpenSUSE SIG, UKUI for Arch SIG, etc. While advancing the UKUI porting work, members should also actively participate in and contribute to the corresponding open source communities, striving to become officially recognized members of those communities.
**Internationalization/Localization SIG:** responsible for internationalization/localization and other translation-related work for Ubuntu Kylin releases; it can be further subdivided by language.
**Documentation SIG:** responsible for improving, maintaining, and updating the community's supporting documentation, such as technical specifications, technical notes, help manuals, tutorials, wikis, and other documentation work.
**Testing SIG:** responsible for organizing internal testing activities among community enthusiasts, for bug management and tracking on open platforms such as GitHub, and for interfacing with the company's unified issue feedback platform.
**Security Audit SIG:** reviews the open source code against security standards, e.g. safe usage of DBus, polkit, etc., scans the system for vulnerabilities and CVEs, and proposes improvements.
**Software Ecosystem SIG:** responsible for expanding the Ubuntu Kylin software ecosystem, collecting good native Linux applications, and handling new applications submitted by community developers; it can be subdivided by application category if necessary.
**Application Compatibility SIG:** continuously researches and explores technologies related to Linux application compatibility, such as Flatpak, Snap, AppImage, Wine, and Android compatibility, and applies the results to Ubuntu Kylin.
## Application process for a new SIG
Apply on the GitHub project page -> Technical Committee review -> create the mailing list and other infrastructure -> start operating

View File

@ -0,0 +1,66 @@
---
title: Ubuntu Kylin Community SIG Charter
description:
published: true
date: 2021-10-26T04:33:30.608Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:40.258Z
---
## Background
The Ubuntu Kylin community is a free and open community. To ensure that community work is open and transparent, strengthen communication among community contributors, enthusiasts, and maintainers of Ubuntu Kylin OS releases, expand the influence of the Ubuntu Kylin community, and attract more Linux enthusiasts to join, we plan to organize and coordinate the community's work in the form of SIGs and build the Ubuntu Kylin community into an active open source community around the Linux operating system.
## Principles
1. All SIGs in the Ubuntu Kylin community are open; any individual or organization may participate.
2. The README.md file of a SIG contains information about the SIG a project belongs to, communication channels, members, contact information, and so on. We welcome everyone to actively join the discussions within a SIG through the contact channels mentioned in the README.md file, including mailing lists and public regular meetings.
3. Each SIG consists of core project members (Maintainers) and Contributors. Major decisions within a group must be approved by a vote of more than 2/3 of all members and reported to the Technical Committee.
4. Each SIG may have its own mailing list, community groups, and so on, as well as its own contribution policy, but it must have its own SIG charter.
5. The types of work handled by different SIGs must not overlap; responsibilities must stay separated between groups.
6. Each SIG must hold regular group meetings.
## Organizational structure
The Ubuntu Kylin community currently has the following SIGs:
## Application process for a new SIG
### Individual/team application
**Application:** the proposer fills in the SIG name, SIG description, SIG owner (Gitee account), SIG maintainers (Gitee accounts), contact email, and related information on the SIG application page and submits the application. The Technical Committee will reply to the applicant with the time of the SIG review meeting.
**Review:** the Technical Committee members attending the meeting discuss the SIG's business scope, maintenance goals, etc. with the proposer and review the application.
**Approval:** after the Technical Committee approves the application, an official confirmation is sent by email, and the infrastructure SIG creates the corresponding SIG repositories and sets up permissions.
**Operation:** the SIG starts official operation; group members communicate through the mailing list, group meetings, and so on. In the early stage of a new SIG's operation, the Technical Committee may designate one of its members as the SIG's mentor to provide guidance and ensure that the SIG gets on track quickly.
### Enterprise application
**Application:** the person in charge at the enterprise fills in the enterprise name, the names and positions of the authorized representative and the person in charge, and other related information on the application page and submits the application. The Technical Committee will reply to the person in charge with the time of the SIG review meeting.
**Review:** the Technical Committee members attending the meeting discuss the SIG's business scope, maintenance goals, etc. with the person in charge and review the application.
**Approval:** after the Technical Committee approves the application, an official confirmation is sent by email, and the infrastructure SIG creates the corresponding SIG repositories and sets up permissions.
**Operation:** the SIG starts official operation; group members communicate through the mailing list, group meetings, and so on. In the early stage of a new SIG's operation, the Technical Committee may designate one of its members as the SIG's mentor to provide guidance and ensure that the SIG gets on track quickly.
## Dissolving a SIG
When any of the following occurs, SIG members or the Technical Committee may apply to dissolve the SIG:
* The SIG's work blocks the release of Ubuntu Kylin community versions because it cannot meet the requirements of the community release.
* The SIG cannot operate normally, including having no regular meetings, failing to respond to community issues in time, not keeping the software it is responsible for up to date, etc.
### Dissolution process
* A member of the Technical Committee submits an application to dissolve the SIG.
* The application is discussed and voted on at a regular meeting of the Technical Committee. Voting follows the simple-majority principle.
After a SIG is dissolved, the software packages under that SIG are reassigned to other SIGs according to their appropriate ownership.
## Team member / member permission changes
* After a SIG is established, the owner obtains administrative permission for the corresponding SIG folder under community/sig/ and can change members and permissions by modifying the corresponding YAML configuration files (a hypothetical sketch follows).
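For illustration only, such a change might look like editing a YAML file under the SIG's folder. The file name and fields below are hypothetical placeholders, not the community's actual schema:
```bash
# Hypothetical sketch only: the real file name and fields are defined by the community's templates
$ cat community/sig/your-sig/sig-info.yaml
name: your-sig
maintainers:
  - your-gitee-account   # maintainer Gitee accounts
```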

Binary file not shown.

After

Width:  |  Height:  |  Size: 17 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 20 KiB

File diff suppressed because it is too large

View File

@ -0,0 +1,27 @@
---
title: 02. Design
description: How does CCAI work
published: true
date: 2021-10-28T03:16:15.779Z
tags:
editor: markdown
dateCreated: 2021-10-21T20:49:11.142Z
---
# The high level call flow of CCAI
> This section available for `1.1` release.
{.is-warning}
The picture below shows the basic working model of CCAI as a whole: a services provider that exposes high-level APIs to users outside of the services container. Basically, there are two ways to use these services, which are provided as REST/gRPC APIs: one is calling those APIs directly, and the other is calling the simulation lib APIs (we will talk about the simulation lib later).
![image2.png](/temp/image2.png)
# CCAI (1.1 release) stack architecture
> This section available for `1.1` release.
{.is-warning}
The architecture picture below shows those modules and stacks at a high level; it shows CCAI's components and their dependencies. It is up to date for the CCAI 1.1 release.
![image1.png](/temp/image1.png)

View File

@ -0,0 +1,379 @@
---
title: 06. Develop
description: How to develop AI services for CCAI
published: true
date: 2021-10-28T06:05:03.185Z
tags: ccai
editor: markdown
dateCreated: 2021-10-26T00:44:23.821Z
---
# CCAI service work mode
![image4.png](/temp/image4.png)
AI services for CCAI include two parts: the client side and the server side. Customer applications run on the client side, and the CCAI services run on the server side. The client side sends HTTP POST requests or gRPC requests to the server side, and the server side returns responses to the client side. So developing AI services means developing server-side programs.
# Preparation
CCAI includes four inference engines: OpenVINO, PyTorch, ONNX Runtime, and TensorFlow. Each engine supports one type of model. The following sections describe how to use the different inference engines in CCAI.
## Using OpenVINO as inference engine in CCAI
Before developing and deploying CCAI services that will use OpenVINO as the inference engine, prepare the following three preconditions:
1. First of all, you must get ready with available OpenVINO models for your AI services. You can get the neural network models in two ways: one is from open_model_zoo on GitHub (https://github.com/openvinotoolkit/open_model_zoo), and the other is converting models from other frameworks with the OpenVINO Model Optimizer (MO) tool.
2. To make those models accessible to CCAI, they need to be in the directory *container/script/integration_workdir/service_runtime_binary/lfs/models*.
- *.xml(necessary)
- *.bin(necessary)
- *.labels(Optional)
3. There are default pictures in the test programs. To facilitate user testing, the test data needs to be placed in the directory
*container/script/integration_workdir/service_runtime_binary/lfs/test_data*.
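As a minimal sketch of getting a model into place (assuming OpenVINO's Model Optimizer is installed under /opt/intel/openvino and that the file names and paths below are illustrative):
```bash
# Convert a model from another framework into OpenVINO IR (.xml/.bin); names are illustrative
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model your_model.onnx --output_dir ./ir_out
# Place the IR files (and an optional .labels file) where CCAI expects models
cp ./ir_out/your_model.xml ./ir_out/your_model.bin \
   container/script/integration_workdir/service_runtime_binary/lfs/models/
# Place sample pictures for the test programs
cp sample.jpg container/script/integration_workdir/service_runtime_binary/lfs/test_data/
```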
## Using PyTorch as inference engine in CCAI
If you only have PyTorch models or cannot convert your PyTorch models into the OpenVINO format successfully, you can choose to use PyTorch as a backend inference engine in CCAI. In that case, please make sure you are ready with the following requirements:
1. Get your PyTorch models; if your models include batch_normalization layers, you need to call model.eval() before saving the models.
2. The weights need to be placed in the directory *container/script/integration_workdir/service_runtime_binary/lfs/models*.
- *.pt
- *.txt(Optional for labels)
There are default pictures in the test programs. To facilitate user testing, the test data needs to be placed in the directory *container/script/integration_workdir/service_runtime_binary/lfs/test_data*.
## Using ONNX runtime as inference engine in CCAI
ONNX Runtime is an accelerator for machine learning models with multi platform support and a flexible interface to integrate with hardware-specific libraries. ONNX Runtime can be used with models from PyTorch, Tensorflow/Keras, TFLite, scikit-learn, and other frameworks.
In CCAI, the ONNX runtime inference engine is used only for inferencing ONNX models.
Before developing AI services using ONNX runtime inference engine, please finish the following preparations:
1. Prepare your ONNX models. The models can be trained in any framework that supports export/conversion to the ONNX format.
2. The models need to be placed in the directory *container/script/integration_workdir/service_runtime_binary/lfs/models*.
- *.onnx
3. There are default pictures in the test programs. To facilitate user testing, the test data needs to be placed in the directory *container/script/integration_workdir/service_runtime_binary/lfs/test_data*.
## Using TensorFlow as inference engine in CCAI
TensorFlow is one of the most popular machine learning frameworks, developed by Google. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.
CCAI can leverage the TensorFlow framework to support TensorFlow models. To use the TensorFlow backend, please make sure you are ready with the following requirements:
1. CCAI supports the TensorFlow 2.x SavedModel format; the extension name is usually .pb. Currently, the 1.x format is not supported.
2. Place the model files into the directory *container/script/integration_workdir/service_runtime_binary/lfs/models*, along with label files if any.
3. There are default pictures in the test programs. To facilitate user testing, the test data needs to be placed in the directory *container/script/integration_workdir/service_runtime_binary/lfs/test_data*.
# Development services
CCAI includes a key component known as the API gateway, which provides both FastCGI support and gRPC support to export services outside of the CCAI container. So you can develop CGI-based services or gRPC-based services for CCAI as the following sections describe.
> Notes: in the following sections, when we refer to a path with the prefix "container" or "api-gateway", like "container/..." or "api-gateway/...", we mean the relative path under the project "container" (which, as mentioned before, is the entry project of the whole CCAI repository) or the relative path under the project "api-gateway" (which is the project where all services are developed).
{.is-info}
## Develop FCGI service
Develop FCGI AI services: you need to add new files or modify existing files under the directory api-gateway/cgi-bin.
1. Add FCGI C++ services
- 16-*.conf: the file contains the FastCGI configuration. You can use an existing configuration file as a reference; for example, add a new conf file by copying 16-classfication.conf and replace "classification" in the conf file with the new service name.
- fcgi_*.cpp: this is the FastCGI server-side program; the file includes the neural network inference and the FastCGI processing code.
- For example, in fcgi_classfication.cpp, the classification function does the neural network inference and gets the results.
- You can create a new cpp file by copying fcgi_classfication.cpp and replacing the classification function with your own service inference function. Keep the FastCGI processing part unchanged. Your own inference function should include both the preprocessing and postprocessing parts.
- You also need to change the model_file parameter based on your model's name in container/script/integration_workdir/service_runtime_binary/lfs/models .
- You need to change the serverParams parameter to the real service URL, for example "https://api.ai.qq.com/".
- Neural network inference functions call the low-level runtime library APIs to do inference. Please refer to section 10 for a detailed description of the runtime library APIs. For example, the classification inference function calls the API vino_ie_pipeline_infer_image to do inference. The parameters of this API are images, additionalInput, model_file, urlInfo, and rawDetectionResults. If the return value equals 0 (res == 0), the results come from the local device; if it equals 1 (res == 1), the results come from a remote server.
- Finally, use your own post-processing logic to process the inference result.
- test-script/test-demo/post_local_*_c.py: this is the FastCGI client-side test program. You can add a new file by copying post_local_classfication_c.py and replacing "classification" in the file with the new service name. Modify the logic as needed for your test.
- CMakeLists.txt: add the compilation of the new service with add_fcgi_binary().
2. Develop fcgi python services: Adding fcgi python services is very similar to adding fcgi c++ services. The only difference is that you need to call low level python APIs to do inference. To add a new fcgi python service, you need to implement the following three files.
- 16-*-py.conf
- fcgi_*.py:
- test-script/test-demo/post_local_*_py.py:
## Develop gRPC service
If you would like to modify the existing gRPC services, you can do so by:
- Change service program for server-side in api-gateway/grpc/grpc_inference_service.cc
- Change test program for client-side in api-gateway/grpc/grpc_inference_service_test.py
- Change message in api-gateway/grpc/inference_service.proto
Then compile to get the new binaries.
You can also add a new gRPC server/client of your own; it is as straightforward as general gRPC application development.
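If you change the proto definitions, the Python stubs used by the client test program can be regenerated with grpcio-tools. This is a sketch; it assumes you run it from the api-gateway/grpc directory:
```bash
# Regenerate Python gRPC stubs after editing inference_service.proto
python3 -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. inference_service.proto
```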
# Deploy services for CCAI
## Deploy into container
After you complete service development, you can compile the services into binaries (for C++) and then deploy them on your host or into the CCAI container, so that you can verify your services from outside the CCAI container.
- If you are developing with our pre-constructed development container, you can copy the generated binaries or Python applications to specific folders so that the API gateway can recognize and enable them. For example, if your working space is under your host $HOME, it was mounted into the container when the container booted, so it is directly accessible from inside.
> Note: the following commands should be executed within docker
{.is-info}
```bash
#copy fastcgi configuration file to target path
$> sudo cp your_fcgi.conf /etc/lighttpd/conf-enabled
#according to your_fcgi.conf, copy your binary or python script to correct path, for example:
$> sudo cp your_fcgi_service_binary /opt/fcgi/cgi-bin
#reboot CGI API gateway
$> sudo sv restart lighttpd
```
For the gRPC service part, if you'd like to change your service port (the default is 8081 for our gRPC service), you need to add a line to the file */etc/nghttpx/nghttpx.conf*, for example:
```bash
$> sudo echo "backend=localhost,<your_service_port>;/<package_name>.<service_name>/<function>;proto=h2" >> /etc/nghttpx/nghttpx.conf
```
To make the changes effective, restart the service:
```bash
$> sudo sv restart nghttpx
```
Now you can verify your services with your test application from the host.
- If you are developing with your own development environment, then to test your services, you can do as in a) above but change the path to your host path.
- For services generated from both a) and b), you can always copy them to the project api-gateway and regenerate the CCAI container by following the instructions in sections 3.3.1 and 5.1.
## Deploy on host
- Create a directory */opt/intel/service_runtime/service/your-service/* and put your binary file in this directory. Create a directory */opt/intel/service_runtime/service/lighttpd/conf-enabled/* and put your configuration file in this directory. Directory hierarchy example (a combined command sketch follows after this list):
![image9.png](/temp/image9.png)
- Give permission for the user www-data to access your files, example:
```bash
$ chown -R www-data.www-data /opt/intel/service_runtime/service/
```
- Binaries and configuration files will be mounted to the same path in the container as on the host, so you should set the *bin-path* to */opt/intel/service_runtime/service/your-service/your-binary* in your configuration file, example:
```
"bin-path" => "/opt/intel/service_runtime/service/your-service/your-binary"
```
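Putting the bullets above together, a minimal deployment sketch could look like the following (your-service, your-binary, and your-service.conf are placeholders):
```bash
# Create the host directories that are mounted into the container
sudo mkdir -p /opt/intel/service_runtime/service/your-service/
sudo mkdir -p /opt/intel/service_runtime/service/lighttpd/conf-enabled/
# Copy the service binary and its lighttpd configuration file
sudo cp your-binary /opt/intel/service_runtime/service/your-service/
sudo cp your-service.conf /opt/intel/service_runtime/service/lighttpd/conf-enabled/
# Make everything readable by the www-data user
sudo chown -R www-data.www-data /opt/intel/service_runtime/service/
```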
## Specific to PyTorch service
- Currently the runtime inference library provides APIs to support PyTorch as an inference engine. These APIs are irt_infer_from_xxxx; please refer to section 10 for detailed information. You need to pass PYTORCH as the API parameter to specify PyTorch as the backend engine. For example, the image API is irt_infer_from_image; the inputs are tensorData, model names, PYTORCH, and urlInfo, and the outputs are the rawDetectionResults of tensorData.
- For inference with PyTorch, normalization of the input image should be done using OpenCV. For inference with OpenVINO, normalization of the picture can be moved into the model file through the "mean_values" and "scale_values" options of the OpenVINO MO.
## Specific to Onnx service
- The runtime inference library provides APIs to support ONNX as an inference engine. These APIs are irt_infer_from_xxxx; please refer to section 10 for detailed information. You need to pass ONNXRT as the API parameter to specify ONNX as the backend engine. For example, the image API is irt_infer_from_image; the inputs are tensorData, model names, ONNXRT, and urlInfo, and the outputs are the rawDetectionResults of tensorData.
- The ONNX model may need to do preprocessing for input data, such as, transpose or normalization. Please add these preprocessing parts to your Onnx service.
## Specific to Tensorflow service
- The runtime inference library provides APIs to support TensorFlow as an inference engine. These APIs are irt_infer_from_xxxx; please refer to section 10 for detailed information. You need to pass TENSORFLOW as the API parameter to specify TensorFlow as the backend engine. For example, the image API is irt_infer_from_image; the inputs are tensorData, model names, TENSORFLOW, and urlInfo, and the outputs are the rawDetectionResults of tensorData.
- The Tensorflow model may need to do preprocessing for input data, such as, transpose or normalization. Please add these preprocessing parts to your Tensorflow service.
# Sample: Add a service for CCAI
## Install packages:
> NOTE: If you are using the CCAI development docker image, you can skip this step. For more details about the CCAI development docker image, please refer to [Chapter 4].
{.is-info}
[Chapter 4]: https://docs.ukylin.com/en/Intel-CCAI-Development-Manual/Setup
a. `libfcgi-dev`
b. `libpython3.8-dev`
c. `Openvino` (must be the same version as in the CCAI container)
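On an Ubuntu development host, the first two packages can be installed from the distribution archives as below; OpenVINO must be installed separately and must match the version used in the CCAI container:
```bash
sudo apt install libfcgi-dev libpython3.8-dev
```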
## Compose the header file
Collect the necessary parts from section 10.4.1 of this manual and copy and paste them into the header file "inferenceservice.h". For example, we will use the image API, so the header should be:
```c++
#pragma once
// add necessary dependent headers
#include <memory>
#include <string>
#include <vector>
#include <opencv2/core.hpp>
// from 10.4.1.1
/**
*@brief Status code of inference
*/
#define RT_INFER_ERROR -1 //inference error
#define RT_LOCAL_INFER_OK 0 //inference successfully on local
#define RT_REMOTE_INFER_OK 1 //inference successfully on remote server
// from 10.4.1.2
/**
* @brief This is the parameters to do inference on remote server
*/
struct serverParams {
std::string url; //the address of server
std::string urlParam; //the post parameter of request
std::string response; //the response data of server
};
// from 10.4.1.4
/**
* @brief Do inference for image
* @param image Images input for network
* @param additionalInput Other inputs of network(except image input)
* @param xmls Path of IE model file(xml)
* @param rawDetectionResults Outputs of network, they are raw data.
* @param remoteServerInfo parameters to do inference on remote server
* @return Status code of inference
*/
int vino_ie_pipeline_infer_image(std::vector<std::shared_ptr<cv::Mat>>& image,
std::vector<std::vector<float>>& additionalInput,
std::string xmls,
std::vector<std::vector<float>*>& rawDetectionResults,
struct serverParams& remoteServerInfo);
```
## Extract service runtime library from CCAI container
> NOTE: If you are using the CCAI development docker image, you can skip this step. For more details about the CCAI development docker image, please refer to Chapter 4.
{.is-info}
```bash
docker run --rm <image> tar -C /usr/lib/x86_64-linux-gnu -cf - libinferservice.so | tar -xf -
```
## Write the main source code:
Create file *demo.cpp*:
```c++
#include <algorithm>
#include <fstream>
#include <string>
#include <vector>
#include <fcgiapp.h>
#include <opencv2/imgcodecs.hpp>
#include "inferenceservice.h"
const char* model_path = "./models/bvlc_alexnet.xml";
const char* label_path = "./models/bvlc_alexnet.labels";
int main(int argc, char *argv[]) {
// read label file
std::vector<std::string> labels;
std::ifstream label_stream(label_path);
if (label_stream.is_open()) {
std::string line;
while (std::getline(label_stream, line)) {
size_t s, l;
if ((s = l = line.find('\'')) == std::string::npos)
s = 0;
else if ((l = line.find('\'', ++s)) != std::string::npos )
l = l - s;
labels.emplace_back(line, s, l);
}
}
// init fcgi handle
FCGX_Request cgi;
if (FCGX_Init() || FCGX_InitRequest(&cgi, 0, 0))
return 1;
while (!FCGX_Accept_r(&cgi)) {
std::string response = "Status: ";
size_t length;
char *val;
if (!(val = FCGX_GetParam("CONTENT_TYPE", cgi.envp)) ||
strcmp(val, "application/octet-stream")) {
response += "415 Unsupported Media Type";
} else if (!(val = FCGX_GetParam("CONTENT_LENGTH", cgi.envp)) ||
!(length = strtoul(val, &val, 10)) || *val) {
response += "411 Length Required";
} else {
// read body
std::vector<char> data(length);
FCGX_GetStr(data.data(), length, cgi.in);
// do inference
std::vector<float> result;
std::vector<std::vector<float>> supplement;
std::vector<std::vector<float>*> results = { &result };
struct serverParams remote_info;
cv::Mat image = cv::imdecode(data, 1);
std::vector<std::shared_ptr<cv::Mat>> images = {
std::make_shared<cv::Mat>(image)
};
int rc = vino_ie_pipeline_infer_image(images, supplement, model_path,
results, remote_info);
// generate response
response += "200 OK\r\n" "Content-Type: text/plain\r\n\r\n";
if (rc == RT_REMOTE_INFER_OK) {
response = "Status: 501 Not Implemented";
} else if (rc) {
response += "inference error";
} else {
auto max = std::max_element(result.cbegin(), result.cend());
int idx = std::distance(result.cbegin(), max);
response += "tag: " + labels[idx] + "\n"
"confidence: " + std::to_string(*max) + "\n";
}
}
FCGX_PutStr(response.c_str(), response.size(), cgi.out);
}
FCGX_Free(&cgi, 1);
return 0;
}
```
## Build the program
```bash
g++ -o fcgi_demo -I /opt/intel/openvino/opencv/include demo.cpp libinferservice.so -L/opt/intel/openvino/opencv/lib -lopencv_imgcodecs -lopencv_core -lfcgi -Wl,--allow-shlib-undefined,--no-as-needed -lpython3.8
```
## Write the configuration file
Create file *16-demo.conf*:
```
fastcgi.server += (
"/cgi-bin/fcgi_demo" => ((
"socket" => "/tmp/fcgi_demo.socket",
"bin-path" => "/opt/fcgi/cgi-bin/fcgi_demo",
"check-local" => "disable",
"max-procs" => 1,
"bin-copy-environment" => ("PATH", "SHELL", "USER",
"http_proxy", "HTTP_PROXY",
"https_proxy", "HTTPS_PROXY",
"no_proxy", "NO_PROXY", "cl_cache_dir"),
"bin-environment"=>(
"LD_LIBRARY_PATH"=>"/opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64"
),
))
)
```
## Build docker image
Create the *Dockerfile*:
```dockerfile
FROM service_runtime
COPY --chown=www-data:www-data fcgi_demo /opt/fcgi/cgi-bin/
COPY 16-demo.conf /etc/lighttpd/conf-enabled/
```
Build image:
```bash
docker build -t ccai-demo .
```
## Test
Start the ccai-demo container and run command:
```bash
curl -X POST -H "Content-Type: application/octet-stream" --data-binary @picture.jpg http://localhost:8080/cgi-bin/fcgi_demo
```

View File

@ -0,0 +1,89 @@
---
title: 09. Encryption and Authentication
description: How to enable Encryption and Authentication for CCAI
published: true
date: 2021-10-28T03:18:19.894Z
tags: ccai
editor: markdown
dateCreated: 2021-10-26T06:58:24.190Z
---
The framework supports encryption and authentication. You can choose to enable both or any one of them. For security reasons, it is better to have both of them enabled. But these features are disabled by default for better performance, because the server and clients are both running on localhost. Encryption and authentication can be enabled by changing the configuration files.
# Encryption
When encryption is enabled, TLS is applied to the communication between the server and clients. We highly recommend using TLS v1.2 or v1.3 for encryption; TLS v1.0 and v1.1 are obsolete now and should only be used for compatibility with old clients.
To enable encryption, a server private key and a certificate are required. The certificate can be acquired from a CA (Let's Encrypt, for example) or self-signed as shown below.
- 1) Generate server key and certificate (for self-signed certificate only):
```bash
$ cat > certificate.conf << EOF
[req]
default_bits = 4096
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[dn]
C = CN
ST = BJ
O = IAGS SSE AEE Team
CN = localhost
[req_ext]
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
IP.1 = ::1
IP.2 = 127.0.0.1
EOF
$ openssl genrsa -out ca.key 4096
$ openssl req -new -x509 -key ca.key -sha256 -subj "/C=CN/ST=BJ/O=Fake CA" -days 365 -out ca.crt
$ openssl genrsa -out server.key 4096
$ openssl req -new -key server.key -out server.csr -config certificate.conf
$ openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.pem -days 365 -sha256 -extfile certificate.conf -extensions req_ext
```
There are several files generated. server.key is the server private key, and server.pem is the server certificate. ca.crt is the local root CA certificate which clients need to trust.
- 2) Enable TLS
To make the key and certificate files accessible to the REST and gRPC servers in the container, start the docker container with the arguments:
-v /path/to/private_key:/etc/lighttpd/server.key:ro -v /path/to/certificate:/etc/lighttpd/server.pem:ro
/path/to/private_key and /path/to/certificate are the paths on the host where you saved the key and certificate files generated in step 1) above.
This can be done by adding the line above as parameters to the "docker run" command in the docker launching script located in:
/opt/intel/service_runtime/service_runtime.sh
Pay attention to the permissions of the key and certificate files in the container. They must be readable by the lighttpd and nghttpx daemons.
> Note: once encryption is enabled, all REST API URLs in the test cases above or in your own application must change from http to https for valid access.
{.is-info}
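For example, a request that previously used plain HTTP can be sent over TLS by trusting the root CA generated above (a sketch; the endpoint and port are illustrative):
```bash
curl --cacert ca.crt -X POST -H "Content-Type: application/octet-stream" \
     --data-binary @picture.jpg https://localhost:8080/cgi-bin/fcgi_demo
```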
# Enable authentication
- 1) Create user account database
```bash
# install Berkeley DB command line tools
$ sudo apt install db5.3-util
# add or change a user into the database
$ { echo "username"; openssl passwd -6 "password"; } | db5.3_load -Tt hash apiuser.db
# to examine the database
$ db5.3_dump -p apiuser.db
```
These are the basic operations to manipulate the account database. You can write a script to make life easier. Here is an example script from the official linux-pam repo: *https://raw.githubusercontent.com/linux-pam/linux-pam/master/modules/pam_userdb/create.pl*.
- 2) Enable authentication
To make the account database accessible to the REST and gRPC servers in the container, start the docker container with the arguments:
```
-v /path/to/database:/etc/lighttpd/apiuser.db:ro
```
*/path/to/database* is the path on the host where you saved the database generated in step 1) above.
This can be done by adding the line above as parameters to the "docker run" command in the docker launching script located in: */opt/intel/service_runtime/service_runtime.sh*
Pay attention to the permissions of the database in the container. It must be readable by the lighttpd and grpc_inference_service daemons.
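Once authentication is enabled, REST clients have to present an account from the database above. Assuming the gateway uses HTTP basic authentication (an assumption for this sketch; endpoint and account are illustrative), a request could look like:
```bash
# username/password must exist in apiuser.db
curl -u username:password -X POST -H "Content-Type: application/octet-stream" \
     --data-binary @picture.jpg http://localhost:8080/cgi-bin/fcgi_demo
```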

View File

@ -0,0 +1,384 @@
---
title: 05. Generate
description: How to generate CCAI packages and container image
published: true
date: 2021-10-28T06:02:54.662Z
tags: ccai
editor: markdown
dateCreated: 2021-10-21T21:44:32.771Z
---
# Build CCAI packages and generate CCAI container image from pre-built binaries
> Note: Because the model files for the current sample test cases are quite large, and the build process also generates deb packages for those model files, the build typically needs 4 GB of memory if you'd like to build a full image/packages including all services/models. It is better to equip your build machine with more memory, say 8 GB or more; otherwise you may encounter build failures due to running out of memory. Also, if you have run the related CCAI services (especially ASR/TTS/Classification ...) before building, those models are already loaded into memory; in that situation one mitigation is to stop unnecessary processes on the build machine, especially docker containers if possible.
Another option is to include only specific models in the build; the steps to do this are shown in the following sections (section 4 below).
{.is-info}
The release provides Docker files and service framework binaries which can be used to re-generate the docker image and the deb packages that include the whole logic of this framework.
> Note: all component versions mentioned below in this document are examples; they will change/update with new features and new releases, so please replace them with the exact versions/releases you actually get.
{.is-warning}
1) Download from https://www.ukylin.com/cooperation/intel.html and extract the tar file.
For example, download ccaisf_release_1.0-210201.tar.gz and then
```bash
$ tar xvf ccaisf_release_1.0-210201.tar.gz
```
2) Build docker image.
```bash
$ cd ccaisf_release_1.0-210201/
```
CCAI now supports several inference engines as the backend engine: OpenVINO, PyTorch, ONNX Runtime, and TensorFlow. You can choose one or more of them from the CCAI build menu; that also applies to some samples and example services. This is done via menuconfig during the configuration process. The detailed instructions can be found in section 3.3 (How to build from source).
Once you have made those choices, build the container images as follows:
```bash
$ ./release_build.sh base_image
$ ./release_build.sh openvino_image
$ ./release_build.sh image
```
3) Build deb packages (service_runtime, health-monitor, simlib, test)
```bash
$ ./release_build.sh package
```
4) Build deb packages (models)
```bash
$ ./release_build.sh models_package
```
By default, only 3 model packages and 1 cl_cache package will be generated:
```
service-runtime-models-ocr_1.0-210201_all.deb
service-runtime-models-tts_1.0-210201_all.deb
service-runtime-models-wsj-dnn5b_1.0-210201_all.deb
service-runtime-models-cl-cache_1.0-210201_all.deb
```
If you want to build all model packages, you need to take the following steps:
* a) Edit the file *package/models/debian/control* and uncomment the lines for the models you want built into packages; the following lines are an example:
```
#Package: service-runtime-models-lspeech-s5-ext
#Architecture: all
#Depends: service-runtime, ${misc:Depends}
#Description: service-runtime-models
# service-runtime-models
#Package: service-runtime-models-classification
#Architecture: all
#Depends: service-runtime, ${misc:Depends}
#Description: service-runtime-models
# service-runtime-models
#Package: service-runtime-models-face-detection
#Architecture: all
#Depends: service-runtime, ${misc:Depends}
#Description: service-runtime-models
# service-runtime-models
#Package: service-runtime-models-facial-landmarks
#Architecture: all
#Depends: service-runtime, ${misc:Depends}
#Description: service-runtime-models
# service-runtime-models
#Package: service-runtime-models-deeplab
#Architecture: all
#Depends: service-runtime, ${misc:Depends}
#Description: service-runtime-models
# service-runtime-models
```
* b) Re-execute the model package build command:
```bash
$ ./release_build.sh models_package
```
5) Executing `docker images` will list the docker images:
```bash
$ docker images
REPOSITORY TAG ...
service_runtime v1_20210129_121352 …
service_runtime_openvino ubuntu_20.04 ...
service_runtime_base ubuntu_20.04 …
...
```
6) Deb packages will be generated under ccaisf_release_1.0-210201/packages
```bash
$ ls ./packages
package/service-runtime_1.0-210201_all.deb
package/service-runtime-models-classification_1.0-210201_all.deb
package/service-runtime-models-deeplab_1.0-210201_all.deb
package/service-runtime-models-face-detection_1.0-210201_all.deb
package/service-runtime-models-facial-landmarks_1.0-210201_all.deb
package/service-runtime-models-lspeech-s5-ext_1.0-210201_all.deb
package/service-runtime-models-ocr_1.0-210201_all.deb
package/service-runtime-models-tts_1.0-210201_all.deb
package/service-runtime-models-wsj-dnn5b_1.0-210201_all.deb
package/service-runtime-models-cl-cache_1.0-210201_all.deb
package/service-runtime-simlib-test_1.0-210201_amd64.deb
package/service-runtime-test_1.0-210201_all.deb
```
If you see the results of steps 5) and 6), the build is successful. If you want to run the service framework on the current machine, you only need to install all the deb packages:
> Note: If not testing the OTA process, please uninstall the existing packages before installing the new ones to avoid possible conflicts with the OTA logic.
{.is-info}
```bash
$ dpkg -i package/*.deb
```
If you want to run the AI service framework on other machines, there are two options.
**Option 1**:
Save the docker image to a tar file:
```bash
$ docker save service_runtime:v1_20210129_121352 -o service_runtime_latest.tar
```
And then copy the service_runtime_latest.tar and all deb packages to the other machine, load the `service_runtime_latest.tar` and install the deb packages on the other machine:
```bash
$ docker load -i service_runtime_latest.tar
$ dpkg -i package/*.deb
```
**Option 2**:
If you have a docker registry server, you can push the docker image to your docker registry as service_runtime:latest:
```
$ docker tag service_runtime:v1_20210129_121352 <your registry>/service_runtime:latest
$ docker push <your registry>/service_runtime:latest
```
And then pull the docker image on the other machine:
```bash
$ docker pull <your registry>/service_runtime:latest
```
After that, copy all deb packages to the other machine and install them there:
```bash
$ dpkg -i package/*.deb
```
# How to build from source
Once you have access to the CCAI components, please follow the instructions below to check out the whole project.
## Download components
```bash
$> cd container/script/integration_workdir
$> make defconfig
```
If you want to change the default configuration, you can execute `make menuconfig`:
```bash
$>make menuconfig
```
![image18.png](/temp/image18.png)
You can type Space/Enter to expand a branch, or enable/disable an option.
![image19.png](/temp/image19.png)
```bash
$> make base_image
```
This command will fetch all dependencies for building CCAI and generate the base container image, which is the lowest layer of the CCAI framework image and includes the basic system libraries and commands for constructing a workable system.
```bash
$> make inference_engine_image
```
This command will generate the 2nd layer on top of the base layer from the step above; this layer includes the CCAI framework backend stacks like OpenVINO, the OpenCL driver, etc.
```
$> make
```
This command will generate the topmost layer, which includes the core of the CCAI framework and all services exposed through the REST and gRPC APIs.
Once you have finished the steps above, you will get a docker image, `service_runtime`, and a folder, `service_runtime`, under the folder `integration_workdir`. To launch CCAI, you only need to execute:
```bash
$> cd ./service_runtime
$>./service_runtime.sh start
```
## Build host packages
```bash
$> ../release.sh
```
This command will generate a tar file named `ccaisf_release_xxx.tar.gz` under the folder `integration_workdir/release`. Please refer to [Chapter 5.1] above to build CCAI deb packages.
[Chapter 5.1]: https://docs.ukylin.com/en/Intel-CCAI-Development-Manual/Generate#build-ccai-packages-and-generate-ccai-container-image-form-pre-built-binaries
## Install CCAI services and image on host
Please refer to [Chapter 5] above in detail on how to install CCAI.
[Chapter 5]: https://docs.ukylin.com/en/Intel-CCAI-Development-Manual/Generate
# How to check all component versions
The deb packages which include all *.service files have a unified version, which can be queried through dpkg after installation:
```bash
$ dpkg -l | grep service-runtime
```
After the build, the docker image ID will be saved in a file named docker_image. This file will be packaged into service-runtime_xxx.deb and finally installed to /opt/intel/service_runtime/docker_image. service-runtime.service will launch the docker image with this ID.
So, comparing the result of `cat /opt/intel/service_runtime/docker_image` with the output of `docker ps` can be used to confirm whether the running container is the correct version, for example:
```bash
$ cat /opt/intel/service_runtime/docker_image
sha256:8ee35d329533e9c76903767cbd03761a3cf70ff1ebd17dea85be84028a317b06
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2406a9bee2ea 8ee35d329533 "/start.sh" 17 minutes ago Up 17 minutes (unhealthy) 0.0.0.0:8080-8081->8080-8081/tcp service_runtime_container
```
The two strings above should match (8ee35d329533).
# Generate CCAI OTA image
At the current stage (for the Penguin Peak project), CCAI defines 3 Docker layers from top to bottom for constructing the CCAI container image:
* service_runtime
* service_runtime_openvino
* service_runtime_base
{.grid-list}
Among them, the layer service_runtime is based on the layer service_runtime_openvino, and the layer service_runtime_openvino is built on top of the layer service_runtime_base. Once the whole container image is built according to the instructions in the CCAI components build document, only the highest layer, service_runtime, needs to be installed to the OS of target devices, because the dependent layers of service_runtime are automatically included in this image.
> VERY IMPORTANT:
> But the images that contain service_runtime_base and service_runtime_openvino MUST be saved as a base for future use. In other words, to make incremental OTA work, you should save these 2 images:
> service_runtime_openvino
> service_runtime_base
> You can save them together with your build environment, or save them in your docker image registry for easier maintenance.
{.is-warning}
When there is a new CCAI release, depending on the specific changes, it may involve changes to any of the 3 layers of the container image.
If the 2 bottom layers, service_runtime_base and service_runtime_openvino, are not changed, then you only need to build the image for the highest layer, service_runtime, directly. Because this layer sits on top of the other 2 layers, generating the image requires the images you saved as described above, which contain the layers service_runtime_openvino and service_runtime_base.
If only service_runtime_base is unchanged, then you need to build the image service_runtime_openvino and the image service_runtime respectively, using the previously saved image that contains the layer service_runtime_base.
If all 3 layers have changed, then you have to build all 3 images. In this case you will not need any previously saved images, BUT you do have to save the newly generated images for any further incremental OTA image builds.
Below, we show a few examples (only showing how to build the Docker images, not the packages) to demonstrate the process:
Example 1: integration for the first time (pre-install case), or any situation where the content of the layer service_runtime_base has changed, which means all images need to be re-generated.
a) Execute the commands below
```bash
$ ./release_build.sh base_image
$ ./release_build.sh openvino_image
$ ./release_build.sh image
```
You will get 3 Docker images as shown below:
![image20.png](/temp/image20.png)
b) The image service_runtime needs to be installed to the OS on the target device, and the images service_runtime_openvino and service_runtime_base need to be saved for future incremental OTA image creation.
To make it easier to maintain the latest versions of the images service_runtime_openvino and service_runtime_base, you may push them to your registry server with tags, for example:
```bash
$ docker tag IMAGE_ID REGISTRY_SERVER/service_runtime_openvino:ubuntu_20.04
$ docker tag IMAGE_ID REGISTRY_SERVER/service_runtime_base:ubuntu_20.04
$ docker push REGISTRY_SERVER/service_runtime_openvino:ubuntu_20.04
$ docker push REGISTRY_SERVER/service_runtime_base:ubuntu_20.04
```
Example 2: there is a new CCAI release and only the image containing the layer service_runtime has changed.
a) Check the Docker images in your integration environment or your docker image registry; you must have the latest version of the image service_runtime_openvino on your machine to generate the image service_runtime on top of it. On your local machine, execute:
```bash
$ docker images
```
If the image service_runtime_openvino is not found, you may need to pull it from your registry server (if you pushed it before) and give it the correct image name and tag:
```bash
$ docker pull REGISTRY_SERVER/service_runtime_openvino:ubuntu_20.04
$ docker tag REGISTRY_SERVER/service_runtime_openvino:ubuntu_20.04 service_runtime_openvino:ubuntu_20.04
```
![image21.png](/temp/image21.png)
b) Build the image service_runtime following the instructions in the CCAI build document:
```bash
$ ./release_build.sh image
```
You will get the Docker image service_runtime which should be installed to the OS of the target device.
![image22.png](/temp/image22.png)
Example 3: there is a new CCAI release where both the layer service_runtime_openvino and the layer service_runtime have changed.
a) Check the Docker images in your integration environment; you must have the latest version of service_runtime_base on your machine.
```bash
$ docker images
```
If service_runtime_base is not present, you may need to copy it from the previously saved image, or pull it from your registry server, and give it the correct image name and tag:
```bash
$ docker pull REGISTRY_SERVER/service_runtime_base:ubuntu_20.04
$ docker tag REGISTRY_SERVER/service_runtime_base:ubuntu_20.04 service_runtime_base:ubuntu_20.04
```
![image24.png](/temp/image23.png)
b) Build both the image service_runtime_openvino and the image service_runtime respectively.
```bash
$ ./release_build.sh openvino_image
$ ./release_build.sh image
```
![image24.png](/temp/image24.png)
c) The newly generated image service_runtime needs to be installed to the OS of the target device, and the newly generated image service_runtime_openvino needs to be updated to your registry server.
> Meanwhile, please note, the newly generated image service_runtime_openvino should be saved (in local build environment or in docker registry server) for future incremental OTA image creation.
{.is-warning}
```bash
$ docker tag IMAGE_ID REGISTRY_SERVER/service_runtime_openvino:ubuntu_20.04
$ docker push REGISTRY_SERVER/service_runtime_openvino:ubuntu_20.04
```

View File

@ -0,0 +1,65 @@
---
title: 07. How to use AI Service
description: How to use AI services provided by CCAI
published: true
date: 2021-10-28T03:17:29.943Z
tags: ccai
editor: markdown
dateCreated: 2021-10-26T06:55:28.717Z
---
As mentioned in chapter 6, the CCAI service work mode is:
![image7.png](/temp/image4.png)
AI services for CCAI include two parts: the client side and the server side. Customer applications are the so-called client side; the CCAI services are the server side. The client side sends HTTP POST requests or gRPC requests to the server side, and the server side returns responses to the client side.
# Request serving via REST APIs
To use the REST APIs provided by CCAI, the common steps for implementing a request in your client application are:
- Construct post request
- url: the AI service address, for example: url = 'http://localhost:8080/cgi-bin/fcgi_py_tts'. If the client side and server side are not on the same machine, localhost needs to be replaced with your IP address.
- post_parameter: different for different AI services.
- sending post request to fcgi AI service: `response = requests.post(url, post_parameter)`
- Get the inference result from AI services. The response is the result.
Please refer to [10.1( FCGI APIs Manual)] for detailed steps to implement different AI client applications.
[10.1( FCGI APIs Manual)]: https://docs.ukylin.com/en/Intel-CCAI-Development-Manual/APIs-Reference-List#fcgi-apis-manual
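As a concrete sketch of the steps above (the form field below is illustrative; each service defines its own post parameters, see the FCGI APIs Manual):
```bash
# POST a request to the TTS FastCGI service; replace the field(s) with the
# parameters defined for the service you are calling
curl -X POST -d "text=hello" http://localhost:8080/cgi-bin/fcgi_py_tts
```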
# Request serving via gRPC APIs
To use the gRPC APIs provided by CCAI, the common steps for implementing a request in your client application are:
- create call credential
```
metadata_plugin = BasicAuthenticationPlugin(username, password)
call_cred = grpc.metadata_call_credentials(metadata_plugin)
```
- Create channel credential
```
channel_cred = grpc.ssl_channel_credentials()
```
- Open a gRPC secure channel. If the client side and server side are not on the same machine, localhost needs to be replaced with your IP address.
```
credentials = grpc.composite_channel_credentials(channel_cred, call_cred)
with grpc.secure_channel('localhost:' + port, credentials) as channel:
```
- Get the result
```
stub = inference_service_pb2_grpc.InferenceStub(channel)
```
# Proxy setting
If you are behind a firewall, or you are developing within another container or VM and want to communicate with a CCAI container running on the same physical machine, and your system has a proxy configured, you may need to check the proxy settings and make sure the service IP address is included in no_proxy.
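For example, if the CCAI container is reachable at 192.168.1.10 (an illustrative address), you can add it to no_proxy for the current shell:
```bash
export no_proxy="$no_proxy,localhost,192.168.1.10"
```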

View File

@ -0,0 +1,101 @@
---
title: 08. Integrate New AI Services
description: How to integrate new AI Services with CCAI Framework
published: true
date: 2021-10-28T03:18:07.698Z
tags: ccai
editor: markdown
dateCreated: 2021-10-26T06:57:20.093Z
---
Once you have new services, to make them able to accept requests from outside the CCAI container and return the results of a specific AI task, you have to deploy those services in the CCAI container.
# Where to put those services file to
Please extract the CCAI release tar file, say ccaisf_release_xx-xxx.tar.gz, and copy your files and directories, organized in a runtime hierarchy, into the folder *docker/app_rootfs*.
A FastCGI service example:
```
ccaisf_release_xx-xxx/docker/app_rootfs
├── etc
│ ├── lighttpd
│ │ ├── conf-available
│ │ │ ├── 16-classification.conf
│ │ ├── conf-enabled
│ │ │ ├── 16-classification.conf -> ../conf-available/16-classification.conf
├── opt
│ ├── fcgi
│ │ └── cgi-bin
│ │ ├── fcgi_classfication
```
A gRPC service example:
```
ccaisf_release_xx-xxx/docker/app_rootfs
├── etc
│ └── sv
│ ├── grpc_inference_service_speech_1
│ │ ├── run
│ │ └── supervise -> /tmp/grpc_inference_service_speech_1/supervise
│ ├── runit
│ │ └── runsvdir
│ │ └── default
│ │ ├── grpc_inference_service_speech_1 -> /etc/sv/grpc_inference_service_speech_1
└── usr
└── sbin
├── grpc_inference_service
```
# Where to put related Neural Network Models file
Models will be installed to the folder /opt/intel/service_runtime/models on the host; CCAI maps this folder to the container's /opt/fcgi/cgi-bin/models. So you can write your own debian package configuration files and build your deb package to install your models, or you can put your models into the CCAI release folder and use CCAI's helper script to build the deb package:
1. Put models to ccaisf_release_xx-xxx/package/models.
2. Modify ccaisf_release_xx-xxx/package/models/debian/control to add your package.
3. Add a service-runtime-models-xxx.install file to ccaisf_release_xx-xxx/package/models/debian to install your models (a sketch follows below).
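As a sketch of step 3, a file like *service-runtime-models-your-model.install* (the file name and model directory are placeholders) could contain a single debhelper-style "source-path destination-directory" line, assuming the standard debhelper install-file format:
```
models/your_model_dir opt/intel/service_runtime/models/
```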
# How to enable services via API gateway
As described in chapter 6.3, for FastCGI services you need to add/change conf files and put them under specific folders so that the API gateway recognizes your services and launches them according to the configuration file description.
For gRPC services, if it is a brand new service, do the following steps to enable it:
1. Create a folder under docker/app_rootfs/etc/sv/, example:
```bash
$> mkdir docker/app_rootfs/etc/sv/your_grpc_service
```
2. Write a script named run to launch your service, and put the script in docker/app_rootfs/etc/sv/your_grpc_service, for example:
```bash
$> cat > docker/app_rootfs/etc/sv/your_grpc_service/run << EOF
#!/bin/bash
exec /usr/sbin/your_service
EOF
```
3. Create a link to manage the service at runtime.
```bash
$>ln -sf /tmp/your_grpc_service/supervise docker/app_rootfs/etc/sv/your_grpc_service/
```
4. Create a link to enable your gRPC service
```bash
$>ln -sf /etc/sv/your_grpc_service docker/app_rootfs/etc/runit/runsvdir/default/
```
# How to generate new container image
Please refer to Chapter 5, execute `release_build.sh` to generate a new container image or deb package for models.
```bash
$> cd ccaisf_release_xx-xxx
$> ./release_build.sh image
$> ./release_build.sh models_package
```

View File

@ -0,0 +1,115 @@
---
title: 03. Integrate
description: Integrate and use CCAI runtime environment
published: true
date: 2021-10-28T03:16:33.069Z
tags: ccai
editor: markdown
dateCreated: 2021-10-21T21:05:40.662Z
---
# How to get CCAI components/images access
You can visit [https://www.ukylin.com/cooperation/intel.html](https://www.ukylin.com/cooperation/intel.html).
# How to install the pre-built runtime and verify it quickly
## Prepare and quick start
> Note: all components versions mentioned below in this document are for example, they will change/update according to new features/new releases in following days, so please replace the specific version with those exact versions/releases depending on what you get.
{.is-info}
This part targets only the Cloud Client AI Service Framework released on 20210201+ with OpenVINO 2021.1 as the default inference backend. Preconditions to meet:
**System requirements**:
1. Install Ubuntu 20.04 or a UKylin release on your host.
Execute (if needed):
```bash
$> sudo apt update
$> sudo apt install docker.io libgrpc++1
```
2. Configure the kernel to support docker. The UKylin kernel (if you would like to use the UKylin kernel instead of your own) must be configured and compiled with the following required configuration options:
```
CONFIG_CGROUP_DEVICE=y
CONFIG_BRIDGE=y
```
## Proxy setting
If you are behind a firewall, you may need to set a proxy correctly to pull our pre-built docker image of CCAI. Please execute the following commands to check the settings:
```bash
$> env | grep proxy
no_proxy=localhost
https_proxy=http://your-proxy-server:your-proxy-port
http_proxy=http://your-proxy-server:your-proxy-port
```
If you configure docker to use a proxy, please add *nvbox.bj.intel.com* to `no_proxy`.
## Container image preparation
If you want to have a quick try instead of building your own framework image, you can pull our existing docker image directly (otherwise, please build your own docker image as described in the following chapters of this document) before you install the host packages and run any testing:
```bash
$> docker pull TBD
```
## Download and install service-framework packages/test cases/docker files in host
You can checkout `DEB` resources from *https://www.ukylin.com/cooperation/intel.html*.
```bash
$> sudo dpkg -i service_runtime_debs/*.deb
```
## Start/Stop service-framework
By default, service-framework will be started automatically after the installation. You can manually stop/start service-framework by following the instructions:
```bash
# Stop:
$> sudo systemctl stop service-runtime-health-monitor.service
$> sudo systemctl stop service-runtime.service
# Start:
$> sudo systemctl start service-runtime-health-monitor.service
$> sudo systemctl start service-runtime.service
```
Now, the service container uses ports 8080 and 8081 for HTTP services and gRPC services respectively. If you'd like to change the ports to fit your requirements, you can change the default port settings in the file */lib/systemd/system/service-runtime.service*.
Open this file, and find this line:
```
ExecStart=/opt/intel/service_runtime/service_runtime.sh start --port-rest-api 8080 --port-grpc 8081
```
Then change the ports setting according to your needs.
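After editing the unit file, reload systemd and restart the service so the new port settings take effect:
```bash
$> sudo systemctl daemon-reload
$> sudo systemctl restart service-runtime.service
```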
## Verify CCAI functions with samples or test cases
You can execute `docker ps | grep service_runtime_container` to check if the CCAI container is running.
```bash
$ docker ps | grep service_runtime_container
c9372e841a00 574dee124467 "/start.sh" 16 minutes ago Up 16 minutes 127.0.0.1:8080-8081->8080-8081/tcp, 127.0.0.1:50006-50007->50006-50007/tcp service_runtime_container
```
You can execute `docker exec -it service_runtime_container ps -ef` to view all processes running in the CCAI container.
```
$ docker exec -it service_runtime_container ps -ef
UID PID PPID C STIME TTY TIME CMD
www-data 1 0 0 02:33 pts/0 00:00:00 /sbin/docker-init -- /start.
www-data 6 1 0 02:33 pts/0 00:00:00 /bin/bash /start.sh
```
If you want to do more tests, please refer to [Test Cases](https://docs.ukylin.com/en/Intel-CCAI-Development-Manual/Testcases).
## How to file bugs for the CCAI project
Please use this link for bug reports, discussion, and technical support:
*https://forum.ukylin.com*

View File

@ -0,0 +1,24 @@
---
title: 01. Overview
description: What is Intel Cloud-Client AI Service Framework (CCAI)
published: true
date: 2021-11-09T08:08:52.425Z
tags: ccai
editor: markdown
dateCreated: 2021-10-21T20:37:35.928Z
---
As the cloud-driven development model becomes more and more popular, we face strong requirements to follow the typical development process and mode of cloud applications, which not only provides a developer-friendly experience but also keeps and significantly grows our ecosystem. Meanwhile, on the client side, Intel is providing more and more powerful hardware computation capability and introducing more flexible choices via various hardware accelerators to improve application performance. Specifically for client AI usages, to enable non-AI-expert developers to add AI features quickly and easily, a set of high-level, usage-driven APIs/SDK can provide an abstraction of the low-level AI inference framework and hide all in-depth AI-related details, so that developers can focus on their original business logic.
With all of the situations above in mind, we built a high-level, usage-driven API framework which provides cloud-like APIs and hides all low-level backend details, while leveraging local client AI accelerators as much as possible, so that the local client platform can be treated as an extension of the remote cloud.
In brief, it is an AI services API framework with the same development mode as traditional cloud applications, but with the significant benefits of the local client platform as the inference platform: low latency, good privacy, and independence from the remote cloud and network bandwidth.
* **Service** - abstracts the AI capabilities as service APIs
* **Framework** - facilitates CSPs/ISVs in developing AI services for the client
* **Client AI** - exposes client HW AI capabilities to application developers
* **Cloud** - provides a seamless dev experience for cloud/web app developers
* **Intel** - platform-differentiating capability developed by Intel
{.grid-list}
![image8.png](/temp/image8.png)

View File

@ -0,0 +1,166 @@
---
title: 04. Setup
description: How to setup development environment
published: true
date: 2021-10-28T03:16:48.227Z
tags: ccai
editor: markdown
dateCreated: 2021-10-21T21:20:23.500Z
---
For convenient development, we provide a development container which includes all the dependencies for developing services for CCAI. This is an option for convenience; you can still develop CCAI services in your own working environment, such as Ubuntu 20.04.
# Download and run development docker image
You can download a pre-built for-development docker image and a launcher script directly for development from *https://www.ukylin.com/cooperation/intel.html*.
For example
```bash
/home/pub/images/service_runtime_devel.tar.gz .
/home/pub/images/service_runtime_devel.sh .
```
Load image:
```bash
$ gzip -cd service_runtime_devel.tar.gz | docker load
```
Launch the development docker image:
```bash
$ service_runtime_devel.sh
```
Now you can enter the development container environment for CCAI service development. If you would like to change the CCAI framework core itself and you have access to the CCAI framework source code, you can also copy the source code to an accessible folder (like your `$HOME`) and do CCAI core development within the container.
# Enter development container environment
After executing `service_runtime_devel.sh`, a container named `service_runtime_devel_container_$USER` will be run. To enter the container, you can execute:
```bash
$ docker exec -it service_runtime_devel_container_$USER /bin/bash
```
The host `$HOME` folder will be mounted into the container, so inside the container you can directly access the files in the host `$HOME` and edit/compile/verify your changes to CCAI services/core. For example, you can set up a link from the container folder to your home directory, and after making changes, simply restart the container to verify your modifications.
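For instance, a minimal sketch of that workflow, assuming a hypothetical work-in-progress service folder `~/my_ccai_service` on the host and the `/opt/fcgi/cgi-bin` FCGI location referenced elsewhere in this manual:
```bash
# Link the host-side work folder (visible through the mounted $HOME) into the
# location the runtime loads FCGI services from; adjust paths to your layout.
docker exec -it service_runtime_devel_container_$USER \
    ln -sf "$HOME/my_ccai_service" /opt/fcgi/cgi-bin/my_ccai_service
# After editing files on the host, restart the container (or re-run
# service_runtime_devel.sh) so your changes are picked up.
docker restart service_runtime_devel_container_$USER
```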
# Setup development environment directly in your machine
> Note: all component versions mentioned below in this document are examples; they will change/update with new features/new releases, so please replace the specific versions with the exact versions/releases you actually get.
{.is-info}
The steps above are the convenient path for CCAI-related development. In case you cannot use the pre-built devel container image, you can still set up the environment on your machine directly. Basically, you can use any development box for this; we recommend Ubuntu 20.04 as the host OS because we have only verified these steps against it.
The basic dependencies can be found in section 3.2.1 above. In addition, you need to install some extra packages to get a simple CCAI host environment (non-container).
Please follow the instructions below:
```bash
$> wget https://apt.repos.intel.com/openvino/2021/GPG-PUB-KEY-INTEL-OPENVINO-2021
$> sudo apt-key add GPG-PUB-KEY-INTEL-OPENVINO-2021
$> echo "deb https://apt.repos.intel.com/openvino/2021 all main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2021.list
$> sudo apt-get update
$> sudo apt-get install \
lighttpd lighttpd-mod-authn-pam libfcgi nghttp2-proxy libgrpc++1 gnupg \
python3 python3-pip python3-setuptools libnuma1 ocl-icd-libopencl1 \
libgtk-3-0 gstreamer1.0-plugins-good gstreamer1.0-plugins-bad \
python3-gi libxv1 \
intel-openvino-runtime-ubuntu20-2021.3.394 \
intel-openvino-gstreamer-rt-ubuntu-focal-2021.3.394 \
intel-openvino-gva-rt-ubuntu-focal-2021.3.394 \
unzip intel-gpu-tools libcurl4 python3-yaml libpci3
$> sudo pip3 install --only-binary :all: flup \
numpy opencv-python cython nibabel scikit-learn scipy tqdm \
requests grpcio protobuf
$> sudo pip3 install llvmlite==0.31.0 numba==0.48 librosa==0.6.3 \
pytest grpcio-tools
$> wget https://github.com/intel/compute-runtime/releases/download/20.41.18123/intel-gmmlib_20.3.1_amd64.deb
$> wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.5186/intel-igc-core_1.0.5186_amd64.deb
$> wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.5186/intel-igc-opencl_1.0.5186_amd64.deb
$> wget https://github.com/intel/compute-runtime/releases/download/20.41.18123/intel-opencl_20.41.18123_amd64.deb
$> wget https://github.com/intel/compute-runtime/releases/download/20.41.18123/intel-ocloc_20.41.18123_amd64.deb
$> wget https://github.com/intel/compute-runtime/releases/download/20.41.18123/intel-level-zero-gpu_1.0.18123_amd64.deb
$> sudo dpkg -i *.deb
$> wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.7.1%2Bcpu.zip
$> unzip libtorch-cxx11-abi-shared-with-deps-1.7.1+cpu.zip
$> sudo mkdir -p /opt/fcgi/cgi-bin
$> sudo cp libtorch/lib/libc10.so /opt/fcgi/cgi-bin/
$> sudo cp libtorch/lib/libgomp-75eea7e8.so.1 /opt/fcgi/cgi-bin/
$> sudo cp libtorch/lib/libtorch_cpu.so /opt/fcgi/cgi-bin/
$> sudo cp libtorch/lib/libtorch.so /opt/fcgi/cgi-bin/
$> sudo cp -r libtorch/ /opt/
$> wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-2.5.0.tar.gz
$> sudo mkdir /opt/tensorflow
$> sudo tar -C /opt/tensorflow -zxvf libtensorflow-cpu-linux-x86_64-2.5.0.tar.gz
$> sudo apt-get install intel-openvino-dev-ubuntu20-2021.1.1 \
build-essential cmake git git-lfs libfcgi-dev libcurl4-openssl-dev \
libssl-dev libpam0g-dev libgrpc-dev libgrpc++-dev \
libprotobuf-dev protobuf-compiler protobuf-compiler-grpc python3-dev \
libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libpci-dev \
docker.io
$> git clone https://github.com/pybind/pybind11
$> cd pybind11 && git checkout -b tmp f31df73
$> mkdir build && cd build && cmake .. && make -j$(nproc --all) install
```
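As an optional sanity check of the host setup (a sketch, assuming the OpenVINO runtime installed above lives under the default `/opt/intel/openvino_2021` prefix), you can source the environment and list the inference devices OpenVINO can see:
```bash
$> source /opt/intel/openvino_2021/bin/setupvars.sh
# Typically prints something like ['CPU', 'GNA', 'GPU'], depending on hardware and drivers
$> python3 -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"
```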
The development process is nothing special, but please note:
1. If you are developing an application using existing CCAI services, you'd better launch a CCAI container for testing. You can refer to [section 3.2] to get/install the CCAI container and host packages.
2. If you are developing CCAI services which will be provided to applications, you can develop and verify them on the host; if these services work as expected, you can then deploy the completed services into the CCAI container and launch the CCAI container for verification. The development and deployment of services are described in [chapter 5] and [6].
3. If you are developing the CCAI core itself, the flow is similar to step 2 above, but what you need to change are the core libraries and the container itself; you can also find useful information in [chapter 5] and [6].
[section 3.2]: https://docs.ukylin.com/en/Intel-CCAI-Development-Manual/Integrate#how-to-install-the-pre-built-runtime-and-verify-it-quickly
[chapter 5]: https://docs.ukylin.com/en/Intel-CCAI-Development-Manual/Generate
[6]: https://docs.ukylin.com/en/Intel-CCAI-Development-Manual/Develop
# Setup the Pulseaudio service
If you want to enable sound e2e cases, such as TTS or live ASR, you need to enable the pulseaudio service on the host side.
(1) On the host PC, install the pulseaudio package if it hasn't been installed yet.
For example:
```bash
$> sudo apt-get install pulseaudio
```
(2) Enable the TCP protocol of pulseaudio.
Edit the configuration file, for example:
```bash
$> sudo vim /etc/pulse/default.pa
```
Find the following TCP configuration:
```bash
#load-module module-native-protocol-tcp
```
Uncomment the TCP configuration (remove "#") and add authentication:
```bash
load-module module-native-protocol-tcp auth-anonymous=1
```
Save and quit the configuration file.
(3) Restart the pulseaudio service. For example:
```bash
$> sudo systemctl restart pulseaudio
```
Or kill the pulseaudio service directly:
```bash
$> sudo kill -9 xxxx    # xxxx is the pulseaudio process ID
```
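(4) Optionally, verify that the TCP module is loaded (a hedged check; the exact output may differ between pulseaudio versions):
```bash
$> pactl list short modules | grep module-native-protocol-tcp
```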

View File

@ -0,0 +1,777 @@
---
title: 11.Test cases and package installation
description:
published: true
date: 2021-10-28T02:40:55.574Z
tags: ccai
editor: markdown
dateCreated: 2021-10-27T08:26:21.922Z
---
# Enabled services for testing
To keep the deliverables unified but still easy to validate for basic functions of the CCAI framework, we have disabled all sample services by default since version 1.0-210201, because at the current stage there are no landed user cases from customers. For testing purposes, enable the services with these steps:
Create a folder on the host side under /opt/intel/service_runtime:
```
$>sudo mkdir -p /opt/intel/service_runtime/rootfs/d/etc/runit/runsvdir/default
$>sudo chown www-data.www-data /opt/intel/service_runtime/rootfs/d/etc/runit/runsvdir/default
```
Create the health monitor configuration file:
```
$>sudo mkdir -p /opt/intel/service_runtime/rootfs/f/opt/health_monitor
$>sudo bash -c 'cat > /opt/intel/service_runtime/rootfs/f/opt/health_monitor/config.yml << "EOF"
daemon_targets:
- lighttpd
- policy_daemon
grpc_targets:
fcgi_targets:
- fcgi_ocr.py
- fcgi_ocr
EOF'
$>sudo chown -R www-data.www-data /opt/intel/service_runtime/rootfs/f/opt/health_monitor
```

Replacing, adding, or removing services in the `fcgi_targets:` section above will replace, enable, or disable the related services in the health_monitor list. For version 1.0-210201, you can use the following services:
- fcgi_ocr.py
- fcgi_ocr
- fcgi_tts.py (please note, there is no fcgi_tts)
- fcgi_speech (please note, there is no fcgi_speech.py)
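For instance, to keep the TTS and speech sample services enabled as well, the `fcgi_targets:` section of the same config.yml could look like this (a sketch; list only the services you actually want monitored):
```
fcgi_targets:
  - fcgi_ocr.py
  - fcgi_ocr
  - fcgi_tts.py
  - fcgi_speech
```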
Restart the container and enable the services:
```
$>sudo systemctl restart service-runtime
$>docker exec -it service_runtime_container /bin/bash -c 'cd /etc/runit/runsvdir/default; for s in /etc/sv/*; do ln -sf $s; done'
```
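After the restart, you can quickly confirm that the enabled FCGI services are actually running, reusing the process listing shown earlier:
```
$>docker exec service_runtime_container ps -ef | grep fcgi
```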
Removing the created folders under /opt/intel/service_runtime and restarting the container will disable any enabled services again:
```
$>sudo rm -R /opt/intel/service_runtime/rootfs
```
# High Level APIs test cases
The exposed high-level API test cases are provided as individual test scripts. The usage of each case can be found in the following pages; the list of cases is below.
**(Most test cases have default inputs, which are the files preinstalled under specific folders by the deb package installation. From the WW4520 release, if you want to use your own input files such as images (-i), text (-s), or wav (-a), you can now pass input parameters to each test case.)**
## For testing all provided APIs in a batch
You can test all Python implementations with the existing test case set: `test-script/run_test_script.sh`
Usage:
```
cd /opt/intel/service_runtime/test-script/
sudo ./run_test_script.sh
```
## For testing Python implementation of related REST APIs
`test-script/test-demo/post_local_asr_py.py` (the default input audio file ("-a"): `how_are_you_doing.wav`; the default inference device ("-d"): GNA_AUTO)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_asr_py.py -a "AUDIO_FILE" -d "DEVICE"
```
Result:
```
{
"ret": 0,
"msg": "ok",
"data": {
"text": "HOW ARE YOU DOING\n"
},
"time": 0.777
}
processing time is: 0.7873961925506592
```
`test-script/test-demo/post_local_classfication_py.py` (default input file if no input parameter is given: classfication.jpg)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_classfication_py.py -i "IMAGE_FILE"
```
Result:
```
{
"ret": 0,
"msg": "ok",
"data": {
"tag_list": [
{
"tag_name": "sandal",
"tag_confidence": 0.7865033745765686
}
]
},
"time": 0.332
}
processing time is: 0.33770060539245605
```
`test-script/test-demo/post_local_face_detection_py.py` (default input file if no input parameter is given: face-detection-adas-0001.png)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_face_detection_py.py -i "IMAGE_FILE"
```
Result:
```
{
"ret": 0,
"msg": "ok",
"data": {
"face_list": [
{
"x1": 611,
"y1": 106,
"x2": 827,
"y2": 322
},
{
"x1": 37,
"y1": 128,
"x2": 298,
"y2": 389
}
]
},
"time": 0.303
}
processing time is: 0.3306546211242676
```
`test-script/test-demo/post_local_facial_landmark_py.py` (default input file if no input parameter is given: face-detection-adas-0001.png)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_facial_landmark_py.py -i "IMAGE_FILE"
```
Result:
```
{
"ret": 0,
"msg": "ok",
"data": {
"image_width": 916,
"image_height": 502,
"face_shape_list": [
{
"x": 684.5769672393799,
"y": 198.69771003723145
},
{
"x": 664.4035325050354,
"y": 195.72683095932007
},
……
{
"x": 243.1659236550331,
"y": 211.56765642762184
}
]
},
"time": 0.644
}
processing time is: 0.6765329837799072
```
`test-script/test-demo/post_local_ocr_py.py` (default input file if no input parameter is given: intel.jpg)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_ocr_py.py -i "IMAGE_FILE"
```
Result:
```
{
"ret": 0,
"msg": "ok",
"data": {
"item_list": [
{
"item": "",
"itemcoord": [
{
"x": 159,
"y": 91,
"width": 144,
"height": 83
}
],
"itemstring": "intel",
"word": [
{
"character": "i",
"confidence": 0.9997048449817794
},
{
"character": "n",
"confidence": 7342.882333942596
},
{
"character": "t",
"confidence": 0.03543140404695105
},
{
"character": "e",
"confidence": 2068.173618863451
},
{
"character": "l",
"confidence": 0.006070811107476452
}
]
},
{
"item": "",
"itemcoord": [
{
"x": 203,
"y": 152,
"width": 176,
"height": 78
}
],
"itemstring": "inside",
"word": [
{
"character": "i",
"confidence": 0.9999999989186683
},
{
"character": "n",
"confidence": 4.0572939432830824e-08
},
{
"character": "s",
"confidence": 0.0015244375426548887
},
{
"character": "i",
"confidence": 3807.4854890027605
},
{
"character": "d",
"confidence": 0.42974367345764747
},
{
"character": "e",
"confidence": 0.008770792351958176
}
]
}
]
},
"time": 5.69
}
processing time is: 5.702326774597168
```
`test-script/test-demo/post_local_policy_py.py`
**The default accelerator is CPU. Once you set another accelerator with the policy setting API, that setting remains in effect until you explicitly change it.**
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_policy_py.py -d CPU -l 1
```
Result:
```
successfully set the policy daemon
processing time is: 0.004211902618408203
```
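For example, to switch inference to another accelerator and later back to the default (a sketch; valid device names depend on what your platform actually exposes, e.g. GPU only works if the GPU runtime and driver are installed):
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_policy_py.py -d GPU -l 1
# run your workloads, then restore the default
python3 ./test-demo/post_local_policy_py.py -d CPU -l 1
```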
`test-script/test-demo/post_local_tts_py.py` (default input file if no input parameter is given: test_sentence.txt)
**(So far, for easy testing of the pipeline, we have some rules on TTS input: the input must be an English string saved in test_sentence.txt.)**
Usage:
```
cd /opt/intel/service_runtime/test-script/
sudo python3 ./test-demo/post_local_tts_py.py -s "SENTENCE_FILE"
```
Result:
```
{
"ret": 0,
"msg": "ok",
"data": {
"format": 2,
"speech": "UklGRjL4Aw..."
"md5sum": "3bae7bf99ad32bc2880ef1938ba19590"
},
"time": 7.283
}
processing time is: 7.824979066848755
```
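For example, to synthesize a sentence of your own (a sketch; per the note above the input must be an English string saved in a test_sentence.txt file, here created under /tmp as a hypothetical location):
```
echo "hello from the CCAI text to speech service" > /tmp/test_sentence.txt
cd /opt/intel/service_runtime/test-script/
sudo python3 ./test-demo/post_local_tts_py.py -s /tmp/test_sentence.txt
```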
## For testing C++ implementation of related REST APIs
`test-script/test-demo/post_local_asr_c.py` (the default input audio file ("-a"): how_are_you_doing.wav; the default inference device ("-d"): GNA_AUTO)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_asr_c.py -a "AUDIO_FILE" -d "DEVICE"
```
Result:
```
{
"ret":0,
"msg":"ok",
"data":{
"text":HOW ARE YOU DOING
},
"time":0.695
}
processing time is: 0.6860318183898926
```
`test-script/test-demo/post_local_classfication_c.py` (default input file if no input parameter is given: classfication.jpg)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_classfication_c.py -i "IMAGE_FILE"
```
Result:
```
{
"ret":0,
"msg":"ok",
"data":{
"tag_list":[
{"tag_name":'sandal',"tag_confidence":0.786503}
]
},
"time":0.380
}
processing time is: 0.36538004875183105
```
`test-script/test-demo/post_local_face_detection_c.py` (default input file if no input parameter is given: face-detection-adas-0001.xml)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_face_detection_c.py -i "IMAGE_FILE"
```
Result:
```
{
"ret":0,
"msg":"ok",
"data":{
"face_list":[
{
"x1":655,
"y1":124,
"x2":783,
"y2":304
},
{
"x1":68,
"y1":149,
"x2":267,
"y2":367
} ]
},
"time":0.305
}
processing time is: 0.3104386329650879
```
`test-script/test-demo/post_local_facial_landmark_c.py` (default input file if no input parameter is given: face-detection-adas-0001.xml)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_facial_landmark_c.py -i "IMAGE_FILE"
```
Result:
```
{
"ret":0,
"msg":"ok",
"data":{
"image_width":916.000000,
"image_height":502.000000,
"face_shape_list":[
{"x":684.691284,
"y":198.765793},
{"x":664.316528,
"y":195.681824},
……
{"x":241.314194,
"y":211.847031} ]
},
"time":0.623
}
processing time is: 0.6292879581451416
```
`test-script/test-demo/post_local_ocr_c.py` (default input file if no input parameter is given: intel.jpg)
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_ocr_c.py -i "IMAGE_FILE"
```
Result:
```
{
"ret":0,
"msg":"ok",
"data":{
"item_list":[
{
"itemcoord":[
{
"x":161.903748,
"y":91.755684,
"width":141.737503,
"height":81.645004
}
],
"words":[
{
"character":i,
"confidence":0.999999
},
{
"character":n,
"confidence":0.999998
},
{
"character":t,
"confidence":0.621934
},
{
"character":e,
"confidence":0.999999
},
{
"character":l,
"confidence":0.999995
} ],
"itemstring":intel
},
{
"itemcoord":[
{
"x":205.378326,
"y":153.429291,
"width":175.314835,
"height":77.421722
}
],
"words":[
{
"character":i,
"confidence":1.000000
},
{
"character":n,
"confidence":1.000000
},
{
"character":s,
"confidence":1.000000
},
{
"character":i,
"confidence":0.776524
},
{
"character":d,
"confidence":1.000000
},
{
"character":e,
"confidence":1.000000
} ],
"itemstring":inside
} ]
},
"time":1.986
}
processing time is: 1.975726842880249
```
`test-script/test-demo/post_local_speech_c.py` (default input file if no input parameter is given: dev93_1_8.ark)
**This case is used to verify the GNA accelerator. The default setting is GNA_AUTO: if GNA HW is available, the inference will run on GNA HW; otherwise, it will run in GNA_SW mode to simulate GNA HW.**
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_speech_c.py
```
Result:
```
{
"ret":0,
"msg":"ok",
"data":{
"input information(name:dimension)":{
"Parameter":[8,440]
},
"output information(name:dimension)":{
"affinetransform14/Fused_Add_":[8,3425]
}
},
"time":0.344222
}
{
"ret":0,
"msg":"ok",
"data":{
"result":"success!"
},
"time":0.484783
}
fcgi inference time: 0.009104
processing time is: 0.0262906551361084
```
`test-script/test-demo/post_local_policy_c.py`
**The default accelerator is CPU. Once you set another accelerator with the policy setting API, that setting remains in effect until you explicitly change it.**
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-demo/post_local_policy_c.py -d CPU -l 1
```
Result:
```
successfully set the policy daemon
processing time is: 0.0035839080810546875
```
## For testing C++ implementation of related gRPC APIs
`grpc_inference_service_test.py`
Usage:
```
cd /opt/intel/service_runtime/test-script/
python3 ./test-grpc/grpc_inference_service_test.py
```
Result:
```
HealthCheck resp: 0
ASR result:
{"text":"HOW ARE YOU DOING"}
Classification result:
[{"tag_name":"sandal","tag_confidence":0.743236}]
FaceDetection result:
{"x1":611,"y1":106,"x2":827,"y2":322},{"x1":37,"y1":128,"x2":298,"y2":389}
FacialLandmark result:
{"x":684,"y":198},{"x":664,"y":195}, …
OCR result:
[{"itemcoord":{"x":162,"y":91,"width": …
```
# Health-monitor mechanism test case
## Test case
This test case will take about two minutes, please wait.
`test-script/test-health-monitor/test_health_monitor.sh`
Usage:
```
cd /opt/intel/service_runtime/test-script/
./test-health-monitor/test_health_monitor.sh
```
Result:
```
*******************************
test fcgi and grpc daemon:
[sudo] password for lisa:
fcgi can automatic restart
grpc can automatic restart
*******************************
test container:
f64fed060cf1 fe50c5747d46 "/start.sh" 2 minutes ago Up 2 minutes (unhealthy) 0.0.0.0:8080-8081->8080-8081/tcp service_runtime_container
service_runtime_container
restart container...
7f20255d36a9 fe50c5747d46 "/start.sh" About a minute ago Up About a minute (health: starting) 0.0.0.0:8080-8081->8080-8081/tcp service_runtime_container
container can automatic restart
```
## How it works (in brief)
The health monitor mechanism consists of two parts: a health-monitor daemon installed in the host system, and its agent installed inside the container.
The agent checks all background services, daemons, and API gateways at a 60-second interval (the default value; it can be customized via a parameter to the start command) and reports the health status to the host health-monitor. When a daemon or service fails to respond, depending on the specific case, the agent may try to restart the failed processes and then confirm they work normally again, or rely on the API gateways to restart the related services and then confirm they work normally. In either case, the agent reports that information to the health-monitor as a record and as the precondition for taking additional actions if needed. If the agent itself or the API gateways cannot make the processes work again, that information is also reported to the host health-monitor, which then decides how to restart the docker instance according to predefined rules.
The test case above tries to kill these services and the container instance respectively and then re-checks the status to make sure the health monitor mechanism works as expected. The output log "xxxxxx restart" means the related services/components were killed and then restarted successfully.
You can find the health monitor related logs with:
`$> sudo journalctl -f -u service-runtime-health-monitor.service`
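If you want to spot-check the mechanism by hand (a hedged sketch: fcgi_ocr is one of the monitored services from the health-monitor configuration above, and 60 seconds is the default check interval; adjust both to your setup):
```
$>docker exec service_runtime_container pkill -f fcgi_ocr
$>sleep 70
$>docker exec service_runtime_container ps -ef | grep fcgi_ocr
```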
# Simulation lib test case
1) `/opt/intel/service_runtime/simlib/ie_sample` is compiled with OpenVINO.
Usage:
```
cd /opt/intel/service_runtime/simlib/
source /opt/intel/openvino_2021/bin/setupvars.sh
./ie_sample ../models/wsj_dnn5b.xml ../test-script/input_data/dev93_1_8.ark
```
Result:
```
InferenceEngine: 0x7fc88ffbc020
Loading model to the device
numUtterances: 1
numFrames: 8
minput->byteSize(): 1760
minputHolder.as<void*>(): 0x55b0b18425b0
moutput->byteSize(): 13700
outputBuffer: 30.6944,17.725,20.8903,6.05325,9.35285,10.5059
```
2) `/opt/intel/service_runtime/simlib/simlib_sample` is compiled with the simulation lib.
Usage:
```
cd /opt/intel/service_runtime/simlib/
LD_LIBRARY_PATH=`pwd` ./simlib_sample models/wsj_dnn5b.xml \
../test-script/input_data/dev93_1_8.ark
```
Result:
```
InferenceEngine: 0x7f8363399490
Loading model to the device
input[Parameter] size: 440
output[affinetransform14/Fused_Add_] size: 3425
numUtterances: 1
numFrames: 8
minput->byteSize(): 1760
minputHolder.as<void*>(): 0x55f105138fc0
moutput->byteSize(): 13700
outputBuffer: 30.6944,17.725,20.8903,6.05325,9.35285,10.5059
```
# Deb packages for host installed application/service (if not installed yet)
**Note: If you are not testing the OTA process, please uninstall the existing packages before installing the new ones to avoid possible conflicts with the OTA logic.**
`service-runtime_1.0-210201_all.deb`
`service-runtime-simlib-test_1.0-210201_amd64.deb`
`service-runtime-test_1.0-210201_all.deb`
Installation instructions:
`dpkg -i *.deb`
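If you do need to remove previously installed packages first (a sketch of the uninstall mentioned in the note above; confirm the exact package names installed on your system before removing anything):
```
dpkg -l | grep service-runtime
sudo apt-get remove service-runtime service-runtime-test service-runtime-simlib-test
```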
# Deb packages for host installed neural network models (if not installed yet)
**Note: If you are not testing the OTA process, please uninstall the existing packages before installing the new ones to avoid possible conflicts with the OTA logic.**
`service-runtime-models-classification_1.0-210201_all.deb`
`service-runtime-models-deeplab_1.0-210201_all.deb`
`service-runtime-models-face-detection_1.0-210201_all.deb`
`service-runtime-models-facial-landmarks_1.0-210201_all.deb`
`service-runtime-models-lspeech-s5-ext_1.0-210201_all.deb`
`service-runtime-models-ocr_1.0-210201_all.deb`
`service-runtime-models-tts_1.0-210201_all.deb`
`service-runtime-models-wsj-dnn5b_1.0-210201_all.deb`
`service-runtime-models-cl-cache_1.0-210201_all.deb`
Installation instructions:
`dpkg -i *.deb`

46
en/README.md Normal file
View File

@ -0,0 +1,46 @@
---
title: README
description:
published: true
date: 2021-10-26T04:33:32.010Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:42.239Z
---
# docs
#### Description
Storage location for all documents of the ukylin documentation platform
#### Software Architecture
Software architecture description
#### Installation
1. xxxx
2. xxxx
3. xxxx
#### Instructions
1. xxxx
2. xxxx
3. xxxx
#### Contribution
1. Fork the repository
2. Create Feat_xxx branch
3. Commit your code
4. Create Pull Request
#### Gitee Feature
1. You can use Readme\_XXX.md to support different languages, such as Readme\_en.md, Readme\_zh.md
2. Gitee blog [blog.gitee.com](https://blog.gitee.com)
3. Explore open source project [https://gitee.com/explore](https://gitee.com/explore)
4. The most valuable open source project [GVP](https://gitee.com/gvp)
5. The manual of Gitee [https://gitee.com/help](https://gitee.com/help)
6. The most popular members [https://gitee.com/gitee-stars/](https://gitee.com/gitee-stars/)

13
en/home.md Normal file
View File

@ -0,0 +1,13 @@
---
title: Homepage
description:
published: true
date: 2021-11-09T01:03:00.680Z
tags:
editor: markdown
dateCreated: 2021-10-21T09:57:27.780Z
---
# Homepage
Welcome to the CCKylin documentation!

Binary file not shown.

After

Width:  |  Height:  |  Size: 62 KiB

12
home.md Normal file
View File

@ -0,0 +1,12 @@
---
title: 首页
description: 包括优麒麟的介绍、贡献攻略的入口链接等
published: true
date: 2021-11-09T06:18:31.945Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:38:19.396Z
---
# 首页
欢迎大家进入共创麒麟文档平台!

BIN
temp/image1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 104 KiB

BIN
temp/image10.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 42 KiB

BIN
temp/image11.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 7.5 KiB

BIN
temp/image12.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 25 KiB

BIN
temp/image13.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 8.3 KiB

BIN
temp/image14.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 7.0 KiB

BIN
temp/image15.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 36 KiB

BIN
temp/image16.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 26 KiB

BIN
temp/image17.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 42 KiB

BIN
temp/image18.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 24 KiB

BIN
temp/image19.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 24 KiB

BIN
temp/image2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 71 KiB

BIN
temp/image20.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 34 KiB

BIN
temp/image21.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 24 KiB

BIN
temp/image22.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 32 KiB

BIN
temp/image23.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 24 KiB

BIN
temp/image24.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 40 KiB

BIN
temp/image3.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 42 KiB

BIN
temp/image4.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 7.5 KiB

BIN
temp/image5.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 21 KiB

BIN
temp/image6.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 34 KiB

BIN
temp/image7.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 6.1 KiB

BIN
temp/image8.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 62 KiB

BIN
temp/image9.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 17 KiB

46
关于我们.md Normal file
View File

@ -0,0 +1,46 @@
---
title: 关于我们
description: 联系方式
published: true
date: 2021-11-09T06:20:18.832Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:30.480Z
---
优麒麟 是一个开放的组织,您有很多方式可以与我们联系。这个网页会告诉您一些常用方法,但绝不是全部,其他的联系方法可以参考其他网页部分。
## 一般信息
大部分 优麒麟 的信息都可以在我们的 http://ukylinos.ubuntukylin.com/index-cn.html 网站上找到,所以在与我们联系前请先浏览及搜索我们的网站。
我们的[常见问题集FAQ]()可以回答您的很多问题。
关于 优麒麟 的许多问题也可以联系我们的邮件列表:
http://ukylinos.ubuntukylin.com/sig/index-cn.html
如果您很确定我们的网站与说明文档不能解决您的问题,我们有交流社群供大家加入探讨,也许上面的用户和开发者可以很好的回答您的问题。
所有关于社区的问题都可以通过订阅我们的邮件列表:
http://ukylinos.ubuntukylin.com/sig/index-cn.html 、加入交流社群进行提交。
## 宣传
如果您想索取我们的一些文章,或您想把一些新闻放到我们的新闻网页上。请联络我们的[新闻宣传处]()。
## 活动
请将研讨会与展览会或其他任何型式的聚会活动邀请函送到:
http://ukylinos.ubuntukylin.com/sig/index-cn.html
## 协助优麒麟
若您愿意协助 优麒麟,请先参考[贡献攻略]()。
如果您愿意维护一个优麒麟镜像,请参考[优麒麟镜像]()。新的镜像在[这里]()注册。现有的镜像的问题可以报告到contact@ukylin.com
## 骚扰问题
优麒麟 是一个重视礼仪和言谈的社群。如果您是任何行为的受害者,或是感觉受到骚扰,无论是在研讨会,专案所组织的集体开发,还是在一般专案的互动中, 请通过以下电子邮件与社群团队联系contact@ukylin.com

11
历史.md Normal file
View File

@ -0,0 +1,11 @@
---
title: 历史
description: 历史版本
published: true
date: 2021-11-09T06:17:05.726Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:16:51.330Z
---
# 历史

View File

@ -0,0 +1,11 @@
---
title: PR任务合集
description:
published: true
date: 2021-11-09T06:44:29.275Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:44:27.986Z
---
# PR任务合集

View File

@ -0,0 +1,73 @@
---
title: 优麒麟社区贡献角色
description:
published: true
date: 2021-11-09T06:45:45.377Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:55.224Z
---
优麒麟社区中不同的贡献者角色具有不同的权利与责任,这些角色的大部分职责与权利仅限于各自的SIG组内。
各个社区成员都应熟悉社区内SIG的组织、角色、政策、软件、约定等以及相关的技术和/或写作能力。
## 项目贡献者Contributor
项目贡献者是社区SIG内持续活跃的参与者你们可以参与SIG组内各项活动。
### 职责
* 根据加入的不同SIG组的工作内容做出相关的贡献。
* 响应被分配的任务如PR、问题等等
* 若能力允许,可以协助其他社区成员做出多方面贡献。
* 如有需要,能够为自己的贡献成果做出长期的相应支持和维护。
### 权利
* 以超过2/3的票数参与SIG组内的重大决策。
* 对自己的工作做出技术性或非技术性的决定。
* 在SIG组内的项目核心成员选举中提名自己为候选人。
补充贡献频次较高的Contributor可以申请成为项目维护者Maintainer
## 项目维护者Maintainer
项目维护者负责审核SIG或SIG某些部分中代码的质量和正确性。
### 职责
* 评审项目贡献者提交的pr。
* 更新和维护软件包版本。
* 跟踪、发现,分发和修复软件包中的安全问题。
* 当软件包接口变更造成影响时通知所有相关的SIG项目组。
* 与上游社区进行合作包括但不限于推送变更至上游社区、跟踪上游社区的重要bug当需要寻求上游社区帮助时将错误转发至上游社区。
* 除了提供基本的测试用例用于测试回归,提交软件包至测试团队时,负责提供调试/分类软件包的信息,以供问题的分类;更新软件包时,负责提供相关的测试用例供质量检查人员使用。
### 权利
除了以上职责,你们还具有以下权利:
* 参与上游社区邮件列表、获取上游社区bug跟踪器账户。
* 投票选举项目Owner。
* 在SIG项目Owner选举中提名自己为候选人。
## 项目Owner
项目Owner是SIG组的核心成员是SIG组的组长或其他管理委员会成员负责承担SIG团队所负责的项目的技术路线和内外资源协调等工作。所有Maintainer的职责和权利Owner均具有。
### 职责
* 负责确定SIG组内包括SIG技术方向的规划与决策、架构演进等在内的项目技术路线。
* 确定SIG的关键需求和发布计划。
* 参与社区PM活动并调整你所属SIG组的SIG计划以匹配社区版本的里程碑时间表。
* 代表SIG组成员参与社区技术委员会或其他理事会组织的活动和会议等社区活动。
* 定期召开SIG组内会议并做出相关决策。
### 权利
除了以上职责,你们还具有以下权利:
* 做出任何需要紧急行动的决定。
* 与技术委员会一起任命新的委员会成员。
* 以超过核心成员人数2/3的票数选举和撤销项目核心成员身份。

View File

@ -0,0 +1,109 @@
---
title: 优麒麟贡献攻略
description:
published: true
date: 2021-11-09T06:46:07.577Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:57.167Z
---
# 一、体验优麒麟
如果您是第一次使用优麒麟,不知道怎么使用,可以看这里,对我们有一个初步的了解
1. **什么是优麒麟?**—优麒麟社区简介
2. **如何使用优麒麟?**—下载地址&使用方法
3. **Q&A汇总**
# 二、签署CLA
在参与社区贡献前您需要签署优麒麟社区贡献者许可协议CLA
根据您的参与身份选择签署个人CLA、员工CLA或企业CLA请点击下方链接签署
* 个人CLA以个人身份参与社区请签署个人CLA[点这里]() **TODO**这里需要一个链接
* 企业CLA: 以企业身份参与社区请签署企业CLA[点这里]() **TODO**这里需要一个链接
* 员工CLA: 以企业员工的身份参与社区请签署员工CLA[点这里]() **TODO**这里需要一个链接
# 三、参与优麒麟社区
## 1、加入沟通平台
参与社区第一步,先找到组织并了解社区成员的日常沟通渠道以及沟通规范,具体渠道如下,点击对应链接加入对应组织:
* 邮件列表
* 论坛
* 社群、QQ群
* 文档平台使用手册
* 邮件平台使用手册
## 2、参与社区活动
您可以选择参与感兴趣的社区活动:
* 开发者定期会议
* 发布会
* 直播
* 课程
.........
## 3、参与SIG兴趣小组
SIG即Special Interest Group的缩写为了更好的管理和改善工作流程优麒麟社区按照不同的SIG来组织的因此在进行社区贡献之前需要先找到您感兴趣的SIG。
点击查看[优麒麟 SIG列表]()**TODO**这里需要一个链接选择感兴趣的SIG加入点击这里了解[SIG的使用规范]()
如果您感兴趣的领域没有成立对应的SIG组但是您希望在社区成立一个新的相关SIG进行维护和发展您可以进行SIG组创建具体流程如下
github项目页申请 -> 技术委员会审核 -> 创建邮件列表等基础设施 -> 开始运作
## 4、开启社区贡献之旅
在完成CLA协议签署并加入到感兴趣的SIG组之后您就可以开启您的社区贡献之旅啦参与贡献的第一步就是配置开发环境点击这里查看[开发环境配置指南]()**TODO**这里需要一个链接
在配置好开发环境之后,我们就可以开始选择感兴趣的方式进行贡献啦~具体贡献途径如下:
* 测试:
测试是最简单的贡献途径,在任何一个新版本、新软件或者新功能上线都需要进行多种测试保证功能能稳定运行。如果您刚开始进行贡献,不妨从测试入手。
点击这里获取目前[待测试的产品和软件列表]()
点击这里获得[社区测试规范]()
点击这里申请加入[新版本测试群组]()
* 提交Issue/解决已有Issue
**issue提交流程**在您感兴趣的SIG组内找到issue列表—参考[issue提交指南]()按照规范提交issue
点击这里获取不同SIG组的[issue列表集合]()
每个issue下面都有参与者的讨论欢迎您发表您的看法
**issue任务处理流程**在issue列表里领用issue[领用列表]())—参考[issue解决规范]()进行issue处理并提交成果
* 软件拓展建议
如果您在使用优麒麟中途发现有软件的缺失,可以点击这里[提交软件适配需求]()我们将在3天内对需求进行审核尽量在2周内完成适配上架。
* 贡献代码/工具
如果您想为优麒麟开发了中间件或者其他工具,点击这里进行[想法提交](),我们将分配对应研发为您提供开发工具、端口并解答在开发中遇到的问题,在开发完成之后,点击这里进行[工具提交](),我们的开发人员将会在测试审核之后进行上架,点这里查看[贡献规范]()
* 参与非代码贡献
如果您想进行非代码贡献,点击这里,在[非代码贡献指南]()中找到感兴趣的工作
# 四、和社区一起成长
## 1、担任社区的对应角色
社区中不同的角色对应不同的权利与责任,您可以根据自己擅长的领域来申请担任不同的角色,点击这里查看[角色说明](),如果您找到感兴趣的角色,可以点击这里进行[申请]()。
## 2、社区治理组织介绍
为了让社区更好的运营下去,优麒麟拥有自己的治理组织,点击这里查看[治理组织架构](),如果您在社区参与中遇到任何问题,都可以找到对应的治理组织进行反馈。

View File

@ -0,0 +1,11 @@
---
title: 安装指南
description:
published: true
date: 2021-11-09T06:43:26.041Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:43:24.760Z
---
# 安装指南

View File

@ -0,0 +1,11 @@
---
title: 查看源代码
description:
published: true
date: 2021-11-09T06:44:09.898Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:44:08.610Z
---
# 查看源代码

View File

@ -0,0 +1,11 @@
---
title: 翻译任务合集
description:
published: true
date: 2021-11-09T06:45:02.494Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:45:01.108Z
---
# 翻译任务合集

View File

@ -0,0 +1,11 @@
---
title: 贡献攻略
description:
published: true
date: 2021-11-09T06:43:50.145Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:43:48.867Z
---
# 贡献攻略

View File

@ -0,0 +1,11 @@
---
title: 适配任务合集
description:
published: true
date: 2021-11-09T06:44:43.888Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:44:42.602Z
---
# 适配任务合集

View File

@ -0,0 +1,66 @@
---
title: 非代码贡献指南
description:
published: true
date: 2021-11-09T06:46:41.558Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:55:01.150Z
---
如果你感兴趣的地方不在技术,但是又想参与到优麒麟的贡献之中,那么你还可以选择成为优麒麟社区志愿者。
不论你是在校学生职场程序员还是企业高管只要你对Linux开发及优麒麟感兴趣都可以申请成为优麒麟社区志愿者与社区共同成长进步接下来为你简单介绍下优麒麟社区志愿者构成。
# 优麒麟社区志愿者团队职责与权益:
## 一、职责:
### 1、核心组织者
* 制定流程规范,参与社区决策;
* 参与组织优麒麟开发者大赛、交流分享活动;
* 挖掘优秀的优麒麟开发者、爱好者加入社区;
* 将优麒麟仓库和ISO分发到新的开源镜像站
* 参与新版本、新功能上线之前的内测;
### 2、城市站/高校站:
* 发展或成立优麒麟城市/高校站;
* 参与优麒麟开发者大赛推广、定期组织城市/高校站的交流分享活动;
* 挖掘优秀的优麒麟开发者、爱好者加入,拓展城市/高校站的规模;
* 参与优麒麟的生态拓展工作;
* 参与新版本、新功能上线之前的内测;
### 3、媒体组
* 参与优麒麟品牌建设定期录制B站、抖音等账号视频进行推广
* 定期撰写优秀的优麒麟技术博文并投稿至公众号发布;
* 拓展合作KOL或者媒体进行资源互换
### 4、设计组
* 参与社区视觉设计;
## 二、权益
**你可以获得的权益:**
* 定期和优麒麟核心运营人员和技术大佬一对一交流的机会;
* 成为优麒麟特邀讲师,提升个人曝光及影响力;
* 获得优麒麟的实践活动证书;
* 内推获得优麒麟工作机会;
* 获得志愿者专属的优麒麟定制周边礼品;
* 获得优麒麟的宣传资源;
## 三、加入流程:
**TODO** 此处需要添加链接
1. 点击链接或扫码下方二维码填写报名申请表:
**TODO** 此处需要一个加入二维码
2. 对应小组/城市/高校负责人进行报名表审核;
3. 负责人7个工作日内联系您并给与回复。

11
最新动向.md Normal file
View File

@ -0,0 +1,11 @@
---
title: 最新动向/更改
description: 谁上传了日志,谁做了什么改变,周会纪要,新版本上线等等
published: true
date: 2021-11-09T06:17:47.576Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:16:25.733Z
---
# 最新动向/更改

11
社区产品/UKUI.md Normal file
View File

@ -0,0 +1,11 @@
---
title: UKUI
description:
published: true
date: 2021-11-09T07:56:06.489Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:56:04.983Z
---
# UKUI

View File

@ -0,0 +1,11 @@
---
title: intel套件
description:
published: true
date: 2021-11-09T07:56:45.480Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:56:44.175Z
---
# intel套件

View File

@ -0,0 +1,11 @@
---
title: 优麒麟开源操作系统
description:
published: true
date: 2021-11-09T07:55:35.079Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:55:33.790Z
---
# 优麒麟开源操作系统

View File

@ -0,0 +1,11 @@
---
title: Gitee CI/CD使用指南
description:
published: true
date: 2021-11-09T06:25:46.621Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:25:45.340Z
---
# Gitee CI/CD使用指南

View File

@ -0,0 +1,11 @@
---
title: 移植windows应用到优麒麟教程
description:
published: true
date: 2021-11-09T07:53:20.716Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:35:20.183Z
---
# 移植windows应用到优麒麟教程

View File

@ -0,0 +1,11 @@
---
title: 移植优麒麟应用到其他Linux发行版教程
description:
published: true
date: 2021-11-09T07:53:48.186Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:33:11.073Z
---
# 移植优麒麟应用到其他Linux发行版教程

View File

@ -0,0 +1,11 @@
---
title: 移植移动应用到优麒麟教程
description:
published: true
date: 2021-11-09T07:54:15.759Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:36:02.405Z
---
# 移植移动应用到优麒麟教程

View File

@ -0,0 +1,11 @@
---
title: 多语言本地化指南
description:
published: true
date: 2021-11-09T06:29:32.033Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:29:30.758Z
---
# 多语言本地化指南

View File

@ -0,0 +1,11 @@
---
title: 推荐开发者工具
description:
published: true
date: 2021-11-09T06:32:11.638Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:32:10.300Z
---
# 推荐开发者工具

View File

@ -0,0 +1,11 @@
---
title: 插件编写指南
description:
published: true
date: 2021-11-09T06:31:27.587Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:31:26.280Z
---
# 插件编写指南

View File

@ -0,0 +1,11 @@
---
title: 社区项目地图
description:
published: true
date: 2021-11-09T06:31:50.338Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:31:48.994Z
---
# 社区项目地图

View File

@ -0,0 +1,11 @@
---
title: 签名认证指南
description:
published: true
date: 2021-11-09T06:31:01.509Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:31:00.189Z
---
# 签名认证指南

View File

@ -0,0 +1,11 @@
---
title: 编码风格
description:
published: true
date: 2021-11-09T06:30:41.254Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:30:39.951Z
---
# 编码风格

View File

@ -0,0 +1,11 @@
---
title: 编译与构建指南
description:
published: true
date: 2021-11-09T06:27:50.526Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:27:49.228Z
---
# 编译与构建指南

View File

@ -0,0 +1,11 @@
---
title: 调试与追踪指南
description:
published: true
date: 2021-11-09T06:28:31.427Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:28:30.133Z
---
# 调试与追踪指南

View File

@ -0,0 +1,11 @@
---
title: 软件包维护指南
description:
published: true
date: 2021-11-09T06:30:05.026Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:30:03.392Z
---
# 软件包维护指南

View File

@ -0,0 +1,11 @@
---
title: 软件协议规范
description:
published: true
date: 2021-11-09T06:29:01.576Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:29:00.271Z
---
# 软件协议规范

View File

@ -0,0 +1,11 @@
---
title: 文档平台使用指南
description:
published: true
date: 2021-11-09T06:26:19.639Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:26:18.322Z
---
# 文档平台使用指南

View File

@ -0,0 +1,11 @@
---
title: 发行说明
description: 每一个版本的用户须知+系统介绍+发行日志+升级说明+开源软件协议
published: true
date: 2021-11-09T06:38:16.134Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:38:14.844Z
---
# 发行说明

View File

@ -0,0 +1,11 @@
---
title: 系统管理
description: 高级安装教程+双系统安装教程+软件包管理+用户与用户组管理+安全策略指南
published: true
date: 2021-11-09T06:36:48.492Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:36:47.131Z
---
# 系统管理

View File

@ -0,0 +1,11 @@
---
title: 网络设置
description: 有线网络连接+无线网络连接+VPN
published: true
date: 2021-11-09T06:37:43.910Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:37:42.620Z
---
# 网络设置

View File

@ -0,0 +1,11 @@
---
title: 翻译平台使用指南
description: 包含要求+贡献入口
published: true
date: 2021-11-09T06:26:54.786Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:26:53.452Z
---
# 翻译平台使用指南

View File

@ -0,0 +1,11 @@
---
title: 邮件列表使用指南
description:
published: true
date: 2021-11-09T02:19:10.264Z
tags:
editor: markdown
dateCreated: 2021-11-09T02:19:08.941Z
---
# 邮件列表使用指南

View File

@ -0,0 +1,31 @@
---
title: 技术委员会
description:
published: true
date: 2021-10-26T04:33:33.382Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:45.518Z
---
# 技术委员会
优麒麟 社区技术委员会是 优麒麟 社区的技术决策机构,负责社区技术的决策和一些相关的技术资源协调。
## 职责与权利
技术委员会拥有以下职责:
1. 决定任何技术政策问题:
包括技术政策手册的内容、开发人员的参考资料等。
2. 决定开发人员管辖权重叠的任何技术问题:
技术委员会可决定在开发人员需要解决技术策略或立场冲突的情况下意见不合的问题。
3. 当被需要做出决定时,技术委员会可做出相应决定:
任何人或组织均可将其自己的决定委托给技术委员会做,或向其征求意见。
4. 提供建议:
技术委员会可正式宣布其对任何事项的意见。 委员个人可就他们的意见和委员会可能的意见发表非正式声明。
5. 任命委员会新成员或删除现有成员。
6. 任命技术委员会主席:
主席由委员会从其成员中选出。委员会的所有成员可以是自己主动提名,也可以由组织推荐。
## 成员列表

View File

@ -0,0 +1,11 @@
---
title: 推广委员会
description:
published: true
date: 2021-11-09T06:55:54.791Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:55:53.508Z
---
# 推广委员会

View File

@ -0,0 +1,11 @@
---
title: 理事会
description:
published: true
date: 2021-11-09T06:51:50.311Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:51:48.950Z
---
# 理事会

View File

@ -0,0 +1,24 @@
---
title: 理事会秘书处
description:
published: true
date: 2021-11-09T06:55:16.113Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:49.450Z
---
## 理事会秘书处
优麒麟 社区尚处在起步阶段,在这一阶段,社区秘书处负责 优麒麟 社区的运作和社区管理工作。
## 职责与权利
社区秘书处可以:
* 制定社区运营规划
* 执行社区初创时筹备社区的工作
* 负责管理保障社区良好运行
* 优麒麟 社区其他未明确分配到责任人的工作
## 成员

View File

@ -0,0 +1,129 @@
---
title: 社区治理组织架构
description:
published: true
date: 2021-10-26T04:33:34.847Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:47.453Z
---
## 社区理事会
### 构成
* 理事长1人任期一年。一票否决
* 副理事2人
* 理事成员8个
### 职责
* 指导社区的发展方向,制定长期发展规划和实施指导意见;
* 审视技术委员会的工作,并提出指导意见;
* 审视用户委员会、品牌推广委员会的工作,对用户委员会、品牌推广委员会的工作规划和内容进行决策;
* 组织社区开源基础设施的建设和运营工作;
* 面向全球各行业宣传和推广共创麒麟操作系统,促进共创麒麟操作系统的广泛使用和生态建设;
* 吸引更多的企业、学术机构、开发者加入到社区,发展社区生态,提升社区活力。
## 成员
[成员列表]()
## 理事会秘书处
### 构成
* 秘书长 1 名,秘书数名
### 职责
* 执行理事会的各项决议,筹备和召开理事会会议,协调社区各个机构开展工作。
* 编写社区年度的工作报告,经理事会批准后对外发布。
* 处理社区的其他事务。
### 成员
[成员列表]()
## 技术委员会
优麒麟 社区技术委员会是 优麒麟 社区的技术决策机构,负责社区技术的决策和一些相关的技术资源协调。
### 构成
* 15人
### 职责
* 负责技术SIG
* 技术委员会拥有技术决策的最终裁决权;
* 决策社区技术的发展愿景和方向;
* 决策社区 SIG 的成立,撤销、合并等事务,解决 SIG 组之间的协作冲突,辅导、审视和监督 SIG 组的日常运作;
* 落实社区日常开发工作,保证共创麒麟操作系统版本高质量发布;
* 引导社区在体系架构、内核、安全等领域技术创新,保证社区具有持续的技术竞争力。
* 引导社区建立原创性开源项目,持续构建社区技术影响力。
### 成员
[成员列表]()
## 顾问委员会
### 构成
* 院士资深专家11-20人
### 职责
* 向理事会和技术委员会提供政策方向、技术趋势、开源规则等方面的顾问建议;
* 观察和评估理事会、技术委员会等执行过程和结果,并提出改进建议;
* 促进社区团体、开源界、其他各企业参与准则。
### 成员
[成员列表]()
## 推广委员会
### 构成
* 设主席 1 名,委员数名
### 职责
* 推广共创麒麟操作系统和社区,提升共创麒麟品牌的影响力。
* 引导共创麒麟操作系统的广泛使用,构建共创麒麟全球生态。
### 成员
[成员列表]()
## SIG
优麒麟社区中所有的SIG 小组是开放的,任何人都可以参与。
在SIG团队项目的README.md文件中包含了该项目所属的SIG信息、交流方式、成员和联系方式等。我们欢迎大家通过README.md 文件中提到的联系方式包括邮件列表、公开例会等途径积极参与进SIG内的交流。
SIG 可以有自己的邮件列表、社群等,也可以有自己的贡献策略。
### SIG组的建立
一个新的SIG组的建立申请由相关提议人在技术委员会例会上进行申报并由委员会成员进行集体评议。如果申请通过提议人需要按照流程在社区提交PR建立相关的SIG页面等。PR经委员会成员审议合入。
新的SIG组运行初期可以由技术委员会指定一个委员作为该SIG组的导师为SIG组进行指导以确保该SIG组快速步入正轨。
### SIG组的撤销
技术委员会可以依据以下的原则经过讨论将SIG撤销
* SIG组的工作因为无法满足社区版本的要求而阻碍了 优麒麟 社区版本的发布。
* SIG组无法正常运转包括无固定例会无法及时响应社区issue所负责的软件没有及时更新等。
### 撤销流程
* 由技术委员会中的一个委员提出SIG组撤销申请。
* 该申请在技术委员会例会上进行讨论并投票决策。投票原则按照简单多数票原则。
当SIG组被撤销后该SIG组名下的软件包依照其合理归属划归其它SIG组。
SIG列表

View File

@ -0,0 +1,25 @@
---
title: 顾问委员会
description:
published: true
date: 2021-10-26T04:33:37.631Z
tags:
editor: markdown
dateCreated: 2021-10-21T10:54:51.353Z
---
# 顾问委员会
## 构成
* 院士资深专家11-20人
## 职责
* 向理事会和技术委员会提供政策方向、技术趋势、开源规则等方面的顾问建议;
* 观察和评估理事会、技术委员会等执行过程和结果,并提出改进建议;
* 促进社区团体、开源界、其他各企业参与准则。
## 成员
[成员列表]()

View File

@ -0,0 +1,11 @@
---
title: issue报告规范
description:
published: true
date: 2021-11-09T07:28:25.526Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:28:24.199Z
---
# issue报告规范

View File

@ -0,0 +1,11 @@
---
title: 免责声明
description:
published: true
date: 2021-11-09T07:29:46.030Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:29:44.742Z
---
# 免责声明

View File

@ -0,0 +1,11 @@
---
title: 发行说明
description:
published: true
date: 2021-11-09T07:30:05.029Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:30:03.562Z
---
# 发行说明

View File

@ -0,0 +1,11 @@
---
title: 安全策略指南
description:
published: true
date: 2021-11-09T07:29:27.387Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:29:26.088Z
---
# 安全策略指南

View File

@ -0,0 +1,11 @@
---
title: 版本规划
description:
published: true
date: 2021-11-09T07:30:20.468Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:30:19.168Z
---
# 版本规划

View File

@ -0,0 +1,11 @@
---
title: 社区成员守则
description:
published: true
date: 2021-11-09T07:28:44.029Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:28:42.743Z
---
# 社区成员守则

View File

@ -0,0 +1,11 @@
---
title: 社区管理规范
description:
published: true
date: 2021-11-09T07:29:08.389Z
tags:
editor: markdown
dateCreated: 2021-11-09T07:29:07.101Z
---
# 社区管理规范

View File

@ -0,0 +1,11 @@
---
title: 贡献者协议
description:
published: true
date: 2021-11-09T06:56:36.746Z
tags:
editor: markdown
dateCreated: 2021-11-09T06:56:35.460Z
---
# 贡献者协议

Binary file not shown.

After

Width:  |  Height:  |  Size: 137 KiB