update some documents and add kmodel files

commit f6a0c0b0b7

@@ -0,0 +1,35 @@

# Face detection demo

### A face detection demo running MobileNet-YOLO on K210-based edge devices.

---

## Training

The kmodel comes from [GitHub](https://github.com/kendryte/kendryte-standalone-demo/blob/develop/face_detect/detect.kmodel).
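
The raw download URL below is inferred from the GitHub page above (an assumption; the file can also be downloaded in the browser):

```shell
wget https://github.com/kendryte/kendryte-standalone-demo/raw/develop/face_detect/detect.kmodel
```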

## Deployment

### compile and burn

Run `scons --menuconfig` (or `menuconfig`) in the bsp folder *(Ubiquitous/RT_Thread/bsp/k210)* and enable:

- More Drivers --> ov2640 driver
- Board Drivers Config --> Enable LCD on SPI0
- Board Drivers Config --> Enable SDCARD (spi1(ss0))
- Board Drivers Config --> Enable DVP(camera)
- RT-Thread Components --> POSIX layer and C standard library --> Enable pthreads APIs
- APP_Framework --> Framework --> support knowing framework --> kpu model postprocessing --> yolov2 region layer
- APP_Framework --> Applications --> knowing app --> enable apps/face detect

Run `scons -j(n)` (with *n* build jobs) to compile, then burn the image with *kflash*.
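
A minimal build-and-flash sequence might look like this (the serial port, baud rate, and output file name are assumptions; adjust them for your setup):

```shell
cd Ubiquitous/RT_Thread/bsp/k210
scons --menuconfig      # enable the options listed above
scons -j8               # build the firmware
# flash with kflash; port, baud rate and binary name are examples only
kflash -p /dev/ttyUSB0 -b 1500000 rtthread.bin
```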

### json config and kmodel

Copy the json config for deployment to the SD card directory */kmodel*. An example config file, *detect.json*, is in this directory. Copy the final kmodel to the SD card */kmodel* directory as well.
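
As a sketch, with the SD card mounted at a hypothetical */mnt/sd*:

```shell
mkdir -p /mnt/sd/kmodel
cp detect.json /mnt/sd/kmodel/      # deployment config
cp detect.kmodel /mnt/sd/kmodel/    # the kmodel downloaded above
```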

---

## Run

In the serial terminal, run `face_detect` to start a detection thread and `face_detect_delete` to stop it. Detection results are printed to the console output.
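
For example, assuming the default RT-Thread msh shell on the serial port:

```shell
msh > face_detect           # start the detection thread
msh > face_detect_delete    # stop the detection thread
```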
Binary file not shown.

@@ -182,6 +182,8 @@ void face_detect()
         printf("open ov2640 fail !!");
         return;
     }
+    _ioctl_set_dvp_reso set_dvp_reso = {sensor_output_size[1], sensor_output_size[0]};
+    ioctl(g_fd, IOCTRL_CAMERA_SET_DVP_RESO, &set_dvp_reso);
     showbuffer = (unsigned char *)malloc(sensor_output_size[0] * sensor_output_size[1] * 2);
     if (NULL == showbuffer) {
         close(g_fd);

@@ -0,0 +1,167 @@

# Helmet detection demo

### An object detection demo for helmets and heads without helmets, running MobileNet-YOLO on K210-based edge devices.

---

## Training

### Environment preparation

The model is generated by [aXeleRate](https://forgeplus.trustie.net/projects/yangtuo250/aXeleRate) and converted to a kmodel by [nncase](https://github.com/kendryte/nncase/tree/v0.1.0-rc5).

```shell
# master branch for MobileNetv1-yolov2 and unstable branch to test MobileNetv1(v2)-yolov2(v3)
git clone https://git.trustie.net/yangtuo250/aXeleRate.git (-b unstable)
cd aXeleRate
pip install -r requirements.txt && pip install -e .
```

### training config setting

Example [config](https://forgeplus.trustie.net/projects/yangtuo250/aXeleRate/tree/master/configs/detector.json), some hyper-parameters:

- architecture: the backbone. MobileNet7_5 by default; MobileNet1_0 (α = 1.0) and above cannot run on K210 in the master branch because the feature map causes an OOM. In the unstable branch MobileNetV2_1_0 is OK.
- input_size: fixed model input size; a single integer when height equals width, otherwise a list ([height, width]).
- anchors: YOLOv2 anchors (for master) or anchors scaled to 1.0 (for unstable); they can be generated with [darknet](https://github.com/AlexeyAB/darknet), as sketched after this list.
- labels: labels of all classes.
- train(valid)_image(annot)_folder: paths of images and annotations for training and validation.
- saved_folder: path for training result storage (models, checkpoints, logs ...).
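
As a sketch, anchors can be computed with darknet's built-in k-means clustering; the data file, cluster count, and network size below are placeholder assumptions:

```shell
# cluster ground-truth boxes into 5 anchors for a 320x224 network input
./darknet detector calc_anchors cfg/obj.data -num_of_clusters 5 -width 320 -height 224
```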

My config for the unstable branch:

```json
{
    "model": {
        "type": "Detector",
        "architecture": "MobileNetV2_1_0",
        "input_size": [
            224,
            320
        ],
        "anchors": [
            [
                [
                    0.1043,
                    0.1560
                ],
                [
                    0.0839,
                    0.3036
                ],
                [
                    0.1109,
                    0.3923
                ],
                [
                    0.1378,
                    0.5244
                ],
                [
                    0.2049,
                    0.6673
                ]
            ]
        ],
        "labels": [
            "human"
        ],
        "obj_thresh": 0.5,
        "iou_thresh": 0.45,
        "coord_scale": 1.0,
        "class_scale": 0.0,
        "object_scale": 5.0,
        "no_object_scale": 3.0
    },
    "weights": {
        "full": "",
        "backend": ""
    },
    "train": {
        "actual_epoch": 2000,
        "train_image_folder": "mydata/human/Images/train",
        "train_annot_folder": "mydata/human/Annotations/train",
        "train_times": 2,
        "valid_image_folder": "mydata/human/Images/val",
        "valid_annot_folder": "mydata/human/Annotations/val",
        "valid_times": 1,
        "valid_metric": "precision",
        "batch_size": 32,
        "learning_rate": 2e-5,
        "saved_folder": "mydata/human/results",
        "first_trainable_layer": "",
        "augmentation": true,
        "is_only_detect": false,
        "validation_freq": 5,
        "quantize": false,
        "class_weights": [1.0]
    },
    "converter": {
        "type": [
            "k210"
        ]
    }
}
```

*(For more detailed config usage, please refer to the original aXeleRate repo.)*

### data preparation

Please refer to the [VOC format](https://towardsdatascience.com/coco-data-format-for-object-detection-a4c5eaf518c5); arrange the paths as in the config above.

### train it!

```shell
python -m aXeleRate.train -c PATH_TO_YOUR_CONFIG
```

### model convert

Please refer to the [nncase repo](https://github.com/kendryte/nncase/tree/v0.1.0-rc5).
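
As a rough sketch for nncase v0.1.0-rc5 (file names and the calibration image folder are placeholders; check the nncase repo for the exact options):

```shell
# convert the trained TFLite model to a K210 kmodel, using images/ for quantization calibration
ncc -i tflite -o k210model --dataset images helmet.tflite helmet.kmodel
```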

---

## Deployment

### compile and burn

Run `scons --menuconfig` (or `menuconfig`) in the bsp folder *(Ubiquitous/RT_Thread/bsp/k210)* and enable:

- More Drivers --> ov2640 driver
- Board Drivers Config --> Enable LCD on SPI0
- Board Drivers Config --> Enable SDCARD (spi1(ss0))
- Board Drivers Config --> Enable DVP(camera)
- RT-Thread Components --> POSIX layer and C standard library --> Enable pthreads APIs
- APP_Framework --> Framework --> support knowing framework --> kpu model postprocessing --> yolov2 region layer
- APP_Framework --> Applications --> knowing app --> enable apps/helmet detect

Run `scons -j(n)` (with *n* build jobs) to compile, then burn the image with *kflash*, as in the face detection demo.

### json config and kmodel

Copy the json config for deployment to the SD card directory */kmodel*. An example config file, *helmet.json*, is in this directory. Fields to modify:

- net_input_size: same as *input_size* in the training config file, but must be an array.
- net_output_shape: final feature map size, which can be found in the **nncase** output.
- sensor_output_size: image height and width from the camera.
- kmodel_size: kmodel file size as shown in the file system.
- anchors: same as *anchors* in the training config file (multi-dimensional anchors flattened to one dimension).
- labels: same as *labels* in the training config file.
- obj_thresh: array, object threshold for each label.
- nms_thresh: NMS threshold for boxes.

Copy the final kmodel to the SD card */kmodel* directory as well.
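
As a sketch, with the SD card mounted at a hypothetical */mnt/sd* (the target path matches *kmodel_path* in *helmet.json*):

```shell
cp helmet.json /mnt/sd/kmodel/
cp helmet.kmodel /mnt/sd/kmodel/
```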

---

## Run

In the serial terminal, run `helmet_detect` to start a detection thread and `helmet_detect_delete` to stop it. Detection results are printed to the console output.

---

## TODO

- [ ] Fix LCD real-time result display.
- [ ] Test more object detection backbones and algorithms (like YOLOX).

@@ -13,26 +13,26 @@
         256
     ],
     "anchors": [
-        1.0432,
-        1.0920,
-        0.8391,
-        2.1250,
-        1.1085,
-        2.7463,
-        1.3783,
-        3.6706,
-        2.0491,
-        4.6711
+        0.1384,
+        0.276,
+        0.308,
+        0.504,
+        0.5792,
+        0.8952,
+        1.072,
+        1.6184,
+        2.1128,
+        3.184
     ],
     "kmodel_path": "/kmodel/helmet.kmodel",
     "kmodel_size": 2714044,
     "obj_thresh": [
-        0.85,
-        0.6
+        0.7,
+        0.9
     ],
     "labels": [
-        "helmet",
-        "head"
+        "head",
+        "helmet"
     ],
-    "nms_thresh": 0.3
+    "nms_thresh": 0.45
 }
Binary file not shown.

@@ -182,6 +182,8 @@ void helmet_detect()
         printf("open ov2640 fail !!");
         return;
     }
+    _ioctl_set_dvp_reso set_dvp_reso = {sensor_output_size[1], sensor_output_size[0]};
+    ioctl(g_fd, IOCTRL_CAMERA_SET_DVP_RESO, &set_dvp_reso);
     showbuffer = (unsigned char *)malloc(sensor_output_size[0] * sensor_output_size[1] * 2);
     if (NULL == showbuffer) {
         close(g_fd);

@@ -0,0 +1,5 @@

# Intrusion detect demo

### A human object detection demo running MobileNet-YOLO on K210-based edge devices.

***For training, deployment and running, please see helmet_detect.***

@@ -13,24 +13,24 @@
         320
     ],
     "anchors": [
-        1.0432,
-        1.0920,
-        0.8391,
-        2.1250,
-        1.1085,
-        2.7463,
-        1.3783,
-        3.6706,
-        2.0491,
+        1.043,
+        1.092,
+        0.839,
+        2.1252,
+        1.109,
+        2.7461,
+        1.378,
+        3.6708,
+        2.049,
         4.6711
     ],
     "kmodel_path": "/kmodel/human.kmodel",
-    "kmodel_size": 1903016,
+    "kmodel_size": 2713236,
     "obj_thresh": [
-        0.35
+        0.55
     ],
     "labels": [
         "human"
     ],
-    "nms_thresh": 0.3
+    "nms_thresh": 0.35
 }
Binary file not shown.

@@ -182,6 +182,8 @@ void instrusion_detect()
         printf("open ov2640 fail !!");
         return;
     }
+    _ioctl_set_dvp_reso set_dvp_reso = {sensor_output_size[1], sensor_output_size[0]};
+    ioctl(g_fd, IOCTRL_CAMERA_SET_DVP_RESO, &set_dvp_reso);
     showbuffer = (unsigned char *)malloc(sensor_output_size[0] * sensor_output_size[1] * 2);
     if (NULL == showbuffer) {
         close(g_fd);

@@ -0,0 +1,71 @@

# Machine learning demo using iris dataset

### A classification task demo, tested on STM32F4- and K210-based edge devices. Models are trained on the iris dataset with a *Decision Tree classifier*, a *Support Vector Machine classifier* and a *Logistic Regression classifier*.

---

## Training

The models are generated by [Sklearn](https://scikit-learn.org/stable/) and converted to C language by [micromlgen](https://forgeplus.trustie.net/projects/yangtuo250/micromlgen).

### Environment preparation

```shell
pip install scikit-learn
git clone https://git.trustie.net/yangtuo250/micromlgen.git -b C
cd micromlgen && pip install -e .
```

### Train it!

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
# port and platforms are assumed to be exposed by the micromlgen fork cloned above
from micromlgen import port, platforms

# load iris dataset
X, y = load_iris(return_X_y=True)

# train SVC classifier and convert
clf = SVC(kernel='linear', gamma=0.001).fit(X, y)
print(port(clf, cplusplus=False, platform=platforms.STM32F4))

# train logistic regression classifier and convert
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(port(clf, cplusplus=False, platform=platforms.STM32F4))

# train decision tree classifier and convert
clf = DecisionTreeClassifier().fit(X, y)
print(port(clf, cplusplus=False, platform=platforms.STM32F4))
```

Copy each piece of C code printed above into its own C source file.

---

## Deployment

### compile and burn

Run `scons --menuconfig` (or `menuconfig`) in the bsp folder *(Ubiquitous/RT_Thread/bsp/k210 or stm32f407-atk-coreboard)* and enable **APP_Framework --> Applications --> knowing app --> enable apps/iris ml demo**. Then run `scons -j(n)` (with *n* build jobs) to compile, and burn the image with *st-flash* (for ARM) or *kflash* (for K210).

### testing set

Copy *iris.csv* to the SD card as */csv/iris.csv*.

---

## Run

In the serial terminal:

- `iris_SVC_predict` for SVC prediction
- `iris_DecisonTree_predict` for decision tree prediction
- `iris_LogisticRegression_predict` for logistic regression prediction

Example output:

```shell
data 1: 5.1000 3.5000 1.4000 0.2000 result: 0
data 2: 6.4000 3.2000 4.5000 1.5000 result: 1
data 3: 5.8000 2.7000 5.1000 1.9000 result: 2
data 4: 7.7000 3.8000 6.7000 2.2000 result: 2
data 5: 5.5000 2.6000 4.4000 1.2000 result: 1
data 6: 5.1000 3.8000 1.9000 0.4000 result: 0
data 7: 5.8000 2.7000 3.9000 1.2000 result: 1
```
@@ -0,0 +1,10 @@

# KPU(K210) YOLOv2 region layer

## Introduction

The KPU (K210) accelerates most CNN layers, but it does not support some operators of the YOLOv2 region layer; such layers and operators run on the MCU instead.
The YOLOv2 region layer accepts a feature map (shape w\*h\*c) and returns the final detection boxes.

## Usage

Run `scons --menuconfig` (or `menuconfig`) in the bsp folder *(Ubiquitous/RT_Thread/bsp/k210)* and enable *APP_Framework --> Framework --> support knowing framework --> kpu model postprocessing --> yolov2 region layer*.