
Yet Another Rust Neural Network framework aka YARNN

Inspired by darknet and leaf

What it can do right now:

  • does not require std (only alloc for tensor allocations; a bump allocator is enough, so it can be compiled for an stm32f4 board)
  • available layers: Linear, ReLu, Sigmoid, Softmax (no backward yet), Conv2d, ZeroPadding2d, MaxPool2d, AvgPool2d (no backward yet), Flatten
  • available optimizers: Sgd, Adam, RMSProp
  • available losses: CrossEntropy (no forward yet), MeanSquareError
  • available backends: Native, NativeBlas (no convolution yet)
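For reference, the listed optimizers and losses compute standard quantities. A minimal sketch of an SGD step and the mean-squared-error loss in plain Rust (illustrative only; the function names and signatures here are not yarnn's actual API):

```rust
// Plain-Rust sketch of an SGD update and MSE loss.
// Illustrative only; not yarnn's actual API.

/// One SGD step: w <- w - lr * grad, applied element-wise.
fn sgd_step(weights: &mut [f32], grads: &[f32], lr: f32) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= lr * g;
    }
}

/// Mean squared error: mean((pred - target)^2).
fn mse(pred: &[f32], target: &[f32]) -> f32 {
    let n = pred.len() as f32;
    pred.iter()
        .zip(target)
        .map(|(p, t)| (p - t) * (p - t))
        .sum::<f32>()
        / n
}

fn main() {
    let mut w = vec![1.0_f32, 2.0];
    sgd_step(&mut w, &[0.5, -0.5], 0.1);
    println!("{:?}", w); // [0.95, 2.05]
    println!("{}", mse(&[1.0, 2.0], &[0.0, 0.0])); // 2.5
}
```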

What it will be able to do (I hope):

1st stage:

  • example of running yarnn in browser using WASM
  • example of running yarnn on stm32f4 board
  • finish AvgPool2d backpropagation
  • add Dropout layer
  • add BatchNorm layer
  • convolution with BLAS support
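BLAS-backed convolution is usually implemented by lowering the convolution to a matrix multiply via im2col, so a single GEMM call does the heavy lifting. A rough sketch of the im2col step (an assumption about the planned approach; yarnn's eventual implementation may differ):

```rust
// Sketch of the classic im2col lowering used to back convolution
// with a BLAS GEMM. Illustrative only, not yarnn's internals.

/// Unrolls a (c, h, w) input into a (c * k * k) x (out_h * out_w)
/// matrix so that convolution with a k x k kernel (stride 1, no
/// padding) becomes one matrix multiply against the filter matrix.
fn im2col(input: &[f32], c: usize, h: usize, w: usize, k: usize) -> (Vec<f32>, usize, usize) {
    let (out_h, out_w) = (h - k + 1, w - k + 1);
    let rows = c * k * k;
    let cols = out_h * out_w;
    let mut out = vec![0.0_f32; rows * cols];
    for ch in 0..c {
        for ky in 0..k {
            for kx in 0..k {
                let row = (ch * k + ky) * k + kx;
                for oy in 0..out_h {
                    for ox in 0..out_w {
                        let col = oy * out_w + ox;
                        out[row * cols + col] =
                            input[(ch * h + oy + ky) * w + (ox + kx)];
                    }
                }
            }
        }
    }
    (out, rows, cols)
}

fn main() {
    // 1x3x3 input with a 2x2 kernel window -> 4x4 column matrix,
    // ready to be multiplied by a (filters x 4) weight matrix.
    let input: Vec<f32> = (1..=9).map(|v| v as f32).collect();
    let (mat, rows, cols) = im2col(&input, 1, 3, 3, 2);
    println!("{}x{}: {:?}", rows, cols, mat);
}
```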

2nd stage:

  • CUDA support
  • OpenCL support

3rd stage:

  • DepthwiseConv2d layer
  • Conv3d layer
  • Deconv2d layer
  • k210 backend

Model definition example:

```rust
use yarnn::model;
use yarnn::layer::*;
use yarnn::layers::*;

model! {
    MnistConvModel (h: u32, w: u32, c: u32) {
        input_shape: (c, h, w),
        layers: {
            Conv2d<N, B, O> {
                filters: 8
            },
            ReLu<N, B>,
            MaxPool2d<N, B> {
                pool: (2, 2)
            },

            Conv2d<N, B, O> {
                filters: 8
            },
            ReLu<N, B>,
            MaxPool2d<N, B> {
                pool: (2, 2)
            },

            Flatten<N, B>,
            Linear<N, B, O> {
                units: 10
            },

            Sigmoid<N, B>
        }
    }
}
```
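The shape flow through this stack can be traced by hand. A sketch for MNIST-like 1x28x28 input, assuming same-padding convolutions so that only MaxPool2d and Flatten change the spatial dimensions (an assumption; yarnn's Conv2d defaults may differ):

```rust
// Traces tensor shapes through the MnistConvModel layer stack,
// assuming same-padding convolutions (an assumption, not confirmed
// by yarnn's docs).

fn main() {
    let (mut c, mut h, mut w) = (1_u32, 28, 28); // input_shape: (c, h, w)

    c = 8;           // Conv2d { filters: 8 } -> 8x28x28 (same padding assumed)
    h /= 2; w /= 2;  // MaxPool2d { pool: (2, 2) } -> 8x14x14

    c = 8;           // Conv2d { filters: 8 } -> 8x14x14
    h /= 2; w /= 2;  // MaxPool2d { pool: (2, 2) } -> 8x7x7

    let flat = c * h * w; // Flatten -> 392
    let units = 10;       // Linear { units: 10 } -> final Sigmoid over 10 outputs

    println!("flattened: {}, output: {}", flat, units);
}
```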

Contributors are welcome!