Models
The central part of an ML pipeline is the model, so the first step in creating a Rune is finding (or training) a machine learning model that matches your application. Right now we support TFLite and TF.js models (ONNX support is coming soon).
You have two options:
- Choose a pre-trained model. Several pre-trained TFLite/TF.js models are available for download from TF Hub; you can pick one and start experimenting right away.
- Train a custom model. You can also train a model yourself. Once it reaches the desired accuracy, convert the TensorFlow model into the TFLite format with the TensorFlow Lite Converter.
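As a rough sketch of the conversion step, the snippet below builds a tiny Keras model in place of your trained one and converts it with `tf.lite.TFLiteConverter` (the architecture and output file name here are illustrative, not part of Rune):

```python
import tensorflow as tf

# A tiny illustrative Keras model (stand-in for your trained model).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert the in-memory Keras model to the TFLite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the serialized model to disk so it can be bundled into a Rune.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```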
There are several techniques for optimizing a model to reduce its memory footprint without losing accuracy:
- Quantization
- Pruning
- Clustering
These techniques help you deploy simple yet powerful models on extremely low-power, low-cost microcontrollers at the network edge.
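For example, post-training dynamic-range quantization can be enabled with a single converter flag, as sketched below (the tiny model is a stand-in for your trained one; pruning and clustering are instead applied during training via the TensorFlow Model Optimization Toolkit):

```python
import tensorflow as tf

# Stand-in for a trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Enable post-training dynamic-range quantization: weights are stored
# as 8-bit integers, shrinking the weight data roughly 4x.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(quantized_model)
```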
You can find more details on converting a TensorFlow model to TFLite here.
We have created a few Colab notebooks that show how to train a model from scratch and convert it to TFLite:
- MicroSpeech: a MicroSpeech model for keyword-spotting classification on the edge.
- Mask-Detection: a model that detects whether a person is wearing a mask or not.