wasi-nn: Support uint8 quantized networks (#2433)
Support uint8 quantized networks (not fully quantized: inputs and outputs are still required to be `float`). The (de)quantization is done internally by wasi-nn. An example model, generated from `quantized_model.py`, can be visualized with [netron](https://netron.app/).
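The commit message implies the runtime converts the caller's `float` tensors to the model's uint8 representation on the way in and back to `float` on the way out. Below is a minimal sketch of the standard affine scale/zero-point scheme used by TFLite-style uint8 models; the function names and parameter values are illustrative, not the actual wasi-nn internals.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* float -> uint8: q = round(x / scale) + zero_point, clamped to [0, 255] */
static uint8_t
quantize(float x, float scale, int32_t zero_point)
{
    int32_t q = (int32_t)roundf(x / scale) + zero_point;
    if (q < 0)
        q = 0;
    if (q > 255)
        q = 255;
    return (uint8_t)q;
}

/* uint8 -> float: x = (q - zero_point) * scale */
static float
dequantize(uint8_t q, float scale, int32_t zero_point)
{
    return (float)((int32_t)q - zero_point) * scale;
}

int
main(void)
{
    /* Hypothetical quantization parameters, as a model's tensor might carry. */
    float scale = 0.0039f;
    int32_t zero_point = 128;

    float x = 0.25f;
    uint8_t q = quantize(x, scale, zero_point);
    printf("%.4f -> %d -> %.4f\n", x, (int)q, dequantize(q, scale, zero_point));
    return 0;
}
```

Running this prints `0.2500 -> 192 -> 0.2496`, illustrating why the round trip through uint8 is lossy but close: precision is bounded by `scale`.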
core/iwasm/libraries/wasi-nn/.gitignore (new file, 2 additions)
@@ -0,0 +1,2 @@
+**/*.wasm
+**/*.tflite