tonibofarull 0b0af1b3df wasi-nn: Support uint8 quantized networks (#2433) (2023-08-11 07:55:40 +08:00)
Support (non-fully) quantized uint8 networks.
Model inputs and outputs are still required to be `float`; the (de)quantization is handled internally by wasi-nn.
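For reference, uint8 quantized tensors use the standard affine scheme with a per-tensor `scale` and `zero_point`. Below is a minimal sketch of that conversion; the function names are illustrative and do not reflect the actual wasi-nn internals:

```python
import numpy as np

# Illustrative only: affine quantization used by uint8 quantized tensors.
# scale and zero_point come from the tensor's quantization parameters.

def quantize_u8(x: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    # q = round(x / scale) + zero_point, clamped to the uint8 range
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize_u8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    # x = scale * (q - zero_point)
    return scale * (q.astype(np.float32) - zero_point)
```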

Example generated from `quantized_model.py`:
![Screenshot from 2023-08-07 17-57-05](https://github.com/bytecodealliance/wasm-micro-runtime/assets/80318361/91f12ff6-870c-427a-b1dc-e307f7d1f5ee)

Visualization with [netron](https://netron.app/).
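For context, here is a hedged sketch of how such a non-fully quantized TFLite model can be produced with post-training quantization. This is an assumption about the workflow, not the actual contents of `quantized_model.py`, and depending on the TensorFlow version the internal tensors may end up int8 rather than uint8:

```python
import numpy as np
import tensorflow as tf

# Hypothetical sketch: weights and activations are quantized via
# post-training quantization, while the model's input and output tensors
# are left as float32, matching the constraint described above.

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    # Calibration data used to choose quantization parameters.
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# inference_input_type / inference_output_type are left at float32,
# so the runtime performs the (de)quantization internally.
tflite_bytes = converter.convert()

with open("quantized_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```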