wasi-nn: Support uint8 quantized networks (#2433)

Add support for (non-fully-quantized) uint8 networks, i.e. models whose weights are quantized to uint8 while inputs and outputs are still required to be `float`. The (de)quantization is performed internally by wasi-nn.
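For context, uint8 quantization in TFLite-style models is an affine scale/zero-point mapping. The sketch below illustrates the kind of transform applied internally on the caller's behalf so that callers keep passing `float` tensors; the function names are illustrative, not the actual wasi-nn internals.

```c
#include <math.h>
#include <stdint.h>

/* Illustrative only: affine (scale/zero-point) quantization as used by
 * TFLite-style uint8 models. wasi-nn applies an equivalent transform
 * internally, so WASM callers keep working with float tensors. */
static uint8_t
quantize_f32(float value, float scale, int32_t zero_point)
{
    int32_t q = (int32_t)roundf(value / scale) + zero_point;
    /* Clamp to the uint8 range. */
    if (q < 0)
        q = 0;
    if (q > 255)
        q = 255;
    return (uint8_t)q;
}

static float
dequantize_u8(uint8_t value, float scale, int32_t zero_point)
{
    return ((int32_t)value - zero_point) * scale;
}
```

The `scale` and `zero_point` parameters come from the model's tensor metadata; input tensors are quantized before inference and output tensors are dequantized afterwards.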

Example generated from `quantized_model.py`:
![Screenshot from 2023-08-07 17-57-05](https://github.com/bytecodealliance/wasm-micro-runtime/assets/80318361/91f12ff6-870c-427a-b1dc-e307f7d1f5ee)

Visualization with [netron](https://netron.app/).
Commit 0b0af1b3df (parent a550f4d9f7)
Author: tonibofarull
Date: 2023-08-11 01:55:40 +02:00
7 changed files with 176 additions and 17 deletions

@@ -30,7 +30,6 @@ RUN make -j "$(grep -c ^processor /proc/cpuinfo)"
FROM ubuntu:22.04
COPY --from=base /home/wamr/product-mini/platforms/linux/build/libvmlib.so /libvmlib.so
COPY --from=base /home/wamr/product-mini/platforms/linux/build/iwasm /iwasm
ENTRYPOINT [ "/iwasm" ]