Source: xnnpack
Section: math
Homepage: https://github.com/google/XNNPACK
Priority: optional
Standards-Version: 4.5.0
Vcs-Git: https://salsa.debian.org/deeplearning-team/xnnpack.git
Vcs-Browser: https://salsa.debian.org/deeplearning-team/xnnpack
Maintainer: Debian Deep Learning Team
Uploaders: Mo Zhou
Rules-Requires-Root: no
Build-Depends: cmake,
               debhelper-compat (= 13),
               googletest,
# for libclog.a and clog.h
               libcpuinfo-dev (>= 0.0~git20200422.a1e0b95-2~),
               libfp16-dev,
               libfxdiv-dev,
               libpsimd-dev,
               libpthreadpool-dev,
               ninja-build

Package: libxnnpack-dev
Architecture: any
Depends: libxnnpack0 (= ${binary:Version}),
         ${misc:Depends}
Description: High-efficiency floating-point neural network inference operators (dev)
 XNNPACK is a highly optimized library of floating-point neural network
 inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not
 intended for direct use by deep learning practitioners and researchers;
 instead it provides low-level performance primitives for accelerating
 high-level machine learning frameworks, such as TensorFlow Lite,
 TensorFlow.js, PyTorch, and MediaPipe.
 .
 This package contains the development files.

Package: libxnnpack0
Architecture: any
Depends: ${misc:Depends},
         ${shlibs:Depends}
Description: High-efficiency floating-point neural network inference operators (libs)
 XNNPACK is a highly optimized library of floating-point neural network
 inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not
 intended for direct use by deep learning practitioners and researchers;
 instead it provides low-level performance primitives for accelerating
 high-level machine learning frameworks, such as TensorFlow Lite,
 TensorFlow.js, PyTorch, and MediaPipe.
 .
 This package contains the shared object.