Source: ml-dtypes
Section: python
Homepage: https://github.com/jax-ml/ml_dtypes
Priority: optional
Standards-Version: 4.7.0
Vcs-Git: https://salsa.debian.org/deeplearning-team/ml-dtypes.git
Vcs-Browser: https://salsa.debian.org/deeplearning-team/ml-dtypes
Maintainer: Debian Deep Learning Team
Uploaders: Mo Zhou
Build-Depends: debhelper-compat (= 13),
               dh-sequence-python3,
               python3-setuptools,
               python3-all,
               python3-all-dev,
               python3-numpy,
               python3-numpy-dev,
               python3-pytest,
               python3-absl,

Package: ml-dtypes-dev
Architecture: any
Depends: ${misc:Depends},
         ${python3:Depends},
         ${shlibs:Depends},
Description: Several NumPy dtype extensions used in machine learning (development files)
 ml_dtypes is a stand-alone implementation of several NumPy dtype extensions
 used in machine learning libraries, including:
 .
  * bfloat16: an alternative to the standard float16 format
  * 8-bit floating point representations, parameterized by number of
    exponent and mantissa bits, as well as the bias (if any) and
    representability of infinity, NaN, and signed zero:
      float8_e3m4
      float8_e4m3
      float8_e4m3b11fnuz
      float8_e4m3fn
      float8_e4m3fnuz
      float8_e5m2
      float8_e5m2fnuz
      float8_e8m0fnu
  * Microscaling (MX) sub-byte floating point representations:
      float4_e2m1fn
      float6_e2m3fn
      float6_e3m2fn
  * Narrow integer encodings:
      int2
      int4
      uint2
      uint4
 .
 This package contains header files and other data necessary for developing
 with ml_dtypes.

Package: python3-ml-dtypes
Architecture: any
Depends: ${misc:Depends},
         ${python3:Depends},
         ${shlibs:Depends},
Description: Several NumPy dtype extensions used in machine learning
 ml_dtypes is a stand-alone implementation of several NumPy dtype extensions
 used in machine learning libraries, including:
 .
  * bfloat16: an alternative to the standard float16 format
  * 8-bit floating point representations, parameterized by number of
    exponent and mantissa bits, as well as the bias (if any) and
    representability of infinity, NaN, and signed zero:
      float8_e3m4
      float8_e4m3
      float8_e4m3b11fnuz
      float8_e4m3fn
      float8_e4m3fnuz
      float8_e5m2
      float8_e5m2fnuz
      float8_e8m0fnu
  * Microscaling (MX) sub-byte floating point representations:
      float4_e2m1fn
      float6_e2m3fn
      float6_e3m2fn
  * Narrow integer encodings:
      int2
      int4
      uint2
      uint4
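 .
 A minimal usage sketch (assuming the upstream Python API, where importing
 ml_dtypes makes the extra dtypes available to NumPy):
 .
   import numpy as np
   import ml_dtypes
   # Create an array with one of the extension dtypes.
   x = np.array([1.5, 2.5, 3.5], dtype=ml_dtypes.bfloat16)
   print(x.dtype)                                   # bfloat16
   # ml_dtypes.finfo extends np.finfo to the custom float types.
   print(ml_dtypes.finfo(ml_dtypes.float8_e5m2).max)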