Source: gloo-cuda
Section: contrib/science
Homepage: https://github.com/facebookincubator/gloo
Priority: optional
Standards-Version: 4.5.0
Vcs-Git: https://salsa.debian.org/deeplearning-team/gloo.git
Vcs-Browser: https://salsa.debian.org/deeplearning-team/gloo
Maintainer: Debian Deep Learning Team <debian-ai@lists.debian.org>
Uploaders: Mo Zhou <lumin@debian.org>
Build-Depends: cmake,
 debhelper-compat (= 13),
 libgtest-dev,
 libhiredis-dev,
 libibverbs-dev,
 mpi-default-dev,
 nvidia-cuda-toolkit-gcc,
 libnccl-dev,
Rules-Requires-Root: no

Package: libgloo-cuda-dev
Section: contrib/libdevel
Architecture: any
Depends: libgloo-cuda-0 (= ${binary:Version}),
 libhiredis-dev,
 libibverbs-dev,
 ${misc:Depends}
Conflicts: libgloo-dev
Replaces: libgloo-dev
Description: Collective communications library (development files)
 Gloo is a collective communications library. It comes with a number of
 collective algorithms useful for machine learning applications. These
 include a barrier, broadcast, and allreduce.
 .
 Transport of data between participating machines is abstracted so that
 IP can be used at all times, or InfiniBand (or RoCE) when available.
 In the latter case, if the InfiniBand transport is used, GPUDirect can
 be used to accelerate cross-machine GPU-to-GPU memory transfers.
 .
 Where applicable, algorithms have an implementation that works with
 system memory buffers, and one that works with NVIDIA GPU memory
 buffers. In the latter case, it is not necessary to copy memory between
 host and device; this is taken care of by the algorithm implementations.
 .
 This package ships the development files.

Package: libgloo-cuda-0
Architecture: any
Section: contrib/libs
Depends: ${misc:Depends}, ${shlibs:Depends}
Conflicts: libgloo0
Replaces: libgloo0
Description: Collective communications library (shared object)
 Gloo is a collective communications library. It comes with a number of
 collective algorithms useful for machine learning applications. These
 include a barrier, broadcast, and allreduce.
 .
 Transport of data between participating machines is abstracted so that
 IP can be used at all times, or InfiniBand (or RoCE) when available.
 In the latter case, if the InfiniBand transport is used, GPUDirect can
 be used to accelerate cross-machine GPU-to-GPU memory transfers.
 .
 Where applicable, algorithms have an implementation that works with
 system memory buffers, and one that works with NVIDIA GPU memory
 buffers. In the latter case, it is not necessary to copy memory between
 host and device; this is taken care of by the algorithm implementations.
 .
 This package ships the shared object for Gloo.