Source: python-pomegranate
Maintainer: Debian Python Team
Uploaders: Steffen Moeller,
           Michael R. Crusoe
Section: science
Priority: optional
Testsuite: autopkgtest-pkg-python
Build-Depends: debhelper-compat (= 13),
               dh-python,
               python3-all,
               python3-all-dev,
               python3-setuptools,
               cython3 (>= 0.22.1),
               python3-numpy,
               python3-scipy (>= 0.17.0),
               python3-nose,
               python3-joblib (>= 0.9.0b4),
               python3-networkx (>= 2.0),
               python3-yaml,
               python3-pandas
# for documentation - packaged raw for now
#              sphinx >= 1.6.0
#              sphinx-rtd-theme >= 0.2.0, < 0.3.0
Standards-Version: 4.5.1
Vcs-Browser: https://salsa.debian.org/python-team/packages/python-pomegranate
Vcs-Git: https://salsa.debian.org/python-team/packages/python-pomegranate.git
Homepage: https://github.com/jmschrei/pomegranate
Rules-Requires-Root: no

Package: python3-pomegranate
Architecture: any
Section: python
Depends: ${python3:Depends},
         ${shlibs:Depends},
         ${misc:Depends},
         python3-numpy,
         python3-scipy (>= 0.17.0),
         python3-joblib (>= 0.9.0b4),
         python3-networkx (>= 2.0),
         python3-yaml
Suggests: python-pomegranate-doc
Description: Fast, flexible and easy-to-use probabilistic modelling
 pomegranate is a package for probabilistic models in Python that is
 implemented in Cython for speed. Its focus is on merging the easy-to-use
 scikit-learn API with the modularity that comes with probabilistic
 modelling, allowing users to specify complicated models without having
 to worry about implementation details. The models are built from the
 ground up with big data processing in mind and so natively support
 features such as out-of-core learning and parallelism.

Package: python-pomegranate-doc
Architecture: all
Multi-Arch: foreign
Section: doc
Depends: ${sphinxdoc:Depends},
         ${misc:Depends}
Description: documentation accompanying the pomegranate probabilistic modelling library
 pomegranate is a package for probabilistic models in Python that is
 implemented in Cython for speed. Its focus is on merging the easy-to-use
 scikit-learn API with the modularity that comes with probabilistic
 modelling, allowing users to specify complicated models without having
 to worry about implementation details. The models are built from the
 ground up with big data processing in mind and so natively support
 features such as out-of-core learning and parallelism.
 .
 This is the common documentation package.