Metadata-Version: 2.4
Name: datasketch
Version: 1.10.0
Summary: Probabilistic data structures for processing and searching very large datasets
Project-URL: Homepage, https://ekzhu.github.io/datasketch
Project-URL: Bug Tracker, https://github.com/ekzhu/datasketch/issues
Project-URL: Documentation, https://ekzhu.github.io/datasketch
Project-URL: Source, https://github.com/ekzhu/datasketch
Author-email: ekzhu <ekzhu@cs.toronto.edu>
License: MIT
License-File: LICENSE
Keywords: database,datamining
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Database
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Requires-Python: >=3.9
Requires-Dist: numpy>=1.11
Requires-Dist: scipy>=1.0.0
Provides-Extra: aio
Requires-Dist: aiounittest; extra == 'aio'
Requires-Dist: motor>3.6.0; extra == 'aio'
Provides-Extra: benchmark
Requires-Dist: fonttools>=4.60.2; extra == 'benchmark'
Requires-Dist: matplotlib>=3.1.2; extra == 'benchmark'
Requires-Dist: nltk>=3.4.5; (python_version < '3.10') and extra == 'benchmark'
Requires-Dist: nltk>=3.9.4; (python_version >= '3.10') and extra == 'benchmark'
Requires-Dist: pandas>=0.25.3; extra == 'benchmark'
Requires-Dist: pillow>=12.2.0; (python_version >= '3.10') and extra == 'benchmark'
Requires-Dist: pyfarmhash>=0.2.2; extra == 'benchmark'
Requires-Dist: pyhash>=0.9.3; extra == 'benchmark'
Requires-Dist: scikit-learn>=0.21.3; extra == 'benchmark'
Requires-Dist: scipy>=1.3.3; extra == 'benchmark'
Requires-Dist: setsimilaritysearch>=0.1.7; extra == 'benchmark'
Provides-Extra: bloom
Requires-Dist: pybloomfilter3>=0.7.2; extra == 'bloom'
Provides-Extra: cassandra
Requires-Dist: cassandra-driver>=3.20; extra == 'cassandra'
Provides-Extra: experimental-aio
Requires-Dist: aiounittest; extra == 'experimental-aio'
Requires-Dist: motor>3.6.0; extra == 'experimental-aio'
Provides-Extra: redis
Requires-Dist: redis>=2.10.0; extra == 'redis'
Provides-Extra: test
Requires-Dist: cassandra-driver>=3.20; extra == 'test'
Requires-Dist: coverage; extra == 'test'
Requires-Dist: mock>=2.0.0; extra == 'test'
Requires-Dist: mockredispy; extra == 'test'
Requires-Dist: nose-exclude>=0.5.0; extra == 'test'
Requires-Dist: nose>=1.3.7; extra == 'test'
Requires-Dist: pygments>=2.20.0; extra == 'test'
Requires-Dist: pymongo>=3.9.0; extra == 'test'
Requires-Dist: pytest-asyncio; extra == 'test'
Requires-Dist: pytest-cov; extra == 'test'
Requires-Dist: pytest-rerunfailures; extra == 'test'
Requires-Dist: pytest; (python_version < '3.10') and extra == 'test'
Requires-Dist: pytest>=9.0.3; (python_version >= '3.10') and extra == 'test'
Requires-Dist: redis>=2.10.0; extra == 'test'
Description-Content-Type: text/x-rst

datasketch: Big Data Looks Small
================================

.. image:: https://static.pepy.tech/badge/datasketch/month
    :target: https://pepy.tech/project/datasketch

.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.598238.svg
    :target: https://zenodo.org/doi/10.5281/zenodo.598238

.. image:: https://codecov.io/gh/ekzhu/datasketch/branch/master/graph/badge.svg
    :target: https://codecov.io/gh/ekzhu/datasketch

datasketch gives you probabilistic data structures that can process and
search very large amounts of data quickly, with little loss of
accuracy.

This package contains the following data sketches:

+-------------------------+-----------------------------------------------+
| Data Sketch             | Usage                                         |
+=========================+===============================================+
| `MinHash`_              | estimate Jaccard similarity and cardinality   |
+-------------------------+-----------------------------------------------+
| `Weighted MinHash`_     | estimate weighted Jaccard similarity          |
+-------------------------+-----------------------------------------------+
| `HyperLogLog`_          | estimate cardinality                          |
+-------------------------+-----------------------------------------------+
| `HyperLogLog++`_        | estimate cardinality                          |
+-------------------------+-----------------------------------------------+

The following indexes for data sketches are provided to support
sub-linear query time:

+---------------------------+-----------------------------+------------------------+
| Index                     | For Data Sketch             | Supported Query Type   |
+===========================+=============================+========================+
| `MinHash LSH`_            | MinHash, Weighted MinHash   | Jaccard Threshold      |
+---------------------------+-----------------------------+------------------------+
| `LSHBloom`_               | MinHash, Weighted MinHash   | Jaccard Threshold      |
+---------------------------+-----------------------------+------------------------+
| `MinHash LSH Forest`_     | MinHash, Weighted MinHash   | Jaccard Top-K          |
+---------------------------+-----------------------------+------------------------+
| `MinHash LSH Ensemble`_   | MinHash                     | Containment Threshold  |
+---------------------------+-----------------------------+------------------------+
| `HNSW`_                   | Any                         | Custom Metric Top-K    |
+---------------------------+-----------------------------+------------------------+

datasketch requires Python 3.9 or above, NumPy 1.11 or above, and SciPy.

Note that `MinHash LSH`_ and `MinHash LSH Ensemble`_ also support Redis and Cassandra
storage layers (see `MinHash LSH at Scale`_).

Install
-------

To install datasketch using ``pip``:

.. code-block:: bash

    pip install datasketch

This also installs NumPy and SciPy as dependencies.

To install with Redis dependency:

.. code-block:: bash

    pip install "datasketch[redis]"

To install with Cassandra dependency:

.. code-block:: bash

    pip install "datasketch[cassandra]"

To install with Bloom filter dependency:

.. code-block:: bash

    pip install "datasketch[bloom]"

.. _`MinHash`: https://ekzhu.github.io/datasketch/minhash.html
.. _`Weighted MinHash`: https://ekzhu.github.io/datasketch/weightedminhash.html
.. _`HyperLogLog`: https://ekzhu.github.io/datasketch/hyperloglog.html
.. _`HyperLogLog++`: https://ekzhu.github.io/datasketch/hyperloglog.html#hyperloglog-plusplus
.. _`MinHash LSH`: https://ekzhu.github.io/datasketch/lsh.html
.. _`MinHash LSH Forest`: https://ekzhu.github.io/datasketch/lshforest.html
.. _`MinHash LSH Ensemble`: https://ekzhu.github.io/datasketch/lshensemble.html
.. _`LSHBloom`: https://ekzhu.github.io/datasketch/lshbloom.html
.. _`MinHash LSH at Scale`: https://ekzhu.github.io/datasketch/lsh.html#minhash-lsh-at-scale
.. _`HNSW`: https://ekzhu.github.io/datasketch/documentation.html#hnsw

Contributing
------------

We welcome contributions from everyone. Whether you're fixing bugs, adding features, improving documentation, or helping with tests, your contributions are valuable.

Development Setup
^^^^^^^^^^^^^^^^^

The project uses ``uv`` for fast and reliable Python package management. Follow these steps to set up your development environment:

1. **Install uv**: Follow the official installation guide at https://docs.astral.sh/uv/getting-started/installation/

2. **Clone the repository**:

   .. code-block:: bash

       git clone https://github.com/ekzhu/datasketch.git
       cd datasketch

3. **Set up the environment**:

   .. code-block:: bash

       # Create a virtual environment
       # (Optional: specify Python version with --python 3.x)
       uv venv
       # Activate the virtual environment (optional, uv run commands work without it)
       source .venv/bin/activate

       # Install all dependencies
       uv sync

4. **Verify installation**:

   .. code-block:: bash

       # Run tests to ensure everything works
       uv run pytest

5. **Optional dependencies** (for specific development needs):

   .. code-block:: bash

       # For testing
       uv sync --extra test

       # For Cassandra support
       uv sync --extra cassandra

       # For Redis support
       uv sync --extra redis

       # For all extras
       uv sync --all-extras

Learn more about ``uv`` at https://docs.astral.sh/uv/

Development Workflow
^^^^^^^^^^^^^^^^^^^^

1. **Fork the repository** on GitHub if you haven't already.

2. **Create a feature branch** for your changes:

   .. code-block:: bash

       git checkout -b feature/your-feature-name
       # Or for bug fixes:
       git checkout -b fix/issue-description

3. **Make your changes** following the project's coding standards.

4. **Run the tests** to ensure nothing is broken:

   .. code-block:: bash

       uv run pytest

5. **Check code quality** with ruff:

   .. code-block:: bash

       # Check for issues
       uvx ruff check .

       # Auto-fix formatting issues
       uvx ruff format .

6. **Commit your changes** with a clear, descriptive commit message:

   .. code-block:: bash

       git commit -m "Add feature: brief description of what was changed"

7. **Push to your fork** and create a pull request on GitHub:

   .. code-block:: bash

       git push origin your-branch-name

8. **Respond to feedback** from maintainers and iterate on your changes.

Guidelines
^^^^^^^^^^

- Follow PEP 8 style guidelines
- Write tests for new features
- Update documentation as needed
- Keep commits focused and atomic
- Be respectful in discussions

For more information, check the `GitHub issues <https://github.com/ekzhu/datasketch/issues>`_ for current priorities or areas needing help. You can also join the discussion on `project roadmap and priorities <https://github.com/ekzhu/datasketch/discussions/252>`_.