# great_expectations
**Repository Path**: jetcode/great_expectations
## Basic Information
- **Project Name**: great_expectations
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: 0.18.x
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-05-30
- **Last Updated**: 2025-05-30
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
[PyPI](https://pypi.python.org/pypi/great_expectations)
[Release history](https://pypi.org/project/great-expectations/#history)
[Download stats](https://pypistats.org/packages/great-expectations)
[Build status (Azure Pipelines)](https://dev.azure.com/great-expectations/great_expectations/_build/latest?definitionId=1&branchName=develop)
[pre-commit.ci status](https://results.pre-commit.ci/latest/github/great-expectations/great_expectations/develop)
[DOI](https://doi.org/10.5281/zenodo.5683574)
[Twitter](https://twitter.com/expectgreatdata)
[Slack](https://greatexpectations.io/slack)
[Contributors](https://github.com/great-expectations/great_expectations/graphs/contributors)
[Ruff](https://github.com/charliermarsh/ruff)
Great Expectations
================================================================================
*Always know what to expect from your data.*
What is GX?
--------------------------------------------------------------------------------
Great Expectations (GX) helps data teams build a shared understanding of their data through quality testing, documentation, and profiling.
Data practitioners know that testing and documentation are essential for managing complex data pipelines. GX makes it possible for data science and engineering teams to quickly deploy extensible, flexible data quality testing into their data stacks. Its human-readable documentation makes the results accessible to technical and nontechnical users.
See [Down with Pipeline Debt!](https://greatexpectations.io/blog/down-with-pipeline-debt-introducing-great-expectations/) for an introduction to our philosophy of pipeline data quality testing.
Key features
--------------------------------------------------
### Seamless operation
GX fits into your existing tech stack, and can integrate with your CI/CD pipelines to add data quality exactly where you need it. Connect to and validate your data wherever it already is, so you can focus on honing your Expectation Suites to perfectly meet your data quality needs.
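For example, a data quality gate in a CI/CD pipeline can be as small as running an already-configured Checkpoint and failing the job when it does not pass. The following is a minimal sketch, not part of the GX distribution: `orders_checkpoint` is a hypothetical Checkpoint name, and the context APIs shown follow the 0.18.x release line.
```python
# ci_data_quality_gate.py -- a hypothetical CI/CD step
import sys

import great_expectations as gx

context = gx.get_context()

# "orders_checkpoint" is a placeholder for a Checkpoint configured elsewhere.
result = context.get_checkpoint("orders_checkpoint").run()

# Fail this pipeline stage whenever any validation in the Checkpoint fails.
if not result.success:
    sys.exit(1)
```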
### Start fast
Get useful results quickly even for large data volumes. GX’s Data Assistants provide curated Expectations for different domains, so you can accelerate your data discovery to rapidly deploy data quality throughout your pipelines. Auto-generated Data Docs ensure your DQ documentation will always be up-to-date.

### Unified understanding
Expectations are GX’s workhorse abstraction: each Expectation declares an expected state of the data. The Expectation library provides a flexible, extensible vocabulary for data quality—one that’s human-readable, meaningful for technical and nontechnical users alike. Bundled into Expectation Suites, Expectations are the ideal tool for characterizing exactly what you expect from your data.
- `expect_column_values_to_not_be_null`
- `expect_column_values_to_match_regex`
- `expect_column_values_to_be_unique`
- `expect_column_values_to_match_strftime_format`
- `expect_table_row_count_to_be_between`
- `expect_column_median_to_be_between`
- ...and [many more](https://greatexpectations.io/expectations)
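As a rough sketch of how a few of these read in practice (assuming the 0.18.x fluent pandas datasource; the file name and column names below are placeholders):
```python
import great_expectations as gx

context = gx.get_context()

# Load a CSV through the built-in default pandas datasource
# ("orders.csv" and the column names are placeholders).
validator = context.sources.pandas_default.read_csv("orders.csv")

# Each call declares one expected property of the data.
validator.expect_column_values_to_not_be_null("order_id")
validator.expect_column_values_to_be_unique("order_id")
validator.expect_table_row_count_to_be_between(min_value=1, max_value=1_000_000)

# Persist the collected Expectations as an Expectation Suite.
validator.save_expectation_suite(discard_failed_expectations=False)
```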
### Secure and transparent
GX doesn’t ask you to exchange security for your insight. It processes your data in place, on your systems, so your security and governance procedures can maintain control at all times. And because GX’s core is and always will be open source, its complete transparency is the opposite of a black box.
### Data contracts support
Checkpoints are a transparent, central, and automatable mechanism for testing Expectations and evaluating your data quality. Every Checkpoint run produces human-readable Data Docs reporting the results. You can also configure Checkpoints to take Actions based on the results of the evaluation, like sending alerts and preventing low-quality data from moving further in your pipelines.
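A sketch of what that can look like, reusing a Validator built as in the example above; the Slack webhook is a placeholder, and the action-list format shown follows the 0.18.x configuration style, which may differ in other releases:
```python
import great_expectations as gx

context = gx.get_context()
validator = context.sources.pandas_default.read_csv("orders.csv")  # placeholder data
validator.expect_column_values_to_not_be_null("order_id")

# Bundle the Expectations into a Checkpoint and alert Slack on failures.
checkpoint = context.add_or_update_checkpoint(
    name="orders_checkpoint",
    validator=validator,
    action_list=[
        {
            "name": "send_slack_notification",
            "action": {
                "class_name": "SlackNotificationAction",
                "slack_webhook": "https://hooks.slack.com/services/PLACEHOLDER",
                "notify_on": "failure",
                "renderer": {
                    "module_name": "great_expectations.render.renderer.slack_renderer",
                    "class_name": "SlackRenderer",
                },
            },
        },
    ],
)

# Run the validations; the Action fires when any Expectation fails.
result = checkpoint.run()
```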

### Readable for collaboration
Everyone stays on the same page about your data quality with GX’s inspectable, shareable, and human-readable Data Docs. You can publish Data Docs to the locations where you need them in a variety of formats, making it easy to integrate Data Docs into your existing data catalogs, dashboards, and other reporting and data governance tools.
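For local use, the Data Docs site can be rebuilt and opened straight from the Data Context. This is a minimal sketch assuming a file-backed project; publishing to shared storage such as S3 or Azure Blob Storage is handled through the project's `data_docs_sites` configuration.
```python
import great_expectations as gx

context = gx.get_context()

# Regenerate the HTML Data Docs from stored suites and validation results,
# then open the local site in a browser.
context.build_data_docs()
context.open_data_docs()
```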

Quick start
-------------------------------------------------------------
To see Great Expectations in action on your own data, first install it with pip:
```
pip install great_expectations
```
and then run:
```python
import great_expectations as gx
context = gx.get_context()
```
(We recommend deploying within a virtual environment. If you’re not familiar with pip, virtual environments, notebooks, or git, you may want to check out the [Supporting Resources](https://docs.greatexpectations.io/docs/terms/supporting_resource/), which will teach you how to get up and running in minutes.)
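From there, a typical next step is to point the context at a file and validate it end to end. The sketch below follows the general pattern of the GX quickstart on the 0.18.x API; the file path and column name are placeholders.
```python
import great_expectations as gx

context = gx.get_context()

# Read a CSV through the default pandas datasource (placeholder path).
validator = context.sources.pandas_default.read_csv("my_data.csv")

# Declare an Expectation and save it as a suite.
validator.expect_column_values_to_not_be_null("id")
validator.save_expectation_suite(discard_failed_expectations=False)

# Validate through a Checkpoint and review the results as Data Docs.
checkpoint = context.add_or_update_checkpoint(
    name="quickstart_checkpoint", validator=validator
)
result = checkpoint.run()
context.open_data_docs()
```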
For full documentation, visit [https://docs.greatexpectations.io/](https://docs.greatexpectations.io/).
If you need help, hop into our [Slack channel](https://greatexpectations.io/slack)—there are always contributors and other users there.
Integrations
-------------------------------------------------------------------------------
Great Expectations works with the tools and systems that you're already using with your data, including:

| Integration | Notes |
|---|---|
| DataHub | Data Catalog |
| AWS Glue | Data Integration |
| Athena | Data Source |
| AWS Redshift | Data Source |
| AWS S3 | Data Source |
| BigQuery | Data Source |
| Databricks | Data Source |
| Deepnote | Collaborative data notebook |
| Google Cloud Platform (GCP) | Data Source |
| Microsoft Azure Blob Storage | Data Source |
| Microsoft SQL Server | Data Source |
| MySQL | Data Source |
| Pandas | Data Source |
| PostgreSQL | Data Source |
| Snowflake | Data Source |
| Spark | Data Source |
| SQLite | Data Source |
| Trino | Data Source |
| Apache Airflow | Orchestrator |
| Flyte | Orchestrator |
| Meltano | Orchestrator |
| Prefect | Orchestrator |
| ZenML | Orchestrator |
| Slack | Plugin |
| Jupyter Notebooks | Utility |