diff --git a/docs/best-practices/detection-and-coverage.mdx b/docs/best-practices/detection-and-coverage.mdx
index 583f427d6..de1354d9b 100644
--- a/docs/best-practices/detection-and-coverage.mdx
+++ b/docs/best-practices/detection-and-coverage.mdx
@@ -60,7 +60,7 @@ To detect issues in sources updates, you should monitor volume, freshness and sc
 - Data updates - Elementary cloud provides automated monitors for freshness and volume. **These are metadata monitors.**
 - Updates freshness vs. data freshness - The automated freshness will detect delays in **updates**. \*\*\*\*However, sometimes the update will be on time, but the data itself will be outdated.
 - Data freshness (advanced) - Sometimes a table can update on time, but the data itself will be outdated. If you want to validate the freshness of the raw data by relaying on the actual timestamp, you can use:
-  - Explicit treshold [freshness dbt tests](https://www.elementary-data.com/dbt-test-hub) such as `dbt_utils.recency` , or [dbt source freshness](https://docs.getdbt.com/docs/deploy/source-freshness).
+  - Explicit threshold [freshness dbt tests](https://www.elementary-data.com/dbt-test-hub) such as `dbt_utils.recency` , or [dbt source freshness](https://docs.getdbt.com/docs/deploy/source-freshness).
   - Elementary `event_freshness_anomalies` to detect anomalies.
 - Data volume (advanced) - Although a table can be updated as expected, the data itself might still be imbalanced in terms of volume per specific segment. There are several tests available to monitor that:
   - Explicit [volume expectations](https://www.elementary-data.com/dbt-test-hub) such as `expect_table_row_count_to_be_between`.
diff --git a/docs/best-practices/triage-and-response.mdx b/docs/best-practices/triage-and-response.mdx
index b54a6a519..73f127648 100644
--- a/docs/best-practices/triage-and-response.mdx
+++ b/docs/best-practices/triage-and-response.mdx
@@ -129,7 +129,7 @@ These are the questions that should be asked, and product tips on how to answer
 - Does the incident break the pipeline / create delay?
   - Is the failure is a model failure, or a freshness issue?
-  - Do we run `dbt build` and this failure stoped the pipeline?
+  - Do we run `dbt build` and this failure stopped the pipeline?
   - Check the **Model runs** section of the dashboard to see if there are skipped models, as failures in build cause the downstream models to be skipped.
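
The corrected line in `detection-and-coverage.mdx` points readers at explicit-threshold freshness tests and volume expectations. A minimal sketch of how such tests are typically declared in a dbt `schema.yml` is shown below; the source, model, and column names (`raw`, `orders`, `stg_orders`, `created_at`, `_loaded_at`) and the thresholds are placeholders for illustration, not values taken from the docs.

```yaml
# Hypothetical schema.yml sketch of the tests the docs refer to.
# Names and thresholds are placeholders.
version: 2

sources:
  - name: raw
    # dbt source freshness: warn/error when the source stops receiving updates
    loaded_at_field: _loaded_at
    freshness:
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: orders

models:
  - name: stg_orders
    tests:
      # Explicit-threshold data freshness: fail if no rows in the last day
      - dbt_utils.recency:
          datepart: day
          field: created_at
          interval: 1
      # Elementary anomaly detection on the event timestamp itself
      - elementary.event_freshness_anomalies:
          event_timestamp_column: created_at
      # Explicit volume expectation (dbt_expectations)
      - dbt_expectations.expect_table_row_count_to_be_between:
          min_value: 1000
```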