diff --git a/docs/mindarmour/docs/source_en/images/fuzz_architecture.png b/docs/mindarmour/docs/source_en/images/fuzz_architecture.png
new file mode 100644
index 0000000000000000000000000000000000000000..aed5732b18eef5293174cebca3dde81430ea0b15
Binary files /dev/null and b/docs/mindarmour/docs/source_en/images/fuzz_architecture.png differ
diff --git a/docs/mindarmour/docs/source_en/index.rst b/docs/mindarmour/docs/source_en/index.rst
index aea32ff30e98491b94622ff58b027002ea5d844f..d1d3bbb1a95e061d337d09250025649131a37298 100644
--- a/docs/mindarmour/docs/source_en/index.rst
+++ b/docs/mindarmour/docs/source_en/index.rst
@@ -1,5 +1,5 @@
MindArmour Documents
-=========================
+====================
AI is the catalyst for change but also faces challenges in security and privacy protection. MindArmour provides adversarial robustness, model security tests, differential privacy training, privacy risk assessment, and data drift detection.
@@ -8,7 +8,7 @@ AI is the catalyst for change but also faces challengs in security and privacy p
Typical Application Scenarios
------------------------------------------
+-----------------------------
1. `Adversarial Example `_
@@ -22,11 +22,15 @@ Typical Application Scenarios
Enhances model privacy and protects user data using differential privacy training and protection suppression mechanisms.
-4. `Fuzz `_
+4. `Reliability `_
+
+ Uses multiple data drift detection algorithms to detect data distribution changes in a timely manner and predict symptoms of model failure in advance, which is critical for adjusting the AI model in time.
+
+5. `Fuzz `_
Provides a coverage-guided fuzzing tool that features flexible, customizable test policies and metrics, and uses neuron coverage to guide input mutation so that inputs activate more neurons and distribute neuron values across a wider range. In this way, different types of model outputs and incorrect behaviors can be discovered.
-5. `Model Encryption `_
+6. `Model Encryption `_
Uses a symmetric encryption algorithm to encrypt parameter files or inference models to protect model files. The ciphertext model can be loaded directly for inference or incremental training.
@@ -84,6 +88,7 @@ Typical Application Scenarios
:maxdepth: 1
:caption: References
+ design
differential_privacy_design
fuzzer_design
security_and_privacy