diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/README.md" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/README.md" index 38d1821a0ae62017802aafbb7f267dcfcc993a30..15204f1d491aa33ea6f3d43874fcd945ff3ad657 100644 --- "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/README.md" +++ "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/README.md" @@ -1,512 +1,859 @@ -# Nginx Plus集群管理 +# NGINX Plus 集群管理 -#### Lab介绍 +### Lab介绍 -本次Lab环境会使用3个Nginx Plus实例,实例部署的OS不限,可以是Docker环境,Ubuntu,Centos等,我的环境选用了Ubuntu 18.04.5,本章节将通过keepalived以及nginx-sync.sh实现Nginx Plus集群搭建、配置同步以及状态同步,本次Lab将包含: -1. 主备集群搭建 -2. SSH免登陆操作 -3. 集群配置同步 -4. Sticky learn状态同步 +本次Lab环境会使用2个nginx plus实例,实例部署的OS不限,可以是Docker环境,Ubuntu,Centos等,我的环境选用了Ubuntu 20.04,本章节将通过两种方式实现NGNIX集群: +1. 采用四七层分离架构,通过前端F5负载均衡构建NGINX软件资源集群 +2. 通过keepalived以及nginx-sync.sh实现NGINX集群搭建、配置同步以及状态同步 -#### 准备工具 -1. keepalived -```nginx -基于不同操作系统可以通过以下方式安装keepalived -yum install -y nginx-ha-keepalived -apt-get install nginx-ha-keepalived -keepalived安装完成后会增加在/etc/下的keepalived目录,keepalived守护进程将在目录下通过配置文件keepalived.conf进行Nginx Plus集群管理。 +本次Lab将包含: -我以ubuntu为例,接下来我们利用nginx-ha-setup快速生成Actvie/Passive的Nginx Plus集群。 -注:每个NginxPlus实例均要独立安装keepalived以及单独运行nginx-ha-keealived脚本 -``` -2. nginx-sync.sh -```nginx -基于不同操作系统可以通过以下方式安装nginx-sync.sh -sudo yum install nginx-sync -sudo apt-get install nginx-sync +1. 使用F5构建高性能NGINX软件资源池架构 +2. 集群内配置同步 +4. NGINX request limit 请求限速 +5. NGINX worker 共享内存实现进程间协同架构 +6. 集群内运行时状态共享机制 ( Sticky learn / requests limiting / requests limiting ) +7. 主备集群搭建 + +#### 测试拓扑 -Nginx Plus配置同步是利用nginx-sync.sh脚本实现从Primary节点同步相关nginx plus配置文件至其他节点,可根据在Master节点上的/etc/nginx-sync.conf,指定同步的内容以及同步的节点数。 +```mermaid +graph TD; +Client_10.1.1.4-->F5-VS_10.1.1.11:80; +F5-VS_10.1.1.11:80-->NGINX-PLUS-1_10.1.1.6:80; +F5-VS_10.1.1.11:80-->NGINX-PLUS-2_10.1.1.7:80; +NGINX-PLUS-1_10.1.1.6:80-->Backend_10.1.1.8:8080; +NGINX-PLUS-1_10.1.1.6:80-->Backend_10.1.1.8:8081; +NGINX-PLUS-2_10.1.1.7:80-->Backend_10.1.1.8:8080; +NGINX-PLUS-2_10.1.1.7:80-->Backend_10.1.1.8:8081; ``` -#### Chapter 1 主备集群搭建 +#### 预置环境初始化 +- [x] 用户名密码信息 + + - 四个 ubuntu 节点: + - root/default + - ubuntu/default + - F5 节点: + - 网页登录:admin/admin + - 终端登录:root/default + - 管理方式 + - 通过UDF实例 “ACCESS” 向导 webshell 登录命令行 + - TMUI登录F5管理页面 + - DASHBOARD查看nginx plus 节点的dashboard监控面板 + +- [x] 节点间免密登录,所有 ubuntu 节点间免密登录已完成 -1. 当集群中的所有实例均完成keepalived的安装后,可以通过nginx-ha-setup脚本,以向导式快速搭建HA集群 + - nginx-plus-1.nginx.local + - nginx-plus-2.nginx.local + - client.nginx.local + - backend.nginx.local -```nginx -#在节点A上执行nginx-ha-setup,将其设置成Master -root@vms31:/etc/keepalived# nginx-ha-setup -Thank you for using NGINX Plus! +- [x] 域名解析已完成 -This script is intended for use with RHEL/CentOS/SLES/Debian/Ubuntu-based systems. -It will configure highly available NGINX Plus environment in Active/Passive pair. + - 所有节点均可通过域名访问 + - 业务域名 app.nginx.local 访问 DNS 解析验证 + ```bash + root@Client:/# dig app.nginx.local | grep nginx + ; <<>> DiG 9.16.1-Ubuntu <<>> app.nginx.local + ;app.nginx.local. IN A + app.nginx.local. 6 IN A 10.1.1.11 + root@Client:/# + ``` + +- [x] F5 负载均衡配置已完成 -NOTE: you will need the following in order to continue: - - 2 running systems (nodes) with static IP addresses - - one free IP address to use as Cluster IP endpoint + - 本实验nginx初始化仅提供8080的dashboard页面,业务80端口未配置,检查 vs_nginx 的 pool pool_nginx_cluster member 状态均down -It is strongly recommended to run this script simultaneously on both nodes, -e.g. use two terminal windows and switch between them step by step. 
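+  - 也可在 F5 终端通过 tmsh 复核 member 状态(补充示意命令,假设 pool 名即上文的 pool_nginx_cluster,输出从略):
+
+  ```bash
+  tmsh show ltm pool pool_nginx_cluster members
+  ```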
+![输入图片说明](images/image-20230601185008102.png) -It is recommended to run this script under screen(1) in order to allow -installation process to continue in case of unexpected session disconnect. -Press to continue... +- [x] **NGINX集群的两种实现方式** + 1. NGINX软件资源池模式,通过前端F5实现四七层分离 -Step 1: configuring internal management IP addresses. + 1. NGINX通过keepalived 实现 Active / Standby 部署 -In order to communicate with each other, both nodes must have at least one IP address. -The guessed primary IP of this node is: 192.168.5.31/24 -#这一步填写本机IP192.168.5.31/24 -Do you want to use this address for internal cluster communication? (y/n) -IP address of this host is set to: 192.168.5.31/24 -Primary network interface: ens32 +### 使用F5构建高性能NGINX软件资源池架构 -Now please enter IP address of a second node: 192.168.5.32 -You entered: 192.168.5.32 -Is it correct? (y/n) -IP address of the second node is set to: 192.168.5.32 -#这一步填写对端IP192.168.5.32 +#### nginx-plus-1 构建基础负载均衡实例 -Press to continue... +1. 构建基础的反向代理实例,创建负载策略如下: -Step 2: creating keepalived configuration +```bash +root@nginx-plus-1:/etc/nginx/conf.d# cat default.conf +server { + listen 80 default_server; + server_name app.nginx.local; + access_log /var/log/nginx/app.nginx.local.access.log main; + error_log /var/log/nginx/app.nginx.local.error.log notice; + proxy_set_header nginx $hostname; -Now you have to choose cluster IP address. -This address will be used as en entry point to all your cluster resources. -The chosen address must not be one already associated with a physical node. + location / { + proxy_pass http://app; + } +} +upstream app { + keepalive 32; -Enter cluster IP address: 192.168.5.100 -You entered: 192.168.5.100 -Is it correct? (y/n) + server backend.nginx.local:8080; + server backend.nginx.local:8081; -#这一步填写集群IP192.168.5.100,该IP会随着Master状态的failover,一起failover。 +} +root@nginx-plus-1:/etc/nginx/conf.d# nginx -s reload +root@nginx-plus-1:/etc/nginx/conf.d# ss -ntl | grep 80 +LISTEN 0 511 0.0.0.0:80 0.0.0.0:* +LISTEN 0 511 0.0.0.0:8080 0.0.0.0:* +root@nginx-plus-1:/etc/nginx/conf.d# +``` -You must choose which node should have the MASTER role in this cluster. +检查F5 pool member状态 -Please choose what the current node role is: -1) MASTER -2) BACKUP +![输入图片说明](images/image-20230601190131972.png) -(on the second node you should choose the opposite variant) -Press 1 or 2. -This is the MASTER node. +使用客户端访问测试 -#这一步选择该节点的初始化状态,1为Master,2为Backup,这里选择1. 
+```bash +root@Client:/# curl app.nginx.local +{ + "Date": "01/Jun/2023:21:58:53 +0800", + "Client IP": "10.1.1.4", + "Server IP": "10.1.1.8:8080", + "Nginx": "nginx-plus-1.nginx.local", + "Server Name": "backend.nginx.local", + "Request ID": "51bb340b0f87e6027b083ac4cb13dac5", + "URI": "/", + "User Agent": "curl/7.68.0", + "Doc Root": "/usr/share/nginx/html" +} +root@Client:/# +root@Client:/# curl app.nginx.local +{ + "Date": "01/Jun/2023:21:58:57 +0800", + "Client IP": "10.1.1.4", + "Server IP": "10.1.1.8:8081", + "Nginx": "nginx-plus-1.nginx.local", + "Server Name": "backend.nginx.local", + "Request ID": "181b4b185db6bf966b2abb5463a2a4c6", + "URI": "/", + "User Agent": "curl/7.68.0", + "Doc Root": "/usr/share/nginx/html" +} +root@Client:/# -Step 3: starting keepalived +# 查看访问日志, 请求被负载至后端两个节点中 +root@nginx-plus-1:/# tail -n 2 /var/log/nginx/app.nginx.local.access.log +cip: 10.1.1.5, time: [01/Jun/2023:21:58:53 +0800], request: "GET / HTTP/1.1" upstream_addr: 10.1.1.8:8080, response_code: 200, referer: "-" ua: "curl/7.68.0", xff: "10.1.1.4", upstream_cookie_uid: "8080", cookie_uid: "-"limit_conn: -, limit_req: PASSED, request_time: 0.002 +cip: 10.1.1.5, time: [01/Jun/2023:21:58:57 +0800], request: "GET / HTTP/1.1" upstream_addr: 10.1.1.8:8081, response_code: 200, referer: "-" ua: "curl/7.68.0", xff: "10.1.1.4", upstream_cookie_uid: "8081", cookie_uid: "-"limit_conn: -, limit_req: PASSED, request_time: 0.002 +root@nginx-plus-1:/# +``` +使用wrk压测10s,速率是 400 request per second,发送 4002个请求,查看负载效果 -keepalived is already running. +````bash +# 清空历史日志文件 +root@nginx-plus-1:/etc/nginx/conf.d# echo > /var/log/nginx/app.nginx.local.access.log -Press to continue... +# 压测4000个请求 +root@Client:/# wrk -d10 -R400 http://app.nginx.local +Running 10s test @ http://app.nginx.local + 2 threads and 10 connections + Thread Stats Avg Stdev Max +/- Stdev + Latency 3.04ms 565.90us 10.82ms 77.10% + Req/Sec -nan -nan 0.00 0.00% + 4002 requests in 10.01s, 2.23MB read +Requests/sec: 400.00 +Transfer/sec: 228.74KB +root@Client:/# -Step 4: configuring cluster +# +root@nginx-plus-1:/# grep 8080 /var/log/nginx/app.nginx.local.access.log | wc -l +2001 +root@nginx-plus-1:/# +root@nginx-plus-1:/# grep 8081 /var/log/nginx/app.nginx.local.access.log | wc -l +2001 +root@nginx-plus-1:/# +```` -Enabling keepalived and nginx at boot time... -Initial configuration complete! -keepalived logs are written to syslog and located here: -/var/log/syslog -Further configuration may be required according to your needs -and environment. -Main configuration file for keepalived can be found at: - /etc/keepalived/keepalived.conf +#### 思考:NGINX能否自身输出统计数据以优化运维? -To control keepalived, use 'service keepalived' command: - service keepalived status +访问 nginx-plus-1 实例的dashboard标签,通过dashboard可以看到一些统计数据 -keepalived documentation can be found at: -http://www.keepalived.org/ +![输入图片说明](images/image-20230601193535370.png) -NGINX-HA-keepalived documentation can be found at: -/usr/share/doc/nginx-ha-keepalived/README +通过 zone 启用统计信息收集,在 server / location / upstream 中分别启用 status_zone / zone 模块,创建共享内存空间,记录各模块统计信息与实现NGINX多进程间共享,关于该模块的详细资料参考如下 -Thank you for using NGINX Plus! +1. status_zone : [Module ngx_http_status_module (nginx.org)](http://nginx.org/en/docs/http/ngx_http_status_module.html#status) +2. 
zone: [Module ngx_http_upstream_module (nginx.org)](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#zone) -``` +在该服务中添加 status_zone 和 zone 配置 -当完成上述向导设置后,脚本将会创建keepalived.conf在/etc/keepalived/目录下,接下来需要另外一个实例上运行nginx-ha-setup进行Backup的设置。 +```bash +root@nginx-plus-1:/etc/nginx/conf.d# cat default.conf +server { + listen 80 default_server; + server_name app.nginx.local; + status_zone server-app.nginx.local; -```nginx -#在节点B上执行nginx-ha-setup,将其设置成Backup -root@vms32:/etc/keepalived# nginx-ha-setup -Thank you for using NGINX Plus! + access_log /var/log/nginx/app.nginx.local.access.log main; + error_log /var/log/nginx/app.nginx.local.error.log notice; + proxy_set_header nginx $hostname; -This script is intended for use with RHEL/CentOS/SLES/Debian/Ubuntu-based systems. -It will configure highly available NGINX Plus environment in Active/Passive pair. + location / { + status_zone location-app.nginx.local; + proxy_pass http://app; + } +} +upstream app { + keepalive 32; + zone upstream-app 128k; -NOTE: you will need the following in order to continue: - - 2 running systems (nodes) with static IP addresses - - one free IP address to use as Cluster IP endpoint + server backend.nginx.local:8080; + server backend.nginx.local:8081; +} +root@nginx-plus-1:/etc/nginx/conf.d# +root@nginx-plus-1:/etc/nginx/conf.d# nginx -s reload +``` -It is strongly recommended to run this script simultaneously on both nodes, -e.g. use two terminal windows and switch between them step by step. +刷新页面,观察dashboard 中新增加了http zone 和 http upstream 模块 -It is recommended to run this script under screen(1) in order to allow -installation process to continue in case of unexpected session disconnect. +![输入图片说明](images/image-20230601194414105.png) -Press to continue... -Step 1: configuring internal management IP addresses. +通过 HTTP Zone 统计不同server / location 的请求响应状态 -In order to communicate with each other, both nodes must have at least one IP address. +![输入图片说明](images/image-20230601194600541.png) -The guessed primary IP of this node is: 192.168.5.32/24 -#这一步填写本机IP192.168.5.32/24 +通过 http zone 收集upstream 信息 -Do you want to use this address for internal cluster communication? (y/n) -IP address of this host is set to: 192.168.5.32/24 -Primary network interface: ens32 +![输入图片说明](images/image-20230601194613173.png) -Now please enter IP address of a second node: 192.168.5.31 -You entered: 192.168.5.31 -Is it correct? (y/n) -IP address of the second node is set to: 192.168.5.31 -#这一步填写对端IP192.168.5.31 +#### 软件资源池保持策略一致性 -Press to continue... +软件资源池的实现需保障集群内多节点间策略一致性,在k8s环境可通过configmap等资源实现,在传统环境需通过其他方式或手工配置 -Step 2: creating keepalived configuration +##### 使用nginx-sync构建策略同步集群 -Now you have to choose cluster IP address. -This address will be used as en entry point to all your cluster resources. -The chosen address must not be one already associated with a physical node. +参考文档: [Synchronizing NGINX Configuration in a Cluster | NGINX Documentation](https://docs.nginx.com/nginx/admin-guide/high-availability/configuration-sharing/) -Enter cluster IP address: 192.168.5.100 -You entered: 192.168.5.100 -Is it correct? (y/n) +集群架构如下,集群内所有节点为full mesh,nginx-sync 仅负责nginx相关的配置同步,集群内任意节点均可向其他节点进行配置同步,nginx-sync的操作与数据层转发无关 -#这一步填写集群IP192.168.5.100 +![nginx-sync.sh](https://www.nginx.com/wp-content/uploads/2020/09/nginx-plus-config-synchronization.png) -You must choose which node should have the MASTER role in this cluster. 
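+nginx-sync 依赖集群节点间的 root 免密 ssh(本实验环境已预置)。若在新环境中手工准备,可参考如下步骤(示意,与下文 nginx-sync.sh -h 输出中 Prerequisites 的要求一致;其中 ssh-copy-id 是手工追加公钥的等价做法):
+
+```bash
+# 在发起同步的节点上以 root 用户生成密钥对
+ssh-keygen -t rsa -b 2048
+# 将公钥分发至对端节点的 /root/.ssh/authorized_keys
+ssh-copy-id root@nginx-plus-2.nginx.local
+# 对端 sshd_config 中需允许密钥方式的 root 登录(PermitRootLogin without-password),修改后 reload sshd
+# 最后验证免密登录可用
+ssh root@nginx-plus-2.nginx.local echo test
+```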
+在所有nginx-plus节点中安装nginx-sync(需实现节点间免密登录) -Please choose what the current node role is: -1) MASTER -2) BACKUP +```bash +root@nginx-plus-1:/# apt update +root@nginx-plus-1:/# apt-get install nginx-sync -(on the second node you should choose the opposite variant) +# 所有节点中创建nginx-sync配置文件,将节点注册在NODES中 +root@nginx-plus-1:/# cat /etc/nginx-sync.conf +NODES="nginx-plus-1.nginx.local nginx-plus-2.nginx.local" +CONFPATHS="/etc/nginx/nginx.conf /etc/nginx/conf.d /etc/nginx/stream_conf.d" +# EXCLUDE="default.conf" +root@nginx-plus-1:/# -Press 1 or 2. -This is the BACKUP node. +# 通用参数说明 +NODES:配置同步的目标节点,使用空格或换行符分隔 +CONFPATHS:需同步的文件或者目录,使用空格或换行符分隔 +EXCLUDE:不进行同步的文件名,使用空格或换行符分隔 +``` -#这一步选择该节点的初始化状态,选择2为Backup节点。 +配置同步 + +```bash +root@nginx-plus-1:/# nginx-sync.sh -h + +nginx-sync.sh [-h | -c node_address| -C] [-r] [-d] [-u] [-l logfile] +nginx-sync.sh without arguments synchronizes configs from the master node +to the slave nodes + -c compares local configs with configs on node_address + -C compares local configs with configs of other nodes + -r enables verbose rsync output + -d enables verbose diff output + -u run without sudo if started from non root user + -l saves script output to the logfile + +Prerequisites: + * config file /etc/nginx-sync.conf + * set up ssh key authentication between nodes + +Config variables define a list of values, one per line: +NODES - list of the slave nodes +CONFPATHS - paths to directories and files to be synchronized +EXCLUDE - patterns to be excluded from synchronization +POSTSYNC - filename|sed_expression to be run after synchronization + +To setup ssh key authentication it is required: + * to generate key pair on the master node run + 'ssh-keygen -t rsa -b 2048' command from the root user + * copy public key /root/.ssh/id_rsa.pub to the slave nodes as + /root/.ssh/authorized_keys + * add directive 'PermitRootLogin without-password', that allows + only key authentication to log in with root credentials, to the + /etc/ssh/sshd_config on the slave nodes and reload sshd + * to ensure key authentication works run from the master node + ssh echo test + command, where is slave node's name or ip address + + Sample config file content: + +NODES="node2.example.com +node3.example.com" +CONFPATHS="/etc/nginx +/etc/ssl/nginx" +EXCLUDE="default.conf" -Step 3: starting keepalived +root@nginx-plus-1:/# -keepalived is already running. +# 配置同步,将本节点配置同步至集群内其他节点 +root@nginx-plus-1:/# nginx-sync.sh + * Synchronization started at Thu Jun 1 15:01:16 UTC 2023 -Press to continue... + * Checking prerequisites -Step 4: configuring cluster + * Testing local nginx configuration file -Enabling keepalived and nginx at boot time... -Initial configuration complete! +nginx: the configuration file /etc/nginx/nginx.conf syntax is ok +nginx: configuration file /etc/nginx/nginx.conf test is successful +nginx version: nginx/1.23.4 (nginx-plus-r29) + * Deleting remote backup directory -keepalived logs are written to syslog and located here: -/var/log/syslog + * Backing up configuration on nginx-plus-1.nginx.local -Further configuration may be required according to your needs -and environment. 
-Main configuration file for keepalived can be found at:
- /etc/keepalived/keepalived.conf
+ * Updating configuration on nginx-plus-1.nginx.local

-To control keepalived, use 'service keepalived' command:
- service keepalived status
+ * Testing nginx config on nginx-plus-1.nginx.local

-keepalived documentation can be found at:
-http://www.keepalived.org/
+nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
+nginx: configuration file /etc/nginx/nginx.conf test is successful
+nginx version: nginx/1.23.4 (nginx-plus-r29)
+ * Deleting remote backup directory

-NGINX-HA-keepalived documentation can be found at:
-/usr/share/doc/nginx-ha-keepalived/README
+ * Backing up configuration on nginx-plus-2.nginx.local

-Thank you for using NGINX Plus!
-```
+ * Updating configuration on nginx-plus-2.nginx.local

-当完成上述向导设置后,脚本将会创建keepalived.conf在/etc/keepalived/目录下,这时候集群主备状态设置完成。
+ * Testing nginx config on nginx-plus-2.nginx.local

-2. 通过以下多种方式查看集群状态:
+nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
+nginx: configuration file /etc/nginx/nginx.conf test is successful
+nginx is not running on nginx-plus-2.nginx.local, skipping reload
+ * Synchronization ended at Thu Jun  1 15:01:38 UTC 2023
+root@nginx-plus-1:/#
+```
-```
-Ip addr show
-cat /var/run/nginx-ha-keepalived.state
-service keepalived status
-/var/log/messages --- CentOS, RHEL, and SLES‑based
-/var/log/syslog --- Ubuntu and Debian‑based
-service keepalived dump
-```
-本次通过cat /var/run/nginx-ha-keepalived.state方式查看实例状态
-```nginx
-#主机
-root@vms31:/etc/keepalived# cat /var/run/nginx-ha-keepalived.state
-STATE=MASTER
-#备机
-root@vms32:/etc/keepalived# cat /var/run/nginx-ha-keepalived.state
-STATE=BACKUP
-```
+通过nginx-sync工具将以上步骤中的nginx配置实现集群内同步,在nginx-plus-2节点中检查结果
+
+1. NGINX状态检查,配置同步成功
+
+   ```bash
+   root@nginx-plus-2:/etc/nginx/conf.d# ll
+   -rw-r--r-- 1 root root  538 Jun  1 19:39 default.conf
+   -rw-r--r-- 1 root root  242 May 30 21:25 nginx-plus-mgmt.conf
+   root@nginx-plus-2:/etc/nginx/conf.d# ss -ntl | grep 80
+   LISTEN 0      511          0.0.0.0:80        0.0.0.0:*
+   LISTEN 0      511          0.0.0.0:8080      0.0.0.0:*
+   root@nginx-plus-2:/etc/nginx/conf.d#
+   ```
+
+2. F5 pool member 状态检查,服务启动成功,两个member均为up状态
+
+![输入图片说明](images/image-20230601200049014.png)
+
+3. NGINX软件资源池验证
+
+   清空两个NGINX实例的统计数据,nginx-plus支持api delete操作方法,实验环境也可通过重启nginx进程快速清空所有统计数据
+
+   - api操作方法
+
+     ```bash
+     参考模块说明文档:http://nginx.org/en/docs/http/ngx_http_api_module.html, api接口 /http/limit_reqs/{httpLimitReqZoneName}
+     实验演示:
+     root@nginx-plus-1:/# curl -X DELETE nginx-plus-1.nginx.local:8080/api/8/http/limit_reqs/limit_req_by_clientip
+     ```
+
+   - 重启nginx进程
+
+     ```bash
+     root@nginx-plus-1:/# systemctl restart nginx
+     root@nginx-plus-2:/# systemctl restart nginx
+     ```
+
+4. 压测查看负载均衡效果,持续10s,速率400QPS,共发送4000个请求
+
+   ```bash
+   root@Client:/# wrk -d10 -R400 http://app.nginx.local
+   Running 10s test @ http://app.nginx.local
+     2 threads and 10 connections
+     Thread Stats   Avg      Stdev     Max   +/- Stdev
+       Latency     3.17ms    1.04ms  35.14ms   96.15%
+       Req/Sec      -nan      -nan     0.00      0.00%
+     4002 requests in 10.00s, 2.23MB read
+   Requests/sec:    400.03
+   Transfer/sec:    228.69KB
+   root@Client:/#
+   ```
+
+通过dashboard确认两个nginx实例的状态,平均每个实例处理2000个请求,且均轮询地负载均衡至后端两个节点
+
+`nginx-plus-1`
+![输入图片说明](images/image-20230601200734106.png)
+`nginx-plus-2`
+
+![输入图片说明](images/image-20230601200749164.png)
+
+
+
+
+#### 思考:如果针对app.nginx.local的应用需要实施限速策略,限制每秒每客户端IP的请求数为100,策略应如何实施?
+
+1. 共两个NGINX实例,每个实例均限制每客户端IP 100 QPS,则每客户端IP的总 QPS 是 200
+
+   ```bash
+   root@nginx-plus-1:/etc/nginx/conf.d# cat default.conf
+   limit_req_zone $binary_remote_addr zone=limit_req_by_clientip:5m rate=100r/s;
+   limit_req_log_level info;
+
+   server {
+       listen 80 default_server;
+       server_name app.nginx.local;
+       status_zone server-app.nginx.local;
+
+       access_log /var/log/nginx/app.nginx.local.access.log main;
+       error_log /var/log/nginx/app.nginx.local.error.log notice;
+       proxy_set_header nginx $hostname;
+
+       limit_req zone=limit_req_by_clientip burst=100 nodelay;
+
+       location / {
+           status_zone location-app.nginx.local;
+           proxy_pass http://app;
+       }
+   }
+   upstream app {
+       keepalive 32;
+       zone upstream-app 128k;
+
+       server backend.nginx.local:8080;
+       server backend.nginx.local:8081;
+
+   }
+   root@nginx-plus-1:/etc/nginx/conf.d# nginx-sync.sh
+
+   # 清空统计数据
+   root@nginx-plus-1:/# systemctl restart nginx
+
+   # 压测,客户端发起400/s的请求压力,被F5负载分担至两台NGINX,每台请求速率为200/s,则每台每秒超过rate的数量为100;两个burst分别允许突发100个,则另有200个突发请求被放行;被 rate limit 的请求数为3800 [ (100 rate * 20s - 100 burst ) * 2 instance ]
+
+   root@Client:/# wrk -d20 -R400 http://app.nginx.local
+   Running 20s test @ http://app.nginx.local
+     2 threads and 10 connections
+     Thread calibration: mean lat.: 2.958ms, rate sampling interval: 10ms
+     Thread calibration: mean lat.: 2.855ms, rate sampling interval: 10ms
+     Thread Stats   Avg      Stdev     Max   +/- Stdev
+       Latency     2.86ms    0.88ms  10.81ms   65.48%
+       Req/Sec     210.62     55.15   333.00    72.32%
+     8001 requests in 20.00s, 3.70MB read
+     Non-2xx or 3xx responses: 3800
+   Requests/sec:    399.97
+   Transfer/sec:    189.39KB
+   root@Client:/#
+   ```
+
+   查看nginx dashboard
+
+![输入图片说明](images/image-20230601205437300.png)
+
+
+##### 构建集群内状态同步集群
+
+2. 共两个NGINX实例,每个实例均限制每客户端IP 50 QPS,则每客户端IP的总 QPS 是 100,但该场景下 NGINX 集群扩容如何保持原始策略一致性?
+
+   通过在多个NGINX实例间实现limit_req_zone的共享内存,实现集群内状态信息同步
+
+   **启用集群状态同步功能**
+
+   参考链接:[Runtime State Sharing in a Cluster | NGINX Documentation](https://docs.nginx.com/nginx/admin-guide/high-availability/zone_sync/)
+
+   通过在stream中启用zone_sync模块实现集群间信息共享,状态同步zone_sync模块适用于以下三种场景:
+
+   1. Sticky learn会话保持信息
+   2. Request limiting限速
+   3. Key-value storage
+
+
+
+   以Request limiting为例,演示状态同步过程,信息同步的过程支持采用ssl加密,详细内容请查阅文档示例
+
+   ```bash
+   # 同步集群server支持两种添加方式,分别为静态IP和DNS,示例如下,在stream_conf.d目录中创建nginx-ha.conf的配置文件,通过np-all.nginx.local实现集群内节点发现
+   root@nginx-plus-1:/etc/nginx/stream_conf.d# cat nginx-ha.conf
+   stream {
+       resolver 10.1.1.5 valid=10s ipv6=off status_zone=zone-nginx-ha;
+       resolver_timeout 2s;
+       server {
+           listen 12345;
+           zone_sync;
+           # zone_sync_server 10.1.1.6:12345;
+           # zone_sync_server 10.1.1.7:12345;
+           zone_sync_server np-all.nginx.local:12345 resolve;
+       }
+   }
+   root@nginx-plus-1:/etc/nginx/stream_conf.d#
+   root@nginx-plus-1:/etc/nginx/stream_conf.d# nginx-sync.sh
+
+   # DNS解析测试
+   root@nginx-plus-1:/# dig np-all.nginx.local | grep nginx
+   ; <<>> DiG 9.16.1-Ubuntu <<>> np-all.nginx.local
+   ;np-all.nginx.local. IN A
+   np-all.nginx.local. 6 IN A 10.1.1.7
+   np-all.nginx.local. 6 IN A 10.1.1.6
+   root@nginx-plus-1:/#
+
+
+   # NGINX集群创建完成后,需要在每节点中分别重启NGINX进程
+   root@nginx-plus-1:/etc/nginx/stream_conf.d# systemctl restart nginx
+   root@nginx-plus-2:/etc/nginx/stream_conf.d# systemctl restart nginx
+   ```
+
+   状态检查,在dashboard中新增cluster tab页,但目前没有启用zone同步,故zones为空
+
+![输入图片说明](images/image-20230601232436188.png)
+启用集群限速功能,开启 limit_req_zone的同步功能,修改default.conf,添加 sync 选项
+
+```bash
+# 修改关键配置如下
+limit_req_zone $binary_remote_addr zone=limit_req_by_clientip:5m rate=100r/s sync;
+
+# 配置重载与集群内配置同步
+root@nginx-plus-1:/etc/nginx/conf.d# nginx -s reload
+root@nginx-plus-1:/etc/nginx/conf.d# nginx-sync.sh
+```
+确认dashboard zones 中有 limit_req_by_clientip
-
-#### Chapter 2 SSH免登陆
-
-由于Nginx-sync需要通过ssh到其他节点,执行相关命令如配置验证,reload nginx等。所以在完成nginx-sync安装后,需要提前设置免密码ssh登录,使得Master无需密码登录所有peer节点。
-
-1. 第一步,需要在Master节点上生成OpenSSH的密钥对:
-
-```nginx
-root@vms31:/etc/keepalived# sudo ssh-keygen -t rsa -b 2048
-Generating public/private rsa key pair.
-Enter file in which to save the key (/root/.ssh/id_rsa):
-/root/.ssh/id_rsa already exists.
-Overwrite (y/n)? y
-Enter passphrase (empty for no passphrase):
-Enter same passphrase again:
-Your identification has been saved in /root/.ssh/id_rsa.
-Your public key has been saved in /root/.ssh/id_rsa.pub.
-The key fingerprint is:
-SHA256:1WyGXUDlm5XFC5rQEeqF5z3oGKpLMsDpW0vxFfEYu9s root@vms31.rhce.cc
-The key's randomart image is:
-+---[RSA 2048]----+
-| o .+=oo..|
-| *.o* + +|
-| + ++oO o.o|
-|. . +.+=o = |
-| + . oSo o oo |
-|. . o . + + . |
-| . = o o E . |
-| + = . |
-| . . o. |
-+----[SHA256]-----+
-
-
-root@vms31:/# cat /root/.ssh/id_rsa.pub
-ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDoSOaRM8+oCbssUzskLs02XtXw6ncpQ2hSip7Vg3Vbo0lfmk7sG3a5C9s0YXzGX7H2IUpNWrSWrKOrRva1kYt503dXJeE8sfrUKF95Ydh4a867tke1NtlumOcdtWfPQmb9im39bpR/pNteRLGlr7Izo5Cx7cy3bLvj+hheXhhD5NOib8FhiJyUmzqqx6ikOPSgxtzCcdN7eWrYpvFA2waP+1i9KYyXjl67IohqAwZ4XCX8kQ9oSnHaS1sNpEHxebehRoMeutmENCycVk8Dvqhw1HZnzo0FKNDwqWmAEkMfLxj7GBah3jmSe3rWpAeYFj6pIk+mK2rPZKz9KUati/WH root@vms31.rhce.cc
+![输入图片说明](images/image-20230601232800287.png)
+针对集群限速功能模拟客户端进行压力测试
+
+```bash
+# 清空统计数据
+root@nginx-plus-1:/# systemctl restart nginx
+root@nginx-plus-2:/# systemctl restart nginx
-
-2. 第二步,在Backup节点上创建/root/.ssh
-
-```nginx
-root@vms33:~#sudo mkdir /root/.ssh
-root@vms33:~#sudo echo 'from="192.168.5.31" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDoSOaRM8+oCbssUzskLs02XtXw6ncpQ2hSip7Vg3Vbo0lfmk7sG3a5C9s0YXzGX7H2IUpNWrSWrKOrRva1kYt503dXJeE8sfrUKF95Ydh4a867tke1NtlumOcdtWfPQmb9im39bpR/pNteRLGlr7Izo5Cx7cy3bLvj+hheXhhD5NOib8FhiJyUmzqqx6ikOPSgxtzCcdN7eWrYpvFA2waP+1i9KYyXjl67IohqAwZ4XCX8kQ9oSnHaS1sNpEHxebehRoMeutmENCycVk8Dvqhw1HZnzo0FKNDwqWmAEkMfLxj7GBah3jmSe3rWpAeYFj6pIk+mK2rPZKz9KUati/WH root@vms31.rhce.cc' >> /root/.ssh/authorized_keys
-#将在Master节点上生成的SSH公钥,通过echo方式,写入至/root/.ssh/authorized_keys,并且指定了源为192.168.5.31
-```
+
+# 模拟请求,共发起8000个请求,由F5分担至两台NGINX,每节点处理约4000个
+root@Client:/# wrk -d20 -R400 http://app.nginx.local
+Running 20s test @ http://app.nginx.local
+  2 threads and 10 connections
+  Thread calibration: mean lat.: 2.776ms, rate sampling interval: 10ms
+  Thread calibration: mean lat.: 2.768ms, rate sampling interval: 10ms
+
+  Thread Stats   Avg      Stdev     Max   +/- Stdev
+    Latency     2.80ms    0.93ms  18.27ms   76.24%
+    Req/Sec     210.96     53.28   555.00    75.47%
+  8002 requests in 20.00s, 3.28MB read
+  Non-2xx or 3xx responses: 5886
+Requests/sec:    400.01
+Transfer/sec:    167.92KB
+root@Client:/#
+```
+检查nginx-plus-1的状态,共接收4002个请求,拦截2930个,放行1072个,rate = 53 QPS
+
+![输入图片说明](images/image-20230601234223430.png)
+检查nginx-plus-2的状态,共接收4000个请求,拦截2956个,放行1044个,rate = 52 QPS
-3. 第三步,在Backup节点添加PermitRootLogin without-password至/etc/ssh/sshd_config
-
-```nginx
-root@vms33:~/.ssh# vi /etc/ssh/sshd_config
-# $OpenBSD: sshd_config,v 1.101 2017/03/14 07:19:07 djm Exp $
+![输入图片说明](images/image-20230601234237446.png)
+
+通过集群间状态同步实现所有实例共同维护一致性策略,实现集群级别限速
-
-# This is the sshd server system-wide configuration file. See
-# sshd_config(5) for more information.
-
-# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
-
-# The strategy used for options in the default sshd_config shipped with
-# OpenSSH is to specify options with their default value where
-# possible, but leave them commented. Uncommented options override the
-# default value.
+
+
+#### **key value 表内信息实时同步**
+
+```bash
+# 以下为keyval_zone模块的配置示例,通过 API 动态维护 key-value 表,表空间信息支持在集群多实例内共享,保持策略一致性与简化运维
+keyval_zone zone=api_routetable:2m state=/var/lib/nginx/state/one.keyval sync;
+keyval $arg_apiuser $apiroute zone=api_routetable;
+
+server {
+    location / {
+        # 返回按请求参数 apiuser 在表中查到的路由值
+        return 200 $apiroute;
+    }
+}
+
+# 表项可通过 API 写入(示例,key/value 为演示值):
+# curl -X POST -d '{"user1":"route-a"}' nginx-plus-1.nginx.local:8080/api/8/http/keyvals/api_routetable
+```
-#Port 22
-#AddressFamily any
-#ListenAddress 0.0.0.0
-#ListenAddress ::
+
+### 通过keepalived构建Active-Standby 高可用集群
-
-#HostKey /etc/ssh/ssh_host_rsa_key
-#HostKey /etc/ssh/ssh_host_ecdsa_key
-#HostKey /etc/ssh/ssh_host_ed25519_key
+
+#### 软件安装
-# Ciphers and keying
-#RekeyLimit default none
-# Logging
-#SyslogFacility AUTH
-#LogLevel INFO
+
+```bash
+# 基于不同操作系统可以通过以下方式安装keepalived
+root@nginx-plus-1:/# apt-get update
+root@nginx-plus-1:/# apt-get install nginx-ha-keepalived
-
-# Authentication:
-#LoginGraceTime 2m
-#PermitRootLogin prohibit-password
-PermitRootLogin yes
-PermitRootLogin without-password
-#添加PermitRootLogin without-password,以实现免登陆。
-#StrictModes yes
-#MaxAuthTries 6
-#MaxSessions 10
-#PubkeyAuthentication yes
+
+# keepalived安装完成后会增加在/etc/下的keepalived目录,keepalived守护进程将在目录下通过配置文件keepalived.conf进行Nginx Plus集群管理。
+# 以ubuntu为例,接下来我们利用nginx-ha-setup快速生成Active/Passive的Nginx Plus集群,每个NginxPlus实例均要独立安装keepalived以及单独运行nginx-ha-setup脚本
+```
-
-# Expect .ssh/authorized_keys2 to be disregarded by default in future.
-#AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
+
+**集群初始化**
-
-#AuthorizedPrincipalsFile none
+
+当集群中的所有实例均完成keepalived的安装后,在所有节点中使用 nginx-ha-setup 初始化高可用集群
-
-#AuthorizedKeysCommand none
+
+```bash
+root@nginx-plus-1:/# nginx-ha-setup
+Thank you for using NGINX Plus!
+This script is intended for use with RHEL/CentOS/SLES/Debian/Ubuntu-based systems.
+It will configure highly available NGINX Plus environment in Active/Passive pair.
-```
-
-4. 最后在backup节点上sudo service ssh reload
-
-5. 验证从Master节点是否成功免登陆ssh到Backup节点。
+
+NOTE: you will need the following in order to continue:
+ - 2 running systems (nodes) with static IP addresses
+ - one free IP address to use as Cluster IP endpoint
-
-#### Chapter 3 集群配置同步
+
+It is strongly recommended to run this script simultaneously on both nodes,
+e.g. use two terminal windows and switch between them step by step.
-
-在完成Charter 2的SSH免登陆后,还需要对同步即可在Master上创建配置同步.conf,在/etc/下创建nginx-sync.conf.
-```nginx
-root@vms31:/# vi /etc/nginx-sync.conf
-NODES="192.168.5.33"
-CONFPATHS="/etc/nginx/nginx.conf /etc/nginx/conf.d"
-EXCLUDE="default.conf"
-```
+
+It is recommended to run this script under screen(1) in order to allow
+installation process to continue in case of unexpected session disconnect.
+
+Press to continue...
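+
+# 注:以下向导将依次要求输入本机 IP(10.1.1.6)、对端节点 IP(10.1.1.7)、集群 VIP(10.1.1.10),并选择本节点角色(本节点选 1 MASTER;之后需在 nginx-plus-2 上运行同一向导并选择 2 BACKUP)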
-EXCLUDE:不进行同步的文件名,使用空格或换行符分隔。 +Step 1: configuring internal management IP addresses. -详细设置请查看https://docs.nginx.com/nginx/admin-guide/high-availability/configuration-sharing/ +In order to communicate with each other, both nodes must have at least one IP address. -最终在Master节点上执行nginx-sync.sh,查看配置是否成功同步。 +The guessed primary IP of this node is: 10.1.1.6/24 +Do you want to use this address for internal cluster communication? (y/n) +Please use 'y' or 'n' +IP address of this host is set to: 10.1.1.6/24 +Primary network interface: ens5 -#### Chapter 4 Sticky learn状态同步 +Now please enter IP address of a second node: 10.1.1.7/24 +You entered: 10.1.1.7/24 +Is it correct? (y/n) +IP address of the second node is set to: 10.1.1.7/24 -Nginx Plus实例在集群中可以共享其状态信息,具体可共享状态信息如下: +Press to continue... -Sticky learn会话保持信息 +Step 2: creating keepalived configuration -Request limiting限速 +Now you have to choose cluster IP address. +This address will be used as en entry point to all your cluster resources. +The chosen address must not be one already associated with a physical node. -Key-value storage +Enter cluster IP address: 10.1.1.10/24 +You entered: 10.1.1.10/24 +Is it correct? (y/n) -所有Nginx Plus实例可以共享状态信息至集群中的其他成员,通过共享内存中的Zone名实现共享。 +You must choose which node should have the MASTER role in this cluster. -本章节以sticky learn状态信息同步为例,演示会话保持信息同步。 +Please choose what the current node role is: +1) MASTER +2) BACKUP -1. 准备两台web服务器修改两台Nginx plus Web服务器中的配置。 -```nginx -#web1服务器:vi /etc/nginx/conf.d/web1.conf -server { - listen 80 default_server; - server_name localhost; +(on the second node you should choose the opposite variant) - #charset koi8-r; - #access_log /var/log/nginx/host.access.log main; +Press 1 or 2. +This is the MASTER node. - location / { - root /usr/share/nginx/html; - index index.html index.htm; - add_header Set-Cookie "jsessionid=1111aaaabbbb2222.a"; - } -} +Step 3: starting keepalived +keepalived is already running. +Press to continue... +Step 4: configuring cluster -#web2服务器:vi /etc/nginx/conf.d/web2.conf -server { - listen 80 default_server; - server_name localhost; +Enabling keepalived and nginx at boot time... +Initial configuration complete! - #charset koi8-r; - #access_log /var/log/nginx/host.access.log main; +keepalived logs are written to syslog and located here: +/var/log/syslog - location / { - root /usr/share/nginx/html; - index index.html index.htm; - add_header Set-Cookie "jsessionid=5555xxxxyyyy6666.b"; - } -} -``` +Further configuration may be required according to your needs +and environment. +Main configuration file for keepalived can be found at: + /etc/keepalived/keepalived.conf -2. 修改需要进行状态同步的两个Nginx Plus实例/etc/nginx/nginx.conf配置文件 -```nginx -First -#Master节点监听9000端口,用作接收同步信息,同时,本实例的同步对象为192.168.5.33,在http block前加入以下stream block: -stream { - server { - listen 9000; - zone_sync; - zone_sync_server 192.168.5.33:9000; - } -} +To control keepalived, use 'service keepalived' command: + service keepalived status -#在http block中使用以下upstream配置,可以看出配置类似sticky learn,唯一区别是在最后加入sync。 - -upstream backend { - server 192.168.5.32; - server 192.168.5.33; - sticky learn - create=$upstream_cookie_jsessionid - lookup=$cookie_jsessionid - zone=client_session:1m - timeout=1h - sync; - } +keepalived documentation can be found at: +http://www.keepalived.org/ +NGINX-HA-keepalived documentation can be found at: +/usr/share/doc/nginx-ha-keepalived/README +Thank you for using NGINX Plus! 
+root@nginx-plus-1:/# +``` + +**配置检查** + +当完成上述向导设置后,脚本将会创建keepalived.conf在/etc/keepalived/目录下,生成配置如下: -Second -#然后在另外一个同样配置监听9000端口,用作接收同步信息,同时,本实例的同步对象为192.168.5.31,在http block前加入以下stream block: -stream { - server { - listen 9000; - zone_sync; - zone_sync_server 192.168.5.31:9000; - } +```bash +root@nginx-plus-1:/# cat /etc/keepalived/keepalived.conf +global_defs { + vrrp_version 3 } -#同样在http block中使用以下upstream配置,可以看出配置类似sticky learn,唯一区别是在最后加入sync。 - -upstream backend { - server 192.168.5.32; - server 192.168.5.33; - sticky learn - create=$upstream_cookie_jsessionid - lookup=$cookie_jsessionid - zone=client_session:1m - timeout=1h - sync; - } +vrrp_script chk_manual_failover { + script "/usr/lib/keepalived/nginx-ha-manual-failover" + interval 10 + weight 50 +} + +vrrp_script chk_nginx_service { + script "/usr/lib/keepalived/nginx-ha-check" + interval 3 + weight 50 +} + +vrrp_instance VI_1 { + interface ens5 + priority 101 + virtual_router_id 51 + advert_int 1 + accept + garp_master_refresh 5 + garp_master_refresh_repeat 1 + unicast_src_ip 10.1.1.6/24 + unicast_peer { + 10.1.1.7/24 + } + virtual_ipaddress { + 10.1.1.10/24 + } + track_script { + chk_nginx_service + chk_manual_failover + } + notify "/usr/lib/keepalived/nginx-ha-notify" +} +root@nginx-plus-1:/# ``` -3. 为了进一步验证同步结果,需要打开API以及dashboard,故通过修改/etc/nginx/conf.d/share.conf -```nginx -server { - listen 80 default_server; - server_name localhost; +当完成上述向导设置后,集群构建完成 + +#### 状态检查 + +通过以下多种方式查看集群状态: + +```bash +# 虚IP检查 +root@nginx-plus-1:/# ip a +1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever +2: ens5: mtu 1500 qdisc fq_codel state UP group default qlen 1000 + link/ether 52:54:00:46:14:45 brd ff:ff:ff:ff:ff:ff + inet 10.1.1.6/24 brd 10.1.1.255 scope global ens5 + valid_lft forever preferred_lft forever + inet 10.1.1.10/24 scope global secondary ens5 + valid_lft forever preferred_lft forever + inet6 fe80::5054:ff:fe46:1445/64 scope link + valid_lft forever preferred_lft forever +root@nginx-plus-1:/# + +# 运行状态检查 +root@nginx-plus-1:/# cat /var/run/nginx-ha-keepalived.state +STATE=MASTER - location / { - proxy_pass http://backend; - } - location = /dashboard.html { - root /usr/share/nginx/html; - } +# keepalived 状态检查 +root@nginx-plus-1:/# systemctl status keepalived +● keepalived.service - LVS and VRRP High Availability Monitor + Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled) + Active: active (running) since Thu 2023-06-01 01:46:13 CST; 22h ago + Main PID: 616 (keepalived) + Tasks: 2 (limit: 2344) + Memory: 4.6M + CGroup: /system.slice/keepalived.service + ├─616 /usr/sbin/keepalived + └─617 /usr/sbin/keepalived + +Jun 01 23:40:59 nginx-plus-1.nginx.local Keepalived_vrrp[617]: (VI_1) received lower priority (150) advert from 10.1.1.7 - discarding +Jun 01 23:40:59 nginx-plus-1.nginx.local Keepalived_vrrp[617]: (VI_1) Entering MASTER STATE +Jun 01 23:40:59 nginx-plus-1.nginx.local nginx-ha-keepalived[172544]: Transition to state 'MASTER' on VRRP instance 'VI_1'. 
+Jun 01 23:41:03 nginx-plus-1.nginx.local Keepalived_vrrp[617]: Script `chk_nginx_service` now returning 0 +Jun 01 23:41:03 nginx-plus-1.nginx.local Keepalived_vrrp[617]: VRRP_Script(chk_nginx_service) succeeded +Jun 01 23:41:03 nginx-plus-1.nginx.local Keepalived_vrrp[617]: (VI_1) Changing effective priority from 151 to 201 +root@nginx-plus-1:/# + +# keepalived 进程信息 dump +root@nginx-plus-1:/# service keepalived dump +Dumping VRRP stats (/tmp/keepalived.stats) and data (/tmp/keepalived.data) +root@nginx-plus-1:/# cat /tmp/keepalived.data | grep State + State = MASTER + State = idle + State = idle + State = UP, RUNNING, no broadcast, loopback, no multicast + State = UP, RUNNING +root@nginx-plus-1:/# +``` - location /api { - api write=on; - } +#### 状态切换 + +```bash +# 在客户端中发起测试,解析ha.nginx.local至vip 10.1.1.10 中,测试访问由nginx-plus-1节点处理 +root@Client:/# dig ha.nginx.local | grep nginx +; <<>> DiG 9.16.1-Ubuntu <<>> ha.nginx.local +;ha.nginx.local. IN A +ha.nginx.local. 10 IN A 10.1.1.10 +root@Client:/# curl ha.nginx.local +{ + "Date": "02/Jun/2023:00:05:09 +0800", + "Client IP": "", + "Server IP": "10.1.1.8:8081", + "Nginx": "nginx-plus-1.nginx.local", + "Server Name": "backend.nginx.local", + "Request ID": "1a71757daf2145fdd85b3c18547d722e", + "URI": "/", + "User Agent": "curl/7.68.0", + "Doc Root": "/usr/share/nginx/html" +} +root@Client:/# + +# 停止服务 +root@nginx-plus-1:/# systemctl stop nginx + +# 重新发起测试,服务自动切换至nginx-plus-2中 +root@Client:/# curl ha.nginx.local +{ + "Date": "02/Jun/2023:00:05:33 +0800", + "Client IP": "", + "Server IP": "10.1.1.8:8080", + "Nginx": "nginx-plus-2.nginx.local", + "Server Name": "backend.nginx.local", + "Request ID": "672063afc9581b990ac0bd645f51a88b", + "URI": "/", + "User Agent": "curl/7.68.0", + "Doc Root": "/usr/share/nginx/html" } +root@Client:/# ``` -4. 
通过浏览器访问http://实例ip,生成1条sticky learn记录,并通过curl命令查看两个实例是否都有对应记录。 -```nginx -curl -s '127.0.0.1/api/6/stream/zone_sync' | jq -``` -![输入图片说明](https://images.gitee.com/uploads/images/2021/1005/183324_955dae7e_9793073.png "屏幕截图.png") diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web1.conf" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web1.conf" deleted file mode 100644 index 64e8c7d20d6a95b3716032a65d3a3caef4581f76..0000000000000000000000000000000000000000 --- "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web1.conf" +++ /dev/null @@ -1,14 +0,0 @@ -server { - listen 80 default_server; - server_name localhost; - - #charset koi8-r; - #access_log /var/log/nginx/host.access.log main; - - location / { - root /usr/share/nginx/html; - index index.html index.htm; - add_header Set-Cookie "jsessionid=1111aaaabbbb2222.a"; - } -} - diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web2.conf" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web2.conf" deleted file mode 100644 index ac5fa92c5275edb3ac6afe2a5eedb3cf33491201..0000000000000000000000000000000000000000 --- "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/web2.conf" +++ /dev/null @@ -1,14 +0,0 @@ -server { - listen 80 default_server; - server_name localhost; - - #charset koi8-r; - #access_log /var/log/nginx/host.access.log main; - - location / { - root /usr/share/nginx/html; - index index.html index.htm; - add_header Set-Cookie "jsessionid=5555xxxxyyyy6666.b"; - } -} - diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/.keep" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/.keep" similarity index 100% rename from "4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/conf.d/.keep" rename to "4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/.keep" diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601185008102.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601185008102.png" new file mode 100644 index 0000000000000000000000000000000000000000..5f9658fba77492ec56d7363e7c17377057653be3 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601185008102.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601190131972.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601190131972.png" new file mode 100644 index 0000000000000000000000000000000000000000..3464dce8221e6121979310f52fd5fa6859e07003 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601190131972.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601193535370.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601193535370.png" new file mode 100644 index 0000000000000000000000000000000000000000..5b06c1e4a30d60cfc0f8e50bbfa0acfde6744cd9 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601193535370.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194414105.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194414105.png" new file mode 100644 index 0000000000000000000000000000000000000000..0b24d3cb7f7a7df1de1d3c70de5639599a1bcdab Binary files 
/dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194414105.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194600541.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194600541.png" new file mode 100644 index 0000000000000000000000000000000000000000..a3a8f18326e0fc051477377c071d366a7cc996c5 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194600541.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194613173.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194613173.png" new file mode 100644 index 0000000000000000000000000000000000000000..a0fe7ae1cc839ca743a6fdc11928c24bb195efe0 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601194613173.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200049014.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200049014.png" new file mode 100644 index 0000000000000000000000000000000000000000..a0cafb88e01a7cdf81c4d178c8137c53a1b11efc Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200049014.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200734106.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200734106.png" new file mode 100644 index 0000000000000000000000000000000000000000..7e1801110ae538e83f3099e5a70dce01fa571662 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200734106.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200749164.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200749164.png" new file mode 100644 index 0000000000000000000000000000000000000000..b1079226a31e45141d1d85a57a5e30ed7013cb7a Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601200749164.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601205437300.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601205437300.png" new file mode 100644 index 0000000000000000000000000000000000000000..a74c858a1b17f7f9fe0662e4187b421d9132a348 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601205437300.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601232436188.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601232436188.png" new file mode 100644 index 0000000000000000000000000000000000000000..f58834fae7a8adbbc5514faca2d740cbf25436e7 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601232436188.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601232800287.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601232800287.png" new file mode 100644 index 0000000000000000000000000000000000000000..8dfb3f77f3a60164c666dd98c268521a404848c5 Binary files /dev/null and "b/4 
NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601232800287.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601234223430.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601234223430.png" new file mode 100644 index 0000000000000000000000000000000000000000..6607371e96ceb99d2951be57cb49adc550b6d0bf Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601234223430.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601234237446.png" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601234237446.png" new file mode 100644 index 0000000000000000000000000000000000000000..387c33410c91a5b8b68fa5e3a16e7d53c18002d6 Binary files /dev/null and "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/images/image-20230601234237446.png" differ diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx-plus-all-conf" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx-plus-all-conf" new file mode 100644 index 0000000000000000000000000000000000000000..e48b2b945709915809b5a2ef8963dd4531a2814d --- /dev/null +++ "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx-plus-all-conf" @@ -0,0 +1,220 @@ +root@nginx-plus-1:/etc/nginx/conf.d# nginx -T +nginx: the configuration file /etc/nginx/nginx.conf syntax is ok +nginx: configuration file /etc/nginx/nginx.conf test is successful +# configuration file /etc/nginx/nginx.conf: + +user nginx; +worker_processes auto; + +error_log /var/log/nginx/error.log notice; +pid /var/run/nginx.pid; +worker_rlimit_nofile 8192; + +events { + worker_connections 4096; +} + + +http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + + log_format main 'cip: $remote_addr, time: [$time_local], request: "$request" ' + 'upstream_addr: $upstream_addr, response_code: $status, referer: "$http_referer" ' + 'ua: "$http_user_agent", xff: "$http_x_forwarded_for", upstream_cookie_uid: "$upstream_cookie_uidsession", cookie_uid: "$cookie_uidsession"' + 'limit_conn: $limit_conn_status, limit_req: $limit_req_status, request_time: $request_time'; + + access_log /var/log/nginx/access.log main buffer=16k; + + sendfile on; + tcp_nopush on; + + keepalive_timeout 128; + resolver 10.1.1.5 valid=10s ipv6=off status_zone=np-resolver; + resolver_timeout 2s; + + gzip on; + + + proxy_buffering on; + proxy_redirect off; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection ""; + proxy_set_header Accept-Encoding ""; + proxy_set_header Host "$host"; + proxy_http_version 1.1; + + proxy_connect_timeout 1; + proxy_read_timeout 3; + proxy_send_timeout 3; + + proxy_ignore_client_abort on; + + + include /etc/nginx/conf.d/*.conf; +} + +include /etc/nginx/stream_conf.d/*.conf; + +# configuration file /etc/nginx/mime.types: + +types { + text/html html htm shtml; + text/css css; + text/xml xml; + image/gif gif; + image/jpeg jpeg jpg; + application/javascript js; + application/atom+xml atom; + application/rss+xml rss; + + text/mathml mml; + text/plain txt; + text/vnd.sun.j2me.app-descriptor jad; + text/vnd.wap.wml wml; + text/x-component htc; + + image/avif avif; + image/png png; + image/svg+xml svg svgz; + image/tiff tif tiff; + image/vnd.wap.wbmp wbmp; + image/webp webp; + image/x-icon 
ico; + image/x-jng jng; + image/x-ms-bmp bmp; + + font/woff woff; + font/woff2 woff2; + + application/java-archive jar war ear; + application/json json; + application/mac-binhex40 hqx; + application/msword doc; + application/pdf pdf; + application/postscript ps eps ai; + application/rtf rtf; + application/vnd.apple.mpegurl m3u8; + application/vnd.google-earth.kml+xml kml; + application/vnd.google-earth.kmz kmz; + application/vnd.ms-excel xls; + application/vnd.ms-fontobject eot; + application/vnd.ms-powerpoint ppt; + application/vnd.oasis.opendocument.graphics odg; + application/vnd.oasis.opendocument.presentation odp; + application/vnd.oasis.opendocument.spreadsheet ods; + application/vnd.oasis.opendocument.text odt; + application/vnd.openxmlformats-officedocument.presentationml.presentation + pptx; + application/vnd.openxmlformats-officedocument.spreadsheetml.sheet + xlsx; + application/vnd.openxmlformats-officedocument.wordprocessingml.document + docx; + application/vnd.wap.wmlc wmlc; + application/wasm wasm; + application/x-7z-compressed 7z; + application/x-cocoa cco; + application/x-java-archive-diff jardiff; + application/x-java-jnlp-file jnlp; + application/x-makeself run; + application/x-perl pl pm; + application/x-pilot prc pdb; + application/x-rar-compressed rar; + application/x-redhat-package-manager rpm; + application/x-sea sea; + application/x-shockwave-flash swf; + application/x-stuffit sit; + application/x-tcl tcl tk; + application/x-x509-ca-cert der pem crt; + application/x-xpinstall xpi; + application/xhtml+xml xhtml; + application/xspf+xml xspf; + application/zip zip; + + application/octet-stream bin exe dll; + application/octet-stream deb; + application/octet-stream dmg; + application/octet-stream iso img; + application/octet-stream msi msp msm; + + audio/midi mid midi kar; + audio/mpeg mp3; + audio/ogg ogg; + audio/x-m4a m4a; + audio/x-realaudio ra; + + video/3gpp 3gpp 3gp; + video/mp2t ts; + video/mp4 mp4; + video/mpeg mpeg mpg; + video/quicktime mov; + video/webm webm; + video/x-flv flv; + video/x-m4v m4v; + video/x-mng mng; + video/x-ms-asf asx asf; + video/x-ms-wmv wmv; + video/x-msvideo avi; +} + +# configuration file /etc/nginx/conf.d/default.conf: +limit_req_zone $binary_remote_addr zone=limit_req_by_clientip:5m rate=100r/s sync; +limit_req_log_level info; + +server { + listen 80 default_server; + server_name app.nginx.local; + status_zone server-app.nginx.local; + + access_log /var/log/nginx/app.nginx.local.access.log main; + error_log /var/log/nginx/app.nginx.local.error.log notice; + proxy_set_header nginx $hostname; + + limit_req zone=limit_req_by_clientip burst=100 nodelay; + + location / { + status_zone location-app.nginx.local; + proxy_pass http://app; + } +} +upstream app { + keepalive 32; + zone upstream-app 128k; + + server backend.nginx.local:8080; + server backend.nginx.local:8081; + +} + +# configuration file /etc/nginx/conf.d/nginx-plus-mgmt.conf: +server { + listen 8080 default_server; + access_log off; + + location /api/ { + api write=on; + } + location = /dashboard.html { + root /usr/share/nginx/html; + } + location / { + return 200; + } +} + +# configuration file /etc/nginx/stream_conf.d/nginx-ha.conf: +stream { + resolver 10.1.1.5 valid=10s ipv6=off status_zone=zone-nginx-ha; + resolver_timeout 2s; + server { + listen 12345; + zone_sync; + # zone_sync_server 10.1.1.6:12345; + # zone_sync_server 10.1.1.7:12345; + zone_sync_server np-all.nginx.local:12345 resolve; + } +} + +root@nginx-plus-1:~# \ No newline at end of file diff --git "a/4 
NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.backup" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.backup" deleted file mode 100644 index d018762152a8a79893215ef207c23b40c58a2475..0000000000000000000000000000000000000000 --- "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.backup" +++ /dev/null @@ -1,68 +0,0 @@ - -user nginx; -worker_processes auto; - -error_log /var/log/nginx/error.log notice; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - -stream { - server { - listen 9000; - zone_sync; - zone_sync_server 192.168.5.31:9000; - } -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - # '$status $body_bytes_sent "$http_referer" ' - # '$hostname and $host' - # '$http_x_forwarded_for' - # '"$http_user_agent" "$http_x_forwarded_for"'; - log_format main '“$time_local” client=$remote_addr ' - 'method=$request_method request="$request” ' - 'request_length=$request_length ' - 'status=$status bytes_sent=$bytes_sent ' - 'body_bytes_sent=$body_bytes_sent ' - 'referer=$http_referer ' - 'user_agent="$http_user_agent" ' - 'host=$host' - 'xff=$http_x_forwarded_for' - 'upstream_addr=$upstream_addr ' - 'upstream_status=$upstream_status ' - 'request_time=$request_time ' - 'upstream_response_time=$upstream_response_time ' - 'upstream_connect_time=$upstream_connect_time ' - 'upstream_header_time=$upstream_header_time'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; - -upstream backend { - server 192.168.5.32; - server 192.168.5.33; - sticky learn - create=$upstream_cookie_jsessionid - lookup=$cookie_jsessionid - zone=client_session:1m - timeout=1h - sync; - } -} diff --git "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.master" "b/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.master" deleted file mode 100644 index 5f7307d2b6cf74b41ef93716487fb1fd4bf823a5..0000000000000000000000000000000000000000 --- "a/4 NGINX\351\233\206\347\276\244\347\256\241\347\220\206/nginx/nginx.conf.master" +++ /dev/null @@ -1,68 +0,0 @@ - -user nginx; -worker_processes auto; - -error_log /var/log/nginx/error.log notice; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - -stream { - server { - listen 9000; - zone_sync; - zone_sync_server 192.168.5.33:9000; - } -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - # '$status $body_bytes_sent "$http_referer" ' - # '$hostname and $host' - # '$http_x_forwarded_for' - # '"$http_user_agent" "$http_x_forwarded_for"'; - log_format main '“$time_local” client=$remote_addr ' - 'method=$request_method request="$request” ' - 'request_length=$request_length ' - 'status=$status bytes_sent=$bytes_sent ' - 'body_bytes_sent=$body_bytes_sent ' - 'referer=$http_referer ' - 'user_agent="$http_user_agent" ' - 'host=$host' - 'xff=$http_x_forwarded_for' - 'upstream_addr=$upstream_addr ' - 'upstream_status=$upstream_status ' - 'request_time=$request_time ' - 'upstream_response_time=$upstream_response_time ' - 'upstream_connect_time=$upstream_connect_time ' - 'upstream_header_time=$upstream_header_time'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - 
keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; - -upstream backend { - server 192.168.5.32; - server 192.168.5.33; - sticky learn - create=$upstream_cookie_jsessionid - lookup=$cookie_jsessionid - zone=client_session:1m - timeout=1h - sync; - } -}