# Containerized Log Collection System

**Repository Path**: liumeng90/containerized-log-collection-system

## Basic Information

- **Project Name**: 容器化日志收集系统 (Containerized Log Collection System)
- **Description**: Simulates an enterprise log-processing scenario. Kafka is deployed on Kubernetes in containers to collect website logs, and a Python script acts as a consumer that reads the logs from Kafka and stores them in MySQL. The nginx web services, Prometheus monitoring, and Filebeat log shipping are also deployed as containers. The project is an attempt at containerizing a log-processing workload: compared with a VM-based setup, the containerized approach improves overall resource utilization, makes it easier to scale the services, and simplifies cluster operations.
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2023-07-05
- **Last Updated**: 2023-07-05

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Containerized Log Processing System

## 1. Project Introduction

This project simulates an enterprise log-processing scenario. Kafka is deployed on Kubernetes in containers to collect website logs, and a Python script acts as a consumer that reads the logs from Kafka, formats them, and stores them in a MySQL database. The nginx web services, Prometheus monitoring, and Filebeat log shipping are also deployed as containers. The project is an attempt at containerizing a log-processing workload: compared with a VM-based setup, the containerized approach improves overall resource utilization, makes it easier to scale the services, and simplifies operation and maintenance of the cluster.

## 2. Environment

centos-7.9, k8s-1.23.6, docker-20.10.18, flannel-0.19.2, helm-3.9, nginx-1.23.2, kafka-2.8.1, zookeeper-3.7.0, prometheus-2.36.2, filebeat-7.17.3, grafana-9.1.5, mysql-5.7.37, python-3.6.8, nfs-1.3.0

## 3. Implementation Steps

### 1. Plan the project architecture, then build the Kubernetes cluster and make sure every component and node runs normally (because of limited local resources, the lab environment uses a single worker node)

![Project architecture](%E9%A1%B9%E7%9B%AE%E6%9E%B6%E6%9E%84%E5%9B%BE.png)

### 2. Set up the NFS server and export shared directories with read/write permissions, to provide persistent storage for the containers and keep the web content consistent across replicas

![NFS status](nfs%E7%8A%B6%E6%80%81.png)

### 3. Create two nginx applications in Kubernetes, expose them with NodePort Services, and create HPAs for automatic scaling

```
Deploy the website www.yueball.com:
[root@k8s-master kafka]# cat yueball-nginx.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: bigdata
  name: nginx-pv
  labels:
    type: nginx-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs          # storage class of the PV
  nfs:
    path: "/opt/yueball_nginx"   # directory exported by NFS
    server: 192.168.50.4         # IP address of the NFS server
    readOnly: false              # access mode
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: bigdata
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs          # use the nfs-class PV
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: bigdata
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 3
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: nfs-pv
        persistentVolumeClaim:
          claimName: nginx-pvc
      - name: nginxcm
        configMap:
          name: nginxcm
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: nginxsc
        secret:
          secretName: nginxsc
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
        - containerPort: 443
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nfs-pv
        - name: nginxcm
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: nginxsc
          mountPath: /etc/nginx/conf.d
        resources:
          limits:
            cpu: "60m"
          requests:
            cpu: "30m"
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  namespace: bigdata
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: yueball-hpa
  namespace: bigdata
spec:
  minReplicas: 1                        # minimum number of pods
  maxReplicas: 10                       # maximum number of pods
  targetCPUUtilizationPercentage: 70    # CPU utilization target
  scaleTargetRef:                       # the nginx Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-nginx
[root@k8s-master kafka]# kubectl apply -f yueball-nginx.yaml
persistentvolume/nginx-pv created
persistentvolumeclaim/nginx-pvc created
deployment.apps/my-nginx created
service/my-nginx created
horizontalpodautoscaler.autoscaling/yueball-hpa unchanged

Deploy the website www.hunau.edu:
[root@k8s-master kafka]# cat hunau-nginx.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: bigdata
  name: nginx-pv2
  labels:
    type: nginx-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs          # storage class of the PV
  nfs:
    path: "/opt/hunau_nginx"     # directory exported by NFS
    server: 192.168.50.4         # IP address of the NFS server
    readOnly: false              # access mode
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: bigdata
  name: nginx-pvc2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs          # use the nfs-class PV
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: bigdata
  name: my-nginx2
spec:
  selector:
    matchLabels:
      run: my-nginx2
  replicas: 3
  template:
    metadata:
      labels:
        run: my-nginx2
    spec:
      volumes:
      - name: nfs-pv
        persistentVolumeClaim:
          claimName: nginx-pvc2
      - name: nginxcm
        configMap:
          name: nginxcm
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: nginxsc
        secret:
          secretName: nginxsc
      containers:
      - name: my-nginx2
        image: nginx
        ports:
        - containerPort: 80
        - containerPort: 443
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nfs-pv
        - name: nginxcm
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: nginxsc
          mountPath: /etc/nginx/conf.d
        resources:
          limits:
            cpu: "60m"
          requests:
            cpu: "30m"
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx2
  namespace: bigdata
  labels:
    run: my-nginx2
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    run: my-nginx2
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hunau-hpa
  namespace: bigdata
spec:
  minReplicas: 1                        # minimum number of pods
  maxReplicas: 10                       # maximum number of pods
  targetCPUUtilizationPercentage: 70    # CPU utilization target
  scaleTargetRef:                       # the nginx Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-nginx2
[root@k8s-master kafka]# kubectl apply -f hunau-nginx.yaml
persistentvolume/nginx-pv2 created
persistentvolumeclaim/nginx-pvc2 created
deployment.apps/my-nginx2 created
service/my-nginx2 created
horizontalpodautoscaler.autoscaling/hunau-hpa unchanged

Check the deployment:
[root@k8s-master kafka]# kubectl get all -n bigdata
NAME                                          READY   STATUS              RESTARTS      AGE
pod/my-nginx-ddc7cd9dd-7h4sc                  0/1     ContainerCreating   0             71s
pod/my-nginx-ddc7cd9dd-fxspx                  0/1     ContainerCreating   0             71s
pod/my-nginx-ddc7cd9dd-mwdnn                  0/1     ContainerCreating   0             71s
pod/my-nginx2-776d9c5f8b-8s4dq                1/1     Running             0             19s
pod/my-nginx2-776d9c5f8b-j4v9r                1/1     Running             0             19s
pod/my-nginx2-776d9c5f8b-lbfpj                0/1     ContainerCreating   0             19s
pod/nfs-client-provisioner-6867bc679d-zxgzr   1/1     Running             3 (10h ago)   11h

NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/my-nginx    NodePort   10.1.245.134   <none>        8001:31125/TCP   71s
service/my-nginx2   NodePort   10.1.82.130    <none>        8002:32765/TCP   19s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx                 0/3     3            0           71s
deployment.apps/my-nginx2                2/3     3            2           19s
deployment.apps/nfs-client-provisioner   1/1     1            1           3d18h

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-ddc7cd9dd                  3         3         0       71s
replicaset.apps/my-nginx2-776d9c5f8b                3         3         2       19s
replicaset.apps/nfs-client-provisioner-6867bc679d   1         1         1       3d18h

NAME                                              REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hunau-hpa     Deployment/my-nginx2   0%/70%    1         10        3          3d9h
horizontalpodautoscaler.autoscaling/yueball-hpa   Deployment/my-nginx    0%/70%    1         10        3          3d9h
```

### 4. Deploy ingress-nginx-controller for layer-7, host-based load balancing (yueball.com and hunau.edu)

```
Create the ingress-nginx controller:
[root@k8s-master kafka]# cat hostNetwork.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1 kind: ServiceAccount metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx automountServiceAccountToken: true --- # Source: ingress-nginx/templates/controller-configmap.yaml apiVersion: v1 kind: ConfigMap metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx data: --- # Source: ingress-nginx/templates/clusterrole.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm name: ingress-nginx rules: - apiGroups: - '' resources: - configmaps - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - '' resources: - nodes verbs: - get - apiGroups: - '' resources: - services verbs: - get - list - watch - apiGroups: - networking.k8s.io resources: - ingresses verbs: - get - list - watch - apiGroups: - '' resources: - events verbs: - create - patch - apiGroups: - networking.k8s.io resources: - ingresses/status verbs: - update - apiGroups: - networking.k8s.io resources: - ingressclasses verbs: - get - list - watch --- # Source: ingress-nginx/templates/clusterrolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm name: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ingress-nginx subjects: - kind: ServiceAccount name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx rules: - apiGroups: - '' resources: - namespaces verbs: - get - apiGroups: - '' resources: - configmaps - pods - secrets - endpoints verbs: - get - list - watch - apiGroups: - '' resources: - services verbs: - get - list - watch - apiGroups: - networking.k8s.io resources: - ingresses verbs: - get - list - watch - apiGroups: - networking.k8s.io resources: - ingresses/status verbs: - update - apiGroups: - networking.k8s.io resources: - ingressclasses verbs: - get - list - watch - apiGroups: - '' resources: - configmaps resourceNames: - ingress-controller-leader verbs: - get - update - apiGroups: - '' resources: - configmaps verbs: - create - apiGroups: - '' resources: - events verbs: - create - patch --- # Source: ingress-nginx/templates/controller-rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 
app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx subjects: - kind: ServiceAccount name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-service-webhook.yaml apiVersion: v1 kind: Service metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller-admission namespace: ingress-nginx spec: type: ClusterIP ports: - name: https-webhook port: 443 targetPort: webhook appProtocol: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- # Source: ingress-nginx/templates/controller-service.yaml apiVersion: v1 kind: Service metadata: annotations: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx spec: type: NodePort ports: - name: http port: 80 protocol: TCP targetPort: http appProtocol: http - name: https port: 443 protocol: TCP targetPort: https appProtocol: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- # Source: ingress-nginx/templates/controller-deployment.yaml apiVersion: apps/v1 kind: DaemonSet metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx spec: selector: matchLabels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller revisionHistoryLimit: 10 minReadySeconds: 0 template: metadata: labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller spec: hostNetwork: true dnsPolicy: ClusterFirst containers: - name: controller image: willdockerhub/ingress-nginx-controller:v1.0.0 imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - /wait-shutdown args: - /nginx-ingress-controller - --election-id=ingress-controller-leader - --controller-class=k8s.io/ingress-nginx - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller - --validating-webhook=:8443 - --validating-webhook-certificate=/usr/local/certificates/cert - --validating-webhook-key=/usr/local/certificates/key - --watch-ingress-without-class=true securityContext: capabilities: drop: - ALL add: - NET_BIND_SERVICE runAsUser: 101 allowPrivilegeEscalation: true env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: LD_PRELOAD value: /usr/local/lib/libmimalloc.so livenessProbe: failureThreshold: 5 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 
timeoutSeconds: 1 ports: - name: http containerPort: 80 protocol: TCP - name: https containerPort: 443 protocol: TCP - name: webhook containerPort: 8443 protocol: TCP volumeMounts: - name: webhook-cert mountPath: /usr/local/certificates/ readOnly: true resources: requests: cpu: 100m memory: 90Mi nodeSelector: kubernetes.io/os: linux serviceAccountName: ingress-nginx terminationGracePeriodSeconds: 300 volumes: - name: webhook-cert secret: secretName: ingress-nginx-admission --- # Source: ingress-nginx/templates/controller-ingressclass.yaml # We don't support namespaced ingressClass yet # So a ClusterRole and a ClusterRoleBinding is required apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: nginx namespace: ingress-nginx spec: controller: k8s.io/ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml # before changing this value, check the required kubernetes version # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook name: ingress-nginx-admission webhooks: - name: validate.nginx.ingress.kubernetes.io matchPolicy: Equivalent rules: - apiGroups: - networking.k8s.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - ingresses failurePolicy: Fail sideEffects: None admissionReviewVersions: - v1 clientConfig: service: namespace: ingress-nginx name: ingress-nginx-controller-admission path: /networking/v1/ingresses --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: ingress-nginx-admission namespace: ingress-nginx annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations verbs: - get - update --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ingress-nginx-admission subjects: - kind: ServiceAccount name: ingress-nginx-admission namespace: ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: ingress-nginx-admission namespace: ingress-nginx annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook rules: - apiGroups: - '' resources: - secrets verbs: - get - create --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: ingress-nginx-admission namespace: ingress-nginx annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx-admission subjects: - kind: ServiceAccount name: ingress-nginx-admission namespace: ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml apiVersion: batch/v1 kind: Job metadata: name: ingress-nginx-admission-create namespace: ingress-nginx annotations: helm.sh/hook: pre-install,pre-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: template: metadata: name: ingress-nginx-admission-create labels: helm.sh/chart: ingress-nginx-4.0.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 1.0.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: containers: - name: create image: hzde0128/kube-webhook-certgen:v1.0 imagePullPolicy: IfNotPresent args: - create - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc - --namespace=$(POD_NAMESPACE) - --secret-name=ingress-nginx-admission env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace restartPolicy: OnFailure serviceAccountName: ingress-nginx-admission nodeSelector: kubernetes.io/os: linux securityContext: runAsNonRoot: true runAsUser: 2000 --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml apiVersion: batch/v1 kind: Job metadata: name: ingress-nginx-admission-patch namespace: ingress-nginx annotations: helm.sh/hook: post-install,post-upgrade helm.sh/hook-delete-policy: 
before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: hzde0128/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
[root@k8s-master kafka]# kubectl apply -f hostNetwork.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
daemonset.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

Create host-based routing (Ingress) for the two nginx Services:
[root@k8s-master kafka]# cat nginx-Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: yueball
  namespace: bigdata
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.yueball.com
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hunau
  namespace: bigdata
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.hunau.edu
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: my-nginx2
            port:
              number: 80
[root@k8s-master kafka]# kubectl apply -f nginx-Ingress.yaml
ingress.networking.k8s.io/yueball created
ingress.networking.k8s.io/hunau created
[root@k8s-master kafka]# kubectl get pod -n ingress-nginx -o wide
NAME                                   READY   STATUS      RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-7442z   0/1     Completed   0          3m12s   10.244.1.248   k8s-node1    <none>           <none>
ingress-nginx-admission-patch-grdfs    0/1     Completed   1          3m12s   10.244.1.247   k8s-node1    <none>           <none>
ingress-nginx-controller-4pj9z         1/1     Running     0          3m12s   192.168.50.4   k8s-node2    <none>           <none>
ingress-nginx-controller-88dpz         1/1     Running     0          3m12s   192.168.50.3   k8s-node1    <none>           <none>
ingress-nginx-controller-bcv2d         1/1     Running     0          3m12s   192.168.50.6   k8s-master   <none>           <none>
[root@k8s-master kafka]# kubectl get ingress -n bigdata
NAME      CLASS    HOSTS             ADDRESS                     PORTS   AGE
hunau     <none>   www.hunau.edu     192.168.50.4,192.168.50.6   80      90s
yueball   <none>   www.yueball.com   192.168.50.4,192.168.50.6   80      90s
```

### 5. Install Helm, create a StorageClass, and use Helm to quickly deploy the ZooKeeper and Kafka clusters on Kubernetes

```
Install helm:
# download the package
$ wget https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz
# unpack it
$ tar -xf helm-v3.7.1-linux-amd64.tar.gz
# create a symlink
$ ln -s /opt/helm/linux-amd64/helm /usr/local/bin/helm
# verify
$ helm version
$ helm help

Create a StorageClass named bigdata-nfs-storage:
[root@k8s-master kafka]# cat bigdata-sc.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: bigdata          # adjust to your environment; same for the objects below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
  namespace: bigdata
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: bigdata        # replace with namespace where provisioner is deployed
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: bigdata          # replace with namespace where provisioner is deployed
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: bigdata
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: bigdata        # replace with namespace where provisioner is deployed
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: bigdata
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes   # mount point inside the container
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.50.4
            - name: NFS_PATH
              value: /opt/nfsdata
      volumes:
        - name: nfs-client-root               # mount point on the host
          nfs:
            server: 192.168.50.4
            path: /opt/nfsdata
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bigdata-nfs-storage
  namespace: bigdata
provisioner: fuseim.pri/ifs    # or choose another name, must match deployment's env PROVISIONER_NAME
reclaimPolicy: Retain          # reclaim policy: Retain, Recycle or Delete
volumeBindingMode: Immediate   # volume binding mode
allowVolumeExpansion: true     # whether PVCs may be expanded
[root@k8s-master kafka]# kubectl apply -f bigdata-sc.yaml
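# Optional check (added sketch, not part of the original transcript): before installing the charts
# that depend on dynamic provisioning, confirm the StorageClass can actually provision a volume by
# creating a throw-away PVC. The name "sc-test-pvc" is hypothetical and only used for this check.
kubectl get storageclass bigdata-nfs-storage
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-test-pvc
  namespace: bigdata
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: bigdata-nfs-storage
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc sc-test-pvc -n bigdata     # should reach the Bound state
kubectl delete pvc sc-test-pvc -n bigdata  # clean up the test claim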
Install ZooKeeper first:
helm repo add bitnami https://charts.bitnami.com/bitnami
[root@k8s-master zookeeper]# helm install zookeeper . \
> --namespace bigdata \
> --set replicaCount=1 --set auth.enabled=false \
> --set allowAnonymousLogin=true \
> --set persistence.storageClass=bigdata-nfs-storage \
> --set persistence.size=1Gi
NAME: zookeeper
LAST DEPLOYED: Sun Oct 30 17:59:21 2022
NAMESPACE: bigdata
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 7.6.0
APP VERSION: 3.7.0

** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:
    zookeeper.bigdata.svc.cluster.local

To connect to your ZooKeeper server run the following commands:
    export POD_NAME=$(kubectl get pods --namespace bigdata -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:
    export NODE_IP=$(kubectl get nodes --namespace bigdata -o jsonpath="{.items[0].status.addresses[0].address}")
    export NODE_PORT=$(kubectl get --namespace bigdata -o jsonpath="{.spec.ports[0].nodePort}" services zookeeper)
    zkCli.sh $NODE_IP:$NODE_PORT

Then install Kafka:
helm install kafka . \
  --namespace bigdata \
  --set zookeeper.enabled=false \
  --set replicaCount=1 \
  --set externalZookeeper.servers=zookeeper.bigdata.svc.cluster.local \
  --set persistence.storageClass=bigdata-nfs-storage \
  --set persistence.size=1Gi
NAME: kafka
LAST DEPLOYED: Mon Oct 24 15:57:27 2022
NAMESPACE: bigdata
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 14.9.0
APP VERSION: 2.8.1

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
    kafka.bigdata.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
    kafka-0.kafka-headless.bigdata.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:
    kubectl run kafka-client --restart='Always' --image docker.io/bitnami/kafka:2.8.1-debian-10-r73 --namespace bigdata --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace bigdata -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-0.kafka-headless.bigdata.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka.bigdata.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

Check the deployment result:
[root@k8s-master kafka]# kubectl get pod,svc -n bigdata
NAME                                          READY   STATUS    RESTARTS      AGE
pod/kafka-0                                   1/1     Running   0             9m8s
pod/my-nginx-ddc7cd9dd-7h4sc                  1/1     Running   0             3h32m
pod/my-nginx2-776d9c5f8b-j4v9r                1/1     Running   0             3h31m
pod/nfs-client-provisioner-6867bc679d-zxgzr   1/1     Running   3 (14h ago)   15h
pod/zookeeper-0                               1/1     Running   0             14m

NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/kafka                ClusterIP   10.1.224.117   <none>        9092/TCP                     9m8s
service/kafka-headless       ClusterIP   None           <none>        9092/TCP,9093/TCP            9m8s
service/my-nginx             NodePort    10.1.245.134   <none>        8001:31125/TCP               3h32m
service/my-nginx2            NodePort    10.1.82.130    <none>        8002:32765/TCP               3h31m
service/zookeeper            ClusterIP   10.1.93.160    <none>        2181/TCP,2888/TCP,3888/TCP   14m
service/zookeeper-headless   ClusterIP   None           <none>        2181/TCP,2888/TCP,3888/TCP   14m
```

### 6. Deploy Filebeat at the node level to collect the website logs and ship them to Kafka, then write a Python consumer that stores the data in the master MySQL database

```
Add the Elastic helm repository:
helm repo add elastic https://helm.elastic.co
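# Optional sketch (not part of the original transcript): the Filebeat config below writes to the
# Kafka topic "test". Broker auto-creation may create it on first write, but creating it explicitly
# lets you choose the partition count. Run these commands from the kafka-client pod created in step 5.
kafka-topics.sh --create --topic test --partitions 3 --replication-factor 1 \
  --bootstrap-server kafka.bigdata.svc.cluster.local:9092
kafka-topics.sh --describe --topic test \
  --bootstrap-server kafka.bigdata.svc.cluster.local:9092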
Custom values file:
[root@k8s-master filebeat]# cat my-values.yaml
daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/my-nginx*.log
      output.elasticsearch:
        enabled: false
        host: ''
        hosts: ''
      output.kafka:
        enabled: true
        hosts: ["kafka-headless.bigdata.svc.cluster.local:9092"]
        topic: test

Install:
helm install filebeat elastic/filebeat -f my-values.yaml --namespace bigdata
NAME: filebeat
LAST DEPLOYED: Mon Oct 24 16:13:56 2022
NAMESPACE: bigdata
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all containers come up.
  $ kubectl get pods --namespace=bigdata -l app=filebeat-filebeat -w

Python script that consumes the data from Kafka:
[root@k8s-master kafka]# cat python_consumer.py
import json
import time

import pymysql
import requests
from pykafka import KafkaClient

# Look up the province and carrier (ISP) of an IP address through the taobao API
taobao_url = "https://ip.taobao.com/outGetIpInfo?accessKey=alibaba-inc&ip="

def resolv_ip(ip):
    response = requests.get(taobao_url + ip)
    if response.status_code == 200:
        tmp_dict = json.loads(response.text)
        prov = tmp_dict["data"]["region"]
        isp = tmp_dict["data"]["isp"]
        return prov, isp
    return None, None

# Convert the timestamp format used in the nginx log into the format we want
def trans_time(dt):
    # parse the string into a time struct
    timeArray = time.strptime(dt, "%d/%b/%Y:%H:%M:%S")
    # timeStamp = int(time.mktime(timeArray))
    # format the time struct back into a string
    new_time = time.strftime("%Y-%m-%d %H:%M:%S", timeArray)
    return new_time

# Read messages from Kafka and extract the fields we need: ip, time and bytes sent
client = KafkaClient(hosts="192.168.50.4:30001")
topic = client.topics['test']
balanced_consumer = topic.get_balanced_consumer(
    consumer_group='testgroup',
    auto_commit_enable=True,
    zookeeper_connect='192.168.50.4:32474'
)
# consumer = topic.get_simple_consumer()
db = pymysql.connect(host="192.168.50.88", user="root", password="123456",
                     port=3306, database="kafka", charset="utf8")
cursor = db.cursor()
for message in balanced_consumer:
    if message is not None:
        line = json.loads(message.value.decode("utf-8"))
        log = line["message"]
        tmp_lst = log.split()
        print(tmp_lst)
        ip = tmp_lst[0]
        dt = tmp_lst[5].replace("[", "")
        bt = tmp_lst[9]
        dt = trans_time(dt)
        prov, isp = resolv_ip(ip)
        if prov and isp:
            try:
                print(prov, isp, dt, bt)
                cursor.execute('insert into nginxlog(dt,prov,isp,bd) values("%s", "%s", "%s", "%s")' % (dt, prov, isp, bt))
                db.commit()
                print("insert succeeded")
            except Exception as err:
                print("insert failed", err)
                db.rollback()
db.close()
```
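The consumer script inserts into a `nginxlog` table in a database called `kafka`, but the README never shows the table definition. A minimal sketch of a schema matching the `insert into nginxlog(dt,prov,isp,bd)` statement is given below; the column types and sizes are assumptions, not values taken from the original project.

```
# assumed schema for the table used by python_consumer.py; adjust types and sizes as needed
mysql -h 192.168.50.88 -P 3306 -uroot -p123456 <<'EOF'
CREATE DATABASE IF NOT EXISTS kafka DEFAULT CHARACTER SET utf8;
CREATE TABLE IF NOT EXISTS kafka.nginxlog (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- surrogate key (assumed)
    dt   DATETIME     NOT NULL,                             -- request time, formatted by trans_time()
    prov VARCHAR(64),                                       -- province resolved from the client IP
    isp  VARCHAR(64),                                       -- ISP resolved from the client IP
    bd   VARCHAR(32)                                        -- bytes sent (nginx $body_bytes_sent)
);
EOF
```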
### 7. Deploy Prometheus and Grafana with Helm to monitor the Kubernetes cluster and the MySQL master/slave cluster, and configure e-mail alerting

```
# add the repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update prometheus-community
helm search repo prometheus-community/prometheus
# pull the chart
helm pull prometheus-community/prometheus
# unpack it
tar -xf prometheus-15.12.2.tgz

Change the image repository and tag in values.yaml and charts/kube-state-metrics/values.yaml as shown in the screenshot:
grep -A3 'image:' prometheus/values.yaml
![Prometheus_value](Prometheus_value.png)

Deploy Prometheus:
[root@k8s-master prometheus]# helm install prometheus ./ \
> -n bigdata \
> --create-namespace \
> --set server.ingress.enabled=true \
> --set server.ingress.hosts='{prometheus.k8s.local}' \
> --set server.ingress.paths='{/}' \
> --set server.ingress.pathType=Prefix \
> --set alertmanager.ingress.enabled=true \
> --set alertmanager.ingress.hosts='{alertmanager.k8s.local}' \
> --set alertmanager.ingress.paths='{/}' \
> --set alertmanager.ingress.pathType=Prefix \
> --set grafana.ingress.enabled=true \
> --set grafana.ingress.hosts='{grafana.k8s.local}' \
> --set grafana.ingress.paths='{/}' \
> --set grafana.ingress.pathType=Prefix
NAME: prometheus
LAST DEPLOYED: Mon Oct 24 23:01:25 2022
NAMESPACE: bigdata
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.bigdata.svc.cluster.local

From outside the cluster, the server URL(s) are:
http://prometheus.k8s.local
Access: 192.168.50.4:32356

The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.bigdata.svc.cluster.local

From outside the cluster, the alertmanager URL(s) are:
http://alertmanager.k8s.local
#################################################################################
######   WARNING: Pod Security Policy has been moved to a global property.  #####
######            use .Values.podSecurityPolicy.enabled with pod-based      #####
######            annotations                                               #####
######            (e.g. .Values.nodeExporter.podSecurityPolicy.annotations) #####
#################################################################################

The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.bigdata.svc.cluster.local

Get the PushGateway URL by running these commands in the same shell:
export NODE_PORT=$(kubectl get --namespace bigdata -o jsonpath="{.spec.ports[0].nodePort}" services prometheus-pushgateway)
export NODE_IP=$(kubectl get nodes --namespace bigdata -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT

For more information on running Prometheus, visit:
https://prometheus.io/

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update grafana
helm search repo grafana/grafana
helm pull grafana/grafana
tar -xf grafana-6.38.3.tgz

Change the image settings in values.yaml:
![grafna_value](grafna_value.png)

Deploy Grafana:
[root@k8s-master grafana]# helm install grafana ./ \
> -n bigdata \
> --create-namespace \
> --set ingress.enabled=true \
> --set ingress.hosts='{grafana.k8s.local}' \
> --set ingress.paths='{/}' \
> --set ingress.pathType=Prefix
W1024 23:20:50.700507 96273 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1024 23:20:50.703253 96273 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1024 23:20:50.825464 96273 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1024 23:20:50.825692 96273 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: grafana
LAST DEPLOYED: Mon Oct 24 23:20:50 2022
NAMESPACE: bigdata
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
   kubectl get secret --namespace bigdata grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
   grafana.bigdata.svc.cluster.local
   If you bind grafana to 80, please update values in values.yaml and reinstall:
   Details refer to https://grafana.com/docs/installation/configuration/#http-port.
   Or grafana would always crash.
   From outside the cluster, the server URL(s) are:
   http://grafana.k8s.local
3. Login with the password from step 1 and the username: admin
#################################################################################
######   WARNING: Persistence is disabled!!! You will lose your data when   #####
######            the Grafana pod is terminated.                            #####
#################################################################################

Get the login password: kubectl get secret --namespace bigdata grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
Access: 192.168.50.4:32269
```
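The heading of this step mentions e-mail alerting, but the transcript above does not show the Alertmanager configuration. Below is a minimal sketch of how it could be wired up with the prometheus chart used here (15.x), assuming its `alertmanagerFiles.alertmanager.yml` values key; the SMTP server, credentials and recipient are placeholders, not values from the original project.

```
# hypothetical Alertmanager e-mail settings; replace the SMTP placeholders with real values
cat > alertmanager-email.yaml <<'EOF'
alertmanagerFiles:
  alertmanager.yml:
    global:
      smtp_smarthost: 'smtp.example.com:465'   # placeholder SMTP server
      smtp_from: 'alert@example.com'           # placeholder sender
      smtp_auth_username: 'alert@example.com'
      smtp_auth_password: 'CHANGE_ME'
      smtp_require_tls: false
    route:
      receiver: email-ops
      group_by: ['alertname']
      repeat_interval: 4h
    receivers:
      - name: email-ops
        email_configs:
          - to: 'ops@example.com'              # placeholder recipient
EOF
# apply the extra values on top of the existing release
helm upgrade prometheus ./ -n bigdata -f alertmanager-email.yaml --reuse-values
```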
## Results

The pods in the Kubernetes cluster are running normally:

![Cluster pod status](%E9%9B%86%E7%BE%A4pod%E7%8A%B6%E6%80%81.png)

The Prometheus and Grafana web UIs work as expected:

![prometheus](prometheus.png)
![grafana](grafana.png)

## Issues encountered

- The log timestamps do not match local time
- Chinese text written into the nginx html pages shows up garbled when opened in a Windows browser
- If an nginx pod does not mount the correct NFS directory (for example, the mounted directory is not exported by NFS), requests to the page return a 403 Forbidden error
- The NFS export directory had no permissions set, so the containers could not read and write normally
- The Python consumer script could not connect to Kafka
- Adding the Prometheus data source in Grafana failed with "Error reading Prometheus: Query error: 500 Internal Server Error"
- A PVC stayed in Pending state; its events only showed "Normal ExternalProvisioning 7s (x19 over 4m22s) persistentvolume-controller". Fixed as follows:

```
Edit the apiserver configuration
[root@k8sm storage]# vi /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
···
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=RemoveSelfLink=false   # add this flag
Then restart kube-apiserver
```

The workaround above is quoted from the CSDN post by 南巷Dong (CC 4.0 BY-SA): https://blog.csdn.net/fclwd/article/details/123674933

## Possible improvements

- Read/write splitting for the MySQL databases
- Store and visualize the data with an ELK stack
- Run the Kubernetes cluster with multiple masters and multiple worker nodes
- Increase the number of Kafka partitions and run throughput/stress tests (see the sketch below)
- Deploy Filebeat as a sidecar instead of a DaemonSet
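For the Kafka partition and throughput item, a rough sketch of how it could be measured with the stock Kafka CLI tools (run from the kafka-client pod created in step 5) is shown below; the partition count, record count and record size are arbitrary example values, not figures from the original project.

```
# grow the existing topic to 6 partitions (example value)
kafka-topics.sh --alter --topic test --partitions 6 \
  --bootstrap-server kafka.bigdata.svc.cluster.local:9092
# push 100k synthetic 200-byte records as fast as possible and report throughput/latency
kafka-producer-perf-test.sh --topic test \
  --num-records 100000 --record-size 200 --throughput -1 \
  --producer-props bootstrap.servers=kafka.bigdata.svc.cluster.local:9092
```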