diff --git "a/Hadoop/UserGuide/CDH5.15.1\345\256\211\350\243\205\351\233\206\347\276\244Hadoop.md" "b/Hadoop/UserGuide/CDH5.15.1\345\256\211\350\243\205\351\233\206\347\276\244Hadoop.md"
index 4a4cd6ae4c413d36d9a07915977afd5ab5f22a0b..f7e2132e0ea378c19fb45f2ee6d1bf7cf7124277 100644
--- "a/Hadoop/UserGuide/CDH5.15.1\345\256\211\350\243\205\351\233\206\347\276\244Hadoop.md"
+++ "b/Hadoop/UserGuide/CDH5.15.1\345\256\211\350\243\205\351\233\206\347\276\244Hadoop.md"
@@ -141,6 +141,12 @@
fs.defaultFS
hdfs://master:8020
+
+
+ hadoop.tmp.dir
+ /data/hadoop
+
+
4. hdfs-site.xml
@@ -152,7 +158,7 @@
- hadoop.tmp.dir
+ dfs.datanode.data.dir
/data/hadoop
diff --git "a/Linux/Issue/Linux\347\263\273\347\273\237\347\243\201\347\233\230\347\233\270\345\205\263\351\227\256\351\242\230.md" "b/Linux/Issue/Linux\347\263\273\347\273\237\347\243\201\347\233\230\347\233\270\345\205\263\351\227\256\351\242\230.md"
index 8e6507f66a8f87a43eae3f0543bdadb0ab4b0e86..73ae7e82b019651d0df4c6aded76bcd321fb8073 100644
--- "a/Linux/Issue/Linux\347\263\273\347\273\237\347\243\201\347\233\230\347\233\270\345\205\263\351\227\256\351\242\230.md"
+++ "b/Linux/Issue/Linux\347\263\273\347\273\237\347\243\201\347\233\230\347\233\270\345\205\263\351\227\256\351\242\230.md"
@@ -7,8 +7,15 @@
mount /dev/sdb /data
du -h /data
+ You can also specify the filesystem type directly when formatting:
+
+ mkfs.ext4 /dev/vdd
+
### 2. 测试磁盘IO情况,很慢说明磁盘有问题,或者IO堵塞
+
+[1] Test disk read speed
+
- 安装一个测试软件
yum intall hdparm -y
@@ -17,6 +24,16 @@
hdparm -tT /dev/sdf
+[2] Monitor per-process IO usage
+
+ - Install the monitoring tool
+
+ yum install iotop -y
+
+ - Watch IO activity
+
+ iotop -a
+
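The commands above cover read benchmarking (hdparm) and per-process monitoring (iotop); write throughput can also be sketched with dd. A minimal example, with an illustrative scratch path and size (not from the original notes):

```shell
# Write 64 MiB of zeros, forcing the data to disk before dd finishes,
# so the reported rate reflects disk throughput rather than the page cache.
dd if=/dev/zero of=/tmp/dd_test bs=1M count=64 conv=fdatasync
# dd prints the throughput on its last line; the scratch file can be removed afterwards.
ls -l /tmp/dd_test
```

Remove `/tmp/dd_test` once done; a sustained rate far below the disk's rated speed suggests a failing disk or IO contention.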
### 3. 磁盘自动挂载配置
diff --git "a/Linux/Issue/Linux\347\263\273\347\273\237\347\275\221\347\273\234\347\233\270\345\205\263\351\227\256\351\242\230.md" "b/Linux/Issue/Linux\347\263\273\347\273\237\347\275\221\347\273\234\347\233\270\345\205\263\351\227\256\351\242\230.md"
index b993817169858f7e38e9f66bb31a0b41283065b5..769a449e179fc3ea490efc2e2a66524bb7417dab 100644
--- "a/Linux/Issue/Linux\347\263\273\347\273\237\347\275\221\347\273\234\347\233\270\345\205\263\351\227\256\351\242\230.md"
+++ "b/Linux/Issue/Linux\347\263\273\347\273\237\347\275\221\347\273\234\347\233\270\345\205\263\351\227\256\351\242\230.md"
@@ -4,3 +4,8 @@
### 1. 判断一台主机的端口是否连通
telnet 121.199.167.99 61616
+
+### Bringing a network interface down/up
+
+ ifconfig eth1 down
+ ifconfig eth1 up
\ No newline at end of file
diff --git "a/MySQL/UserGuide/Window10\345\256\211\350\243\205MySQL5.7\347\232\204ZIP\345\214\205.md" "b/MySQL/UserGuide/Window10\345\256\211\350\243\205MySQL5.7\347\232\204ZIP\345\214\205.md"
index 6cfbfa30cabd5171f2608531d6eead44dae10a91..38dd50f8b3868ca751fafe0a2605cfc5d7e43383 100644
--- "a/MySQL/UserGuide/Window10\345\256\211\350\243\205MySQL5.7\347\232\204ZIP\345\214\205.md"
+++ "b/MySQL/UserGuide/Window10\345\256\211\350\243\205MySQL5.7\347\232\204ZIP\345\214\205.md"
@@ -25,7 +25,7 @@ https://dev.mysql.com/downloads/mysql/

-> 以管理员身份运行DOS窗口,进入到/Soft/Mysql/bin目录下(这里是我的安装目录)
+> Run a command prompt as Administrator and change into /Soft/Mysql/bin (my install directory). (To get an elevated prompt, go to C:\Windows\System32, locate cmd.exe, right-click it, and choose Run as administrator.)
[2] 创建数据目录
diff --git "a/Tools/Http/\344\277\256\346\224\271Httpd\347\253\257\345\217\243.md" "b/Tools/Http/\344\277\256\346\224\271Httpd\347\253\257\345\217\243.md"
new file mode 100644
index 0000000000000000000000000000000000000000..33d74d85d475ae72d83394ced9b8b741cc825b47
--- /dev/null
+++ "b/Tools/Http/\344\277\256\346\224\271Httpd\347\253\257\345\217\243.md"
@@ -0,0 +1,116 @@
+## Changing the Httpd Port
+
+### Environment
+
+> This assumes the httpd service is already installed (if not, it can be installed with the command below)
+
+```shell
+yum install httpd -y
+```
+
+### Changing Httpd's Default Port 80
+
+```shell
+vim /etc/httpd/conf/httpd.conf
+
+# Edit the following directives:
+
+Listen 172.16.4.42:8888
+ServerAdmin root@172.16.4.42
+ServerName 172.16.4.42:8888
+```
+
+### Httpd Fails to Start After the Change
+
+> Error output
+
+```shell
+-- Documentation: http://www.freedesktop.org/wiki/Software/systemd/multiseat
+--
+-- A new session with the ID 8 has been created for the user root.
+--
+-- The leading process of the session is 23622.
+Nov 02 03:21:46 source systemd[1]: Started Session 8 of user root.
+-- Subject: Unit session-8.scope has finished start-up
+-- Defined-By: systemd
+-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
+--
+-- Unit session-8.scope has finished starting up.
+--
+-- The start-up result is done.
+Nov 02 03:21:46 source sshd[23622]: pam_unix(sshd:session): session opened for user root by (uid=0)
+Nov 02 03:21:57 source polkitd[904]: Registered Authentication Agent for unix-process:24708:1552626 (system bus name :1.45 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/P
+Nov 02 03:21:57 source systemd[1]: Starting The Apache HTTP Server...
+-- Subject: Unit httpd.service has begun start-up
+-- Defined-By: systemd
+-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
+--
+-- Unit httpd.service has begun starting up.
+Nov 02 03:22:04 source httpd[24714]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using fe80::fcfc:feff:fe02:286e. Set the 'ServerName' directive globally to s
+Nov 02 03:22:04 source httpd[24714]: (13)Permission denied: AH00072: make_sock: could not bind to address [::]:8888
+Nov 02 03:22:04 source httpd[24714]: (13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:8888
+Nov 02 03:22:04 source httpd[24714]: no listening sockets available, shutting down
+Nov 02 03:22:04 source httpd[24714]: AH00015: Unable to open logs
+Nov 02 03:22:04 source systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILURE
+Nov 02 03:22:04 source kill[25489]: kill: cannot find process ""
+Nov 02 03:22:04 source systemd[1]: httpd.service: control process exited, code=exited status=1
+Nov 02 03:22:04 source systemd[1]: Failed to start The Apache HTTP Server.
+-- Subject: Unit httpd.service has failed
+-- Defined-By: systemd
+-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
+--
+-- Unit httpd.service has failed.
+--
+-- The result is failed.
+Nov 02 03:22:04 source systemd[1]: Unit httpd.service entered failed state.
+Nov 02 03:22:04 source systemd[1]: httpd.service failed.
+Nov 02 03:22:04 source polkitd[904]: Unregistered Authentication Agent for unix-process:24708:1552626 (system bus name :1.45, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.U
+```
+
+> This problem is caused by SELinux; temporarily disabling it lets the service start. Recorded here for future reference.
+
+### Solution (Disable SELinux)
+
+[1] Temporary (takes effect immediately)
+
+```shell
+setenforce 0
+```
+
+[2] Permanent (takes effect after a reboot)
+
+```shell
+vim /etc/selinux/config
+
+# Change the line
+SELINUX=enforcing
+# to
+SELINUX=disabled
+```
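As an alternative sketch: rather than disabling SELinux entirely, the non-standard port can be registered with SELinux so enforcement stays on. This assumes the package providing `semanage` (policycoreutils-python on CentOS 7) is installed; it is not part of the original notes.

```shell
# Allow httpd to bind to port 8888 while keeping SELinux enforcing.
semanage port -a -t http_port_t -p tcp 8888
# Confirm 8888 now appears in the http_port_t port list.
semanage port -l | grep http_port_t
```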
+
+[3] Start the service
+
+```shell
+systemctl start httpd
+```
+
+[4] Verify
+
+```shell
+[root@source ~]# lsof -i :8888
+COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
+httpd 5885 root 3u IPv4 21430 0t0 TCP source:ddi-tcp-1 (LISTEN)
+httpd 6056 apache 3u IPv4 21430 0t0 TCP source:ddi-tcp-1 (LISTEN)
+httpd 6057 apache 3u IPv4 21430 0t0 TCP source:ddi-tcp-1 (LISTEN)
+httpd 6059 apache 3u IPv4 21430 0t0 TCP source:ddi-tcp-1 (LISTEN)
+httpd 6060 apache 3u IPv4 21430 0t0 TCP source:ddi-tcp-1 (LISTEN)
+httpd 6061 apache 3u IPv4 21430 0t0 TCP source:ddi-tcp-1 (LISTEN)
+```
+
+[5] Open the port in the firewall
+
+```shell
+[root@source ~]# firewall-cmd --permanent --zone=public --add-port=8888/tcp
+success
+[root@source ~]# firewall-cmd --reload
+success
+```
\ No newline at end of file
diff --git "a/Zookeeper/UserGuide/Centos7\344\275\277\347\224\250Zookeeper\347\256\200\345\215\225\344\275\277\347\224\250.md" "b/Zookeeper/UserGuide/Centos7\344\275\277\347\224\250Zookeeper\347\256\200\345\215\225\344\275\277\347\224\250.md"
new file mode 100644
index 0000000000000000000000000000000000000000..4de522ee8d6ef9dc8f11bcaaaa55ea1426036e96
--- /dev/null
+++ "b/Zookeeper/UserGuide/Centos7\344\275\277\347\224\250Zookeeper\347\256\200\345\215\225\344\275\277\347\224\250.md"
@@ -0,0 +1,79 @@
+## Basic Zookeeper Usage on CentOS 7
+
+### Environment
+
+> This assumes a standalone or clustered Zookeeper installation is already in place
+
+> For a standalone Zookeeper installation, see [Centos7安装单机版kafka](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/master/%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97/Kafka/UserGuide/Centos7%E5%AE%89%E8%A3%85%E5%8D%95%E6%9C%BA%E7%89%88kafka.md)
+
+> For a clustered Zookeeper installation, see [Centos7安装集群版kafka](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/master/%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97/Kafka/UserGuide/Centos7%E5%AE%89%E8%A3%85%E9%9B%86%E7%BE%A4%E7%89%88kafka.md)
+
+### Registering Zookeeper as a systemd Service
+
+[1] Create a new unit file
+
+```shell
+vim /usr/lib/systemd/system/zookeeper.service
+
+
+[Unit]
+Description=zookeeper.service
+After=network.target
+
+[Service]
+Type=simple
+Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/install/jdk1.8.0_241/bin"
+Environment=ZOO_LOG_DIR=/data/zookeeper/logs
+PIDFile=/data/zookeeper/data/zookeeper_server.pid
+ExecStart=/opt/install/zookeeper-3.6.0/bin/zkServer.sh start
+ExecStop=/opt/install/zookeeper-3.6.0/bin/zkServer.sh stop
+ExecReload=/opt/install/zookeeper-3.6.0/bin/zkServer.sh restart
+Restart=on-failure
+User=root
+Group=root
+
+[Install]
+WantedBy=multi-user.target
+```
+
+[2] Set permissions on the unit file (without this step it did not take effect here)
+
+```shell
+chmod +x /usr/lib/systemd/system/zookeeper.service
+```
+
+[3] Reload systemd
+
+```shell
+systemctl daemon-reload
+```
+
+[4] Verify (the .service suffix can be omitted)
+
+```shell
+ systemctl enable zookeeper.service # enable the service at boot
+ systemctl start zookeeper.service # start the service
+ systemctl stop zookeeper.service # stop the service
+ systemctl reload zookeeper.service # restart the service (runs ExecReload, mapped above to zkServer.sh restart)
+```
+
+### Changing Zookeeper's Default Transaction Log Directory
+
+[1] Edit the Zookeeper configuration file ($ZOOKEEPER_HOME/conf/zoo.cfg)
+
+```shell
+# Open zoo.cfg under your Zookeeper installation directory
+
+vim $ZOOKEEPER_HOME/conf/zoo.cfg
+
+# Add the following line (the directory is up to you, but Zookeeper must have permission to write to it)
+
+dataLogDir=/data/zookeeper/logs
+```
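For orientation, a minimal zoo.cfg with snapshot and transaction-log directories separated might look like the sketch below; the dataDir location is an assumption, the dataLogDir is the one used above.

```shell
# zoo.cfg sketch: snapshots under dataDir, transaction logs under dataLogDir
tickTime=2000
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
```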
+
+[2] Restart the Zookeeper service
+
+```shell
+systemctl stop zookeeper.service
+systemctl start zookeeper.service
+```
\ No newline at end of file
diff --git "a/Zookeeper/UserGuide/Centos7\345\256\211\350\243\205Zookeeper\345\215\225\346\234\272\347\211\210.md" "b/Zookeeper/UserGuide/Centos7\345\256\211\350\243\205Zookeeper\345\215\225\346\234\272\347\211\210.md"
index 121b281a4c417fda695faae91d9baf83ec1513a9..5b87ba52bf7db3913ff9e362fbbeee5406d6d92c 100644
--- "a/Zookeeper/UserGuide/Centos7\345\256\211\350\243\205Zookeeper\345\215\225\346\234\272\347\211\210.md"
+++ "b/Zookeeper/UserGuide/Centos7\345\256\211\350\243\205Zookeeper\345\215\225\346\234\272\347\211\210.md"
@@ -3,13 +3,13 @@
> Zookeeper 的实施部署分为两种模式,分别是 Standalone模式(单机模式),Replicated模式(集群模式)
> 当前介绍一下单机模式的安装与配置
-> 基础环境准备及检测,请参考[Linux环境准备及检测.md](https://github.com/ItdeerLab/itdeerlab-notes/blob/notes/Linux/Soft/Linux%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87%E5%8F%8A%E6%A3%80%E6%B5%8B.md)
+> 基础环境准备及检测,请参考[Linux环境准备及检测.md](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/Linux/Soft/Linux%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87%E5%8F%8A%E6%A3%80%E6%B5%8B.md)
### 依赖软件安装
[1] JDK (JDK 8)
-> 需要提前安装好JDK,安装JDK可以参考 [JDK在Centos7.2的安装配置](https://github.com/ItdeerLab/itdeerlab-notes/blob/notes/JDK/UserGuide/JDK%E5%9C%A8Centos7.2%E7%9A%84%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE.md)
+> 需要提前安装好JDK,安装JDK可以参考 [JDK在Centos7.2的安装配置](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/JDK/UserGuide/JDK%E5%9C%A8Centos7.2%E7%9A%84%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE.md)
### 安装Zookeeper(3.6.0)
@@ -69,13 +69,10 @@ export PATH=$ZOOKEEPER_HOME/bin:$PATH
[4] 启动
```
-终端启动
+Start from a terminal (zkServer.sh start itself runs as a background daemon)
[root@itdeer zookeeper]# ./bin/zkServer.sh start ##关掉终端就会停止
-后台启动
-
-[root@itdeer zookeeper]# nohup ./bin/zkServer.sh start >/dev/null 2>&1 &
```
[5] 检测
@@ -113,4 +110,8 @@ WatchedEvent state:SyncConnected type:None path:null
```
[root@itdeer zookeeper]# rm -fr ../apache-zookeeper-3.6.0-bin.tar.gz
-```
\ No newline at end of file
+```
+
+### Configuring the Zookeeper Log Directory and Registering as a System Service
+
+> See [Centos7使用Zookeeper简单使用](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/master/Zookeeper/UserGuide/Centos7%E4%BD%BF%E7%94%A8Zookeeper%E7%AE%80%E5%8D%95%E4%BD%BF%E7%94%A8.md)
\ No newline at end of file
diff --git "a/Zookeeper/UserGuide/Centos7\345\256\211\350\243\205Zookeeper\351\233\206\347\276\244\347\211\210.md" "b/Zookeeper/UserGuide/Centos7\345\256\211\350\243\205Zookeeper\351\233\206\347\276\244\347\211\210.md"
index 6dd1b94e746515dd02198c85eecaf83465696772..a51d31a6bb9f174df14513fc94b4e8113f7ede78 100644
--- "a/Zookeeper/UserGuide/Centos7\345\256\211\350\243\205Zookeeper\351\233\206\347\276\244\347\211\210.md"
+++ "b/Zookeeper/UserGuide/Centos7\345\256\211\350\243\205Zookeeper\351\233\206\347\276\244\347\211\210.md"
@@ -3,14 +3,12 @@
> Zookeeper 的实施部署分为两种模式,分别是 Standalone模式(单机模式),Replicated模式(集群模式)
> 当前介绍一下集群模式的安装与配置
-
-
> 准备三台这样的机器(集群模式要求至少三台机器,而且最好是奇数个,而且每台机器都在根目录下创建一个/data的挂载点)
> master 192.168.1.91
> node1 192.168.1.92
> node2 192.168.1.93
-> 基础环境准备及检测,请参考[Linux环境准备及检测.md](https://github.com/masterLab/masterlab-notes/blob/notes/Linux/Soft/Linux%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87%E5%8F%8A%E6%A3%80%E6%B5%8B.md)
+> 基础环境准备及检测,请参考[Linux环境准备及检测.md](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/Linux/Soft/Linux%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87%E5%8F%8A%E6%A3%80%E6%B5%8B.md)
### 依赖软件安装
@@ -18,7 +16,7 @@
> 可以配置一台,然后进行复制也是可以的,比较快捷(这里统一安装到/opt/install 提前创建一个install的目录)
-> 需要提前安装好JDK,安装JDK可以参考 [JDK在Centos7.2的安装配置](https://github.com/masterLab/masterlab-notes/blob/notes/JDK/UserGuide/JDK%E5%9C%A8Centos7.2%E7%9A%84%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE.md)
+> 需要提前安装好JDK,安装JDK可以参考 [JDK在Centos7.2的安装配置](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/JDK/UserGuide/JDK%E5%9C%A8Centos7.2%E7%9A%84%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE.md)
### 安装Zookeeper(3.6.0)
@@ -168,4 +166,8 @@
[root@master zookeeper]# rm -fr ../apache-zookeeper-3.6.0-bin.tar.gz
-> 至此Zookeeper的集群版安装完成,可以正常的使用
\ No newline at end of file
+> The Zookeeper cluster installation is now complete and ready for use
+
+### Configuring the Zookeeper Log Directory and Registering as a System Service
+
+> See [Centos7使用Zookeeper简单使用](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/master/Zookeeper/UserGuide/Centos7%E4%BD%BF%E7%94%A8Zookeeper%E7%AE%80%E5%8D%95%E4%BD%BF%E7%94%A8.md)
\ No newline at end of file
diff --git "a/\346\227\266\345\272\217\346\225\260\346\215\256\345\272\223/Druid/Architecture/Druid0.17.0\347\211\210\346\234\254\351\233\206\347\276\244\345\256\236\346\226\275\346\226\207\346\241\243.md" "b/\346\227\266\345\272\217\346\225\260\346\215\256\345\272\223/Druid/Architecture/Druid0.17.0\347\211\210\346\234\254\351\233\206\347\276\244\345\256\236\346\226\275\346\226\207\346\241\243.md"
index c0300ff47a7d2aa1bdd463df7798ddbc9ebfd351..6d88a4474f73131d3b5c6100e2128730c41f6236 100644
--- "a/\346\227\266\345\272\217\346\225\260\346\215\256\345\272\223/Druid/Architecture/Druid0.17.0\347\211\210\346\234\254\351\233\206\347\276\244\345\256\236\346\226\275\346\226\207\346\241\243.md"
+++ "b/\346\227\266\345\272\217\346\225\260\346\215\256\345\272\223/Druid/Architecture/Druid0.17.0\347\211\210\346\234\254\351\233\206\347\276\244\345\256\236\346\226\275\346\226\207\346\241\243.md"
@@ -97,44 +97,52 @@
[1] 下载安装包(Druid的历史版本列表 https://archive.apache.org/dist/druid/)
- mkdir /opt/install && cd /opt/install
+```shell
+mkdir /opt/install && cd /opt/install
- wget https://archive.apache.org/dist/druid/0.17.0/apache-druid-0.17.0-bin.tar.gz
+wget https://archive.apache.org/dist/druid/0.17.0/apache-druid-0.17.0-bin.tar.gz
+```
[2] 解压
- tar -zxvf apache-druid-0.17.0-bin.tar.gz
+```shell
+tar -zxvf apache-druid-0.17.0-bin.tar.gz
- mv mv apache-druid-0.17.0 druid-0.17.0
+mv apache-druid-0.17.0 druid-0.17.0
- rm -fr apache-druid-0.17.0-bin.tar.gz
+rm -fr apache-druid-0.17.0-bin.tar.gz
+```
[3] 创建数据库(库名称为druid)
- CREATE USER 'druid'@'%' IDENTIFIED BY 'druid';
-
- CREATE DATABASE druid DEFAULT CHARACTER SET utf8;
+```shell
+CREATE USER 'druid'@'%' IDENTIFIED BY 'druid';
+
+CREATE DATABASE druid DEFAULT CHARACTER SET utf8;
- GRANT ALL PRIVILEGES ON *.* TO 'druid'@'%' WITH GRANT OPTION;
+GRANT ALL PRIVILEGES ON *.* TO 'druid'@'%' WITH GRANT OPTION;
- commit;
+commit;
- 上传MySQL的驱动程序包
- extensions/mysql-metadata-storage/
+# Upload the MySQL JDBC driver jar into:
+# extensions/mysql-metadata-storage/
- [root@master druid-0.17.0]# ll extensions/mysql-metadata-storage/
- total 1004
- -rw-r--r-- 1 root root 1007502 Jun 1 23:44 mysql-connector-java-5.1.47.jar
- -rw-r--r-- 1 501 wheel 17958 Jan 22 14:35 mysql-metadata-storage-0.17.0.jar
+[root@master druid-0.17.0]# ll extensions/mysql-metadata-storage/
+total 1004
+-rw-r--r-- 1 root root 1007502 Jun 1 23:44 mysql-connector-java-5.1.47.jar
+-rw-r--r-- 1 501 wheel 17958 Jan 22 14:35 mysql-metadata-storage-0.17.0.jar
+```
[4] 配置环境变量
- vim /etc/profile.d/druid.sh
+```shell
+vim /etc/profile.d/druid.sh
- export DRUID_HOME=/opt/install/druid-0.17.0
- export PATH=$DRUID_HOME/bin:$PATH
+export DRUID_HOME=/opt/install/druid-0.17.0
+export PATH=$DRUID_HOME/bin:$PATH
- source /etc/profile
+source /etc/profile
+```
[5] 修改配置
@@ -142,110 +150,122 @@
- Zookeeper配置
- vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
+```shell
+vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
- druid.zk.service.host=master:2181,node1:2181,node2:2181
+druid.zk.service.host=master:2181,node1:2181,node2:2181
- zookeeper1 zookeeper2为Zookeeper的主机名称
+# master, node1 and node2 above are the Zookeeper hostnames
+```
- 元数据存储
- vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
+```shell
+vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
- druid.extensions.loadList=[".....","mysql-metadata-storage"] 添加一项 mysql-metadata-storage
+druid.extensions.loadList=[".....","mysql-metadata-storage"]   ## append mysql-metadata-storage to the existing list
- 注释掉元数据默认的存储方式 derby
- #druid.metadata.storage.type=derby
- #druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
- #druid.metadata.storage.connector.host=localhost
- #druid.metadata.storage.connector.port=1527
+# Comment out the default derby metadata store
+#druid.metadata.storage.type=derby
+#druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
+#druid.metadata.storage.connector.host=localhost
+#druid.metadata.storage.connector.port=1527
- 打开并修改MySQL的元数据连接信息
- druid.metadata.storage.type=mysql
- druid.metadata.storage.connector.connectURI=jdbc:mysql://master:3306/druid
- druid.metadata.storage.connector.user=druid
- druid.metadata.storage.connector.password=druid
+# Uncomment and edit the MySQL metadata connection settings
+druid.metadata.storage.type=mysql
+druid.metadata.storage.connector.connectURI=jdbc:mysql://master:3306/druid
+druid.metadata.storage.connector.user=druid
+druid.metadata.storage.connector.password=druid
+```
- 深度存储(本示例使用的是HDFS的存储方式,集群模式下不支持Local本地存储)
- S3存储方式
- vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
+```shell
+vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
- 将“ druid-s3-extensions”添加到中druid.extensions.loadList
- druid.extensions.loadList=[".....","druid-s3-extensions"] 添加一项 druid-s3-extensions
+# Add druid-s3-extensions to druid.extensions.loadList
+druid.extensions.loadList=[".....","druid-s3-extensions"]   ## append druid-s3-extensions to the existing list
- 在“深度存储”和“索引服务日志”下注释掉本地存储的配置并进行相应的修改
- #druid.storage.type=local
- #druid.storage.storageDirectory=var/druid/segments
+# Under "Deep storage" and "Indexing service logs", comment out the local-storage settings and edit as follows
+#druid.storage.type=local
+#druid.storage.storageDirectory=var/druid/segments
- druid.storage.type=s3
- druid.storage.bucket=your-bucket
- druid.storage.baseKey=druid/segments
- druid.s3.accessKey=...
- druid.s3.secretKey=...
+druid.storage.type=s3
+druid.storage.bucket=your-bucket
+druid.storage.baseKey=druid/segments
+druid.s3.accessKey=...
+druid.s3.secretKey=...
- #druid.indexer.logs.type=file
- #druid.indexer.logs.directory=var/druid/indexing-logs
+#druid.indexer.logs.type=file
+#druid.indexer.logs.directory=var/druid/indexing-logs
- druid.indexer.logs.type=s3
- druid.indexer.logs.s3Bucket=your-bucket
- druid.indexer.logs.s3Prefix=druid/indexing-logs
+druid.indexer.logs.type=s3
+druid.indexer.logs.s3Bucket=your-bucket
+druid.indexer.logs.s3Prefix=druid/indexing-logs
+```
- HDFS存储方式
- vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
+```shell
+vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
- 将“ druid-hdfs-storage”添加到中druid.extensions.loadList
- druid.extensions.loadList=[".....","druid-hdfs-storage"] 添加一项 druid-hdfs-storage
+# Add druid-hdfs-storage to druid.extensions.loadList
+druid.extensions.loadList=[".....","druid-hdfs-storage"]   ## append druid-hdfs-storage to the existing list
- 在“深度存储”和“索引服务日志”下注释掉本地存储的配置并进行相应的修改
- #druid.storage.type=local
- #druid.storage.storageDirectory=var/druid/segments
+# Under "Deep storage" and "Indexing service logs", comment out the local-storage settings and edit as follows
+#druid.storage.type=local
+#druid.storage.storageDirectory=var/druid/segments
- druid.storage.type=hdfs
- druid.storage.storageDirectory=/data/druid/segments
+druid.storage.type=hdfs
+druid.storage.storageDirectory=/data/druid/segments
- #druid.indexer.logs.type=file
- #druid.indexer.logs.directory=var/druid/indexing-logs
+#druid.indexer.logs.type=file
+#druid.indexer.logs.directory=var/druid/indexing-logs
- druid.indexer.logs.type=hdfs
- druid.indexer.logs.directory=/druid/indexing-logs
+druid.indexer.logs.type=hdfs
+druid.indexer.logs.directory=/druid/indexing-logs
- 将您的Hadoop配置XML(core-site.xml,hdfs-site.xml,yarn-site.xml,mapred-site.xml)放在Druid进程的类路径上。可以复制conf/druid/cluster/_common/目录下
+# Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the Druid classpath; copying them into conf/druid/cluster/_common/ works:
- cp $HADOOP_HOME/etc/hadoop/core-site.xml $DRUID_HOME/conf/druid/cluster/_common/
- cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $DRUID_HOME/conf/druid/cluster/_common/
- cp $HADOOP_HOME/etc/hadoop/yarn-site.xml $DRUID_HOME/conf/druid/cluster/_common/
- cp $HADOOP_HOME/etc/hadoop/mapred-site.xml $DRUID_HOME/conf/druid/cluster/_common/
+cp $HADOOP_HOME/etc/hadoop/core-site.xml $DRUID_HOME/conf/druid/cluster/_common/
+cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $DRUID_HOME/conf/druid/cluster/_common/
+cp $HADOOP_HOME/etc/hadoop/yarn-site.xml $DRUID_HOME/conf/druid/cluster/_common/
+cp $HADOOP_HOME/etc/hadoop/mapred-site.xml $DRUID_HOME/conf/druid/cluster/_common/
+```
> data配置修改
- Historical
- vim $DRUID_HOME/conf/druid/cluster/data/historical/runtime.properties
-
- druid.processing.buffer.sizeBytes=500000000 ## buffer 缓存的大小
- druid.processing.numMergeBuffers=4 ## 根据具体的场景进行修改,默认4也是可以的
- druid.processing.numThreads=5 ## 能使用的最大核数为机器(CPU -1)
- druid.processing.tmpDir=/data/druid/processing ## 进程信息存储路径
+```shell
+vim $DRUID_HOME/conf/druid/cluster/data/historical/runtime.properties
- druid.segmentCache.locations=[{"path":"/data/druid/segment-cache","maxSize":300000000000}] ## segment-cache的目录配置及大小
- druid.server.maxSize=300000000000
+druid.processing.buffer.sizeBytes=500000000 ## processing buffer size in bytes
+druid.processing.numMergeBuffers=4 ## tune per workload; the default of 4 is usually fine
+druid.processing.numThreads=5 ## at most (machine CPU cores - 1)
+druid.processing.tmpDir=/data/druid/processing ## processing temp directory
+
+druid.segmentCache.locations=[{"path":"/data/druid/segment-cache","maxSize":300000000000}] ## segment-cache location and max size
+druid.server.maxSize=300000000000
+```
- middleManager
- vim $DRUID_HOME/conf/druid/cluster/data/middleManager/runtime.properties
+```shell
+vim $DRUID_HOME/conf/druid/cluster/data/middleManager/runtime.properties
- druid.worker.capacity=4 ## 当前机器能启动的最多的任务
+druid.worker.capacity=4 ## max number of concurrent tasks on this machine
- druid.indexer.task.baseTaskDir=/data/druid/task ## 任务的日志存储目录
+druid.indexer.task.baseTaskDir=/data/druid/task ## task log storage directory
- druid.indexer.task.hadoopWorkingPath=/data/druid/hadoop-tmp
+druid.indexer.task.hadoopWorkingPath=/data/druid/hadoop-tmp
- druid.indexer.fork.property.druid.processing.numMergeBuffers=2 ## 根据场景修改
- druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000 ## 根据场景修改
- druid.indexer.fork.property.druid.processing.numThreads=1 ## 根据场景修改
+druid.indexer.fork.property.druid.processing.numMergeBuffers=2 ## tune per workload
+druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000 ## tune per workload
+druid.indexer.fork.property.druid.processing.numThreads=1 ## tune per workload
+```
> query配置修改
@@ -255,97 +275,117 @@
[1] 复制安装包
- scp -r /opt/install/druid0.17.0 node1:/opt/install
- scp -r /opt/install/druid0.17.0 node2:/opt/install
- scp -r /opt/install/druid0.17.0 node3:/opt/install
- scp -r /opt/install/druid0.17.0 node4:/opt/install
- scp -r /opt/install/druid0.17.0 node5:/opt/install
+```shell
+scp -r /opt/install/druid-0.17.0 node1:/opt/install
+scp -r /opt/install/druid-0.17.0 node2:/opt/install
+scp -r /opt/install/druid-0.17.0 node3:/opt/install
+scp -r /opt/install/druid-0.17.0 node4:/opt/install
+scp -r /opt/install/druid-0.17.0 node5:/opt/install
+```
[2] 复制/etc/hosts(在安装Hadoop的时候已经做了这里就不用copy了)
- scp /etc/hosts node1:/etc/
- scp /etc/hosts node2:/etc/
- scp /etc/hosts node3:/etc/
- scp /etc/hosts node4:/etc/
- scp /etc/hosts node5:/etc/
+```shell
+scp /etc/hosts node1:/etc/
+scp /etc/hosts node2:/etc/
+scp /etc/hosts node3:/etc/
+scp /etc/hosts node4:/etc/
+scp /etc/hosts node5:/etc/
+```
[3] 复制Druid的环境变量
- scp /etc/profile.d/druid.sh node1:/etc/profile.d/
- scp /etc/profile.d/druid.sh node2:/etc/profile.d/
- scp /etc/profile.d/druid.sh node3:/etc/profile.d/
- scp /etc/profile.d/druid.sh node4:/etc/profile.d/
- scp /etc/profile.d/druid.sh node5:/etc/profile.d/
+```shell
+scp /etc/profile.d/druid.sh node1:/etc/profile.d/
+scp /etc/profile.d/druid.sh node2:/etc/profile.d/
+scp /etc/profile.d/druid.sh node3:/etc/profile.d/
+scp /etc/profile.d/druid.sh node4:/etc/profile.d/
+scp /etc/profile.d/druid.sh node5:/etc/profile.d/
+```
[4] 让所有机器生效(所有机器都执行)
- source /etc/profile
+```shell
+source /etc/profile
+```
[5] 更改通用配置(duird所在的机器都需要更改)
- vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
+```shell
+vim $DRUID_HOME/conf/druid/cluster/_common/common.runtime.properties
- #
- # Hostname
- #
- druid.host=master ## 把主机名更改成所在机器的主机名,这个必须要更改,不然有些服务找不到localhost
+#
+# Hostname
+#
+druid.host=master ## change to this machine's own hostname; required, otherwise some services resolve only to localhost
+```
### 启动
[1] Zookeeper服务
- 检测是否正在运行
+```shell
+# Check that Zookeeper is running
- [root@zookeeper1 ~]# zkServer.sh status
- ZooKeeper JMX enabled by default
- Using config: /opt/install/zookeeper3.6.0/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost.
- Mode: follower
+[root@zookeeper1 ~]# zkServer.sh status
+ZooKeeper JMX enabled by default
+Using config: /opt/install/zookeeper3.6.0/bin/../conf/zoo.cfg
+Client port found: 2181. Client address: localhost.
+Mode: follower
- [root@zookeeper2 ~]# zkServer.sh status
- ZooKeeper JMX enabled by default
- Using config: /opt/install/zookeeper3.6.0/bin/../conf/zoo.cfg
- Client port found: 2181. Client address: localhost.
- Mode: leader
+[root@zookeeper2 ~]# zkServer.sh status
+ZooKeeper JMX enabled by default
+Using config: /opt/install/zookeeper3.6.0/bin/../conf/zoo.cfg
+Client port found: 2181. Client address: localhost.
+Mode: leader
+```
[2] 主服务器
- 终端运行
- start-cluster-master-no-zk-server
+```shell
+# Run in a terminal
+start-cluster-master-no-zk-server
- 后台运行
- nohup start-cluster-master-no-zk-server 1>/dev/null 2>&1 &
+# Run in the background
+nohup start-cluster-master-no-zk-server 1>/dev/null 2>&1 &
- 检测端口
- lsof -i :8081
+# Check the port
+lsof -i :8081
+```
[3] 数据服务器
- 终端运行
- start-cluster-data-server
+```shell
+# Run in a terminal
+start-cluster-data-server
+
+# Run in the background
+nohup start-cluster-data-server 1>/dev/null 2>&1 &
- 后台运行
- nohup start-cluster-data-server 1>/dev/null 2>&1 &
-
- 检测端口
- lsof -i :8083
+# Check the port
+lsof -i :8083
+```
[4] 查询服务器
-
- 终端运行
- start-cluster-query-server
- 后台运行
- nohup start-cluster-query-server 1>/dev/null 2>&1 &
+```shell
+# Run in a terminal
+start-cluster-query-server
- 检测端口
- lsof -i :8082
+# Run in the background
+nohup start-cluster-query-server 1>/dev/null 2>&1 &
+
+# Check the port
+lsof -i :8082
+```
### 停止
- 在druid服务所在的机器执行
+```shell
+# Run on each machine hosting Druid services
- service --down即可停止当前机器锁运行的服务
+service --down   ## stops the services running on this machine
+```
### UI界面访问Druid集群
@@ -373,7 +413,7 @@
01. 下载发数程序
-```
+```shell
[root@itdeer kafka]# mkdir /opt/tokafka
[root@itdeer tokafka]# cd /opt/tokafka
@@ -386,7 +426,7 @@
02. 复制发数包
-```
+```shell
[root@itdeer DataToKafka]# cp target/DataToKafka-3.0.0.jar ../
[root@itdeer DataToKafka]# config target/DataToKafka-3.0.0.jar ../
[root@itdeer DataToKafka]# cd ../
@@ -394,7 +434,7 @@
03. 配置发数文件
-```
+```shell
[root@itdeer tokafka]# vim config/runtime.json
"bootstrapServers": "itdeer:9092",
@@ -405,13 +445,13 @@
04. 启动
-```
+```shell
[root@itdeer toData]# nohup java -jar -Duser.timezone=GMT+8 DataToKafka-3.0.0.jar >/dev/null 2>&1 &
```
05. 检测进程
-```
+```shell
[root@itdeer tokafka]# jps
4512 Jps
2530 QuorumPeerMain
@@ -421,7 +461,7 @@
06. 检测数据
-```
+```shell
[root@itdeer kafka]# ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic tsdb
FHS.126LIC_1009.DACA.PV,60.732,true,2020-03-26 11:25:11
FHS.126TI_1030.DACA.PV,509.675,false,2020-03-26 11:25:11
diff --git "a/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Centos7\345\256\211\350\243\205\345\215\225\346\234\272\347\211\210kafka.md" "b/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Centos7\345\256\211\350\243\205\345\215\225\346\234\272\347\211\210kafka.md"
index d99f931d6a1499bebfb71b20596241e69f8d53bf..7338d25ec79aac69387d389aef8474de89cb1827 100644
--- "a/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Centos7\345\256\211\350\243\205\345\215\225\346\234\272\347\211\210kafka.md"
+++ "b/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Centos7\345\256\211\350\243\205\345\215\225\346\234\272\347\211\210kafka.md"
@@ -1,18 +1,18 @@
### Centos7安装单机版Kafka
-基础环境准备及检测,请参考[Linux环境准备及检测.md](https://github.com/ItdeerLab/itdeerlab-notes/blob/notes/Linux/Soft/Linux%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87%E5%8F%8A%E6%A3%80%E6%B5%8B.md)
+基础环境准备及检测,请参考[Linux环境准备及检测.md](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/Linux/Soft/Linux%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87%E5%8F%8A%E6%A3%80%E6%B5%8B.md)
### 依赖软件安装
[1] JDK (JDK 8)
-> 需要提前安装好JDK,安装JDK可以参考 [JDK在Centos7.2的安装配置](https://github.com/ItdeerLab/itdeerlab-notes/blob/notes/JDK/UserGuide/JDK%E5%9C%A8Centos7.2%E7%9A%84%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE.md)
+> 需要提前安装好JDK,安装JDK可以参考 [JDK在Centos7.2的安装配置](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/JDK/UserGuide/JDK%E5%9C%A8Centos7.2%E7%9A%84%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE.md)
[2] Zookeeper(3.6.0)
> Kafka自带有Zookeeper,这里将使用自己单独安装的Zookeeper,使用自带的也是比较简单,在启动Kafka之前,先启动Zookeeper一个命令就可以
-> 安装Zookeeper单机版可以参考 [Centos7安装Zookeeper单机版.md](https://github.com/ItdeerLab/itdeerlab-notes/blob/notes/Zookeeper/UserGuide/Centos7%E5%AE%89%E8%A3%85Zookeeper%E5%8D%95%E6%9C%BA%E7%89%88.md)
+> 安装Zookeeper单机版可以参考 [Centos7安装Zookeeper单机版.md](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/Zookeeper/UserGuide/Centos7%E5%AE%89%E8%A3%85Zookeeper%E5%8D%95%E6%9C%BA%E7%89%88.md)
### 安装Kafka(2.2.0)
@@ -95,4 +95,8 @@ java 2684 root 120u IPv6 11993 0t0 TCP itdeer:XmlIpcRegSvc->itdeer:6
```
[root@itdeer kafka]# rm -fr ../kafka_2.12-2.2.2.tgz
-```
\ No newline at end of file
+```
+
+### Configuring the Kafka Log Directory and Registering as a System Service
+
+> See [Kafka在Centos7上的简单配置](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/master/%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97/Kafka/UserGuide/Kafka%E5%9C%A8Centos7%E4%B8%8A%E7%9A%84%E7%AE%80%E5%8D%95%E9%85%8D%E7%BD%AE.md)
\ No newline at end of file
diff --git "a/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Centos7\345\256\211\350\243\205\351\233\206\347\276\244\347\211\210kafka.md" "b/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Centos7\345\256\211\350\243\205\351\233\206\347\276\244\347\211\210kafka.md"
new file mode 100644
index 0000000000000000000000000000000000000000..4c0c16a1a587ef25a3e59c89b9bb65ef5c99e4c7
--- /dev/null
+++ "b/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Centos7\345\256\211\350\243\205\351\233\206\347\276\244\347\211\210kafka.md"
@@ -0,0 +1,156 @@
+### Installing a Kafka Cluster on CentOS 7
+
+For base environment preparation and checks, see [Linux环境准备及检测.md](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/Linux/Soft/Linux%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87%E5%8F%8A%E6%A3%80%E6%B5%8B.md)
+
+### Installing Dependencies
+
+[1] JDK (JDK 8)
+
+> JDK must be installed in advance; see [JDK在Centos7.2的安装配置](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/JDK/UserGuide/JDK%E5%9C%A8Centos7.2%E7%9A%84%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE.md)
+
+[2] Zookeeper(3.6.0)
+
+> Kafka ships with a bundled Zookeeper, but this guide uses a separately installed one. The bundled Zookeeper is also easy to use: just start it with a single command before starting Kafka
+
+> To install a Zookeeper cluster, see [Centos7安装Zookeeper集群版](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/Zookeeper/UserGuide/Centos7%E5%AE%89%E8%A3%85Zookeeper%E9%9B%86%E7%BE%A4%E7%89%88.md)
+
+[3] 做ip映射
+
+```
+vim /etc/hosts
+
+192.168.1.91 master
+192.168.1.92 node1
+192.168.1.93 node2
+```
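For scripted setups, the three mappings can be appended idempotently. A minimal sketch, run here against a scratch file (`/tmp/hosts.test` is an illustrative stand-in; point `HOSTS_FILE` at `/etc/hosts` for real use):

```shell
# Append a cluster host entry only if the hostname is not already mapped.
HOSTS_FILE=/tmp/hosts.test
printf '127.0.0.1 localhost\n' > "$HOSTS_FILE"

add_host() {
    local ip="$1" name="$2"
    # skip the append when the hostname already appears in the file
    grep -qw "$name" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
}

add_host 192.168.1.91 master
add_host 192.168.1.92 node1
add_host 192.168.1.93 node2
add_host 192.168.1.93 node2   # repeated call is a no-op
```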
+
+### Install Kafka (2.2.2)
+
+
+[1] Download Kafka
+
+```shell
+[root@master ~]# mkdir -p /opt/install
+
+[root@master ~]# cd /opt/install
+
+[root@master install]# wget https://archive.apache.org/dist/kafka/2.2.2/kafka_2.12-2.2.2.tgz
+```
+
+[2] Unpack
+
+```shell
+[root@master install]# tar -zxf kafka_2.12-2.2.2.tgz
+
+[root@master install]# mv kafka_2.12-2.2.2 kafka
+```
+
+[3] Configure
+
+1. Change the data directory
+
+```shell
+[root@master install]# cd kafka
+[root@master kafka]# cp config/server.properties config/server.properties.bak
+[root@master kafka]# vim config/server.properties
+
+
+# broker.id must be unique per node; change it on the other nodes
+broker.id=1
+
+listeners=PLAINTEXT://master:9092
+advertised.listeners=PLAINTEXT://master:9092
+
+
+# usually a dedicated data disk is mounted at /data; any path works, but the
+# default under /tmp is wiped when the machine reboots
+log.dirs=/data/kafka/kafka-logs
+
+
+# Zookeeper connection string
+zookeeper.connect=master:2181,node1:2181,node2:2181
+
+```
+
+2. Set environment variables
+
+```shell
+[root@master kafka]# vim /etc/profile.d/kafka.sh
+
+export KAFKA_HOME=/opt/install/kafka
+export PATH=$KAFKA_HOME/bin:$PATH
+
+[root@master kafka]# source /etc/profile
+```
+
+3. Copy to the other nodes
+
+```shell
+
+# on node1 and node2, create the target directory first: mkdir -p /opt/install
+
+scp -r /opt/install/kafka node1:/opt/install/
+scp -r /opt/install/kafka node2:/opt/install/
+
+
+scp /etc/hosts node1:/etc/
+scp /etc/hosts node2:/etc/
+
+scp /etc/profile.d/kafka.sh node1:/etc/profile.d/
+scp /etc/profile.d/kafka.sh node2:/etc/profile.d/
+
+# run "source /etc/profile" on every node
+```
+
+4. Adjust the per-node configuration
+
+```
+vim $KAFKA_HOME/config/server.properties
+
+# on node1 set broker.id=2; on node2 set broker.id=3
+# also point listeners/advertised.listeners at each node's own hostname
+```
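Rather than editing each copy by hand, the per-node values can be patched with `sed`. A minimal sketch, exercised here on a scratch copy of the file (`/tmp/server.properties.test` is illustrative; on a real node, run `set_broker` against `$KAFKA_HOME/config/server.properties`):

```shell
# set_broker patches the three per-node settings in one pass
set_broker() {
    local id="$1" host="$2" conf="$3"
    sed -i \
        -e "s|^broker\.id=.*|broker.id=${id}|" \
        -e "s|^listeners=.*|listeners=PLAINTEXT://${host}:9092|" \
        -e "s|^advertised\.listeners=.*|advertised.listeners=PLAINTEXT://${host}:9092|" \
        "$conf"
}

# scratch copy standing in for $KAFKA_HOME/config/server.properties
CONF=/tmp/server.properties.test
printf 'broker.id=1\nlisteners=PLAINTEXT://master:9092\nadvertised.listeners=PLAINTEXT://master:9092\n' > "$CONF"

set_broker 2 node1 "$CONF"   # what you would run on node1
```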
+
+
+[4] Start
+
+```shell
+# foreground start (the broker stops when the terminal closes)
+[root@master kafka]# bin/kafka-server-start.sh config/server.properties
+
+# background start
+[root@master kafka]# bin/kafka-server-start.sh -daemon config/server.properties
+
+# the other nodes must be started as well
+[root@node1 kafka]# bin/kafka-server-start.sh -daemon config/server.properties
+[root@node2 kafka]# bin/kafka-server-start.sh -daemon config/server.properties
+```
+
+[5] Verify
+
+1. Check the process
+
+```shell
+[root@master kafka]# jps
+2530 QuorumPeerMain
+3114 Jps
+2684 Kafka
+```
+
+2. Check the port
+
+```shell
+[root@master kafka]# lsof -i :9092
+COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
+java 2684 root 103u IPv6 29706 0t0 TCP *:XmlIpcRegSvc (LISTEN)
+java 2684 root 119u IPv6 14326 0t0 TCP master:60510->master:XmlIpcRegSvc (ESTABLISHED)
+java 2684 root 120u IPv6 11993 0t0 TCP master:XmlIpcRegSvc->master:60510 (ESTABLISHED)
+```
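When scripting the startup of all three brokers, it helps to poll the port instead of sleeping for a fixed time. A small helper sketch using bash's built-in `/dev/tcp` pseudo-device (this assumes bash, not a plain POSIX sh):

```shell
# wait_for_port polls host:port until a TCP connect succeeds or tries run out
wait_for_port() {
    local host="$1" port="$2" tries="${3:-10}" i
    for i in $(seq "$tries"); do
        # bash turns a redirect to /dev/tcp/HOST/PORT into a TCP connect attempt
        if (echo > "/dev/tcp/${host}/${port}") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# e.g. after starting a broker: wait_for_port master 9092 30 && echo "broker up"
```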
+
+[6] Remove the downloaded archive
+
+```shell
+[root@master kafka]# rm -fr ../kafka_2.12-2.2.2.tgz
+```
+
+### Configure Kafka's log directory and register it as a system service
+
+> See [Kafka在Centos7上的简单配置](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/master/%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97/Kafka/UserGuide/Kafka%E5%9C%A8Centos7%E4%B8%8A%E7%9A%84%E7%AE%80%E5%8D%95%E9%85%8D%E7%BD%AE.md)
\ No newline at end of file
diff --git "a/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Kafka\345\234\250Centos7\344\270\212\347\232\204\347\256\200\345\215\225\351\205\215\347\275\256.md" "b/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Kafka\345\234\250Centos7\344\270\212\347\232\204\347\256\200\345\215\225\351\205\215\347\275\256.md"
new file mode 100644
index 0000000000000000000000000000000000000000..d7c81ecd3317be2a2dcd302f356a1cf7699d3cd5
--- /dev/null
+++ "b/\346\266\210\346\201\257\351\230\237\345\210\227/Kafka/UserGuide/Kafka\345\234\250Centos7\344\270\212\347\232\204\347\256\200\345\215\225\351\205\215\347\275\256.md"
@@ -0,0 +1,81 @@
+## Basic Kafka configuration on CentOS 7
+
+### Environment
+
+> A single-node or cluster Kafka installation is already in place
+
+> For a single-node Kafka install, see [Centos7安装单机版kafka](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97/Kafka/UserGuide/Centos7%E5%AE%89%E8%A3%85%E5%8D%95%E6%9C%BA%E7%89%88kafka.md)
+
+> For a Kafka cluster install, see [Centos7安装集群版kafka](https://gitee.com/ItdeerLab/itdeerlab-notes/blob/notes/%E6%B6%88%E6%81%AF%E9%98%9F%E5%88%97/Kafka/UserGuide/Centos7%E5%AE%89%E8%A3%85%E9%9B%86%E7%BE%A4%E7%89%88kafka.md)
+
+> The Zookeeper service is up and running
+
+### Register Kafka as a systemd service
+
+[1] Create a new unit file (Zookeeper must be running first; if Zookeeper itself is not registered as a system service, change After=network.target zookeeper.service to After=network.target)
+
+```shell
+vim /usr/lib/systemd/system/kafka.service
+
+
+[Unit]
+Description=Apache Kafka server (broker)
+After=network.target zookeeper.service
+
+[Service]
+Type=simple
+Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/install/jdk1.8.0_241/bin"
+ExecStart=/opt/install/kafka_2.12-2.2.0/bin/kafka-server-start.sh /opt/install/kafka_2.12-2.2.0/config/server.properties
+ExecStop=/opt/install/kafka_2.12-2.2.0/bin/kafka-server-stop.sh
+Restart=on-failure
+User=root
+Group=root
+
+[Install]
+WantedBy=multi-user.target
+```
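Before reloading systemd, it is worth checking that every Exec* directive in the unit points at an existing executable, since a typo in those paths only surfaces at start time. A small sketch, exercised here on a scratch copy of the unit (`/tmp/kafka.service.test` is illustrative; the `/opt/install/...` paths are the ones from the unit above):

```shell
# exec_bins prints the binary from every Exec* directive in a unit file
exec_bins() {
    sed -n 's/^Exec[A-Za-z]*=\([^ ]*\).*/\1/p' "$1"
}

# scratch copy standing in for /usr/lib/systemd/system/kafka.service
UNIT=/tmp/kafka.service.test
printf '%s\n' \
    '[Service]' \
    'ExecStart=/opt/install/kafka_2.12-2.2.0/bin/kafka-server-start.sh /opt/install/kafka_2.12-2.2.0/config/server.properties' \
    'ExecStop=/opt/install/kafka_2.12-2.2.0/bin/kafka-server-stop.sh' \
    > "$UNIT"

# warn about any path that is missing or not executable
for bin in $(exec_bins "$UNIT"); do
    [ -x "$bin" ] || echo "check this path: $bin"
done
```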
+
+[2] Set permissions on the unit file (systemd only needs read access; the conventional mode is 644)
+
+```shell
+chmod 644 /usr/lib/systemd/system/kafka.service
+```
+
+[3] Reload systemd
+
+```shell
+systemctl daemon-reload
+```
+
+[4] Verify (the .service suffix may be omitted)
+
+```shell
+ systemctl enable kafka.service   # start at boot
+ systemctl start kafka.service    # start the service
+ systemctl stop kafka.service     # stop the service
+ systemctl restart kafka.service  # restart the service
+```
+
+### Change Kafka's default log directory
+
+[1] Edit Kafka's launcher script ($KAFKA_HOME/bin/kafka-run-class.sh)
+
+```shell
+# open kafka-run-class.sh under your Kafka installation directory
+vim $KAFKA_HOME/bin/kafka-run-class.sh
+
+# add one line near the top (the directory is up to you, but Kafka must have
+# permission to write to it)
+LOG_DIR=/data/kafka/logs
+```
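The same edit can be applied non-interactively and idempotently, which is convenient when configuring several nodes. A sketch against a scratch stand-in for the script (`/tmp/kafka-run-class.sh.test` is illustrative; point `SCRIPT` at `$KAFKA_HOME/bin/kafka-run-class.sh` for real use):

```shell
# scratch stand-in: just a shebang plus placeholder content
SCRIPT=/tmp/kafka-run-class.sh.test
printf '#!/bin/bash\n# ... rest of kafka-run-class.sh ...\n' > "$SCRIPT"

NEW_LOG_DIR=/data/kafka/logs
if grep -q '^LOG_DIR=' "$SCRIPT"; then
    # a LOG_DIR line already exists: rewrite it in place
    sed -i "s|^LOG_DIR=.*|LOG_DIR=${NEW_LOG_DIR}|" "$SCRIPT"
else
    # otherwise insert it right after the shebang line
    sed -i "1a LOG_DIR=${NEW_LOG_DIR}" "$SCRIPT"
fi
```

Remember to also create the directory itself (`mkdir -p /data/kafka/logs`) with permissions that let the Kafka process write to it.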
+
+[2] Restart the Kafka service
+
+```shell
+systemctl stop kafka.service
+systemctl start kafka.service
+```
+
+> Verify that the log files are now being written to the new directory.
\ No newline at end of file
diff --git "a/\346\266\210\346\201\257\351\230\237\345\210\227/Mosquitto/Windows10\345\256\211\350\243\205Mosquitto1.6.10.md" "b/\346\266\210\346\201\257\351\230\237\345\210\227/Mosquitto/Windows10\345\256\211\350\243\205Mosquitto1.6.10.md"
index 970309f70b3f3c90ff8e4284ada95d14cf122db1..472c557d619151715a09d6343a18683ac0bcaf42 100644
--- "a/\346\266\210\346\201\257\351\230\237\345\210\227/Mosquitto/Windows10\345\256\211\350\243\205Mosquitto1.6.10.md"
+++ "b/\346\266\210\346\201\257\351\230\237\345\210\227/Mosquitto/Windows10\345\256\211\350\243\205Mosquitto1.6.10.md"
@@ -40,6 +40,8 @@ https://mosquitto.org/download/

+> Alternatively, open the Windows Services panel, find the Mosquitto Broker service, and start it from the right-click menu
+
[2] 启动订阅者
> 打开一个cmd的窗口,进入到安装目录下
@@ -83,4 +85,37 @@ https://www.eclipse.org/paho/components/tool/

-> This test is straightforward
\ No newline at end of file
+> This test is straightforward
+
+
+### Set a username and password
+
+> Go to the installation directory (here D:\Mosquitto) and locate the mosquitto.conf file
+
+
+> Edit the configuration file
+
+```shell
+# allow_anonymous controls whether anonymous clients may connect; the default is true
+before: #allow_anonymous
+after:  allow_anonymous false
+
+# password_file points at the file holding the usernames and password hashes
+before: #password_file
+after:  password_file D:\Mosquitto\pwfile.example
+
+```
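On Windows the two edits are made by hand as described above; on a Linux install of Mosquitto the same changes can be scripted. A sketch against a scratch copy of the file (`/tmp/mosquitto.conf.test` and `/etc/mosquitto/pwfile` are illustrative paths):

```shell
# scratch stand-in for mosquitto.conf with the two commented-out defaults
CONF=/tmp/mosquitto.conf.test
printf '#allow_anonymous true\n#password_file\n' > "$CONF"

# uncomment and set both directives in one pass
sed -i \
    -e 's|^#*allow_anonymous.*|allow_anonymous false|' \
    -e 's|^#*password_file.*|password_file /etc/mosquitto/pwfile|' \
    "$CONF"
```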
+
+> Open a cmd window, change into the installation directory, run the command below, then enter the password when prompted
+
+```shell
+mosquitto_passwd -c D:\Mosquitto\pwfile.example admin
+
+> D:\Mosquitto\mosquitto_passwd.exe D:\Mosquitto\pwfile.example admin # use the absolute path when mosquitto_passwd is not on PATH
+Password:
+Reenter password:
+```
+
+> Here admin is the username; multiple users can be created this way
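Since each mosquitto_passwd entry is a single `user:hash` line, the configured usernames can be listed with `cut`. Shown against a scratch file with placeholder hashes (`/tmp/pwfile.test` and the dummy hash values are illustrative; a real pwfile holds salted hashes generated by mosquitto_passwd):

```shell
# scratch password file with placeholder hashes
PWFILE=/tmp/pwfile.test
printf 'admin:$7$dummy-hash\nsensor01:$7$dummy-hash\n' > "$PWFILE"

# the field before the first colon is the username
cut -d: -f1 "$PWFILE"
```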