Hadoop HA (High Availability) Cluster Startup Tutorial


Task 1: Starting the High-Availability Cluster

I. Starting HA

1. Start the JournalNode daemons

[hadoop@master hadoop]$  hadoop-daemon.sh start journalnode

[hadoop@slave1 hadoop]$  hadoop-daemon.sh start journalnode

[hadoop@slave2 hadoop]$  hadoop-daemon.sh start journalnode

master: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-root-journalnode-master.out

slave1: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-root-journalnode-slave1.out

slave2: starting journalnode, logging to /usr/local/src/hadoop/logs/hadoop-root-journalnode-slave2.out
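The three logins above can be collapsed into a single loop run from master, assuming passwordless SSH to every node (node names master/slave1/slave2 as in this cluster). The command is echoed as a dry run; remove the echo to actually start the daemons:

```shell
# Dry run: print the command that would start a JournalNode on each node.
# Remove "echo" to execute over SSH (assumes passwordless SSH is configured).
for node in master slave1 slave2; do
  echo ssh "$node" hadoop-daemon.sh start journalnode
done
```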

2. Format the NameNode

[hadoop@master ~]$ hdfs namenode -format

An exit status of 0 here means the format succeeded; 1 means it failed. If it fails, check the configuration files.
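That success/failure check can be scripted off the command's exit status. A minimal sketch: format_namenode is a hypothetical stand-in for the real hdfs namenode -format run (here it simply fails, so the error branch is shown); replace its body with the real command on the cluster.

```shell
# Hypothetical stand-in for "hdfs namenode -format"; "false" simulates
# a failed format so the error branch below is exercised.
format_namenode() { false; }

if format_namenode; then
  echo "format succeeded (exit status 0)"
else
  echo "format failed (exit status 1): check core-site.xml and hdfs-site.xml"
fi
```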


3. Register the ZNode in ZooKeeper

[hadoop@master ~]$ hdfs zkfc -formatZK

4. Start HDFS

[hadoop@master ~]$ start-dfs.sh

5. Start YARN

[hadoop@master ~]$ start-yarn.sh

6. Synchronize the master's data

Copy the NameNode metadata to the other nodes (run on the master node):

[hadoop@master ~]$ scp -r /usr/local/src/hadoop/tmp/hdfs/nn/* slave1:/usr/local/src/hadoop/tmp/hdfs/nn/

[hadoop@master ~]$ scp -r /usr/local/src/hadoop/tmp/hdfs/nn/* slave2:/usr/local/src/hadoop/tmp/hdfs/nn/
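The two scp commands can be generated from one loop over the standby nodes, using the metadata path from this tutorial. The command is echoed as a dry run; drop the echo to perform the copy for real:

```shell
# Copy the NameNode metadata directory to each standby node.
# Dry run: commands are echoed only; remove "echo" to copy over SSH.
NN_DIR=/usr/local/src/hadoop/tmp/hdfs/nn
for node in slave1 slave2; do
  echo scp -r "$NN_DIR"/* "$node:$NN_DIR/"
done
```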


7. Start the ResourceManager and NameNode processes on slave1

[hadoop@slave1 ~]$ yarn-daemon.sh start resourcemanager

starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-root-resourcemanager-slave1.out

[hadoop@slave1 ~]$ hadoop-daemon.sh start namenode

starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-slave1.out

8. Start the WebAppProxy and the MapReduce JobHistory Server

[hadoop@master ~]$ yarn-daemon.sh start proxyserver

starting proxyserver, logging to /usr/local/src/hadoop/logs/yarn-root-proxyserver-master.out

[hadoop@master ~]$  mr-jobhistory-daemon.sh start historyserver

starting historyserver, logging to /usr/local/src/hadoop/logs/mapred-root-historyserver-master.out

9. Check ports and processes

[hadoop@master ~]$ jps


[hadoop@slave1 ~]$ jps


[hadoop@slave2 ~]$ jps

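Rather than logging in to each machine to run jps, the checks can be driven from master in one loop, assuming passwordless SSH. The ssh line is commented out so this sketch is a dry run:

```shell
# Print a jps header per node; uncomment the ssh line on the cluster
# (assumes passwordless SSH from master to all nodes).
for node in master slave1 slave2; do
  echo "== $node =="
  # ssh "$node" jps
done
```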

Check the web UIs:

master:50070

slave1:50070

master:8088

One NameNode UI (master:50070 or slave1:50070) should report active and the other standby; master:8088 is the ResourceManager UI.

II. Testing HA

1. Create a test file

[hadoop@master ~]$ vi a.txt

// contents:

Hello World

Hello Hadoop
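If you prefer not to open vi, the same file can be written non-interactively with a here-document:

```shell
# Create the test file without an editor.
cat > a.txt <<'EOF'
Hello World
Hello Hadoop
EOF
```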

2. Create a directory in HDFS

[hadoop@master ~]$ hadoop fs -mkdir /input

3. Upload a.txt to /input

[hadoop@master ~]$ hadoop fs -put ~/a.txt /input

4. Change to the directory containing the example jar

[hadoop@master ~]$ cd /usr/local/src/hadoop/share/hadoop/mapreduce/

5. Test MapReduce

[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount /input/a.txt /output

On success, the client prints the job progress and the final counters.

6. Check the job output in HDFS (-lsr is deprecated in Hadoop 2.x; hadoop fs -ls -R is the current form, but both work)

[hadoop@master mapreduce]$ hadoop fs -lsr /output


7. View the test results

[hadoop@master mapreduce]$ hadoop fs -cat /output/part-r-00000

Hadoop 1

Hello 2

World 1
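As a sanity check, the same counts can be reproduced locally with standard Unix tools, since wordcount simply tallies whitespace-separated tokens:

```shell
# Recompute the word counts of a.txt's contents with sort/uniq;
# the output matches the part-r-00000 file shown above.
printf 'Hello World\nHello Hadoop\n' \
  | tr -s ' ' '\n' | sort | uniq -c \
  | awk '{print $2, $1}'
# Hadoop 1
# Hello 2
# World 1
```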

III. Verifying High Availability

1. Automatic failover of the service state

Run:

[hadoop@master mapreduce]$ cd

# hdfs haadmin -failover --forcefence --forceactive <active> <standby>

[hadoop@master ~]$ hdfs haadmin -failover --forcefence --forceactive slave1 master

Check the state:

[hadoop@master ~]$ hdfs haadmin -getServiceState slave1


[hadoop@master ~]$ hdfs haadmin -getServiceState master


2. Manual failover of the service state

Stop and then restart the NameNode on master:

[hadoop@master ~]$  hadoop-daemon.sh stop namenode

stopping namenode

Check the state:

[hadoop@master ~]$ hdfs haadmin -getServiceState master

[hadoop@master ~]$ hdfs haadmin -getServiceState slave1


[hadoop@master ~]$  hadoop-daemon.sh start namenode


Check the state:

[hadoop@master ~]$ hdfs haadmin -getServiceState slave1


[hadoop@master ~]$ hdfs haadmin -getServiceState master


Check the web UIs:

master:50070


slave1:50070

Copyright: original article by 到点睡觉了, published 2022-01-07, 2,865 characters in total.
Reposting: unless otherwise stated, articles on this site are released under CC 4.0; please credit the source when reposting.