Setting Up Hadoop 2.x HA
1. Machine Preparation
Four virtual machines:

```
10.211.55.22 node1
10.211.55.23 node2
10.211.55.24 node3
10.211.55.25 node4
```
2. Role Assignment Across the Four Nodes

| node  | namenode | datanode | zk | zkfc | jn | rm | applimanager |
|-------|----------|----------|----|------|----|----|--------------|
| node1 | 1        |          | 1  | 1    |    |    |              |
| node2 | 1        | 1        | 1  | 1    | 1  |    | 1            |
| node3 |          | 1        | 1  |      | 1  | 1  | 1            |
| node4 |          | 1        |    |      | 1  | 1  | 1            |

Summary:
| node  | processes started |
|-------|-------------------|
| node1 | 4 |
| node2 | 7 |
| node3 | 6 |
| node4 | 5 |

(The counts are as seen in `jps`, which also lists its own Jps process, so each is one higher than the role table.)
3. Preparation on All Machines
3.1 Hostnames and the hosts/DNS File on Every Machine
Rename each virtual machine, and add node1, node2, node3, node4 to the Mac host's DNS entries.

Set the hostname on node1, node2, node3, node4:

```
vi /etc/sysconfig/network
HOSTNAME=node1    # node2/node3/node4 accordingly
```

On the host machine and on node1, node2, node3, node4:

```
vi /etc/hosts
10.211.55.22 node1
10.211.55.23 node2
10.211.55.24 node3
10.211.55.25 node4
```

Then reboot.
3.2 Disable the Firewall

```
service iptables stop && chkconfig iptables off
```

Verify:

```
service iptables status
```
3.3 Passwordless SSH
DSA keys are used here.
First, on each of node1, node2, node3, node4, generate a key and authorize it locally:

```
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
```

Copy node1's key to node2, node3, node4:

```
scp ~/.ssh/id_dsa.pub root@node2:~
scp ~/.ssh/id_dsa.pub root@node3:~
scp ~/.ssh/id_dsa.pub root@node4:~
```

Then on node2, node3, and node4, append it:

```
cat ~/id_dsa.pub >> ~/.ssh/authorized_keys
```

Repeat the same copy-and-append from node2 (to node1, node3, node4), from node3 (to node1, node2, node4), and from node4 (to node1, node2, node3), so every node can reach every other node without a password.
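The twelve copy-and-append steps above can be generated mechanically. A dry-run sketch that only prints the commands (review them, then pipe the output to `sh`); it assumes each node already has its `id_dsa` key from the step above:

```shell
# Print one "read key on src, append on dst" pipeline per ordered pair
# of distinct nodes (4 x 3 = 12 commands). Nothing is executed here.
NODES="node1 node2 node3 node4"
cmds=$(for src in $NODES; do
  for dst in $NODES; do
    [ "$src" = "$dst" ] && continue
    echo "ssh root@$src 'cat ~/.ssh/id_dsa.pub' | ssh root@$dst 'cat >> ~/.ssh/authorized_keys'"
  done
done)
echo "$cmds"
```

This replaces the per-node `scp` plus manual append with a single streamed append per pair.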
3.4 Time Sync (NTP)
On all machines:

```
yum install ntp
ntpdate -u s2m.time.edu.cn
```

Run a sync again at boot to be safe; ideally, set up LAN-local time synchronization so the nodes stay consistent.
Verify with: date
3.5 Install the Java JDK
Install the JDK and configure environment variables on all machines.
Remove OpenJDK first:

```
java -version
rpm -qa | grep jdk
rpm -e --nodeps java-1.6.0-openjdk-javadoc-1.6.0.0-1.41.1.10.4.el6.x86_64
...
rpm -qa | grep jdk
```

Install the JDK:

```
rpm -ivh jdk-7u67-linux-x64.rpm
vi ~/.bash_profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
source ~/.bash_profile
```

Verify:

```
java -version
```
3.6 Upload and Extract the Software
Upload hadoop-2.5.1_x64.tar.gz to node1, node2, node3, node4:

```
scp /Users/mac/Documents/happyup/study/files/hadoop/hadoop-2.5.1_x64.tar.gz root@node1:/home
```

(repeat for node2, node3, node4)

Upload ZooKeeper to node1, node2, node3:

```
scp /Users/mac/Documents/happyup/study/files/hadoop/ha/zookeeper-3.4.6.tar.gz root@node1:/home
```

(repeat for node2, node3)

Extract:

```
# on node1 node2 node3 node4
tar -xzvf /home/hadoop-2.5.1_x64.tar.gz
# on node1 node2 node3
tar -xzvf /home/zookeeper-3.4.6.tar.gz
```
3.7 Snapshot
Preparation for the full Hadoop HA setup is now done:
3.1 hostnames and hosts/DNS entries on every machine
3.2 firewall disabled
3.3 passwordless SSH between all machines
3.4 NTP time sync
3.5 Java JDK installed
3.6 Hadoop and ZooKeeper uploaded and extracted
Take a VM snapshot at this point; other machines can be provisioned from it.
4. ZooKeeper Installation and Configuration
4.1 Edit zoo.cfg

```
ssh root@node1
cp /home/zookeeper-3.4.6/conf/zoo_sample.cfg /home/zookeeper-3.4.6/conf/zoo.cfg
vi zoo.cfg
```

Change the data directory:

```
dataDir=/opt/zookeeper
```

and append at the end, then save with :wq:

```
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```
4.2 Create the Data Directory
In the dataDir:

```
mkdir /opt/zookeeper
cd /opt/zookeeper
vi myid    # write 1, then :wq
```

Copy the directory to node2 and node3 and adjust myid:

```
scp -r /opt/zookeeper/ root@node2:/opt    # then change myid to 2
scp -r /opt/zookeeper/ root@node3:/opt    # then change myid to 3
```
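The zoo.cfg edits from 4.1 can also be scripted. A sketch that emits the file contents; note that tickTime, initLimit, syncLimit, and clientPort are the zoo_sample.cfg defaults, assumed here rather than taken from this guide:

```shell
# Emit the zoo.cfg described in section 4.1. The first four settings are
# the zoo_sample.cfg defaults (an assumption); dataDir and the server
# lines come from this guide.
gen_zoo_cfg() {
  cat <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
EOF
}
gen_zoo_cfg > /tmp/zoo.cfg   # then move into /home/zookeeper-3.4.6/conf/
# On each node, write its own id into the dataDir (1/2/3 on node1/2/3):
# echo 1 > /opt/zookeeper/myid
```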
4.3 Sync the Configuration
Copy the ZooKeeper conf directory to node2 and node3 (note the destination is the parent directory, so the copy does not nest as conf/conf):

```
scp -r /home/zookeeper-3.4.6/conf root@node2:/home/zookeeper-3.4.6/
scp -r /home/zookeeper-3.4.6/conf root@node3:/home/zookeeper-3.4.6/
```
4.4 Environment Variables
On node1, node2, node3:

```
vi ~/.bash_profile
export ZOOKEEPER_HOME=/home/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source ~/.bash_profile
```
4.5 Start
On node1, node2, node3 in turn, from the zk bin directory:

```
zkServer.sh start
```

Verify with jps; you should see something like:

```
3214 QuorumPeerMain
```
5. Hadoop Installation and Configuration
5.1 hadoop-env.sh

```
cd /home/hadoop-2.5.1/etc/hadoop/
vi hadoop-env.sh
```

Change:

```
export JAVA_HOME=/usr/java/jdk1.7.0_67
```
5.2 slaves

```
vi slaves
```

```
node2
node3
node4
```
5.3 hdfs-site.xml
```
vi hdfs-site.xml
```

```
<property>
  <name>dfs.nameservices</name>
  <value>cluster1</value>
</property>
<property>
  <name>dfs.ha.namenodes.cluster1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster1.nn1</name>
  <value>node1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster1.nn2</name>
  <value>node2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster1.nn1</name>
  <value>node1:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster1.nn2</name>
  <value>node2:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node2:8485;node3:8485;node4:8485/cluster1</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.cluster1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_dsa</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/journal/data</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```
5.4 core-site.xml
```
vi core-site.xml
```

```
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cluster1</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
```
5.5 mapred-site.xml
```
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
```

```
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```
5.6 yarn-site.xml
```
vi yarn-site.xml
```

No applicationmanager (nodemanager) list needs to be configured here; those run on the same nodes as the datanodes, taken from the slaves file.

```
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>rm</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>node3</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>node4</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
```
5.7 Sync the Configuration Files
Copy to node2, node3, node4:

```
scp /home/hadoop-2.5.1/etc/hadoop/* root@node2:/home/hadoop-2.5.1/etc/hadoop
scp /home/hadoop-2.5.1/etc/hadoop/* root@node3:/home/hadoop-2.5.1/etc/hadoop
scp /home/hadoop-2.5.1/etc/hadoop/* root@node4:/home/hadoop-2.5.1/etc/hadoop
```
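The three scp lines can be generated with a loop. A dry-run sketch that only prints the commands (review, then pipe to `sh`):

```shell
# Print the per-node scp commands for pushing the Hadoop config out;
# nothing is executed here.
CONF_DIR=/home/hadoop-2.5.1/etc/hadoop
cmds=$(for n in node2 node3 node4; do
  echo "scp $CONF_DIR/* root@$n:$CONF_DIR"
done)
echo "$cmds"
```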
5.8 Environment Variables
On node1, node2, node3, node4:

```
vi ~/.bash_profile
export HADOOP_HOME=/home/hadoop-2.5.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source ~/.bash_profile
```
5.9 Startup
1. Start ZooKeeper on node1, node2, node3
On each in turn, from the zk bin directory:

```
zkServer.sh start
```

jps should show QuorumPeerMain.
2. Start the JournalNodes, required before formatting the NameNode
If this is a second, from-scratch reconfiguration, first delete /opt/hadoop and /opt/journal/data on node1, node2, node3, node4.
On node2, node3, node4:

```
cd /home/hadoop-2.5.1/sbin
./hadoop-daemon.sh start journalnode
```

Verify with jps that a JournalNode process is running.
3. Format one NameNode (node1)

```
cd /home/hadoop-2.5.1/bin
./hdfs namenode -format
```

Check the printed log and confirm files were created in the working directory.
4. Sync this NameNode's metadata to the other one (node2); the NameNode being copied from (node1) must be started first:

```
cd ../sbin
./hadoop-daemon.sh start namenode
```

Check the log:

```
cd ../logs
tail -n50 hadoop-root-namenode
```
5. Pull the metadata over
On the NameNode that was not formatted (node2):

```
cd /home/hadoop-2.5.1/bin
./hdfs namenode -bootstrapStandby
```

Confirm the files now exist on node2.
6. Stop all services from node1

```
cd /home/hadoop-2.5.1/sbin
./stop-dfs.sh
```
7. Initialize ZKFC (ZooKeeper must be running); run on either NameNode:

```
cd /home/hadoop-2.5.1/bin
./hdfs zkfc -formatZK
```
8. Start everything

```
cd /home/hadoop-2.5.1/sbin
./start-dfs.sh
./start-yarn.sh
```

(or use start-all.sh)

Verify with jps (ResourceManager, NodeManager) and at node1:8088.
In Hadoop 2.x the ResourceManagers must be started manually on node3 and node4:

```
yarn-daemon.sh start resourcemanager
yarn-daemon.sh stop resourcemanager
```
9. Verify and Test
Run jps on each node.
HDFS web UI: http://node1:50070 and http://node2:50070 (standby)
RM web UI: http://node3:8088 and http://node4:8088
Upload a file:

```
cd /home/hadoop-2.5.1/bin
./hdfs dfs -mkdir -p /usr/file
./hdfs dfs -put /usr/local/jdk /usr/file
```

Then stop one RM and observe the effect; stop one NameNode and observe the failover.
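A checklist of the jps processes to expect per node can be derived from the role table in section 2. A sketch; the role-to-process mapping (zk=QuorumPeerMain, zkfc=DFSZKFailoverController, jn=JournalNode, applimanager=NodeManager) is the standard Hadoop/ZooKeeper naming, assumed here:

```shell
# Expected jps process names per node, from the section 2 role table.
# Compare against the actual `jps` output on each node.
expected() {
  case "$1" in
    node1) echo "NameNode QuorumPeerMain DFSZKFailoverController" ;;
    node2) echo "NameNode DataNode QuorumPeerMain DFSZKFailoverController JournalNode NodeManager" ;;
    node3) echo "DataNode QuorumPeerMain JournalNode ResourceManager NodeManager" ;;
    node4) echo "DataNode JournalNode ResourceManager NodeManager" ;;
  esac
}
for n in node1 node2 node3 node4; do
  printf '%s: %s\n' "$n" "$(expected "$n")"
done
```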
10. Troubleshooting
1. Check the console output.
2. Check jps.
3. Check the log on the affected node.
4. Before reformatting, delete the Hadoop working directory (/opt/hadoop) and the JournalNode working directory (/opt/journal/data).
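The cleanup in point 4 above has to happen on every node; a dry-run sketch that prints the delete commands (review carefully, then pipe to `sh` — the commands are destructive once executed):

```shell
# Print the per-node cleanup commands for the Hadoop and JournalNode
# working directories. Nothing is executed here.
cmds=$(for n in node1 node2 node3 node4; do
  echo "ssh root@$n 'rm -rf /opt/hadoop /opt/journal/data'"
done)
echo "$cmds"
```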