Environment
1. Three hosts:
a1 192.168.9.1 (master)
a2 192.168.9.2 (slave1)
a3 192.168.9.3 (slave2)
Add these hostname-to-IP mappings to /etc/hosts on all three machines.
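For reference, a minimal sketch of the entries to append (assuming the stock loopback line stays in place):
[root@a1 ~]# vi /etc/hosts
192.168.9.1 a1
192.168.9.2 a2
192.168.9.3 a3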
2. Create a hadoop user on all 3 machines
hadoop password: 123
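A minimal sketch of creating the user, run as root on each node (passwd prompts for the password interactively; 123 is for this lab only, pick something stronger in practice):
[root@a1 ~]# useradd hadoop
[root@a1 ~]# passwd hadoop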
3. Install the JDK (on all 3 machines)
[root@a1 ~]# chmod 777 jdk-6u38-ea-bin-b04-linux-i586-31_oct_2012-rpm.bin
[root@a1 ~]# ./jdk-6u38-ea-bin-b04-linux-i586-31_oct_2012-rpm.bin
[root@a1 ~]# cd /usr/java/jdk1.6.0_38/
[root@a1 jdk1.6.0_38]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_38
export JAVA_BIN=/usr/java/jdk1.6.0_38/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
Reboot the system, or run source /etc/profile, to apply the changes.
[root@a1 ~]# /usr/java/jdk1.6.0_38/bin/java -version
java version "1.6.0_38-ea"
Java(TM) SE Runtime Environment (build 1.6.0_38-ea-b04)
Java HotSpot(TM) Client VM (build 20.13-b02, mixed mode, sharing)
4. Install Hadoop (on all 3 machines)
[root@a1 ~]# tar zxvf hadoop-0.20.2-cdh3u5.tar.gz -C /usr/local
Edit the Hadoop configuration files
[root@a1 ~]# cd /usr/local/hadoop-0.20.2-cdh3u5/conf/
[root@a1 conf]# vi hadoop-env.sh
Add:
export JAVA_HOME=/usr/java/jdk1.6.0_38
Set the NameNode URI (hostname and port)
[root@a1 conf]# vi core-site.xml
Add inside <configuration>:
<property>
  <name>fs.default.name</name>
  <value>hdfs://a1:9000</value>
</property>
Set the HDFS replication factor to 2 (it must not exceed the number of DataNodes, and we have two)
[root@a1 conf]# vi hdfs-site.xml
Add inside <configuration>:
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
Set the JobTracker address and port
[root@a1 conf]# vim mapred-site.xml
Add inside <configuration>:
<property>
  <name>mapred.job.tracker</name>
  <value>a1:9001</value>
</property>
[root@a1 conf]# vi masters
Set its contents to a1 (the master's hostname)
[root@a1 conf]# vi slaves
Set its contents to:
a2
a3
Copy the installation to the other two nodes
[root@a1 conf]# cd /usr/local/
[root@a1 local]# scp -r ./hadoop-0.20.2-cdh3u5/ a2:/usr/local/
[root@a1 local]# scp -r ./hadoop-0.20.2-cdh3u5/ a3:/usr/local/
On all nodes, change the owner and group of /usr/local/hadoop-0.20.2-cdh3u5 to hadoop, then switch to that user:
[root@a1 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R
[root@a2 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R
[root@a3 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R
[root@a1 ~]# su - hadoop
[root@a2 ~]# su - hadoop
[root@a3 ~]# su - hadoop
Generate an SSH key pair on every node
[hadoop@a1 ~]$ ssh-keygen -t rsa
[hadoop@a2 ~]$ ssh-keygen -t rsa
[hadoop@a3 ~]$ ssh-keygen -t rsa
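ssh-keygen prompts for a key file and passphrase; accept the default path and leave the passphrase empty so the Hadoop start scripts can log in unattended. To skip the prompts entirely, the options can be given explicitly (a sketch, equivalent to accepting the defaults):
[hadoop@a1 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Then distribute the public keys so every node trusts every other node: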
[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3
[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3
[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3
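Every node should now reach every other node without a password. A quick check from a1 (the very first connection may still ask you to confirm the host key):
[hadoop@a1 ~]$ ssh a2 hostname
[hadoop@a1 ~]$ ssh a3 hostname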
Format the NameNode
[hadoop@a1 ~]$ cd /usr/local/hadoop-0.20.2-cdh3u5/
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/hadoop namenode -format
Start the cluster
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/start-all.sh
Verify startup by checking the Java processes on every node
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ jps
8602 JobTracker
8364 NameNode
8527 SecondaryNameNode
8673 Jps
[hadoop@a2 hadoop-0.20.2-cdh3u5]$ jps
10806 Jps
10719 TaskTracker
10610 DataNode
[hadoop@a3 hadoop-0.20.2-cdh3u5]$ jps
7605 Jps
7515 TaskTracker
7405 DataNode
Check the HDFS status report; it should show 2 live DataNodes:
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/hadoop dfsadmin -report
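As a final smoke test, you can run the bundled pi example (a sketch; the examples jar name below matches the CDH3u5 tarball, adjust it if your layout differs):
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/hadoop jar hadoop-examples-0.20.2-cdh3u5.jar pi 2 10
The NameNode and JobTracker web UIs should also be reachable on their default ports, http://a1:50070 and http://a1:50030 respectively.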