The containers after startup are shown below:
[Screenshot: the running containers]
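For reference, a minimal sketch of how such containers might be created, assuming an Ubuntu image and a host directory mounted as the shared /data (neither is specified in this section):
docker run -dit --name master  --hostname master  -v /data:/data ubuntu:16.04 /bin/bash
docker run -dit --name client1 --hostname client1 -v /data:/data ubuntu:16.04 /bin/bash
docker run -dit --name client2 --hostname client2 -v /data:/data ubuntu:16.04 /bin/bash
docker ps    # lists the three running containers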
Install the JDK
Copy jdk1.7 into the /data directory and extract it there. This is where the advantage of using Docker to build a Hadoop learning environment becomes apparent: the directory is visible to all the containers.
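A minimal extraction step, assuming the archive is named jdk-7u80-linux-x64.tar.gz (the exact filename is not given in the article):
cd /data
tar -zxvf jdk-7u80-linux-x64.tar.gz    # yields /data/jdk1.7.0_80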
Configure the Java environment variables in every container, in either ~/.bashrc or /etc/profile:
export JAVA_HOME=/data/jdk1.7.0_80
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
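After saving, reload the file and verify the JDK is found:
source /etc/profile    # or: source ~/.bashrc, whichever file was edited
java -version          # should report java version "1.7.0_80"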
Environment setup
apt-get install ntp
apt-get install ssh
# Start the ssh service
service ssh start
sudo passwd root
# enter a new root password when prompted (123 is used here)
vim /etc/ssh/sshd_config
Replace
PermitRootLogin without-password
with
PermitRootLogin yes
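Equivalently, the replacement can be scripted (a sketch assuming the line in sshd_config reads exactly PermitRootLogin without-password):
sed -i 's/^PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config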
Restart the ssh service:
service ssh restart
Configure passwordless SSH access
Run on master:
ssh-keygen
ssh-copy-id root@master
ssh-copy-id root@client1
ssh-copy-id root@client2
Then run ssh root@client1 to test whether passwordless access works.
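Note that ssh-copy-id and ssh above rely on the hostnames master, client1, and client2 resolving to the containers' IP addresses. If they do not resolve, each container needs /etc/hosts entries along these lines (the addresses below are placeholders; use the actual container IPs):
172.17.0.2 master
172.17.0.3 client1
172.17.0.4 client2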
Hadoop configuration
Extract hadoop-2.7.3 under /data.
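For example, assuming the standard Apache release tarball hadoop-2.7.3.tar.gz has been copied to /data:
cd /data
tar -zxvf hadoop-2.7.3.tar.gz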
Go into hadoop-2.7.3/etc/hadoop/ and edit the configuration files.
In hadoop-env.sh:
export JAVA_HOME=/data/jdk1.7.0_80
In core-site.xml:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop-2.7.3/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
In hdfs-site.xml:
<property>
  <name>dfs.name.dir</name>
  <value>/data/hadoop-2.7.3/dfs/name</value>
  <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/data/hadoop-2.7.3/dfs/data</value>
  <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
In mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
    <description>Host or IP and port of JobTracker.</description>
  </property>
</configuration>
In yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master:8088</value>
</property>
Set JAVA_HOME in yarn-env.sh as well:
export JAVA_HOME=/data/jdk1.7.0_80
In the slaves file, list the worker nodes:
client1
client2
vim /etc/profile
export HADOOP_HOME=/data/hadoop-2.7.3
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
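Reload the profile and check that Hadoop is on the PATH:
source /etc/profile
hadoop version    # should print Hadoop 2.7.3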
Start the cluster
bin/hdfs namenode -format
sbin/start-all.sh
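To confirm the daemons came up, run jps on each node. With the roles configured above, master should show NameNode, SecondaryNameNode, and ResourceManager, while client1 and client2 should show DataNode and NodeManager:
jps    # run on master, then ssh to each worker and run jps there as well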
Visit port 50070:
http://master:50070
When the page comes up, check whether all the DataNodes have started successfully.
If they have, the cluster is up.
Otherwise, start the DataNode manually on the affected worker (for example, on client1):
sbin/hadoop-daemon.sh start datanode
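If the DataNode still will not start, its log on that node usually explains why; it sits under the Hadoop logs directory with a name like hadoop-&lt;user&gt;-datanode-&lt;hostname&gt;.log (a common cause is a stale clusterID left in dfs.data.dir after re-formatting the NameNode):
tail -n 50 /data/hadoop-2.7.3/logs/hadoop-root-datanode-client1.log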
Author: 张晓天a
Link: https://www.jianshu.com/p/781db0147ed9