Installing Hadoop 3.x on macOS

1. Install Java

2. SSH

First, enable remote login: open System Preferences -> Sharing, check "Remote Login" in the list on the left, and allow access for "All Users" on the right. Then set up a passwordless SSH key:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

Run ssh localhost to verify that you can now log in without a password.
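If you want to sanity-check the key setup itself, the same three steps can be rehearsed in a throwaway directory (the paths below are illustrative stand-ins, not your real ~/.ssh; ssh-keygen is assumed to be installed):

```shell
# Rehearse the key setup in a temp dir to illustrate what sshd expects:
# a key pair, the public key appended to authorized_keys, and 0600 perms.
tmp=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$tmp/id_rsa" -q        # empty-passphrase key pair
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"    # authorize the public key
chmod 0600 "$tmp/authorized_keys"                  # sshd rejects looser perms
ls -l "$tmp/authorized_keys"
```

The 0600 permission matters: sshd silently ignores an authorized_keys file that is writable by anyone else, which shows up as ssh localhost still asking for a password.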

3. Install Hadoop

brew install hadoop

4. Configuration

The configuration files all live under /usr/local/Cellar/hadoop/3.1.0/libexec/etc/hadoop (adjust the version number to match your installed version).

a) hadoop-env.sh

Run the following command to find out where Java is installed:

 /usr/libexec/java_home

You should see something like this:

/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home

Open the hadoop-env.sh file (under etc/hadoop/), find the line # export JAVA_HOME=, and change it as follows:

export JAVA_HOME={your java home directory}
  • Replace {your java home directory} with the Java path you found above, and remember to delete the leading # to uncomment the line. For example: export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home
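The manual edit above can also be scripted. A minimal sketch using GNU sed (the macOS built-in sed needs -i '' instead of -i), run here against a stand-in file rather than your real hadoop-env.sh:

```shell
# Create a stand-in hadoop-env.sh containing the commented-out line,
# then uncomment it and fill in the Java home path in one sed call.
tmp=$(mktemp -d)
printf '# export JAVA_HOME=\n' > "$tmp/hadoop-env.sh"
java_home=/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home
sed -i "s|^# export JAVA_HOME=.*|export JAVA_HOME=$java_home|" "$tmp/hadoop-env.sh"
cat "$tmp/hadoop-env.sh"
```

In practice you could substitute $(/usr/libexec/java_home) for the hard-coded path so the script tracks whatever JDK is current.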

b) core-site.xml

Open the core-site.xml file (under etc/hadoop/) and set the following:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/walle/Documents/hadoop_tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>

c) hdfs-site.xml

Open the hdfs-site.xml file (under etc/hadoop/) and set the following:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
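A hedged aside, not from the original post: with only the settings above, HDFS keeps its NameNode metadata and DataNode blocks under hadoop.tmp.dir. If you ever want to place them explicitly, the standard Hadoop 3.x property keys look like this (the paths shown are illustrative, reusing the hadoop.tmp.dir from core-site.xml):

```
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///Users/walle/Documents/hadoop_tmp/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///Users/walle/Documents/hadoop_tmp/dfs/data</value>
</property>
```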

d) mapred-site.xml

Open the mapred-site.xml file (under etc/hadoop/) and set the following:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
  • If the file only exists with a .xml.example suffix, rename it to .xml.
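The rename in the note above can be done for every such file at once. A sketch rehearsed in a temp dir (the loop is the point; the file names are stand-ins for your real config directory):

```shell
# Strip a trailing ".example" from any config file that has one.
tmp=$(mktemp -d)
touch "$tmp/mapred-site.xml.example" "$tmp/yarn-site.xml"   # stand-in files
for f in "$tmp"/*.xml.example; do
  [ -e "$f" ] && mv "$f" "${f%.example}"                    # drop the suffix
done
ls "$tmp"
```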

e) yarn-site.xml

Open the yarn-site.xml file (under etc/hadoop/) and set the following:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
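Before starting anything, it can save a restart cycle to confirm that each edited file is still well-formed XML. A minimal sketch assuming python3 is on your PATH (check_xml is a hypothetical helper, and the file written here is a stand-in for your real configs):

```shell
# check_xml: parse a file with Python's XML parser; fails on malformed XML.
check_xml() {
  python3 - "$1" <<'PY'
import sys, xml.etree.ElementTree as ET
ET.parse(sys.argv[1])          # raises (non-zero exit) if not well-formed
print(sys.argv[1], "OK")
PY
}

tmp=$(mktemp -d)
cat > "$tmp/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
check_xml "$tmp/core-site.xml"
```

A missing closing tag in any of the four files typically surfaces later as a cryptic stack trace from the daemons, so catching it here is cheaper.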

5. Run

Format the filesystem:

bin/hdfs namenode -format

Start the NameNode and DataNode:

sbin/start-dfs.sh

You should now be able to open the link below in a browser and see the NameNode's Overview page:

NameNode – http://localhost:9870

Create the HDFS directories required to run MapReduce jobs:

bin/hdfs dfs -mkdir /user
bin/hdfs dfs -mkdir /user/<username>
  • Replace <username> with your username, and remember to drop the angle brackets <>.

Start the ResourceManager and NodeManager:

sbin/start-yarn.sh

You should now be able to open the link below in a browser and see the All Applications page:

ResourceManager – http://localhost:8088

References:

https://www.jianshu.com/p/0e7f16469d87

http://www.waitingfy.com/archives/3975
