
8.1 Testing Uber Mode

By default, every Task starts its own JVM. When the amount of data each Task processes is small, the Tasks of a single Job can instead run inside one shared JVM, saving the per-Task JVM start-up cost.

(1) With uber mode disabled (the default), upload several small files to /input and run the WordCount program.
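If /input does not exist yet, a handful of small files can be created and uploaded along these lines (the file names here are only an illustration):

[Tom@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir -p /input
[Tom@hadoop102 hadoop-3.1.3]$ echo "hello hadoop" > /tmp/word1.txt
[Tom@hadoop102 hadoop-3.1.3]$ echo "hello mapreduce" > /tmp/word2.txt
[Tom@hadoop102 hadoop-3.1.3]$ hadoop fs -put /tmp/word1.txt /tmp/word2.txt /input

Then run WordCount: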

[Tom@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output2

(2) Observe the console output

2021-06-26 16:18:07,607 INFO mapreduce.Job: Job job_1613281510851_0002 running in uber mode : false

(3) Observe the Yarn web UI at http://hadoop103:8088/cluster


(4) To enable uber mode, add the following to mapred-site.xml

<!-- Enable uber mode (disabled by default) -->
<property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
</property>

<!-- Maximum number of MapTasks allowed in uber mode; may be lowered but not raised -->
<property>
    <name>mapreduce.job.ubertask.maxmaps</name>
    <value>9</value>
</property>

<!-- Maximum number of ReduceTasks allowed in uber mode; may be lowered but not raised -->
<property>
    <name>mapreduce.job.ubertask.maxreduces</name>
    <value>1</value>
</property>

<!-- Maximum input size allowed in uber mode; defaults to dfs.blocksize and may only be lowered (left empty here to keep the default) -->
<property>
    <name>mapreduce.job.ubertask.maxbytes</name>
    <value></value>
</property>
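A job only runs in uber mode when it fits within all three limits above (map count, reduce count, and input size). For a one-off test, the switch can also be passed per job on the command line instead of editing mapred-site.xml; this works because the bundled example programs accept generic options (the /output3 path is just an illustration):

[Tom@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount -Dmapreduce.job.ubertask.enable=true /input /output3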

(5) Distribute the configuration

[Tom@hadoop102 hadoop]$ xsync mapred-site.xml
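xsync is not a stock Hadoop command but this tutorial series' own distribution script. A minimal sketch of what it does, assuming the three hosts hadoop102/103/104 and that it is run from the directory containing the files:

#!/bin/bash
# Minimal xsync sketch: copy each argument to the same directory on the other nodes
for host in hadoop103 hadoop104; do
    for file in "$@"; do
        rsync -av "$file" "$host:$PWD/"
    done
done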

(6) Run the WordCount program again

[Tom@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output2

(7) Observe the console output

2021-06-27 16:28:36,198 INFO mapreduce.Job: Job job_1613281510851_0003 running in uber mode : true

(8) Observe the Yarn web UI at http://hadoop103:8088/cluster again


8.2 Testing MapReduce Compute Performance

Use the Sort example program to benchmark MapReduce. Note: avoid running this benchmark on a virtual machine with less than 150 GB of free disk.

(1) Use RandomWriter to generate random data: 10 Map tasks run per node, and each Map produces roughly 1 GB of binary random data

[Tom@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar randomwriter random-data
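To adjust how much data is generated, RandomWriter reads its knobs from job properties. The property names below are recalled from the RandomWriter example's source; verify them against your Hadoop version before relying on them:

[Tom@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar randomwriter -Dmapreduce.randomwriter.mapsperhost=10 -Dmapreduce.randomwriter.bytespermap=1073741824 random-data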

(2) Run the Sort program

[Tom@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar sort random-data sorted-data

(3) Verify that the data really is sorted

[Tom@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar testmapredsort -sortInput random-data -sortOutput sorted-data
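Given the disk-space warning above, it is worth deleting the generated input and output once the check passes, for example:

[Tom@hadoop102 mapreduce]$ hadoop fs -rm -r -skipTrash random-data sorted-data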
8.3 Enterprise Tuning Case Study

8.3.1 Requirements

(1) Requirement: count how many times each word occurs in 1 GB of data. Hardware: 3 servers, each with 4 GB of memory and 4 CPU cores (4 threads).

(2) Analysis: 1 GB / 128 MB block size = 8 MapTasks, plus 1 ReduceTask and 1 MRAppMaster, i.e. about 10 tasks in total; spread across 3 nodes, that is roughly 3-4 tasks per node (e.g. 4 + 3 + 3).

8.3.2 HDFS Parameter Tuning

(1) Modify hadoop-env.sh

export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Xmx1024m"
export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m"
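Note the space before -Xmx: the logger setting and the heap size are two separate JVM flags. After restarting HDFS, the effective heap can be checked from the daemon's JVM arguments, for example:

[Tom@hadoop102 hadoop-3.1.3]$ jps -v | grep -w NameNode    # -Xmx1024m should appear among the listed JVM arguments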

(2) Modify hdfs-site.xml

<!-- Size of the NameNode RPC handler thread pool; default is 10 -->
<property>
    <name>dfs.namenode.handler.count</name>
    <value>21</value>
</property>
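The value 21 follows a commonly cited rule of thumb, dfs.namenode.handler.count = 20 × ln(cluster size). For this 3-node cluster (assuming Python is available for the arithmetic):

[Tom@hadoop102 hadoop]$ python -c 'import math; print(int(20 * math.log(3)))'
21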

(3) Modify core-site.xml

<!-- Keep deleted files in the trash for 60 minutes -->
<property>
    <name>fs.trash.interval</name>
    <value>60</value>
</property>
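With this setting, a file removed via hadoop fs -rm is moved to the per-user trash directory rather than deleted immediately, and can be restored within the 60-minute window (the file name below is only an illustration):

[Tom@hadoop102 hadoop]$ hadoop fs -rm /input/word1.txt
[Tom@hadoop102 hadoop]$ hadoop fs -mv /user/Tom/.Trash/Current/input/word1.txt /input/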

(4) Distribute the configuration

[Tom@hadoop102 hadoop]$ xsync hadoop-env.sh hdfs-site.xml core-site.xml
8.3.3 MapReduce Parameter Tuning

(1) Modify mapred-site.xml

<!-- Circular (sort) buffer size; default 100 MB -->
<property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>100</value>
</property>

<!-- Spill threshold of the circular buffer; default 0.8 -->
<property>
    <name>mapreduce.map.sort.spill.percent</name>
    <value>0.80</value>
</property>

<!-- Number of spill files merged at once; default 10 -->
<property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>10</value>
</property>

<!-- MapTask memory; default 1 GB. The MapTask heap size (mapreduce.map.java.opts) follows this value by default -->
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>-1</value>
    <description>The amount of memory to request from the scheduler for each map task. If this is not specified or is non-positive, it is inferred from mapreduce.map.java.opts and mapreduce.job.heap.memory-mb.ratio. If java-opts are also not specified, we set it to 1024.
    </description>
</property>

<!-- MapTask CPU vcores; default 1 -->
<property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
</property>

<!-- Maximum MapTask retries on failure; default 4 -->
<property>
    <name>mapreduce.map.maxattempts</name>
    <value>4</value>
</property>

<!-- Number of parallel copiers each Reduce uses to fetch map output; default 5 -->
<property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>5</value>
</property>

<!-- Fraction of Reduce memory used for the shuffle buffer; default 0.7 -->
<property>
    <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
    <value>0.70</value>
</property>

<!-- Buffer usage fraction at which spilling to disk starts; default 0.66 -->
<property>
    <name>mapreduce.reduce.shuffle.merge.percent</name>
    <value>0.66</value>
</property>

<!-- ReduceTask memory; default 1 GB. The ReduceTask heap size (mapreduce.reduce.java.opts) follows this value by default -->
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>-1</value>
    <description>The amount of memory to request from the scheduler for each reduce task. If this is not specified or is non-positive, it is inferred from mapreduce.reduce.java.opts and mapreduce.job.heap.memory-mb.ratio. If java-opts are also not specified, we set it to 1024.
    </description>
</property>

<!-- ReduceTask CPU vcores; default 1, raised to 2 here -->
<property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>2</value>
</property>

<!-- Maximum ReduceTask retries on failure; default 4 -->
<property>
    <name>mapreduce.reduce.maxattempts</name>
    <value>4</value>
</property>

<!-- Fraction of MapTasks that must finish before ReduceTasks request resources; default 0.05 -->
<property>
    <name>mapreduce.job.reduce.slowstart.completedmaps</name>
    <value>0.05</value>
</property>

<!-- A task that neither reads input nor reports progress within this time (default 10 minutes) is forcibly killed -->
<property>
    <name>mapreduce.task.timeout</name>
    <value>600000</value>
</property>
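As the two description blocks above note, when mapreduce.map.memory.mb and mapreduce.reduce.memory.mb are left at -1, the container size is inferred from the java.opts heap and mapreduce.job.heap.memory-mb.ratio (default 0.8). If you prefer to pin the heap explicitly instead, a sketch (the 819m value is 0.8 × 1024 MB, rounded down):

<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx819m</value>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx819m</value>
</property>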

(2) Distribute the configuration

[Tom@hadoop102 hadoop]$ xsync mapred-site.xml
8.3.4 Yarn Parameter Tuning

(1) Modify yarn-site.xml as follows:

<!-- Choose the scheduler; Capacity Scheduler by default -->
<property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>

<!-- Threads for the ResourceManager to handle scheduler requests; default 50. Increase this when more than 50 tasks are submitted concurrently, but never beyond 3 nodes * 4 threads = 12 threads (in practice at most 8, leaving room for other processes) -->
<property>
    <description>Number of threads to handle scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.client.thread-count</name>
    <value>8</value>
</property>

<!-- Whether Yarn auto-detects hardware for its configuration; default false. Configure manually if the node runs many other applications; auto-detection is fine on a dedicated node -->
<property>
    <description>Enable auto-detection of node capabilities such as memory and CPU.</description>
    <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
    <value>false</value>
</property>

<!-- Whether to count logical processors (hyper-threads) as cores; default false, i.e. use physical cores -->
<property>
    <description>Flag to determine if logical processors (such as hyperthreads) should be counted as cores. Only applicable on Linux when yarn.nodemanager.resource.cpu-vcores is set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true.</description>
    <name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
    <value>false</value>
</property>

<!-- Multiplier converting physical cores to vcores; default 1.0 -->
<property>
    <description>Multiplier to determine how to convert physical cores to vcores. This value is used if yarn.nodemanager.resource.cpu-vcores is set to -1 (which implies auto-calculate vcores) and yarn.nodemanager.resource.detect-hardware-capabilities is set to true. The number of vcores will be calculated as number of CPUs * multiplier.</description>
    <name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
    <value>1.0</value>
</property>

<!-- Memory available to the NodeManager; default 8 GB, lowered to 4 GB here -->
<property>
    <description>Amount of physical memory, in MB, that can be allocated for containers. If set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically calculated (in case of Windows and Linux). In other cases, the default is 8192 MB.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
</property>

<!-- NodeManager CPU vcores; default 8 when not auto-detected from hardware, lowered to 4 here -->
<property>
    <description>Number of vcores that can be allocated for containers. This is used by the RM scheduler when allocating resources for containers. This is not used to limit the number of CPUs used by YARN containers. If it is set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically determined from the hardware in case of Windows and Linux. In other cases, number of vcores is 8 by default.</description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>4</value>
</property>

<!-- Minimum container memory; default 1 GB -->
<property>
    <description>The minimum allocation for every container request at the RM in MBs. Memory requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have less memory than this value will be shut down by the resource manager.</description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>

<!-- Maximum container memory; default 8 GB, lowered to 2 GB here -->
<property>
    <description>The maximum allocation for every container request at the RM in MBs. Memory requests higher than this will throw an InvalidResourceRequestException.</description>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
</property>

<!-- Minimum container vcores; default 1 -->
<property>
    <description>The minimum allocation for every container request at the RM in terms of virtual CPU cores. Requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have fewer virtual cores than this value will be shut down by the resource manager.</description>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
</property>

<!-- Maximum container vcores; default 4, lowered to 2 here -->
<property>
    <description>The maximum allocation for every container request at the RM in terms of virtual CPU cores. Requests higher than this will throw an InvalidResourceRequestException.</description>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
</property>

<!-- Virtual-memory check; on by default, turned off here -->
<property>
    <description>Whether virtual memory limits will be enforced for containers.</description>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

<!-- Ratio of virtual memory to physical memory; default 2.1 -->
<property>
    <description>Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio.</description>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>

(2) Distribute the configuration

[Tom@hadoop102 hadoop]$ xsync yarn-site.xml
8.3.5 Running the Program

(1) Restart the cluster

[Tom@hadoop103 hadoop-3.1.3]$ sbin/stop-yarn.sh
[Tom@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh
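Once Yarn is back up, the 4 GB / 4 vcore NodeManager capacity configured above can be verified from the command line (substitute a node ID printed by the first command):

[Tom@hadoop102 hadoop-3.1.3]$ yarn node -list -all
[Tom@hadoop102 hadoop-3.1.3]$ yarn node -status <node-id>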

(2) Run the WordCount program

[Tom@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output

(3) Observe the Yarn application page at http://hadoop103:8088/cluster/apps
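The same information is also available from the command line, which is handy when the web UI is unreachable:

[Tom@hadoop102 hadoop-3.1.3]$ yarn application -list -appStates ALL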
