Hadoop YARN Case Studies


Contents

Part 1: Core YARN Parameter Configuration for Production

Part 2: Capacity Scheduler Multi-Queue Submission

Submitting Jobs to the hive Queue

Part 1: Core YARN Parameter Configuration for Production

1) Requirement: count the occurrences of each word in 1 GB of data. Hardware: 3 servers, each with 4 GB of memory and a 4-core, 4-thread CPU.

2) Analysis:

1 GB / 128 MB = 8 MapTasks; plus 1 ReduceTask and 1 MrAppMaster, for 10 containers in total.

On average, 10 containers / 3 nodes ≈ 3–4 tasks per node (distributed as 4, 3, 3).
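The arithmetic above can be sketched in a few lines (an illustration only; the 128 MB figure is the default HDFS block size, which determines how many input splits, and thus MapTasks, the job gets):

```python
import math

BLOCK_SIZE_MB = 128   # default HDFS block size: one MapTask per input split
INPUT_MB = 1024       # 1 GB of input data
NODES = 3             # servers in the cluster

map_tasks = math.ceil(INPUT_MB / BLOCK_SIZE_MB)   # 8 MapTasks
containers = map_tasks + 1 + 1                    # + 1 ReduceTask + 1 MrAppMaster
per_node = math.ceil(containers / NODES)          # load on the busiest node

print(containers, per_node)  # → 10 4
```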

3) Modify the following parameters in yarn-site.xml:


<property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>

<property>
    <description>Number of threads to handle scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.client.thread-count</name>
    <value>8</value>
</property>

<property>
    <description>
      Enable auto-detection of node capabilities such as memory and CPU.
    </description>
    <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
    <value>false</value>
</property>

<property>
    <description>
      Flag to determine if logical processors (such as hyperthreads)
      should be counted as cores. Only applicable on Linux when
      yarn.nodemanager.resource.cpu-vcores is set to -1 and
      yarn.nodemanager.resource.detect-hardware-capabilities is true.
    </description>
    <name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
    <value>false</value>
</property>

<property>
    <description>
      Multiplier to determine how to convert physical cores to vcores.
      This value is used if yarn.nodemanager.resource.cpu-vcores is set
      to -1 (which implies auto-calculate vcores) and
      yarn.nodemanager.resource.detect-hardware-capabilities is set to
      true. The number of vcores will be calculated as
      number of CPUs * multiplier.
    </description>
    <name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
    <value>1.0</value>
</property>

<property>
    <description>
      Amount of physical memory, in MB, that can be allocated for
      containers. If set to -1 and
      yarn.nodemanager.resource.detect-hardware-capabilities is true,
      it is automatically calculated (in case of Windows and Linux).
      In other cases, the default is 8192MB.
    </description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
</property>

<property>
    <description>
      Number of vcores that can be allocated for containers. This is
      used by the RM scheduler when allocating resources for
      containers. This is not used to limit the number of CPUs used by
      YARN containers. If it is set to -1 and
      yarn.nodemanager.resource.detect-hardware-capabilities is true,
      it is automatically determined from the hardware in case of
      Windows and Linux. In other cases, number of vcores is 8 by
      default.
    </description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>4</value>
</property>

<property>
    <description>
      The minimum allocation for every container request at the RM in
      MBs. Memory requests lower than this will be set to the value of
      this property. Additionally, a node manager that is configured
      to have less memory than this value will be shut down by the
      resource manager.
    </description>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>

<property>
    <description>
      The maximum allocation for every container request at the RM in
      MBs. Memory requests higher than this will throw an
      InvalidResourceRequestException.
    </description>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
</property>

<property>
    <description>
      The minimum allocation for every container request at the RM in
      terms of virtual CPU cores. Requests lower than this will be set
      to the value of this property. Additionally, a node manager that
      is configured to have fewer virtual cores than this value will
      be shut down by the resource manager.
    </description>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
</property>

<property>
    <description>
      The maximum allocation for every container request at the RM in
      terms of virtual CPU cores. Requests higher than this will throw
      an InvalidResourceRequestException.
    </description>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
</property>

<property>
    <description>
      Whether virtual memory limits will be enforced for containers.
    </description>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

<property>
    <description>
      Ratio between virtual memory to physical memory when setting
      memory limits for containers. Container allocations are
      expressed in terms of physical memory, and virtual memory usage
      is allowed to exceed this allocation by this ratio.
    </description>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>
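As a sanity check on the values above (a sketch, not part of the tutorial): the per-container scheduler limits must fit inside a NodeManager's resources, and with 4096 MB per node and a 1024 MB minimum allocation, each node can host at most 4 concurrent minimum-size containers, which is consistent with the 3–4 tasks per node estimated earlier:

```python
node_memory_mb = 4096            # yarn.nodemanager.resource.memory-mb
node_vcores = 4                  # yarn.nodemanager.resource.cpu-vcores
min_mb, max_mb = 1024, 2048      # yarn.scheduler.{minimum,maximum}-allocation-mb
min_vc, max_vc = 1, 2            # yarn.scheduler.{minimum,maximum}-allocation-vcores

# A container request is clamped between the scheduler limits,
# and the limits themselves must fit on a single node.
assert min_mb <= max_mb <= node_memory_mb
assert min_vc <= max_vc <= node_vcores

# Upper bound on concurrent containers per node (memory- or vcore-bound)
max_containers = min(node_memory_mb // min_mb, node_vcores // min_vc)
print(max_containers)  # → 4
```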

4) Distribute the configuration.

Note: if the hardware resources of the cluster nodes are not identical, configure each NodeManager individually.

5) Restart the cluster.

6) Run the WordCount example:

[example@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output

7) Watch the job run in the YARN web UI:

http://hadoop103:8088/cluster/apps

Part 2: Capacity Scheduler Multi-Queue Submission

Requirement: create a new hive queue. The default queue gets 40% of total memory with a maximum capacity of 60%; the hive queue gets 60% of total memory with a maximum capacity of 80%.
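In absolute terms (a sketch, assuming the 3-node × 4096 MB cluster from the first case, i.e. 12 288 MB schedulable memory in total), the percentages translate to:

```python
cluster_mb = 3 * 4096  # three NodeManagers x yarn.nodemanager.resource.memory-mb

queues = {
    "default": (0.40, 0.60),   # (capacity, maximum-capacity)
    "hive":    (0.60, 0.80),
}

for name, (cap, max_cap) in queues.items():
    print(f"{name}: guaranteed {cluster_mb * cap:.0f} MB, "
          f"may grow to {cluster_mb * max_cap:.0f} MB")

# Guaranteed capacities of sibling queues must sum to 100%
assert abs(sum(cap for cap, _ in queues.values()) - 1.0) < 1e-9
```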

1) Configure capacity-scheduler.xml as follows:

(1) Modify the following properties:


<property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,hive</value>
    <description>
      The queues at the this level (root is the root queue).
    </description>
</property>

<property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>40</value>
</property>

<property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>60</value>
</property>

(2) Add the required properties for the new queue:


<property>
    <!-- Guaranteed share of cluster resources for the hive queue -->
    <name>yarn.scheduler.capacity.root.hive.capacity</name>
    <value>60</value>
</property>

<property>
    <!-- Multiple of the queue capacity a single user may consume (1 = at most the queue's capacity) -->
    <name>yarn.scheduler.capacity.root.hive.user-limit-factor</name>
    <value>1</value>
</property>

<property>
    <!-- Maximum share the queue may grow to when the cluster has idle resources -->
    <name>yarn.scheduler.capacity.root.hive.maximum-capacity</name>
    <value>80</value>
</property>

<property>
    <!-- Queue state: RUNNING or STOPPED -->
    <name>yarn.scheduler.capacity.root.hive.state</name>
    <value>RUNNING</value>
</property>

<property>
    <!-- Who may submit applications to the queue (* = everyone) -->
    <name>yarn.scheduler.capacity.root.hive.acl_submit_applications</name>
    <value>*</value>
</property>

<property>
    <!-- Who may administer the queue -->
    <name>yarn.scheduler.capacity.root.hive.acl_administer_queue</name>
    <value>*</value>
</property>

<property>
    <!-- Who may submit applications at the maximum priority -->
    <name>yarn.scheduler.capacity.root.hive.acl_application_max_priority</name>
    <value>*</value>
</property>

<property>
    <!-- Maximum lifetime (in seconds) of an application in the queue; -1 disables the limit -->
    <name>yarn.scheduler.capacity.root.hive.maximum-application-lifetime</name>
    <value>-1</value>
</property>

<property>
    <!-- Default lifetime (in seconds) of an application in the queue; -1 means no default -->
    <name>yarn.scheduler.capacity.root.hive.default-application-lifetime</name>
    <value>-1</value>
</property>

2) Distribute the configuration file.

3) Restart YARN, or run yarn rmadmin -refreshQueues to refresh the queues; the two queues will then be visible:

[example@hadoop102 hadoop-3.1.3]$ yarn rmadmin -refreshQueues
Submitting Jobs to the hive Queue

1) Using hadoop jar:

[example@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount -D mapreduce.job.queuename=hive /input /output

2) Submitting from a packaged jar

By default, jobs are submitted to the default queue. To submit a job to a different queue, declare the queue in the Driver:

public class WcDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        Configuration conf = new Configuration();

        // Submit this job to the hive queue instead of the default queue
        conf.set("mapreduce.job.queuename", "hive");

        // ... remaining job setup (Job.getInstance(conf), mapper/reducer
        // classes, input/output paths) is unchanged
    }
}

Source: 内存溢出 (outofmemory.cn). Original article: http://outofmemory.cn/zaji/5676367.html
