Apache Hadoop 3.5.0 – Apache Hadoop YARN
Last Published: 2026-03-24 | Version: 3.5.0
Apache Hadoop YARN
The fundamental idea of YARN is to split up the functionalities of resource management and job scheduling/monitoring into separate daemons. The idea is to have a global ResourceManager (RM) and a per-application ApplicationMaster (AM). An application is either a single job or a DAG of jobs.
The ResourceManager and the NodeManager form the data-computation framework. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system. The NodeManager is the per-machine framework agent that is responsible for containers, monitoring their resource usage (CPU, memory, disk, network), and reporting the same to the ResourceManager/Scheduler.
The per-application ApplicationMaster is, in effect, a framework-specific library tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
The ResourceManager has two main components: Scheduler and ApplicationsManager.
The Scheduler is responsible for allocating resources to the various running applications, subject to familiar constraints of capacities, queues, etc. The Scheduler is a pure scheduler in the sense that it performs no monitoring or tracking of status for the application. It also offers no guarantees about restarting tasks that fail, whether due to application failure or hardware failures. The Scheduler performs its scheduling function based on the resource requirements of the applications; it does so based on the abstract notion of a resource Container, which incorporates elements such as memory, CPU, disk, and network.
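The multi-dimensional Container abstraction can be illustrated with a toy sketch (Python is used here purely for illustration; YARN's real `Resource`/`Container` types live in the Java API, and the names below are hypothetical):

```python
from dataclasses import dataclass

# Illustrative sketch only: a "resource" with two of the dimensions the text
# mentions (memory and CPU). A request fits a node only if EVERY dimension fits.

@dataclass(frozen=True)
class Resource:
    memory_mb: int
    vcores: int

    def fits_within(self, capacity: "Resource") -> bool:
        return (self.memory_mb <= capacity.memory_mb
                and self.vcores <= capacity.vcores)

    def subtract(self, other: "Resource") -> "Resource":
        return Resource(self.memory_mb - other.memory_mb,
                        self.vcores - other.vcores)

# A NodeManager advertises its capacity; a scheduler hands out containers
# until the next request no longer fits in the remaining capacity.
node_capacity = Resource(memory_mb=8192, vcores=8)
request = Resource(memory_mb=2048, vcores=2)

granted = 0
available = node_capacity
while request.fits_within(available):
    available = available.subtract(request)
    granted += 1

print(granted)  # 4 containers of <2048 MB, 2 vcores> fit on an 8 GB / 8-vcore node
```

The key point the sketch shows is that allocation is vector-valued: a node can run out of vcores while still having free memory, or vice versa.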
The Scheduler has a pluggable policy that is responsible for partitioning the cluster resources among the various queues, applications, etc. The current schedulers, such as the CapacityScheduler and the FairScheduler, are examples of such plug-ins.
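The difference between the two policy families can be sketched in a few lines (a deliberately simplified model; the real CapacityScheduler and FairScheduler support hierarchical queues, elasticity, weights, and user limits, none of which appear here):

```python
# Hypothetical sketch: capacity-style vs. fair-style partitioning of a
# single resource dimension (memory) across queues or applications.

def partition_by_capacity(total_memory_mb, queue_capacities):
    """Capacity-style: split resources by each queue's configured capacity (%)."""
    return {queue: total_memory_mb * pct // 100
            for queue, pct in queue_capacities.items()}

def fair_share(total_memory_mb, running_apps):
    """Fair-style (simplest form): each running app gets an equal share."""
    return total_memory_mb // len(running_apps) if running_apps else 0

# A 100 GB cluster split 70/30 between two queues:
shares = partition_by_capacity(102400, {"prod": 70, "dev": 30})
print(shares)  # {'prod': 71680, 'dev': 30720}

# The same cluster divided fairly among four running applications:
print(fair_share(102400, ["app1", "app2", "app3", "app4"]))  # 25600
```

Capacity-style partitioning is fixed by configuration; fair-style shares shrink and grow as applications arrive and finish.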
The ApplicationsManager is responsible for accepting job submissions, negotiating the first container for executing the application-specific ApplicationMaster, and providing the service for restarting the ApplicationMaster container on failure. The per-application ApplicationMaster has the responsibility of negotiating appropriate resource containers from the Scheduler, tracking their status, and monitoring progress.
MapReduce in hadoop-2.x maintains API compatibility with the previous stable release (hadoop-1.x). This means that all MapReduce jobs should still run unchanged on top of YARN with just a recompile.
YARN supports the notion of resource reservation via the ReservationSystem, a component that allows users to specify a profile of resources over time and temporal constraints (e.g., deadlines), and to reserve resources to ensure the predictable execution of important jobs. The ReservationSystem tracks resources over time, performs admission control for reservations, and dynamically instructs the underlying scheduler to ensure that the reservation is fulfilled.
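Admission control over time can be sketched as follows (a toy model under simplifying assumptions: a single resource dimension, integer time steps, and no sharing policy; the real ReservationSystem runs a placement agent over a Plan with far richer semantics):

```python
# Hypothetical sketch: admit a new reservation only if, at every time step,
# the sum of all overlapping reservations stays within cluster capacity.

def admit(existing, new, capacity):
    """existing: list of (start, end, resources); new: one such tuple.
    Intervals are [start, end) with integer time steps."""
    start, end, res = new
    for t in range(start, end):
        load = res + sum(r for (s, e, r) in existing if s <= t < e)
        if load > capacity:
            return False  # would exceed capacity at time t: reject
    return True

plan = [(0, 10, 60), (5, 15, 30)]        # two previously accepted reservations
print(admit(plan, (8, 12, 20), 100))     # False: at t=8..9 the load is 60+30+20
print(admit(plan, (10, 12, 20), 100))    # True: only overlaps the second one
```

A rejected reservation fails fast at submission time, which is exactly what makes execution of the admitted jobs predictable.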
In order to scale YARN beyond a few thousand nodes, YARN supports the notion of federation via the YARN Federation feature. Federation allows multiple YARN (sub-)clusters to be transparently wired together so that they appear as a single massive cluster. This can be used to achieve larger scale, and/or to allow multiple independent clusters to be used together for very large jobs, or for tenants who have capacity across all of them.
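As a minimal illustration, federation is switched on per sub-cluster in `yarn-site.xml`; the `subcluster1` id below is an example value, and the full set of required properties (state store, policy configuration, AMRMProxy) is covered in the YARN Federation documentation:

```xml
<property>
  <name>yarn.federation.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Identifies this sub-cluster within the federation (example value) -->
  <name>yarn.resourcemanager.cluster-id</name>
  <value>subcluster1</value>
</property>
```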