There are three ways to start and stop the daemons in Hadoop:
1. start-all.sh & stop-all.sh : to start/stop all the daemons on all the nodes from the master machine at once.
2. a) start-dfs.sh, stop-dfs.sh : to start/stop the HDFS daemons separately on all the nodes from the master machine.
b) start-yarn.sh, stop-yarn.sh : to start/stop the YARN daemons separately on all the nodes from the master machine.
3. a) hadoop-daemon.sh start namenode/datanode : to start a NameNode/DataNode on an individual machine manually (e.g. this can be used when a new DataNode is added and needs to be started on that particular machine).
b) yarn-daemon.sh start resourcemanager : to start a YARN daemon on an individual machine manually.
start-dfs.sh/stop-dfs.sh and start-yarn.sh/stop-yarn.sh are preferred over start-all.sh & stop-all.sh, since they let you manage the HDFS and YARN layers independently.
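The three options above can be sketched as the following commands. This is a minimal illustration assuming the commands are run on the master node and that $HADOOP_HOME/sbin is on the PATH; adjust for your installation (in Hadoop 3.x the per-daemon scripts are superseded by `hdfs --daemon` and `yarn --daemon`).

```shell
# Option 1: everything at once (deprecated in recent releases)
start-all.sh
stop-all.sh

# Option 2: HDFS and YARN daemons separately (preferred)
start-dfs.sh      # NameNode, DataNodes, SecondaryNameNode
start-yarn.sh     # ResourceManager, NodeManagers
stop-yarn.sh
stop-dfs.sh

# Option 3: a single daemon on one machine,
# e.g. after adding a new DataNode to the cluster
hadoop-daemon.sh start datanode
yarn-daemon.sh start resourcemanager
```

Note that the per-daemon scripts take an explicit start/stop subcommand, while the cluster-wide scripts do not.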