Getting Ready to Upgrade
The HDP Stack upgrade involves upgrading from HDP 2.5 to HDP 2.6 and adding the new HDP 2.6 services. These instructions change your configurations.
| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| You must run kinit before executing the commands in this guide as a particular user. | 
Hardware recommendations
Although there is no single hardware requirement for installing HDP, there are some basic guidelines. The HDP packages for a complete installation of HDP 2.6.0 consume about 6.5 GB of disk space.
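Before you begin, it can help to confirm that each node has enough free space for the new packages. The following is a minimal sketch, assuming the packages install under /usr/hdp (adjust the path and threshold for your environment):

```bash
#!/bin/bash
# Rough free-space check for the assumed HDP install root (/usr/hdp).
# HDP 2.6.0 packages need about 6.5 GB; require 7 GB for headroom.
REQUIRED_KB=$((7 * 1024 * 1024))
AVAIL_KB=$(df -Pk /usr/hdp | awk 'NR==2 {print $4}')
if [ "$AVAIL_KB" -lt "$REQUIRED_KB" ]; then
    echo "WARNING: only $((AVAIL_KB / 1024)) MB free under /usr/hdp" >&2
    exit 1
fi
echo "Sufficient disk space for the HDP 2.6 packages."
```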
The first step is to ensure you keep a backup copy of your HDP 2.5 configurations.
| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| The su commands in this section use keywords to represent the Service user. For example, "hdfs" is used to represent the HDFS Service user. If you are using another name for your Service users, substitute your Service user name in each of the su commands. | 
- Back up the HDP directories for any Hadoop components you have installed. The following is a list of all HDP configuration directories (a scripted backup sketch follows this list):
- /etc/hadoop/conf
- /etc/hbase/conf
- /etc/hive-hcatalog/conf
- /etc/hive-webhcat/conf
- /etc/accumulo/conf
- /etc/hive/conf
- /etc/pig/conf
- /etc/sqoop/conf
- /etc/flume/conf
- /etc/mahout/conf
- /etc/oozie/conf
- /etc/hue/conf
- /etc/knox/conf
- /etc/zookeeper/conf
- /etc/tez/conf
- /etc/storm/conf
- /etc/falcon/conf
- /etc/slider/conf
- /etc/ranger/admin/conf, /etc/ranger/usersync/conf (If Ranger is installed, also back up install.properties for all of the plugins, Ranger Admin, and Ranger Usersync.)
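The per-component backups above can be scripted. A minimal sketch, assuming a hypothetical destination of /tmp/hdp25-conf-backup and that only the directories present on a given node are archived:

```bash
#!/bin/bash
# Back up existing HDP 2.5 configuration directories into one timestamped tarball.
BACKUP_DIR=/tmp/hdp25-conf-backup            # assumed destination; change as needed
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"

CONF_DIRS="/etc/hadoop/conf /etc/hbase/conf /etc/hive-hcatalog/conf \
/etc/hive-webhcat/conf /etc/accumulo/conf /etc/hive/conf /etc/pig/conf \
/etc/sqoop/conf /etc/flume/conf /etc/mahout/conf /etc/oozie/conf \
/etc/hue/conf /etc/knox/conf /etc/zookeeper/conf /etc/tez/conf \
/etc/storm/conf /etc/falcon/conf /etc/slider/conf \
/etc/ranger/admin/conf /etc/ranger/usersync/conf"

EXISTING=""
for d in $CONF_DIRS; do
    [ -d "$d" ] && EXISTING="$EXISTING $d"   # skip components not installed here
done

# -h follows symlinks so the real conf files are archived, not just the links
tar -chzf "$BACKUP_DIR/hdp25-conf-$STAMP.tar.gz" $EXISTING
echo "Backed up:$EXISTING"
```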
- Optional: Back up your userlogs directories, ${mapred.local.dir}/userlogs.
 
- Oozie runs a periodic purge on the shared library directory. The purge can delete libraries that are needed by jobs that started before the upgrade began and that finish after the upgrade. To minimize the chance of job failures, extend the oozie.service.ShareLibService.purge.interval and oozie.service.ShareLibService.temp.sharelib.retention.days settings. Add the following content to the oozie-site.xml file prior to performing the upgrade (a sketch for backing up this file first follows):

      <property>
        <name>oozie.service.ShareLibService.purge.interval</name>
        <value>1000</value>
        <description>
          How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS.
        </description>
      </property>
      <property>
        <name>oozie.service.ShareLibService.temp.sharelib.retention.days</name>
        <value>1000</value>
        <description>ShareLib retention time in days.</description>
      </property>
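Before editing, it is prudent to keep a copy of the original file and to confirm the result is still valid XML. A minimal sketch, assuming the default configuration path /etc/oozie/conf/oozie-site.xml:

```bash
# Keep a dated copy of oozie-site.xml before adding the purge settings.
cp -p /etc/oozie/conf/oozie-site.xml \
   /etc/oozie/conf/oozie-site.xml.bak.$(date +%Y%m%d)
# After adding the properties, confirm the file is still well-formed XML.
xmllint --noout /etc/oozie/conf/oozie-site.xml && echo "oozie-site.xml is well-formed."
```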
- Stop all long-running applications deployed using Slider:

      su - yarn -c "/usr/hdp/current/slider-client/bin/slider list"

  For each application returned by the previous command, run (or use the loop sketched below):

      su - yarn -c "/usr/hdp/current/slider-client/bin/slider stop <app_name>"
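Stopping each application one at a time can be scripted. A minimal sketch, assuming the application name is the first field of each RUNNING line printed by slider list (verify the output format of your Slider version before relying on this):

```bash
#!/bin/bash
# Stop every running Slider application as the yarn user.
su - yarn -c "/usr/hdp/current/slider-client/bin/slider list" \
  | awk '/RUNNING/ {print $1}' \
  | while read -r app; do
      echo "Stopping Slider application: $app"
      su - yarn -c "/usr/hdp/current/slider-client/bin/slider stop $app"
    done
```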
- Stop all services (including MapReduce) except HDFS, ZooKeeper, and Ranger, and stop all client applications deployed on HDFS. See Stopping HDP Services for more information.
  - Accumulo:

        /usr/hdp/current/accumulo-client/bin/stop-all.sh

  - Knox:

        cd $GATEWAY_HOME
        su - knox -c "bin/gateway.sh stop"

  - Falcon:

        su - falcon -c "/usr/hdp/current/falcon-server/bin/falcon-stop"

  - Oozie:

        su - oozie -c "/usr/hdp/current/oozie-server/bin/oozie-stop.sh"

  - WebHCat:

        su - webhcat -c "/usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh stop"

  - Hive: Run this command on the Hive Metastore and HiveServer2 host machines:

        ps aux | awk '{print $1,$2}' | grep hive | awk '{print $2}' | xargs kill >/dev/null 2>&1

    Or you can use the following:

        killall -u hive -s 15 java

  - HBase RegionServers:

        su - hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /etc/hbase/conf stop regionserver"

  - HBase Master host machine:

        su - hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh --config /etc/hbase/conf stop master"

  - YARN and MapReduce History Server: Run this command on all NodeManagers:

        su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop nodemanager"

    Run this command on the History Server host machine:

        su - mapred -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-mapreduce-historyserver/sbin/mr-jobhistory-daemon.sh --config /etc/hadoop/conf stop historyserver"

    Run this command on the ResourceManager host machine(s):

        su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop resourcemanager"

    Run this command on the YARN Timeline Server node:

        su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /etc/hadoop/conf stop timelineserver"

  - Storm: Kill all running topologies, then stop the supervisor service:

        storm kill topology-name
        sudo service supervisord stop

  - Spark (History Server):

        su - spark -c "/usr/hdp/current/spark-client/sbin/stop-history-server.sh"
- If you have the Hive component installed, back up the Hive Metastore database. The following instructions are provided for your convenience; for the latest backup instructions, see your database documentation. (A scripted example for the MySQL case follows the table.)

Table 1.1. Hive Metastore Database Backup and Restore

| Database Type | Backup | Restore |
|---|---|---|
| MySQL | mysqldump $dbname > $outputfilename.sql For example: mysqldump hive > /tmp/mydir/backup_hive.sql | mysql $dbname < $inputfilename.sql For example: mysql hive < /tmp/mydir/backup_hive.sql |
| Postgres | sudo -u $username pg_dump $databasename > $outputfilename.sql For example: sudo -u postgres pg_dump hive > /tmp/mydir/backup_hive.sql | sudo -u $username psql $databasename < $inputfilename.sql For example: sudo -u postgres psql hive < /tmp/mydir/backup_hive.sql |
| Oracle | Export the database: exp username/password@database full=yes file=output_file.dmp | Import the database: imp username/password@database file=input_file.dmp |
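The MySQL case can be wrapped in a small script that timestamps the dump and fails loudly if the dump is empty. A minimal sketch, assuming a Metastore database named hive and a hypothetical backup directory /tmp/mydir:

```bash
#!/bin/bash
# Dump the Hive Metastore (MySQL) to a timestamped file and sanity-check it.
DB=hive                          # assumed Metastore database name
OUT=/tmp/mydir/backup_${DB}_$(date +%Y%m%d-%H%M%S).sql
mkdir -p "$(dirname "$OUT")"

if ! mysqldump "$DB" > "$OUT"; then
    echo "mysqldump failed for $DB" >&2
    exit 1
fi
[ -s "$OUT" ] || { echo "Backup file $OUT is empty" >&2; exit 1; }
echo "Hive Metastore backed up to $OUT"
```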
- If you have the Oozie component installed, back up the Oozie metastore database. These instructions are provided for your convenience; check your database documentation for the latest backup instructions.

Table 1.2. Oozie Metastore Database Backup and Restore

| Database Type | Backup | Restore |
|---|---|---|
| MySQL | mysqldump $dbname > $outputfilename.sql For example: mysqldump oozie > /tmp/mydir/backup_oozie.sql | mysql $dbname < $inputfilename.sql For example: mysql oozie < /tmp/mydir/backup_oozie.sql |
| Postgres | sudo -u $username pg_dump $databasename > $outputfilename.sql For example: sudo -u postgres pg_dump oozie > /tmp/mydir/backup_oozie.sql | sudo -u $username psql $databasename < $inputfilename.sql For example: sudo -u postgres psql oozie < /tmp/mydir/backup_oozie.sql |
| Oracle | Export the database: exp username/password@database full=yes file=output_file.dmp | Import the database: imp username/password@database file=input_file.dmp |
- Optional: Back up the Hue database. The following instructions are provided for your convenience; for the latest backup instructions, see your database documentation. For database types that are not listed below, follow your vendor-specific instructions.

Table 1.3. Hue Database Backup and Restore

| Database Type | Backup | Restore |
|---|---|---|
| MySQL | mysqldump $dbname > $outputfilename.sql For example: mysqldump hue > /tmp/mydir/backup_hue.sql | mysql $dbname < $inputfilename.sql For example: mysql hue < /tmp/mydir/backup_hue.sql |
| Postgres | sudo -u $username pg_dump $databasename > $outputfilename.sql For example: sudo -u postgres pg_dump hue > /tmp/mydir/backup_hue.sql | sudo -u $username psql $databasename < $inputfilename.sql For example: sudo -u postgres psql hue < /tmp/mydir/backup_hue.sql |
| Oracle | Connect to the Oracle database using sqlplus, then export the database. For example: exp username/password@database full=yes file=output_file.dmp | Import the database. For example: imp username/password@database file=input_file.dmp |
| SQLite | /etc/init.d/hue stop; su $HUE_USER; mkdir ~/hue_backup; sqlite3 desktop.db .dump > ~/hue_backup/desktop.bak; /etc/init.d/hue start | /etc/init.d/hue stop; cd /var/lib/hue; mv desktop.db desktop.db.old; sqlite3 desktop.db < ~/hue_backup/desktop.bak; /etc/init.d/hue start |
- Back up the Knox data/security directory:

      cp -RL /etc/knox/data/security ~/knox_backup
- Save the namespace by executing the following commands:

      su - hdfs
      hdfs dfsadmin -safemode enter
      hdfs dfsadmin -saveNamespace

| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| In secure mode, you must have Kerberos credentials for the hdfs user. | 
- Run the fsck command as the HDFS Service user and fix any errors. (The resulting file contains a complete block map of the file system; a quick check of the log is sketched below.)

      su - hdfs -c "hdfs fsck / -files -blocks -locations > dfs-old-fsck-1.log"

| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| In secure mode, you must have Kerberos credentials for the hdfs user. | 
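A quick way to confirm the result is to look for the health summary line in the log. A minimal sketch, assuming the log file name used above:

```bash
# fsck prints a summary; the file system should report HEALTHY.
if grep -q "The filesystem under path '/' is HEALTHY" dfs-old-fsck-1.log; then
    echo "HDFS is healthy; safe to proceed."
else
    echo "HDFS reported problems; review dfs-old-fsck-1.log before continuing." >&2
    grep -iE "CORRUPT|MISSING" dfs-old-fsck-1.log | head -20
fi
```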
- Use the following instructions to compare status before and after the upgrade. The following commands must be executed by the user running the HDFS service (by default, the user is hdfs).
- Capture the complete namespace of the file system. (The following command does a recursive listing of the root file system.)

| ![[Important]](../common/images/admon/important.png) | Important | 
|---|---|
| Make sure the NameNode is started. | 

      su - hdfs -c "hdfs dfs -ls -R / > dfs-old-lsr-1.log"

| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| In secure mode, you must have Kerberos credentials for the hdfs user. | 
- Run the report command to create a list of DataNodes in the cluster.

      su - hdfs -c "hdfs dfsadmin -report > dfs-old-report-1.log"
- Optional: You can copy all of the data, or only the data that would otherwise be unrecoverable, from HDFS to a local file system or to a backup instance of HDFS.
- Optional: You can also repeat the namespace capture and report steps above and compare the results with the previous run to ensure that the state of the file system remained unchanged (a comparison sketch follows).
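A minimal comparison sketch, assuming the second run was saved under the hypothetical names dfs-old-lsr-2.log and dfs-old-report-2.log. Note that the DataNode report includes usage counters that can differ between runs even on a quiet cluster:

```bash
# Compare the two namespace listings and DataNode reports; no diff output means no change.
diff dfs-old-lsr-1.log dfs-old-lsr-2.log && echo "Namespace unchanged."
diff dfs-old-report-1.log dfs-old-report-2.log && echo "DataNode report unchanged."
```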
 
- Finalize any prior HDFS upgrade, if you have not done so already.

      su - hdfs -c "hdfs dfsadmin -finalizeUpgrade"

| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| In secure mode, you must have Kerberos credentials for the hdfs user. | 
- Stop remaining services (HDFS, ZooKeeper, and Ranger). See Stopping HDP Services for more information.
  - HDFS: On all DataNodes, if you are running a secure cluster, run the following command as root:

        /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode

    Otherwise:

        su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode"

    If you are not running a highly available HDFS cluster, stop the Secondary NameNode by executing this command on the Secondary NameNode host machine:

        su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop secondarynamenode"

    On the NameNode host machine(s):

        su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop namenode"

    If you are running NameNode HA, stop the ZooKeeper Failover Controllers (ZKFC) by executing this command on the NameNode host machines:

        su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop zkfc"

    If you are running NameNode HA, stop the JournalNodes by executing this command on the JournalNode host machines:

        su - hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop journalnode"

  - ZooKeeper host machines:

        su - zookeeper -c "/usr/hdp/current/zookeeper-server/bin/zookeeper-server stop"

  - Ranger (XA Secure):

        service ranger-admin stop
        service ranger-usersync stop
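Before moving on, it can help to confirm that no HDFS or ZooKeeper daemons survived the shutdown. A minimal sketch using jps (run it as root on each host so that all users' JVMs are visible; the names matched are the standard daemon process names):

```bash
# List any surviving HDFS/ZooKeeper JVMs on this host; expect no matches.
if jps | grep -E "NameNode|DataNode|SecondaryNameNode|JournalNode|DFSZKFailoverController|QuorumPeerMain"; then
    echo "Some daemons are still running; stop them before continuing." >&2
else
    echo "No HDFS/ZooKeeper daemons found on this host."
fi
```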
- Back up your NameNode metadata.

| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| It is recommended to take a backup of the full /hadoop/hdfs/namenode path. | 

  Copy the checkpoint files into a backup directory. The NameNode metadata is stored in a directory specified in the hdfs-site.xml configuration file under the configuration value dfs.namenode.name.dir. For example, if the configuration value is:

      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/hadoop/hdfs/namenode</value>
      </property>

  then the NameNode metadata files are all housed inside the directory /hadoop/hdfs/namenode. (A sketch that resolves the configured directory and copies it follows.)
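Reading the directory out of the live configuration avoids copy/paste mistakes. A minimal sketch, assuming a hypothetical backup destination of /tmp/namenode-backup; hdfs getconf can return a comma-separated list if multiple metadata directories are configured:

```bash
#!/bin/bash
# Back up every configured NameNode metadata directory.
BACKUP_ROOT=/tmp/namenode-backup            # assumed destination; change as needed
mkdir -p "$BACKUP_ROOT"

# getconf may return several comma-separated directories.
DIRS=$(su - hdfs -c "hdfs getconf -confKey dfs.namenode.name.dir" | tr ',' ' ')
for d in $DIRS; do
    d=${d#file://}                          # strip an optional file:// scheme
    cp -r "$d" "$BACKUP_ROOT/$(basename "$d").$(date +%Y%m%d-%H%M%S)"
done
echo "NameNode metadata copied to $BACKUP_ROOT"
```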
- Store the layoutVersion of the NameNode:

      ${dfs.namenode.name.dir}/current/VERSION
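For example, to record just the layoutVersion line, substitute your configured directory for the example path below (the output file name is arbitrary):

```bash
# Record the pre-upgrade layoutVersion; VERSION is a plain key=value text file.
grep layoutVersion /hadoop/hdfs/namenode/current/VERSION | tee layoutVersion-old.txt
```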
 
- Verify that the edit logs in ${dfs.namenode.name.dir}/current/edits* are empty. Run:

      hdfs oev -i ${dfs.namenode.name.dir}/current/edits_inprogress_* -o edits.out
- Verify the edits.out file. It should have only the OP_START_LOG_SEGMENT transaction. For example:

      <?xml version="1.0" encoding="UTF-8"?>
      <EDITS>
        <EDITS_VERSION>-56</EDITS_VERSION>
        <RECORD>
          <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
          <DATA>
            <TXID>5749</TXID>
          </DATA>
        </RECORD>
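The check for unexpected transactions can be scripted. A minimal sketch over the edits.out file produced above:

```bash
# List any opcodes other than OP_START_LOG_SEGMENT; no output means the log is empty.
grep -o '<OPCODE>[^<]*</OPCODE>' edits.out | grep -v OP_START_LOG_SEGMENT
```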
- If edits.out has transactions other than OP_START_LOG_SEGMENT, run the following steps and then verify that the edit logs are empty.
- Start the existing version NameNode.
- Ensure there is a new FS image file. 
- Shut the NameNode down:

      hdfs dfsadmin -saveNamespace
 
 
- Rename or delete any paths that are reserved in the new version of HDFS. If the NameNode encounters a reserved path during upgrade, it prints an error such as the following:

      /.reserved is a reserved path and .snapshot is a reserved path component in this version of HDFS. Please rollback and delete or rename this path, or upgrade with the -renameReserved key-value pairs option to automatically rename these paths during upgrade.

  Specifying -upgrade -renameReserved with optional key-value pairs causes the NameNode to automatically rename any reserved paths found during startup. For example, to rename all paths named .snapshot to .my-snapshot and change paths named .reserved to .my-reserved, specify -upgrade -renameReserved .snapshot=.my-snapshot,.reserved=.my-reserved. If no key-value pairs are specified with -renameReserved, the NameNode suffixes reserved paths with .<LAYOUT-VERSION>.UPGRADE_RENAMED, for example: .snapshot.-51.UPGRADE_RENAMED. (A sketch for locating reserved paths ahead of time follows.)

| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| We recommend that you perform a -saveNamespace before renaming paths (running -saveNamespace appears in a previous step in this procedure), because a data inconsistency can result if an edit log operation refers to the destination of an automatically renamed file. Also note that running -renameReserved renames all applicable existing files in the cluster, which may impact cluster applications. | 
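To find such paths before starting the upgrade, the namespace listing captured earlier can be scanned. A minimal sketch, assuming the dfs-old-lsr-1.log file from the earlier step; note this only catches names visible in that listing:

```bash
# Flag any path containing a .reserved or .snapshot component before the upgrade.
grep -E '(^|/)\.(reserved|snapshot)(/|$| )' dfs-old-lsr-1.log \
    || echo "No reserved paths found."
```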
- If you are on JDK 1.6, upgrade the JDK on all nodes to JDK 1.7 or JDK 1.8 before upgrading HDP. 

