
Restoring Data

Updated at: Sep 12, 2019 GMT+08:00

HDFS File Property Restoration

Based on the exported permission information, run the following HDFS commands on the destination cluster to restore the file permissions and the owner and group information.

$HADOOP_HOME/bin/hdfs dfs -chmod <MODE> <path>
$HADOOP_HOME/bin/hdfs dfs -chown <OWNER>:<GROUP> <path>
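For a large number of files, the two commands above can be generated mechanically from the exported attribute list. The sketch below is a minimal example, assuming the export file holds one entry per line in the form `<octal_mode> <owner> <group> <path>`; adjust the field order to match whatever format your export step actually produced. It prints the commands rather than running them, so the result can be reviewed before being piped to `sh`.

```shell
# restore_hdfs_attrs: read an exported attribute listing on stdin and print
# the hdfs chmod/chown commands that restore each entry.
# Assumed input format (one entry per line): <octal_mode> <owner> <group> <path>
restore_hdfs_attrs() {
  hdfs_bin="${HADOOP_HOME:-/opt/hadoop}/bin/hdfs"
  while read -r mode owner group path; do
    [ -n "$path" ] || continue                      # skip blank/malformed lines
    printf '%s dfs -chmod %s %s\n' "$hdfs_bin" "$mode" "$path"
    printf '%s dfs -chown %s:%s %s\n' "$hdfs_bin" "$owner" "$group" "$path"
  done
}
```

Run it as `restore_hdfs_attrs < attrs.txt | sh` once the printed commands look correct.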

Hive Metadata Restoration

Install Sqoop and run the Sqoop command in the destination cluster to import the exported Hive metadata to DBService in the MRS cluster.

$Sqoop_Home/bin/sqoop export --connect jdbc:postgresql://<ip>:20051/hivemeta --table <table_name> --username hive --password <passwd> --export-dir <export_from>

The parameters in the preceding command are described as follows:

  • $Sqoop_Home: Sqoop installation directory in the destination cluster
  • <ip>: IP address of the database in the destination cluster
  • <table_name>: Name of the table to be restored
  • <passwd>: Password of user hive
  • <export_from>: HDFS address of the metadata in the destination cluster
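Since the command must be repeated once per metadata table, a small wrapper can assemble the full command line for each table. The function below is a sketch, assuming each table was exported to `<root>/<table_name>` in HDFS (the per-table directory layout is an assumption, not something the export step guarantees); like the previous example, it prints the commands for review instead of executing them.

```shell
# restore_hive_meta: print one Sqoop export command per metadata table.
# Usage: restore_hive_meta <db_ip> <hive_password> <export_root> <table>...
# Assumption: each table's exported data lives at <export_root>/<table>.
restore_hive_meta() {
  ip="$1"; passwd="$2"; root="$3"; shift 3
  for table in "$@"; do
    printf '%s/bin/sqoop export --connect jdbc:postgresql://%s:20051/hivemeta --table %s --username hive --password %s --export-dir %s/%s\n' \
      "${Sqoop_Home:-/opt/sqoop}" "$ip" "$table" "$passwd" "$root" "$table"
  done
}
```

For example, `restore_hive_meta 192.168.0.10 '<passwd>' /user/hive/meta_export TBLS DBS | sh` would restore the two named tables.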

HBase Table Reconstruction

Restart the HBase service of the destination cluster for the migrated data to take effect. During the restart, HBase loads the data currently in HDFS and regenerates its metadata. After the restart is complete, run the following command on the Master node client to load the HBase table data:

$HBase_Home/bin/hbase hbck -fixMeta -fixAssignments

After the command is executed, run the following command repeatedly to check the health status of the HBase cluster until the health status is normal:

hbase hbck
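The repeated check can be automated with a retry loop that stops once `hbase hbck` reports no inconsistencies. The sketch below assumes the classic hbck summary line `0 inconsistencies detected`; the check command is injectable via the `HBCK` variable (a convenience introduced here, not an HBase feature) so the loop can be exercised without a live cluster.

```shell
# wait_hbase_healthy: rerun the hbck check until it reports a clean cluster
# or the retry budget is exhausted.
# Usage: wait_hbase_healthy [retries] [interval_seconds]
wait_hbase_healthy() {
  retries="${1:-10}"; interval="${2:-30}"
  hbck_cmd="${HBCK:-hbase hbck}"      # override HBCK to swap in another check
  i=0
  while [ "$i" -lt "$retries" ]; do
    # hbck's summary prints "0 inconsistencies detected" on a healthy cluster
    if $hbck_cmd 2>/dev/null | grep -q '0 inconsistencies detected'; then
      echo healthy; return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo unhealthy; return 1
}
```

Run `wait_hbase_healthy 10 30` to poll every 30 seconds for up to 10 attempts.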
