
How Do I Submit the Spark Application Using Java Commands?

Question

How do I submit Spark applications using Java commands, in addition to using the spark-submit command?

Answer

Use the org.apache.spark.launcher.SparkLauncher class and run a Java command to submit the Spark application. The procedure is as follows:

  1. Define the org.apache.spark.launcher.SparkLauncher class. The SparkLauncherJavaExample and SparkLauncherScalaExample classes are provided by default as sample code. You can modify the input parameters of the sample code as required.

    • If you use Java as the development language, you can develop the SparkLauncher class by referring to the following code:
          public static void main(String[] args) throws Exception {
              System.out.println("com.huawei.bigdata.spark.examples.SparkLauncherExample <mode> <jarPath> <app_main_class> <appArgs>");
              SparkLauncher launcher = new SparkLauncher();
              launcher.setMaster(args[0])
                  .setAppResource(args[1]) // Specify the path of the user application JAR file.
                  .setMainClass(args[2]);
              if (args.length > 3) {
                  // Pass the remaining command-line arguments to the application.
                  String[] list = new String[args.length - 3];
                  for (int i = 3; i < args.length; i++) {
                      list[i - 3] = args[i];
                  }
                  launcher.addAppArgs(list);
              }

              // Launch the application as a child process.
              Process process = launcher.launch();
              // Read the Spark driver log from the child process.
              new Thread(new ISRRunnable(process.getErrorStream())).start();
              int exitCode = process.waitFor();
              System.out.println("Finished! Exit code is " + exitCode);
          }
    • If you use Scala as the development language, you can develop the SparkLauncher class by referring to the following code:
        def main(args: Array[String]): Unit = {
          println("com.huawei.bigdata.spark.examples.SparkLauncherExample <mode> <jarPath> <app_main_class> <appArgs>")
          val launcher = new SparkLauncher()
          launcher.setMaster(args(0))
            .setAppResource(args(1)) // Specify the path of the user application JAR file.
            .setMainClass(args(2))
          if (args.drop(3).nonEmpty) {
            // Pass the remaining command-line arguments to the application.
            launcher.addAppArgs(args.drop(3): _*)
          }

          // Launch the application as a child process.
          val process = launcher.launch()
          // Read the Spark driver log from the child process.
          new Thread(new ISRRunnable(process.getErrorStream)).start()
          val exitCode = process.waitFor()
          println(s"Finished! Exit code is $exitCode")
        }
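
    Both examples hand the error stream of the launched process to an ISRRunnable helper, which ships with the sample project and reads the Spark driver log. If you do not have the sample project, a minimal sketch of such a stream-reading runnable could look like the following; the class body below is an assumption for illustration, not the actual sample implementation:
        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;

        // Sketch of a stream-reading runnable such as the ISRRunnable helper used
        // above. It forwards the launched driver's output line by line.
        public class ISRRunnable implements Runnable {
            private final InputStream stream;

            public ISRRunnable(InputStream stream) {
                this.stream = stream;
            }

            @Override
            public void run() {
                try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(stream))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line); // Print the Spark driver log.
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }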

  2. Develop the Spark application based on the service logic, and set constants such as the main class of the user-compiled Spark application, as sketched below.

    If you use the normal mode, you are advised to prepare the service application code and related configurations in advance.
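
    For reference, the user application submitted in this way is an ordinary Spark program with its own main class. The following is a minimal, hypothetical skeleton of such a class, reusing the FemaleInfoCollection name from the submission command in 3; the body is a placeholder, and the actual sample contains the real service logic:
        import org.apache.spark.SparkConf;
        import org.apache.spark.api.java.JavaRDD;
        import org.apache.spark.api.java.JavaSparkContext;

        // Hypothetical skeleton of a user application main class.
        public class FemaleInfoCollection {
            public static void main(String[] args) {
                // args[0] is the <inputPath> forwarded through SparkLauncher.addAppArgs().
                SparkConf conf = new SparkConf().setAppName("FemaleInfoCollection");
                JavaSparkContext jsc = new JavaSparkContext(conf);
                JavaRDD<String> records = jsc.textFile(args[0]);
                // ... service logic goes here ...
                System.out.println("Record count: " + records.count());
                jsc.stop();
            }
        }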

  3. Call the org.apache.spark.launcher.SparkLauncher.launch() function to submit user applications.

    1. Package the SparkLauncher application and the user application into JAR files, and upload the JAR files to the node where the application runs.
      • The compilation dependency package of SparkLauncher is spark-launcher_2.10-1.5.1.jar.
      • The compilation dependency packages of user applications vary with the code. You need to load the dependency package based on the compiled code.
    2. Upload the dependency JAR files of the application to a directory on the node where the application will run, for example, $SPARK_HOME/lib.

      Upload the dependency packages of the SparkLauncher class and the application to the lib directory on the client. The dependency packages of the sample code already exist in the lib directory on the client.

      If you want to use the SparkLauncher class, the Spark client must be installed and running properly on the node where the application runs. The SparkLauncher class depends on the configured environment variables, dependency packages, and configuration files of the client.

    3. In the node where the Spark application is running, run the following command to submit the application using SparkLauncher:

      java -cp $SPARK_HOME/conf:$SPARK_HOME/lib/*:SparkLauncherExample.jar com.huawei.bigdata.spark.examples.SparkLauncherExample yarn-client /opt/female/FemaleInfoCollection.jar com.huawei.bigdata.spark.examples.FemaleInfoCollection <inputPath>
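
      In this command, yarn-client is the <mode> argument passed to setMaster, /opt/female/FemaleInfoCollection.jar is the path of the user application JAR file, com.huawei.bigdata.spark.examples.FemaleInfoCollection is the main class of the user application, and <inputPath> is forwarded to the application through addAppArgs. Replace <inputPath> with the actual input path before running the command.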