CDAP / CDAP-2786 Supporting CDAP 3.1.0 in MapR distro / CDAP-2852

Running MapReduce program throws exception "Cannot initialize Cluster"


    Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.1.0
    • Component/s: App Fabric
    • Labels: None

      Description

      While running a MapReduce job from CDAP on a MapR cluster, I see the following "Wrong FS" exception: the filesystem expects file:/// while the path is maprfs://.

      2015-06-23 06:07:05,230 - ERROR [MapReduceRunner-PurchaseHistoryBuilder:c.c.c.i.a.r.b.MapReduceRuntimeService@319] - Exception when submitting MapReduce Job: job=PurchaseHistoryBuilder,=namespaceId=default, applicationId=PurchaseHistory, program=PurchaseHistoryBuilder, runid=fca3c8b1-196d-11e5-a950-42010af04e24
      java.lang.IllegalArgumentException: Wrong FS: maprfs:/cdap/mapreduce.default.PurchaseHistory.PurchaseHistoryBuilder.fca3c8b1-196d-11e5-a950-42010af04e24.jar, expected: file:///
      	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:679) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:519) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:794) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:514) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:345) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1939) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1907) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1872) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.mapreduce.JobSubmitter.copyJar(JobSubmitter.java:286) ~[hadoop-mapreduce-client-core-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:254) ~[hadoop-mapreduce-client-core-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301) ~[hadoop-mapreduce-client-core-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389) ~[hadoop-mapreduce-client-core-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285) ~[hadoop-mapreduce-client-core-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282) ~[hadoop-mapreduce-client-core-2.5.1-mapr-1503.jar:na]
      	at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_75]
      	at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_75]
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566) ~[hadoop-common-2.5.1-mapr-1503.jar:na]
      	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282) ~[hadoop-mapreduce-client-core-2.5.1-mapr-1503.jar:na]
      	at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService.startUp(MapReduceRuntimeService.java:297) ~[co.cask.cdap.cdap-app-fabric-3.1.0-SNAPSHOT.jar:na]
      	at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47) [com.google.guava.guava-13.0.1.jar:na]
      	at co.cask.cdap.internal.app.runtime.batch.MapReduceRuntimeService$2$1.run(MapReduceRuntimeService.java:405) [co.cask.cdap.cdap-app-fabric-3.1.0-SNAPSHOT.jar:na]
      	at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
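
The IllegalArgumentException above comes from Hadoop's FileSystem.checkPath, which rejects any path whose URI scheme does not match the filesystem it is checked against: JobSubmitter resolved a RawLocalFileSystem (scheme "file"), but the job jar path is maprfs:/..., hence "Wrong FS". A minimal, self-contained sketch of that scheme comparison (hypothetical helper written for illustration, not actual Hadoop code):

```java
import java.net.URI;

// Illustrates the check FileSystem.checkPath performs: the path's URI scheme
// must match the scheme of the FileSystem instance the caller resolved.
// A null scheme means a relative path, which any filesystem accepts.
public class SchemeCheck {
    static boolean matches(String path, String fsScheme) {
        String scheme = URI.create(path).getScheme();
        return scheme == null || scheme.equalsIgnoreCase(fsScheme);
    }

    public static void main(String[] args) {
        // The failing combination from the log: a maprfs path checked
        // against the local filesystem -> "Wrong FS".
        System.out.println(matches("maprfs:/cdap/job.jar", "file"));   // false
        // The same path checked against the MapR filesystem passes.
        System.out.println(matches("maprfs:/cdap/job.jar", "maprfs")); // true
    }
}
```

In Hadoop client code, the usual fix for this class of bug is to resolve the filesystem from the path itself via path.getFileSystem(conf), rather than assuming the default filesystem from FileSystem.get(conf).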
      

      When I tried setting mapreduce.framework.name to yarn, I got the following error about initializing the cluster on one MapReduce run, but it did not happen after that; I mostly get the previous "Wrong FS" trace.

      2015-06-23 00:23:05,624 - ERROR [MapReduceRunner-PurchaseHistoryBuilder:c.c.c.i.a.r.b.MapReduceRuntimeService@319] - Exception when submitting MapReduce Job: job=PurchaseHistoryBuilder,=namespaceId=default, applicationId=PurchaseHistory, program=PurchaseHistoryBuilder, runid=f164c241-193d-11e5-94f6-42010af04e24
      java.lang.RuntimeException: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
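
This is Hadoop's standard "Cannot initialize Cluster" error, raised when the MapReduce client cannot find a framework provider matching mapreduce.framework.name. A sketch of the relevant client-side configuration on a MapR cluster submitting to YARN (property values are illustrative; actual file locations and values depend on the MapR install, which typically manages these via configure.sh):

```xml
<!-- mapred-site.xml: tell the MapReduce client to submit to YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- core-site.xml: make MapR-FS the default filesystem, so job staging
     paths resolve to maprfs:// rather than file:/// -->
<property>
  <name>fs.defaultFS</name>
  <value>maprfs:///</value>
</property>
```

If fs.defaultFS still resolves to the local filesystem in the classpath the CDAP runtime sees, the "Wrong FS" exception above is the expected symptom.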
      


            People

            • Assignee: shankar (Shankar Selvam)
            • Reporter: hsaputra (Henry Saputra)
            • Votes: 0
            • Watchers: 2

              Dates

              • Created:
              • Updated:
              • Resolved: