CDAP-1875: StreamFileJanitor examines temporary mapreduce jar

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0
    • Component/s: App Fabric
    • Labels: None

      Description

      Saw the following in the master logs on a secure cluster.

      Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied: user=cdap, access=EXECUTE, inode="/cdap/mapreduce.default.PurchaseHistory.PurchaseHistoryBuilder.5b6d8cdd-75d0-44d3-8b29-8e1ec2d69d1d.jar":cdap:supergroup:-rw-r--r--
              at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
              at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
              at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:205)
              at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:168)
              at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5519)
              at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3517)
              at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:785)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:764)
              at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
              at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
              at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:415)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
              at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
      
              at org.apache.hadoop.ipc.Client.call(Client.java:1410)
              at org.apache.hadoop.ipc.Client.call(Client.java:1363)
              at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
              at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
              at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
              at java.lang.reflect.Method.invoke(Method.java:622)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
              at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
              at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
              at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
              at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
              at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
              at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
              at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
              at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
              at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
              at org.apache.twill.filesystem.HDFSLocation.exists(HDFSLocation.java:65)
              at co.cask.cdap.data.stream.StreamFileJanitor.cleanAll(StreamFileJanitor.java:59)
              at co.cask.cdap.data.stream.service.LocalStreamFileJanitorService$2.run(LocalStreamFileJanitorService.java:67)
              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
              at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
              at java.util.concurrent.FutureTask.run(FutureTask.java:166)
              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
              at java.lang.Thread.run(Thread.java:701)
      

      Afterwards, I could not find the file it was complaining about on HDFS. I believe the jar is placed there temporarily while a MapReduce job is being launched, so this exception can occur if the janitor happens to run at that moment. Nothing bad comes of it, but we should handle that scenario.
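
      One possible way to handle this (a minimal sketch only: the class name and the surrounding loop are hypothetical stand-ins, not the actual CDAP fix; it only assumes the janitor iterates Twill Location objects as the stack trace suggests) is to treat an IOException from the per-location existence check as non-fatal and skip that location:

      // Sketch of defensive handling in a janitor-style cleanup loop.
      // Location is org.apache.twill.filesystem.Location (seen in the stack trace);
      // everything else here is an assumed stand-in for the real janitor code.
      import org.apache.twill.filesystem.Location;
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;

      import java.io.IOException;
      import java.util.List;

      public final class StreamCleanupSketch {
        private static final Logger LOG = LoggerFactory.getLogger(StreamCleanupSketch.class);

        /**
         * Cleans each location, tolerating entries that are transient or unreadable,
         * e.g. a MapReduce job jar that exists briefly with restrictive permissions.
         */
        void cleanAll(List<Location> locations) {
          for (Location location : locations) {
            try {
              if (!location.exists()) {
                continue;
              }
              // ... real cleanup of stream files would go here ...
            } catch (IOException e) {
              // Permission or not-found errors on a transient file are harmless for
              // the janitor; log and move on instead of failing the whole pass.
              LOG.debug("Skipping location {} during stream cleanup", location, e);
            }
          }
        }
      }

      Catching the exception per location keeps one transient, unreadable jar from aborting the entire cleanup pass, which matches the observation above that nothing is actually wrong when this happens.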

            People

            • Assignee: Bhooshan Mogal
            • Reporter: Albert Shau
            • Votes: 0
            • Watchers: 3
