[CDAP-9250] Hive operations fail in HDP 2.6 prerelease and release versions Created: 04/Apr/17  Updated: 15/Apr/17  Resolved: 10/Apr/17

Status: Resolved
Project: CDAP
Component/s: CDAP
Affects Version/s: 4.1.0
Fix Version/s: 4.1.1

Type: Task Priority: Major
Reporter: Matt Wuenschel Assignee: Nishith Nand
Resolution: Fixed Votes: 0
Labels: None

Release Notes: Added support for HDP 2.6.

 Description   

Any operation using Hive fails on the prerelease of HDP 2.6 with the following error:

Cannot get status. Reason: Response code: 500, message: 'Internal Server Error', body: 'org.apache.hive.service.cli.CLIService.getOperationStatus(Lorg/apache/hive/service/cli/OperationHandle;)Lorg/apache/hive/service/cli/OperationStatus;'

I verified that Hive itself is working by running "show tables" from the Hive shell.



 Comments   
Comment by Nishith Nand [ 04/Apr/17 ]

hive --version
Hive 1.2.1000.2.6.0.0-598

Comment by Nishith Nand [ 04/Apr/17 ]

This is because, as part of the fix for https://issues.apache.org/jira/browse/HIVE-15473, the method

public OperationStatus getOperationStatus(OperationHandle opHandle)

was changed to

public OperationStatus getOperationStatus(OperationHandle opHandle, boolean getProgressUpdate)
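A common way to stay compatible with both signatures is to look up the two-argument overload via reflection and fall back to the one-argument variant. The sketch below illustrates the idea only; it is not taken from the CDAP PR, and `MockCLIService` is a hypothetical stand-in for `org.apache.hive.service.cli.CLIService`:

```java
import java.lang.reflect.Method;

public class OperationStatusCompat {

    // Hypothetical stand-in for CLIService; in HDP 2.6 the real class
    // only has the two-argument overload introduced by HIVE-15473.
    public static class MockCLIService {
        public String getOperationStatus(String opHandle, boolean getProgressUpdate) {
            return "FINISHED";
        }
    }

    // Try the pre-HIVE-15473 single-argument method first, then fall back
    // to the two-argument variant added for HIVE-15473.
    public static String getStatus(Object cliService, String opHandle) throws Exception {
        Class<?> cls = cliService.getClass();
        try {
            Method m = cls.getMethod("getOperationStatus", String.class);
            return (String) m.invoke(cliService, opHandle);
        } catch (NoSuchMethodException e) {
            Method m = cls.getMethod("getOperationStatus", String.class, boolean.class);
            // 'false' means: do not request progress updates.
            return (String) m.invoke(cliService, opHandle, false);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getStatus(new MockCLIService(), "op-1"));
    }
}
```

Resolving the overload once at runtime avoids compiling CDAP against a specific Hive version.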

Comment by Nishith Nand [ 06/Apr/17 ]

PR: https://github.com/caskdata/cdap/pull/8490

Comment by Nishith Nand [ 07/Apr/17 ]

Namespace delete operations in ITN are failing with the following exception:

2017-04-06 07:18:02,896 - ERROR [HiveServer2-Background-Pool: Thread-11494:?@?] - org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 17,081,980 milliseconds ago. The last packet sent successfully to the server was 17,081,980 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.)
at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:406)
at org.apache.hadoop.hive.ql.exec.DDLTask.dropDatabase(DDLTask.java:4162)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:286)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1748)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1494)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1291)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1158)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1153)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:197)
at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:76)
at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:253)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:264)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: MetaException(message:java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 17,081,980 milliseconds ago. The last packet sent successfully to the server was 17,081,980 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_database_result$drop_database_resultStandardScheme.read(ThriftHiveMetastore.java:24656)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_database_result$drop_database_resultStandardScheme.read(ThriftHiveMetastore.java:24624)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$drop_database_result.read(ThriftHiveMetastore.java:24558)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_drop_database(ThriftHiveMetastore.java:701)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.drop_database(ThriftHiveMetastore.java:686)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:804)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:178)
at com.sun.proxy.$Proxy49.dropDatabase(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.dropDatabase(Hive.java:402)
... 23 more
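The Connector/J error message above names the usual mitigations: validate metastore connections before use, raise the MySQL 'wait_timeout', or set 'autoReconnect=true'. As a sketch of the last option (a workaround, not the CDAP fix; host, port, and database name below are placeholders), the property can be appended to the metastore JDBC URL in hive-site.xml:

```xml
<!-- hive-site.xml: append autoReconnect=true to the metastore JDBC URL.
     METASTORE_DB_HOST and the 'hive' database name are placeholders. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://METASTORE_DB_HOST:3306/hive?autoReconnect=true</value>
</property>
```

Note that the MySQL documentation discourages relying on autoReconnect; connection validation in the pool is generally the more robust choice.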

Comment by Nishith Nand [ 07/Apr/17 ]

However, when I try it manually, I can create a namespace and delete it successfully.

Comment by Nishith Nand [ 08/Apr/17 ]

Explore tests are failing with the following exception:

Full command array for failed execution:
[/usr/hdp/2.6.0.3-8/hadoop-yarn/bin/container-executor, cdap, cdap, 1, application_1491597904200_0212, container_1491597904200_0212_01_000003, /data/yarn/local/usercache/cdap/appcache/application_1491597904200_0212/container_1491597904200_0212_01_000003, /data/yarn/local/nmPrivate/application_1491597904200_0212/container_1491597904200_0212_01_000003/launch_container.sh, /data/yarn/local/nmPrivate/application_1491597904200_0212/container_1491597904200_0212_01_000003/container_1491597904200_0212_01_000003.tokens, /data/yarn/local/nmPrivate/application_1491597904200_0212/container_1491597904200_0212_01_000003/container_1491597904200_0212_01_000003.pid, /data/yarn/local, /data/logs/hadoop-yarn/userlogs, cgroups=none]
2017-04-08 00:10:46,333 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime: Launch container failed. Exception:
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException: ExitCodeException exitCode=143:
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:175)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:103)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:89)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:392)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:317)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: ExitCodeException exitCode=143:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:933)
        at org.apache.hadoop.util.Shell.run(Shell.java:844)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1123)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:150)
        ... 9 more
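For context on the exit code: on Linux, a process killed by signal N is conventionally reported as exiting with 128 + N, so exitCode=143 means the container process received SIGTERM (15), i.e. it was killed rather than failing on its own. A minimal demonstration of this convention (assuming a Linux host with a `sleep` binary):

```java
public class ExitCode143Demo {
    public static void main(String[] args) throws Exception {
        // Start a long-running child process, then terminate it;
        // Process.destroy() sends SIGTERM on Linux.
        Process p = new ProcessBuilder("sleep", "60").start();
        p.destroy();
        int code = p.waitFor();
        // A process killed by signal N is reported as 128 + N;
        // SIGTERM is 15, so the exit code is 143 on Linux.
        System.out.println(code);
    }
}
```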

Comment by Nishith Nand [ 10/Apr/17 ]

I tried running the tests against a single-node cluster and they pass. Closing the JIRA.

Generated at Mon Dec 17 06:07:35 UTC 2018 using Jira 7.13.0#713000-sha1:fbf406879436de2f3fb1cfa09c7fa556fb79615a.