HDF HiveStreaming NullPointerException...too many client connections?

Question by Ryan LaMothe · Aug 23, 2017 at 01:30 PM · Tags: hive, hdf, hdp-upgrade, hive-streaming

We just upgraded fully working HDF + HDP clusters to HDF 3.0.1.1 and HDP 2.6.1, respectively, and when we re-enabled HiveStreaming our logs started filling with the following exceptions, which is dramatically slowing down the entire system's throughput. I've checked the various logs and HDP UIs but cannot seem to figure out what broke when we upgraded. Any ideas?

2017-08-22 16:32:04,174 WARN [Timer-Driven Process Thread-26] hive.metastore Unexpected increment of user count beyond one: 2 HCatClient: thread: 169 users=2 expired=false closed=false
2017-08-22 16:32:04,179 ERROR [Timer-Driven Process Thread-26] hive.log Got exception: java.lang.NullPointerException null
java.lang.NullPointerException: null
    at org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.getFilteredObjects(AuthorizationMetaStoreFilterHook.java:77)
    at org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.filterDatabases(AuthorizationMetaStoreFilterHook.java:54)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:1086)
    at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:469)
    at sun.reflect.GeneratedMethodAccessor500.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:174)
    at com.sun.proxy.$Proxy157.isOpen(Unknown Source)
    at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:269)
    at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
    at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.<init>(AbstractRecordWriter.java:94)
    at org.apache.hive.hcatalog.streaming.StrictJsonWriter.<init>(StrictJsonWriter.java:82)
    at org.apache.hive.hcatalog.streaming.StrictJsonWriter.<init>(StrictJsonWriter.java:60)
    at org.apache.nifi.util.hive.HiveWriter.getRecordWriter(HiveWriter.java:85)
    at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:72)
    at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
    at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:964)
    at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:875)
    at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:676)
    at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
    at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:673)
    at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2136)
    at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2106)
    at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:627)
    at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:551)
    at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
    at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
    at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:551)
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1118)
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2017-08-22 16:32:04,179 ERROR [Timer-Driven Process Thread-26] hive.log Converting exception to MetaException
2017-08-22 16:32:04,179 WARN [Timer-Driven Process Thread-26] hive.metastore Evicted client has non-zero user count: 2
2017-08-22 16:32:04,179 WARN [Timer-Driven Process Thread-26] hive.metastore Non-zero user count preventing client tear down: users=2 expired=true
2017-08-22 16:32:04,179 WARN [Timer-Driven Process Thread-26] hive.metastore Non-zero user count preventing client tear down: users=1 expired=true

4 Replies


Answer by Jay Kumar SenSharma · Aug 23, 2017 at 01:50 PM

@Ryan LaMothe

Based on the NullPointerException and the "null" value at line 77, I suspect it is because somehow it is not able to determine the IP address of the user who is running the query.

1. https://github.com/apache/hive/blob/release-2.0.0/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/AuthorizationMetaStoreFilterHook.java#L77

75.    SessionState ss = SessionState.get();
76.    HiveAuthzContext.Builder authzContextBuilder = new HiveAuthzContext.Builder();
77.    authzContextBuilder.setUserIpAddress(ss.getUserIpAddress());

.

2. And SessionState has getUserIpAddress() as follows: https://github.com/apache/hive/blob/release-2.0.0/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L1650-L1652

  /**
   * @return ip address for user running the query
   */
  public String getUserIpAddress() {
    return userIpAddress;
  }

.

It basically gets the user's IP address. This is set only if the authorization API is invoked from a HiveServer2 instance in standalone mode.

So it looks like it is somehow not able to determine the user's IP address. Can you please check whether there is a restriction on getting/determining the client IP address on your network?
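
For illustration, a minimal sketch of why line 77 can throw (this assumes hive-exec on the classpath and is not the actual Hive or NiFi code): SessionState.get() reads a thread-local that is only populated inside a Hive session, so on NiFi's Timer-Driven worker threads it can return null, and the dereference on line 77 then throws exactly this NullPointerException before any IP address is ever read.

    // Hypothetical sketch of the failing code path; everything except SessionState
    // itself is made up for illustration.
    import org.apache.hadoop.hive.ql.session.SessionState;

    public class SessionStateSketch {
        public static void main(String[] args) {
            // Thread-local lookup: null on a thread that never started a Hive session,
            // e.g. the "Timer-Driven Process Thread" seen in the logs above.
            SessionState ss = SessionState.get();

            // Equivalent of line 77 in AuthorizationMetaStoreFilterHook:
            // dereferencing a null SessionState throws the NullPointerException,
            // so the user's IP address is never determined.
            String ip = ss.getUserIpAddress();   // NPE when ss == null
            System.out.println("user ip = " + ip);
        }
    }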


Just a side note: I also see that you have reported an improvement, https://issues.apache.org/jira/browse/NIFI-3625, where you mentioned the same error.

.


Answer by Ryan LaMothe · Aug 23, 2017 at 04:45 PM

@Jay SenSharma Thank you for the reply! I want to stress that this was a fully working cluster, not a sandbox, where we upgraded Ambari, HDF and HDP, and once all the services came back up this issue started happening. Do you have a preferred way for me to check whether the IP address can be resolved for a user? Right now, we are using the 'nifi' local user on all nodes of an unsecured cluster (non-Kerberos for now).
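
One simple way to sanity-check name and address resolution from a NiFi node is a plain-Java lookup; a minimal sketch (the metastore hostname below is a placeholder, not a value from this cluster):

    // Sketch: verify forward and reverse DNS resolution from a NiFi node.
    import java.net.InetAddress;

    public class ResolveCheck {
        public static void main(String[] args) throws Exception {
            // This node's own hostname and address.
            InetAddress local = InetAddress.getLocalHost();
            System.out.println("local     = " + local.getHostName() + " / " + local.getHostAddress());

            // Reverse lookup of this node's own address.
            System.out.println("reverse   = " + InetAddress.getByName(local.getHostAddress()).getCanonicalHostName());

            // Forward lookup of the metastore host (placeholder name).
            System.out.println("metastore = " + InetAddress.getByName("metastore.example.com").getHostAddress());
        }
    }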

The really strange thing is that HiveStreaming is successfully storing data into Hive; it is just extremely slow. It appears a series of connection timeout and connection refused errors are occurring, which eventually time out, and then the transactions get committed to Hive.

I posted a related error message at:

https://community.hortonworks.com/questions/57827/enable-connect-to-time-line-server.html?childToView=132751#answer-132751

And finally, the issue https://issues.apache.org/jira/browse/NIFI-3625 does appear to be related, but unfortunately the error was never resolved and no solution was identified in the ticket. Maybe I should ask there what the resolution was as well?

Any help debugging will be appreciated, thanks!


Answer by Mike Tom · Feb 13, 2018 at 06:20 PM

@Jay Kumar SenSharma, @Ryan LaMothe

Hi, I am experiencing the same issue with HDP 2.6.4 and HDF 3.1 - an unsecured cluster (without Kerberos) installed using Ambari. All components - Hive, HBase and HDFS - work in HA mode. The whole cluster is unsecured; there is no authorization. NiFi is installed as a 3-node cluster on dedicated hosts managed by the same Ambari.

The same issue appeared when I used non-Hortonworks NiFi (versions 1.2, 1.4, 1.5 and a 1.6 snapshot).

Errors:

2018-02-13 17:12:44,358 INFO [put-hive-streaming-0] org.apache.hadoop.hive.ql.log.PerfLogger </PERFLOG method=Driver.run start=1518538364285 end=1518538364358 duration=73 from=org.apache.hadoop.hive.ql.Driver>
2018-02-13 17:12:44,362 WARN [Timer-Driven Process Thread-5] hive.metastore Unexpected increment of user count beyond one: 2 HCatClient: thread: 111 users=2 expired=false closed=false
2018-02-13 17:12:44,365 ERROR [Timer-Driven Process Thread-5] hive.log Got exception: java.lang.NullPointerException null
java.lang.NullPointerException: null
        at org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.getFilteredObjects(AuthorizationMetaStoreFilterHook.java:77)
        at org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.filterDatabases(AuthorizationMetaStoreFilterHook.java:54)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:1116)
        at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:469)
        at sun.reflect.GeneratedMethodAccessor138.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:174)
        at com.sun.proxy.$Proxy374.isOpen(Unknown Source)
        at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:269)
        at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
        at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.<init>(AbstractRecordWriter.java:94)
        at org.apache.hive.hcatalog.streaming.StrictJsonWriter.<init>(StrictJsonWriter.java:82)
        at org.apache.hive.hcatalog.streaming.StrictJsonWriter.<init>(StrictJsonWriter.java:60)
        at org.apache.nifi.util.hive.HiveWriter.getRecordWriter(HiveWriter.java:85)
        at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:72)
        at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:46)
        at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:1036)
        at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:947)
        at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$null$8(PutHiveStreaming.java:743)
        at org.apache.nifi.processor.util.pattern.ExceptionHandler.execute(ExceptionHandler.java:127)
        at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$12(PutHiveStreaming.java:740)
        at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2175)
        at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2145)
        at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:694)
        at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$4(PutHiveStreaming.java:572)
        at org.apache.nifi.processor.util.pattern.PartialFunctions.onTrigger(PartialFunctions.java:114)
        at org.apache.nifi.processor.util.pattern.RollbackOnFailure.onTrigger(RollbackOnFailure.java:184)
        at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:572)
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2018-02-13 17:12:44,365 ERROR [Timer-Driven Process Thread-5] hive.log Converting exception to MetaException
2018-02-13 17:12:44,365 WARN [Timer-Driven Process Thread-5] hive.metastore Evicted client has non-zero user count: 2
2018-02-13 17:12:44,365 WARN [Timer-Driven Process Thread-5] hive.metastore Non-zero user count preventing client tear down: users=2 expired=true
2018-02-13 17:12:44,365 WARN [Timer-Driven Process Thread-5] hive.metastore Non-zero user count preventing client tear down: users=1 expired=true
2018-02-13 17:12:44,366 INFO [Timer-Driven Process Thread-5] hive.metastore Trying to connect to metastore with URI thrift://dc1-hadoop-m3.local:9083
2018-02-13 17:12:44,367 INFO [Timer-Driven Process Thread-5] hive.metastore Connected to metastore.
2018-02-13 17:12:44,571 INFO [put-hive-streaming-0] o.a.hadoop.hive.ql.io.orc.WriterImpl WIDE TABLE - Number of columns: 16 Chosen compression buffer size: 32768
2018-02-13 17:12:44,571 INFO [put-hive-streaming-0] o.a.hadoop.hive.ql.io.orc.WriterImpl ORC writer created for path: hdfs://hdfscluster/apps/hive/warehouse/monitors.db/presence_oneminute/dzien=1497135600/delta_193194025_193194124/bucket_00000 with stripeSize: 8388608 blockSize: 268435456 compression: ZLIB bufferSize: 32768


What is more, it seems to be connected with the "Unexpected increment of user count" and "Evicted client has non-zero user count" warnings.

Is there any solution to this problem?

PS: Regarding the ATS warning problem mentioned by Ryan - it is caused by an incorrect Hortonworks configuration of the NiFi conf directory: NiFi does not know the name and port of the Timeline Server. Defining a symbolic link to /etc/hadoop/conf/yarn-site.xml inside the NiFi conf directory solves this issue. BTW, it should be defined in a standard Hortonworks installation, and such a problem can be the source of other NiFi errors/misconfigurations.
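
For reference, a small check that prints the Timeline Server settings resolved from whatever yarn-site.xml the JVM can see, which makes it easy to confirm whether the symlink took effect; a minimal sketch (the yarn-site.xml path below is an example, not necessarily this cluster's layout):

    // Sketch: confirm that yarn-site.xml is visible and that the Timeline Server
    // host/port resolve. The path below is an example location for the symlink.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class CheckTimelineConf {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.addResource(new Path("/usr/hdf/current/nifi/conf/yarn-site.xml"));

            System.out.println("yarn.timeline-service.enabled        = " + conf.get("yarn.timeline-service.enabled"));
            System.out.println("yarn.timeline-service.hostname       = " + conf.get("yarn.timeline-service.hostname"));
            System.out.println("yarn.timeline-service.webapp.address = " + conf.get("yarn.timeline-service.webapp.address"));
        }
    }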


Answer by Shawn Weeks · Apr 21, 2018 at 05:56 AM

@Ryan LaMothe Did you ever figure out a solution to this? I'm having the same issue on new installations of HDP 2.6.4 and HDF 3.1 on both CentOS 6 and 7. Non-partitioned streaming works just fine with no errors.
