Problem Description:
When starting an HBase RegionServer while the Ranger audit spool directory contains considerably large spool files, the RegionServer crashes without logging any error, because the default JVM option "-XX:OnOutOfMemoryError" of "kill -9 %p" terminates the process immediately. When that option is changed to "kill -3 %p", which only requests a thread dump and lets the process survive long enough to log the failure, the following error is displayed in the stack trace:
ERROR hbaseRegional.async.summary.batch_hbaseRegional.async.summary.batch.hdfs_destWriter queue.AuditFileSpool: Exception in destination writing thread.
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    at java.util.Arrays.copyOf(Arrays.java:2367)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:535)
    at java.lang.StringBuffer.append(StringBuffer.java:322)
    at java.io.BufferedReader.readLine(BufferedReader.java:363)
    at java.io.BufferedReader.readLine(BufferedReader.java:382)
    at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:812)
    at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:758)
    at java.lang.Thread.run(Thread.java:745)
This issue occurs because the Ranger audit spool directory contains large log spool files. While reading these files, the Ranger plugin running inside the RegionServer process requests an array larger than the VM limit allows.
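The read pattern below is a minimal sketch of the behavior visible in the stack trace, not the actual Ranger source; the class name, file path, and replay() helper are hypothetical. The point it illustrates is that BufferedReader.readLine() buffers an entire line on the heap before returning it, so an unusually long (or unterminated) line in a huge spool file can drive the "Requested array size exceeds VM limit" allocation seen above.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SpoolReadSketch {

    public static void main(String[] args) throws IOException {
        // Hypothetical spool file path; the real location comes from the
        // plugin's file-spool configuration in your environment.
        String spoolFile = args.length > 0 ? args[0] : "/tmp/ranger-audit-spool/spool_hdfs_dest.log";

        try (BufferedReader reader = new BufferedReader(new FileReader(spoolFile))) {
            String line;
            // readLine() accumulates the whole line in an internal buffer
            // before returning it, so one extremely long line forces the
            // buffer to keep growing until the requested array exceeds the
            // VM limit.
            while ((line = reader.readLine()) != null) {
                replay(line); // stand-in for sending the event to the HDFS destination
            }
        }
    }

    private static void replay(String auditEvent) {
        System.out.println("replaying event of length " + auditEvent.length());
    }
}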
The audit spool files can grow this large when the Ranger audit destination is configured as HDFS and HDFS was unreachable for an extended period in the past, causing audit events to accumulate in the local spool files instead of being flushed to HDFS.
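Before restarting the RegionServer, it can help to confirm whether the spool directory has accumulated oversized files. The following is a small sketch under assumed values: the default directory path and the 1 GiB threshold are examples only, and the actual spool location is whatever the plugin's file-spool directory is configured to in your environment.

import java.io.File;

public class SpoolSizeCheck {
    private static final long WARN_BYTES = 1L << 30; // 1 GiB, arbitrary threshold

    public static void main(String[] args) {
        // Hypothetical default path; pass the real spool directory as an argument.
        File spoolDir = new File(args.length > 0 ? args[0] : "/tmp/ranger-audit-spool");

        File[] files = spoolDir.listFiles();
        if (files == null) {
            System.err.println("Spool directory not found: " + spoolDir);
            return;
        }
        for (File f : files) {
            if (f.isFile() && f.length() > WARN_BYTES) {
                System.out.printf("Large spool file: %s (%d MB)%n",
                        f.getName(), f.length() / (1024 * 1024));
            }
        }
    }
}

Running a check like this before restart makes it clear whether a spool backlog, rather than the RegionServer itself, is the likely cause of the crash.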