"ERROR: java.lang.OutOfMemoryError: Requested array size exceeds VM limit" when starting an HBase RegionServer

rmaruthiyodan created · Dec 07, 2017 at 08:43 AM

SupportKB

Problem Description:

When an HBase RegionServer is started while the Ranger audit spool directory contains considerably large spool files, the RegionServer crashes without logging any error. If the RegionServer JVM option "-XX:OnOutOfMemoryError" is changed from "kill -9 %p" to "kill -3 %p", the following error and stack trace appear in the RegionServer log:

ERROR hbaseRegional.async.summary.batch_hbaseRegional.async.summary.batch.hdfs_destWriter queue.AuditFileSpool: 
Exception in destination writing thread.
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:535)
at java.lang.StringBuffer.append(StringBuffer.java:322)
at java.io.BufferedReader.readLine(BufferedReader.java:363)
at java.io.BufferedReader.readLine(BufferedReader.java:382)
at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:812)
at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:758)
at java.lang.Thread.run(Thread.java:745)
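
For reference, here is a minimal sketch of where this JVM option is typically set, assuming the RegionServer options come from HBASE_REGIONSERVER_OPTS in hbase-env.sh (on Ambari-managed clusters, HBase > Configs > Advanced hbase-env); the surrounding flags on a real cluster will differ:

# hbase-env.sh (sketch; exact variable and existing flags vary by cluster)
# Default: kill the RegionServer process outright when the JVM hits an OOM:
#   export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:OnOutOfMemoryError=\"kill -9 %p\""
# For troubleshooting, send SIGQUIT instead, so the error and a thread dump
# reach the RegionServer log before the process is killed:
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:OnOutOfMemoryError=\"kill -3 %p\""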

Cause:

This issue occurs because the Ranger audit spool directory contains very large spool log files. While reading them back, the Ranger plugin inside the RegionServer process requests an array larger than the JVM's maximum array size, which raises the OutOfMemoryError; as the stack trace suggests, this happens while BufferedReader.readLine buffers a single oversized line from a spool file.

The spool files typically grow this large when the Ranger audit destination is configured as HDFS and HDFS was unreachable for an extended period: the plugin keeps spooling audit events to local disk until the destination becomes available again.
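
A quick way to confirm this on the RegionServer host is to look at the spool file sizes. A minimal sketch, assuming the common default location /var/log/hbase/audit/hdfs/spool (the actual path is whatever xasecure.audit.destination.hdfs.batch.filespool.dir is set to in the plugin's audit configuration):

# List spool files largest-first; files of hundreds of MB or more indicate
# a long stretch during which audits could not be flushed to HDFS.
ls -lhS /var/log/hbase/audit/hdfs/spool/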


Solution:
To resolve this issue, do the following (a command-line sketch follows the list):
  1. Move the older spool log files to a temporary location.
  2. Truncate (empty) the JSON files inside the spool directory, then restart the RegionServer.
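
A command-line sketch of these two steps, run on the affected RegionServer host; the spool path and file patterns here are assumptions and should be checked against the plugin's audit configuration:

SPOOL_DIR=/var/log/hbase/audit/hdfs/spool   # assumption: default spool location
BACKUP_DIR=/tmp/ranger-spool-backup

# 1. Move the older spool log files to a temporary location.
mkdir -p "$BACKUP_DIR"
mv "$SPOOL_DIR"/*.log "$BACKUP_DIR"/

# 2. Truncate (empty) the JSON files in the spool directory.
for f in "$SPOOL_DIR"/*.json; do
    [ -e "$f" ] && : > "$f"
done

# Then restart the RegionServer (for example, from Ambari).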


About:
This article was created by Hortonworks Support (Article: 000005810) on 2017-05-09 10:54
OS: Linux
Type: Cluster_Administration
Version: n/a

Support ID: 000005810
Tags: solution, hwsupport, Hbase, Ranger

