Sun Java memory fun with logstash

I am working on a distributed logstash deployment in AWS.
I'm using the ElastiCache (Redis) service provided by AWS as the store between the syslog receiver and the elasticsearch writer (aka the worker).
I keep getting OOM errors on the worker like this:
{:timestamp=>"2013-10-22T08:18:58.592000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>34, :exception=>java.lang.OutOfMemoryError: Java heap space, :backtrace=>[], :level=>:warn}

I was aware of the ability to change the heap size allocated to the JVM, but wasn't sure how to find the current defaults. Luckily, Google helped. I took the command line I was using to run Elasticsearch and compared how its three memory settings (-Xms, -Xmx, and -Xss) affected the defaults:

java -Xms512m -Xmx512m -Xss256k -XX:+PrintFlagsFinal -version > /tmp/1
java -XX:+PrintFlagsFinal -version > /tmp/2
diff /tmp/1 /tmp/2

That helped me figure out which default settings were important.
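If you only care about the heap and stack flags, grepping the PrintFlagsFinal output gets you there without the diff; the pattern below is just my guess at the interesting flag names:

java -XX:+PrintFlagsFinal -version | grep -iE 'heapsize|threadstacksize'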

Now, I just have to figure out how to give logstash enough RAM to not die.
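Two likely candidates (untested on my setup so far, and the config file and jar names below are just placeholders) are the LS_HEAP_SIZE variable that the logstash wrapper script reads, or passing -Xms/-Xmx yourself when running the flat jar:

# via the wrapper script, if your logstash version's bin script honours LS_HEAP_SIZE
LS_HEAP_SIZE=1024m bin/logstash agent -f worker.conf

# or explicitly, when launching the flat jar by hand
java -Xms1024m -Xmx1024m -jar logstash-1.2.2-flatjar.jar agent -f worker.conf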

Comments

  1. Hi,

    I got these results:

    # diff /tmp/1 /tmp/2
    253c253
    < uintx InitialHeapSize := 536870912 {product}
    ---
    > uintx InitialHeapSize := 527180032 {product}
    296c296
    < uintx MaxHeapSize := 536870912 {product}
    ---
    > uintx MaxHeapSize := 8436842496 {product}
    523c523
    < intx ThreadStackSize := 256 {pd product}
    ---
    > intx ThreadStackSize = 1024 {pd product}

    So based on the above diff - how much memory should I allocate?

    Very useful post!

    Thanks
    Ashok
