March 10, 2017

Ganglia with multiple sources (clusters)

Ganglia here needs to monitor two sources: one for Hadoop and one for Solr.

Ganglia server setup, switched to unicast (a quick verification sketch follows step 3 below):
1. gmetad.conf
    data_source "hadoop" 172.1.0.2:8649
    data_source "solr" 172.1.0.2:8655

2. Set up two config files, gmond.conf.1 and gmond.conf.2, and run two gmond processes to collect the metrics.
     gmond.conf.1 (this gmond both receives and sends; the monitoring host itself joins the hadoop source)
          cluster {
            name = "hadoop"
            owner = "unspecified"
            latlong = "unspecified"
            url = "unspecified"
          }
          host {
            location = "unspecified"
          }
          udp_send_channel {
            host = 172.1.0.2
            port = 8649
            ttl = 1
          }
          udp_recv_channel {
            port = 8649
            bind = 172.1.0.2
          }
          tcp_accept_channel {
            port = 8649
          }
       gmond.conf.2 (this gmond only receives)
          cluster {
            name = "solr"
            owner = "unspecified"
            latlong = "unspecified"
            url = "unspecified"
          }
          host {
            location = "unspecified"
          }
          udp_recv_channel {
            port = 8655
            bind = 172.1.0.2
          }
          tcp_accept_channel {
            port = 8655
          }
3. Run gmond
    /usr/sbin/gmond  --conf /etc/ganglia/gmond.conf.1
    /usr/sbin/gmond  --conf /etc/ganglia/gmond.conf.2
    Or adapt the daemon init script, /etc/init.d/ganglia-monitor, to start both:
         start-stop-daemon --start --quiet --name gmond1 \
                --exec $DAEMON -- --pid-file /var/run/$NAME.pid.1 --conf /etc/ganglia/gmond.conf.1
         start-stop-daemon --start --quiet --name gmond2 \
                --exec $DAEMON -- --pid-file /var/run/$NAME.pid.2 --conf /etc/ganglia/gmond.conf.2
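
   The Solr nodes themselves are not shown above; each of them needs its own gmond whose udp_send_channel points at this collector on port 8655. A minimal sketch of the relevant parts, assuming the Solr hosts otherwise keep a stock gmond.conf:
          cluster {
            name = "solr"
            owner = "unspecified"
            latlong = "unspecified"
            url = "unspecified"
          }
          udp_send_channel {
            host = 172.1.0.2
            port = 8655
            ttl = 1
          }

   To verify, each tcp_accept_channel should answer with the cluster XML (this is what gmetad polls), and gmetad has to be restarted after editing gmetad.conf. The init script name gmetad is an assumption; use whatever your distribution ships:
          nc 172.1.0.2 8649 | head      # should print XML tagged with the "hadoop" cluster
          nc 172.1.0.2 8655 | head      # should print XML tagged with the "solr" cluster
          /etc/init.d/gmetad restart    # pick up the two data_source lines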

   


March 1, 2017

Sending log4j logs to the Logstash log4j input

JVM programs such as Hadoop and Solr can ship their logs to ELK for centralized management.

Example: edit Hadoop's log4j.properties to add a SocketAppender
log4j.appender.server=org.apache.log4j.net.SocketAppender
log4j.appender.server.Port=4560
log4j.appender.server.RemoteHost=172.1.1.1
log4j.appender.server.LocationInfo=true
log4j.appender.server.Application=hadoop
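
For the appender to actually send anything, log4j 1.x also needs it attached to a logger. A minimal sketch, assuming the appenders already listed in Hadoop's log4j.properties are kept and "server" is simply appended ("console" here is a placeholder for whatever is already configured):
log4j.rootLogger=INFO,console,server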

Logstash configuration:
input {
  log4j {
    host => "172.1.1.1"
    port => 4560
  }
}
filter {
  grok {
    match => { "host" => "%{IPORHOST:host}:%{POSINT}" }  # strip the port, e.g. 1.2.3.4:8888 => 1.2.3.4
    overwrite => [ "host" ]
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
}
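
A quick way to exercise the pipeline: run Logstash against the config above and count what lands in Elasticsearch. The config path below is an assumption (point it at wherever the config is saved, from the Logstash install directory); logstash-* is the elasticsearch output's default index pattern:
bin/logstash -f /etc/logstash/conf.d/log4j.conf
curl 'localhost:9200/logstash-*/_count?pretty'    # number of log4j events indexed so far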