
How to make the Logstash 2.3.2 configuration file more flexible

I am using Logstash 2.3.2 to read and parse the log files of WSO2 ESB. I can successfully parse the log entries and send them to an API in JSON format.

The log file contains different log levels, such as INFO, ERROR, WARN, and DEBUG. Currently, a log entry is only sent on if its level is ERROR.

Sample log file:

TID: [-1234] [] [2016-05-26 11:22:34,366]  INFO {org.wso2.carbon.application.deployer.internal.ApplicationManager} -  Undeploying Carbon Application : CustomerService_CA_01_001_1.0.0... {org.wso2.carbon.application.deployer.internal.ApplicationManager}
TID: [-1234] [] [2016-05-26 11:22:35,539]  INFO {org.apache.axis2.transport.jms.ServiceTaskManager} -  Task manager for service : CustomerService_01_001 shutdown {org.apache.axis2.transport.jms.ServiceTaskManager}
TID: [-1234] [] [2016-05-26 11:22:35,545]  INFO {org.apache.axis2.transport.jms.JMSListener} -  Stopped listening for JMS messages to service : CustomerService_01_001 {org.apache.axis2.transport.jms.JMSListener}
TID: [-1234] [] [2016-05-26 11:22:35,549]  INFO {org.apache.synapse.core.axis2.ProxyService} -  Stopped the proxy service : CustomerService_01_001 {org.apache.synapse.core.axis2.ProxyService}
TID: [-1234] [] [2016-05-26 11:22:35,553]  INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} -  Removing Axis2 Service: CustomerService_01_001 {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [-1234] [] [2016-05-26 11:22:35,572]  INFO {org.apache.synapse.deployers.ProxyServiceDeployer} -  ProxyService named 'CustomerService_01_001' has been undeployed {org.apache.synapse.deployers.ProxyServiceDeployer}
TID: [-1234] [] [2016-05-26 18:10:26,465]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: LogaftervalidationWSAction: urn:mediateLogaftervalidationSOAPAction: urn:mediateLogaftervalidationMessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8bLogaftervalidationDirection: response {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-05-26 18:10:26,469]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: XPATH-LogLastNameWSAction: urn:mediateXPATH-LogLastNameSOAPAction: urn:mediateXPATH-LogLastNameMessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8bXPATH-LogLastNameDirection: responseXPATH-LogLastNameproperty_name LastName_Value = XPATH-LogLastNameEnvelope:
TID: [-1234] [] [2016-05-26 18:10:26,477] ERROR {org.apache.synapse.mediators.transform.XSLTMediator} -  The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null {org.apache.synapse.mediators.transform.XSLTMediator}
TID: [-1234] [] [2016-05-26 18:10:26,478] ERROR {org.apache.synapse.mediators.transform.XSLTMediator} -  Unable to perform XSLT transformation using : Value {name ='null', keyValue ='gov:CustomerService/01/xslt/CustomertoCustomerSchemaMapping.xslt'} against source XPath : //tns1:Customer reason : The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null {org.apache.synapse.mediators.transform.XSLTMediator}
org.apache.synapse.SynapseException: The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null
    at org.apache.synapse.util.xpath.SourceXPathSupport.selectOMNode(SourceXPathSupport.java:100)
    at org.apache.synapse.mediators.transform.XSLTMediator.performXSLT(XSLTMediator.java:216)
    at org.apache.synapse.mediators.transform.XSLTMediator.mediate(XSLTMediator.java:196)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:214)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
    at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:185)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:395)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:142)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
TID: [-1234] [] [2016-05-26 18:10:26,500]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: , WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8b, Direction: response {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-05-26 11:32:24,272]  WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} -  The running OS : Windows 8 is not a tested Operating System for running WSO2 Carbon {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
TID: [-1234] [] [2016-05-26 11:32:24,284]  WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} -  Carbon is configured to use the default keystore (wso2carbon.jks). To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile. {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
TID: [-1] [] [2016-05-26 11:32:24,315]  INFO {org.wso2.carbon.databridge.agent.thrift.AgentHolder} -  Agent created ! {org.wso2.carbon.databridge.agent.thrift.AgentHolder}

Configuration file:

input {
 stdin {}
    file {
       path => "C:\MyDocument\Project\SampleESBLogs\wso2carbon.log" 
        type => "wso2carbon"
        start_position => "beginning"
        codec => multiline {
                pattern => "(^\s*at .+)|^(?!TID).*$"
                negate => false
                what => "previous"
        }

    }
}
filter {

    if [type] == "wso2carbon" {
        grok {
            match => [ "message", "TID:%{SPACE}\[%{INT:log_SourceSystemId}\]%{SPACE}\[%{DATA:log_ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:log_MessageType}%{SPACE}{%{JAVACLASS:log_MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_Message}" ]
            add_tag => [ "grokked" ]        
        }

        if "grokked" in [tags] {
            grok {
                match => ["log_MessageType", "ERROR"]
                add_tag => [ "loglevelerror" ]
            }   
        }

        if !( "_grokparsefailure" in [tags] ) {
            grok{
                    match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
                    add_tag => [ "grokked" ]    
                }
        date {
                match => [ "TimeStamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
                target => "TimeStamp"
                timezone => "UTC"
            }
        }               
    }

    if ( "multiline" in [tags] ) {
        grok {
            match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
            add_tag => [ "multiline" ]
            tag_on_failure => [ "multiline" ]       
        }
        date {
                match => [ "TimeStamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
                target => "TimeStamp"
        }
    }

}

output {
       if [type] == "wso2carbon" {  
        if "loglevelerror" in [tags] {
            stdout { }
            http {
                url => "https://localhost:8086/messages"
                http_method => "post"
                format => "json"
                mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{log_MessageType}","MessageTitle","%{log_MessageTitle}","Message","%{log_Message}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
            }
        }
    }
}

Problem statement:

I want to give users a flexible option to decide which types of log entries should be sent to the API. With the current setup, only log entries of type ERROR are sent to the API.

How I do this at the moment:

Currently I do it as follows. In the filter section, I first check whether the grokked log entry has the ERROR level, and if it does, I add a tag to that entry.

if "grokked" in [tags] {
    grok {
        match => ["log_MessageType", "ERROR"]
        add_tag => [ "loglevelerror" ]
    }
}

In the output section, I check again with an if condition: if the parsed entry carries the required tag, it is passed on; otherwise it is dropped or ignored.

if "loglevelerror" in [tags] {
    stdout { }
    http {
        ....
    }
}

Now I want to check for other log levels as well. Is there a better way to do this? Otherwise I would have to add a near-identical grok/conditional pair for each level, differing only in the matched value.

To sum up: how can I give users an option, for example by commenting/uncommenting lines in my configuration or by any other means, to choose which log levels (INFO, WARN, ERROR, DEBUG) they want to send to the API?

You can skip the extra grok step and just use a conditional check in the output. You can check whether a field's value matches a value or is contained in an array.

Logstash conditionals reference

To check for the ERROR level only:

if [log_MessageType] == "ERROR" {
  # outputs
}

To send ERROR and WARN:

if [log_MessageType] in ["ERROR", "WARN"] {
  # outputs
}
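Putting this together with the output section from the question, the whole block might look like the sketch below. This is only an illustration built from the asker's own configuration; the array of levels on the `if` line is the single place a user has to edit (the `url` and `mapping` values are taken unchanged from the original output section):

```conf
output {
    if [type] == "wso2carbon" {
        # Edit this list to choose which log levels are forwarded to the API,
        # e.g. ["ERROR"] only, or ["ERROR", "WARN", "INFO", "DEBUG"] for all.
        if [log_MessageType] in ["ERROR", "WARN"] {
            stdout { }
            http {
                url => "https://localhost:8086/messages"
                http_method => "post"
                format => "json"
                # mapping => [...] unchanged from the original output section
            }
        }
    }
}
```

This removes the need for the tagging grok in the filter section entirely, since the output conditional reads `log_MessageType` directly.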

But be careful not to do something like:

if [log_MessageType] in ["ERROR"] {

This will not work as expected; see this question for more information.
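If only a single level is wanted, the safe form is a plain equality test rather than a one-element array (a sketch; the `in` operator against a single-element array is known to misbehave in some Logstash versions, which is what the linked question describes):

```conf
# For exactly one level, prefer == over `in ["ERROR"]`:
if [log_MessageType] == "ERROR" {
  # outputs
}
```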
