
How to make the logstash 2.3.2 configuration file more flexible

I am using Logstash 2.3.2 to read and parse the log files of WSO2 ESB. I can successfully parse the log entries and send them to an API in JSON format.

The log file contains entries with different log levels, such as INFO, ERROR, WARN, and DEBUG. Currently, a log entry is only sent on if its level is ERROR.

Sample log file:

TID: [-1234] [] [2016-05-26 11:22:34,366]  INFO {org.wso2.carbon.application.deployer.internal.ApplicationManager} -  Undeploying Carbon Application : CustomerService_CA_01_001_1.0.0... {org.wso2.carbon.application.deployer.internal.ApplicationManager}
TID: [-1234] [] [2016-05-26 11:22:35,539]  INFO {org.apache.axis2.transport.jms.ServiceTaskManager} -  Task manager for service : CustomerService_01_001 shutdown {org.apache.axis2.transport.jms.ServiceTaskManager}
TID: [-1234] [] [2016-05-26 11:22:35,545]  INFO {org.apache.axis2.transport.jms.JMSListener} -  Stopped listening for JMS messages to service : CustomerService_01_001 {org.apache.axis2.transport.jms.JMSListener}
TID: [-1234] [] [2016-05-26 11:22:35,549]  INFO {org.apache.synapse.core.axis2.ProxyService} -  Stopped the proxy service : CustomerService_01_001 {org.apache.synapse.core.axis2.ProxyService}
TID: [-1234] [] [2016-05-26 11:22:35,553]  INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} -  Removing Axis2 Service: CustomerService_01_001 {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [-1234] [] [2016-05-26 11:22:35,572]  INFO {org.apache.synapse.deployers.ProxyServiceDeployer} -  ProxyService named 'CustomerService_01_001' has been undeployed {org.apache.synapse.deployers.ProxyServiceDeployer}
TID: [-1234] [] [2016-05-26 18:10:26,465]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: LogaftervalidationWSAction: urn:mediateLogaftervalidationSOAPAction: urn:mediateLogaftervalidationMessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8bLogaftervalidationDirection: response {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-05-26 18:10:26,469]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: XPATH-LogLastNameWSAction: urn:mediateXPATH-LogLastNameSOAPAction: urn:mediateXPATH-LogLastNameMessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8bXPATH-LogLastNameDirection: responseXPATH-LogLastNameproperty_name LastName_Value = XPATH-LogLastNameEnvelope:
TID: [-1234] [] [2016-05-26 18:10:26,477] ERROR {org.apache.synapse.mediators.transform.XSLTMediator} -  The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null {org.apache.synapse.mediators.transform.XSLTMediator}
TID: [-1234] [] [2016-05-26 18:10:26,478] ERROR {org.apache.synapse.mediators.transform.XSLTMediator} -  Unable to perform XSLT transformation using : Value {name ='null', keyValue ='gov:CustomerService/01/xslt/CustomertoCustomerSchemaMapping.xslt'} against source XPath : //tns1:Customer reason : The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null {org.apache.synapse.mediators.transform.XSLTMediator}
org.apache.synapse.SynapseException: The evaluation of the XPath expression //tns1:Customer did not result in an OMNode : null
    at org.apache.synapse.util.xpath.SourceXPathSupport.selectOMNode(SourceXPathSupport.java:100)
    at org.apache.synapse.mediators.transform.XSLTMediator.performXSLT(XSLTMediator.java:216)
    at org.apache.synapse.mediators.transform.XSLTMediator.mediate(XSLTMediator.java:196)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:214)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
    at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
    at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
    at org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:185)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:395)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:142)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
TID: [-1234] [] [2016-05-26 18:10:26,500]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: , WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:f89e4244-7a95-46ff-9df2-3e296009bf8b, Direction: response {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-05-26 11:32:24,272]  WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} -  The running OS : Windows 8 is not a tested Operating System for running WSO2 Carbon {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
TID: [-1234] [] [2016-05-26 11:32:24,284]  WARN {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter} -  Carbon is configured to use the default keystore (wso2carbon.jks). To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile. {org.wso2.carbon.core.bootup.validator.util.ValidationResultPrinter}
TID: [-1] [] [2016-05-26 11:32:24,315]  INFO {org.wso2.carbon.databridge.agent.thrift.AgentHolder} -  Agent created ! {org.wso2.carbon.databridge.agent.thrift.AgentHolder}

Configuration file:

input {
 stdin {}
    file {
       path => "C:\MyDocument\Project\SampleESBLogs\wso2carbon.log" 
        type => "wso2carbon"
        start_position => "beginning"
        codec => multiline {
                pattern => "(^\s*at .+)|^(?!TID).*$"
                negate => false
                what => "previous"
        }

    }
}
filter {

    if [type] == "wso2carbon" {
        grok {
            match => [ "message", "TID:%{SPACE}\[%{INT:log_SourceSystemId}\]%{SPACE}\[%{DATA:log_ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:log_MessageType}%{SPACE}{%{JAVACLASS:log_MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_Message}" ]
            add_tag => [ "grokked" ]        
        }

        if "grokked" in [tags] {
            grok {
                match => ["log_MessageType", "ERROR"]
                add_tag => [ "loglevelerror" ]
            }   
        }

        if !( "_grokparsefailure" in [tags] ) {
            grok{
                    match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
                    add_tag => [ "grokked" ]    
                }
            date {
                    # field name and pattern matched to the grokked TimeStamp,
                    # e.g. "2016-05-26 11:22:34,366"
                    match => [ "TimeStamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
                    target => "TimeStamp"
                    timezone => "UTC"
                }
        }               
    }
    if ( "multiline" in [tags] ) {
        grok {
            match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
            add_tag => [ "multiline" ]
            tag_on_failure => [ "multiline" ]       
        }
        date {
                match => [ "TimeStamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
                target => "TimeStamp"
        }
    }

}

output {
       if [type] == "wso2carbon" {  
        if "loglevelerror" in [tags] {
            stdout { }
            http {
                url => "https://localhost:8086/messages"
                http_method => "post"
                format => "json"
                mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{log_MessageType}","MessageTitle","%{log_MessageTitle}","Message","%{log_Message}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
            }
        }
    }
}
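As a debugging aid while tuning the conditionals (not part of the question itself), the stdout output accepts the standard rubydebug codec, which pretty-prints each event with all of its fields and tags, so you can verify which entries actually pass:

```
output {
  # Pretty-print every event, including the fields and tags added by
  # grok, to verify which entries make it through the conditionals.
  stdout { codec => rubydebug }
}
```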

Problem statement:

I want to give users a flexible option so that they can decide which types of log entries should be sent to the API. With the current setup, only log entries of type ERROR are sent to the API.

How I am doing this at the moment:

Currently I do it as follows. In the filter, I first check whether the grokked log entry has the ERROR level, and if so, I add a tag to that entry.

if "grokked" in [tags] {
    grok {
        match => ["log_MessageType", "ERROR"]
        add_tag => [ "loglevelerror" ]
    }
}

In the output section, I check the condition again: if the parsed entry has the required tag, it is let through; otherwise it is dropped or ignored.

if "loglevelerror" in [tags] {
    stdout { }
    http {
        ....
    }
}

Now I want to check for the other log levels as well. Is there a better way to do this? Otherwise I will have to add several near-identical blocks with the same contents inside, differing only in their conditions.

To sum up: how can I provide an option so that users of my configuration can choose, by uncommenting lines or by any other means, which log types (INFO, WARN, ERROR, DEBUG) they want to send to the API?

You can skip that extra tagging step and just use conditional checks in the output. You can check whether a field's value is inside an array or matches a single value.

Logstash conditionals reference

To check only for the ERROR level:

if [log_MessageType] == "ERROR" {
  # outputs
}

To send both ERROR and WARN:

if [log_MessageType] in ["ERROR", "WARN"] {
  # outputs
}
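Putting this together with the output section from the question, the whole thing could look like the sketch below. The url and mapping are copied from the question's own config; the level list is the single place a user edits to choose what reaches the API:

```
output {
    if [type] == "wso2carbon" {
        # Edit this list to pick the levels that should be sent to the API.
        # (For a single level, prefer == instead of a one-element array;
        # see the caveat further down.)
        if [log_MessageType] in ["ERROR", "WARN"] {
            stdout { }
            http {
                url => "https://localhost:8086/messages"
                http_method => "post"
                format => "json"
                mapping => ["TimeStamp","%{TimeStamp}","MessageType","%{log_MessageType}","MessageTitle","%{log_MessageTitle}","Message","%{log_Message}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
            }
        }
    }
}
```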

But be careful not to do something like this:

if [log_MessageType] in ["ERROR"] {

This will not work as expected; see this question for more information.
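If you only need one level, or want to avoid the single-element-array pitfall altogether, the same selection can be written with plain boolean operators, which Logstash conditionals support:

```
# Equivalent selection without the "in" operator, safe for any
# number of levels:
if [log_MessageType] == "ERROR" or [log_MessageType] == "WARN" {
    # outputs
}
```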
