
wso2~apim_metrics configuration and secondary development

apim_metrics is the analytics and diagnostics log, and enabling it is well worth doing: it records the details of every API call your applications make. This is different from a custom event-publisher in WSO2, and the distinction is worth keeping clear.

Enabling the diagnostic log

  1. To enable the metrics log, edit the log4j2.properties file located at wso2am-4.x.x/repository/conf and add APIM_METRICS_APPENDER to the appender list:
appenders = APIM_METRICS_APPENDER, .... (list of other available appenders)

  2. Register a rolling-file appender named APIM_METRICS_APPENDER for ELK. ELK is the most common log collection and analysis platform, but you can also wire the log up to other cloud platforms:
appender.APIM_METRICS_APPENDER.type = RollingFile
appender.APIM_METRICS_APPENDER.name = APIM_METRICS_APPENDER
appender.APIM_METRICS_APPENDER.fileName = ${sys:carbon.home}/repository/logs/apim_metrics.log
appender.APIM_METRICS_APPENDER.filePattern = ${sys:carbon.home}/repository/logs/apim_metrics-%d{MM-dd-yyyy}-%i.log
appender.APIM_METRICS_APPENDER.layout.type = PatternLayout
appender.APIM_METRICS_APPENDER.layout.pattern = %d{HH:mm:ss,SSS} [%X{ip}-%X{host}] [%t] %5p %c{1} %m%n
appender.APIM_METRICS_APPENDER.policies.type = Policies
appender.APIM_METRICS_APPENDER.policies.time.type = TimeBasedTriggeringPolicy
appender.APIM_METRICS_APPENDER.policies.time.interval = 1
appender.APIM_METRICS_APPENDER.policies.time.modulate = true
appender.APIM_METRICS_APPENDER.policies.size.type = SizeBasedTriggeringPolicy
appender.APIM_METRICS_APPENDER.policies.size.size=1000MB
appender.APIM_METRICS_APPENDER.strategy.type = DefaultRolloverStrategy
appender.APIM_METRICS_APPENDER.strategy.max = 10

  3. Add the reporter logger to the logger list:
loggers = reporter, ...(list of other available loggers)

  4. Configure the reporter logger to write to the appender:
logger.reporter.name = org.wso2.am.analytics.publisher.reporter.elk
logger.reporter.level = INFO
logger.reporter.additivity = false
logger.reporter.appenderRef.APIM_METRICS_APPENDER.ref = APIM_METRICS_APPENDER

By default, the apim_metrics.log file is rolled each day, or whenever it reaches the 1000 MB size limit. Only 10 revisions are kept; older ones are deleted automatically. You can change this behavior by updating the configuration shown in step 2 above.

  5. Structure of the log data

End user → application you built → WSO2 API

  • apiName: the API name
  • proxyResponseCode: the status code returned by the API's backend service
  • destination: the backend address on the gateway
  • apiContext: the full path of the API
  • applicationId: the application ID
  • applicationName: the application name
  • userIp: the caller's IP address

A sample event:
{"apiName":"user-register","proxyResponseCode":200,"destination":"//test.ddd.com/user-
register","apiCreatorTenantDomain":"carbon.super","platform":"Other","apiMethod":"GET","apiVersion":"1.0.0","gatewayType":"SYNAPSE","apiCreator":"admin","responseCacheHit":false,"backendLatency":111,"correlationId":
"0e5482a5-b281-4b91-a728-1b90f443110c","requestMediationLatency":389,"keyType":"PRODUCTION","apiId":"d642741c-b34a-4fde-8e47-5ef70455f638","applicationName":"test1","targetResponseCode":200,"requestTimestamp":"2025-
05-19T02:01:28.765Z","applicationOwner":"admin","userAgent":"PostmanRuntime","userName":"admin@carbon.super","apiResourceTemplate":"/*","regionId":"default","responseLatency":511,"responseMediationLatency":11,"userI
p":"111.1.1.2","apiContext":"/user/1.0.0","applicationId":"a18b9944-5ddf-4708-9922-a45e04474f81","apiType":"HTTP","properties":{"commonName":"N/A","responseContentType":"application/json","subtype":"D
EFAULT","isEgress":false,"apiContext":"/user-register/1.0.0","responseSize":0,"userName":"admin@carbon.super"}}
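If you want to post-process these entries outside of ELK, the following minimal sketch shows how a couple of fields can be pulled out of one log line. The MetricsLogParser class name and the shortened sample line are hypothetical; it assumes Gson is on the classpath and that the line format matches the log.info call in ELKCounterMetric shown further below:

import com.google.gson.Gson;
import com.google.gson.JsonObject;

/**
 * Minimal sketch: extract a few fields from one apim_metrics.log entry.
 * Lines look like "apimMetrics: <metric-name>, properties :{...json...}".
 */
public class MetricsLogParser {

  private static final Gson gson = new Gson();

  public static void main(String[] args) {
    // hypothetical, shortened sample line
    String line = "apimMetrics: apim:response, properties :"
        + "{\"apiName\":\"user-register\",\"responseLatency\":511}";
    // the JSON payload starts at the first '{'
    JsonObject event = gson.fromJson(line.substring(line.indexOf('{')), JsonObject.class);
    System.out.println(event.get("apiName").getAsString()
        + " took " + event.get("responseLatency").getAsLong() + " ms");
  }
}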

Publishing to other systems

You can do secondary development on the module /home/wso2carbon/wso2am-4.5.0/repository/components/plugins/org.wso2.am.analytics.publisher.client_1.2.23.jar. It already integrates remote publishing and ELK logging, and we can extend it with a Kafka publisher. Once the code is extended, build it and overwrite the original jar.

Kafka integration flow

  • Extend org.wso2.am.analytics.publisher.reporter.elk.ELKCounterMetric.java as follows:
    @Override
    public int incrementCount(MetricEventBuilder builder) throws MetricReportingException {
        // build the metric event and serialize it to a single-line JSON string
        Map<String, Object> event = builder.build();
        String jsonString = gson.toJson(event);
        String jsonStringResult = jsonString.replaceAll("[\r\n]", "");
        // keep the original ELK behavior: write the event to apim_metrics.log
        log.info("apimMetrics: " + name.replaceAll("[\r\n]", "") + ", properties :" +
                jsonStringResult);
        // additionally publish the event to Kafka
        KafkaMqProducer.publishEvent("apim-metrics", jsonStringResult);

        return 0;
    }
    
Rebuild the module:
$ mvn clean install -DskipTests -Dcheckstyle.skip

  • KafkaMqProducer.java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

/**
 * Kafka producer.
 */
public class KafkaMqProducer {

  // kafka.host is read through the module's own configuration helper
  private static final String BOOTSTRAP_SERVER = ConfigFactory.getInstance().getStrPropertyValue("kafka.host");
  private static final Logger logger = LogManager.getLogger(KafkaMqProducer.class);
  private static KafkaProducer<String, String> producer;
  private static final ExecutorService executorService = Executors.newFixedThreadPool(4);

  // lazily create a single shared producer; synchronized so concurrent
  // executor threads cannot create two instances
  private static synchronized KafkaProducer<String, String> getProducer() {
    if (producer == null) {
      // reset the thread context class loader so the Kafka client
      // resolves its classes correctly inside the OSGi runtime
      resetThreadContext();
      producer = new KafkaProducer<>(getProperties());
    }
    return producer;
  }

  public static void publishEvent(String topic, String value) {
    // hand the send off to a worker thread so the gateway request path is not blocked
    executorService.execute(() -> {
      try {
        ProducerRecord<String, String> eventRecord = new ProducerRecord<>(topic, value);

        // send data asynchronously; the callback only reports failures
        getProducer().send(eventRecord, (recordMetadata, e) -> {
          if (e != null) {
            logger.error("kafka.send.error", e);
          }
        });
      } catch (Exception ex) {
        logger.error("kafka.error", ex);
      }
    });
  }

  private static void resetThreadContext() {
    Thread.currentThread().setContextClassLoader(null);
  }

  public static Properties getProperties() {
    Properties properties = new Properties();
    properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVER);
    properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    properties.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, "16384");
    return properties;
  }
}
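To check that events actually arrive on the topic, a throwaway consumer like the sketch below can be used. The bootstrap address and group id are assumptions; the topic name apim-metrics matches the publishEvent call above:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

/**
 * Minimal sketch: consume apim-metrics events for verification.
 */
public class ApimMetricsConsumer {

  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
    props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "apim-metrics-check");      // assumption
    props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList("apim-metrics"));
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> record : records) {
          System.out.println(record.value()); // one JSON metric event per record
        }
      }
    }
  }
}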

Integrating third-party components

If you need to integrate third-party components such as Kafka or RabbitMQ, put their original jars in /home/wso2carbon/wso2am-4.5.0/lib. You can treat this directory as a shared one: jars placed here can be loaded by the other modules, much like modules in JBoss, except that on this OSGi platform you do not have to declare them explicitly with a jboss-deployment-structure.xml. If you deploy with Docker, add these jars on top of the official base image.

Dockerfile

# Based on the official WSO2 APIM image
FROM wso2/wso2am:4.5.0

# Third-party jars go into the shared lib directory
COPY lib/*.jar /home/wso2carbon/wso2am-4.5.0/lib/

# Business plugin jars replace the target JAR files. After rebuilding the
# Docker image, update the sha256 value in values.yaml so the cluster uses
# the latest image.
COPY plugins/*.jar /home/wso2carbon/wso2am-4.5.0/repository/components/plugins/
