
Contents

1. Background

2. Design

3. Implementation

Filebeat configuration

K8S SideCar yaml

Logstash configuration


1. Background

We need to collect the trace logs and the application logs of services running in containers into Kafka. Note that the trace logs and the app logs must be stored in two different topics of the same Kafka cluster: APP_TOPIC and TRACE_TOPIC.

2. Design

The flow is as follows:

(Figure: log collection flow)

Notes:

APP_TOPIC: holds the services' application logs.

TRACE_TOPIC: holds the trace logs emitted by the programs, used to troubleshoot the call chain of a single request.

In words:

Filebeat collects the logs inside the container (this requires agreeing on some conventions; our container log paths are laid out as shown below). Filebeat harvests logs from two different directories and ships each to its corresponding topic; the Kafka topics are then consumed, stored, and finally presented.

/home/service/
└── logs
    ├── app
    │   └── pass
    │       ├── 10.246.84.58-paas-biz-784c68f79f-cxczf.log
    │       ├── 1.log
    │       ├── 2.log
    │       ├── 3.log
    │       ├── 4.log
    │       └── 5.log
    └── trace
        ├── 1.log
        ├── 2.log
        ├── 3.log
        ├── 4.log
        ├── 5.log
        └── trace.log

4 directories, 13 files

3. Implementation

Now for the details.

Filebeat configuration

Configuration notes:

Some of the Filebeat settings are parameterized as variables; the Kubernetes YAML below defines those variables and sets their values.

In particular, tags: ["trace-log"] is combined with when.contains so that the logs from each input are routed to the matching Kafka topic.

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/service/logs/trace/*.log
  fields_under_root: true
  fields:
    topic: "${TRACE_TOPIC}"
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: message
  scan_frequency: 10s
  max_bytes: 10485760
  harvester_buffer_size: 1638400
  ignore_older: 24h
  close_inactive: 1h
  tags: ["trace-log"]
  processors:
    - decode_json_fields:
        fields: ["message"]
        process_array: false
        max_depth: 1
        target: ""
        overwrite_keys: true
- type: log
  enabled: true
  paths:
    - /home/service/logs/app/*/*.log
  fields:
    topic: "${APP_TOPIC}"
  scan_frequency: 10s
  max_bytes: 10485760
  harvester_buffer_size: 1638400
  close_inactive: 1h
  tags: ["app-log"]

output.kafka:
  enabled: true
  codec.json:
    pretty: true  # pretty-print the JSON payload; defaults to false
  compression: gzip
  hosts: "${KAFKA_HOST}"
  topics:
    - topic: "${TRACE_TOPIC}"
      bulk_max_duration: 2s
      bulk_max_size: 2048
      required_acks: 1
      max_message_bytes: 10485760
      when.contains:
        tags: "trace-log"
    - topic: "${APP_TOPIC}"
      bulk_flush_frequency: 0
      bulk_max_size: 2048
      compression: gzip
      compression_level: 4
      group_id: "k8s_filebeat"
      grouping_enabled: true
      max_message_bytes: 10485760
      partition.round_robin:
        reachable_only: true
      required_acks: 1
      workers: 2
      when.contains:
        tags: "app-log"

K8S SideCar yaml

Configuration notes:

This YAML defines two containers: container 1 is nginx (as an example) and container 2 is Filebeat. It also defines an emptyDir volume named logs, mounted into both containers at /home/service/logs.

Three environment variables are then defined on the Filebeat container, so Filebeat can be reconfigured flexibly just by editing the YAML:

TRACE_TOPIC: topic for the trace logs

APP_TOPIC: topic for the app logs

KAFKA_HOST: Kafka broker addresses

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
        - name: uhub-registry
      containers:
        - image: uhub.service.ucloud.cn/sre-paas/nginx:v1
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /home/service/logs
              name: logs
        - env:
            - name: TRACE_TOPIC
              value: pro_platform_monitor_log
            - name: APP_TOPIC
              value: platform_logs
            - name: KAFKA_HOST
              value: '["xxx.xxx.xxx.xxx:9092","xx.xxx.xxx.xxx:9092","xx.xxx.xxx.xxx:9092"]'
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          image: xxx.xxx.xxx.cn/sre-paas/filebeat-v2:8.11.2
          imagePullPolicy: Always
          name: filebeat
          resources:
            limits:
              cpu: 150m
              memory: 200Mi
            requests:
              cpu: 50m
              memory: 100Mi
          securityContext:
            privileged: true
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /home/service/logs
              name: logs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir: {}
          name: logs
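Filebeat resolves the `${VAR}` placeholders in filebeat.yml from the container's environment, which is what ties the env entries above to the config. The behavior can be sketched roughly like this (simplified; Filebeat's real resolver also supports defaults such as `${VAR:default}`, which this sketch omits):

```python
import re

# Simplified sketch of how ${VAR} placeholders in filebeat.yml are
# resolved from the container environment. Unresolved placeholders are
# left as-is here; real Filebeat errors out unless a default is given.
def expand(text, env):
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), m.group(0)), text)

env = {
    "TRACE_TOPIC": "pro_platform_monitor_log",
    "APP_TOPIC": "platform_logs",
}
print(expand('topic: "${TRACE_TOPIC}"', env))  # topic: "pro_platform_monitor_log"
```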

Logstash配置

input {
  kafka {
    type => "platform_logs"
    bootstrap_servers => "xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092"
    topics => ["platform_logs"]
    group_id => 'platform_logs'
    client_id => 'open-platform-logstash-logs'
  }
  kafka {
    type => "platform_pre_log"
    bootstrap_servers => "xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092"
    topics => ["pre_platform_logs"]
    group_id => 'pre_platform_logs'
    client_id => 'open-platform-logstash-pre'
  }
  kafka {
    type => "platform_nginx_log"
    bootstrap_servers => "xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092,xxx.xxx.xxx.xxx:9092"
    topics => ["platform_nginx_log"]
    group_id => 'platform_nginx_log'
    client_id => 'open-platform-logstash-nginx'
  }
}
filter {
  if [type] == "platform_pre_log" {
    grok {
      match => { "message" => "\[%{IP}-(?<service>[a-zA-Z-]+)-%{DATA}\]" }
    }
  }
  if [type] == "platform_logs" {
    grok {
      match => { "message" => "\[%{IP}-(?<service>[a-zA-Z-]+)-%{DATA}\]" }
    }
  }
}
output {
  if [type] == "platform_logs" {
    elasticsearch {
      id => "platform_logs"
      hosts => ["http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200"]
      index => "log-xxx-prod-%{service}-%{+yyyy.MM.dd}"
      user => "logstash_transformer"
      password => "xxxxxxx"
      template_name => "log-xxx-prod"
      manage_template => "true"
      template_overwrite => "true"
    }
  }
  if [type] == "platform_pre_log" {
    elasticsearch {
      id => "platform_pre_logs"
      hosts => ["http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200"]
      index => "log-xxx-pre-%{service}-%{+yyyy.MM.dd}"
      user => "logstash_transformer"
      password => "xxxxxxx"
      template_name => "log-xxx-pre"
      manage_template => "true"
      template_overwrite => "true"
    }
  }
  if [type] == "platform_nginx_log" {
    elasticsearch {
      id => "platform_nginx_log"
      hosts => ["http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200","http://xxx.xxx.xxx.xxx:9200"]
      index => "log-platform-nginx-%{+yyyy.MM.dd}"
      user => "logstash_transformer"
      password => "xxxxxxx"
      template_name => "log-platform-nginx"
      manage_template => "true"
      template_overwrite => "true"
    }
  }
}
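The grok pattern used in the filter extracts the service name that sits between the pod IP and the pod-name suffix. It can be approximated in Python to see what it captures (grok's %{IP} and %{DATA} are richer than these simple stand-ins, and the log line below is a made-up sample):

```python
import re

# Rough Python equivalent of the grok pattern
#   \[%{IP}-(?<service>[a-zA-Z-]+)-%{DATA}\]
# %{IP} is approximated with a plain IPv4 regex and %{DATA} with a lazy
# wildcard; grok's built-in definitions are more thorough.
PATTERN = re.compile(r"\[\d{1,3}(?:\.\d{1,3}){3}-(?P<service>[a-zA-Z-]+)-(.*?)\]")

line = "[10.246.84.58-paas-biz-784c68f79f-cxczf] some log message"  # made-up sample
m = PATTERN.search(line)
print(m.group("service"))  # paas-biz
```

Because `[a-zA-Z-]+` cannot match digits, the capture stops before the replica-set hash, so `service` comes out as `paas-biz` rather than the full pod name.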

If this helped you, please give it a like or bookmark it. If you have any questions, feel free to message me or leave a comment; I'll reply as soon as I see it.
