QingCloud launches ELK cluster service

2022-09-28


In life, we often look back on the past through photos and diaries. In the computer world, logs record everything.

Logs continuously record the data generated by operating systems, application services, business logic, and other scenarios. According to incomplete statistics, the world produces about 2 EB of log data every day. Faced with such a large volume of data, a centralized log management system becomes particularly important when you need to find specific pieces of information.

There are many tools, products, and services related to log management: rsyslog, syslog-ng, the commercial Splunk, Facebook's Scribe, Apache Chukwa, LinkedIn's Kafka, Cloudera's Flume, Elastic's ELK, and so on.

Among them, the open-source ELK stack has risen rapidly over the past two years to become the first choice for machine data analysis and real-time log processing. In short, ELK is the abbreviation of Elasticsearch, Logstash, and Kibana:

Elasticsearch is a real-time distributed search and analytics engine.

Logstash provides the ability to collect, transform, enrich, and output data.

Kibana provides a powerful visualization interface for Elasticsearch.

The ELK service launched by QingCloud this time integrates the three originally independent application components into one service and delivers it to users as a cloud application through AppCenter. The components automatically discover and configure one another, and the service supports one-click deployment as well as horizontal and vertical scaling of nodes, greatly reducing the complexity of building and manually configuring the components on your own.

What features does ELK on QingCloud bring?

ELK on QingCloud feature overview

The ELK service has been upgraded to the 5.x series: Elasticsearch and Kibana are at version 5.5.1 and Logstash at 5.4.3, and one-click upgrades to new versions are also supported.

Elasticsearch's Chinese word segmentation is comprehensively enhanced: the IK Analysis plug-in is integrated and bundled with the jieba segmentation dictionary and IK's own Sogou dictionary, and users can also upload custom dictionaries.

Deep integration with QingStor object storage: Elasticsearch clusters can be backed up by generating cluster snapshots to QingStor object storage and restoring data from those snapshots, and Logstash provides QingStor input/output plug-ins.

The ElasticHD visualization plug-in is added so that users can search and analyze Elasticsearch data through the browser.

One-click installation and deployment of the cluster, with support for horizontal and vertical scaling of nodes and monitoring of key cluster metrics.

With so many features, in which scenarios can the ELK service be applied?

Log collection, storage, retrieval, and analysis

ELK on QingCloud can ingest log data from a variety of data sources (file, log4j, syslog, QingStor object storage, Kafka, Elasticsearch, etc.) through Logstash input plug-ins and store it in Elasticsearch.

The Logstash node is configured with the HTTP input plug-in by default. The following uses this plug-in as an example (users may choose other input plug-ins) to feed in test data over HTTP. The steps are as follows:

First, find the IP address of any Logstash node on the cluster details page and execute the following command:

curl -d "this is a simulated error log" http://<logstash_node_ip>:9700

This sends a simulated log entry to Logstash, and Logstash forwards the input data to Elasticsearch by default.
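
Before switching to Kibana, you can also confirm from the command line that the data has reached Elasticsearch. This is a minimal check that assumes Elasticsearch listens on its default port 9200; the node IP is again taken from the cluster details page:

# List the indices; a logstash-* index should appear once the test log is ingested
curl 'http://<elasticsearch_node_ip>:9200/_cat/indices?v'

# Search across all indices for the word "error" from the simulated log
curl 'http://<elasticsearch_node_ip>:9200/_search?q=error&pretty'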

Next, you can find the log data that Logstash wrote to Elasticsearch in Kibana:

Open the web interface provided by the Kibana node in the browser (http://<kibana_node_ip>:5601). Kibana opens the index pattern configuration page by default, as shown in the figure; click Create directly.

Click the Discover menu item on the left to display the recently received logs, enter error in the search bar, and click the search button on the right. As shown in the figure, error is highlighted and the test is successful.

IK Analysis plug-in: Chinese word segmentation and custom dictionaries

Suppose a user wants to retrieve the keyword "Youfan Technology" in the logs through Elasticsearch. Without a dictionary, every log containing the individual words "You", "Fan", or "Technology" would be returned, which greatly degrades the search results. With a custom dictionary uploaded and configured, logs containing the exact keyword "Youfan Technology" can be found precisely.

To achieve better Chinese word segmentation in Elasticsearch, ELK on QingCloud integrates the IK Analysis plug-in and bundles the jieba segmentation dictionary and IK's own Sogou dictionary with it; users can also upload custom dictionaries. For how to use the IK Analysis plug-in, please refer to its documentation.
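
To see the effect of the segmenter, you can run the text through Elasticsearch's _analyze API. The sketch below assumes the default Elasticsearch port 9200 and the ik_max_word analyzer name registered by the IK Analysis plug-in; the sample text 优帆科技 corresponds to the keyword "Youfan Technology" mentioned above:

# Segment the sample text with the IK analyzer (fine-grained mode)
curl -XGET 'http://<elasticsearch_node_ip>:9200/_analyze' -H 'Content-Type: application/json' -d '
{
  "analyzer": "ik_max_word",
  "text": "优帆科技"
}'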

To upload a custom dictionary, the steps are as follows:

First, find the IP address of any Logstash node on the cluster details page.

Second, upload the custom dictionary with curl -T <dictionary file> http://<logstash_node_ip>/dicts/. After the upload succeeds, you can visit http://<logstash_node_ip>/dicts/ to view the dictionary file.

Finally, switch to the configuration parameters tab on the cluster details page, select the Elasticsearch node, set the remote_ext_dict parameter to the accessible URL of the custom dictionary (as in the example above), save the configuration, and then restart the Elasticsearch nodes of the cluster from the cluster list page.

Note: after saving the configuration, please manually restart the Elasticsearch nodes of the cluster from the cluster list page.
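
Putting the steps together, a complete example looks roughly like this (my_dict.dic is a hypothetical file name; the dictionary is a plain-text, UTF-8 file with one term per line):

# Create a hypothetical custom dictionary containing the keyword from the example above
printf '优帆科技\n' > my_dict.dic

# Upload it to the dictionary directory served by the Logstash node
curl -T my_dict.dic 'http://<logstash_node_ip>/dicts/'

# Then set remote_ext_dict on the Elasticsearch nodes to the resulting URL, e.g.
#   remote_ext_dict = http://<logstash_node_ip>/dicts/my_dict.dic
# and restart the Elasticsearch nodes from the cluster list page.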

Deep integration of the ELK service with QingStor object storage

Elasticsearch integration with QingStor object storage

By integrating with object storage, Elasticsearch can back up cluster data and migrate data across regions.

The Elasticsearch service provided by QingCloud can generate cluster snapshots, back them up to QingStor, and restore from them when necessary. Because the information stored in a snapshot is not bound to a specific cluster or cluster name, a snapshot generated in one cluster can be restored into another, for example restoring a snapshot generated by an ES cluster in the pek3a zone into an ES cluster in the sh1a zone.
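
As an illustration of the snapshot workflow, the sketch below uses Elasticsearch's standard snapshot API; the repository type name qingstor and its settings (bucket, region, access_key, secret_key) are assumptions about the bundled QingStor repository plug-in and should be checked against the service documentation:

# Register a snapshot repository backed by QingStor (type and setting names are assumptions)
curl -XPUT 'http://<elasticsearch_node_ip>:9200/_snapshot/qingstor_backup' -H 'Content-Type: application/json' -d '
{
  "type": "qingstor",
  "settings": {
    "bucket": "my-es-backup",
    "region": "pek3a",
    "access_key": "<access key>",
    "secret_key": "<secret key>"
  }
}'

# Take a snapshot of the whole cluster into that repository
curl -XPUT 'http://<elasticsearch_node_ip>:9200/_snapshot/qingstor_backup/snapshot_1?wait_for_completion=true'

# Restore the snapshot, possibly on a different cluster that registers the same repository
curl -XPOST 'http://<elasticsearch_node_ip>:9200/_snapshot/qingstor_backup/snapshot_1/_restore'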

As the data in Elasticsearch keeps growing, so does the storage cost of the cluster. After Logstash is integrated with object storage, it can not only quickly move data from QingStor object storage into Elasticsearch for analysis, but also export data from Elasticsearch, or data collected by Logstash, into QingStor object storage for long-term retention. By periodically deleting the cold data in the running cluster, the storage cost of using ELK can be greatly reduced.
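
As a simple illustration of removing cold data once it has been backed up to QingStor (the index name below is hypothetical, following Logstash's default daily index pattern):

# Delete a cold daily index after it has been snapshotted to QingStor
curl -XDELETE 'http://<elasticsearch_node_ip>:9200/logstash-2017.08.01'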

Usage of the Logstash QingStor input/output plug-ins

Logstash integrates with QingCloud object storage through the QingStor Logstash input/output plug-ins. Users can easily:

Ingest data from QingStor object storage into Elasticsearch through the Logstash QingStor input plug-in.

Through the Logstash QingStor output plug-in, in addition to sending data collected from various sources to the Elasticsearch cluster, save the data to QingStor object storage for long-term retention.
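
The sketch below shows what a Logstash pipeline combining the QingStor input with an Elasticsearch output might look like. The qingstor block's option names (access_key_id, secret_access_key, bucket, region) are assumptions modeled on Logstash's other object-storage plug-ins; check the QingStor plug-in documentation for the exact names:

# Hypothetical pipeline: read objects from a QingStor bucket and index them into Elasticsearch
input {
  qingstor {
    access_key_id     => "<access key>"
    secret_access_key => "<secret key>"
    bucket            => "my-log-bucket"
    region            => "pek3a"
  }
}
output {
  elasticsearch {
    hosts => ["<elasticsearch_node_ip>:9200"]
  }
}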

Say goodbye to cumbersome cluster configuration: deploy ELK with one click

As QingCloud's first end-to-end solution integrating multiple products, the ELK service has been launched in the QingCloud AppCenter. Interested users can deploy the ELK service with one click through AppCenter.

Step 1: basic settings

Fill in the service name and description, and select the version.

Step 2: Elasticsearch node settings

Fill in the Elasticsearch node configuration: CPU, memory, number of nodes, host type, and data disk size.

Step 3: Kibana node settings

Fill in the Kibana node configuration: CPU, memory, number of nodes, and host type.

Step 4: Logstash node settings

Fill in the Logstash node configuration: CPU, memory, number of nodes, host type, and data disk size.

Step 5: network settings

For security reasons, all clusters must be deployed in a private network. Please select a private network that you have created.

Step 6: service environment parameter settings

After the cluster is created successfully, click it on the cluster list page to view the cluster details. You can see that the cluster consists of three roles: Elasticsearch nodes, Kibana nodes, and Logstash nodes.

Note: Kibana and Logstash nodes are optional, and users can create a cluster containing only Elasticsearch nodes. If you want to use Kibana, Head, or ElasticHD, you need to create a Kibana node; if you need to use Logstash, upload a custom dictionary, or view the logs of each Elasticsearch node, you need to create a Logstash node.
