This task involves setting up a centralized logging system using the ELK stack (Elasticsearch, Logstash, Kibana) and deploying Filebeat as a log shipper to forward logs from multiple application servers. The entire process is automated with Ansible, ensuring a consistent, scalable, and efficient deployment.
Create an Ansible inventory file defining the ELK and client servers.
[elk]
elk-master ansible_host=192.168.1.100 ansible_user=ubuntu
[app_servers]
app1 ansible_host=192.168.1.101 ansible_user=ubuntu
app2 ansible_host=192.168.1.102 ansible_user=ubuntu
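Before running any playbooks, it can be worth confirming that Ansible can reach every host in the inventory. A minimal check playbook (the filename ping.yml is just an illustration) might look like this:

- hosts: all
  gather_facts: no
  tasks:
    - name: Verify SSH connectivity to every host in the inventory
      ping:

Run it with ansible-playbook -i inventory ping.yml; any unreachable host fails immediately rather than partway through the ELK installation.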
Create a playbook install_elasticsearch.yml to install and configure Elasticsearch.
- hosts: elk
  become: yes
  tasks:
    - name: Install dependencies
      apt:
        name: [apt-transport-https, openjdk-11-jdk]
        state: present
    - name: Add Elasticsearch GPG key
      apt_key:
        url: https://artifacts.elastic.co/GPG-KEY-elasticsearch
        state: present
    - name: Add Elasticsearch repository
      copy:
        dest: /etc/apt/sources.list.d/elastic-7.x.list
        content: "deb https://artifacts.elastic.co/packages/7.x/apt stable main"
    - name: Install Elasticsearch
      apt:
        name: elasticsearch
        state: present
        update_cache: yes
    - name: Configure Elasticsearch
      lineinfile:
        path: /etc/elasticsearch/elasticsearch.yml
        line: "{{ item }}"
      with_items:
        - "network.host: 0.0.0.0"
        - "http.port: 9200"
        - "cluster.name: elk-cluster"
        - "node.name: elk-master"
    - name: Start Elasticsearch service
      systemd:
        name: elasticsearch
        enabled: yes
        state: started
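If you want the play to block until Elasticsearch is actually accepting connections before moving on, an optional task such as the following (not part of the original playbook, shown as a sketch) can be appended after the service is started:

    - name: Wait for Elasticsearch to accept connections on port 9200
      wait_for:
        port: 9200
        delay: 5
        timeout: 120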
Create a playbook install_logstash.yml to install and configure Logstash.
- hosts: elk
  become: yes
  tasks:
    - name: Install Logstash
      apt:
        name: logstash
        state: present
    - name: Configure Logstash pipeline
      copy:
        dest: /etc/logstash/conf.d/logstash.conf
        content: |
          input {
            beats {
              port => 5044
            }
          }
          filter {
            mutate {
              # Beats 7.x ships "host" as an object, so record its name sub-field
              add_field => { "log_source" => "%{[host][name]}" }
            }
          }
          output {
            elasticsearch {
              hosts => ["http://localhost:9200"]
              index => "logs-%{+YYYY.MM.dd}"
            }
          }
    - name: Restart Logstash
      systemd:
        name: logstash
        enabled: yes
        state: restarted
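Logstash refuses to start on a malformed pipeline, so it can be useful to validate the configuration before restarting the service. A sketch of such a task, assuming the default Debian package layout under /usr/share/logstash and /etc/logstash:

    - name: Validate the Logstash pipeline configuration before restarting
      command: /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit
      changed_when: false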
Create a playbook install_kibana.yml to install and configure Kibana.
- hosts: elk
  become: yes
  tasks:
    - name: Install Kibana
      apt:
        name: kibana
        state: present
    - name: Configure Kibana
      lineinfile:
        path: /etc/kibana/kibana.yml
        line: "{{ item }}"
      with_items:
        - "server.host: '0.0.0.0'"
        - "elasticsearch.hosts: ['http://localhost:9200']"
    - name: Start Kibana service
      systemd:
        name: kibana
        enabled: yes
        state: started
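Kibana can take a minute or two to come up after the service starts. An optional follow-up task (a sketch; /api/status is Kibana's standard status endpoint) can poll it until it responds:

    - name: Wait for Kibana to respond on its status endpoint
      uri:
        url: http://localhost:5601/api/status
        status_code: 200
      register: kibana_status
      retries: 12
      delay: 10
      until: kibana_status.status == 200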
Create a playbook install_filebeat.yml to install and configure Filebeat.
- hosts: app_servers
  become: yes
  tasks:
    - name: Download Filebeat
      get_url:
        # Substitute a concrete Filebeat version for "7.x" before running
        url: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.x-amd64.deb
        dest: /tmp/filebeat.deb
    - name: Install Filebeat
      apt:
        deb: /tmp/filebeat.deb
    - name: Configure Filebeat
      copy:
        dest: /etc/filebeat/filebeat.yml
        content: |
          filebeat.inputs:
            - type: log
              paths:
                - /var/log/syslog
                - /var/log/auth.log
          output.logstash:
            hosts: ["192.168.1.100:5044"]
    - name: Start Filebeat
      systemd:
        name: filebeat
        enabled: yes
        state: started
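Filebeat ships a built-in self-test. An optional task like the one below (a sketch) confirms that each app server can actually reach Logstash on 192.168.1.100:5044 before you start hunting for missing logs in Kibana:

    - name: Confirm Filebeat can reach the configured Logstash output
      command: filebeat test output
      changed_when: false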
Run the playbooks in order to set up the centralized logging system.
ansible-playbook -i inventory install_elasticsearch.yml
ansible-playbook -i inventory install_logstash.yml
ansible-playbook -i inventory install_kibana.yml
ansible-playbook -i inventory install_filebeat.yml
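If you prefer a single entry point, the four playbooks can also be chained from a wrapper playbook (a hypothetical site.yml) using import_playbook and run with ansible-playbook -i inventory site.yml:

- import_playbook: install_elasticsearch.yml
- import_playbook: install_logstash.yml
- import_playbook: install_kibana.yml
- import_playbook: install_filebeat.yml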
To verify that Elasticsearch is up and the cluster is healthy, run the following command on the ELK server:
curl -X GET "http://localhost:9200/_cluster/health?pretty"
Check that Logstash is listening on port 5044:
netstat -tulnp | grep 5044
Open a web browser and go to:
http://<ELK_SERVER_IP>:5601
Confirm that the Kibana UI loads.
This project automates the deployment of a centralized logging system using Ansible, ensuring logs from multiple servers are collected, processed, stored, and visualized efficiently. The solution improves troubleshooting, security auditing, and performance monitoring across an enterprise infrastructure.
In this extension of the Centralized Logging Setup, we integrate AWS CloudWatch and S3 for log storage, monitoring, and alerting. This allows us to retain logs durably in S3, monitor them centrally in CloudWatch, and trigger alerts when error rates spike.
We create an IAM user with permissions to push logs to CloudWatch Logs and store them in S3.
Use the following JSON policy for CloudWatch and S3 permissions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::centralized-log-storage",
        "arn:aws:s3:::centralized-log-storage/*"
      ]
    }
  ]
}
Attach this policy to an IAM user and generate AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
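The centralized-log-storage bucket referenced in the policy must exist before anything can write to it. If you want bucket creation automated as well, a sketch using the amazon.aws collection (installed separately, e.g. with ansible-galaxy collection install amazon.aws) could look like this:

- hosts: localhost
  connection: local
  tasks:
    - name: Ensure the centralized log bucket exists
      amazon.aws.s3_bucket:
        name: centralized-log-storage
        state: present
        region: us-east-1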
Run the following Ansible playbook to install the AWS CLI and configure credentials.
install_aws_cli.yml
- hosts: elk
  become: yes
  tasks:
    - name: Install AWS CLI
      apt:
        name: awscli
        state: present
    - name: Ensure the .aws directory exists
      file:
        path: /root/.aws
        state: directory
        mode: '0700'
    - name: Configure AWS credentials
      template:
        src: aws_credentials.j2
        dest: /root/.aws/credentials
        mode: '0600'
aws_credentials.j2
[default]
aws_access_key_id = {{ aws_access_key }}
aws_secret_access_key = {{ aws_secret_key }}
region = us-east-1
Replace aws_access_key and aws_secret_key with real AWS credentials when running the ansible-playbook command, for example via extra variables (-e) or a vault-encrypted vars file.
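Rather than typing the keys on the command line (where they can end up in shell history), one option is to keep them in a vars file encrypted with ansible-vault; the path and values below are placeholders:

# group_vars/elk/vault.yml (hypothetical path; encrypt with: ansible-vault encrypt group_vars/elk/vault.yml)
aws_access_key: "REPLACE_WITH_ACCESS_KEY_ID"
aws_secret_key: "REPLACE_WITH_SECRET_ACCESS_KEY"

Playbooks targeting the elk group then pick these variables up automatically when run with --ask-vault-pass or --vault-password-file.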
We modify the Logstash pipeline to forward logs to AWS CloudWatch and S3. Note that the cloudwatch and s3 outputs are provided by Logstash output plugins that may not be bundled with your installation; see the plugin-install sketch after the playbook below, and verify the option names against the plugin versions you actually install.
- hosts: elk
  become: yes
  tasks:
    - name: Configure Logstash for AWS CloudWatch and S3
      copy:
        dest: /etc/logstash/conf.d/logstash.conf
        content: |
          input {
            beats {
              port => 5044
            }
          }
          filter {
            mutate {
              add_field => { "log_source" => "%{[host][name]}" }
            }
          }
          output {
            elasticsearch {
              hosts => ["http://localhost:9200"]
              index => "logs-%{+YYYY.MM.dd}"
            }
            cloudwatch {
              log_group => "application-logs"
              log_stream_name => "%{log_source}"
              region => "us-east-1"
              access_key_id => "{{ aws_access_key }}"
              secret_access_key => "{{ aws_secret_key }}"
            }
            s3 {
              access_key_id => "{{ aws_access_key }}"
              secret_access_key => "{{ aws_secret_key }}"
              region => "us-east-1"
              bucket => "centralized-log-storage"
              codec => "json_lines"
              prefix => "logs/%{+YYYY}/%{+MM}/%{+dd}/"
            }
          }
    - name: Restart Logstash
      systemd:
        name: logstash
        enabled: yes
        state: restarted
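The elasticsearch output ships with Logstash, but the AWS outputs may not, depending on the Logstash version. A task along these lines (a sketch; the same pattern applies to whichever plugin provides the cloudwatch output in your version) can be added before the pipeline file is written:

    - name: Install the Logstash S3 output plugin if it is not already bundled
      command: /usr/share/logstash/bin/logstash-plugin install logstash-output-s3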
Instead of routing everything through Logstash, logs can also be pushed from the application servers directly to AWS CloudWatch. Stock Filebeat does not ship a CloudWatch output, so this variant assumes a build or plugin that provides the awscloudwatch output shown below; otherwise the AWS CloudWatch agent is the usual tool for direct shipping.
- hosts: app_servers
  become: yes
  tasks:
    - name: Configure Filebeat to send logs to AWS CloudWatch
      copy:
        dest: /etc/filebeat/filebeat.yml
        content: |
          filebeat.inputs:
            - type: log
              paths:
                - /var/log/syslog
                - /var/log/auth.log
          # Filebeat accepts only one output, so the Logstash output is removed here
          output.awscloudwatch:
            log_group_name: "application-logs"
            log_stream_name: "%{[host]}"
            region: "us-east-1"
            access_key_id: "{{ aws_access_key }}"
            secret_access_key: "{{ aws_secret_key }}"
    - name: Restart Filebeat
      systemd:
        name: filebeat
        enabled: yes
        state: restarted
Run the following Ansible playbooks to apply changes.
ansible-playbook -i inventory install_aws_cli.yml
ansible-playbook -i inventory install_filebeat.yml
ansible-playbook -i inventory install_logstash.yml
Then verify that log groups and log objects are being created:
aws logs describe-log-groups --region us-east-1
aws s3 ls s3://centralized-log-storage/logs/ --recursive
We configure CloudWatch Alarms to alert on error logs.
aws logs put-metric-filter \
--log-group-name "application-logs" \
--filter-name "ErrorFilter" \
--filter-pattern "ERROR" \
--metric-transformations metricName=ErrorCount,metricNamespace=LogMetrics,metricValue=1
aws cloudwatch put-metric-alarm \
--alarm-name "HighErrorRate" \
--metric-name "ErrorCount" \
--namespace "LogMetrics" \
--statistic "Sum" \
--period 300 \
--threshold 10 \
--comparison-operator "GreaterThanThreshold" \
--evaluation-periods 1 \
--alarm-actions "arn:aws:sns:us-east-1:123456789012:SendAlert"
This will trigger an alert if more than 10 error logs appear within 5 minutes.
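Since the rest of the setup is driven by Ansible, the metric filter and alarm can be created the same way by shelling out to the AWS CLI from a play (a sketch only; it is not idempotent and reuses the SNS topic ARN from above):

- hosts: elk
  become: yes
  tasks:
    - name: Create a metric filter that counts ERROR lines in application-logs
      command: >
        aws logs put-metric-filter
        --log-group-name application-logs
        --filter-name ErrorFilter
        --filter-pattern ERROR
        --metric-transformations metricName=ErrorCount,metricNamespace=LogMetrics,metricValue=1
    - name: Alarm when more than 10 errors are logged in a 5-minute window
      command: >
        aws cloudwatch put-metric-alarm
        --alarm-name HighErrorRate
        --metric-name ErrorCount
        --namespace LogMetrics
        --statistic Sum
        --period 300
        --threshold 10
        --comparison-operator GreaterThanThreshold
        --evaluation-periods 1
        --alarm-actions arn:aws:sns:us-east-1:123456789012:SendAlert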