Centralized Logging Setup using Ansible (ELK Stack) – End-to-End Project Implementation

Task Overview

This task involves setting up a centralized logging system using the ELK stack (Elasticsearch, Logstash, Kibana) and deploying Filebeat as a log shipper to forward logs from multiple application servers. The entire process is automated with Ansible, ensuring a consistent, scalable, and repeatable deployment.

Technology Stack

Project Architecture

Step-by-Step Implementation using Ansible

Step 1: Set Up Inventory File

Create an Ansible inventory file defining the ELK server and the application servers.

[elk]
elk-master ansible_host=192.168.1.100 ansible_user=ubuntu

[app_servers]
app1 ansible_host=192.168.1.101 ansible_user=ubuntu
app2 ansible_host=192.168.1.102 ansible_user=ubuntu
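
Before running any playbooks, it is worth confirming that Ansible can reach every host in the inventory over SSH. Assuming the file above is saved as inventory, a quick ad-hoc ping does the job:

ansible -i inventory all -m ping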

Step 2: Create Ansible Playbooks

1. Install and Configure Elasticsearch

Create a playbook install_elasticsearch.yml to install and configure Elasticsearch.

- hosts: elk
  become: yes
  tasks:
    - name: Install dependencies
      apt:
        name: [apt-transport-https, openjdk-11-jdk]
        state: present

    - name: Add Elasticsearch GPG Key
      apt_key:
        url: https://artifacts.elastic.co/GPG-KEY-elasticsearch
        state: present

    - name: Add Elasticsearch repository
      copy:
        dest: /etc/apt/sources.list.d/elastic-7.x.list
        content: "deb https://artifacts.elastic.co/packages/7.x/apt stable main"

    - name: Install Elasticsearch
      apt:
        name: elasticsearch
        state: present
        update_cache: yes

    - name: Configure Elasticsearch
      lineinfile:
        path: /etc/elasticsearch/elasticsearch.yml
        line: "{{ item }}"
      with_items:
        - "network.host: 0.0.0.0"
        - "http.port: 9200"
        - "cluster.name: elk-cluster"
        - "node.name: elk-master"

    - name: Start Elasticsearch service
      systemd:
        name: elasticsearch
        enabled: yes
        state: started

2. Install and Configure Logstash

Create a playbook install_logstash.yml to install and configure Logstash.

- hosts: elk
  become: yes
  tasks:
    - name: Install Logstash
      apt:
        name: logstash
        state: present

    - name: Configure Logstash pipeline
      copy:
        dest: /etc/logstash/conf.d/logstash.conf
        content: |
          input {
            beats {
              port => 5044
            }
          }
          filter {
            mutate {
              add_field => { "log_source" => "%{[host][name]}" }
            }
          }
          output {
            elasticsearch {
              hosts => ["http://localhost:9200"]
              index => "logs-%{+YYYY.MM.dd}"
            }
          }

    - name: Restart Logstash
      systemd:
        name: logstash
        enabled: yes
        state: restarted
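
Before depending on the pipeline, you can have Logstash validate the configuration and exit without starting. This uses Logstash's standard config-test flag; the path below assumes the default package install location:

/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf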

3. Install and Configure Kibana

Create a playbook install_kibana.yml to install and configure Kibana.

- hosts: elk
  become: yes
  tasks:
    - name: Install Kibana
      apt:
        name: kibana
        state: present

    - name: Configure Kibana
      lineinfile:
        path: /etc/kibana/kibana.yml
        line: "{{ item }}"
      with_items:
        - "server.host: '0.0.0.0'"
        - "elasticsearch.hosts: ['http://localhost:9200']"

    - name: Start Kibana service
      systemd:
        name: kibana
        enabled: yes
        state: started

4. Install and Configure Filebeat on Application Servers

Create a playbook install_filebeat.yml to install and configure Filebeat.

- hosts: app_servers
  become: yes
  tasks:
    - name: Download Filebeat
      get_url:
        url: https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.0-amd64.deb  # pin to the exact 7.x release you run
        dest: /tmp/filebeat.deb

    - name: Install Filebeat
      apt:
        deb: /tmp/filebeat.deb

    - name: Configure Filebeat
      copy:
        dest: /etc/filebeat/filebeat.yml
        content: |
          filebeat.inputs:
            - type: log
              paths:
                - /var/log/syslog
                - /var/log/auth.log
          output.logstash:
            hosts: ["192.168.1.100:5044"]

    - name: Start Filebeat
      systemd:
        name: filebeat
        enabled: yes
        state: started
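
Filebeat ships with built-in self-tests that are handy right after installation. The first command checks that the configuration parses; the second checks that the Logstash endpoint at 192.168.1.100:5044 is reachable:

filebeat test config
filebeat test output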

Step 3: Run Ansible Playbooks

Run the playbooks in order to set up the centralized logging system.

ansible-playbook -i inventory install_elasticsearch.yml
ansible-playbook -i inventory install_logstash.yml
ansible-playbook -i inventory install_kibana.yml
ansible-playbook -i inventory install_filebeat.yml

Validation and Testing

1. Verify Elasticsearch

Run the following command on the ELK server:

curl -X GET "http://localhost:9200/_cluster/health?pretty"

2. Verify Logstash

Check if Logstash is listening on port 5044:

netstat -tulnp | grep 5044

3. Verify Kibana

Open a web browser and go to:

http://<ELK_SERVER_IP>:5601

Check if Kibana is loading.
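
If the ELK server has no desktop browser, the same check can be done from a shell against Kibana's status API:

curl -s http://localhost:5601/api/status

Once Kibana has connected to Elasticsearch, the response should report an overall green state.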

4. Verify Logs in Kibana
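
Before building dashboards, confirm that the daily logs-* indices defined in the Logstash pipeline are actually being written. From the ELK server:

curl -X GET "http://localhost:9200/_cat/indices/logs-*?v"

If indices are listed, create a logs-* index pattern in Kibana (under Stack Management in Kibana 7.x) and browse the incoming events in Discover.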

Security Considerations

Future Enhancements

Conclusion

This project automates the deployment of a centralized logging system using Ansible, ensuring logs from multiple servers are collected, processed, stored, and visualized efficiently. The solution improves troubleshooting, security auditing, and performance monitoring across an enterprise infrastructure.

Enhancing Centralized Logging with AWS CloudWatch and S3 Using Ansible

Project Overview

In this extension of the Centralized Logging Setup, we integrate AWS CloudWatch and S3 for log storage, monitoring, and alerting. This allows us to archive logs durably in S3, monitor them centrally in CloudWatch Logs, and trigger alerts when error patterns appear.

Updated Architecture

Step-by-Step Implementation

Step 1: Set Up IAM Roles and Policies for CloudWatch and S3

We create an IAM user with permissions to push logs to CloudWatch Logs and store them in S3.

1.1 Create IAM Policy for Log Management

Use the following JSON policy for CloudWatch and S3 permissions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:CreateLogGroup"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::centralized-log-storage/*"
        }
    ]
}

Attach this policy to an IAM user and generate AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
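
If you prefer to script this step rather than use the console, the standard IAM CLI calls look roughly like the following. The policy and user names are illustrative; replace <ACCOUNT_ID> with your AWS account ID and keep the generated keys out of version control:

aws iam create-policy --policy-name CentralizedLoggingPolicy --policy-document file://logging-policy.json
aws iam create-user --user-name elk-log-shipper
aws iam attach-user-policy --user-name elk-log-shipper --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/CentralizedLoggingPolicy
aws iam create-access-key --user-name elk-log-shipper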

Step 2: Install and Configure AWS CLI on Logstash Server

Run the following Ansible playbook to install the AWS CLI and configure credentials.

2.1 Create Playbook install_aws_cli.yml

- hosts: elk
  become: yes
  tasks:
    - name: Install AWS CLI
      apt:
        name: awscli
        state: present

    - name: Ensure the .aws directory exists
      file:
        path: /root/.aws
        state: directory
        mode: '0700'

    - name: Configure AWS Credentials
      template:
        src: aws_credentials.j2
        dest: /root/.aws/credentials
        mode: '0600'

2.2 Create AWS Credentials Template aws_credentials.j2

[default]
aws_access_key_id = {{ aws_access_key }}
aws_secret_access_key = {{ aws_secret_key }}
region = us-east-1

Pass aws_access_key and aws_secret_key as extra variables on the ansible-playbook command line (or load them from Ansible Vault) instead of hard-coding real credentials in the template.
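
For example, with placeholder values:

ansible-playbook -i inventory install_aws_cli.yml --extra-vars "aws_access_key=AKIAXXXXXXXXXXXX aws_secret_key=xxxxxxxxxxxxxxxx"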

Step 3: Configure Logstash for CloudWatch and S3

We modify the Logstash pipeline to forward logs to AWS CloudWatch and S3.

3.1 Modify Logstash Configuration

- hosts: elk
  become: yes
  tasks:
    - name: Configure Logstash for AWS CloudWatch and S3
      copy:
        dest: /etc/logstash/conf.d/logstash.conf
        content: |
          input {
            beats {
              port => 5044
            }
          }

          filter {
            mutate {
              add_field => { "log_source" => "%{[host][name]}" }
            }
          }

          output {
            elasticsearch {
              hosts => ["http://localhost:9200"]
              index => "logs-%{+YYYY.MM.dd}"
            }

            cloudwatch {
              log_group => "application-logs"
              log_stream_name => "%{log_source}"
              region => "us-east-1"
              access_key_id => "{{ aws_access_key }}"
              secret_access_key => "{{ aws_secret_key }}"
            }

            s3 {
              access_key_id => "{{ aws_access_key }}"
              secret_access_key => "{{ aws_secret_key }}"
              region => "us-east-1"
              bucket => "centralized-log-storage"
              codec => "json_lines"
              prefix => "logs/%{+YYYY}/%{+MM}/%{+dd}/"
            }
          }

    - name: Restart Logstash
      systemd:
        name: logstash
        enabled: yes
        state: restarted
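
Depending on how Logstash was installed, the cloudwatch and s3 outputs may not be present in the bundle, in which case the pipeline will fail to start. Logstash's plugin tool can list what is installed and add what is missing; check the exact plugin name against your Logstash version:

/usr/share/logstash/bin/logstash-plugin list | grep -E 'cloudwatch|s3'
/usr/share/logstash/bin/logstash-plugin install logstash-output-cloudwatch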

Step 4: Configure Filebeat to Send Logs to CloudWatch (Optional)

Instead of routing everything through Logstash, Filebeat on the application servers can be pointed at AWS CloudWatch directly. Note that this path depends on a CloudWatch output being available to Filebeat, which is not part of Filebeat's standard output set, so verify support in your Beats version before relying on it.

4.1 Modify Filebeat Configuration

- hosts: app_servers
  become: yes
  tasks:
    - name: Configure Filebeat to send logs to AWS CloudWatch
      copy:
        dest: /etc/filebeat/filebeat.yml
        content: |
          filebeat.inputs:
            - type: log
              paths:
                - /var/log/syslog
                - /var/log/auth.log
          
          # NOTE: Filebeat supports only one configured output at a time
          output.awscloudwatch:
            log_group_name: "application-logs"
            log_stream_name: "%{[host]}"
            region: "us-east-1"
            access_key_id: "{{ aws_access_key }}"
            secret_access_key: "{{ aws_secret_key }}"

    - name: Restart Filebeat
      systemd:
        name: filebeat
        enabled: yes
        state: restarted

Step 5: Deploy and Verify

Run the following Ansible playbooks to apply changes.

ansible-playbook -i inventory install_aws_cli.yml
ansible-playbook -i inventory install_filebeat.yml
ansible-playbook -i inventory install_logstash.yml

Step 6: Validate CloudWatch and S3 Integration

6.1 Check Logs in CloudWatch

aws logs describe-log-groups --region us-east-1

6.2 Check Log Data in S3

aws s3 ls s3://centralized-log-storage/logs/ --recursive

6.3 View Logs in AWS Console
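
In the console, the events appear under CloudWatch > Log groups > application-logs. For a quick CLI spot-check of recent events in the same group:

aws logs filter-log-events --log-group-name "application-logs" --region us-east-1 --max-items 10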

Step 7: Set Up CloudWatch Alarms

We configure CloudWatch Alarms to alert on error logs.

7.1 Create CloudWatch Metric Filter

aws logs put-metric-filter \
    --log-group-name "application-logs" \
    --filter-name "ErrorFilter" \
    --filter-pattern "ERROR" \
    --metric-transformations metricName=ErrorCount,metricNamespace=LogMetrics,metricValue=1

7.2 Create CloudWatch Alarm

aws cloudwatch put-metric-alarm \
    --alarm-name "HighErrorRate" \
    --metric-name "ErrorCount" \
    --namespace "LogMetrics" \
    --statistic "Sum" \
    --period 300 \
    --threshold 10 \
    --comparison-operator "GreaterThanThreshold" \
    --evaluation-periods 1 \
    --alarm-actions "arn:aws:sns:us-east-1:123456789012:SendAlert"

This will trigger an alert if more than 10 error logs appear within 5 minutes.
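
The alarm action above references an SNS topic ARN with a placeholder account ID. If the topic does not exist yet, it can be created and subscribed to with standard SNS commands, for example:

aws sns create-topic --name SendAlert --region us-east-1
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:SendAlert --protocol email --notification-endpoint ops-team@example.com --region us-east-1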

Conclusion