In this blog I will show how to use IntelMQ and the ELK stack together so that feeds from IntelMQ can be pushed into ELK for analysis. I will not go into the architectural details of either tool, as they are beyond the scope of this post. In the previous blog post, I illustrated how to install IntelMQ on an Ubuntu 20.04 system from source and how to download and install packages with the apt command. If you have not checked out that post, please do that first.

Please note that the VM needs approximately 8 GB of RAM to work properly.

IntelMQ phase

If you access the IntelMQ Manager interface, you can see that the output is configured as shown below.

[Screenshot: default pipeline and output configuration in IntelMQ Manager]

Here we can see that IntelMQ's default configuration gives us predefined access to some threat intel sources. IntelMQ collects these feeds, parses them, enriches them, and eventually pushes them into a file, as configured in the default settings.
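If you prefer the command line over the web interface, you can also inspect the configured bots and their queues with intelmqctl, for example:

$ sudo intelmqctl status
$ sudo intelmqctl list queues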

Now we will change the file output to a Redis output. This is more efficient: it distributes the load to a queuing server instead of depending on a single file, which can lead to unexpected behavior.

Now let's create a Redis-Output bot with the configuration given below: [Screenshot: Redis-Output bot configuration in IntelMQ Manager] Here, we are telling IntelMQ to push events to the Redis server so that we can later use Logstash to pull the data from it.
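For reference, here is a minimal sketch of what the resulting bot entry might look like in IntelMQ's runtime configuration (assuming IntelMQ 3.x with runtime.yaml; the values are illustrative and must match whatever you enter in the manager):

Redis-Output:
  description: Push processed events into Redis for pickup by Logstash
  group: Output
  module: intelmq.bots.outputs.redis.output
  parameters:
    redis_server_ip: 127.0.0.1
    redis_server_port: 6379
    redis_db: 2
    redis_queue: logstash-queue
    hierarchical_output: false
    with_type: true

The redis_db and redis_queue values must match the db and key settings we will use in the Logstash configuration later in this post.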

If you face any issues when running such bots, you can use the command below to run a bot in debug mode so that you can see where it is failing.

$ sudo intelmqctl run Redis-Output --loglevel DEBUG

ELK phase

For simplicity, we will use Docker for the deployment, as our primary objective is integration, not installation. In this phase, we will pull data from the Redis server and use it.

To use docker, we need to install it first. To install it, please use the command below:

$ sudo apt install docker.io
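Docker should now be up; to verify the installation, you can optionally run the hello-world test image:

$ sudo docker run hello-world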

Now we will execute the following commands:

$ sudo docker network create elastic
$ sudo docker pull docker.elastic.co/elasticsearch/elasticsearch:8.6.2

$ sudo sysctl -w vm.max_map_count=262144
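Note that sysctl -w only sets the value until the next reboot. To make the change persistent, you can append it to /etc/sysctl.conf:

$ echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf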

$ sudo docker run --name es01 --net elastic -p 9200:9200 -p 9300:9300 -t docker.elastic.co/elasticsearch/elasticsearch:8.6.2

After running the final command, we will see a lot of information in the terminal. If we go through it, we can find something like the following:


━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  <PASSWORD>

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  df4228134b319746fbe355395c7f02a9fcd820e72977fc2667b6ba2c20802a49

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjYuMiIsImFkciI6WyIxNzIuMTguMC4zOjkyMDAiXSwiZmdyIjoiZGY0MjI4MTM0YjMxOTc0NmZiZTM1NTM5NWM3ZjAyYTlmY2Q4MjBlNzI5NzdmYzI2NjdiNmJhMmMyMDgwMmE0OSIsImtleSI6IkQzSDQyWVlCc0RYZzNnMGdYblNTOkZac3UxbnlaUjBlejJNWGxfek90NFEifQ==

ℹ️ Configure other nodes to join this cluster:
• Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjYuMiIsImFkciI6WyIxNzIuMTguMC4zOjkyMDAiXSwiZmdyIjoiZGY0MjI4MTM0YjMxOTc0NmZiZTM1NTM5NWM3ZjAyYTlmY2Q4MjBlNzI5NzdmYzI2NjdiNmJhMmMyMDgwMmE0OSIsImtleSI6IkVYSDQyWVlCc0RYZzNnMGdYblNaOnliSzZPSURnUng2OS1sdWd4SFBFWFEifQ==

  If you're running in Docker, copy the enrollment token and run:
  `docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.6.2`
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

We can now manually check whether the Elasticsearch server is up and running:

$ curl -k -u elastic:<ENTER_PASSWORD> "https://localhost:9200/_cat/health" 

If you get a green status, which you should, the Elasticsearch server is up and running.
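The -k flag tells curl to skip TLS certificate verification. If you would rather verify against the self-signed CA that Elasticsearch generated, you can copy the certificate out of the container and pass it to curl:

$ sudo docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
$ curl --cacert http_ca.crt -u elastic:<ENTER_PASSWORD> "https://localhost:9200/_cat/health"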

Now let's move on to Kibana. Open a new terminal tab and run the following command:

$ sudo docker pull docker.elastic.co/kibana/kibana:8.6.2

We have now downloaded the Kibana image onto the local machine; next, we need to start it:

$ sudo docker run --name kib01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.6.2
[2023-03-13T07:54:53.016+00:00][INFO ][node] Kibana process configured with roles: [background_tasks, ui]
[2023-03-13T07:55:01.250+00:00][INFO ][plugins-service] Plugin "cloudChat" is disabled.
<SNIPPED>
[2023-03-13T07:55:01.373+00:00][INFO ][root] Holding setup until preboot stage is completed.

i Kibana has not been configured.

Go to http://0.0.0.0:5601/?code=489402 to get started.

In the last line of the console output, we can see that Kibana provides a URL for configuration. Let's open the URL and use the enrollment token we obtained when setting up Elasticsearch. Please note that the generated token is only valid for 30 minutes.
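If more than 30 minutes have passed, you can generate a fresh Kibana enrollment token from inside the running Elasticsearch container:

$ sudo docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana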

[Screenshot: Kibana asking for the enrollment token] If the enrollment token is correct, Kibana will move on to the next screen. [Screenshot: Kibana login screen]

Now we have both Elasticsearch and Kibana running, but we still have one more step: feeding data into Elasticsearch. For that, we will use Logstash. In the next section, we will describe how to install Logstash and forward logs to Elasticsearch with it.

Logstash

We could have used a Docker image here as well, but for flexibility we will install Logstash on the local machine, which also makes it easier to use IntelMQ's File-Collector bot.

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg

$ sudo apt-get install apt-transport-https

$ echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list

$ sudo apt-get update && sudo apt-get install logstash

After installation, we will configure Logstash so that it connects to the Redis server, pulls data from it, and eventually forwards the data to the Elasticsearch server we configured earlier.

Let's create a configuration file as below and save it to /etc/logstash/conf.d/my-redis.conf:

input {
  # Pull events that IntelMQ's Redis-Output bot pushed onto the Redis list
  redis {
    host => "127.0.0.1"
    port => 6379
    db => 2
    data_type => "list"
    key => "logstash-queue"
  }
}
output {
  # Forward each event to the Elasticsearch container we started earlier
  elasticsearch {
    hosts => ["https://127.0.0.1:9200"]
    user => "elastic"
    password => "<ENTER PASSWORD HERE>"
    # Skip TLS verification because Elasticsearch uses a self-signed certificate
    ssl_certificate_verification => false
    data_stream => true
  }
}
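Before starting Logstash for real, you can optionally check the file for syntax errors with Logstash's built-in config test:

$ sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/my-redis.conf --config.test_and_exit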

Please note that since I am using Docker hosted on the local machine, the IPs here are given as localhost. If you are using separate server(s), you will have to adjust them accordingly. Now let's run Logstash:

$ sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/my-redis.conf --log.level trace

Once Logstash is running, data will start flowing into Elasticsearch and become visible in the Kibana dashboard.

Kibana view

In the Kibana dashboard, we will not see anything at first. [Screenshot: empty Kibana Discover view] Now let's create a data view. [Screenshot: creating a data view in Kibana] As we have not specified any index or data stream name for our incoming logs in Logstash, they will, by design, go into "logs-generic-default". [Screenshot: events arriving under logs-generic-default]
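As a quick sanity check from the command line, you can also ask Elasticsearch how many events have arrived in that default data stream:

$ curl -k -u elastic:<ENTER_PASSWORD> "https://localhost:9200/logs-generic-default/_count"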

conclusion

In this blog we have walked through a basic setup that moves data from IntelMQ into the ELK stack and visualizes the incoming feeds. Obviously, this is not a production deployment, but rather a playground for testing.