How to Set Up Wazuh Open Source SIEM Virtual Machine

Hello friends, I hope you are doing great, learning new stuff and working on your skills. In this article, I will be continuing the previous one I wrote on open-source SIEM solutions and talking about how to set up the Wazuh open-source SIEM.

In this part, we are going to take a look at the pre-built virtual machine provided by Wazuh, which lets you quickly get started using the SIEM and begin testing. The virtual machine is not recommended when deploying Wazuh as a production system; for production, you should install all the components manually.

The ideal way to deploy it in a production environment is to set up the Wazuh manager and Kibana on one system and install Elasticsearch on a separate machine, so that each component has the resources it needs. If you have a large number of client systems, it is recommended to use an Elasticsearch cluster to distribute the logs across multiple nodes.

I will cover that in a future article on configuring and setting up Wazuh in a cluster configuration.

Also Read: What is Security Information and Event Management (SIEM) Tool? A Beginner’s Guide

How Wazuh SIEM Works

The way Wazuh works is that it is divided into three parts, as seen in the image above: the Wazuh agents, the Wazuh server, and the Elastic Stack. The Wazuh server, also known as the Wazuh manager, is where all the SIEM magic happens.

Wazuh server

The server component is in charge of analyzing the data received from the agents and triggering alerts when an event matches a rule (e.g. intrusion detected, file changed, configuration not compliant with policy, possible rootkit, etc.).

The server usually runs on a stand-alone physical machine, virtual machine or cloud instance and runs agent components with the purpose of monitoring itself. Below is a list of the main server components:

  • Registration service: This is used to register new agents by provisioning and distributing pre-shared authentication keys that are unique to each agent. This process runs as a network service and supports authentication via TLS/SSL and/or by a fixed password.
  • Remote daemon service: This is the service that receives data from the agents. It makes use of the pre-shared keys to validate each agent’s identity and to encrypt the communications between the agent and the manager.
  • Analysis daemon: This is the process that performs data analysis. It utilizes decoders to identify the type of information being processed (e.g. Windows events, SSHD logs, web server logs, etc.) and then extract relevant data elements from the log messages (e.g. source IP, event id, user, etc.). Next, by using rules, it can identify specific patterns in the decoded log records which could trigger alerts and possibly even call for automated countermeasures (active responses) like an IP ban on the firewall.
  • RESTful API: This provides an interface to manage and monitor the configuration and deployment status of agents. It is also used by the Wazuh web interface, which is a Kibana app. A quick example of querying it follows this list.
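As a small illustration of that RESTful API, here is a minimal sketch of asking the manager for its registered agents with curl. It assumes the API's defaults of port 55000 over HTTPS, along with the sample foo/bar credentials mentioned later in this article; drop -k and switch to http:// if your API is served over plain HTTP.

    # curl -u foo:bar -k "https://<manager-ip>:55000/agents?pretty"

The response is JSON describing each agent (ID, name, IP, status), the same data the Kibana app presents graphically.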

All log analysis is done on the Wazuh manager: events are decoded, categorized into the different attack and log fields, and the filtered and segregated logs are then stored in Elasticsearch indices.
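Before they ever reach Elasticsearch, alerts are also written to flat files on the manager itself. A quick way to watch them in real time, assuming the default Wazuh installation path of /var/ossec:

    # tail -f /var/ossec/logs/alerts/alerts.json

The same directory holds a human-readable alerts.log if you prefer plain text over JSON.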

Elastic Stack

Elastic Stack is a unified suite of popular open-source projects for log management, including Elasticsearch, Kibana, Filebeat, and others. The projects that are especially relevant to the Wazuh solution are:

  • Elasticsearch: A highly scalable, full-text search and analytics engine. Elasticsearch is distributed, meaning the data (indices) are divided into shards and each shard can have zero or more replicas.
  • Kibana: A flexible and intuitive web interface for mining, analyzing, and visualizing data. It runs on top of the content indexed on an Elasticsearch cluster.
  • Filebeat: A lightweight forwarder used to convey logs across a network, usually to Elasticsearch.

Wazuh integrates with Elastic Stack to provide a feed of already decoded log messages to be indexed by Elasticsearch, as well as a real-time web console for alert and log data analysis. In addition, the Wazuh user interface (running on top of Kibana) can be used for the management and monitoring of your Wazuh infrastructure.
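On the Wazuh server, Filebeat's configuration (/etc/filebeat/filebeat.yml) points at Elasticsearch through its output.elasticsearch hosts setting. Whatever your setup looks like, Filebeat's built-in test subcommands are a handy, low-risk way to verify both the configuration file and the connection to Elasticsearch:

    # filebeat test config
    # filebeat test output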

An Elasticsearch index is a collection of documents that have somewhat similar characteristics (like certain common fields and shared data retention requirements). Wazuh utilizes as many as three different indices, created daily, to store different event types:

  • wazuh-alerts: Index for alerts generated by the Wazuh server each time an event trips a rule.
  • wazuh-events: Index for all events (archive data) received from the agents whether or not they trip a rule.
  • wazuh-monitoring: Index for data related to agent status over time. It is used by the web interface to represent when individual agents are or have been “Active”, “Disconnected” or “Never connected”.

An index is composed of documents. For the indices above, documents are individual alerts, archived events or status events.

An Elasticsearch index is divided into one or more shards and each shard can optionally have one or more replicas. Each primary and replica shard is an individual Lucene index, so an Elasticsearch index is made up of many Lucene indices. When a search is run on an Elasticsearch index, the search is executed on all the shards in parallel and the results are merged. Dividing Elasticsearch indices into multiple shards and replicas is used in multi-node Elasticsearch clusters to scale out searches and provide high availability. Single-node Elasticsearch clusters normally have only one shard per index and no replicas.
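You can see these daily indices, and how they are split into shards, with Elasticsearch's _cat endpoints. This sketch assumes the single-node layout of the VM described below, with Elasticsearch listening on localhost:9200 and no authentication enabled:

    # curl "http://localhost:9200/_cat/indices/wazuh-alerts-*?v"
    # curl "http://localhost:9200/_cat/shards/wazuh-alerts-*?v"

The first command lists each index with its health, document count, and size; the second shows the primary and replica shards behind it.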

Also Read: Setup your own VPN server | Unblock the Internet – Outline VPN by Jigsaw

Wazuh Installation Guide (Virtual Machine):

Wazuh provides a pre-built virtual machine image (OVA) that you can import directly into VirtualBox or any other OVA-compatible virtualization system.

Note

This VM only runs on 64-bit systems and is not recommended for use in production environments. It can be a useful tool for proofs of concept and labs. Distributed architectures and multi-node Elastic Stack clusters are usually a better fit for production environments where higher performance is required.

  1. This virtual appliance, available here, contains the following components:
  • CentOS 7
  • Wazuh 3.12.0
  • Wazuh API 3.12.0
  • Elasticsearch 7.6.1
  • Filebeat 7.6.1
  • Kibana 7.6.1
  • Wazuh app 3.12.0-7.6.1

To set up the virtual machine, first download the OVA file from the link above, then import it into either Oracle VirtualBox or VMware. In this case, I will be using VMware Fusion on macOS.

Open VMware Fusion, click the Import button, and select the OVA file from your Downloads folder.
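If you prefer VirtualBox, or want to script the import, the same thing can be done from the command line. A sketch, assuming the downloaded file is named wazuh.ova (the actual filename will include the version):

    # VBoxManage import wazuh.ova --vsys 0 --vmname "Wazuh"
    # VBoxManage startvm "Wazuh" --type headless

The --vmname option just gives the imported VM a friendly name; --type headless boots it without opening a console window.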

  2. The root password is “wazuh” and the username/password for the Wazuh API is “foo/bar”.

    Although you don’t need to change any Elastic Stack configuration settings, feel free to explore the options. You can find Elasticsearch installed in /usr/share/elasticsearch. Similarly, Filebeat is installed in /usr/share/filebeat and its configuration file is found in /etc/filebeat/filebeat.yml.
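    A quick way to confirm the stack came up after boot is to ask Elasticsearch for its cluster health from a shell on the VM, assuming the default of Elasticsearch listening on localhost:9200:

        # curl "http://localhost:9200/_cluster/health?pretty"

    A status of “green” means all shards are allocated; “yellow” means some replicas are unassigned, which can happen on a single node.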

    If you are using VirtualBox, the imported virtual machine may run into issues caused by time skew when VirtualBox synchronizes the guest machine’s clock. To prevent this, it is recommended to enable the Hardware Clock in UTC Time option on the System tab of the virtual machine’s settings.
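    The same setting can also be applied with VBoxManage while the VM is powered off; “Wazuh” here is whatever name you gave the VM at import time:

        # VBoxManage modifyvm "Wazuh" --rtcuseutc on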

  3. The Wazuh manager and the Elastic Stack included in this virtual image are configured to work out of the box.

  4. You can start and stop wazuh-manager, wazuh-api, elasticsearch, filebeat, and kibana with the ‘systemctl’ command. For example:

    # systemctl restart wazuh-manager
    # systemctl restart wazuh-api
    # systemctl stop elasticsearch
    # systemctl start filebeat
    # systemctl status kibana
    
  5. To access the Kibana interface, open the virtual machine’s IP address in your browser: https://<vm-ip>/. You will land on the Kibana interface; see the note below if you are unsure of the VM’s address.
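    You can find the address from the VM’s console and then confirm Kibana is answering before opening the browser (the -k flag skips certificate validation, useful if the VM serves a self-signed certificate):

        # ip addr show
        # curl -k -I "https://<vm-ip>/"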

  6. The next step is to register agents with the manager so that logs and security events start flowing from the agents to the manager and you can begin your testing.

I will be covering the agent registration process in the next article.