Build the Splunk Enterprise Servers for the initial setup
- Keith Sanks
- May 14, 2023
- 3 min read
Terminology

License Master (LM)
The License Master is an important component in a Splunk deployment, responsible for managing license allocations and usage across the entire environment. It allows administrators to control and monitor license usage for all Splunk components, including indexers, search heads, and forwarders. The License Master can be deployed as a standalone instance or as part of a distributed environment.
By using the License Master, administrators can avoid exceeding their licensed ingest volume and stay compliant with Splunk licensing agreements. It can also help optimize the use of available license capacity by pooling it and reallocating unused volume to components that need more. In addition, it provides detailed reporting on license usage, which can be used to identify trends, track usage patterns, and forecast future license requirements.
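As a minimal sketch (the hostname below is a placeholder, and newer releases use "license manager" terminology), indexers and search heads are usually pointed at the License Master by setting the license master URI in server.conf on each instance:
[license]
master_uri = https://lm.example.com:8089
After a restart, the instance reports its usage to the License Master instead of relying on a local license.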
Monitoring Console (MC)
Splunk's Monitoring Console gives administrators a single place to evaluate the health of their deployment. It provides dashboards covering indexing and search performance, resource usage, and forwarder activity; it monitors forwarders through the internal logs they forward, and missing forwarders can be removed from its views by rebuilding the forwarder asset table. In a distributed environment the Monitoring Console typically runs on a dedicated instance, drawing most of its data from the _internal and _introspection indexes of the instances it watches.
Cluster Manager (CM)
The Cluster Manager (formerly called the master node) is the component of Splunk Enterprise that manages an indexer cluster. It provides a centralized interface for managing the configuration, status, and health of the cluster, and it allows administrators to perform tasks such as adding or removing peer nodes, setting the replication factor and search factor, and monitoring the performance of the cluster. The Cluster Manager is an important tool for maintaining a highly available and scalable Splunk deployment, helping to ensure that the cluster stays optimized for the needs of the organization.
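As a rough sketch (hostnames and the shared secret are placeholders), the replication and search factors described above live in server.conf on the Cluster Manager:
[clustering]
mode = master
replication_factor = 3
search_factor = 2
pass4SymmKey = <shared_secret>
Peer nodes point back at the manager with mode = slave and master_uri = https://cm.example.com:8089 in the same stanza.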
Search Head Cluster (SHC)
A search head cluster in Splunk is a group of search heads that work together to provide a highly available and scalable solution for searching and reporting. Each member distributes searches to the indexers, consolidates the individual results, and prepares reports, while the cluster replicates knowledge objects so that users see the same reports, dashboards, and other knowledge objects regardless of which member they log in to.
When adding or rejoining a member to a search head cluster, the error "Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member" may be displayed. To correct this issue, run the splunk apply shcluster-bundle command from the deployer.
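For reference, the command is run from the deployer and targeted at any one cluster member, along these lines (host and credentials are placeholders):
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme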
Search Peers
Distributed search in Splunk allows a search head to distribute searches across its search peers, the indexers that store the data. Search peers perform the indexing, run their portion of each search in parallel, and respond to search requests from the search head, which consolidates the individual results and prepares the final reports. When a distributed search is initiated, the search head distributes a knowledge bundle to the peers so that they have access to the necessary knowledge objects.
The same model applies to real-time searches: results are pulled continuously from the search peers, so real-time data can be searched and analyzed efficiently. By spreading the work across peers, distributed search makes efficient use of resources and gives the architecture room to scale in large data environments.
The Deployment Server
The deployment server is Splunk's tool for centrally managing remote Splunk instances, known as deployment clients; in most environments these are forwarders. It distributes apps and configuration updates to groups of clients (server classes), can restart them automatically when an update requires it, and requires an Enterprise license to operate. Note that members of a search head cluster are not managed by the deployment server: a separate instance, the deployer, pushes configuration bundles to search head cluster members. Overall, the deployment server plays a critical role in keeping configurations and apps consistent across the Splunk deployment.
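As a quick sketch (the hostname is a placeholder), a Splunk instance becomes a deployment client either through the CLI or with a deploymentclient.conf:
splunk set deploy-poll ds.example.com:8089
# equivalent deploymentclient.conf entry
[target-broker:deploymentServer]
targetUri = ds.example.com:8089
The client then phones home to the deployment server on the management port and downloads whatever apps its server class assigns to it.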
Splunk Components

Universal Forwarder
Splunk forwarders are agents that collect data from remote sources and forward it to a central Splunk indexer for indexing and analysis. They are lightweight and designed for minimal resource usage. Heavy forwarders are a more powerful version of the forwarder, with additional capabilities such as data transformation, filtering, and routing; they are used in complex data collection scenarios where the data needs to be preprocessed before indexing. Both universal and heavy forwarders play a crucial role in enabling centralized data collection and analysis, making it easier to monitor and troubleshoot complex IT environments.
The Splunk Universal Forwarder, which has its own license, is used to collect and forward data to a Splunk deployment. When ingesting data from syslog on port 514, the best practice is to configure syslog to write logs and use a Splunk forwarder to collect them. Using TCP syslog and one or more syslog servers with a Universal Forwarder can improve the reliability of syslog delivery to Splunk.
Splunk also integrates with third-party systems: searches and alerts can be used to provision actions on a third-party system, and data can be forwarded from Splunk forwarders to a third-party system without indexing it first.
If an update is made to an attribute in inputs.conf on a universal forwarder, the forwarder's fishbucket needs to be reset in order to reindex the data. The Monitoring Console monitors forwarders by forwarding internal logs from the forwarder.
The deployment server feature of Splunk is used for updating configuration and distributing apps to processing components, primarily forwarders. The Universal Forwarder supports indexer acknowledgement and can compress the data it sends.
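Both capabilities are switched on in outputs.conf on the forwarder; a minimal sketch, with placeholder group and indexer names, looks like this:
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
compressed = true
Note that compressed = true only works if the receiving splunktcp input on the indexer also sets compressed = true.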
A remote monitor input is distributed to forwarders as an app. Keep in mind that the deployment server overwrites any app of the same name on the client: if a file such as /opt/splunkforwarder/etc/apps/my_TA/local/inputs.conf is created manually on a universal forwarder and a new version of that app with its own inputs.conf is then deployed via the deployment server, the manually created copy is replaced, and only the inputs defined in the deployed app (for example, /var/log/maillog) are monitored.
After configuring a universal forwarder to communicate with an indexer, the connection can be verified in Splunk Web by searching index=_internal and checking for events from the forwarder's host.
Using SSL to secure the feed from a forwarder does not automatically compress the feed by default.
Heavy Forwarder
A heavy forwarder is a type of forwarder in Splunk that has additional capabilities beyond just forwarding data. It has the ability to perform parsing, filtering, and transformation of the data before forwarding it on to the indexers. This allows for more efficient data processing and can reduce the load on the indexers. Heavy forwarders also have the ability to run scripts and executables, which can be useful for performing additional data manipulation or integration with other systems. Additionally, heavy forwarders can be configured to act as a standalone instance, which means they can also perform indexing and search functions. This makes them a versatile component in a Splunk deployment, allowing for more flexibility and customization in data processing and management.
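As one illustration of that parsing and filtering (the sourcetype and transform names below are invented), a heavy forwarder can drop noisy DEBUG events before they ever reach the indexers with a props.conf/transforms.conf pair:
# props.conf
[my_sourcetype]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue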
Indexer
A Splunk indexer is a component of a Splunk deployment that is responsible for indexing and storing data. It processes data as it comes in and makes it searchable for users. The indexer scales horizontally to handle large amounts of data and can be clustered for high availability and fault tolerance.
Preconfigured Indexes
Index name | Purpose
_internal | To index Splunk's own logs and metrics
_audit | To store Splunk audit trails and other optional auditing information
_introspection | To track system performance, Splunk resource usage data and provide Monitoring Console (MC) with performance data
_thefishbucket | To contain checkpoint information for file monitoring inputs
summary | Default index for summary indexing system
main | Default index for inputs; located in the defaultdb directory
Difference between Indexer and Forwarder

Indexer
Runs on dedicated servers
Listens on receiving ports
Stores and indexes the data
Forwarder
Gathers the data
Sends to indexers over network
Typically runs on the remote servers where most production data originates
Directory Structure and Config files
Splunk Directory Structure
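At a high level, the layout under $SPLUNK_HOME (typically /opt/splunk, or /opt/splunkforwarder for a universal forwarder) looks roughly like this:
$SPLUNK_HOME/
  bin/                 # the splunk CLI and supporting binaries
  etc/
    system/default/    # shipped defaults; never edit these
    system/local/      # system-wide local overrides
    apps/              # apps, each with its own default/ and local/
    deployment-apps/   # apps a deployment server pushes to clients
  var/
    lib/splunk/        # index data and the fishbucket
    log/splunk/        # Splunk's own logs (indexed into _internal)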

Config Files
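Splunk's behavior is controlled by plain-text .conf files such as inputs.conf, outputs.conf, server.conf, indexes.conf, and props.conf. Broadly speaking, settings in a local directory override the same file's default directory, and etc/system/local wins over app directories; the btool utility shows the merged result, for example:
splunk btool inputs list --debug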
Now let's build
Install the OS and configure network connections
If you haven't already, install a Linux or Windows OS on the server of your choice
Assign an IP address to the Splunk server
For Windows

Click Start > Settings > Control Panel.
In the Control Panel, double-click Network Connections.
Right-click Local Area Connection.
Click Properties.
Select Internet Protocol (TCP/IP), and then click Properties.
Select Use the Following IP Address.
Note: any IP information shown here is for illustration only. Select an IP address that is reachable in your environment.
For Linux
Ubuntu
As of Ubuntu 17.10, networking is configured using Netplan, a YAML-based configuration system. It allows you to set your IP, netmask, gateway, and DNS all in one place.
Start by editing the file for your interface: in this case 01-netcfg.yaml.
vi /etc/netplan/01-netcfg.yaml
Editing your interface file
You'll see either networkd or NetworkManager in the renderer field; keep whatever is already there.
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [192.168.1.12/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8,8.8.4.4]
To have your changes take effect, apply the configuration:
netplan apply
YAML configs are strict about indentation, so if you get an error, check that first.
CentOS
Now let’s do the same thing in CentOS. Here we’ll need to edit things the old way using sysconfig and network-scripts:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
You’ll change what you see there to something like this:
HWADDR=$SOMETHING      # the interface's MAC address; leave as detected
TYPE=Ethernet
BOOTPROTO=none         # turns off DHCP
IPADDR=192.168.2.2     # set your IP
PREFIX=24              # subnet mask
GATEWAY=192.168.2.254
DNS1=1.1.1.2           # set your own DNS
DNS2=1.0.0.2
DNS3=9.9.9.9
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=eth0
DEVICE=eth0
ONBOOT=yes             # starts on boot
You can then apply this configuration by running:
/etc/init.d/network restart
Ok, that will get you up and running with a static IP on the two most common Linux distros.
Building the Deployment Server:
Install Splunk Enterprise on the server that will serve as the Deployment Server.
Once Splunk is installed, navigate to the Splunk Web interface by entering the IP address of the server followed by the port number (usually 8000) in a web browser.
Log in to Splunk Web with admin credentials.
Place the apps you want to distribute in the $SPLUNK_HOME/etc/deployment-apps directory; the instance begins acting as a deployment server once at least one app is present there and clients start phoning home.
Navigate to the Settings menu and select "Forwarder Management" to create server classes, map apps to clients, and monitor which clients have checked in.
Point each deployment client (typically a forwarder) at this server, for example with splunk set deploy-poll <deployment_server_ip>:8089.
Configure any other necessary settings, such as authentication and SSL.
Save the changes and restart Splunk; a sketch of the resulting server-side configuration follows these steps.
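As a hedged sketch of what ends up on the deployment server itself (the app and server class names below are invented for illustration), deployable apps live under etc/deployment-apps and serverclass.conf maps them to clients:
# $SPLUNK_HOME/etc/system/local/serverclass.conf
[serverClass:linux_forwarders]
whitelist.0 = 192.168.1.*

[serverClass:linux_forwarders:app:my_inputs_app]
restartSplunkd = true
After editing, have the deployment server re-read its configuration with splunk reload deploy-server.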
Building Forwarders
Get Splunk Universal Forwarder by downloading it from the official website.
Extract the installation files using the following command after navigating to the directory where the file is downloaded:
[root@ABS]# tar -xvf splunkforwarder-8.0.0-8c86330ac18-Linux-x86_64.tgz
Note: In the above command, replace the file name mentioned with the name of the downloaded file.
Start Splunk Universal Forwarder using the command:
[root@ABS]# cd splunkforwarder/bin
[root@ABS]# ./splunk start --accept-license
Configure the forward server details (receiver host and port) in Splunk using the following command:
[root@ABS]# ./splunk add forward-server ip:port
Note: Replace 'ip:port' with the IP address and port number of your forward server.
Ensure that the receiving port in Splunk is enabled. For example, configure port number 9997 in your Splunk deployment.
Edit the 'inputs.conf' file on the Splunk Forwarder as follows:
[root@ABS]# ./splunk add monitor /opt/pingidentity/splunk/data/
[root@ABS]# cat /opt/splunkforwarder/etc/apps/search/local/inputs.conf
[monitor:///opt/pingidentity/pingidentity/dashboard/logs/attack.log]
index = pi_events
sourcetype = pi_events_source_type
disabled = false
Restart the Splunk Universal Forwarder: [root@ABS]# ./splunk restart
Verify that data is flowing into Splunk, for example by searching the target index (index=pi_events) from a search head or the indexer.
Note:
If you can't see any data in Splunk, check the firewall settings.
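For reference, the add forward-server command above simply writes an outputs.conf on the forwarder; a hand-written equivalent (the path and indexer address shown are illustrative) would look something like this:
# /opt/splunkforwarder/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.1.20:9997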
Building Indexers
Ensure that the host you are preparing meets or exceeds the Splunk Enterprise system requirements. Note down the host name and IP address of the host.
Confirm that no firewalls are blocking any network traffic into or out of the host.
Download the Splunk Enterprise software onto the host.
Install the appropriate version of the Splunk software for the host's operating system.
Verify that the Splunk Enterprise software starts without any errors. You should also be able to perform a basic search using the Search app.
Download the latest version of the Splunk Add-on for Microsoft Exchange Indexes.
Extract the package content to $SPLUNK_HOME/etc/apps directory.
Restart Splunk Enterprise using PowerShell:
> cd "\Program Files\Splunk\bin"
> .\splunk restart
Configuration Steps:
Log into Splunk Enterprise on the indexer.
Click Settings > Forwarding and Receiving in the system bar. The "Forwarding and Receiving" page will load.
Under "Receive Data," click Configure Receiving.
Click New and enter the port number that you want Splunk Enterprise to listen on for incoming data from other Splunk instances in the "Listen on this port" field. The conventional port number is 9997.
Click Save to enable receiving on the indexer.
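The same receiving port can also be enabled from the indexer's command line, or by hand in inputs.conf; a hedged equivalent of the steps above (credentials are placeholders):
splunk enable listen 9997 -auth admin:changeme
# or in $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0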
Next Steps:
Write down the host name or IP address and port number of the indexer.
Read Splunk's documentation on apps before proceeding.
Create the send to indexer app to continue building out a Splunk App for Microsoft Exchange deployment.
Building the Search Head
Download the Splunk software package for your operating system from the official Splunk website.
Install the software package by following the prompts in the installation wizard.
Once installed, launch Splunk and log in to the web interface.
Obtain the IP address of the instance you wish to configure as a search head, assuming that Splunk has already been installed. Then, navigate to the search head UI at https://<search_head_ip>:8000.
Go to Settings, then Distributed Search, and then Search Peer. Click Add New.
Enter the indexer URI, as well as the remote username and password. This step involves adding the indexer's URI, username, and password. Click Save after completing this step.
Return to the Search Head UI, and navigate to Settings, then Distributed Search, and then Search Peer. You should now see your indexer configured as a search peer.
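The same peer can also be added from the search head's command line; a sketch with placeholder host and credentials:
splunk add search-server https://192.168.1.20:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme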
Congratulations! Your Splunk instance is now configured as a search head, allowing you to run searches and create knowledge objects as needed, provided that a forwarder (universal or heavy) and an indexer are in place to supply data.
Once the search head is configured, you can start using it to search your data. To do this, you will need to create a search query in the search bar of the Splunk web interface. The search head will then send the query to the indexers in your environment to retrieve the data, which it will then display in the search results page. You can also create dashboards and alerts on the search head to help you monitor your data over time.
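For example, using the index configured earlier in this guide, a quick sanity check that events are arriving might look like:
index=pi_events | stats count by host, sourcetype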