SPLUNK useful commands and Search

A list of commands for installing Splunk and for searching indexes.

sudo groupadd splunk

grep splunk /etc/group

sudo useradd -g splunk splunker

grep splunker /etc/passwd

(Download the Splunk package using wget)

wget -O splunk-7.0.3-fa31da744b51-Linux-x86_64.tgz 'https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=7.0.3&product=splunk&filename=splunk-7.0.3-fa31da744b51-Linux-x86_64.tgz&wget=true'

sudo tar zxf splunk-7.0.3-fa31da744b51-Linux-x86_64.tgz -C /opt

sudo chown -R splunker:splunk /opt/splunk

sudo ls -l /opt/splunk

sudo /opt/splunk/bin/splunk start --accept-license --no-prompt --answer-yes

sudo /opt/splunk/bin/splunk enable boot-start -user splunker

sudo /opt/splunk/bin/splunk status

/opt/splunk/bin/splunk version

/opt/splunk/bin/splunk show web-port -auth admin:changeme

/opt/splunk/bin/splunk show splunkd-port -auth admin:changeme

/opt/splunk/bin/splunk show appserver-ports -auth admin:changeme

/opt/splunk/bin/splunk show kvstore-port -auth admin:changeme

/opt/splunk/bin/splunk show servername -auth admin:changeme

/opt/splunk/bin/splunk show default-hostname -auth admin:changeme

/opt/splunk/bin/splunk set servername SEARCH1 -auth admin:changeme

/opt/splunk/bin/splunk set default-hostname SEARCH1 -auth admin:changeme

$splunk show config conf_name

$splunk btool check

$splunk show config inputs

$splunk btool conf_name list --debug

$splunk btool inputs list monitor:///var/log --debug

On Indexer: $splunk enable listen 9997

On Indexer: $splunk display listen 9997

On Deployment_Server: $splunk list deploy-clients

On Deployment_Server: $splunk reload deploy-server

On Forwarder: $splunk add forward-server Indexer:9997

On Forwarder: $splunk list forward-server

On Forwarder: $splunk remove forward-server idx:9997

On Forwarder: $splunk set deploy-poll deployment_server:8089

On Forwarder: $splunk show deploy-poll

On the Forwarder /opt/splunkforwarder/bin/splunk set deploy-poll deployment_server:8089 -auth admin:changeme

On the Forwarder /opt/splunkforwarder/bin/splunk restart

On the Forwarder /opt/splunkforwarder/bin/splunk show deploy-poll -auth admin:changeme

ON THE DEPLOYMENT SERVER $/opt/splunk/bin/splunk list deploy-clients -auth admin:splunk

To remove all data from an index on indexer :

$splunk clean eventdata -index index_name

Remove the file pointer for a particular source from the fishbucket:

$splunk cmd btprobe -d /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db --file source --reset

Recreate the idx files for a bucket :

$splunk rebuild path_to_bucket

$splunk add licenses /your-dir/licensefile.xml

$splunk list license

$splunk edit licenser-localslave -master_uri https://License_Master:8089

$splunk list licenser-localslave

$splunk edit cluster-config -mode master -replication_factor 2 -search_factor 2 -secret 'my_cluster_secret_key'

$splunk edit cluster-config -mode master -multisite true -site site1 -available_sites site1,site2 -site_replication_factor origin:1,total:2 -secret 'my_cluster_secret_key'

$splunk edit cluster-config -mode slave -master_uri https://CLUSTER_MASTER:8089 -secret 'my_cluster_secret_key' -replication_port 9887

$splunk edit cluster-config -mode slave -master_uri https://CLUSTER_MASTER:8089 -site site1 -secret 'my_cluster_secret_key' -replication_port 9887

$splunk add cluster-master -master_uri https://CLUSTER_MASTER:8089 -secret 'my_cluster_secret_key'

$splunk edit cluster-config -mode searchhead -master_uri https://CLUSTER_MASTER:8089 -secret 'my_cluster_secret_key'

Indexer Cluster Commands

$splunk show maintenance-mode

$splunk enable maintenance-mode

$splunk disable maintenance-mode

Take a peer offline; with --enforce-counts the peer is decommissioned and taken offline permanently:

$splunk offline [--enforce-counts]

$splunk apply cluster-bundle

$splunk show cluster-bundle-status

$splunk show cluster-status

Cluster_Master: $splunk rolling-restart cluster-peers

Cluster_Master: $splunk remove cluster-peers -peers idx1

Cluster_Master: $splunk diag --enable=rest

Search head Clustering commands

$splunk edit licenser-localslave -master_uri https://CLUSTER-MASTER:8089

$splunk edit cluster-config -mode searchhead -master_uri https://CLUSTER-MASTER:8089 -site site1 -secret 'my_cluster_secret_key'

$splunk restart

$splunk bootstrap shcluster-captain -servers_list "https://search_head1:8089,https://search_head2:8089,https://search_head3:8089,https://search_head4:8089"

$splunk show shcluster-status

$splunk rolling-restart shcluster-members -status

$splunk edit shcluster-config -shcluster_label search_head_cluster

$splunk edit shcluster-config -conf_deploy_fetch_url https://DEPLOYER:8089

$splunk list shcluster-member

$splunk rolling-restart shcluster-members

$splunk apply shcluster-bundle

$splunk remove shcluster-member

$splunk disable shcluster-config

$splunk remove shcluster-member -mgmt_uri https://SH:8089

On the SH cluster captain: $splunk diag

Maintenance mode for Indexer cluster

$splunk [ show | enable | disable ] maintenance-mode

$splunk apply cluster-bundle automatically invokes maintenance mode

$splunk rolling-restart automatically invokes maintenance mode

Cleaning up excess bucket replicas:

$/opt/splunk/bin/splunk list excess-buckets (index)

$splunk remove excess-buckets (index)

$splunk rebalance cluster-data -action start -index (index)

$splunk rebalance cluster-data -action status

$splunk rebalance cluster-data -action stop

$splunk edit cluster-config -rebalance_threshold 0.90

$splunk edit cluster-config -summary_replication true

Run these on the cluster master:

On Cluster Master $splunk validate cluster-bundle

On Cluster Master $splunk apply cluster-bundle

On Cluster Master $splunk show cluster-bundle-status

Search for events on all "hosts" servers for accesses by the user "root", then report the 20 most recent events.

host=* eventtype=access user=root | head 20

Search across all public indexes.

index=*

Search across all indexes, public and internal.

index=* OR index=_*

If you often search for failed logins:

"failed login" OR "FAILED LOGIN" OR "Authentication failure" OR "Failed to authenticate user"

Web access errors from the beginning of the week to the current time of your search (now).

eventtype=webaccess error earliest=@w0

Web access errors from the current business week (Monday to Friday).

eventtype=webaccess error earliest=@w1 latest=+7d@w6
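Splunk's snap-to syntax rounds a time down to a boundary: @w0 snaps to the most recent Sunday at midnight, @w1 to the most recent Monday. As a rough cross-check outside Splunk (this assumes GNU date; it is only an analogy, and the two differ exactly on the boundary day itself):

```shell
# Show the day-of-week of the boundary that roughly corresponds to @w0:
date -d "last sunday 00:00" "+%a %H:%M"
```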

Subsearches must be enclosed in square brackets in the primary search. Consider the following search:

sourcetype=access_* status=200 action=purchase [search sourcetype=access_* status=200 action=purchase | top limit=1 clientip | table clientip] | stats count, dc(productId), values(productId) by clientip
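A subsearch behaves much like shell command substitution: the inner query runs first and its result is spliced into the outer one. As a loose analogy only (plain shell, with a made-up three-line "log" standing in for the access events):

```shell
# Tiny sample "log" of client IPs; 10.0.0.2 is the most frequent.
printf '10.0.0.1\n10.0.0.2\n10.0.0.2\n' > /tmp/clients.txt

# Inner command (like [search ... | top limit=1 clientip]):
# find the most frequent IP first...
top_ip=$(sort /tmp/clients.txt | uniq -c | sort -rn | head -1 | awk '{print $2}')

# ...then the outer command filters/aggregates events for that IP.
grep -c "$top_ip" /tmp/clients.txt   # prints 2
```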

index=_internal earliest=-15m latest=now

The following search example is attempting to return the bytes for the individual indexes.

index=_internal source=*license* type=usage | stats sum(b) BY index

In this search the stats portion of the search is commented out.

index=_internal source=*license* type=usage comment("| stats sum(b) BY index")

Web access errors from the last full business week

eventtype=webaccess error earliest=-7d@w1 latest=@w6

Display customer interactions, retrieving only the clientip field.

sourcetype=access_combined* | fields clientip

Display the action, productId, and status of customer

sourcetype=access_combined* action=* productId=* | table action, productId, status

sourcetype=access_combined* action=* productId=* | table action, productId, status | rename productId as "Product ID", action as "Customer Purchase", status as "HTTP Status Code"

Display the top source IPs and ports:

sourcetype=linux_secure port "failed password" | rex "\s+(?<port>port\s\d+)" | top src port

sourcetype=linux_secure port "failed password" | rex "(?i) port (?P<port>[^ ]+)" | top port

sourcetype=linux_secure port "failed password" | erex port examples="4940,4608,4920" | top port

Display the top mail domains from sourcetype=cisco_esa

sourcetype=cisco_esa | rex field=mailfrom "@(?<maildomain>.*)" | top limit=10 maildomain
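These rex patterns can be sanity-checked outside Splunk with grep in PCRE mode (assumes GNU grep with -P; the sample log lines below are made up for illustration, and \K drops everything matched before it):

```shell
# Mirror rex "(?i) port (?P<port>[^ ]+)": pull the digits after "port ".
echo 'Failed password for root from 10.0.0.5 port 4940 ssh2' \
  | grep -oP 'port \K\d+'     # prints 4940

# Mirror rex "@(?<maildomain>.*)": pull the domain after "@".
echo 'mailfrom=alice@example.com' | grep -oP '@\K.*'   # prints example.com
```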

Top source IPs for failed passwords, and the top-selling product in the last 24 hours.

sourcetype=linux_secure password fail* | top src

sourcetype=access_combined action=purchase status=200 | top product_name

Display the count of retail sales made yesterday

sourcetype=vendor_sales | stats count as “Retail Sales”

Count the number of events

sourcetype=access_combined* | stats count(action)

How many unique websites were visited?

sourcetype=cisco_wsa_squid | stats dc(s_hostname)

Display the quantity of sales by product name and price

sourcetype=vendor_sales | stats count as quantity by product_name, price

Which websites have employees accessed?

sourcetype=cisco_wsa_squid | stats list(s_hostname) by cs_username

sourcetype=access_combined* action=purchase | timechart count(product_name) by categoryId

sourcetype=access_combined* action=purchase | chart count(product_name) by categoryId

sourcetype=vendor_sales | geostats latfield=VendorLatitude longfield=VendorLongitude count by product_name

sourcetype=access_combined* action=purchase | stats sum(price) as count | gauge count 0 10000 20000

Display the errors that our hosts produce:

sourcetype=access_combined* status>299 | chart count over status by host

Display the transactions that failed for each item in the online shopping cart:

sourcetype=access_combined* status>299 | chart count over host by itemId

sourcetype=access_combined* product_name=* | timechart span=30m count by product_name

sourcetype=access_combined status=4* clientip="69.72.161.186"

Verify on the indexer that the universal forwarder is making a connection.

On the indexer, check that your receiving port is open:

# netstat -an | grep 9997

On the indexer, go to Settings > Forwarding and receiving > port 9997 [ enable it if it is not ]

Use tcpdump to inspect port 9997 traffic for any errors:

# tcpdump -i eth0 port 9997

Next, you should run a search to find the forwarder connection on the indexer:

Search: index=_internal source=*metrics.log tcpin_connections

http://indexer:8000 Search: index=_internal host=forwarder_host

http://indexer:8000 Search: index=_internal host=forwarder_host component="TcpOutputProc"

To verify the Splunk server has indexed events:

Search: index=_internal host="Your_host" component="TcpOutputProc"

Search: sourcetype=vendor_sales

Search: sourcetype=access_combined*

Search: sourcetype=access_combined* action=purchase

Search: sourcetype=access_combined* | table clientip, host, action, status

Search: sourcetype=access_combined* action=purchase | table clientip, host, status

Search: sourcetype=cisco_esa mailto=*

Search: sourcetype=cisco_esa mailto=* | erex domain examples="yahoo.com, hotmail.com"

Search: sourcetype=access_combined* action=purchase | timechart span=2h count

Search: sourcetype=access_combined* (action=remove OR action=purchase)

Search: sourcetype=access_combined* (action=remove OR action=purchase) | stats sum(price) as totalSales by action, product_name

Search: sourcetype=access_combined* status>399

Search: sourcetype=access_combined* status>399 | chart count by host, status

Search: sourcetype=access_combined* status>399 | chart limit=5 count by host, status

Search: sourcetype=access_combined* status>399 | timechart count(action) by host

Search: sourcetype=access_combined* clientip=* status>399

Search: sourcetype=access_combined* clientip=* status>399 | dedup host, clientip

Search: sourcetype=access_combined* clientip=* status>399 | dedup clientip, host

"Brute Force Access Behavior Detected" correlation search without extreme search commands:

| datamodel("Authentication","Authentication") | stats values(Authentication.tag) as tag, count(eval('Authentication.action'=="failure")) as failure, count(eval('Authentication.action'=="success")) as success by Authentication.src | drop_dm_object_name("Authentication") | search failure>6 success>0 | settags("access")

| datamodel("Authentication","Authentication") | stats values(Authentication.tag) as tag, count(eval('Authentication.action'=="failure")) as failure, count(eval('Authentication.action'=="success")) as success by Authentication.src | drop_dm_object_name("Authentication") | search success>0 | xswhere failure from failures_by_src_count_1h in authentication is above medium | settags("access")

Inspecting Buckets

Search: | dbinspect index=<index_name> [span=<span> | timeformat=<format>]

Display a chart with the span size of 1 day, using the command line interface (CLI)

| dbinspect index=_internal span=1d

Default dbinspect output for a local _internal index.

| dbinspect index=_internal

Check for corrupt buckets

Use the corruptonly argument to display information about corrupted buckets, instead of information about all buckets.

The output fields that display are the same with or without the corruptonly argument.

| dbinspect index=_internal corruptonly=true

Count the number of buckets for each Splunk server

Use this command to verify that the Splunk servers in your distributed environment are included in the dbinspect command.

Counts the number of buckets for each server.

| dbinspect index=_internal | stats count by splunk_server

Find the index size of buckets in GB

Use dbinspect to find the index size of buckets in GB.

For current numbers, run this search over a recent time range.

| dbinspect index=_internal | eval GB=sizeOnDiskMB/1024 | stats sum(GB)

Deleting events requires the can_delete role.

The delete command makes unwanted data stop showing up in searches:

index=web host=myhost source=access_combined_wcookie | delete

splunk clean eventdata -index indexname wipes out all data from the index

How can you tell your indexer is working?

index=[your_index_name]

index=_internal LicenseUsage idx="fwlog"

To check the license usage, search:

index=_internal Metrics series="fwlog" | stats sum(kbps)

index=_internal Metrics group="per_sourcetype_thruput" series=access* | timechart span=1h sum(kb) by series

index=_internal Metrics group="per_sourcetype_thruput" series=access* | timechart span=1h sum(kb) by series | sort - sum(kb)

Determine how many active sources are being indexed.

Search: | dbinspect index=main OR index=fwlog OR index=oslog OR index=weblog OR index=applog

How to calculate the data compression rate of a bucket:

Search: | dbinspect index=main OR index=[your_index]

| where eventCount > 10000

| fields index,id,state,eventCount,rawSize,sizeOnDiskMB,sourceTypeCount

| eval TotalRawMB=(rawSize / 1024 / 1024)

| eval compression=tostring(round( sizeOnDiskMB / TotalRawMB * 100, 2 )) + "%"

| table index, id, state, sourceTypeCount, TotalRawMB, sizeOnDiskMB, compression
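The compression figure above is just sizeOnDiskMB divided by TotalRawMB, expressed as a percentage. The same arithmetic can be sketched with awk, using made-up numbers (a hypothetical 100 MB raw bucket occupying 35 MB on disk):

```shell
# dbinspect reports rawSize in bytes; convert to MB first, then
# compute sizeOnDiskMB / TotalRawMB * 100 as the eval above does.
awk 'BEGIN {
  rawSize = 104857600          # 100 MB in bytes (hypothetical bucket)
  sizeOnDiskMB = 35            # hypothetical on-disk size
  totalRawMB = rawSize / 1024 / 1024
  printf "%.2f%%\n", sizeOnDiskMB / totalRawMB * 100
}'
# prints 35.00%
```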
