Configuration
The configuration file of TheHive is `application.conf`. This file uses the HOCON format. All configuration parameters should go in this file. You can have a look at the default settings.
TheHive uses the search engine ElasticSearch to store all persistent data. ElasticSearch is not part of the TheHive package. It must be installed and configured as a standalone instance (cf. the ElasticSearch installation guide).
Three settings are required to connect to ElasticSearch:
- the base name of the index
- the name of the cluster
- the address(es) and port(s) of the ElasticSearch instance(s)

Default settings are:
```
# ElasticSearch
search {
  # Name of the index
  index = the_hive
  # Name of the ElasticSearch cluster
  cluster = hive
  # Address of the ElasticSearch instance
  host = ["127.0.0.1:9300"]
  # Scroll keepalive
  keepalive = 1m
  # Size of the page for scroll
  pagesize = 50
}
```
If you use a different ElasticSearch configuration, add the correct parameters to the `application.conf` file.
If multiple ElasticSearch nodes are used as a cluster, the addresses of the master nodes must be used in the `search.host` setting. All cluster nodes must use the same cluster name:
```
search {
  host = ["node1:9300", "node2:9300"]
  ...
}
```
TheHive uses the transport port (9300/tcp by default), not the http port (9200/tcp).
TheHive versions the index schema (mapping) in ElasticSearch. The version number is appended to the index base name (the 7th version of the schema uses the index `the_hive_7` if `search.index = the_hive`).
When too many documents are requested from TheHive, it uses the scroll feature: the result is retrieved using pagination. You can specify the size of a page (`search.pagesize`) and how long pages are kept in ElasticSearch before being purged (`search.keepalive`).
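As an illustration only (plain Python, not TheHive's actual code), scroll-style retrieval can be pictured as handing out results page by page:

```python
# Illustrative sketch of scroll-style pagination (not TheHive code):
# a large result set is returned in pages of `pagesize` documents;
# in ElasticSearch, the scroll context holding the remaining results
# would expire after the `keepalive` duration.
def scroll(documents, pagesize=50):
    """Yield successive pages of at most `pagesize` documents."""
    for start in range(0, len(documents), pagesize):
        yield documents[start:start + pagesize]

pages = list(scroll(list(range(120)), pagesize=50))
# 120 documents -> three pages of 50, 50 and 20 documents
```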
TheHive stores attachments in ElasticSearch documents. They are split into chunks, and each chunk sent to ElasticSearch is identified by the hash of the entire attachment and the chunk number. The size of the chunks (`datastore.chunksize`) can be changed, but only for new attachments; already inserted attachments are not modified.
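The chunking scheme can be sketched as follows (plain Python for illustration; the `"{hash}_{n}"` chunk-id format is an assumption, not TheHive's actual identifier layout):

```python
import hashlib

# Illustration only: split an attachment into fixed-size chunks and give
# each chunk an id built from the hash of the ENTIRE attachment plus the
# chunk number (the exact id format here is hypothetical).
def chunk_ids(data: bytes, chunksize: int = 50 * 1024):
    whole_hash = hashlib.sha256(data).hexdigest()  # hash of the full attachment
    return [(f"{whole_hash}_{n}", data[i:i + chunksize])
            for n, i in enumerate(range(0, len(data), chunksize))]

parts = chunk_ids(b"A" * (120 * 1024))  # a 120 KB attachment, 50 KB chunks
# -> 3 chunks, all ids sharing the same attachment hash
```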
Attachments are identified by their hash. The algorithm used is configurable (`datastore.hash.main`) but must not be changed after the first attachment has been inserted (otherwise, old attachments can no longer be retrieved).
Extra hash algorithms can be configured using `datastore.hash.extra`. These hashes are not used to identify the attachment but are shown in the user interface (the hash from the main algorithm is also shown). If you change the extra algorithms, you should inform TheHive and ask it to recompute all hashes (this API is currently disabled; it will be reactivated in the next release).
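To make the distinction concrete, here is a hedged Python sketch (not TheHive's actual code): the main algorithm produces the identifying hash, while the extra algorithms only produce display values. The algorithm names are taken from the default settings.

```python
import hashlib

# Illustration only: the main algorithm identifies the attachment;
# extra algorithms are computed purely for display in the UI.
MAIN = "sha256"            # corresponds to datastore.hash.main = "SHA-256"
EXTRA = ["sha1", "md5"]    # corresponds to datastore.hash.extra

def attachment_hashes(data: bytes):
    main = hashlib.new(MAIN, data).hexdigest()                      # identifier
    extra = {alg: hashlib.new(alg, data).hexdigest() for alg in EXTRA}
    return main, extra

main, extra = attachment_hashes(b"hello")
```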
Observables can contain malicious data. When you download an attachment from an observable, it is automatically protected by an encrypted zip archive. The password is "malware" by default but it can be changed with the `datastore.attachment.password` setting.
Default values are:
```
# Datastore
datastore {
  name = data
  # Size of stored data chunks
  chunksize = 50k
  hash {
    # Main hash algorithm /!\ Don't change this value
    main = "SHA-256"
    # Additional hash algorithms (used in attachments)
    extra = ["SHA-1", "MD5"]
  }
  attachment.password = "malware"
}
```
TheHive supports local, LDAP and Active Directory authentication. By default, it relies on local credentials stored in ElasticSearch.
Authentication methods are stored in the `auth.type` parameter, which is multi-valued. When a user logs in, each authentication method is tried in order until one succeeds. If no authentication method works, an error is returned and the user cannot log in.
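This fall-through behaviour can be sketched like this (plain Python; the provider functions are hypothetical stand-ins for `local`, `ldap` and `ad`):

```python
# Illustration only: each provider listed in auth.type is tried in
# order; the first one that accepts the credentials wins.
def authenticate(providers, username, password):
    for provider in providers:        # same order as the auth.type list
        if provider(username, password):
            return True
    return False                      # every provider failed -> login error

# Hypothetical providers standing in for the real backends
def ldap(user, pw):
    return False                      # e.g. directory unreachable

def local(user, pw):
    return (user, pw) == ("admin", "secret")
```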
The default values within the configuration file are:
```
auth {
  # The "type" parameter contains the authentication provider. It can be
  # multi-valued (useful for migration). Available auth types are:
  # services.LocalAuthSrv : passwords are stored in the user entity
  #                         (in ElasticSearch). No configuration is required.
  # ad   : use Active Directory to authenticate users. Configuration is
  #        under the "auth.ad" key.
  # ldap : use LDAP to authenticate users. Configuration is under the
  #        "auth.ldap" key.
  type = [local]

  ad {
    # Windows domain name using DNS format. This parameter is required.
    #domainFQDN = "mydomain.local"
    # Windows domain name using short format. This parameter is required.
    #domainName = "MYDOMAIN"
    # Use SSL to connect to the domain controller
    #useSSL = true
  }

  ldap {
    # LDAP server name or address. Port can be specified (host:port). This parameter is required.
    #serverName = "ldap.mydomain.local:389"
    # Use SSL to connect to the directory server
    #useSSL = true
    # Account used to bind to the LDAP server. This parameter is required.
    #bindDN = "cn=thehive,ou=services,dc=mydomain,dc=local"
    # Password of the binding account. This parameter is required.
    #bindPW = "***secret*password***"
    # Base DN to search users. This parameter is required.
    #baseDN = "ou=users,dc=mydomain,dc=local"
    # Filter to search users; {0} is replaced by the user name. This parameter is required.
    #filter = "(cn={0})"
  }
}

# Maximum time between two requests without re-requesting authentication
session {
  warning = 5m
  inactivity = 1h
}
```
To enable authentication using AD or LDAP, edit the `application.conf` file and supply the values for your environment.
In order to use SSL with LDAP or AD, TheHive must be able to validate the remote certificate. The Java truststore must therefore contain the certificate authorities used to issue the AD or LDAP certificate. The default JVM truststore contains the main official authorities, but LDAP or AD certificates probably aren't issued by them.
Use `keytool` to create the truststore:

```
keytool -import -file /path/to/your/ca.cert -alias InternalCA -keystore /path/to/your/truststore.jks
```
Then add the `-Djavax.net.ssl.trustStore=/path/to/your/truststore.jks` parameter when you start TheHive (or put it in the `JAVA_OPTS` environment variable before starting TheHive).
The user interface is automatically updated when data changes in the back-end. To do this, the back-end sends events to all connected front-ends. The mechanism used to notify the front-ends is long polling, and its settings are:
- `refresh`: when there is no notification, close the connection after this duration (default 1 minute).
- `cache`: before polling, a session must be created in order to make sure no event is lost between two polls. If there is no poll during the `cache` duration, the session is destroyed (default 15 minutes).
- `nextItemMaxWait`, `globalMaxWait`: when an event occurs, it is not immediately sent to the front-end. The back-end waits `nextItemMaxWait`, and up to `globalMaxWait`, in case other events can be included in the response. This mechanism saves many HTTP requests.
Default values are:

```
# Streaming
stream.longpolling {
  # Maximum time a stream request waits for new element
  refresh = 1m
  # Lifetime of the stream session without request
  cache = 15m
  nextItemMaxWait = 500ms
  globalMaxWait = 1s
}
```
The Play framework, used by TheHive, sets the HTTP body size limit to 100KB by default for textual content (JSON, XML, text, form data) and to 10MB for file uploads. This could be too small in many cases, so you may want to change it with the following settings in the `application.conf` file:

```
# Max textual content length
play.http.parser.maxMemoryBuffer=1M
# Max file size
play.http.parser.maxDiskBuffer=1G
```
If you use an nginx reverse proxy in front of TheHive, be aware that it doesn't distinguish between text data and file uploads. So you should also set the `client_max_body_size` parameter in the nginx server configuration. Set it to the larger of the file and text sizes defined in the `application.conf` file.
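For example, assuming the 1M/1G Play limits shown above, a minimal (hypothetical) nginx server block could look like this; the server name and ports are placeholders:

```
# Hypothetical nginx configuration fronting TheHive
server {
    listen 80;
    server_name thehive.example.com;       # placeholder

    # Allow bodies up to the larger of the two Play limits (here 1G)
    client_max_body_size 1G;

    location / {
        proxy_pass http://127.0.0.1:9000;  # Play's default HTTP port
    }
}
```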
TheHive can use the analysis engine Cortex to get additional information on observables. When configured, the analyzers available on Cortex become usable in TheHive. First you must enable the CortexConnector, then choose an identifier and specify the URL for each Cortex server:

```
## Enable the Cortex module
play.modules.enabled += connectors.cortex.CortexConnector

cortex {
  "CORTEX-SERVER-ID" {
    # URL of the Cortex server
    url = "<The_URL_of_the_CORTEX_Server_goes_here>"
  }
}
```
Cortex analyzes observables and outputs its reports in JSON format. TheHive shows these reports as-is by default. In order to make reports more readable, we offer report templates, which are in a separate package and must be installed manually:
- download the report template package at https://dl.bintray.com/cert-bdf/thehive/report-templates.zip
- log in to TheHive using an administrator account
- go to the `Admin` > `Report templates` menu
- click on the `Import templates` button and select the downloaded package
TheHive has the ability to connect to one or several MISP servers. Within the configuration file, you can register your MISP server(s) under the `misp` configuration keyword. Each server shall be identified using an arbitrary name, its `url`, the corresponding authentication `key` and optional `tags` to add to the corresponding cases when importing MISP events.
To sync with a MISP server and retrieve events, edit the `application.conf` file and adjust the example shown below to your setup:

```
## Enable the MISP module
play.modules.enabled += connectors.misp.MispConnector

misp {
  "MISP-SERVER-ID" {
    # URL of the MISP server
    url = "<The_URL_of_the_MISP_Server_goes_here>"
    # authentication key
    key = "<the_auth_key_goes_here>"
    # tags that must be automatically added to the case corresponding to the imported event
    tags = ["misp"]
  }
  # truststore used to validate the MISP certificate (if the default truststore is not sufficient)
  #cert = /path/to/truststore.jks
  # Interval between two MISP event imports, in hours (h) or minutes (m)
  interval = 1h
}
```
As stated in the subsection above, TheHive is able to automatically import MISP events and create cases from them. This operation leverages the template engine, so you'll need to create a case template prior to importing MISP events.
First, create a case template. Let's call it MISP_CASETEMPLATE.
Then update TheHive's configuration to add a `caseTemplate` parameter as shown in the example below:
```
misp {
  "MISP-SERVER-ID" {
    # URL of the MISP server
    url = "<The_URL_of_the_MISP_Server_goes_here>"
    # authentication key
    key = "<the_auth_key_goes_here>"
    # tags that must be automatically added to the case corresponding to the imported event
    tags = ["misp"]
    # case template
    caseTemplate = "MISP_CASETEMPLATE"
  }
}
```
Once the configuration file has been edited, restart TheHive. Every new import of a MISP event will generate a case according to the "MISP_CASETEMPLATE" template.
Performance metrics (response time, call rates to ElasticSearch and HTTP requests, throughput, memory used...) can be collected if enabled in the configuration.
Enable them by editing the `application.conf` file and adding:

```
# Register module for dependency injection
play.modules.enabled += connectors.metrics.MetricsModule
metrics.enabled = true
```
These metrics can optionally be sent to an external database (Graphite, Ganglia or InfluxDB) in order to monitor the health of the platform. This feature is disabled by default:
```
metrics {
  name = default
  enabled = true
  rateUnit = SECONDS
  durationUnit = SECONDS
  showSamples = false
  jvm = true
  logback = true

  graphite {
    enabled = false
    host = "127.0.0.1"
    port = 2003
    prefix = thehive
    rateUnit = SECONDS
    durationUnit = MILLISECONDS
    period = 10s
  }

  ganglia {
    enabled = false
    host = "127.0.0.1"
    port = 8649
    mode = UNICAST
    ttl = 1
    version = 3.1
    prefix = thehive
    rateUnit = SECONDS
    durationUnit = MILLISECONDS
    tmax = 60
    dmax = 0
    period = 10s
  }

  influx {
    enabled = false
    url = "http://127.0.0.1:8086"
    user = root
    password = root
    database = thehive
    retention = default
    consistency = ALL
    #tags = {
    #  tag1 = value1
    #  tag2 = value2
    #}
    period = 10s
  }
}
```
To enable HTTPS in the application, add the following lines to `/etc/thehive/application.conf`:

```
https.port: 9443
play.server.https.keystore {
  path: "/path/to/keystore.jks"
  type: "JKS"
  password: "password_of_keystore"
}
```
To import your certificate into the keystore, depending on your situation, you can follow this documentation.
More information: this is a setting of the Play framework, documented on its website: https://www.playframework.com/documentation/2.5.x/ConfiguringHttps