MongoDB integration with FastNetMon Advanced

FastNetMon Advanced relies on MongoDB for configuration storage. It does not store traffic or metrics there, but it stores every configuration option. MongoDB is crucial for FastNetMon to function properly.

FastNetMon uses the upstream version of MongoDB from the official MongoDB site.

By default, both the FastNetMon daemon and the fcli command line tool connect directly to MongoDB. Both tools use the login "fastnetmon_user" and the password from the file /etc/fastnetmon/keychain/.mongo_fastnetmon_password.

For debugging purposes, you can log in to MongoDB with the admin password in the following way:

mongo admin --username administrator --password `sudo cat /etc/fastnetmon/keychain/.mongo_admin`
use fastnetmon

For new installations, you will need to use the mongosh tool for management:

mongosh --username administrator --password `sudo cat /etc/fastnetmon/keychain/.mongo_admin`
use fastnetmon

You can also log in with the same permissions that fcli and FastNetMon use:

mongosh --username fastnetmon_user --password `sudo cat /etc/fastnetmon/keychain/.mongo_fastnetmon_password`

If you use FerretDB, you may need to add the following flag:

--authenticationMechanism PLAIN

If you use a Docker-based setup, you can retrieve the password this way:

sudo cat /var/lib/fastnetmon_docker/fastnetmon/conf/keychain/.mongo_admin

After this, you will need to log in to the container running MongoDB:

sudo docker exec -ti mongo-server.networkd bash

Then use the commands provided above, passing the password manually in the arguments.
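
For example, from inside the container you can log in this way (a sketch; replace the placeholder with the admin password retrieved from the keychain file above):

mongosh --username administrator --password 'PASTE_MONGO_ADMIN_PASSWORD_HERE'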

Optionally, you can set your own connection information for MongoDB using the configuration file /etc/fastnetmon/fastnetmon.conf with the following information:

{
  "mongodb_host": "127.0.0.1",
  "mongodb_port": 27017,
  "mongodb_database_name": "fastnetmon",
  "mongodb_username": "fastnetmon_user",
  "mongodb_auth_source": "admin",
  "mongodb_auth_mechanism": "",
  "mongodb_timeout": 5,
  "mongodb_timeout_heavy": 95
}

We use a separate timeout, mongodb_timeout_heavy, for queries that involve retrieving and processing large amounts of data. For all other queries, we use mongodb_timeout.

After that, please put the password for mongodb_username into the file /etc/fastnetmon/keychain/.mongo_fastnetmon_password. Starting from version 2.0.351, you can use an IPv6 address in the mongodb_host field.
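
For example, you can create the keychain file this way (a sketch; the password value is purely illustrative, and we assume the keychain directory already exists):

echo 'example_password_change_me' | sudo tee /etc/fastnetmon/keychain/.mongo_fastnetmon_password
sudo chmod 600 /etc/fastnetmon/keychain/.mongo_fastnetmon_password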

In addition to configuration storage, FastNetMon can store attack information in JSON format. You can enable this behaviour in the following way:

sudo fcli set main mongo_store_attack_information true
sudo fcli commit

Attack information will be stored in the "attacks" and "mitigations" collections. Data stored in the "attacks" collection uses the same format as webhook callbacks and JSON callback scripts.
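
Once enabled, you can inspect the stored documents from mongosh, for example (a quick sketch using the fastnetmon_user login shown above):

use fastnetmon
// Show the five most recent attack documents
db.attacks.find().sort({ _id: -1 }).limit(5)
// Show currently active mitigations
db.mitigations.find({ active: true })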

For flow spec bans, we store data into the "mitigations" collection in the following format:

{ "_id" : ObjectId("61254d9e09f2c264c71a152f"), "active" : true, "mitigation_source" : "automatic", "affected_host" : "0.0.0.0/32", "mitigation_type" : "flow_spec", "mitigation_uuid" : "f2411460-3904-4610-83f3-62632a9ba2a3", "details" : { "uuid" : "f2411460-3904-4610-83f3-62632a9ba2a3", "source_prefix" : "152.228.221.228/32", "destination_prefix" : "192.168.1.112/32", "destination_ports" : [ 34772 ], "source_ports" : [ 443 ], "packet_lengths" : [ 1492 ], "protocols" : [ "tcp" ], "fragmentation_flags" : [ "dont-fragment" ], "tcp_flags" : [ "ack|push" ], "action_type" : "discard", "action" : {  } } }

For blackhole bans, we store data in the "attacks" collection using the following format:

{
    _id: ObjectId('66fea5ad7695a083eb0450a8'),
    action: 'ban',
    alert_scope: 'host',
    attack_details: {
      attack_detection_source: 'manual',
      attack_detection_threshold: 'unknown',
      attack_detection_threshold_direction: 'unknown',
      attack_direction: 'other',
      attack_protocol: 'hopopt',
      attack_severity: 'middle',
      attack_type: 'unknown',
      attack_uuid: '6642f712-e3c4-478f-8b3d-e400d176c12d',
      average_incoming_flows: 0,
      average_incoming_pps: 0,
      average_incoming_traffic: 0,
      average_incoming_traffic_bits: 0,
      average_outgoing_flows: 0,
      average_outgoing_pps: 0,
      average_outgoing_traffic: 0,
      average_outgoing_traffic_bits: 0,
      host_group: 'global',
      host_network: '45.189.200.0/22',
      incoming_dropped_pps: 0,
      incoming_dropped_traffic: 0,
      incoming_dropped_traffic_bits: 0,
      incoming_icmp_pps: 0,
      incoming_icmp_traffic: 0,
      incoming_icmp_traffic_bits: 0,
      incoming_ip_fragmented_pps: 0,
      incoming_ip_fragmented_traffic: 0,
      incoming_ip_fragmented_traffic_bits: 0,
      incoming_syn_tcp_pps: 0,
      incoming_syn_tcp_traffic: 0,
      incoming_syn_tcp_traffic_bits: 0,
      incoming_tcp_pps: 0,
      incoming_tcp_traffic: 0,
      incoming_tcp_traffic_bits: 0,
      incoming_udp_pps: 0,
      incoming_udp_traffic: 0,
      incoming_udp_traffic_bits: 0,
      initial_attack_power: 0,
      outgoing_dropped_pps: 0,
      outgoing_dropped_traffic: 0,
      outgoing_dropped_traffic_bits: 0,
      outgoing_icmp_pps: 0,
      outgoing_icmp_traffic: 0,
      outgoing_icmp_traffic_bits: 0,
      outgoing_ip_fragmented_pps: 0,
      outgoing_ip_fragmented_traffic: 0,
      outgoing_ip_fragmented_traffic_bits: 0,
      outgoing_syn_tcp_pps: 0,
      outgoing_syn_tcp_traffic: 0,
      outgoing_syn_tcp_traffic_bits: 0,
      outgoing_tcp_pps: 0,
      outgoing_tcp_traffic: 0,
      outgoing_tcp_traffic_bits: 0,
      outgoing_udp_pps: 0,
      outgoing_udp_traffic: 0,
      outgoing_udp_traffic_bits: 0,
      parent_host_group: '',
      peak_attack_power: 0,
      protocol_version: 'IPv4',
      total_incoming_flows: 0,
      total_incoming_pps: 0,
      total_incoming_traffic: 0,
      total_incoming_traffic_bits: 0,
      total_outgoing_flows: 0,
      total_outgoing_pps: 0,
      total_outgoing_traffic: 0,
      total_outgoing_traffic_bits: 0
    },
    ip: '1.2.3.4'
  }

For blackhole unbans, we store data in the following format (starting from v2.0.367):

{
    _id: ObjectId('66fead2af5cb9c63ba0d6a3b'),
    action: 'unban',
    alert_scope: 'host',
    attack_details: {
      attack_detection_source: 'manual',
      attack_detection_threshold: 'unknown',
      attack_detection_threshold_direction: 'unknown',
      attack_direction: 'other',
      attack_protocol: 'unknown',
      attack_severity: 'middle',
      attack_type: 'unknown',
      attack_uuid: '4d07be25-e458-4355-952f-aad22b7bd8d7',
      average_incoming_flows: 0,
      average_incoming_pps: 0,
      average_incoming_traffic: 0,
      average_incoming_traffic_bits: 0,
      average_outgoing_flows: 0,
      average_outgoing_pps: 0,
      average_outgoing_traffic: 0,
      average_outgoing_traffic_bits: 0,
      host_group: 'global',
      host_network: '45.189.200.0/22',
      incoming_dropped_pps: 0,
      incoming_dropped_traffic: 0,
      incoming_dropped_traffic_bits: 0,
      incoming_icmp_pps: 0,
      incoming_icmp_traffic: 0,
      incoming_icmp_traffic_bits: 0,
      incoming_ip_fragmented_pps: 0,
      incoming_ip_fragmented_traffic: 0,
      incoming_ip_fragmented_traffic_bits: 0,
      incoming_syn_tcp_pps: 0,
      incoming_syn_tcp_traffic: 0,
      incoming_syn_tcp_traffic_bits: 0,
      incoming_tcp_pps: 0,
      incoming_tcp_traffic: 0,
      incoming_tcp_traffic_bits: 0,
      incoming_udp_pps: 0,
      incoming_udp_traffic: 0,
      incoming_udp_traffic_bits: 0,
      initial_attack_power: 0,
      outgoing_dropped_pps: 0,
      outgoing_dropped_traffic: 0,
      outgoing_dropped_traffic_bits: 0,
      outgoing_icmp_pps: 0,
      outgoing_icmp_traffic: 0,
      outgoing_icmp_traffic_bits: 0,
      outgoing_ip_fragmented_pps: 0,
      outgoing_ip_fragmented_traffic: 0,
      outgoing_ip_fragmented_traffic_bits: 0,
      outgoing_syn_tcp_pps: 0,
      outgoing_syn_tcp_traffic: 0,
      outgoing_syn_tcp_traffic_bits: 0,
      outgoing_tcp_pps: 0,
      outgoing_tcp_traffic: 0,
      outgoing_tcp_traffic_bits: 0,
      outgoing_udp_pps: 0,
      outgoing_udp_traffic: 0,
      outgoing_udp_traffic_bits: 0,
      parent_host_group: '',
      peak_attack_power: 0,
      protocol_version: 'IPv4',
      total_incoming_flows: 0,
      total_incoming_pps: 0,
      total_incoming_traffic: 0,
      total_incoming_traffic_bits: 0,
      total_outgoing_flows: 0,
      total_outgoing_pps: 0,
      total_outgoing_traffic: 0,
      total_outgoing_traffic_bits: 0
    },
    ip: '2.3.4.5'
  }

FastNetMon does not explicitly store the time when it discovered an attack and added it to MongoDB, but you can use MongoDB's built-in ObjectId to retrieve this information. For example, suppose you have this attack in the "attacks" collection:

db.attacks.findOne()
{
    "_id" : ObjectId("5a985f3cec7006077668a321"),
    "ip" : ":0001",
    "attack_details" : {
        "attack_uuid" : "e9ba83bd-389f-4332-8e2e-0300c01c839b",
        "attack_severity" : "middle",
        "attack_type" : "unknown",
                ....
    }
}

You can extract the time when this object was added in the following way:

ObjectId("5a985f3cec7006077668a321").getTimestamp()

It will return a timestamp:

ISODate("2018-03-01T20:14:52Z")
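
You can also use this property to select attacks by time. For example, the following mongosh snippet builds an ObjectId from a date and finds all attacks recorded after it (a sketch; adjust the date to your needs):

// Derive the hex seconds-since-epoch prefix that ObjectId uses
var since = new Date("2024-01-01T00:00:00Z");
var hexSeconds = Math.floor(since.getTime() / 1000).toString(16);
// Pad the remaining bytes with zeroes to get the earliest possible id for that time
var sinceId = ObjectId(hexSeconds + "0000000000000000");
db.attacks.find({ _id: { $gt: sinceId } })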

FastNetMon retries connecting to MongoDB multiple times, and you can see these attempts in the log file /var/log/fastnetmon/fastnetmon.log:

2022-08-11 12:28:20,459 [INFO] This is 1 try to connect to MongoDB
2022-08-11 12:28:20,459 [DEBUG] Start MongoDB connection checking
2022-08-11 12:28:20,459 [INFO] PING mongodb database fastnetmon
2022-08-11 12:28:20,460 [ERROR] Could not ping MongoDB: No suitable servers found (`serverSelectionTryOnce` set): [connection refused calling ismaster on '127.0.0.1:27017']: generic server error
2022-08-11 12:28:20,460 [INFO] Will do second try in 5 seconds
2022-08-11 12:28:25,460 [INFO] This is 2 try to connect to MongoDB
2022-08-11 12:28:25,460 [DEBUG] Start MongoDB connection checking
2022-08-11 12:28:25,460 [INFO] PING mongodb database fastnetmon
2022-08-11 12:28:25,460 [ERROR] Could not ping MongoDB: No suitable servers found (`serverSelectionTryOnce` set): [connection refused calling ismaster on '127.0.0.1:27017']: generic server error
2022-08-11 12:28:25,460 [INFO] Will do second try in 5 seconds
2022-08-11 12:28:30,460 [INFO] This is 3 try to connect to MongoDB
2022-08-11 12:28:30,460 [DEBUG] Start MongoDB connection checking
2022-08-11 12:28:30,460 [INFO] PING mongodb database fastnetmon
2022-08-11 12:28:30,461 [ERROR] Could not ping MongoDB: No suitable servers found (`serverSelectionTryOnce` set): [connection refused calling ismaster on '127.0.0.1:27017']: generic server error
2022-08-11 12:28:30,461 [INFO] Will do second try in 5 seconds

fcli has similar logic, and it tries multiple times to establish a connection to MongoDB:

sudo fcli
2022/08/11 12:48:35 We cannot establish connection with MongoDB: server selection error: context deadline exceeded, current topology: { Type: Unknown, Servers: [{ Addr: 127.0.0.1:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }, ] } Attempt number: 1

To investigate the root cause of issues with MongoDB, we recommend checking the daemon status first:

sudo systemctl status mongod

The output for a healthy MongoDB instance looks like this:

● mongod.service - MongoDB Database Server
   Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2022-08-11 12:29:46 BST; 3s ago
     Docs: https://docs.mongodb.org/manual
 Main PID: 8921 (mongod)
   CGroup: /system.slice/mongod.service
           └─8921 /usr/bin/mongod --config /etc/mongod.conf

Aug 11 12:29:46 fastlab1 systemd[1]: Started MongoDB Database Server.

If "Active" field differs from "active", you may have issues with MongoDB.

In addition to the systemd service status, you may check that the MongoDB process is running:

ps aux|grep mongo
mongodb    8921  0.3  0.2 1009456 73424 ?       Ssl  12:29   0:03 /usr/bin/mongod --config /etc/mongod.conf
odintsov   9465  0.0  0.0  12940   968 pts/0    S+   12:47   0:00 grep --color=auto mongo

It may also be useful to check your disk space usage. You need at least 10 GB of spare disk space for MongoDB to operate correctly. You can check disk space usage in the following way:

df -h
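
In addition, you can check how much space the MongoDB data directory itself occupies (assuming the default dbPath /var/lib/mongodb used by the official packages):

sudo du -sh /var/lib/mongodb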

To investigate MongoDB issues in more detail, we recommend checking the log file /var/log/mongodb/mongod.log.

In addition to the log file, you may check systemd service logs in the following way:

sudo journalctl -u mongod -n 1000 -f
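
You can also verify that MongoDB accepts connections and that authentication works by sending a ping from mongosh, for example (a sketch using the same credentials as fcli):

mongosh --username fastnetmon_user --password `sudo cat /etc/fastnetmon/keychain/.mongo_fastnetmon_password` --eval 'db.runCommand({ ping: 1 })'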

If your server runs out of disk space, MongoDB terminates its daemon to prevent data corruption, and you will need to start it again manually. Allocate more disk space to the server or free up existing disk space, and carefully investigate which service used up all the disk space to avoid similar cases in the future. After that, start MongoDB using the following command:

sudo systemctl start mongod

And then you need to restart FastNetMon as it depends on MongoDB:

sudo systemctl restart fastnetmon

You can check that FastNetMon is running well using the fcli command:

sudo fcli show total_traffic_counters

Modern versions of MongoDB require AVX support, and if the CPU on your server does not support it, MongoDB will crash with a SIGILL error such as this:

sudo service mongod status
× mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Thu 2023-11-09 14:56:28 UTC; 5min ago
Docs: https://docs.mongodb.org/manual
Process: 154878 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=killed, signal=ILL)
Main PID: 154878 (code=killed, signal=ILL)
CPU: 12ms
 
Nov 09 14:56:28 flowdetector systemd[1]: Started MongoDB Database Server.
Nov 09 14:56:28 flowdetector systemd[1]: mongod.service: Main process exited, code=killed, status=4/ILL
Nov 09 14:56:28 flowdetector systemd[1]: mongod.service: Failed with result 'signal'.
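
To confirm whether the CPU in your server supports AVX, you can check /proc/cpuinfo (a quick check for Linux hosts):

grep -q avx /proc/cpuinfo && echo "AVX supported" || echo "AVX not supported"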

Upgrade procedure

If you run an old version of MongoDB and want to upgrade to the latest one, you cannot jump directly to the latest version due to MongoDB upgrade process constraints. You need to upgrade step by step from version to version. Fortunately, you can skip some versions where the MongoDB database format did not change.

Before you start the upgrade process, we recommend doing a full backup of the VM/server and exporting the configuration as described in the backup section below.

Let's assume that you run MongoDB 3.4. To upgrade to 5.0, you need to do the following upgrades:
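
As a rough sketch of a single upgrade step (the exact chain of versions depends on your starting release and is described in MongoDB's own upgrade guides): after installing the packages for the next release, raise featureCompatibilityVersion from the MongoDB shell before moving on to the following one. The version string below is only an illustration:

// Check the current compatibility version
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Allow the next upgrade step; replace "4.0" with the release you have just installed
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })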

Removal of a large number of values from the configuration

If you've added too many networks to the configuration and are looking for an easy way to remove all of them, you can use the following command in the MongoDB client:

db.configuration.findOneAndUpdate({}, { $set: {  networks_list: [] } });
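
After changing the configuration directly in MongoDB, ask FastNetMon to re-read it (the same step the restore procedure below uses):

sudo fcli commit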

How to back up the entire MongoDB database?

We recommend using our own backup tool. Alternatively, you can back up the entire MongoDB database with mongodump:

export FASTNETMON_BACKUP_PATH=/var/backup/fastnetmon-`date +"%m-%d-%Y-%T"`
sudo mkdir $FASTNETMON_BACKUP_PATH
sudo mongodump  --username administrator --password `sudo cat /etc/fastnetmon/keychain/.mongo_admin` --out $FASTNETMON_BACKUP_PATH
tar -cpzf ~/fastnetmon_configuration_dump.tar.gz $FASTNETMON_BACKUP_PATH
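
As an optional quick check, you can list the contents of the resulting archive to confirm that the dump was captured:

tar -tzf ~/fastnetmon_configuration_dump.tar.gz | head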

How to import a MongoDB backup?

We recommend using our own backup tool. Alternatively, you can restore a full MongoDB database backup with mongorestore.
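
If you created a tar archive with the backup example above, extract it first and locate the directory that contains the dumped databases (paths below are only illustrative):

mkdir -p /tmp/folder_with_uncompressed_backup
tar -xpzf ~/fastnetmon_configuration_dump.tar.gz -C /tmp/folder_with_uncompressed_backup
# mongorestore expects the directory that holds the per-database dump folders
find /tmp/folder_with_uncompressed_backup -type d -name fastnetmon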

Proceed with caution. This command will overwrite the current configuration with data from the backup:

sudo mongorestore --username administrator --password `sudo cat /etc/fastnetmon/keychain/.mongo_admin` --drop /tmp/folder_with_uncompressed_backup

Then, ask FastNetMon to re-read the new configuration:

sudo fcli commit

Controlling MongoDB verbosity in logs

If you find that MongoDB generates an excessive amount of log messages in /var/log/mongodb/mongod.log, you can decrease the verbosity level by adding the quiet flag in /etc/mongod.conf:

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
  # Reduce logging verbosity a little bit
  quiet: true

For all new installations, this is the default configuration.

Unfortunately, this change does not stop even the latest MongoDB 6.0.13 from logging every single client connection to the log file this way:

{"t":{"$date":"2024-03-06T15:39:51.328+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn11","msg":"Interrupted operation as its client disconnected","attr":{"opId":23223}}
{"t":{"$date":"2024-03-06T15:39:51.772+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn14","msg":"client metadata","attr":{"remote":"127.0.0.1:55784","client":"conn14","doc":{"driver":{"name":"mongo-go-driver","version":"v1.12.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.1"}}}
{"t":{"$date":"2024-03-06T15:39:51.772+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn13","msg":"client metadata","attr":{"remote":"127.0.0.1:55782","client":"conn13","doc":{"driver":{"name":"mongo-go-driver","version":"v1.12.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.1"}}}
{"t":{"$date":"2024-03-06T15:39:51.773+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn15","msg":"client metadata","attr":{"remote":"127.0.0.1:55796","client":"conn15","doc":{"driver":{"name":"mongo-go-driver","version":"v1.12.1"},"os":{"type":"linux","architecture":"amd64"},"platform":"go1.18.1"}}}

MongoDB logrotate configuration

Please note that this change is included in FastNetMon 2.0.362 and newer releases.

To address the issue with MongoDB's excessive logging, we recommend enabling logrotate for the MongoDB log file /var/log/mongodb/mongod.log.

As a first step, please make the following changes to the MongoDB configuration file /etc/mongod.conf:

systemLog:
  destination: file
  logAppend: true
  logRotate: reopen
  path: /var/log/mongodb/mongod.log
  # Reduce logging verbosity a little bit
  quiet: true

Then apply the configuration changes:

sudo systemctl restart mongod

Then create the file /etc/logrotate.d/mongodb with the following content:

/var/log/mongodb/mongod.log
{
   rotate 7
   daily
   size 1M
   missingok
   create 0600 mongodb mongodb
   delaycompress
   compress
   sharedscripts
   postrotate
     /bin/kill -SIGUSR1 $(pidof mongod)
   endscript
}

To check that it works, you can force a log rotation:

sudo logrotate --force /etc/logrotate.d/mongodb
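
Then confirm that a rotated log file has appeared next to the original one:

ls -la /var/log/mongodb/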