Bowtie Controllers¶
A Bowtie controller is the server-side orchestrator that glues all the Bowtie client machines together.
The roles and responsibilities of a controller in your environment include:
Supporting administrative tasks by a privileged user to:
Update policies and their associated ACLs
Grant or revoke privileges for users that have joined the network
Managing the private network, which includes:
Distributing updated routes to other controllers and clients
Restricting or granting access to resources based upon defined ACL policies
Providing DNS for configured domains within the private network
This page covers nuances to bear in mind when operating the Controller appliance, along with guidance that falls outside the scope of Controller Setup. Refer to Controller Operation for exhaustive documentation of all configuration options available for these appliances.
Bootstrapping¶
A Controller’s initial startup logic proceeds as follows:
Run cloud-init.
Check for the existence of /etc/hostname. If present, bootstrap the appropriate settings for the Bowtie server software and supporting services like Grafana. As part of this step, if /etc/ssl/controller.pem and /etc/ssl/controller-key.pem are present, the Controller will configure itself to serve TLS using this certificate and key (see the cloud-init sketch after this list).
Start the Bowtie server daemon. The server daemon ingests a few key files at start time:
If /var/lib/bowtie/init-users is present, create the first set of initial users based on this file's contents. Read about pre-seeding initial users for additional information.
If /var/lib/bowtie/should-join.conf is present, initialize clustering operations with the other Controller indicated in this file. Read about joining other Controllers for additional information.
If /var/lib/bowtie/skip-gui-init is present, skip presenting the initial setup start screen for new Controllers, which is useful for automation-driven provisioning.
Source environment variables for the daemon from /etc/default/bowtie-server and /etc/bowtie-server.d/*.
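For example, a minimal cloud-init user-data sketch that supplies both a hostname and a TLS certificate for this bootstrap step might look like the following; the hostname mirrors the example used later on this page, and the certificate and key contents are placeholders for your own material:
#cloud-config
fqdn: controller-1.net.example.com
hostname: controller-1.net.example.com
write_files:
# Certificate and key consumed during bootstrap if present.
- path: /etc/ssl/controller.pem
  content: |
    -----BEGIN CERTIFICATE-----
    <your certificate>
    -----END CERTIFICATE-----
- path: /etc/ssl/controller-key.pem
  permissions: "0600"
  content: |
    -----BEGIN PRIVATE KEY-----
    <your private key>
    -----END PRIVATE KEY-----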
You may leverage this logic to perform initial configuration to suit your environment:
Skip defining cloud-init hostname settings to avoid running the initial bootstrap process automatically. You can define /etc/hostname later to invoke the same process.
Specify all configuration options (hostname, certificates, and SSO files) in cloud-init user data for a declarative Controller setup with no manual installation steps.
Change log verbosity by dropping environment variable settings into files in /etc/bowtie-server.d/* (a sketch follows this list).
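A minimal sketch of such a drop-in file is shown below; the variable name here is hypothetical, so consult Controller Operation for the environment variables your Controller version actually honors:
# /etc/bowtie-server.d/logging.conf
# RUST_LOG is a hypothetical example variable name for illustration only;
# see Controller Operation for the supported settings.
RUST_LOG=debug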
See Controller Operation for more comprehensive server configuration documentation.
Backup and Restore¶
Bowtie Controllers support the ability to perform regular backups and also to restore from backups.
These settings are defined via each Controller’s backup strategy and controlled via the Backups page.
Backups are fundamentally restic snapshots and may be used with basic restic commands if desired.
Backups are primarily intended to facilitate data recovery in a disaster scenario. To achieve a highly available or fault-tolerant deployment, you should preferentially architect your environment around Multi-Controller Setup, which replicates your data across multiple hosts so that nothing is lost if a Controller is irrevocably destroyed. Running more than one Controller also lets clients fail over, avoiding downtime entirely in this scenario.
We suggest using backup and restore for the following kinds of use cases:
To safeguard data in deployments that operate a single Controller
To back up historical application data in case of configuration mistakes or accidental deletion
To retain older application database versions in case unexpected instability or software bugs make a full restore necessary
Note that Controllers cannot currently rely on backup snapshots to restore individual configuration items (like policies or users) at a fine-grained level – backup operations can restore the entire application database, but not specific objects as represented over the API. This may change in future updates.
Backup Settings¶
For some backup repository types, you will need to provide authentication credentials to successfully create and remove backup snapshots. For local backup paths, credentials are unnecessary. Similarly, if you launch a Controller in AWS with an instance role that grants privileges to your backup S3 bucket, the underlying restic libraries will authenticate without the need for static AWS credentials.
For other cases that may require API keys or authentication tokens, two options are available:
Use the secrets management page of the Control Plane to manage secret values with a web interface.
Alternatively, you may populate the file at /etc/default/backupd with key/value pairs that set your desired credentials. Use the documentation for supported restic environment variables to set the credentials appropriate for your repository type. For example, to set credentials for use with a MinIO deployment located at the backup repository endpoint s3:https://minio.example.com/backups/bowtie/bt1, populate /etc/default/backupd with the following:
AWS_ACCESS_KEY_ID=<your minio access key id>
AWS_SECRET_ACCESS_KEY=<your minio secret access key>
Changes to this file will prompt the Controller’s backup daemon to restart and load these environment variables as necessary. The secrets management page operates on this same file.
Backup Encryption¶
Controller backups are encrypted.
Before using backup and restore, you must define a backup encryption/decryption key that is used both when performing backups and when restoring them (we refer to this value as a “key” in this documentation, but any sequence of random characters constitutes a valid key, and it might also be called a password).
This key should be an ASCII string.
A long sequence of random characters generated by a password manager, or the output of a cryptographically strong command like openssl rand -hex 32, is a good example of a strong encryption key.
Store this value in a secure location apart from the Controller so that it remains available in a disaster recovery scenario where the Controller’s filesystem is unreadable.
To provide this encryption key to the Controller for backup operations, use the secrets management page to load this ENCRYPTION_KEY
variable onto the Controller.
Alternatively, you may define it at the file path /etc/default/backupd
in the following form:
# Replace this value with your real encryption key.
ENCRYPTION_KEY=5352891d395f59cb64d72ea07270f8da
Daemons that honor this configuration key on the Controller will restart as necessary to consume its value when it is defined or changed.
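As a sketch, assuming root shell access on the Controller, you could generate a key with the openssl command mentioned above and append it to the same file in one step; remember to also record the key somewhere off the Controller:
# Generate a 32-byte hex key and append it to the backup daemon's
# environment file. Store a copy of the key outside the Controller too.
echo "ENCRYPTION_KEY=$(openssl rand -hex 32)" >> /etc/default/backupd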
Backup Restore¶
Two primary interfaces exist for restoring from backup: interactive restore and automated restore.
Interactive Restore¶
The restore
command is available from the command line to retrieve backup snapshots, move them into the appropriate filesystem paths, and start or stop the relevant system services in the correct sequence to facilitate successful restore.
You may simply invoke the restore
command which will prompt you for any missing values required to operate, like the desired remote repository URL and backup encryption key.
Alternatively, consult restore --help
for a list of accepted parameters to run the command with predetermined values, including additional actions such as the ability to list available backup snapshots and optionally restore specific backup snapshot IDs.
Automated Restore¶
To instruct a Controller to restore from a previous backup at startup, create the file /etc/restore. At boot, the Controller consumes the values found in this file and feeds them into the restore command outlined in Interactive Restore.
To use this feature, write out the file contents at /etc/restore
with the following variables set:
REPOSITORY
The restic-compatible repository endpoint configured as your backup repository.
ENCRYPTION_KEY
The encryption key you configured when setting up the backup repository.
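For example, using the repository endpoint and encryption key from the Manual Backup Interaction example below, /etc/restore would contain:
REPOSITORY=s3:s3.amazonaws.com/example/bowtie/controller/bt1
ENCRYPTION_KEY=correct-horse-battery-staple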
In an automated Controller provisioning scenario, these files may be written with a mechanism like Seeding Configuration.
The automated restoration process will pause until cloud-init
has finished in order to ensure the file has been written.
After a successful restore, the automated process will remove the file at /etc/restore
to avoid potentially overwriting the running Controller’s database in the future if the Controller is rebooted.
Manual Backup Interaction¶
Underneath the high-level tooling that Bowtie provides for automatic backups, restic is the fundamental tool that manages backup snapshots.
Because backup snapshots are ultimately restic
snapshots, you are free to interact with these backup snapshots as any other restic
repository without negatively impacting the backup or pruning operations that Controllers may run.
For example, assume that you have configured Controller backups for storage in S3 at s3:s3.amazonaws.com/example/bowtie/controller/bt1
using the backup encryption key correct-horse-battery-staple
.
With restic
installed, you may view existing snapshots with the command:
RESTIC_PASSWORD=correct-horse-battery-staple \
RESTIC_REPOSITORY=s3:s3.amazonaws.com/example/bowtie/controller/bt1 \
restic snapshots
While this is a supported method to view, delete, and manage snapshots, we still recommend using the approaches outlined in Backup Restore to restore a Controller from backup snapshots because the restore
tool includes additional logic to move restored files into the correct paths and start/stop services in the correct sequence when restoring.
This technique is also viable if you need to retrieve your application data, for example to manually recover specific pieces of data in tandem with a Bowtie representative.
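As a sketch, to pull a specific snapshot into a temporary directory for inspection without touching the running Controller, you can use restic's restore subcommand against the same repository; the snapshot ID below is a placeholder taken from the restic snapshots output:
# Restore snapshot ab12cd34 (placeholder ID) for manual inspection.
RESTIC_PASSWORD=correct-horse-battery-staple \
RESTIC_REPOSITORY=s3:s3.amazonaws.com/example/bowtie/controller/bt1 \
restic restore ab12cd34 --target /tmp/bowtie-restore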
Self-Signed Certificate Bundles¶
Some environments may require trusting a self-signed certificate if HTTPS endpoints are intercepted. Follow the installation guide’s section about self-signed certificates to install one.
Controller Security¶
Because Controllers are edge devices on your network, Bowtie takes the security of our software seriously. While most security features are built into the products we ship and configurable at the application layer, a few features are worth calling out specifically:
Vulnerability Data¶
Controller artifacts undergo regular scanning as part of our release process. Although third-party tools like Wiz can scan Controller systems, our experience has shown that, with our detailed knowledge of Controller internals, Bowtie is better suited to provide scanning results that we can triage to filter out false positives and noise.
Refer to Vulnerability Data for additional information about retrieving scan data for your specific Controller version and the available formats.
Audit Trails¶
Controller server logs include the output of a variety of server daemons, but logs for the bowtie-server.service systemd service include entries annotated with audit_event=true that can be targeted to record key events like user authentication or address assignment.
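For example, a quick sketch using standard journalctl and grep to surface only audit events from an ssh session:
# Show bowtie-server log entries from the last hour that carry the audit annotation.
journalctl -u bowtie-server.service --since "1 hour ago" | grep 'audit_event=true'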
Refer to Application Logs for additional information.
Updates¶
Bowtie strongly recommends opting-in to Unattended Updates whenever possible to stay ahead of new vulnerabilities and the latest stability fixes. Generally speaking, Controllers can move upwards in version without breaking backwards compatibility with Client packages.
Terraform¶
Deploying Controllers via Terraform is supported. cloud-init is included on all images to permit configuration via user-data at instance creation time as well. Paired with native cloud images for AWS, GCE, and Azure, you can fully automate the installation of Bowtie Controllers.
Examples are provided for both Deploying to AWS and Deploying to Azure.
Deploying to AWS¶
When writing Terraform code to deploy and manage Controllers, you may want to leverage the aws_ami data source to acquire the appropriate Bowtie Controller AMI ID. For example, the following code snippet demonstrates how to acquire a Controller AMI ID and create an AWS instance:
# Retrieve the Bowtie Controller appliance ID
data "aws_ami" "controller" {
  most_recent = true
  owners      = ["055761336000"] # Bowtie’s account ID
}

# Load cloud-init user data from an external file:
data "template_file" "user_data" {
  template = file("user-data.yaml.tpl")
  vars = {
    site_id           = var.site_id           # Set this to your site's UUID
    sso_client_id     = var.sso_client_id     # Acquire from your SSO provider
    sso_client_secret = var.sso_client_secret # Acquire from your SSO provider
    sso_trusted_group = var.sso_trusted_group # Which group to grant access to
  }
}

resource "aws_instance" "controller" {
  ami = data.aws_ami.controller.id

  instance_type          = var.instance_type    # replace with your desired instance size
  subnet_id              = var.subnet_id        # replace with your subnet ID
  vpc_security_group_ids = var.security_groups  # replace with your sg ID
  key_name               = var.key_name         # replace with your key name

  # 20-50 GiB is likely enough for most organizations. Use a larger disk for
  # very active organizations with large networks.
  root_block_device {
    volume_size = 50 # GiB
  }

  # Set user_data to a cloud-config compatible value:
  user_data = data.template_file.user_data.rendered

  # Prompt rebuilding instance if the user-data changes.
  user_data_replace_on_change = true
}
The referenced user-data.yaml.tpl
would contain the content noted in Terraform cloud-init User Data.
Terraform resources may look slightly different for the first Controller versus all subsequent Controllers as laid out by Multi-Controller Setup.
Notably, the /var/lib/bowtie/should-join.conf file instructs a new Controller to peer with an existing node, and SITE_ID should be consistent within a site (such as a public cloud region or on-premises datacenter); the contents of this file are shown below.
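For reference, the contents of should-join.conf are a single entrypoint line pointing at an existing Controller, mirroring the cloud-init example later on this page:
entrypoint = "https://your-first-controller.example.com"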
Deploying to Azure¶
Deploying a Controller into Azure is similar to the steps for AWS: establish a new instance based on the latest Bowtie Controller ID, define the relevant cloud-init data, and load it as user-data for the new instance.
# Define user-data templated from an external YAML file.
data "template_file" "user_data" {
  template = file("user-data.yaml.tpl")
  vars = {
    site_id           = var.site_id           # Set this to your site's UUID
    sso_client_id     = var.sso_client_id     # Acquire from your SSO provider
    sso_client_secret = var.sso_client_secret # Acquire from your SSO provider
    sso_trusted_group = var.sso_trusted_group # Which group to grant access to
  }
}

# Controller instance
resource "azurerm_linux_virtual_machine" "controller" {
  name                = var.instance_name # replace with your desired controller name
  resource_group_name = var.rg_name       # replace with your existing resource group name
  location            = azurerm_resource_group.testing_rg.location
  size                = var.instance_size # replace with your desired instance size

  admin_username = "azureuser"
  # Rely on ssh pubkey authentication instead.
  disable_password_authentication = true
  custom_data                     = base64encode(data.template_file.user_data.rendered)

  # This parameter should be set to the image resource identifier for
  # Bowtie Controllers taken from
  # https://api.bowtie.works/platforms/Azure. You may optionally use a
  # well-defined release with a format like `.../version/24.02.003` or
  # reference the latest gallery image with `.../version/latest`
  # instead.
  source_image_id = var.bowtie_gallery_image_id

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  admin_ssh_key {
    username   = "azureuser"
    public_key = var.instance_pubkey # replace this with your desired pubkey
  }

  network_interface_ids = [
    # Replace this with a previously-created network interface
    # resource that suits your network topology.
    azurerm_network_interface.instance_interface.id
  ]
}
See Terraform cloud-init User Data for example contents of the referenced user-data.yaml.tpl.
Terraform cloud-init User Data¶
When creating cloud instances with Terraform, Seeding Configuration provides a way to bootstrap configuration options for new Controllers. Use the following boilerplate as reference to configure the relevant portions of your deployment:
#cloud-config

# user-data is fed into cloud-init. There are a few key fields:
#
# - The `fqdn` fields set the Controller’s endpoint. This informs
#   the Controller of its own endpoint, which is important for acquiring
#   TLS certificates if you decide to leverage ACME certs.
#
# - The site_id variable injected into a file in
#   /etc/bowtie-server.d should be uniform across Controllers in the
#   same site. In this example, we assume that the Terraform variable
#   is set earlier. This can be a UUID4 string.
#
# - Include the BOWTIE_SYNC_STRATEGY and BOWTIE_SYNC_PSK values
#   in a file within /etc/bowtie-server.d when this Controller should
#   join an _existing_ Bowtie network and Controller. The file
#   placed at /var/lib/bowtie/should-join.conf is also necessary for
#   additional Controllers to discover others to join.
#
# - Create the file /var/lib/bowtie/skip-gui-init if you would like to
#   perform a completely headless installation and configuration of a
#   Controller. This will bypass the initial setup wizard and derive
#   the appropriate endpoint values to bootstrap its configuration.
#
# - Create the file /var/lib/bowtie/init-users if you would like to
#   provision initial users instead of granting administrative
#   access to the first user. See the documentation about
#   cloud-init in the "Controller Setup" docs for more.
#
# - /var/lib/bowtie/should-join.conf is an optional file that you may
#   set for all Controllers following the first installation to instruct
#   subsequent Controllers to cluster with the Controller indicated in
#   this file.
#
# - The file at /etc/dex/gitlab.yaml represents SSO
#   configuration. In this Terraform file, the example assumes that
#   you’ve declared the appropriate client values in sso_client_id
#   and sso_client_secret.
#
# - The yaml `groups` field for the SSO configuration will be
#   granted login access to the web interface.
#
# - Platforms like AWS should automatically provision ssh keys
#   for default users but you may optionally include directives
#   under the `users` key to inject additional ssh keys during
#   initial provisioning.

# Set the DNS endpoint for this controller
fqdn: controller-1.net.example.com
hostname: controller-1.net.example.com
preserve_hostname: false
prefer_fqdn_over_hostname: true

write_files:
# Path to the bowtie-server environment variable file
- path: /etc/bowtie-server.d/custom.conf
  content: |
    # Set identical values for all Controllers in the same site
    SITE_ID=${site_id}
    # Uniform value consistent across the entire cluster of Controllers
    BOWTIE_SYNC_PSK=<your sync PSK>

# Perform Controller bootstrapping without the initial setup wizard.
# Ensure that DNS is properly pointed at the hostname indicated in `fqdn`
- path: /var/lib/bowtie/skip-gui-init

# Create a first user automatically:
- path: /var/lib/bowtie/init-users
  content: |
    me@example.com:$argon2i$v=19$m=4096,t=3,p=1$Y2JhZmQ1ZWYtMTAyNC00NDAxLWFlNWMtMjJlMjYwNWM4OTY5$P/tqTpNHVtAjB3zQizSftOdJiTEi3PpVTcVdCx5/eVQ

# Cluster configuration file. Set the URL to an existing Controller if
# this should be part of an existing deploy. Do not include this file
# for your first Controller.
- path: /var/lib/bowtie/should-join.conf
  content: |
    entrypoint = "https://your-first-controller.example.com"

# Will configure single-sign on
- path: /etc/dex/gitlab.yaml
  content: |
    id: gitlab
    name: GitLab
    type: gitlab
    config:
      clientID: ${sso_client_id}
      clientSecret: ${sso_client_secret}
      redirectURI: $DEX_ORIGIN/dex/callback
      useLoginAsID: false
      groups:
      - ${sso_trusted_group}

# Optionally configure additional public keys for shell access.
users:
- name: root
  lock_passwd: true
  ssh_authorized_keys:
  - ssh-ed25519 <replace with your pubkey> myuser
Bowtie Terraform Provider¶
To configure the Bowtie application itself beyond just the network Controller appliance, you may optionally use the Bowtie Terraform Provider to declaratively manage resources over the Bowtie API natively in Terraform. This is a powerful approach when paired with building Controllers with Terraform to achieve declaratively managed infrastructure.
The Bowtie provider page provides the latest documentation for supported resources and example code. An example is included here for convenience to demonstrate how to use the provider:
terraform {
  required_providers {
    bowtie = {
      source = "bowtieworks/bowtie"
      # Latest at time of writing, you may need to update this to
      # point at the latest version.
      version = "0.4.0"
    }
  }
}

# Instantiate the Bowtie provider with your own Controller
# information:
#
# - `host` should be an HTTPS-terminated endpoint that resolves to one
#   of your Controller URLs.
#
# - `username` is an administrative user that you have either set a
#   password for in the Users web interface or provisioned with
#   the init-users configuration file. In this example, we suggest
#   setting the `BOWTIE_USERNAME` environment variable instead to
#   avoid hard-coding credentials into Terraform state.
#
# - `password` should be the plaintext password for the associated
#   user specified in the `username` field. This password can be
#   set in the web interface or the init-users configuration
#   file. In this example, we suggest setting the `BOWTIE_PASSWORD`
#   environment variable instead to avoid hard-coding credentials
#   into Terraform state.
#
provider "bowtie" {
  host = var.bowtie_host
}

# Configure sites with the `bowtie_site` resource.
#
# Note that this resource will create any necessary sites and store
# their ID locally. If you would like to manage an _existing_ site, for
# example, the site that an existing Controller is already a part of,
# you may `import` this site to avoid creating additional sites. For
# example, assuming that you provisioned a new Controller with SITE_ID
# set to eb69ad74-1ed5-4fa4-8fde-6252ffb9455e in cloud-init data, you
# can bring this site into your Terraform state with the following
# command. This example assumes that you have configured the provider
# with a functional endpoint that the `terraform import` command can
# reach to retrieve information from the Bowtie API.
#
# $ terraform import 'bowtie_site.my_site' eb69ad74-1ed5-4fa4-8fde-6252ffb9455e
#
resource "bowtie_site" "us_west_2" {
  name = "AWS us-west-2"
}

# Configure site ranges.
#
# Requires an associated site ID which can be referenced from previous
# resources such as with this example and bowtie_site.us_west_2.
#
resource "bowtie_site_range" "us_west_2_private" {
  site_id     = bowtie_site.us_west_2.id
  name        = "AWS us-west-2 private subnet"
  description = "Private subnet"
  ipv4_range  = "10.0.0.0/16"
}

# Configure managed domains with DNS.
#
# Servers should be reachable from Controllers to facilitate name
# resolution correctly. Enable DNS64 to send traffic destined to
# matching domains over the private network tunnel.
#
resource "bowtie_dns" "private_my_domain" {
  name     = "private.my.domain"
  is_dns64 = true
  servers = [{
    addr = "192.0.2.1"
  }]
  excludes = [{
    name = "other.my.domain"
  }]
}

# Configure DNS block lists.
#
# DNS block lists are configured against upstream URLs that are
# fetched periodically with an optional list of names to exclude
# from blocking.
#
resource "bowtie_dns_block_list" "threat_intelligence" {
  name              = "Threat Intelligence Feed"
  upstream          = "https://raw.githubusercontent.com/hagezi/dns-blocklists/main/domains/tif.txt"
  override_to_allow = ["permitted.example.com"]
}

# Configure resources for access control via policies.
#
# There are a variety of options to control access, including CIDR
# ranges, protocol (such as ICMP or HTTP), and ports. See the provider
# documentation for the `bowtie_resource` resource for a complete
# listing of all available schema options.
#
resource "bowtie_resource" "all" {
  name     = "All Access"
  protocol = "all"
  location = {
    cidr = "0.0.0.0/0"
  }
  ports = {
    range = [
      0, 65535
    ]
  }
}

# Group resources together for targeting by policies.
#
# Note that resource groups can be nested via the `inherited` option.
#
resource "bowtie_resource_group" "private_resources" {
  name      = "Access to all private resources"
  inherited = []
  resources = [bowtie_resource.all.id]
}
Additional items to remember when using the native Terraform Bowtie provider:
The provider is under active development. If you encounter APIs that may not have equivalent Terraform resources, let us know! The provider is an open source project.
Many resources support terraform import to populate Terraform state with existing resources already present over the control plane web API. Remember to import these resources if you would like to avoid creating duplicates (a brief sketch follows this list).
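As a brief sketch, importing an existing resource group follows the same pattern as the site import shown in the provider example above; the ID placeholder here stands in for the object's UUID as reported by the Bowtie API:
$ terraform import 'bowtie_resource_group.private_resources' <resource group UUID>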
Provider apply Ordering¶
Many teams manage infrastructure-as-code through a top-level Terraform configuration that declares desired state across many domains, including providers like EC2, Route53, and more.
While Bowtie resources can fit into this model, there are a few caveats to remember when Terraform refreshes state to generate a final plan for the apply step.
The provider "bowtie" { }
block must be able to successfully authenticate against a Controller endpoint when terraform apply
attempts to manage resources.
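As a sketch, you can supply credentials through the BOWTIE_USERNAME and BOWTIE_PASSWORD environment variables mentioned in the provider example above, which keeps them out of your Terraform code and state:
# Provide provider credentials via the environment rather than HCL.
export BOWTIE_USERNAME='admin@example.com'
export BOWTIE_PASSWORD='<your administrative password>'
terraform apply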
If you intend to deploy a new Controller in the same Terraform run in which you declare a Bowtie provider and Bowtie resources, you may need to amend your Terraform code to enforce ordering so that resources are created in the correct sequence.
For example, consider a Terraform deployment in which an AWS instance with a Route 53 record serves as the Bowtie API endpoint.
To delay calling the API endpoint until it has become available, you might consider code like the following, which injects a null terraform_data
resource that blocks until the indicated API endpoint is reachable:
# Create a `terraform_data` resource that does nothing except wait for
# the API endpoint to become responsive.
resource "terraform_data" "bowtie_api_wait" {
  # Loop until a curl command against the endpoint stored as
  # local.bowtie_endpoint responds.
  provisioner "local-exec" {
    command = <<-EOF
      until curl --fail --silent ${local.bowtie_endpoint}/-net/api/v0/ok
      do
        echo "Waiting for Bowtie API..."
        sleep 2
      done
      echo "API is up! Pausing briefly to ensure the endpoint is stable."
      sleep 10
    EOF
  }

  # Insert your own resource that must come before the API is
  # available. This could be a module, elastic IP, or DNS record.
  depends_on = [aws_route53_record.bowtie_endpoint]
}

# The dependent resource can be a module or individual `bowtie_*`
# resources:
module "bowtie" {
  #
  # ...other arguments...
  #

  # Ensure that this module only gets started after the API is online.
  depends_on = [terraform_data.bowtie_api_wait]
}
This is the recommended approach and enables your Terraform workflow to operate within a single run without ordering issues. If you are unable to use such a solution, you might consider other potential options:
If you have the ability to bootstrap an initial Controller that can serve as a starting point for other Terraform-managed Controllers to join as part of a cluster, it can serve as the endpoint to let the provider succeed when refreshing its state.
Through resource targeting, you can apply only a subset of your Terraform resources to avoid instantiating providers that do not yet have ready endpoints. For example, given this example main.tf file that declares a module to manage AWS resources as well as a Bowtie deployment:
module "us_west_2" {
  source = "/path/to/custom/module/aws"
  # You likely have additional module options here
}

module "our_bowtie_org" {
  source = "/path/to/custom/module/bowtie"
  # You likely have additional module options here
}
You can build the requisite Controller hosts first by providing the module name to terraform apply in the following form:
$ terraform apply -target=module.us_west_2
After the apply step has created the Controllers that serve the Bowtie API, you can then apply the Bowtie resources, which should now succeed:
$ terraform apply -target=module.our_bowtie_org
Alternatively, assuming that the Bowtie API is available, applying the entire Terraform state should also succeed:
$ terraform apply
Caddy¶
Bowtie Controllers front all inbound traffic with Caddy, a modern reverse proxy with secure defaults and automatic SSL/TLS management. In some rare cases, the reverse proxy configuration may need to be reset to restore known-working defaults.
To do this, you can run the following command over an ssh session on the impacted Controller, which loads the stock Caddy configuration and sets it as the running configuration:
curl "http://127.0.0.1:2019/load" -H "Content-Type: text/caddyfile" --data-binary @/etc/caddy/caddy_config
Confirm that the reverse proxy is functioning correctly by browsing to your Controller’s HTTP(S) endpoint or viewing the running configuration with curl http://127.0.0.1:2019/config/ | jq
.
Helm¶
The Kubernetes installation documentation provides comprehensive information and examples regarding how to deploy the server-side component of Bowtie to your Kubernetes environment. The Kubernetes Help section of Troubleshooting offers additional guidance.
Collecting Observability Data¶
Reference Exporting Telemetry for examples of defining additional OpenTelemetry Collector exporters.
The opentelemetry-collector-contrib package permits a wide range of exporter targets, so you do not necessarily need to run an identical observability stack on your own infrastructure (Prometheus for metrics, Loki for logs, and Tempo for traces). For example, you may define an Elasticsearch exporter and then append it to the Controller’s log pipeline to receive and store logs in your own Elasticsearch cluster in tandem with the Controller’s local Loki installation. This strategy is equally valid for metrics and traces.
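A minimal sketch of such a configuration, following the same /etc/otel.yaml drop-in pattern shown under Verdict Collection Examples below, might look like the following; the Elasticsearch endpoint is a placeholder, and exporter option names can vary between opentelemetry-collector-contrib versions, so check the upstream exporter documentation:
exporters:
  elasticsearch:
    # Placeholder endpoint; point this at your own cluster.
    endpoints: [https://elasticsearch.example.com:9200]
service:
  pipelines:
    logs:
      exporters:
        - elasticsearch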
Note that Prometheus typically operates on a pull-based scraping scheme, so each Controller exposes port :9090 to serve all of its metrics from a single endpoint. You may elect to implement Prometheus federation to scale hierarchically and receive all of the discrete metrics that are visible from the local Grafana installation.
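A minimal federation sketch for a central Prometheus server, assuming a Controller reachable at controller-1.net.example.com (the hostname used elsewhere on this page), could look like:
scrape_configs:
  - job_name: 'bowtie-controller-federation'
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        # Pull every series from the Controller; narrow this selector
        # to reduce volume if desired.
        - '{__name__=~".+"}'
    static_configs:
      - targets:
          - 'controller-1.net.example.com:9090'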
Audit Logging¶
For operators who need insight into auditing events such as user authentication or IP assignment activity, refer to Logs and specifically the section about audit logs under Application Logs.
Policy Verdict Tracking¶
Every Controller provides settings to optionally report on per-packet policy decision verdicts that occur when inbound traffic flows over the network control plane. This may take the form of metrics or logs (which is configurable) and operators may choose to view the resulting data locally by leveraging the Controller-local Observability stack or forwarding this data off-Controller by Exporting Telemetry.
In the case of policy verdict tracking, there are additional considerations to bear in mind:
Because the policy engine is consulted for all network traffic, the volume of metrics and logs may be significant. You should be prepared to handle log streams with significant amounts of output volume and metrics with a nontrivial number of active time series reported at any given time.
In general, endpoint device activity can be measured and queried based upon either kind of data – metrics include metadata like device or user ID as part of their labels and logs may be counted in aggregate to estimate volume. Given operator flexibility, Bowtie suggests relying on metrics within environments that have established auditing requirements because metrics incur less processing overhead for Controllers, generate less overall volume of data, and still achieve similar levels of granularity given sufficiently sophisticated queries.
Anecdotal performance benchmark measurements taken with policy verdict tracking enabled under pathological traffic patterns (for example, loaded with
iperf3
traffic) show that reporting metrics yields almost no negative performance impact, while reporting logs can noticeably reduce overall throughput.
Whether metrics or log verdict tracking are enabled, either may be used from the local observability stack or exported for centralized aggregation:
For local consumption, use the Grafana installation to view metrics (exposed via prometheus) or logs (exposed via loki).
To forward these signals to a receiving endpoint, we strongly suggest relying on strategies similar to those outlined in Exporting Telemetry. This usually takes the form of tapping into a receiver and constructing a pipeline that forwards a metrics or logs stream to a designated exporter suitable for your environment. Because bowtie-server.service forwards logs and metrics via OTLP into the local opentelemetry-collector.service daemon, you may freely choose an exporter to collect these signals.
Verdict Collection Examples¶
The following YAML document, when placed at /etc/otel.yaml, will aggregate all logs from the Controller and deliver them to Sumo Logic.
exporters:
  sumologic:
    endpoint: https://my-endpoint
service:
  pipelines:
    logs:
      exporters:
        - sumologic
Similarly, this configuration file will deliver all Controller metrics to Datadog:
exporters:
  datadog:
    api:
      key: ${env:DATADOG_KEY}
service:
  pipelines:
    metrics:
      exporters:
        - datadog
Nix and NixOS¶
Controller appliances are based on the NixOS Linux distribution. If you find yourself interacting with Bowtie Controllers in a command line shell, there are some key differences to keep in mind when performing tasks like package installation or configuration management.
Message of the Day (motd)
Many of these suggestions are included in the login shell message for ease of discovery and are repeated here for the sake of completeness.
Package Management¶
A variety of packages are available for ad-hoc installation as with a distribution like Debian or Red Hat, albeit with a different package manager.
You may find package names by:
Entering a missing command name and running it from the shell. A shell hook will inform you whether the indicated command is available in a named nix package.
Searching for specific file contents among all known packages with nix-locate. For example, to find which package provides the executable ethtool:
nix-locate --whole-name --at-root /bin/ethtool
Consult nix-locate --help for additional flags.
Searching for package names explicitly. You can either search the nixpkgs repository directly:
nix search nixpkgs ethtool
Or instead search the local database. This method may be slow.
nix-env -qaP ethtool
Once you’ve found the desired package name, install it:
Using nix-env will install the package persistently so it is available across upgrades, logins, and cache rotations:
nix-env -iA nixos.ethtool
Alternatively, you may enter an ephemeral shell with the desired program in $PATH. The installed package will be cleaned up later when the system undergoes regular cache clearing.
nix-shell -p ethtool
Sandbox¶
A NixOS system does not include operating system-wide shared libraries in /usr
or /lib
.
If you need to run an executable obtained from elsewhere that is not available as a package, you will need to stub out a sandbox environment so that dynamically linked executables can run.
The sandbox
command is provided for this purpose:
$ sandbox
$
The shell will enter an ephemeral environment populated with functional, sandboxed /lib
and /usr
directories.
You may download and run any executables or utilities and then exit the sandbox normally via exit
or Ctrl-d
.