****************
Controller Setup
****************

.. spelling:word-list::

   plaintext
   unencrypted

Proceed through these setup instructions sequentially or jump to a particular section of interest. The outline for the entire process is:

#. Install the controller using an :ref:`appliance or container image `. Ensure that it is :ref:`reachable over the public internet ` with the :ref:`right ports open `. Cloud users of platforms like AWS should define a :ref:`cloud-init ` file before first boot to ensure that you have terminal access to the Controller and to optionally configure HTTPS, your Controller’s hostname, and SSO. You may optionally use :ref:`terraform` or :ref:`helm` for this step.
#. Configure :ref:`endpoint`. We also recommend using :doc:`sso`. Browse to your Controller either via DNS that you've directed at your Controller’s public IP address or the plain public IP address if you haven’t configured DNS.
#. (Optional) Install any additional controllers by following :doc:`ha-controller`.
#. Mission accomplished! Proceed to :doc:`setup-client` to install the Bowtie Client on one or more assets in your network which will connect to the Controller that you've set up here.

Any further Controller configuration can be performed over the web interface as outlined in :doc:`operating/controller` or with the :ref:`terraform-provider`. Remember that :ref:`controller-sos` is available for assisted support at any point after the Controller is reachable over the network, and enterprise-level :ref:`controller-chat` is available for licensed customers.

**Step Summary**

.. contents::
   :local:
   :class: this-will-duplicate-information-and-it-is-still-useful-here
.. _preflight:

Pre-flight Checks
=================

Before you install a Bowtie :doc:`controller `, consider :ref:`controller-requirements`, your private :ref:`networking strategy ` (where an appliance should be deployed), open :ref:`port requirements `, and whether to :ref:`pre-seed configuration files ` or do so after first boot.

.. _controller-requirements:

Controller System Requirements
------------------------------

In terms of resource use, a Controller behaves like a typical network appliance. We do not anticipate (nor have we observed) heavy load during the course of normal operation, though your particular case may vary based upon the number of users and volume of traffic:

- A minimum of 2 cores is recommended for a basic installation. If your organization’s total number of client devices will exceed one thousand nodes, consider 4 or more cores.
- 4GB of memory should be sufficient for most installations.
- To accommodate sufficient data retention across both application data and observability metrics and logs, a minimum disk size of 50GB is recommended.
- In addition to normal application data (access policies, users, and so on), Controller cloud images ship with a collection of :ref:`controller-observability` tools. Your chosen disk size may impact the data retention period for data such as logs and metrics. For example, you may choose to dramatically increase disk size to retain several months of :ref:`controller-logs`. If you have needs for centralized log aggregation, please ask.
- If you anticipate heavy network activity across your network control plane, ensure that sufficient bandwidth is available on any attached NICs. For example, in terms of `AWS EC2 instance types, m5.large is a known-compatible size `_. We strongly suggest using `Nitro `_ enabled instances to facilitate access to the serial console for :ref:`controller-sos`.

We recommend modeling Controllers as easily replaceable "`cattle `_" that can be scaled elastically.
We build this assumption for reliable fail-over and scalability directly into both the server-side Controller and client-side Client software, so bear this in mind when considering your deployment topology. For example, if your on-premise hypervisors have relatively limited capacity, Bowtie can support many small Controllers rather than a few large ones. If you need to fine-tune resource allocation based upon observed measurements in your environment, consider overestimating system requirements and then collecting usage numbers using :ref:`controller-grafana`.

.. include:: partials/serial-console.rst

.. _networking:

Network Requirements
--------------------

As a gateway to network-adjacent resources, your controller should satisfy a few requirements:

- Any private network resource that you intend to access through the Bowtie controller should be accessible from the Bowtie controller itself. For example, if you're configuring a Bowtie network to facilitate access to a web application within an AWS VPC that has no public network address, your Bowtie controller likely needs to be installed within a subnet that is routable to the private subnet. A Bowtie container executing in a Kubernetes environment will be able to route inbound VPN traffic to any pod that the Bowtie application itself can reach.
- The controller should be deployed under a network address accessible from the public Internet because:

  - Bowtie clients require a generally-available endpoint that they will use to establish private networking routes, and
  - The Bowtie controller will leverage its public address and DNS to acquire TLS certificates which are critical to ensuring secure access to its network endpoints (the Controller will not perform this step if you :ref:`provide your own certificate material `)

  For example, if you install your Bowtie controller in EC2, it should be given a public network address and DNS name in Route 53 that can be resolved from any public DNS server.
In Kubernetes, you will need to use an `Ingress Controller `_ to route inbound HTTP traffic as well as Wireguard UDP traffic to Bowtie ``Deployments``.

In the cloud and other virtualized environments, a Bowtie Controller will solicit the network for DHCP addresses on all attached network interfaces. If your network instead requires a statically-configured IP address, please reference the :ref:`static networking ` section under :ref:`seeding` to set values like address, gateway, and more at boot time. The Controller :ref:`controller-networking` documentation outlines how to make configuration changes to the network after initial setup if desired.

.. _self-signed-certs:

------------------------
Self-Signed Certificates
------------------------

Within environments that require trusting self-signed certificates (such as for TLS interception), you may choose to trust your root certificate on Controllers.

.. admonition:: ``/etc`` Trusted Cert File
   :class: warning

   The typical file location for root certificate trust on Linux systems at ``/etc/ssl/certs/ca-bundle.crt`` is enforced at the operating system configuration level and will be overwritten to enforce the configured state if the file or link is changed. Use the method described in this section instead to choose an alternative trusted certificate authority (CA) bundle.

1. Move your custom certificate authority bundle file onto the Controller. You may choose a path in ``/etc/ssl/certs/`` without risking its deletion during Controller upgrades.
2. Set the ``NIX_SSL_CERT_FILE`` environment variable to the path of your certificate bundle for the daemons that should honor the self-signed certificate.

For example, if a custom certificate bundle were present at ``/etc/ssl/certs/custom.crt``, the following commands would trust this bundle for all daemons necessary for a Controller to operate without errors behind a self-signed certificate:
.. code:: sh

   for override in /etc/default/{bowtie-server,dex,update}
   do
     echo NIX_SSL_CERT_FILE=/etc/ssl/certs/custom.crt >> $override
   done

Trusting the self-signed certificate bundle for each of these daemons effects the following:

- ``bowtie-server`` - OAuth/SSO authentication steps against TLS endpoints, retrieving DNS block lists from HTTPS endpoints, licensing, and Control Plane support.
- ``dex`` - If you are leveraging :ref:`sso` against an endpoint secured behind a self-signed certificate, trusting the certificate from this daemon is necessary.
- ``update`` - For updates to function correctly, both version information and updated packages are fetched over HTTPS and must be set to trust a self-signed certificate if one is in use.

Other daemons that are not critical to Controller operation may also need their trusted bundles set as necessary:

- ``/etc/default/sos`` - To generate and deliver support bundles to Bowtie, Controllers send these payloads to an HTTPS endpoint.
- ``/etc/default/backupd`` - If your backup destinations have a self-signed certificate in front of their endpoints, you may need to set the trusted bundle for the backup daemon.

.. _ports:

Open Port Requirements
----------------------

.. admonition:: Required Ports
   :class: attention

   Bowtie controllers, in a default configuration, require ports ``tcp:443`` for the HTTPS API and ``udp:443`` for the Wireguard tunnel connection to be open in order to operate. Please be sure both ports are open and reachable **from all sources**. You can find more information about both required and optional port configurations below.

In order to ensure that your Controller can communicate with Clients and other network peers, all **established network connections** should be permitted via your platform’s network configuration control plane.
For example, in ``iptables`` terms, you should permit ``state RELATED,ESTABLISHED`` to and from the Controller (this is presented as an example - if you’re operating within an environment like EC2, these types of state rules are likely already in place, although you likely need to explicitly grant new outbound and inbound connections).

----------------
Kubernetes Ports
----------------

Within Kubernetes, an ``Ingress`` is created that directs traffic destined for the ``endpoint`` host to the associated Bowtie ``Service``. By default, the Chart also creates a ``ConfigMap`` that configures `ingress-nginx `_ to route inbound UDP 443 traffic to Bowtie using `its UDP routing feature `_. This feature can be disabled with the indicated value in the Chart’s ``values.yaml`` file.

.. admonition:: Kubernetes VPN Traffic
   :class: hint

   If you are *not* using ``ingress-nginx``, you should disable inbound UDP routing and instead reference your Ingress Controller’s documentation to determine how to route inbound UDP traffic over port ``443`` to the Bowtie server ``Service``.

In a Kubernetes-based deployment, Bowtie does not manage TLS or ACME-provided certificates and defers certificate procurement to the cluster’s running Ingress Controller.

-------------------------------
Cloud and Virtual Machine Ports
-------------------------------

=========
Inbound
=========

Ensure that you open the following inbound ports **from all sources**. Note that exposing port ``tcp:80`` -- normally used for unencrypted HTTP -- is *optional* depending on your operational preference:

- You may elect to open port ``tcp:80`` for typical HTTP to HTTPS redirect behavior when TLS is available and to leverage `http-01 ACME validation `_ if you :ref:`provision certificates dynamically `.
- Alternatively, you may leave ``tcp:80`` closed entirely to minimize the risk of plaintext over the wire and reduce attack surface on your network perimeter.
  While ``http-01`` ACME validation will be unavailable, you can still acquire dynamic certificates via `tls-alpn-01 validation `_ without any changes -- the selection between ``http-01`` and ``tls-alpn-01`` is automatic.

.. _acme-caveats:

.. admonition:: Provisioning ACME certificates
   :class: important

   ACME certificates are provisioned automatically using `Caddy’s `_ `automatic HTTPS capabilities `_. Caddy will strive to acquire certificates through whatever means are available, whether ``http-01`` or ``tls-alpn-01`` validation, and try multiple ACME providers (Let’s Encrypt and ZeroSSL). Bear in mind the following:

   - Let’s Encrypt enforces `rate limits `_. If you are iterating rapidly with proof-of-concept Controllers or a large number of Controllers, you may want to ensure that your number of new Controller certificates does not impact your domain’s rate limits.
   - If Caddy falls back to `ZeroSSL `_, whether due to Let’s Encrypt rate-limiting or service availability, be aware that ZeroSSL does not support the ``tls-alpn-01`` ACME validation protocol, and you will need to expose ``:80`` to the Controller if you would like to acquire certificates automatically.

   :ref:`Providing your own certificates ` is an alternative method if you would like to avoid using ACME to acquire certificates.

The list of required ports is:

- ``tcp:443`` *or* ``tcp:80,443`` (see the above bullet points)
- ``udp:443``

Usage:

- ``TCP:80`` is used for redirects and for acquiring certificates via ``http-01`` ACME validation.
- ``TCP:443`` is used for the Bowtie controller's web interface, public HTTPS and websocket connections, and ``tls-alpn-01`` ACME validation.
- ``UDP:443`` is used for the Bowtie controller and client's Wireguard and QUIC protocol traffic.

Optional ports are:

- ``TCP:911`` is used for :ref:`controller-sos`. This port does not need to be open at all times and can be exposed ad-hoc if you are engaged in troubleshooting at a later time.
  Note that this port grants anonymous access to :ref:`controller-sos`, so you may wish to restrict access to this port to trusted networks only. Opening the port to public access does not present an inherent security risk: anonymous requests can only either begin the validation process to generate a bundle or generate bundles after an administrator has confirmed a validation token.

==========
Outbound
==========

Bowtie Controllers initiate the following outbound network connections which must be open:

- ``UDP:443``

  - These ports should be open **between Controllers** but are not required to other networks such as the public Internet.

- ``TCP:80,443/UDP:53``

  .. _acme-ports:

  - These ports must be open to the public-facing Internet so that the Controller can acquire TLS certificates via the Let's Encrypt `ACME protocol `_ if you are not managing certificates manually and receive updates from Bowtie. Updates are primarily to ensure that Controllers are kept up-to-date -- see :ref:`controller-updates` for additional information. These ports are also used when Controllers emit :ref:`telemetry payloads ` destined for Bowtie’s telemetry APIs.

=========
Summary
=========

- Inbound

  - ``TCP:443`` and ``UDP:443``

- Outbound

  - ``TCP:80,443`` and ``UDP:53,443``

- All connections ``RELATED,ESTABLISHED``
- Optional

  - Port ``TCP:80`` if you’d like typical web application HTTPS redirection behavior and ``http-01`` ACME validation. Note that you can elect to enable :ref:`hsts` to aid in pointing client browsers directly at the TLS listener endpoint.
  - Port ``TCP:22`` if you'd like the option to ``ssh`` into your Controller (and have ``ssh`` keys present through a mechanism like :ref:`cloud-init `)
  - Port ``TCP:911`` to leverage the :ref:`controller-sos` feature. You may open this port at a later time if you need to perform diagnostics after initial setup; it does not need to be open at all times to support :ref:`controller-sos` capabilities.
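As a concrete sketch of the summary above, the following shell snippet prints -- rather than applies -- a set of ``iptables`` rules matching the required ports and the established-connection state rule. This is an illustrative sketch, not an official Bowtie configuration: the helper name ``print_controller_rules`` is hypothetical, and you should review the output against your own firewall policy before applying anything.

.. code:: sh

   # Print (do not apply) iptables rules implied by the port summary.
   # Piping the output to a root shell would apply them verbatim.
   print_controller_rules() {
     echo 'iptables -A INPUT -p tcp --dport 443 -j ACCEPT'
     echo 'iptables -A INPUT -p udp --dport 443 -j ACCEPT'
     echo 'iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT'
     echo 'iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT'
     echo 'iptables -A OUTPUT -p udp -m multiport --dports 53,443 -j ACCEPT'
   }

   print_controller_rules

Optional rules for ``tcp:80``, ``tcp:22``, and ``tcp:911`` can be appended in the same fashion if you enable those features.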
.. _seeding:

Seeding Configuration
---------------------

.. admonition:: Kubernetes Seeding
   :class: tip

   On platforms like Kubernetes, consult your provider to determine how best to inject ``cloud-init`` data.

Several crucial pieces of the controller setup process -- such as :ref:`TLS ` and :doc:`sso` -- can be performed either following the creation of a new controller appliance or loaded beforehand during the bootstrap process. Any static network addresses should also be set at this installation stage. The combination of cloud-init and some specific configuration files can launch a Controller without any manual intervention necessary to bootstrap its setup. For a condensed version of how to achieve this, consult the :ref:`terraform` example code, which provides a complete sample of provisioning an AWS instance with the cloud-init data requisite for an unattended/automated installation.

You can optionally populate the file at ``/etc/restore`` to leverage :ref:`backup-restore` if you would instead prefer to rebuild a Controller from an old :ref:`backup ` snapshot.

The remainder of this section lays out the potential options to pre-seed various options and settings that comprise an automatic deployment.

.. admonition:: AWS SSM
   :class: tip

   Bowtie Amazon AMIs ship with `AWS Systems Manager Agent (SSM Agent) `_ enabled if you need to leverage SSM capabilities after system launch. Refer to :ref:`this section ` for additional information.

If you prefer to launch a controller image without seeding initial configuration data, proceed to :ref:`controller-installation`. Note that you will need to retrieve an authentication key as noted in :ref:`setup-key` from the serial console before accessing the Controller :ref:`endpoint` page.

If you prefer to launch a controller with configuration set at boot time, you may elect to populate :ref:`cloud-init ` data for specific files:
.. _cloud-init-hostname:

- Configure your Controller’s endpoint by populating its hostname using `the cloud-init hostname module `_. This informs your appliance of its final destination endpoint. For example, if you intend to operate your controller at ``https://bowtie.net.example.com``, you would enter this value into your cloud-init user data:

  .. code-block:: yaml

     #cloud-config
     fqdn: bowtie.net.example.com
     hostname: bowtie.net.example.com
     preserve_hostname: false
     prefer_fqdn_over_hostname: true

  These settings will ensure that your Controller’s fully-qualified domain name (FQDN) is written out to ``/etc/hostname`` and used as Bowtie’s network endpoint for mechanisms such as server discovery and ACME certificate provisioning if you are not providing your own certificate files.

  .. admonition:: Automatic Hostname TLS
     :class: important

     If you choose to automatically bootstrap your Controller’s hostname and certificate, please ensure that the desired hostname that you enter into cloud-init has been directed at your Controller’s public IP address to ensure that it can properly obtain a TLS certificate. For environments like EC2, you may elect to provision an elastic IP address to associate with the DNS hostname and attach to your EC2 instance at launch time, which ensures that the DNS record is preemptively directed at your new instance. Failure to do so may result in your Controller failing to set up TLS properly.

- If you would like to bypass :ref:`the initial guided setup wizard ` for a completely automated installation, create the file ``/var/lib/bowtie/skip-gui-init`` which will instruct Bowtie to skip the initial setup screen. For example, the ``write_files`` cloud-init directive can achieve this:

  .. code-block:: yaml

     #cloud-config
     write_files:
       - path: /var/lib/bowtie/skip-gui-init

.. _seed-user:

- You can also choose to pre-configure the initial administrative user at this stage.
  To do so, create a file at ``/var/lib/bowtie/init-users`` with a newline-separated list of users. Users defined in this way will be granted administrative roles. The following example shell session demonstrates how to generate the requisite lines to insert into the ``init-users`` file:

  .. code-block:: sh

     username=me@example.com
     password=$(openssl rand -hex 16)
     hash=$(echo -n $password | argon2 $(uuidgen) -i -t 3 -p 1 -m 12 -e)
     echo $username:$hash

  Retain the ``$password`` variable to use when logging in after initial startup. Insert the contents of the final ``echo`` command into the ``init-users`` file with cloud-init (replace this example value with your own):

  .. code-block:: yaml

     #cloud-config
     write_files:
       - path: /var/lib/bowtie/init-users
         content: |
           me@example.com:$argon2i$v=19$m=4096,t=3,p=1$Y2JhZmQ1ZWYtMTAyNC00NDAxLWFlNWMtMjJlMjYwNWM4OTY5$P/tqTpNHVtAjB3zQizSftOdJiTEi3PpVTcVdCx5/eVQ

.. _seed-certs:

- If you would like to provide your own certificate files for the Controller’s endpoint, place your certificate file at the path ``/etc/ssl/controller.pem`` and the corresponding key file at ``/etc/ssl/controller-key.pem``. When the Controller bootstraps its initial configuration, it will detect and load these files to serve over its HTTPS endpoint.

.. _init-sso:

- All ``.yaml`` files present in ``/etc/dex/`` will be processed to configure single sign-on. Any connector noted on `this page `_ is supported. Each file in this directory should contain a single ``connector`` from the associated Dex documentation.

  .. admonition:: SSO Access
     :class: attention

     Please take special care to restrict access to a specific **group** when configuring SSO. Failure to do so may mean that *any* user able to authenticate with any Google, GitLab, or other provider will be granted access to your private controller administrative interface.
  For example, the following ``yaml`` file could be used to configure a connector for GitLab which gates access only for members of the ``bowtienet`` group. The ``$DEX_ORIGIN`` variable is provided as a convenience and will ultimately expand to the destination endpoint of your controller.

  .. code-block:: yaml

     id: gitlab
     name: GitLab
     type: gitlab
     config:
       clientID:
       clientSecret:
       redirectURI: $DEX_ORIGIN/dex/callback
       useLoginAsID: false
       groups:
         - bowtienet

  Remember to configure the new Controller’s HTTPS endpoint as a *trusted callback* in your SSO provider of choice, or authorization flows may not function properly.

  See :doc:`sso` for additional information about how to use these YAML files and Dex to configure single sign-on.

.. _static-network:

- If your networking topology requires that a Controller be given a statically-assigned IP address, you should configure this value at boot time using cloud-init's `networking capabilities `_. Please note that network configuration can **not** be controlled via user data and should be defined via other data sources. Refer to `the cloud-init documentation section about network configuration sources `_ for additional information about your specific environment.

  For example, the `LXD documentation for cloud-init network configuration `_ provides the following illustrative snippet:

  .. code-block:: yaml

     config:
       cloud-init.network-config: |
         version: 1
         config:
           - type: physical
             name: eth1
             subnets:
               - type: static
                 ipv4: true
                 address: 10.10.101.20
                 netmask: 255.255.255.0
                 gateway: 10.10.101.1
                 control: auto
           - type: nameserver
             address: 10.10.10.254

  Reference the `cloud-init networking documentation `_ for a list of all available options. See :ref:`controller-networking` for documentation regarding networking configuration changes after first boot.

- Reverse proxy settings outlined in :ref:`controller-proxy` can be set at this time.
  For example, to raise :ref:`controller-ratelimiting` to 500 requests per remote IP per 20 seconds and to set :ref:`hsts` to three hours, include the following cloud-init directive:

  .. code-block:: yaml

     #cloud-config
     write_files:
       - path: /etc/default/caddy
         content: |
           RATELIMIT="500"
           HSTS="max-age=10800"

- You should configure a consistent :term:`site` ID for co-located Controllers by defining the ``SITE_ID`` environment variable in a file within ``/etc/bowtie-server.d/``, which populates environment variables for the ``bowtie-server`` service. The following example YAML configures this:

  .. code-block:: yaml

     #cloud-config
     write_files:
       - path: /etc/bowtie-server.d/site-id.conf
         content: |
           # Change this to a value unique to your environment.
           # You can generate UUID4 strings with the uuidgen command-line utility
           SITE_ID=11111111-2222-3333-4444-555555555555

- If you would like to configure additional users, `cloud-init supports provisioning users via the users option `_. For example, this configuration snippet adds additional trusted ssh keys for the ``root`` user:

  .. code-block:: yaml

     #cloud-config
     users:
       - name: root
         ssh_authorized_keys:
           - ssh-ed25519 myuser
         lock_passwd: true

.. _cluster-join:

- To cluster or peer Controllers into a highly-available setup as outlined in :doc:`ha-controller`, two values should be set: the ``should-join.conf`` file and the ``BOWTIE_SYNC_PSK`` environment variable. ``should-join.conf`` instructs the Controller to peer with the other Controller indicated in the configuration file, and ``BOWTIE_SYNC_PSK`` establishes a trusted key to authenticate the Controllers against each other. These two values can be configured with the following example cloud-init YAML:

  .. code-block:: yaml

     #cloud-config
     write_files:
       # Uniform value consistent across the entire cluster of Controllers.
       - path: /etc/bowtie-server.d/psk.conf
         content: |
           BOWTIE_SYNC_PSK=
       # This will be consumed by the bowtie-server service at start-up.
       - path: /var/lib/bowtie/should-join.conf
         content: |
           entrypoint = "https://your-first-controller.example.com"

  Consult :doc:`ha-controller` for complete documentation on these parameters.

- You may configure :ref:`automatic Controller updates ` at this time. Configure :ref:`unattended-updates` with a file like the following, which instructs the Controller to update hourly:

  .. code-block:: yaml

     #cloud-config
     write_files:
       - path: /etc/update-at
         content: |
           [Timer]
           OnCalendar=
           OnCalendar=hourly

Putting these settings together, you could, for example, launch an EC2 instance with this ``user-data`` that would configure the above settings (leaving aside any static networking configuration, which cannot be controlled via ``user-data``):

.. admonition:: Example Values
   :class: warning

   Note that these are example values. Please use your own configuration values that apply to your environment.

.. code-block:: yaml

   #cloud-config
   fqdn: bowtie.net.example.com
   hostname: bowtie.net.example.com
   preserve_hostname: false
   prefer_fqdn_over_hostname: true
   write_files:
     - path: /etc/default/caddy
       content: |
         RATELIMIT="500"
         HSTS="max-age=10800"
     - path: /etc/ssl/controller.pem
       content: |
         -----BEGIN CERTIFICATE-----
         MIIEB.........................................xgSng==
         -----END CERTIFICATE-----
     - path: /etc/ssl/controller-key.pem
       content: |
         -----BEGIN PRIVATE KEY-----
         MIIEvb..............................pVkglLcnghNN3suU=
         -----END PRIVATE KEY-----
     - path: /etc/dex/gitlab.yaml
       content: |
         id: gitlab
         name: GitLab
         type: gitlab
         config:
           clientID:
           clientSecret:
           redirectURI: $DEX_ORIGIN/dex/callback
           useLoginAsID: false
           groups:
             - bowtienet
     - path: /etc/bowtie-server.d/custom.conf
       content: |
         SITE_ID=11111111-2222-3333-4444-555555555555
         BOWTIE_SYNC_PSK=
     - path: /var/lib/bowtie/skip-gui-init
     - path: /var/lib/bowtie/should-join.conf
       content: |
         entrypoint = "https://your-first-controller.example.com"
     - path: /etc/update-at
       content: |
         [Timer]
         OnCalendar=
         OnCalendar=hourly
   users:
     - name: root
       ssh_authorized_keys:
         - ssh-ed25519 myuser
       lock_passwd: true

.. admonition:: Seeding Configuration
   :class: hint

   Both the controller endpoint and the SSO configuration files may be set independently, concurrently (as in this example), or left unset to be configured later in :ref:`config`.

Once your controller completes its automatic bootstrapping process, you may access your controller at its HTTPS endpoint. In this example, that endpoint would be ``https://bowtie.net.example.com``.

To proceed to use this method, continue to :ref:`controller-installation`, amending the launch process with any applicable ``user-data`` for your platform of choice. For additional documentation about settings and configuration options available on Controllers, see :doc:`operating/controller` as well as the :doc:`controller` page. The :ref:`bowtie-server` section outlines options configurable in environment variable files in ``/etc/bowtie-server.d/*`` that can be set at this time as well.

.. _cloud-init-caveats:

------------------
cloud-init Caveats
------------------

Bear in mind the following when leveraging cloud-init modules and features:

- Many login shell executable paths that you may reference for modules such as user provisioning may not be at their normal paths like ``/bin/bash``. You can instead find login shells under ``/run/current-system/sw/bin/``, such as ``/run/current-system/sw/bin/bash`` for ``bash`` and ``/run/current-system/sw/bin/zsh`` for ``zsh``.
- ``sudo`` rules may be subject to the configuration management changes on Controllers and may not persist across updates or changes. If a non-root user should have ``sudo`` privileges, add them to the ``wheel`` group, which has ``sudo`` access.

.. _controller-installation:

Installation
============

Two types of Controller installations are supported: either :ref:`cloud or virtual-machine based ` or :ref:`on Kubernetes `.
.. _cloud-install:

Cloud or Virtual Machine Installation
-------------------------------------

One or more :doc:`Controllers ` will coordinate membership on your Bowtie network, access policies, and more. At least one Controller is necessary for Clients to use as their network entry point, but we suggest running more than one for high availability. If you’re just getting started, proving out your network topology with one Controller to begin with is perfectly fine.

.. _cloudinit:

.. admonition:: cloud-init
   :class: hint

   All Bowtie Controller appliance images ship with `cloud-init `_ configured and available for use. Note that, due to the nature of the controller image as an appliance, not all features of ``cloud-init`` are available for use. We recommend using an associated cloud-init user data script primarily for :ref:`seeding` for automated provisioning or `provisioning ssh keys `_ so that you may access running appliances in case a need arises for system-level access for debugging purposes.

   Note that leveraging ``cloud-init`` is optional and not required during the Controller setup process, but installing ssh keys now can ease debugging steps later. In particular, performing low-level maintenance as outlined in :ref:`controller-sysadmin` that requires ``ssh`` access requires pre-loaded keys.

We provide appliance images to deploy in your environment. These are self-contained virtual machine or container images that bundle everything necessary for a functional Bowtie Controller either as an operating system disk or OCI-compliant container. Choose the image suitable for your environment: :ref:`aws`, :ref:`gce`, :ref:`proxmox`, :ref:`qcow` (for run-times like ``qemu`` and KVM), :ref:`xen-controller`, :ref:`vmware`, or :ref:`hyperv`.

If you manage your environment with Terraform and would like to deploy a Controller in the same way, consult the :ref:`Terraform` section.

.. include:: partials/cloud-disk-images.rst

.. include:: partials/compression.rst
.. _aws:

---
AWS
---

Bowtie controller appliances are provided as `AMIs `_ in several regions including `AWS GovCloud `_. We also provide raw disk images suitable for deployment into restricted environments -- follow the instructions for :ref:`ami-import` for this approach. The following instructions continue using the native AMI ID method.

Select the "AMIs" menu item from the EC2 sidebar under the "Images" menu:

.. image:: _static/ami-menu.png

Select "Public images" from the AMIs menu, then search for images owned by account ``055761336000`` (the AWS account ID for Bowtie). If deploying within an AWS GovCloud region, use the Bowtie AWS account ID ``055769116956``. Bowtie maintains one latest AMI at any given time, so you should be presented with one choice.

.. image:: _static/ami-select.png

Finally, select "Launch instance from AMI" to perform any requisite configuration steps to launch an instance from the AMI in your EC2 environment.

Many different settings are presented when launching an EC2 instance -- if you’re a network administrator, then you may already have pre-existing security groups and private networks that satisfy the criteria outlined in :ref:`preflight`. In general, use the following guidance when selecting from the options presented on the “Launch instance” page:

- Instance size should be suitably large to avoid under-powered performance -- we suggest, at a minimum, no fewer than 2 cores and no less than 4GB of memory.
- Network communication must be open between each Bowtie controller and Bowtie clients installed on edge network devices like employee workstations. At a minimum:

  - Open the ports indicated under :ref:`ports`
  - Ensure your Controller is installed on a subnet that has access to the Internet and can reach any private resources you intend to make accessible over your private Bowtie network

- Remember to allocate a public network address for the instance
- Optionally populate the “User data” field if you elect to use :ref:`seeding`
_ssm-agent: - AWS images are bundled with `AWS Systems Manager Agent (SSM Agent) `_ for convenience. You may leverage any of the capabilities of SSM Agent for your Controller, such as `ssh access `_. Remember to `launch your instance with an appropriate instance profile `_ if you elect to use this capability. Once the instance has launched successfully, proceed to :ref:`config`. .. _ami-import: ====================== Raw AMI Image Import ====================== In some cases, you may wish to import raw ``.vhd`` files directly, such as when deploying into restricted environments or regions. To do so, download the ``.vhd`` file built for AWS, import it as a new snapshot, and then register an AMI from that snapshot. Navigate to the `meta control plane downloads page for Amazon package artifacts `_ and download the disk image. Once you have the raw image file, check the download’s ``sha256`` checksum against the value listed on the meta control plane download page to verify the file’s integrity: .. code:: sha256sum *.vhd Upload this ``.vhd`` file into an S3 bucket in the same account in which you would like to create the new AMI. Once the S3 object is ready, follow the AWS documentation to `import a disk as a snapshot `_, which will create a new snapshot in your account based upon the contents of the ``.vhd`` file in S3. When defining your ``containers.json`` file, use ``vhd`` as the disk image format. After the snapshot has been prepared, follow the official AWS documentation to `create a Linux AMI from a snapshot `_.
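The snapshot-import step above can be sketched with a concrete ``containers.json``. This is a sketch: the bucket name, object key, and description below are placeholders for your own values. The file is consumed by ``aws ec2 import-snapshot --disk-container file://containers.json`` after you upload the image with ``aws s3 cp``:

.. code-block:: shell

   # containers.json tells EC2 where the uploaded .vhd lives and its format.
   # "my-import-bucket" and the S3 key are placeholders -- substitute yours.
   cat > containers.json <<'EOF'
   {
     "Description": "Bowtie Controller import",
     "Format": "vhd",
     "UserBucket": {
       "S3Bucket": "my-import-bucket",
       "S3Key": "bowtie-controller.vhd"
     }
   }
   EOF

   # Sanity-check that the JSON parses before handing it to the AWS CLI.
   python3 -m json.tool containers.json > /dev/null && echo "containers.json OK"

You can then monitor the resulting task with ``aws ec2 describe-import-snapshot-tasks`` until the snapshot becomes available.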
Use the following values as appropriate when configuring the import command from either the console or the AWS CLI: - Architecture: ``x86_64`` - Root device name: ``/dev/xvda`` - Boot mode: ``legacy-bios`` - Virtualization type: ``hvm`` - SR IOV net support: ``simple`` - ENA support: ``true`` - Additional block device mappings: - ``DeviceName=/dev/sdb,VirtualName=ephemeral0`` - ``DeviceName=/dev/sdc,VirtualName=ephemeral1`` - ``DeviceName=/dev/sdd,VirtualName=ephemeral2`` - ``DeviceName=/dev/sde,VirtualName=ephemeral3`` Once your AMI has finished creating, follow the original steps to :ref:`create an instance from an AWS AMI ` using the new AMI ID. .. _azure: ------- Azure ------- Controller images are provided as community-shared `Azure Compute Gallery Images `_ in the Azure public cloud as well as Azure GovCloud. .. admonition:: Azure GovCloud :class: hint Azure GovCloud images cannot be shared as a `Community Gallery `_ and so need to be shared with individual tenants. If your Controller should be deployed into an Azure GovCloud region, please reach out to a Bowtie representative who will provide you with the necessary app registration information to access GovCloud Controller images. Alternatively, you may refer to :ref:`azure-import` if you are unable to create new instances from shared images and wish to import raw disk images instead. The following steps demonstrate how to use the former (shared images): To provision a new Controller, launch a new virtual machine using the latest Gallery image. To begin, start the process to create a new virtual machine: .. image:: _static/azure-create-vm.png Select "Azure Virtual Machine" to proceed to the virtual machine configuration setup screen: .. image:: _static/azure-select-create-vm.png From this page under the "Image" section, select "See all images" to be taken to the image selection screen for Marketplace and community images. .. 
image:: _static/azure-all-images.png From this page, select "Community Images" under "Other Items". .. image:: _static/azure-community-images.png Use the filters for community images to find the latest Bowtie Controller image. Click "Image Name" and enter ``controller``, then click "Public gallery name" and enter ``bowtie`` to filter the available candidates. The selection interface will display the latest Controller image available for all published regions. .. image:: _static/azure-community-filter.png .. admonition:: Community Gallery ID :class: note To fully qualify the community gallery name and avoid any possibility of ambiguity when selecting a gallery image, you may filter for the gallery name ``bowtiecontroller-ee2d3cce-b6ae-4d47-a93b-b88591ea9108``. All gallery images should originate from the Bowtie Works, Inc. tenant ID ``9f6fcbb0-a706-4c3d-bf31-52ee2ee6273c``. Select the ``controller`` image for your desired region to proceed. After choosing the gallery image, continue with the VM configuration process. Please refer to the :ref:`controller requirements section ` when configuring machine resources and network controls such as inbound port access. Bear in mind the following when deploying a Bowtie Controller into Azure: - The native Azure Linux provisioning daemon, `WALinuxAgent `_, runs on Controller appliances in Azure to conform to Azure’s expectation that Virtual Machines check in with operational events such as the completion of the boot-up process. However, Bowtie Controllers rely on the provisioning capabilities of :ref:`cloud-init ` to configure resources like files and hostname, so provisioning fields such as the username and public SSH key will **not** be handled by ``WALinuxAgent``, which defers provisioning to ``cloud-init``.
If you would like to create users or load an initial ``ssh`` key onto the host, please rely on :ref:`cloud-init ` rather than the username and ssh public key fields when building your Virtual Machine (or ``az vm`` flags like ``--ssh-key-values`` or ``--generate-ssh-keys``). - Due to the limited scope of the Bowtie Controller as a network appliance, many WALinuxAgent extensions may not function as expected. We suggest logging into the appliance via ``ssh`` keys populated with :ref:`cloud-init ` if a host requires debugging. - We currently publish into a limited set of regions during this preview stage; please reach out to a representative if you operate in a region without images available. - The normal :ref:`cloud-init-caveats` apply. After you have configured and launched the Controller appliance, proceed to :ref:`config`. .. _azure-import: ========================= Azure Disk Image Import ========================= For environments that may not have access to Bowtie Controller images, you may instead download the raw ``.vhd`` disk image files, which can be imported as new images from which to create Bowtie Controller virtual machines. Navigate to the `package download page for Azure `_ to find a listing of all downloadable disk image files compatible with Azure. After downloading the latest version, confirm the file’s integrity by comparing its checksum against the checksum listed on the download page: .. code:: sha256sum *.vhd Upload this validated ``.vhd`` file as a storage blob into a storage account container from which you intend to restore the image. Once it has completed uploading, navigate to the ``Images`` section of the Azure console: .. image:: _static/azure-image-link.png Select "Create" on the top menu to create a new image: .. 
image:: _static/azure-image-create.png Fill out the relevant fields for your deployment like instance ``Name``, and under the "OS Disk" section, choose the following settings: - OS type: ``Linux`` - VM generation: ``Gen 1`` - Storage blob - Select ``Browse`` and navigate to the ``.vhd`` previously uploaded to your storage container and select it. .. image:: _static/azure-image-settings.png Once you are satisfied with your new image's settings, proceed to "Review + Create". After the image creation process finishes, you may follow the steps outlined in the :ref:`Azure setup ` and substitute your custom image name instead of choosing "See all images" at that step. .. _gce: ----- GCE ----- Publicly-available Google Compute Engine images are published for use from the ``bowtie`` GCP project. Because `GCP does not list public images `_, you will need to acquire the image resource ID to launch a new instance based upon this custom image. The `meta control plane downloads page for GCP `_ lists these image IDs. To deploy within the GCP console directly, follow the :ref:`GCE Disk Image Import ` instructions below to import the image and then create your instance. If opting to use the ``gcloud`` CLI, pass the ``image=projects/bowtie-works/global/images/$ID`` parameter when defining disks in order to create an instance based on the publicly-available Controller image. For example: .. code:: gcloud compute instances create $your_instance_name \ <...other parameters...> \ image=projects/bowtie-works/global/images/$bowtie_controller_name Reference :ref:`controller-requirements` when choosing options like instance size and network access controls. Proceed to :ref:`config` once instance creation completes. .. _gce-import: ======================= GCE Disk Image Import ======================= Bowtie also provides a Google Cloud Platform disk image that may be imported into your project and used to create new instances in cases where native compute images may not be suitable. 
To acquire the disk image, follow the instructions at :doc:`bowtie-get`. You may also browse GCP images directly `here `_. Once the download has finished, you should have a local file ending in ``.raw.tar.gz`` ready to use with the ``gcloud`` command-line tool. First, complete any necessary steps to `install `_ and `configure `_ the ``gcloud`` command-line utility. Once ``gcloud`` has been installed and configured, import the Bowtie controller appliance image in two steps: 1. Copy the disk image to Google Cloud Storage (GCS). Replace the filename below with the name of the file that you acquired from the `download page `_ and ```` with a bucket in your GCP project: .. code-block:: shell gcloud alpha storage cp \ bowtie_controller-gce-r@66c7fe9_d@a298975_i@8g369as.raw.tar.gz \ gs:// 2. Then create a GCE image from the GCS object, replacing the bucket name and object name as necessary for your project's bucket and uploaded image file. You may choose any name for the image that ``gcloud`` will create; in this example, the desired name is ``bowtie-controller``: .. code-block:: shell gcloud compute images create bowtie-controller \ --source-uri gs:/// Finally, navigate to the GCE section of the GCP console, select the image you created, and create a new GCE instance following the requisite guidelines for :ref:`networking`, :ref:`ports`, and :ref:`seeding`. .. _proxmox: --------- Proxmox --------- Bowtie provides a Proxmox disk image that may be imported into your environment and used to create new virtual machines. To acquire the disk image, follow the instructions at :doc:`bowtie-get`. Once finished, you should have a local file ending in ``.vma.zst`` ready to use with Proxmox. Remember that :ref:`cloud-init is available ` at launch time if you would like to provision any ssh keys for later command-line access using a user-data value.
For example, the following steps will bootstrap a Proxmox environment in preparation for launching a Controller configured via :ref:`cloud-init `. First, define any relevant ``cloud-init`` user-data with desired settings in a file like ``/var/lib/vz/snippets/user.yaml``: .. code-block:: yaml #cloud-config # Replace these values with your desired hostname endpoint hostname: controller-beta.net.example.com fqdn: controller-beta.net.example.com preserve_hostname: false prefer_fqdn_over_hostname: true users: - name: root ssh_authorized_keys: # Replace this with your actual key - ssh-ed25519 myuser lock_passwd: true write_files: # Perform Controller bootstrapping without setup wizard - path: /var/lib/bowtie/skip-gui-init # Configure the server with environment variables - path: /etc/bowtie-server.d/custom.conf content: | # Replace these values with your actual Site ID and pre-shared key. SITE_ID=11111111-2222-3333-4444-555555555555 BOWTIE_SYNC_PSK= # Cluster with another Bowtie Controller - path: /var/lib/bowtie/should-join.conf content: | entrypoint="https://controller-alpha.net.example.com" # Configure single sign-on - path: /etc/dex/sso.yaml content: | type: oidc id: keycloak name: Organization SSO config: issuer: https://auth.example.com/realms/my_organization clientID: my_organization clientSecret: redirectURI: https://controller-beta.net.example.com/dex/callback If your environment expects statically-configured network addresses, you may configure the Controller as outlined in :ref:`static networking `. Create the ``cloud-init`` YAML in a path like ``/var/lib/vz/snippets/net.yaml``: .. code-block:: yaml version: 2 ethernets: en*: addresses: - 10.100.1.100/16 gateway4: 10.100.1.1 nameservers: addresses: [10.100.1.53] Define the local content using ``pvesm``: .. code:: sh pvesm set local --content images,rootdir,vztmpl,backup,iso,snippets Create a new virtual machine with ID 100 (choose a free ID suitable for your environment): .. 
code:: sh qmrestore 100 --unique Finally, define the ``cloud-init`` data for the new virtual machine: .. code:: sh qm set 100 --sata0 local:cloudinit qm set 100 --cicustom "network=local:snippets/net.yaml,user=local:snippets/user.yaml" You may now start the virtual machine, which will honor the settings defined in ``user.yaml`` and ``net.yaml``. .. _qcow: ------- qcow2 ------- Bowtie provides two variants of ``qcow2`` images: one with a legacy BIOS boot loader and one that boots via EFI. Follow the instructions at :doc:`bowtie-get` to acquire a ``qcow2`` image suitable for use in environments such as KVM and libvirt by using the `downloads page `_. BIOS-based images use ``qcow`` file names, while EFI-based images use ``qcow-efi``. Once downloaded, use the disk image in your hypervisor manager of choice, such as `virt-manager `_ or `virsh `_. Once the virtual machine has been created and is running, proceed to :ref:`config`. .. _xen-controller: ----- Xen ----- Bowtie Xen images are compatible with hypervisors that rely on `Xen `_ as their underlying virtualization technology. This includes platforms like `XCP-ng `_ or vanilla Xen installations. Images are provided in ``.ova`` format. See the `Xen downloads page `_ for a list of the latest Xen ``.ova`` image files. After downloading the disk image and verifying its checksum to confirm its integrity, import the ``.ova`` to create a new Controller instance. The ``.ova`` file includes recommended allocation defaults for resources like CPU count and memory size, which you may tune according to :ref:`controller-requirements`. Refer to your platform’s documentation for instructions on how to import disk image ``.ova`` files. Once the virtual machine has been created and is running, proceed to :ref:`config`. .. _vmware: -------- VMWare -------- Follow the instructions to acquire a VMWare image on the :doc:`bowtie-get` page.
Once you've downloaded a ``.vmdk`` or ``.ova`` file, create a new virtual machine from the image. Depending on your environment, you may need to choose a specific format: - For workstations, most users should choose a ``.vmdk`` image to import. - For enterprise environments that usually run platforms like ESXi, the ``.ova`` image is compatible. Proceed to :ref:`config` once the virtual machine is running. .. _hyperv: --------- Hyper-V --------- Follow :doc:`bowtie-get` where you can download a file ending with ``.vhdx`` suitable for use with Hyper-V. Once you've created a new virtual machine from the ``.vhdx`` file, you can proceed to :ref:`config`. .. _incus-controller: ----- Incus ----- `Incus `_ (and compatible systems like `LXD `_) is supported via `unified tarballs `_ that bundle a virtual machine disk image alongside a metadata file. Creating *container* instances based on file trees rather than ``qcow``-based disk images is currently unsupported. See the `Incus and LXD downloads page `_ for a list of the latest Incus tarball files. Import the unified tarball image with an appropriate ``import`` command, such as:: incus image import controller.tar.zst --alias bowtie-controller After the image is available, proceed to create a virtual machine instance with your deployment's relevant network settings. Once the virtual machine has been created and is running, proceed to :ref:`config`. .. _kubernetes-install: Kubernetes Installation ----------------------- .. admonition:: Kubernetes Support :class: warning Early Bowtie Controller Kubernetes support was added in the form of a Helm chart that relied on custom kernel modules being present on underlying hosts. This method, while functional as a baseline, lacked feature parity with full appliance-based deployments, and so we have deprecated this installation method pending a new strategy.
If your environment requires Kubernetes support, we suggest considering `KubeVirt `_ paired with a Controller build such as :ref:`qcow`. Future updates to this platform will likely take the form of a Helm chart paired with a KubeVirt virtual machine resource. .. _config: Configuration ============= .. todo:: I have no idea how this part works with a Kubernetes install, because: #. It hard-crashes with ``Caddy`` API access, which isn't bundled with the Helm Chart, so #. I bootstrap ``/var/lib/bowtie`` to bypass ``/setup`` and none of this runs Once you've started one or more Bowtie :doc:`controllers `, it's time to configure them! .. _naming: A mature Bowtie installation will include several controllers. In that shape, they should all share a parent domain that is *not* your organization's bare domain. If you are ``example.com`` your control plane could be ``net.example.com`` and your controllers could be ``controller1.net.example.com``, ``controller2.net.example.com``, etc. We would like you to start thinking of individual controllers as impermanent and replaceable, so you might even prefer randomized names, or names based on private IPs. ``cp-172-0-0-1.net.example.com`` is a fine name for a controller. If this is your first controller or a demonstration, any name will do. ``bowtie-demo.example.com`` can be sufficient, and it can serve as the primary entry point and the only controller for a while. .. admonition:: Post-setup Configuration :class: note The settings mentioned here apply primarily to initial setup. If you’d like to configure more fine-grained settings like :ref:`reverse proxy settings ` or :ref:`log retention `, please refer to :doc:`operating/controller`. .. _setup-key: Setup Key --------- Browse to your Controller’s public IP address if you haven’t set up a DNS record for the host. If you've configured a DNS record to point at the public IP address of the new Controller, use that hostname instead -- either approach is supported. 
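Before browsing to the Controller, it can help to confirm that your chosen name resolves to the address you expect. A small sketch follows; ``localhost`` stands in for your real hostname (for example ``controller1.net.example.com``) so that the command runs anywhere:

.. code-block:: shell

   # Resolve the hostname and print the first address; compare the result
   # against the Controller's allocated public IP before browsing to it.
   getent hosts localhost | awk '{print $1}' | head -n 1

If the answer does not match the public IP you allocated, revisit your DNS record before continuing with setup.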
If you’re :ref:`deploying on Kubernetes `, navigate to the ``endpoint`` that you've configured. .. admonition:: Setup HTTPS :class: attention Both HTTP and HTTPS listeners are configured during initial setup. If possible, we recommend relying on HTTPS to ensure configuration options remain encrypted. Because unconfigured Controllers lack valid certificates, a self-signed certificate may be used to terminate HTTPS during initial setup. The setup instructions emitted on the serial console and in ``bowtie-server.service`` logs include self-signed certificate fingerprints you may use to validate connection authenticity. If you :ref:`provide your own certificates ` during initial host creation, your Controller will consume and terminate TLS with the certificate and private key you provide for all inbound HTTPS requests. The setup wizard will pre-populate this certificate information for you, but other settings will still need to be configured. :ref:`endpoint` access is protected by authentication to ensure that only authorized administrators can initiate the setup phase. This authentication key is emitted by the Controller over multiple channels accessible only to administrators. After initial boot and periodically afterwards, the setup authentication key is emitted: - On all serial consoles - In ``bowtie-server.service`` logs On most hypervisors and cloud platforms, the serial console should be viewable so that you can retrieve the key; the key is derived from an English word list so that it can be transcribed by hand when serial text output cannot easily be copied. After obtaining the setup key, use it to unlock the :ref:`endpoint` page: .. image:: _static/setup-auth.png Authentication will be retained as part of your browser session. .. _endpoint: Initial Setup ------------- The initial setup landing page will load: .. image:: _static/setup.png Use this page to perform initial Controller configuration.
Each section of the initial setup page should be self-explanatory, but we include information here for additional clarity. -------------------- Setup Organization -------------------- Use this section to inform your new Controller about your organization. .. _org-tls: ----------- Setup TLS ----------- Two operating modes are supported: either bring your own hostname, or use an automatically-provisioned hostname served under a Bowtie domain for convenience. As long as the Controller IP address is publicly reachable, you may rely on a dynamic ``bowtie.direct`` name for ad-hoc configuration without any need to configure DNS manually. If you prefer your own DNS hostname, please ensure that the desired name is directed at the Controller’s public address and fill out the fields for domain and Controller name as necessary. .. admonition:: ``bowtie.direct`` :class: note Without a custom domain, the ``bowtie.direct`` domain is provided as a convenience to facilitate TLS without creating your own DNS records and to avoid the TLS errors that accompany plain public IP addresses. Bowtie provides a hosted API that negotiates and provisions ``bowtie.direct`` names automatically if you do not configure your own DNS records. This service is optional and does not require any additional configuration if you decide to use this approach. In either case, you *must* configure TLS to proceed because the remainder of the setup sequence configures settings that should be secured via HTTPS encryption. You may rely on either the default self-signed certificate or your own hostname and certificate to change these fields over HTTPS. Controllers support automatic TLS via the ACME protocol as long as :ref:`the requisite ports are open ` and you :ref:`provide the Controller’s endpoint `.
In the case of self-managed bring-your-own certificates, you may use the setup page to upload them to the Controller, :ref:`load them at boot time with cloud-init `, or copy your TLS key and certificate to ``/etc/ssl/controller-key.pem`` and ``/etc/ssl/controller.pem`` after initial boot. System services will watch for the appearance of these files, so no additional steps are necessary to load them. Once you've defined a hostname and chosen the TLS acquisition method, establish HTTPS before moving on to the next steps: .. image:: _static/setup-hostname.png The web application will perform any steps necessary to provision a domain, TLS, and other hostname settings before transitioning to an HTTPS connection. The remainder of the form will appear once a TLS connection is active. .. admonition:: Set Re-authentication :class: note You may be prompted to re-enter your initial setup key before proceeding. ------------ First User ------------ Follow the instructions to create the first user for this Controller. Note that: - Password strength requirements are enforced - The first user is granted superuser privileges - :ref:`sso` may be set up afterward .. admonition:: Pre-provisioning the first user :class: tip If you prefer to perform this step as part of your infrastructure automation, refer to :ref:`the section on pre-seeding users with cloud-init `. ----------- Site Name ----------- Follow the indicated instructions to establish a **Site Name**. ---------------- Address Ranges ---------------- Your Controller will attempt to auto-detect a network range suitable for initial configuration. Confirm that the pre-populated address range correctly reflects the **private network range** that this Controller is intended to serve. For example, if you launched a Controller into an AWS VPC with the range ``172.128.0.0/16``, enter this CIDR notation to indicate that the Controller is capable of routing traffic for addresses that fall within the range.
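If you are unsure whether a given address falls inside the range you plan to enter, Python's standard ``ipaddress`` module offers a quick check; the addresses below reuse the VPC example above:

.. code-block:: shell

   # Prints True when 172.128.4.20 falls within 172.128.0.0/16.
   python3 -c 'import ipaddress; print(ipaddress.ip_address("172.128.4.20") in ipaddress.ip_network("172.128.0.0/16"))'

Run the same check against a few representative private addresses to confirm the CIDR you enter covers everything the Controller should route.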
------------------- Save and Validate ------------------- The **Save and Validate** button will confirm that all fields are valid and bootstrap your new Controller. After clicking **Save and Validate**, you’ll be taken to your Controller’s login page. Log in with the user credentials you entered on the Setup page to arrive at a landing page with additional configuration instructions. Congratulations! Your Controller is configured and ready for use. If you haven’t already, you may want to set up :ref:`SSO` now. Otherwise, continue to either :doc:`ha-controller` to set up additional Controllers, or proceed directly to :doc:`setup-client` to start connecting Clients to your private network. .. _sso: SSO --- You may load SSO configuration files in one of two ways: - At system boot time via :ref:`cloud-init `. See the item about :ref:`configuring SSO files ` to define these files when launching the Controller image. - Alternatively, you may load these files after initial launch. Reference :ref:`sso-config` for more information. This approach is suitable for cases such as using ``scp`` to copy over configuration files. Note that with most SSO providers, you will likely need to add your controller's HTTPS endpoint as an **authorized callback** URL when configuring a new SSO application for your Bowtie controller. For example, when creating a new SSO OAuth application in a provider such as GitHub or GitLab, assuming that your controller's endpoint is ``https://controller.net.example.com``, use the following callback URL: .. code:: none https://controller.net.example.com/dex/callback If you need additional help or more information, please refer to :doc:`sso` for complete documentation about configuring SSO for your Controller. .. admonition:: SSO Access :class: attention Please take special care to restrict access to a specific **group** when configuring SSO.
Failure to do so may mean that *any* user able to authenticate with any Google, GitLab, or other provider will be granted access to your private controller administrative interface. Once complete, you may authenticate using the chosen provider for various endpoints, including: - The administrative console at the root of the controller's HTTPS endpoint. - The instrumentation tools at ``/grafana`` Scrape Configuration Files -------------------------- If your deployment requires additional Prometheus scraper configuration files, you may stage those YAML files here for installation upon setup finalization. The contents of files listed here should adhere to the Prometheus ``scrape_config`` `schema `_ and will be collected by the ``scrape_config_files`` setting. Summary ======= After completing these steps, your Controller should be operational! Congratulations! You should proceed to one of the following: - :doc:`Client setup ` to let end users join the private network, or - :doc:`ha-controller` to set up additional Controllers in a highly available configuration. The steps for :doc:`ha-controller` largely follow the patterns in this setup documentation with a few small changes to initiate clustering. - Configure :ref:`backup-and-restore` to safeguard your data to a location like S3 or ``minio``. :ref:`controller-sos` is available in the event that you need additional support to troubleshoot any lingering issues.
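As a reference for the Scrape Configuration Files section above, a minimal file adhering to the ``scrape_config`` schema might look like the following sketch. The job name and target address are placeholders, and files consumed via ``scrape_config_files`` typically wrap their entries in a top-level ``scrape_configs`` key:

.. code-block:: yaml

   scrape_configs:
     # Hypothetical job scraping a metrics endpoint on a private address.
     - job_name: custom-app
       metrics_path: /metrics
       static_configs:
         - targets: ["10.100.1.20:9100"]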