Overview

Quilkin is a UDP proxy, specifically designed for use with multiplayer dedicated game servers.

What is Quilkin?

Quilkin is an open source, non-transparent UDP proxy specifically designed for use with large-scale multiplayer dedicated game server deployments, providing security, access control, telemetry data, metrics and more.

It is designed to be used behind game clients as well as in front of dedicated game servers.

Quilkin's aim is to pull the above functionality out of bespoke, monolithic dedicated game servers and clients, and provide standard, composable modules that can be reused across a wide set of multiplayer games, so that game developers can instead focus on the game-specific aspects of building a multiplayer game.

Why use Quilkin?

Some of Quilkin's advantages:

  • Lower development and operational costs for securing, monitoring and making reliable multiplayer game servers and their communications.
  • Provide entry-point redundancy for your game clients to connect to - making it much harder to take down your game servers.
  • Multiple integration patterns, allowing you to choose the level of integration that makes sense for your architecture.
  • Move non-game-specific computation out of your game server's processing loop - and save that precious CPU for your game simulation!

Major Features

Quilkin incorporates these abilities:

  • Non-transparent proxying of UDP data, so the internal state of your game architecture is not visible to bad actors.
  • Out of the box metrics for UDP packet information.
  • Composable tools for access control and security.
  • Can be used as a standalone binary, with no client/server changes required, or as a Rust library, depending on how deep an integration you want for your system.
  • Can be integrated with C/C++ code bases via FFI.

What Next?

Quickstart: Quilkin with netcat

Requirements

1. Start a UDP echo service

So that we have a target for sending UDP packets to, let's use ncat to create a simple UDP echo process.

To do this run:

ncat -e $(which cat) -k -u -l 8000

This routes all UDP packets that ncat receives to the local cat process, which echoes them back.

2. Start Quilkin

Next, let's configure Quilkin with a static configuration that points at the UDP echo service we just started.

Open a new terminal and copy the following to a file named proxy.yaml:

version: v1alpha1
static:
  endpoints:
    - address: 127.0.0.1:8000

This configuration will start Quilkin on the default port of 7000, and it will redirect all incoming UDP traffic to a single endpoint of 127.0.0.1, port 8000.

Let's start Quilkin with the above configuration:

quilkin run --config proxy.yaml

You should see an output like the following:

$ quilkin run --config proxy.yaml
{"msg":"Starting Quilkin","level":"INFO","ts":"2021-04-25T19:27:22.535174615-07:00","source":"run","version":"0.1.0-dev"}
{"msg":"Starting","level":"INFO","ts":"2021-04-25T19:27:22.535315827-07:00","source":"server::Server","port":7000}
{"msg":"Starting admin endpoint","level":"INFO","ts":"2021-04-25T19:27:22.535550572-07:00","source":"proxy::Admin","address":"[::]:9091"}

3. Send a packet!

In (yet 😃) another shell, let's use netcat to send a UDP packet.

Run the following to connect netcat to Quilkin's receiving port of 7000 via UDP (-u):

nc -u 127.0.0.1 7000

Type the word "test" and hit enter; you should see it echoed back to you like so:

nc -u 127.0.0.1 7000
test
test

Feel free to send even more packets, as many as you would like 👍.

Congratulations! You have successfully routed a UDP packet to a server and back again with Quilkin!

What's next?

Quickstart: Quilkin with Agones and Xonotic

Requirements

1. Agones Fleet with Quilkin

In this step, we're going to set up a Xonotic dedicated game server, with Quilkin running as a sidecar, which will give us access to all the metrics that Quilkin provides.

kubectl apply -f https://raw.githubusercontent.com/googleforgames/quilkin/main/examples/agones-xonotic/sidecar.yaml

This applies two resources to your cluster:

  1. A Kubernetes ConfigMap with a basic Quilkin static configuration.
  2. An Agones Fleet specification with Quilkin running as a sidecar to Xonotic, such that it can process all the UDP traffic and pass it to the Xonotic dedicated game server.

Now you can run kubectl get gameservers until all your Agones GameServers are marked as Ready like so:

$ kubectl get gameservers
NAME                          STATE   ADDRESS         PORT   NODE                                    AGE
xonotic-sidecar-htc2x-84mzm   Ready   34.94.107.201   7533   gke-agones-default-pool-0f7d8adc-7w3c   7m25s
xonotic-sidecar-htc2x-sdp4k   Ready   34.94.107.201   7599   gke-agones-default-pool-0f7d8adc-7w3c   7m25s

2. Play Xonotic!

Usually with Agones you would Allocate a GameServer, but we'll skip this step for this example.

Choose one of the listed GameServers from the previous step, and connect to the IP and port of the Xonotic server via the "Multiplayer > Address" field in the Xonotic client in the format of {IP}:{PORT}.

You should now be playing a game of Xonotic against 4 bots!

3. Check out the metrics

Let's take a look at some metrics that Quilkin outputs.

Grab the name of the GameServer you connected to earlier, replace the ${gameserver} value below, and run the command. This will forward the admin interface to localhost.

kubectl port-forward ${gameserver} 9091

Then open a browser to http://localhost:9091/metrics to see the Prometheus metrics that Quilkin exports.

4. Cleanup

Run the following to delete the Fleet and the accompanying ConfigMap:

kubectl delete -f https://raw.githubusercontent.com/googleforgames/quilkin/main/examples/agones-xonotic/sidecar.yaml

5. Agones Fleet, but with Compression

Let's take this one step further and compress the data between the Xonotic client and the server, without having to change either of them!

Let's create a new Xonotic Fleet on our Agones cluster, but this time configured so that Quilkin will decompress incoming packets.

Run the following:

kubectl apply -f https://raw.githubusercontent.com/googleforgames/quilkin/main/examples/agones-xonotic/sidecar-compress.yaml

This enables the Compress filter in the Quilkin sidecar proxy in our new Fleet.

Now you can run kubectl get gameservers until all your Agones GameServers are marked as Ready like so:

$ kubectl get gameservers
NAME                                   STATE   ADDRESS         PORT   NODE                                    AGE
xonotic-sidecar-compress-htc2x-84mzm   Ready   34.94.107.201   7534   gke-agones-default-pool-0f7d8adc-7w3c   7m25s
xonotic-sidecar-compress-htc2x-sdp4k   Ready   34.94.107.201   7592   gke-agones-default-pool-0f7d8adc-7w3c   7m25s

6. Play Xonotic, through Quilkin

In this step, we will run Quilkin locally as a client-side proxy to compress the UDP data before it is sent up to our Xonotic servers, which are expecting compressed data.

First, grab a copy of the Quilkin configuration client-compress.yaml locally. This has the Compress filter already configured, but we need to fill in the address to connect to.

Rather than editing a file, this could also be sent through the xDS API, but it is easier to demonstrate this functionality through a static configuration.

Instead of connecting Xonotic directly, take the IP and port from one of the Agones hosted GameServer records, and replace the ${GAMESERVER_IP} and ${GAMESERVER_PORT} values in your copy of client-compress.yaml.

Run this configuration locally as:

quilkin run -c ./client-compress.yaml

Now we can connect to the local client proxy on "127.0.0.1:7000" via the "Multiplayer > Address" field in the Xonotic client, and Quilkin will take care of compressing the data for you without having to change the game client!

Congratulations! You are now using Quilkin to manipulate the game client to server connection, without having to edit either!

7. Cleanup

Run the following to delete the Fleet and the accompanying ConfigMap:

kubectl delete -f https://raw.githubusercontent.com/googleforgames/quilkin/main/examples/agones-xonotic/sidecar-compress.yaml

What's Next?

Using Quilkin

There are two choices for running Quilkin:

  • Binary
  • Container image

For each version there is both a release version, which is optimised for production usage, and a debug version that has debug level logging enabled.

Binary

The release binary can be downloaded from the GitHub releases page.

Quilkin needs to be run with an accompanying configuration file, like so:

quilkin run --config="configuration.yaml"

To view debug output, run the same command with the quilkin-debug binary.

You can also use the shorthand of -c instead of --config if you so desire.

Container Image

For each release, both a release and a debug container image are built and hosted on Google Cloud Artifact Registry.

The production release can be found under the tag:

us-docker.pkg.dev/quilkin/release/quilkin:{version}

Whereas, if you need debug logging, use the following tag:

us-docker.pkg.dev/quilkin/release/quilkin:{version}-debug

Mount your configuration file at /etc/quilkin/quilkin.yaml to configure the Quilkin instance inside the container.

A default configuration is provided, so the container will start without a custom configuration file, but it points to 127.0.0.1:0 as a no-op configuration.
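Such a no-op configuration might look like the following sketch (illustrative; the exact bundled default may differ):

```yaml
version: v1alpha1
static:
  endpoints:
    - address: 127.0.0.1:0 # no-op target; replace by mounting your own quilkin.yaml
```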

What's next?

Examples

See the examples folder on GitHub for configuration and usage examples.

Depending on which release version of Quilkin you are using, you may need to choose the appropriate release tag from the dropdown, as the API surface for Quilkin is still under development.

Examples include:

...and more!

Session

Quilkin uses the concept of a Session to track traffic flowing through the proxy between any client-server pair. A Session serves the same purpose as, and can be thought of as a lightweight version of, a TCP session. Whereas a TCP session requires a protocol to establish and tear it down:

  • A Quilkin session is automatically created upon receiving the first packet from the client, to be sent to an upstream server.
  • The session is automatically torn down after a period of inactivity (where no packet was sent between either party) - currently 60 seconds.

A session is identified by the 4-tuple (client IP, client Port, server IP, server Port) where the client is the downstream endpoint which initiated the communication with Quilkin and the server is one of the upstream endpoints that Quilkin proxies traffic to.

Sessions are established after the filter chain completes. The destination endpoint of a packet is determined by the filter chain, so a session can only be created after filter chain completion. For example, if the filter chain drops all packets, then no session will ever be created.
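As a rough sketch, the 4-tuple that identifies a session can be modeled as a map key (illustrative only; these are not Quilkin's actual internal types):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Hypothetical sketch: a session is keyed by the 4-tuple
// (client IP, client port, server IP, server port).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct SessionKey {
    /// Downstream endpoint that initiated communication with Quilkin.
    client: SocketAddr,
    /// Upstream endpoint that Quilkin proxies traffic to.
    server: SocketAddr,
}

fn main() {
    let mut sessions: HashMap<SessionKey, &str> = HashMap::new();
    let key = SessionKey {
        client: "10.0.0.5:3000".parse().unwrap(),
        server: "127.0.0.1:8000".parse().unwrap(),
    };
    // The first packet from this client-server pair creates the session;
    // subsequent packets with the same 4-tuple map to the same entry.
    sessions.entry(key).or_insert("active");
    sessions.entry(key).or_insert("active");
    assert_eq!(sessions.len(), 1);
}
```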

Metrics

The proxy exposes the following metrics around sessions:

  • quilkin_session_active (Gauge)

    The number of currently active sessions.

  • quilkin_session_duration_secs (Histogram)

    A histogram over how long sessions lasted before they were torn down. Note that, by definition, active sessions are not included in this metric.

  • quilkin_session_total (Counter)

    The total number of sessions that have been created.

  • quilkin_session_rx_bytes_total (Counter)

    The total number of bytes received from the upstream endpoint.

  • quilkin_session_tx_bytes_total (Counter)

    The total number of bytes sent to the upstream endpoint.

  • quilkin_session_rx_packets_total (Counter)

    The total number of packets received from the upstream endpoint.

  • quilkin_session_tx_packets_total (Counter)

    The total number of packets sent to the upstream endpoint.

  • quilkin_session_packets_dropped_total (Counter)

    The total number of packets received from the upstream endpoint which were dropped by the filter chain rather than forwarded to the downstream endpoint.

  • quilkin_session_rx_errors_total (Counter)

    The total number of errors encountered while reading a packet from the upstream endpoint.

  • quilkin_session_tx_errors_total (Counter)

    The total number of errors encountered while sending a packet to the upstream endpoint.

Proxy

Concepts

Upstream Endpoint

An Upstream Endpoint represents a server that Quilkin forwards packets to. It is represented by an IP address and port. An upstream endpoint can optionally be associated with a (potentially empty) set of tokens as well as metadata.

Endpoint Metadata

Metadata associated with an endpoint consists of arbitrary key-value pairs which Filters can consult when processing packets (e.g. they can contain information that determines whether or not to route a particular packet to an endpoint). Keys must be of type string, otherwise the configuration is rejected.

In fact, the tokens associated with an endpoint are simply a special piece of metadata that is well known to Quilkin, and are used by the built-in TokenRouter filter to route packets. Such well-known values are placed within an object in the endpoint metadata, under the special key quilkin.dev. Currently, only the tokens entry is in use.

As an example, the following shows the configuration for an endpoint with its metadata:

static:
  endpoints:
    - address: 127.0.0.1:26000
      metadata:
        canary: false
        quilkin.dev: # This object is extracted by Quilkin and is usually reserved for built-in features
          tokens:
            - MXg3aWp5Ng== # base64 for 1x7ijy6
            - OGdqM3YyaQ== # base64 for 8gj3v2i

An endpoint's metadata can be specified alongside the endpoint in static configuration or using the xDS endpoint metadata field when using dynamic configuration via xDS.

Session

A session represents ongoing communication flow between a client and an Upstream Endpoint. See the Session documentation for more information.

Metrics

The proxy exposes the following general metrics (see the metrics sub-sections for metrics specific to other Quilkin components: e.g. for metrics related to packet flow see the session metrics, while metrics exported by individual filters can be found in the documentation for each filter):

  • quilkin_proxy_packets_dropped_total{reason} (Counter)

    The total number of packets (not associated with any session) that were dropped by the proxy. Note that packets reflected by this metric were dropped at an earlier stage, before they were associated with any session. For session based metrics, see the list of session metrics instead.

    • reason = NoConfiguredEndpoints
      • NoConfiguredEndpoints: No upstream endpoints were available to send the packet to. This can occur, e.g. if the endpoints cluster was scaled down to zero and the proxy is configured via a control plane.
  • quilkin_cluster_active (Gauge)

    The number of currently active clusters.

  • quilkin_cluster_active_endpoints (Gauge)

    The number of currently active upstream endpoints. Note that this tracks the number of endpoints that the proxy knows of, rather than those that it is connected to (see Session Metrics instead for those).

Proxy Configuration

The following is the schema and reference for a Quilkin proxy configuration file. See the examples folder for example configuration files.

By default Quilkin will look for a configuration file named quilkin.yaml in its current working directory first, then if not present, in /etc/quilkin/quilkin.yaml on UNIX systems. This can be overridden with the -c/--config command-line argument, or the QUILKIN_FILENAME environment variable.

type: object
properties:
  version:
    type: string
    description: |
      The configuration file version to use.
    enum:
      - v1alpha1
  proxy:
    type: object
    description: |
      Configuration of core proxy behavior.
    properties:
      id:
        type: string
        description: |
          An identifier for the proxy instance.
        default: On Linux, the machine hostname is used as the default. On all other platforms, a UUID is generated for the proxy.
      port:
        type: integer
        description: |
          The listening port for the proxy.
        default: 7000
  admin:
    type: object
    description: |
      Configuration of proxy admin HTTP interface.
    properties:
      address:
        type: string
        description: |
          Socket Address and port to bind the administration interface to.
        default: "[::]:9091"
  static:
    type: object
    description: |
      Static configuration of endpoints and filters.
      NOTE: Exactly one of `static` or `dynamic` can be specified.
    properties:
      filter:
        '$ref': '#/definitions/filterchain'
      endpoints:
        '$ref': '#/definitions/endpoints'
    required:
      - endpoints
  dynamic:
    type: object
    description: |
      Dynamic configuration of endpoints and filters.
      NOTE: Exactly one of `static` or `dynamic` can be specified.
    properties:
      management_servers:
        type: array
        description: |
          A list of XDS management servers to fetch configuration from.
          Multiple servers can be provided for redundancy for the proxy to
          fall back to upon error.
        items:
          type: object
          description: |
            Configuration for a management server.
          properties:
            address:
              type: string
              description: |
                Address of the management server. This must have the `http(s)` scheme prefix.
                Example: `http://example.com`
    required:
      - management_servers

required:
  - version

definitions:
  filterchain:
    type: array
    description: |
      A filter chain.
    items:
      '$ref': {} # Refer to the Filter documentation for a filter configuration schema.
  endpoints:
    type: array
    description: |
      A list of upstream endpoints to forward packets to.
    items:
      type: object
      description: |
        An upstream endpoint.
      properties:
        address:
          type: string
          description: |
            Socket address of the endpoint. This must be of the `IP:Port` form e.g. `192.168.1.1:7001`
        metadata:
          type: object
          description: |
            Arbitrary key value pairs that are associated with the endpoint.
            These are visible to Filters when processing packets and can be used to provide more context about endpoints (e.g. whether or not to route a packet to an endpoint).
            Keys must be of type string otherwise the configuration is rejected.
      required:
        - address

Filters

In most cases, we would like Quilkin to do some preprocessing of received packets before sending them off to their destination. Because this stage is entirely specific to the use case at hand and differs between Quilkin deployments, we must have a say over what tweaks to perform - this is where filters come in.

Filters and Filter chain

A filter represents a step in the tweaking/decision-making process of how we would like to process our packets. For example, at some step, we might choose to append some metadata to every packet we receive before forwarding it while at a later step, choose not to forward packets that don't meet some criteria.

Quilkin lets us specify any number of filters and connect them in a sequence to form a packet processing pipeline similar to a Unix pipeline - we call this pipeline a Filter chain. The combination of filters and filter chain allows us to add new functionality to fit every scenario without changing Quilkin's core.

As an example, say we would like to perform the following steps in our processing pipeline to the packets we receive.

  • Append a predetermined byte to the packet.
  • Compress the packet.
  • Do not forward (drop) the packet if its compressed length is over 512 bytes.

We would create a filter corresponding to each step, either by leveraging existing filters that do what we want or by writing one ourselves, and connect them to form the following filter chain:

append | compress | drop

When Quilkin consults our filter chain, it feeds the received packet into append and forwards the packet it receives (if any) from drop - i.e the output of append becomes the input into compress and so on in that order.

There are a few things we note here:

  • Although we have in this example a filter called drop, every filter in the filter chain has the same ability to drop or update a packet. If any filter drops a packet, no more work needs to be done regarding that packet, and the next filter in the pipeline never has any knowledge that the dropped packet ever existed.

  • The filter chain is consulted for every received packet, and its filters are traversed in reverse order for packets travelling in the opposite direction. A packet received downstream will be fed into append and the result from drop is forwarded upstream - a packet received upstream will be fed into drop and the result from append is forwarded downstream.

  • Exactly one filter chain is specified and used to process all packets that flow through Quilkin.
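The traversal order described above can be sketched as follows (a toy illustration of the concept, not Quilkin's actual implementation):

```rust
fn main() {
    // The conceptual chain from the example above.
    let chain = ["append", "compress", "drop"];

    // Read direction (downstream client -> upstream server):
    // filters run in the configured order.
    let read_order: Vec<&str> = chain.to_vec();

    // Write direction (upstream server -> downstream client):
    // the same chain is traversed in reverse.
    let write_order: Vec<&str> = chain.iter().rev().copied().collect();

    assert_eq!(read_order, ["append", "compress", "drop"]);
    assert_eq!(write_order, ["drop", "compress", "append"]);
}
```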

Metrics

  • filter_read_duration_seconds The duration it took for a filter's read implementation to execute.

    • Labels
      • filter The name of the filter being executed.
  • filter_write_duration_seconds The duration it took for a filter's write implementation to execute.

    • Labels
      • filter The name of the filter being executed.

Configuration Examples

// Wrap this example within an async main function since the
// local_rate_limit filter spawns a task on initialization
#[tokio::main]
async fn main() {
let yaml = "
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.debug.v1alpha1.Debug
      config:
        id: debug-1
    - name: quilkin.extensions.filters.local_rate_limit.v1alpha1.LocalRateLimit
      config:
        max_packets: 10
        period: 500ms
  endpoints:
    - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.source.get_static_filters().unwrap().len(), 2);
quilkin::Builder::from(std::sync::Arc::new(config)).validate().unwrap();
}

We specify our filter chain in the .filters section of the proxy's configuration, which takes a sequence of FilterConfig objects. Each object describes all the information necessary to create a single filter.

The above example creates a filter chain comprising a Debug filter followed by a Rate limiter filter - the effect is that every packet will be logged and the proxy will not forward more than 20 packets per second.

The sequence determines the filter chain order, so its ordering matters - the chain starts with the filter corresponding to the first filter config and ends with the filter corresponding to the last filter config in the sequence.

Filter Dynamic Metadata

A filter within the filter chain can share data with another filter further along the filter chain by propagating the desired data alongside the packet being processed. This enables sharing dynamic information at runtime, e.g. information about the current packet that might be useful to other filters that process that packet.

At packet processing time, each packet is associated with filter dynamic metadata (a set of key-value pairs). Each key is a unique string, while its value is arbitrary. When a filter processes a packet, it can choose to consult the associated dynamic metadata for more information, or itself add, update, or remove key-values from the set.
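A toy model of this behaviour (illustrative only; Quilkin's actual metadata values can hold other types as well):

```rust
use std::collections::HashMap;

fn main() {
    // Dynamic metadata modeled here as a string-keyed map of byte values.
    let mut metadata: HashMap<String, Vec<u8>> = HashMap::new();

    // A filter earlier in the chain writes a value under a well-known key...
    metadata.insert("quilkin.dev/captured_bytes".to_string(), b"1x7".to_vec());

    // ...and a filter later in the chain reads it back while processing
    // the same packet.
    let token = metadata.get("quilkin.dev/captured_bytes");
    assert_eq!(token, Some(&b"1x7".to_vec()));
}
```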

As an example, the built-in CaptureBytes filter is one such filter that populates a packet's dynamic metadata. CaptureBytes extracts information (a configurable byte sequence) from each packet and appends it to the packet's dynamic metadata for other filters to leverage. On the other hand, the built-in TokenRouter filter selects which endpoint to route a packet to by consulting the packet's dynamic metadata for a routing token. Consequently, we can build a filter chain with a CaptureBytes filter preceding a TokenRouter filter, both configured to write and read the same key in the dynamic metadata. The effect is that packets are routed to upstream endpoints based on token information extracted from their contents.
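Such a chain might be sketched in configuration like this (the TokenRouter filter name and token values here are illustrative; consult each filter's own documentation for its exact name and schema):

```yaml
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.capture_bytes.v1alpha1.CaptureBytes
      config:
        size: 3 # capture a 3-byte routing token into dynamic metadata
    - name: quilkin.extensions.filters.token_router.v1alpha1.TokenRouter
  endpoints:
    - address: 127.0.0.1:26000
      metadata:
        quilkin.dev:
          tokens:
            - MXg3aWp5Ng== # base64 for 1x7ijy6
```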

Well Known Dynamic Metadata

The following metadata are currently used by Quilkin core and built-in filters.

| Name | Type | Description |
|------|------|-------------|
| quilkin.dev/captured_bytes | Vec<u8> | The default key under which the CaptureBytes filter puts the byte slices it extracts from each packet. |

Built-in filters

Quilkin includes several filters out of the box.

| Filter | Description |
|--------|-------------|
| Debug | Logs every packet. |
| LocalRateLimit | Limits the frequency of packets. |
| ConcatenateBytes | Adds authentication tokens to packets. |
| CaptureBytes | Captures specific bytes from a packet and stores them in filter dynamic metadata. |
| TokenRouter | Sends packets to endpoints based on metadata. |
| Compress | Compresses and decompresses packet data. |

FilterConfig

Represents configuration for a filter instance.

properties:
  name:
    type: string
    description: |
      Identifies the type of filter to be created.
      This value is unique for every filter type - please consult the documentation for the particular filter for this value.

  config:
    type: object
    description: |
      The configuration value to be passed onto the created filter.
      This is passed as an object value since it is specific to the filter's type and is validated by the filter
      implementation. Please consult the documentation for the particular filter for its schema.

required: [ 'name', 'config' ]

CaptureBytes

The CaptureBytes filter's job is to find a series of bytes within a packet, and capture it into Filter Dynamic Metadata, so that it can be utilised by filters further down the chain.

This is often used as a way of retrieving authentication tokens from a packet, and is used in combination with the ConcatenateBytes and TokenRouter filters to provide common packet routing utilities.

Filter name

quilkin.extensions.filters.capture_bytes.v1alpha1.CaptureBytes

Configuration Examples


#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.capture_bytes.v1alpha1.CaptureBytes
      config:
          strategy: PREFIX
          metadataKey: myapp.com/myownkey
          size: 3
          remove: false
  endpoints:
    - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.source.get_static_filters().unwrap().len(), 1);
quilkin::Builder::from(std::sync::Arc::new(config)).validate().unwrap();
}

Configuration Options

properties:
  strategy:
    type: string
    description: |
      The selected strategy for capturing the series of bytes from the incoming packet.
       - SUFFIX: Retrieve bytes from the end of the packet.
       - PREFIX: Retrieve bytes from the beginning of the packet.
    default: "SUFFIX"
    enum: ['PREFIX', 'SUFFIX']
  metadataKey:
    type: string
    default: quilkin.dev/captured_bytes
    description: | 
      The key under which the captured bytes are stored in the Filter invocation values.
  size:
    type: integer
    description: |
      The number of bytes in the packet to capture using the applied strategy.
  remove:
    type: boolean
    default: false
    description: |
      Whether or not to remove the captured bytes from the packet before passing it along to the next filter in the
      chain.
required: ['size']

Metrics

  • quilkin_filter_CaptureBytes_packets_dropped
    A counter of the total number of packets that have been dropped due to their length being less than the configured size.

Compress

The Compress filter's job is to provide a variety of compression implementations for the compression and subsequent decompression of UDP data sent between systems, such as a game client and a game server.

Filter name

quilkin.extensions.filters.compress.v1alpha1.Compress

Configuration Examples


#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.compress.v1alpha1.Compress
      config:
          on_read: COMPRESS
          on_write: DECOMPRESS
          mode: SNAPPY
  endpoints:
    - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.source.get_static_filters().unwrap().len(), 1);
quilkin::Builder::from(std::sync::Arc::new(config)).validate().unwrap();
}

The above example shows a proxy that could be used with a typical game client, where the original client data is sent to the local listening port, compressed when heading up to a dedicated game server, and then decompressed when traffic is returned from the dedicated game server, before being handed back to the game client.

Since the Compress filter modifies the entire packet, it is worth paying special attention to where it is placed in your Filter configuration. Most of the time it will likely be the first or last Filter configured, to ensure it is compressing the entire set of data being sent.

Configuration Options

properties:
  on_read:
    '$ref': '#/definitions/action'
    description: |
      Whether to compress, decompress or do nothing when reading packets from the local listening port
  on_write:
    '$ref': '#/definitions/action'
    description: |
      Whether to compress, decompress or do nothing when writing packets to the local listening port
  mode:
    type: string
    description: |
      The compression implementation to use on the incoming and outgoing packets. See "Compression Modes" for details.
    enum:
      - SNAPPY
    default: SNAPPY

definitions:
  action:
    type: string
    enum:
      - DO_NOTHING
      - COMPRESS
      - DECOMPRESS
    default: DO_NOTHING

Compression Modes

Snappy

Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression.

Currently, this filter only provides the Snappy compression format via the rust-snappy crate, but more will be provided in the future.

Metrics

  • quilkin_filter_Compress_packets_dropped_total Total number of packets dropped as they could not be processed.
    • Labels:
      • action: The action that could not be completed successfully, thereby causing the packet to be dropped.
        • Compress: Compressing the packet with the configured mode was attempted.
        • Decompress: Decompressing the packet with the configured mode was attempted.
  • quilkin_filter_Compress_decompressed_bytes_total Total number of decompressed bytes either received or sent.
  • quilkin_filter_Compress_compressed_bytes_total Total number of compressed bytes either received or sent.

Debug

The Debug filter logs all incoming and outgoing packets to standard output.

This filter is useful in debugging deployments where the packets strictly contain valid UTF-8 encoded strings. A generic error message is instead logged if conversion from bytes to UTF-8 fails.

Filter name

quilkin.extensions.filters.debug.v1alpha1.Debug

Configuration Examples


#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.debug.v1alpha1.Debug
      config:
        id: debug-1
  endpoints:
    - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.source.get_static_filters().unwrap().len(), 1);
quilkin::Builder::from(std::sync::Arc::new(config)).validate().unwrap();
}

Configuration Options

properties:
  id:
    type: string
    description: |
      An identifier that will be included with each log message.

Metrics

This filter currently exports no metrics.

LoadBalancer

The LoadBalancer filter distributes packets received downstream among all upstream endpoints.

Filter name

quilkin.extensions.filters.load_balancer.v1alpha1.LoadBalancer

Configuration Examples

#[tokio::main]
async fn main() {
  let yaml = "
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.load_balancer.v1alpha1.LoadBalancer
      config:
        policy: ROUND_ROBIN
  endpoints:
    - address: 127.0.0.1:7001
";
  let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.source.get_static_filters().unwrap().len(), 1);
  quilkin::Builder::from(std::sync::Arc::new(config)).validate().unwrap();
}

The load balancing policy (the strategy to use to select what endpoint to send traffic to) is configurable. In the example above, packets will be distributed by selecting endpoints in turn, in round robin fashion.
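As a mental model, the ROUND_ROBIN policy behaves like an atomic counter taken modulo the number of endpoints. The sketch below is a simplified stand-in to illustrate the selection behaviour, not Quilkin's actual implementation; the `round_robin` function and the endpoint list are hypothetical names:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Pick the next endpoint in turn; `endpoints` stands in for the
// configured upstream endpoint addresses.
fn round_robin<'a>(endpoints: &[&'a str], counter: &AtomicUsize) -> &'a str {
    let index = counter.fetch_add(1, Ordering::Relaxed) % endpoints.len();
    endpoints[index]
}

fn main() {
    let endpoints = ["127.0.0.1:7001", "127.0.0.1:7002", "127.0.0.1:7003"];
    let counter = AtomicUsize::new(0);
    for _ in 0..4 {
        // Cycles through the endpoints, wrapping back to the first.
        println!("{}", round_robin(&endpoints, &counter));
    }
}
```

The RANDOM and HASH policies differ only in how the index is chosen: a random value, or a hash of the packet's source IP and port, respectively.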

Configuration Options

properties:
  policy:
    type: string
    description: |
      The load balancing policy with which to distribute packets among endpoints.
    enum:
      - ROUND_ROBIN # Send packets by selecting endpoints in turn.
      - RANDOM      # Send packets by randomly selecting endpoints.
      - HASH        # Send packets by hashing the source IP and port.
    default: ROUND_ROBIN

Metrics

This filter currently does not expose any metrics.

LocalRateLimit

The LocalRateLimit filter controls the frequency at which packets received downstream are forwarded upstream by the proxy.

Filter name

quilkin.extensions.filters.local_rate_limit.v1alpha1.LocalRateLimit

Configuration Examples

// Wrap this example within an async main function since the
// local_rate_limit filter spawns a task on initialization
#[tokio::main]
async fn main() {
  let yaml = "
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.local_rate_limit.v1alpha1.LocalRateLimit
      config:
        max_packets: 1000
        period: 500ms
  endpoints:
    - address: 127.0.0.1:7001
";
  let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.source.get_static_filters().unwrap().len(), 1);
  quilkin::Builder::from(std::sync::Arc::new(config)).validate().unwrap();
}

To configure a rate limiter, we specify the maximum rate at which the proxy is allowed to forward packets. In the example above, we configured the proxy to forward a maximum of 1000 packets per 500ms (2000 packets/second).

Packets that exceed the maximum configured rate are dropped.
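As an illustration of the max_packets/period semantics, a rate limiter can be sketched as a fixed-window counter. This is a simplified stand-in, not Quilkin's actual implementation; the `RateLimiter` type and its methods are hypothetical:

```rust
use std::time::{Duration, Instant};

// Fixed-window limiter sketch: at most `max_packets` are forwarded per `period`.
struct RateLimiter {
    max_packets: u32,
    period: Duration,
    window_start: Instant,
    forwarded: u32,
}

impl RateLimiter {
    fn new(max_packets: u32, period: Duration) -> Self {
        Self { max_packets, period, window_start: Instant::now(), forwarded: 0 }
    }

    // Returns true if a packet arriving at `now` may be forwarded,
    // false if it exceeds the configured rate and should be dropped.
    fn allow(&mut self, now: Instant) -> bool {
        if now.duration_since(self.window_start) >= self.period {
            // A new window begins; reset the counter.
            self.window_start = now;
            self.forwarded = 0;
        }
        if self.forwarded < self.max_packets {
            self.forwarded += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    // max_packets: 1000, period: 500ms, as in the example configuration.
    let mut limiter = RateLimiter::new(1000, Duration::from_millis(500));
    let now = Instant::now();
    let forwarded = (0..1500).filter(|_| limiter.allow(now)).count();
    println!("forwarded {} of 1500 packets in one window", forwarded);
}
```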

Configuration Options

properties:
  max_packets:
    type: integer
    description: |
      The maximum number of packets allowed to be forwarded over the given duration.
    minimum: 0

  period:
    type: string
    description: |
      A human readable duration over which `max_packets` applies.
      Examples: `1s` 1 second, `500ms` 500 milliseconds.
      The minimum allowed value is 100ms.
    default: '1s' # 1 second

required: [ 'max_packets' ]

Metrics

  • quilkin_filter_LocalRateLimit_packets_dropped
    A counter over the total number of packets that have exceeded the configured maximum rate limit and have been dropped as a result.

TokenRouter

The TokenRouter filter's job is to provide a mechanism to declare which Endpoints a packet should be sent to.

This Filter provides this functionality by comparing a byte array token found in the Filter Dynamic Metadata from a previous Filter against each Endpoint's tokens, and sending packets only to those Endpoints whose tokens match.

Filter name

quilkin.extensions.filters.token_router.v1alpha1.TokenRouter

Configuration Examples


#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.token_router.v1alpha1.TokenRouter
      config:
          metadataKey: myapp.com/myownkey
  endpoints: 
    - address: 127.0.0.1:26000
      metadata:
        quilkin.dev:
          tokens:
            - MXg3aWp5Ng== # Authentication is provided by these ids, and matched against 
            - OGdqM3YyaQ== # the value stored in Filter dynamic metadata
    - address: 127.0.0.1:26001
      metadata:
        quilkin.dev:
          tokens:
            - bmt1eTcweA==
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.source.get_static_filters().unwrap().len(), 1);
quilkin::Builder::from(std::sync::Arc::new(config)).validate().unwrap();
}

View the CaptureBytes filter documentation for more details.
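The matching step can be sketched as follows. The `Endpoint` struct and `route` function are illustrative stand-ins, not Quilkin types; the token byte values are the base64-decoded forms of the tokens in the example configuration above:

```rust
// Sketch: the token captured into Filter dynamic metadata is compared
// against each Endpoint's configured tokens; only matching Endpoints
// receive the packet.
struct Endpoint {
    address: &'static str,
    tokens: Vec<Vec<u8>>,
}

fn route<'a>(endpoints: &'a [Endpoint], token: &[u8]) -> Vec<&'a str> {
    endpoints
        .iter()
        .filter(|endpoint| endpoint.tokens.iter().any(|t| t.as_slice() == token))
        .map(|endpoint| endpoint.address)
        .collect()
}

fn main() {
    let endpoints = vec![
        Endpoint {
            address: "127.0.0.1:26000",
            tokens: vec![b"1x7ijy6".to_vec(), b"8gj3v2i".to_vec()],
        },
        Endpoint {
            address: "127.0.0.1:26001",
            tokens: vec![b"nkuy70x".to_vec()],
        },
    ];
    // A packet carrying this token is routed only to 127.0.0.1:26001.
    println!("{:?}", route(&endpoints, b"nkuy70x"));
}
```

An empty result corresponds to the NoEndpointMatch drop reason described under Metrics.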

Configuration Options

properties:
  metadataKey:
    type: string
    default: quilkin.dev/captured_bytes
    description: | 
      The key under which the token is stored in the Filter dynamic metadata.

Metrics

  • quilkin_filter_TokenRouter_packets_dropped
    A counter of the total number of packets that have been dropped. This is also provided with a Reason label, as there are differing reasons for packets to be dropped:
    • NoEndpointMatch - The token provided via the Filter dynamic metadata does not match any Endpoint's tokens.
    • NoTokenFound - No token has been found in the Filter dynamic metadata.
    • InvalidToken - The data found for the token in the Filter dynamic metadata is not of the correct data type (Vec<u8>)

Sample Applications

Packet Authentication

In combination with several other filters, the TokenRouter can be utilised as an authentication and access control mechanism for all incoming packets.

Capturing the authentication token from an incoming packet can be implemented via the CaptureBytes filter (an example is outlined below), or via any other filter that populates the configured dynamic metadata key with the authentication token.

It is assumed that the endpoint tokens that are used for authentication are generated by an external system, are appropriately cryptographically random and sent to each proxy securely.

For example, a configuration would look like:


#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.capture_bytes.v1alpha1.CaptureBytes # Capture and remove the authentication token
      config:
          size: 3
          remove: true
    - name: quilkin.extensions.filters.token_router.v1alpha1.TokenRouter
  endpoints: 
    - address: 127.0.0.1:26000
      metadata:
        quilkin.dev:
          tokens:
            - MXg3aWp5Ng== # Authentication is provided by these ids, and matched against 
            - OGdqM3YyaQ== # the value stored in Filter dynamic metadata
    - address: 127.0.0.1:26001
      metadata:
        quilkin.dev:
          tokens:
            - bmt1eTcweA==
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.source.get_static_filters().unwrap().len(), 2);
quilkin::Builder::from(std::sync::Arc::new(config)).validate().unwrap();
}

On the game client side the ConcatenateBytes filter could also be used to add authentication tokens to outgoing packets.

Writing Custom Filters

Quilkin provides an extensible implementation of Filters that allows us to plug in custom implementations to fit our needs. This document provides an overview of the API and how we can go about writing our own Filters.

API Components

The following components make up Quilkin's implementation of filters.

Filter

A trait representing an actual Filter instance in the pipeline.

  • An implementation provides a read and a write method.
  • Both methods are invoked by the proxy when it consults the filter chain - their arguments contain information about the packet being processed.
  • read is invoked when a packet is received on the local downstream port and is to be sent to an upstream endpoint while write is invoked in the opposite direction when a packet is received from an upstream endpoint and is to be sent to a downstream client.
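The read/write flow can be sketched as folding a packet through the chain. The trait and function below are simplified stand-ins for Quilkin's actual Filter, ReadContext, and ReadResponse types, illustrating only the short-circuiting behaviour:

```rust
// Simplified stand-in for the Filter trait: returning None drops the
// packet, Some passes the (possibly modified) contents onward.
trait Filter {
    fn read(&self, contents: Vec<u8>) -> Option<Vec<u8>>;
}

// The proxy consults each filter in order; the first None short-circuits
// and the packet is dropped.
fn chain_read(filters: &[Box<dyn Filter>], mut contents: Vec<u8>) -> Option<Vec<u8>> {
    for filter in filters {
        contents = filter.read(contents)?;
    }
    Some(contents)
}

// An example filter that uppercases packet contents.
struct Uppercase;
impl Filter for Uppercase {
    fn read(&self, contents: Vec<u8>) -> Option<Vec<u8>> {
        Some(contents.to_ascii_uppercase())
    }
}

fn main() {
    let filters: Vec<Box<dyn Filter>> = vec![Box::new(Uppercase)];
    println!("{:?}", chain_read(&filters, b"hello".to_vec()));
}
```

The write direction works the same way, traversing the chain in reverse order.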

FilterFactory

A trait representing a type that knows how to create instances of a particular type of Filter.

  • An implementation provides a name and create_filter method.
  • create_filter takes in configuration for the filter to create and returns a FilterInstance type containing a new instance of its filter type.
  • name returns the Filter name - a unique identifier of filters of the created type (e.g quilkin.extensions.filters.debug.v1alpha1.Debug).

FilterRegistry

A struct representing the set of all filter types known to the proxy. It contains all known implementations of FilterFactory, each identified by their name.

These components come together to form the filter chain.

Note that when using dynamic configuration, the process repeats in a similar manner - new filter instances are created according to the updated filter configuration and a new filter chain is re-created while the old one is dropped.

Creating Custom Filters

To extend Quilkin's code with our own custom filter, we need to do the following:

  1. Import the Quilkin crate.
  2. Implement the Filter trait with our custom logic, as well as a FilterFactory that knows how to create instances of the Filter implementation.
  3. Start the proxy with the custom FilterFactory implementation.

The full source code used in this example can be found here

1. Import the Quilkin crate

# Start with a new crate
cargo new --bin quilkin-filter-example

Add Quilkin as a dependency in Cargo.toml.

[dependencies]
quilkin = "0.2.0"

2. Implement the filter traits

It's not terribly important what the filter in this example does, so let's write a Greet filter that prepends Hello to every packet in one direction and Goodbye to packets in the opposite direction.

We start with the Filter implementation

#![allow(unused)]
fn main() {

// src/main.rs
use quilkin::filters::prelude::*;
 
struct Greet;

impl Filter for Greet {
    fn read(&self, mut ctx: ReadContext) -> Option<ReadResponse> {
        ctx.contents.splice(0..0, String::from("Hello ").into_bytes());
        Some(ctx.into())
    }
    fn write(&self, mut ctx: WriteContext) -> Option<WriteResponse> {
        ctx.contents.splice(0..0, String::from("Goodbye ").into_bytes());
        Some(ctx.into())
    }
}
}
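The splice(0..0, …) calls insert the greeting bytes at the front of the packet contents. This standalone snippet shows the same operation on a plain Vec<u8>:

```rust
fn main() {
    // Packet contents as received from the client.
    let mut contents: Vec<u8> = b"Quilkin".to_vec();
    // Insert "Hello " at position 0, exactly as read() does above.
    contents.splice(0..0, String::from("Hello ").into_bytes());
    assert_eq!(String::from_utf8(contents).unwrap(), "Hello Quilkin");
}
```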

Next, we implement a FilterFactory for it and give it a name:

#![allow(unused)]
fn main() {

// src/main.rs
use quilkin::filters::prelude::*;

pub const NAME: &str = "greet.v1";

pub fn factory() -> DynFilterFactory {
    Box::from(GreetFilterFactory)
}

struct GreetFilterFactory;
impl FilterFactory for GreetFilterFactory {
    fn name(&self) -> &'static str {
        NAME
    }
    fn create_filter(&self, _: CreateFilterArgs) -> Result<FilterInstance, Error> {
        let filter: Box<dyn Filter> = Box::new(Greet);
        Ok(FilterInstance::new(serde_json::Value::Null, filter))
    }
}
}

3. Start the proxy

We can run the proxy in the same manner as the default Quilkin binary using the run function, passing in our custom FilterFactory. Let's add a main function that does that. Quilkin relies on the Tokio async runtime, so we need to import that crate and wrap our main function with it.

Add Tokio as a dependency in Cargo.toml.

[dependencies]
quilkin = "0.2.0"
tokio = { version = "1", features = ["full"]}

Add a main function that starts the proxy.

// src/main.rs
#[tokio::main]
async fn main() {
    quilkin::run(vec![self::factory()].into_iter())
        .await
        .unwrap();
}

Now, let's try out the proxy. The following configuration starts our extended version of the proxy at port 7001 and forwards all packets to an upstream server at port 4321.

# config.yaml
version: v1alpha1
proxy:
  port: 7001
static:
  filters:
  - name: greet.v1
  endpoints:
  - address: 127.0.0.1:4321
  • Start the proxy

    cargo run -- -c config.yaml
    
  • Start a UDP listening server on the configured port

    nc -lu 127.0.0.1 4321
    
  • Start an interactive UDP client that sends packet to the proxy

    nc -u 127.0.0.1 7001
    

Whatever we pass to the client should now show up with our modification on the listening server's standard output. For example, typing Quilkin in the client prints Hello Quilkin on the server.

4. Working with Filter Configuration

Let's extend the Greet filter to require a configuration that contains what greeting to use.

The Serde crate is used to describe static YAML configuration in code while Prost is used to describe dynamic configuration as Protobuf messages when talking to the management server.

Static Configuration

First let's create the config for our static configuration:

1. Add the yaml parsing crates to Cargo.toml:
  [dependencies]
  # ...
  serde = "1.0"
  serde_yaml = "0.8"
2. Define a struct representing the config:
// src/main.rs
#[derive(Serialize, Deserialize, Debug)]
struct Config {
    greeting: String,
}
3. Update the Greet Filter to take in greeting as a parameter:
// src/main.rs
struct Greet(String);

impl Filter for Greet {
    fn read(&self, mut ctx: ReadContext) -> Option<ReadResponse> {
        ctx.contents
            .splice(0..0, format!("{} ", self.0).into_bytes());
        Some(ctx.into())
    }
    fn write(&self, mut ctx: WriteContext) -> Option<WriteResponse> {
        ctx.contents
            .splice(0..0, format!("{} ", self.0).into_bytes());
        Some(ctx.into())
    }
}
4. Finally, update GreetFilterFactory to extract the greeting from the passed in configuration and forward it onto the Greet Filter.
// src/main.rs
use serde::{Deserialize, Serialize};
use quilkin::filters::prelude::*;
use quilkin::config::ConfigType;

pub const NAME: &str = "greet.v1";

pub fn factory() -> DynFilterFactory {
    Box::from(GreetFilterFactory)
}

struct GreetFilterFactory;
impl FilterFactory for GreetFilterFactory {
    fn name(&self) -> &'static str {
        NAME
    }
    fn create_filter(&self, args: CreateFilterArgs) -> Result<FilterInstance, Error> {
        let config = match args.config.unwrap() {
          ConfigType::Static(config) => {
              serde_yaml::from_str::<Config>(serde_yaml::to_string(config).unwrap().as_str())
                .unwrap()
          }
          ConfigType::Dynamic(_) => unimplemented!("dynamic config is not yet supported for this filter"),
        };
        let filter: Box<dyn Filter> = Box::new(Greet(config.greeting));
        Ok(FilterInstance::new(serde_json::Value::Null, filter))
    }
}

And with these changes we have wired up static configuration for our filter. Try it out with the following config.yaml:

# config.yaml
version: v1alpha1
proxy:
  port: 7001
static:
  filters:
  - name: greet.v1
    config:
      greeting: Hey
  endpoints:
  - address: 127.0.0.1:4321
Dynamic Configuration

You might have noticed while adding static configuration support, that the config argument passed into our FilterFactory has a Dynamic variant.

let config = match args.config.unwrap() {
    ConfigType::Static(config) => {
        serde_yaml::from_str::<Config>(serde_yaml::to_string(config).unwrap().as_str())
         .unwrap()
    }
    ConfigType::Dynamic(_) => unimplemented!("dynamic config is not yet supported for this filter"),
};

The Dynamic variant contains the serialized Protobuf message received from the management server for the Filter to create. As a result, its contents are entirely opaque to Quilkin and it is represented with the Prost Any type so the FilterFactory can interpret its contents however it wishes.
However, it usually contains a Protobuf equivalent of the filter's static configuration.

1. Add the proto parsing crates to Cargo.toml:
[dependencies]
# ...
tonic = "0.5.0"
prost = "0.7"
prost-types = "0.7"
2. Create a Protobuf equivalent of the static configuration:
// src/greet.proto
syntax = "proto3";

package greet;

message Greet {
  string greeting = 1;
}
3. Generate Rust code from the proto file:

There are a few ways to generate Prost code from proto; we will use the prost_build crate in this example.

Add the required crates to Cargo.toml:

[dependencies]
# ...
bytes = "1.0"

[build-dependencies]
prost-build = "0.7"

Add a build script to generate the Rust code during compilation:

// build.rs (Cargo build scripts live in the package root)
fn main() {
    prost_build::compile_protos(&["src/greet.proto"], &["src/"]).unwrap();
}

To include the generated code, we'll use a convenience macro include_proto, which imports the generated code, while recreating the grpc package name as Rust modules:

// src/main.rs
quilkin::include_proto!("greet");
use greet::Greet as ProtoGreet;
4. Decode the serialized proto message into a config:

If the message contains a Protobuf equivalent of the filter's static configuration, we can leverage the deserialize method to deserialize either a static or dynamic config. The function automatically deserializes and converts from the Protobuf type if the input contains a dynamic configuration.
As a result, the function requires that std::convert::TryFrom is implemented from our dynamic config type to its static equivalent.

// src/main.rs
impl TryFrom<ProtoGreet> for Config {
    type Error = ConvertProtoConfigError;

    fn try_from(p: ProtoGreet) -> Result<Self, Self::Error> {
        Ok(Config {
            greeting: p.greeting,
        })
    }
}
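The same conversion pattern can be exercised standalone. In this sketch, ProtoGreet is a plain stand-in struct (the real one is generated by Prost) and ConvertError stands in for ConvertProtoConfigError; the conversion is also a natural place to validate the dynamic configuration:

```rust
use std::convert::TryFrom;

// Stand-in for the Prost-generated proto message.
struct ProtoGreet {
    greeting: String,
}

#[derive(Debug, PartialEq)]
struct Config {
    greeting: String,
}

// Stand-in for ConvertProtoConfigError.
#[derive(Debug)]
struct ConvertError(String);

impl TryFrom<ProtoGreet> for Config {
    type Error = ConvertError;

    fn try_from(p: ProtoGreet) -> Result<Self, Self::Error> {
        // Reject configurations that would be meaningless at runtime.
        if p.greeting.is_empty() {
            return Err(ConvertError("greeting must not be empty".into()));
        }
        Ok(Config { greeting: p.greeting })
    }
}

fn main() {
    let config = Config::try_from(ProtoGreet { greeting: "Hey".into() });
    println!("{:?}", config);
}
```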

With our conversion implementation, we can extract a greeting from either configuration type and forward it to the Greet Filter.

// src/main.rs
pub const NAME: &str = "greet.v1";

pub fn factory() -> DynFilterFactory {
    Box::from(GreetFilterFactory)
}

struct GreetFilterFactory;
impl FilterFactory for GreetFilterFactory {
    fn name(&self) -> &'static str {
        NAME
    }
    fn create_filter(&self, args: CreateFilterArgs) -> Result<FilterInstance, Error> {
        let (config_json, config) = self
            .require_config(args.config)?
            .deserialize::<Config, ProtoGreet>(self.name())?;
        let filter: Box<dyn Filter> = Box::new(Greet(config.greeting));
        Ok(FilterInstance::new(config_json, filter))
    }
}

Quilkin Integration Examples

The Quilkin proxy can be integrated with your dedicated game servers in several ways, each providing different capabilities and complexity tradeoffs.

Below captures several of the most useful and prevalent architectural patterns to give you inspiration on how you can use Quilkin in your multiplayer game networking architecture.

Server Proxy as a Sidecar

                  +
                  |
               Internet
                  |
                  |
                  |
+---------+       |          +----------------+ +----------------+
|  Game   |       |          | Quilkin        | | Dedicated      |
|  Client <------------------> (Server Proxy) | | Game Server    |
+---------+       |          |                <->                |
                  |          +----------------+ +----------------+
                  |
                  |
                  |          +----------------+ +----------------+
                  |          | Quilkin        | | Dedicated      |
                  |          | (Server Proxy) | | Game Server    |
                  |          |                <->                |
                  |          +----------------+ +----------------+
                  |
                  |
                  |
                  +

This is the simplest integration and configuration option with Quilkin, but does provide the smallest number of possible feature implementations and ability to provide redundancy.

That being said, this is a low risk way to integrate Quilkin, and take advantage of the out-of-the-box telemetry and metric information that comes with Quilkin.

  • In this example, the Server proxy is running alongside the dedicated game server - on the same public IP/machine/container.
    • This is often referred to as a sidecar pattern.
  • Communication between the Server Proxy and the Dedicated Game Server occurs over the localhost network, with a separate port for each Game Client connection.
  • Clients connect to the Server Proxy's public port/IP combination, and the Server Proxy routes all traffic directly to the dedicated game server.
  • The Server Proxy can still use filters such as rate limiting, compression (forthcoming) or encryption (forthcoming), as long as the Game Client conforms to the standard protocols utilised by those filters as appropriate.

Client Proxy to Sidecar Server Proxy

                                    +
                                    |
                                 Internet
                                    |
                                    |
                                    |
+---------+    +----------------+   |        +----------------+ +----------------+
|  Game   |    | Quilkin        |   |        | Quilkin        | | Dedicated      |
|  Client <----> (Client Proxy) <------------> (Server Proxy) | | Game Server    |
+---------+    +----------------+   |        |                <->                |
                                    |        +----------------+ +----------------+
                                    |
                                    |
                                    |        +----------------+ +----------------+
                                    |        | Quilkin        | | Dedicated      |
                                    |        | (Server Proxy) | | Game Server    |
                                    |        |                <->                |
                                    |        +----------------+ +----------------+
                                    |
                                    |
                                    |
                                    +

This example is the same as the above, but puts a Client Proxy between the Game Client, and the Server Proxy to take advantage of Client Proxy functionality.

  • The Client Proxy may be integrated as a standalone binary, or directly into the client, with communication occurring over a localhost port.
  • The Client Proxy can now utilise filters such as compression (forthcoming) and encryption (forthcoming), without having to change the Game Client.
  • The Game Client will need to communicate to the Client Proxy what IP it should connect to when the Client is match-made with a Game Server.

Client Proxy to Separate Server Proxies Pools

                                       +                          +
                                       |                          |
                                    Internet                   Private
                                       |                       Network
                                       |     +----------------+   |          +----------------+
                                       |     | Quilkin        |   |          | Dedicated      |
                                       |  +--> (Server Proxy) <-------+------> Game Server    |
+---------+      +----------------+    |  |  |                |   |   |      |                |
|  Game   |      | Quilkin        <-------+  +----------------+   |   |      +----------------+
|  Client <------> (Client Proxy) |    |  |                       |   |
+---------+      +----------------+    |  |  +----------------+   |   |      +----------------+
                                       |  |  | Quilkin        |   |   |      | Dedicated      |
                                       |  +--> (Server Proxy) <-------+      | Game Server    |
                                       |     |                |   |          |                |
                                       |     +----------------+   |          +----------------+
                                       |                          |
                                       |     +----------------+   |          +----------------+
                                       |     | Quilkin        |   |          | Dedicated      |
                                       |     | (Server Proxy) |   |          | Game Server    |
                                       |     |                |   |          |                |
                                       |     +----------------+   |          +----------------+
                                       +                          +

This is the most complex configuration, but enables the most reuse of Quilkin's functionality, while also providing the most redundancy and security for your dedicated game servers.

  • The Game client sends and receives packets from the Quilkin client proxy.
  • The Client Proxy may be integrated as a standalone binary, or directly into the client, with communication occurring over a localhost port.
  • The Client Proxy can utilise the full set of filters, such as routing, compression (forthcoming) and encryption (forthcoming), without having to change the Game Client.
  • There are a hosted set of Quilkin Server proxies that have public IP addresses, and are connected to a control plane to coordinate routing and access control to the dedicated game servers, which are on private IP addresses.
  • The Client Proxy is made aware of one or more Server proxies to connect to, possibly via their Game Client matchmaker or another service, with an authentication token to pass to the Server proxies, such that the UDP packets can be routed correctly to the dedicated game server they should connect to.
  • Dedicated game servers receive traffic as per normal from the Server Proxies, and send data back to the proxies directly.
  • If the dedicated game server always expects traffic from only a single ip/port combination for client connection, then traffic will always need to be sent through a single Server Proxy. Otherwise, UDP packets can be load balanced via the Client Proxy to multiple Server Proxies for even greater redundancy.

What Next?


Diagrams powered by http://asciiflow.com/

Administration Interface

Quilkin exposes an HTTP interface to query different aspects of the server.

It is assumed that the administration interface will only ever be accessible on localhost.

By default, the administration interface is bound to [::]:9091, but it can be configured through the proxy configuration file, like so:

admin:
  address: [::]:9095

The admin interface provides the following endpoints:

/live

This provides a liveness probe endpoint, most commonly used in Kubernetes based systems.

Will return an HTTP status of 200 when all health checks pass.

/metrics

Outputs Prometheus formatted metrics for this proxy.

See the Proxy Metrics documentation for what metrics are available.

/config

Returns a JSON representation of the cluster and filter chain configuration that the proxy is running with at the time of invocation.

FAQ

Just how fast is Quilkin? What sort of performance can I expect?

Our current testing shows that Quilkin processes packets quite fast!

We won't be publishing performance benchmarks, as performance will always change depending on the underlying hardware, number of filters, configurations and more.

We highly recommend you run your own load tests on your platform and configuration, matching your production workload and configuration as close as possible.

Our iperf3 based performance test in the examples folder is a good starting point.

Since this is still an alpha project, we plan to investigate further performance improvements in upcoming releases, from both an optimisation and an observability perspective.

Can I integrate Quilkin with C++ code?

Quilkin is also released as a library, so it can be integrated with an external codebase as necessary.

Using Rust code inside a C or C++ project mostly consists of two parts.

  • Creating a C-friendly API in Rust
  • Embedding your Rust project into an external build system

See A little Rust with your C for more information.

Over time, we will be expanding documentation on how to integrate with specific engines if running Quilkin as a separate binary is not an option.

I would like to run Quilkin as a client side proxy on a console. Can I do that?

This is an ongoing discussion, and since console development is protected by non-disclosure agreements, we can't comment on this directly.

That being said, we are having discussions on how we can release lean versions of certain filters that would work with known supported game engines and languages for circumstances where compiling Rust or providing a separate Quilkin binary as an executable is not an option.

Any reason you didn't contribute this into/extend Envoy?

This is an excellent question! Envoy is an amazing project, and has set many of the standards for how proxies are written and orchestrated, and was an inspiration for many of the decisions made on Quilkin.

However, we decided to build this project separately:

  • Envoy seems primarily focused on web/mobile network workloads (which makes total sense), whereas we wanted something specialised on gaming UDP communication, so having a leaner, more focused codebase would allow us to move faster.
  • We found the Rust and Cargo ecosystem easier to work with than Bazel and C++, and figured our users would as well.

Dynamic Configuration using xDS Management Servers

In addition to static configuration provided upon startup, a Quilkin proxy's configuration can also be updated at runtime. The proxy can be configured on startup to talk to a set of management servers which provide it with updates throughout its lifecycle.

Communication between the proxy and management server uses the xDS gRPC protocol, similar to an Envoy proxy. xDS is one of the standard configuration mechanisms for software proxies and as a result, Quilkin can be set up to discover configuration resources from any API compatible server. Also, given that the protocol is well specified, it is similarly straightforward to implement a custom server to suit any deployment's needs.

The go-control-plane project provides production ready implementations of the API on top of which custom servers can be built relatively easily.

As described within the xDS-api documentation, the xDS API comprises a set of resource discovery APIs, each serving a specific set of configuration resource types, while the protocol itself comes in several variants. Quilkin implements the Aggregated Discovery Service (ADS) State of the World (SotW) variant with gRPC.

Supported APIs

Since the range of resources configurable by the xDS API extends beyond Quilkin's domain (i.e being UDP based, Quilkin does not have a need for HTTP/TCP resources), only a subset of the API is supported. The following lists the relevant parts and any limitations to the provided support:

  • Cluster Discovery Service (CDS): Provides information about known clusters and their membership information.

    • The proxy uses these resources to discover clusters and their endpoints.
    • While cluster topology information like locality can be provided in the configuration, the proxy currently does not use this information (support may be included in the future however).
    • Any load balancing information included in this resource is ignored. For load balancing, use Quilkin filters instead.
    • Only the cluster discovery types STATIC and EDS are supported. Configuration including other discovery types, e.g LOGICAL_DNS, is rejected.
  • Endpoint Discovery Service (EDS): Provides information about endpoints.

    • The proxy uses these resources to discover information about endpoints like their IP addresses.
    • Endpoints may provide Endpoint Metadata via the metadata field. These metadata will be visible to filters as part of the corresponding endpoints information when processing packets.
    • Only socket addresses are supported on an endpoint's address configuration - i.e an IP address and port number combination. Configuration including any other type of addressing e.g named pipes will be rejected.
    • Any load balancing information included in this resource is ignored. For load balancing, use Quilkin filters instead.
  • Listener Discovery Service (LDS): Provides information about Filters and Filter Chains.

    • Only the name and filter_chains fields in the Listener resource are used by the proxy. The rest are ignored.
    • Since Quilkin only uses one filter chain per proxy, at most one filter chain can be provided in the resource; otherwise the configuration is rejected.
    • Only the list of filters specified in the filter chain is used by the proxy; other fields like filter_chain_match are ignored. This list also specifies the order in which the corresponding filter chain is constructed.
    • gRPC proto configuration for Quilkin's built-in filters can be found here. They are equivalent to the filter's static configuration.
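To make the CDS limitations above concrete, the following is a sketch of a Cluster resource, rendered as YAML for readability; field names follow the Envoy `Cluster` proto, and the cluster name and addresses are illustrative:

```yaml
# Illustrative Cluster resource (YAML rendering of the Envoy Cluster proto).
name: game-servers
type: STATIC                        # STATIC and EDS are the supported types
load_assignment:
  cluster_name: game-servers
  endpoints:
    - lb_endpoints:
        - endpoint:
            address:
              socket_address:       # only socket addresses are accepted
                address: 127.0.0.1
                port_value: 26000
# lb_policy and any other load-balancing fields would be ignored by the
# proxy; load balancing is done with Quilkin filters instead.
```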
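Similarly, a sketch of an EDS ClusterLoadAssignment showing an endpoint with attached Endpoint Metadata; the `quilkin.dev` metadata namespace and `tokens` field are illustrative, so consult a given filter's documentation for the keys it actually reads:

```yaml
# Illustrative ClusterLoadAssignment resource.
cluster_name: game-servers
endpoints:
  - lb_endpoints:
      - endpoint:
          address:
            socket_address:         # named pipes etc. would be rejected
              address: 10.0.0.12
              port_value: 26000
        metadata:                   # Endpoint Metadata, visible to filters
          filter_metadata:
            quilkin.dev:            # illustrative metadata namespace
              tokens:
                - "MXg3aWp5Ng=="
```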
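And a Listener sketch: only `name` and `filter_chains` matter to the proxy, and the order of the single chain's filter list is the order the chain is constructed in. The filter name and type URL below are illustrative, not an exact Quilkin proto path:

```yaml
# Illustrative Listener resource; all fields other than name and
# filter_chains are ignored by the proxy.
name: my-listener
filter_chains:                      # at most one chain is accepted
  - filters:                        # constructed in this order
      - name: my-filter             # illustrative filter entry
        typed_config:
          '@type': type.googleapis.com/quilkin.filters.debug.v1alpha1.Debug
          id: debug-1
# filter_chain_match, if present, would be ignored.
```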

Metrics

Quilkin exposes the following metrics around the management servers and its resources:

  • quilkin_xds_connected_state (Gauge)

    A boolean that indicates whether the proxy is currently connected to a management server. A value of 1 means the proxy is connected, while 0 means it is not connected to any server at that point in time.

  • quilkin_xds_update_attempt_total (Counter)

    The total number of attempts made by a management server to configure the proxy. This is equivalent to the total number of configuration updates received by the proxy from a management server.

  • quilkin_xds_update_success_total (Counter)

    The total number of successful attempts made by a management server to configure the proxy. This is equivalent to the total number of configuration updates received by the proxy from a management server that were successfully applied by the proxy.

  • quilkin_xds_update_failure_total (Counter)

    The total number of unsuccessful attempts made by a management server to configure the proxy. This is equivalent to the total number of configuration updates received by the proxy from a management server that were rejected by the proxy (e.g. due to a bad/inconsistent configuration).

  • quilkin_xds_requests_total (Counter)

    The total number of DiscoveryRequests made by the proxy to management servers. This tracks messages flowing in the direction from the proxy to the management server.
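The metrics above can be combined in monitoring queries. As a sketch, a Prometheus alerting-rule fragment (the group and alert names are illustrative) that fires when the proxy loses its management-server connection or starts rejecting configuration updates:

```yaml
groups:
  - name: quilkin-xds               # illustrative rule group name
    rules:
      - alert: QuilkinXdsDisconnected
        expr: quilkin_xds_connected_state == 0
        for: 5m                     # disconnected for five minutes straight
      - alert: QuilkinXdsUpdatesRejected
        # attempts should equal successes; any failure rate suggests the
        # management server is sending bad/inconsistent configuration
        expr: rate(quilkin_xds_update_failure_total[5m]) > 0
        for: 10m
```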