Overview
Quilkin is a UDP proxy, specifically designed for use with multiplayer dedicated game servers.
What is Quilkin?
Quilkin is a non-transparent UDP proxy specifically designed for use with large scale multiplayer dedicated game server deployments, providing security, access control, telemetry data, metrics and more.
It is designed to be used behind game clients as well as in front of dedicated game servers.
Quilkin's aim is to pull the above functionality out of bespoke, monolithic dedicated game servers and clients, and provide standard, composable modules that can be reused across a wide set of multiplayer games, so that game developers can instead focus on their game specific aspects of building a multiplayer game.
Why use Quilkin?
Some of Quilkin's advantages:
- Lower development and operational costs for securing, monitoring and making reliable multiplayer game servers and their communications.
- Provide entry-point redundancy for your game clients to connect to - making it much harder to take down your game servers.
- Multiple integration patterns, allowing you to choose the level of integration that makes sense for your architecture.
- Remove non-game specific computation out of your game server's processing loop - and save that precious CPU for your game simulation!
Major Features
Quilkin incorporates these abilities:
- Non-transparent proxying of UDP data, so the internal state of your game architecture is not visible to bad actors.
- Out of the box metrics for UDP packet information.
- Composable tools for access control and security.
- Able to be utilised as a standalone binary with no client/server changes required, or as a Rust library, depending on how deep an integration you wish for your system.
- Integration with Game Server hosting platforms such as Agones.
- Can be integrated with C/C++ code bases via FFI.
What Next?
- Read the usage guide
- Have a look at the example configurations for basic configuration examples.
- Check out the example integration patterns.
Quickstart
This section provides a series of quickstarts to get you up and running with Quilkin quickly!
Quickstart: Quilkin with netcat
Requirements
- A *nix terminal
- A binary release of Quilkin from the Github releases page or by running
cargo install quilkin
- ncat
- netcat
1. Start a UDP echo service
So that we have a target for sending UDP packets to, let's use ncat to create a simple UDP echo process.
To do this run:
ncat -e $(which cat) -k -u -l 8000
This routes all UDP packets that ncat receives to the local cat process, which echoes them back.
2. Start Quilkin
Next let's configure Quilkin in proxy mode, with a static configuration that points at the UDP echo service we just started.
quilkin run --to 127.0.0.1:8000
This configuration will start Quilkin on the default port of 7000, and it will redirect all incoming UDP traffic to a single endpoint of 127.0.0.1, port 8000.
You should see an output like the following:
{"msg":"Starting Quilkin","level":"INFO","ts":"2021-04-25T19:27:22.535174615-07:00","source":"run","version":"0.1.0-dev"}
{"msg":"Starting","level":"INFO","ts":"2021-04-25T19:27:22.535315827-07:00","source":"server::Server","port":7000}
{"msg":"Starting admin endpoint","level":"INFO","ts":"2021-04-25T19:27:22.535550572-07:00","source":"proxy::Admin","address":"[::]:9091"}
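The same flags can also be expressed as a file based configuration. A minimal sketch, assuming the static configuration schema described later in this documentation:

```yaml
# quilkin.yaml - sketch equivalent of `quilkin run --to 127.0.0.1:8000`,
# listening on the default port of 7000.
version: v1alpha1
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:8000
```

This could then be run with quilkin -c ./quilkin.yaml run.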
3. Send a packet!
In (yet) another shell, let's use netcat to send a UDP packet.
Run the following to connect netcat to Quilkin's receiving port of 7000 via UDP (-u):
nc -u 127.0.0.1 7000
Type the word "test" and hit enter, you should see it echoed back to you like so:
nc -u 127.0.0.1 7000
test
test
Feel free to send as many more packets as you would like.
Congratulations! You have successfully routed a UDP packet there and back again with Quilkin!
What's next?
- Run through the Quilkin with Agones quickstart.
- Have a look at some of the examples we have.
- Check out the usage documentation for other configuration options.
Quickstart: Quilkin with Agones and Xonotic (Sidecar)
Requirements
- A terminal with kubectl installed
- A local copy of the Xonotic client
- A running Agones Kubernetes cluster (see the installation instructions)
- If you aren't familiar with Agones, we recommend working through their Getting Started guides.
1. Agones Fleet with Quilkin
In this step, we're going to set up a Xonotic dedicated game server, with Quilkin running as a sidecar, which will give us access to all the metrics that Quilkin provides.
kubectl apply -f https://raw.githubusercontent.com/googleforgames/quilkin/v0.4.0/examples/agones-xonotic-sidecar/sidecar.yaml
This applies two resources to your cluster:
- A Kubernetes ConfigMap with a basic Quilkin static configuration.
- An Agones Fleet specification with Quilkin running as a sidecar to Xonotic, such that it can process all the UDP traffic and pass it to the Xonotic dedicated game server.
Now you can run kubectl get gameservers until all your Agones GameServers are marked as Ready, like so:
$ kubectl get gameservers
NAME STATE ADDRESS PORT NODE AGE
xonotic-sidecar-htc2x-84mzm Ready 34.94.107.201 7533 gke-agones-default-pool-0f7d8adc-7w3c 7m25s
xonotic-sidecar-htc2x-sdp4k Ready 34.94.107.201 7599 gke-agones-default-pool-0f7d8adc-7w3c 7m25s
2. Play Xonotic!
Usually with Agones you would Allocate a GameServer, but we'll skip this step for this example.
Choose one of the listed GameServers from the previous step, and connect to the IP and port of the Xonotic server via the "Multiplayer > Address" field in the Xonotic client, in the format {IP}:{PORT}.
You should now be playing a game of Xonotic against 4 bots!
3. Check out the metrics
Let's take a look at some metrics that Quilkin outputs.
Grab the name of the GameServer you connected to before, replace the ${gameserver} value below, and run the command. This will forward the admin interface to localhost.
kubectl port-forward ${gameserver} 9091
Then open a browser to http://localhost:9091/metrics to see the Prometheus metrics that Quilkin exports.
4. Cleanup
Run the following to delete the Fleet and the accompanying ConfigMap:
kubectl delete -f https://raw.githubusercontent.com/googleforgames/quilkin/v0.4.0/examples/agones-xonotic-sidecar/sidecar.yaml
5. Agones Fleet, but with Compression
Let's take this one step further and compress the data between the Xonotic client and the server, without having to change either of them!
Let's create a new Xonotic Fleet on our Agones cluster, but this time configured such that Quilkin will decompress packets that are incoming.
Run the following:
kubectl apply -f https://raw.githubusercontent.com/googleforgames/quilkin/v0.4.0/examples/agones-xonotic-sidecar/sidecar-compress.yaml
This will add the Compress filter to the Quilkin sidecar proxy in our new Fleet.
Now you can run kubectl get gameservers until all your Agones GameServers are marked as Ready, like so:
$ kubectl get gameservers
NAME STATE ADDRESS PORT NODE AGE
xonotic-sidecar-compress-htc2x-84mzm Ready 34.94.107.201 7534 gke-agones-default-pool-0f7d8adc-7w3c 7m25s
xonotic-sidecar-compress-htc2x-sdp4k Ready 34.94.107.201 7592 gke-agones-default-pool-0f7d8adc-7w3c 7m25s
6. Play Xonotic, through Quilkin
In this step, we will run Quilkin locally as a client-side proxy to compress the UDP data before it is sent up to our Xonotic servers, which are expecting compressed data.
First, grab a copy of the Quilkin configuration client-compress.yaml locally. This has the Compress filter already configured, but we need to fill in the address to connect to.
Rather than editing a file, this could also be sent through the xDS API, but it is easier to demonstrate this functionality through a static configuration.
Instead of connecting to Xonotic directly, take the IP and port from one of the Agones hosted GameServer records, and replace the ${GAMESERVER_IP} and ${GAMESERVER_PORT} values in your copy of client-compress.yaml.
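The placeholder replacement can also be scripted. A minimal sketch, assuming GNU sed and example IP/port values - the one-line stand-in file created here is hypothetical, so point the sed command at your downloaded copy instead:

```shell
# Example GameServer address values - substitute your own from `kubectl get gameservers`.
GAMESERVER_IP=34.94.107.201
GAMESERVER_PORT=7533

# Hypothetical stand-in for the downloaded client-compress.yaml, so this sketch is self-contained.
printf '%s\n' 'address: ${GAMESERVER_IP}:${GAMESERVER_PORT}' > client-compress.yaml

# Fill in both placeholders in place.
sed -i "s/\${GAMESERVER_IP}/${GAMESERVER_IP}/;s/\${GAMESERVER_PORT}/${GAMESERVER_PORT}/" client-compress.yaml
cat client-compress.yaml
```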
Run this configuration locally as:
quilkin -c ./client-compress.yaml run
Now we can connect to the local client proxy on "127.0.0.1:7000" via the "Multiplayer > Address" field in the Xonotic client, and Quilkin will take care of compressing the data for you without having to change the game client!
Congratulations! You are now using Quilkin to manipulate the game client to server connection, without having to edit either!
7. Cleanup
Run the following to delete the Fleet and the accompanying ConfigMap:
kubectl delete -f https://raw.githubusercontent.com/googleforgames/quilkin/v0.4.0/examples/agones-xonotic-sidecar/sidecar-compress.yaml
What's Next?
- Have a look at the examples folder for configuration and usage examples.
- Explore the usage documentation for other configuration options.
Using Quilkin
There are two choices for running Quilkin:
- Binary
- Container image
Binary
The release binary can be downloaded from the Github releases page.
Container Image
For each release, there is a container image built and hosted on Google Cloud Artifact Registry.
The latest production release can be found under the tag:
us-docker.pkg.dev/quilkin/release/quilkin:0.4.0-1d414cb
The registry can be browsed at us-docker.pkg.dev/quilkin/release/quilkin.
The entrypoint of the container runs /quilkin with no arguments, so arguments will need to be supplied. See the documentation below for all command-line options.
Command-Line Interface
Quilkin provides a variety of different commands depending on your use-case.
The primary entrypoint of the process is run, which runs Quilkin as a reverse UDP proxy. To see basic usage of the command-line interface, run through the netcat with Quilkin quickstart.
For more advanced usage, check out the quilkin::Cli documentation or run:
$ quilkin --help
The Command-Line Interface for Quilkin
Usage: quilkin [OPTIONS] <COMMAND>
Commands:
run Run Quilkin as a UDP reverse proxy
generate-config-schema Generates JSON schema files for known filters
manage Runs Quilkin as a xDS management server, using `provider` as a configuration source
help Print this message or the help of the given subcommand(s)
Options:
--no-admin Whether to spawn the admin server or not [env: NO_ADMIN=]
-c, --config <CONFIG> The path to the configuration file for the Quilkin instance [env: QUILKIN_CONFIG=] [default: quilkin.yaml]
--admin-address <ADMIN_ADDRESS> The port to bind for the admin server [env: QUILKIN_ADMIN_ADDRESS=]
-q, --quiet Whether Quilkin will report any results to stdout/stderr [env: QUIET=]
-h, --help Print help information
File Based Configuration
For use cases that utilise functionality such as:
- A static set of Filters
- Multiple static Endpoints
- Static metadata on Endpoints
Quilkin also provides a YAML based config file. See the File Configuration documentation for details.
Logging
By default Quilkin will log INFO level events; you can change this by setting the RUST_LOG environment variable. See the log documentation for more advanced usage.
If you are debugging Quilkin, set the RUST_LOG environment variable to quilkin=trace to filter trace level logging to only Quilkin components.
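As a sketch, setting the variable in a shell before launching the proxy (the quilkin invocation in the comment is illustrative):

```shell
# Scope trace level logging to Quilkin's own components only.
export RUST_LOG=quilkin=trace
# Then start the proxy as usual, e.g.
#   quilkin run --to 127.0.0.1:8000
echo "$RUST_LOG"
```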
File Configuration
The following is the schema and reference for a Quilkin configuration file. See the examples folder for example configuration files.
By default Quilkin will look for a configuration file named quilkin.yaml in its current running directory first, then, if not present, in /etc/quilkin/quilkin.yaml on UNIX systems. This can be overridden with the -c/--config command-line argument, or the QUILKIN_CONFIG environment variable.
Static Configuration
Example of a full configuration for quilkin run that utilises a static Endpoint configuration:
#
# Example configuration for a Quilkin Proxy with static Endpoints
#
version: v1alpha1
admin: # configuration options for administration
  address: "[::]:9091" # the address and port to bind the admin API to.
maxmind_db: null # Remote URL or local file path to retrieve a Maxmind database (requires licence).
id: my-proxy # An identifier for the proxy instance.
port: 7001 # the port to receive traffic to locally
clusters: # grouping of clusters
  default:
    localities: # grouping of endpoints within a cluster
      - endpoints: # array of potential endpoints to send on traffic to
          - address: 127.0.0.1:26000
            metadata: # Metadata associated with the endpoint
              quilkin.dev:
                tokens:
                  - MXg3aWp5Ng== # the connection byte array to route to, encoded as base64 (string value: 1x7ijy6)
                  - OGdqM3YyaQ== # (string value: 8gj3v2i)
          - address: 127.0.0.1:26001
            metadata: # Metadata associated with the endpoint
              quilkin.dev:
                tokens:
                  - bmt1eTcweA== # (string value: nkuy70x)
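The token values above are plain base64 encodings of the raw connection bytes, so they can be produced with standard tooling - for example:

```shell
# Each routing token is the base64 encoding of its raw byte array.
printf '%s' '1x7ijy6' | base64   # MXg3aWp5Ng==
printf '%s' '8gj3v2i' | base64   # OGdqM3YyaQ==
printf '%s' 'nkuy70x' | base64   # bmt1eTcweA==
```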
Dynamic Configuration
Example of a full configuration for quilkin run that utilises a dynamic Endpoint configuration through an xDS management endpoint:
#
# Example configuration for a Quilkin Proxy that is configured via an xDS control plane.
#
version: v1alpha1
admin: # configuration options for administration
  address: "[::]:9091" # the address and port to bind the admin API to.
maxmind_db: null # Remote URL or local file path to retrieve a Maxmind database (requires licence).
id: my-proxy # An identifier for the proxy instance.
port: 7001 # the port to receive traffic to locally
management_servers: # array of management servers to configure the proxy with.
                    # Multiple servers can be provided for redundancy.
  - address: http://127.0.0.1:26000
JSON Schema
The full JSON Schema for the YAML configuration file.
type: object
properties:
  version:
    type: string
    description: |
      The configuration file version to use.
    enum:
      - v1alpha1
  id:
    type: string
    description: |
      An identifier for the proxy instance.
    default: On Linux, the machine hostname is used as default. On all other platforms a UUID is generated for the proxy.
  port:
    type: integer
    description: |
      The listening port. In "proxy" mode, the port for traffic to be sent to. In "manage" mode, the port to connect to the xDS API.
    default: 7000
  maxmind_db:
    type: string
    description: |
      The remote URL or local file path to retrieve the Maxmind database (requires licence).
  admin:
    type: object
    description: |
      Configuration of proxy admin HTTP interface.
    properties:
      address:
        type: string
        description: |
          Socket Address and port to bind the administration interface to.
        default: "[::]:9091"
  filters:
    type: array
    description: |
      A filter chain.
    items:
      '$ref': {} # Refer to the Filter documentation for a filter configuration schema.
  clusters:
    type: object
    description: |
      grouping of clusters, each with a key for a name
    additionalProperties:
      type: object
      description: |
        An individual cluster
      properties:
        localities:
          type: array
          description: |
            grouping of endpoints, per cluster.
          items:
            type: object
            properties:
              endpoints:
                type: array
                description: |
                  A list of upstream endpoints to forward packets to.
                items:
                  type: object
                  description: |
                    An upstream endpoint
                  properties:
                    address:
                      type: string
                      description: |
                        Socket address of the endpoint. This must be of the `IP:Port` form e.g. `192.168.1.1:7001`
                    metadata:
                      type: object
                      description: |
                        Arbitrary key value pairs that are associated with the endpoint.
                        These are visible to Filters when processing packets and can be used to provide more context about endpoints (e.g whether or not to route a packet to an endpoint).
                        Keys must be of type string otherwise the configuration is rejected.
                  required:
                    - address
  management_servers:
    type: array
    description: |
      A list of xDS management servers to fetch configuration from.
      Multiple servers can be provided for redundancy for the proxy to
      fall back to upon error.
    items:
      type: object
      description: |
        Configuration for a management server.
      properties:
        address:
          type: string
          description: |
            Address of the management server. This must have the `http(s)` scheme prefix.
            Example: `http://example.com`
Quilkin Integration Examples
The Quilkin proxy can be integrated with your dedicated game servers in several ways, each providing different capabilities and complexity tradeoffs.
Below captures several of the most useful and prevalent architectural patterns to give you inspiration on how you can use Quilkin in your multiplayer game networking architecture.
Server Proxy as a Sidecar
                          |
                          |
                      Internet
                          |
                          |
+----------+              |    +-----------------+     +-----------------+
|   Game   |              |    |     Quilkin     |     |    Dedicated    |
|  Client  +-------------------> (Server Proxy)  +-----> Game Server     |
+----------+              |    +-----------------+     +-----------------+
                          |
                          |    +-----------------+     +-----------------+
                          |    |     Quilkin     |     |    Dedicated    |
                          |    | (Server Proxy)  +-----> Game Server     |
                          |    +-----------------+     +-----------------+
                          |
                          |
This is the simplest integration and configuration option with Quilkin, but it provides the smallest set of possible features and the least ability to provide redundancy.
That being said, this is a low risk way to integrate Quilkin, and take advantage of the out-of-the-box telemetry and metric information that comes with Quilkin.
- In this example, the Server proxy is running alongside the dedicated game server - on the same public IP/machine/container.
- This is often referred to as a sidecar pattern.
- Communication between the Server Proxy and the Dedicated Game Server occurs over the localhost network, with a separate port for each Game Client connection.
- Clients connect to the Server Proxy's public port/IP combination, and the Server Proxy routes all traffic directly to the dedicated game server.
- The Server Proxy can still use filters such as rate limiting, compression, firewall rules, etc as long as the Game Client conforms to the standard protocols utilised by those filters as appropriate.
Client Proxy to Sidecar Server Proxy
                                            |
                                            |
                                        Internet
                                            |
                                            |
+----------+    +-----------------+         |    +-----------------+     +-----------------+
|   Game   |    |     Quilkin     |         |    |     Quilkin     |     |    Dedicated    |
|  Client  +----> (Client Proxy)  +--------------> (Server Proxy)  +-----> Game Server     |
+----------+    +-----------------+         |    +-----------------+     +-----------------+
                                            |
                                            |    +-----------------+     +-----------------+
                                            |    |     Quilkin     |     |    Dedicated    |
                                            |    | (Server Proxy)  +-----> Game Server     |
                                            |    +-----------------+     +-----------------+
                                            |
                                            |
This example is the same as the above, but puts a Client Proxy between the Game Client, and the Server Proxy to take advantage of Client Proxy functionality.
- The Client Proxy may be integrated as a standalone binary, directly into the client with communication occurring over a localhost port, or it may be possible to utilise one of our client SDKs.
- The Client Proxy can now utilise filters, such as compression, without having to change the Game Client.
- The Game Client will need to communicate to the Client Proxy what IP it should connect to when the Client is match-made with a Game Server.
Client Proxy to Separate Server Proxies Pools
                         |                                  |
                         |                                  |
                     Internet                            Private
                         |                               Network
                         |     +-----------------+          |     +-----------------+
                         |     |     Quilkin     |          |     |    Dedicated    |
                         |  +--> (Server Proxy)  +----------------> Game Server     |
+----------+             |  |  +-----------------+          |     +-----------------+
|   Game   |             |  |                               |
|  Client  +----------------+  +-----------------+          |     +-----------------+
+----------+             |  |  |     Quilkin     |          |     |    Dedicated    |
                         |  +--> (Server Proxy)  +----------------> Game Server     |
                         |     +-----------------+          |     +-----------------+
                         |                                  |
                         |     +-----------------+          |     +-----------------+
                         |     |     Quilkin     |          |     |    Dedicated    |
                         |     | (Server Proxy)  |          |     |  Game Server    |
                         |     +-----------------+          |     +-----------------+
                         |              ^                   |
                         |              |                   |
                         |     +--------+--------+          |
                         |     |       xDS       |          |
                         |     |  Control Plane  |          |
                         |     +-----------------+          |
This is the most complex configuration, but enables the most reuse of Quilkin's functionality, while also providing the most redundancy and security for your dedicated game servers.
- The Game client sends and receives packets from the Quilkin client proxy.
- The Client Proxy may be integrated as a standalone binary, with communication occurring over a localhost port, or it could be integrated directly with the game client as a library, or the client could utilise one of our client SDKs if Rust integration is not possible.
- The Client Proxy can utilise the full set of filters, such as concatenation (for routing), compression or load balancing, without having to change the Game Client.
- A hosted set of Quilkin Server proxies that have public IP addresses, are connected to an xDS Control Plane to coordinate routing and access control to the dedicated game servers, which are on private IP addresses.
- The Client Proxy is made aware of one or more Server proxies to connect to, possibly via their Game Client matchmaker or another service, with an authentication token to pass to the Server proxies, such that the UDP packets can be routed correctly to the dedicated game server they should connect to.
- Dedicated game servers receive traffic as per normal from the Server Proxies, and send data back to the proxies directly.
- If the dedicated game server always expects traffic from only a single IP/port combination for a client connection, then traffic will always need to be sent through a single Server Proxy. Otherwise, UDP packets can be load balanced via the Client Proxy to multiple Server Proxies for even greater redundancy.
What Next?
- Have a look at the example configurations for configuration and usage examples.
- Review the set of filters that are available.
Diagrams powered by asciiflow.com
Proxy Mode
The "proxy mode" is the primary mode of operation for Quilkin, wherein it acts as a non-transparent UDP proxy.
This is driven by executing Quilkin via the run subcommand.
To view all the options for the run subcommand, run:
$ quilkin run --help
Run Quilkin as a UDP reverse proxy
Usage: quilkin run [OPTIONS]
Options:
-m, --management-server <MANAGEMENT_SERVER>
One or more `quilkin manage` endpoints to listen to for config changes [env: QUILKIN_MANAGEMENT_SERVER=]
--mmdb <MMDB>
The remote URL or local file path to retrieve the Maxmind database [env: MMDB=]
-p, --port <PORT>
The port to listen on [env: QUILKIN_PORT=]
-t, --to <TO>
One or more socket addresses to forward packets to [env: QUILKIN_DEST=]
-h, --help
Print help information
Proxy Concepts
The Concepts section helps you learn about the parts of Quilkin when running as a proxy, and how they work together.
Local Port
This is the port configuration, on which initial connections to Quilkin are made. Defaults to 7000.
Endpoints
An Endpoint represents an address to which Quilkin forwards packets that it has received from the Local Port, and from which it receives data as well.
It is represented by an IP address and port. An Endpoint can optionally be associated with an arbitrary set of metadata as well.
Proxy Filters
Filters are the way for a Quilkin proxy to intercept UDP traffic travelling between a Local Port and Endpoints in either direction, and to inspect, manipulate, and route the packets as desired.
See Filters for a deeper dive into Filters, as well as the list of built-in Filters that come with Quilkin.
Endpoint Metadata
Endpoint metadata is an arbitrary set of key value pairs that are associated with an Endpoint.
These are visible to Filters when processing packets and can be used to provide more context about endpoints (e.g. whether or not to route a packet to an endpoint). Keys must be of type string, otherwise the configuration is rejected.
Specialist Endpoint Metadata
Access tokens that can be associated with an endpoint are simply a special piece of metadata well known to Quilkin, utilised by the built-in TokenRouter filter to route packets.
Such well known values are placed within an object in the endpoint metadata, under the special key quilkin.dev. Currently, only the tokens key is in use.
As an example, the following shows the configuration for an endpoint with its metadata:
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:26000
            metadata:
              canary: false
              quilkin.dev: # This object is extracted by Quilkin and is usually reserved for built-in features
                tokens:
                  - MXg3aWp5Ng== # base64 for 1x7ijy6
                  - OGdqM3YyaQ== # base64 for 8gj3v2i
An endpoint's metadata can be specified alongside the endpoint in static configuration or using the xDS endpoint metadata field when using dynamic configuration via xDS.
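Since the token values are plain base64, they can be checked from a shell - for example:

```shell
# Decode the configured token values back to their raw string form.
printf '%s' 'MXg3aWp5Ng==' | base64 -d; echo   # 1x7ijy6
printf '%s' 'OGdqM3YyaQ==' | base64 -d; echo   # 8gj3v2i
```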
Session
A session represents ongoing communication flow between a client on a Local Port and an Endpoint.
Quilkin uses the "Session" concept to track traffic flowing through the proxy between any client-server pair. A Session can be thought of as a lightweight version of a TCP session in that, while a TCP session requires a protocol to establish and tear down the connection:
- A Quilkin session is automatically created upon receiving the first packet from a client via the Local Port, to be sent to an upstream Endpoint.
- The session is automatically deleted after a period of inactivity (where no packet was sent between either party) - currently 60 seconds.
A session is identified by the 4-tuple (client IP, client Port, server IP, server Port), where the client is the downstream endpoint which initiated the communication with Quilkin, and the server is one of the upstream Endpoints that Quilkin proxies traffic to.
Sessions are established after the filter chain completes. The destination Endpoint of a packet is determined by the filter chain, so a Session can only be created after filter chain completion. For example, if the filter chain drops all packets, then no session will ever be created.
Proxy Filters
In most cases, we would like Quilkin to do some preprocessing of received packets before sending them off to their destination. Because this stage is entirely specific to the use case at hand and differs between Quilkin deployments, we must have a say over what tweaks to perform - this is where filters come in.
Filters and Filter chain
A filter represents a step in the tweaking/decision-making process of how we would like to process our packets. For example, at some step, we might choose to append some metadata to every packet we receive before forwarding it while at a later step, choose not to forward packets that don't meet some criteria.
Quilkin lets us specify any number of filters and connect them in a sequence to form a packet processing pipeline, similar to a Unix pipeline - we call this pipeline a Filter chain. The combination of filters and filter chain allows us to add new functionality to fit every scenario without changing Quilkin's core.
As an example, say we would like to perform the following steps in our processing pipeline to the packets we receive.
- Append a predetermined byte to the packet.
- Compress the packet.
- Do not forward (drop) the packet if its compressed length is over 512 bytes.
We would create a filter corresponding to each step either by leveraging any existing filters that do what we want or writing one ourselves and connect them to form the following filter chain:
append | compress | drop
When Quilkin consults our filter chain, it feeds the received packet into append and forwards the packet it receives (if any) from drop - i.e. the output of append becomes the input into compress, and so on in that order.
There are a few things to note here:
- Although in this example we have a filter called drop, every filter in the filter chain has the same ability to drop or update a packet - if any filter drops a packet then no more work needs to be done regarding that packet, so the next filter in the pipeline never has any knowledge that the dropped packet ever existed.
- The filter chain is consulted for every received packet, and its filters are traversed in reverse order for packets travelling in the opposite direction. A packet received downstream will be fed into append and the result from drop is forwarded upstream - a packet received upstream will be fed into drop and the result from append is forwarded downstream.
- Exactly one filter chain is specified and used to process all packets that flow through Quilkin.
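A chain like the append | compress | drop example above is declared as an ordered list under the filters section of the configuration. A sketch using two of the built-in filters for the first two steps - the filter-specific config values are omitted here, and a size-based drop step would need a custom filter, so check each filter's own documentation before use:

```yaml
version: v1alpha1
filters:
  # "append" role: ConcatenateBytes adds bytes to each packet.
  - name: quilkin.filters.concatenate_bytes.v1alpha1.ConcatenateBytes
    config: {} # see the ConcatenateBytes documentation for its schema
  # "compress" role: Compress compresses/decompresses packet data.
  - name: quilkin.filters.compress.v1alpha1.Compress
    config: {} # see the Compress documentation for its schema
```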
Configuration Examples
// Wrap this example within an async main function since the
// local_rate_limit filter spawns a task on initialization
#[tokio::main]
async fn main() {
    let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.debug.v1alpha1.Debug
    config:
      id: debug-1
  - name: quilkin.filters.local_rate_limit.v1alpha1.LocalRateLimit
    config:
      max_packets: 10
      period: 1
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:7001
";
    let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
    assert_eq!(config.filters.load().len(), 2);
    quilkin::Proxy::try_from(config).unwrap();
}
We specify our filter chain in the .filters section of the proxy's configuration, which takes a sequence of FilterConfig objects. Each object describes all information necessary to create a single filter.
The above example creates a filter chain comprising a Debug filter followed by a LocalRateLimit filter - the effect is that every packet will be logged and the proxy will not forward more than 10 packets per second.
The sequence determines the filter chain order, so ordering matters - the chain starts with the filter corresponding to the first filter config and ends with the filter corresponding to the last filter config in the sequence.
Filter Dynamic Metadata
A filter within the filter chain can share data with another filter further along in the filter chain by propagating the desired data alongside the packet being processed. This enables sharing dynamic information at runtime, e.g. information about the current packet that might be useful to other filters that process that packet.
At packet processing time, each packet is associated with filter dynamic metadata (a set of key-value pairs). Each key is a unique string, while its value is an associated quilkin::metadata::Value.
When a filter processes a packet, it can choose to consult the associated dynamic metadata for more information or itself add/update or remove key-values from the set.
As an example, the built-in CaptureBytes filter is one such filter that populates a packet's filter metadata. CaptureBytes extracts information (a configurable byte sequence) from each packet and appends it to the packet's dynamic metadata for other filters to leverage. On the other hand, the built-in TokenRouter filter selects what endpoint to route a packet to by consulting the packet's dynamic metadata for a routing token. Consequently, we can build a filter chain with a CaptureBytes filter preceding a TokenRouter filter, both configured to write and read the same key in the dynamic metadata entry. The effect would be that packets are routed to upstream endpoints based on token information extracted from their contents.
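A sketch of such a chain, using the Capture configuration shown later in these docs; the TokenRouter name follows the same naming convention and should be checked against its own documentation:

```yaml
version: v1alpha1
filters:
  # Capture the last 3 bytes of each packet into dynamic metadata
  # (under the default quilkin.dev/captured key) and strip them.
  - name: quilkin.filters.capture.v1alpha1.Capture
    config:
      suffix:
        size: 3
        remove: true
  # Route to the endpoint whose token matches the captured bytes.
  - name: quilkin.filters.token_router.v1alpha1.TokenRouter
```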
Well Known Dynamic Metadata
The following metadata are currently used by Quilkin core and built-in filters.
Name | Type | Description |
---|---|---|
quilkin.dev/captured | Bytes | The default key under which the Capture filter puts the byte slices it extracts from each packet. |
Built-in filters
Quilkin includes several filters out of the box.
Filter | Description |
---|---|
Capture | Capture specific bytes from a packet and store them in filter dynamic metadata. |
Compress | Compress and decompress packets data. |
ConcatenateBytes | Add authentication tokens to packets. |
Debug | Logs every packet. |
Drop | Drop all packets. |
Firewall | Allow or block traffic by IP and port. |
LoadBalancer | Distributes downstream packets among upstream endpoints. |
LocalRateLimit | Limit the frequency of packets. |
Match | Change Filter behaviour based on dynamic metadata. |
Pass | Allow all packets through. |
Timestamp | Accepts a UNIX timestamp from metadata and observes the duration between that timestamp and now. |
TokenRouter | Send packets to endpoints based on metadata. |
FilterConfig
Represents configuration for a filter instance.
```yaml
properties:
  name:
    type: string
    description: |
      Identifies the type of filter to be created.
      This value is unique for every filter type - please consult the documentation for the particular filter for this value.
  config:
    type: object
    description: |
      The configuration value to be passed onto the created filter.
      This is passed as an object value since it is specific to the filter's type and is validated by the filter
      implementation. Please consult the documentation for the particular filter for its schema.
required: [ 'name' ]
```
CaptureBytes
The CaptureBytes filter's job is to find a series of bytes within a packet, and capture it into Filter Dynamic Metadata, so that it can be utilised by filters further down the chain.

This is often used as a way of retrieving authentication tokens from a packet, and is used in combination with the ConcatenateBytes and TokenRouter filters to provide common packet routing utilities.
Capture strategies
There are multiple strategies for capturing bytes from the packet.
Suffix
Captures bytes from the end of the packet.
Prefix
Captures bytes from the start of the packet.
Regex
Captures bytes using a regular expression. Unlike other capture strategies, the regular expression can return one or many values if there are multiple matches.
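The prefix and suffix strategies amount to slice operations on the packet buffer, with `remove` controlling whether the captured bytes are stripped from the packet. A minimal sketch (illustrative helper names, and the regex strategy is omitted since it would need the `regex` crate):

```rust
// Capture `size` bytes from the start of a packet (PREFIX strategy).
// Returns the captured bytes; truncates the packet when `remove` is true.
// A packet shorter than `size` yields None (the real filter drops it).
fn capture_prefix(packet: &mut Vec<u8>, size: usize, remove: bool) -> Option<Vec<u8>> {
    if packet.len() < size {
        return None;
    }
    let captured = packet[..size].to_vec();
    if remove {
        packet.drain(..size);
    }
    Some(captured)
}

// Capture `size` bytes from the end of a packet (SUFFIX strategy).
fn capture_suffix(packet: &mut Vec<u8>, size: usize, remove: bool) -> Option<Vec<u8>> {
    if packet.len() < size {
        return None;
    }
    let start = packet.len() - size;
    let captured = packet[start..].to_vec();
    if remove {
        packet.truncate(start);
    }
    Some(captured)
}

fn main() {
    let mut packet = b"payload-abc".to_vec();
    let token = capture_suffix(&mut packet, 3, true).unwrap();
    assert_eq!(token, b"abc");
    // With remove: true, the token is stripped from the packet.
    assert_eq!(packet, b"payload-");
}
```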
Filter name
quilkin.filters.capture.v1alpha1.Capture
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.capture.v1alpha1.Capture
    config:
      metadataKey: myapp.com/myownkey
      prefix:
        size: 3
        remove: false
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 1);
quilkin::Proxy::try_from(config).unwrap();
}
```
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
type: object
required:
  - metadata_key
  - strategy
properties:
  metadata_key:
    description: The key to use when storing the captured value in the filter context. If a match was found it is available under `{{metadata_key}}/is_present`.
    allOf:
      - $ref: '#/definitions/Key'
  strategy:
    description: The capture strategy.
    allOf:
      - $ref: '#/definitions/Strategy'
definitions:
  Key:
    description: A key in the metadata table.
    type: string
  Strategy:
    description: Strategy to apply for acquiring a set of bytes in the UDP packet
    oneOf:
      - description: Looks for the set of bytes at the beginning of the packet
        type: object
        required:
          - kind
          - size
        properties:
          kind:
            type: string
            enum:
              - PREFIX
          remove:
            description: Whether captured bytes are removed from the original packet.
            default: false
            type: boolean
          size:
            description: The number of bytes to capture.
            type: integer
            format: uint32
            minimum: 0.0
      - description: Looks for the set of bytes at the end of the packet
        type: object
        required:
          - kind
          - size
        properties:
          kind:
            type: string
            enum:
              - SUFFIX
          remove:
            description: Whether captured bytes are removed from the original packet.
            default: false
            type: boolean
          size:
            description: The number of bytes to capture.
            type: integer
            format: uint32
            minimum: 0.0
      - description: Captures bytes from the packet with a regular expression
        type: object
        required:
          - kind
          - pattern
        properties:
          kind:
            type: string
            enum:
              - REGEX
          pattern:
            description: The regular expression to use for capture.
            type: string
```
Metrics
`quilkin_filter_Capture_packets_dropped_total`

A counter of the total number of packets that have been dropped due to their length being less than the configured `size`.
Compress
The Compress filter's job is to provide a variety of compression implementations for compression and subsequent decompression of UDP data when sent between systems, such as a game client and game server.
Filter name
quilkin.filters.compress.v1alpha1.Compress
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.compress.v1alpha1.Compress
    config:
      on_read: COMPRESS
      on_write: DECOMPRESS
      mode: SNAPPY
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 1);
quilkin::Proxy::try_from(config).unwrap();
}
```
The above example shows a proxy that could be used with a typical game client, where the original client data is sent to the local listening port and then compressed when heading up to a dedicated game server, and then decompressed when traffic is returned from the dedicated game server before being handed back to game client.
Since the Compress filter modifies the entire packet, it is worth paying special attention to where it is placed in your Filter configuration. Most of the time it will likely be the first or last Filter configured, to ensure it is compressing or decompressing the entire set of data being sent.
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
type: object
required:
  - on_read
  - on_write
properties:
  mode:
    default: SNAPPY
    allOf:
      - $ref: '#/definitions/Mode'
  on_read:
    $ref: '#/definitions/Action'
  on_write:
    $ref: '#/definitions/Action'
definitions:
  Action:
    description: Whether to do nothing, compress or decompress the packet.
    type: string
    enum:
      - DO_NOTHING
      - COMPRESS
      - DECOMPRESS
  Mode:
    description: The library to use when compressing.
    type: string
    enum:
      - SNAPPY
```
Compression Modes
Snappy
Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression.
Currently, this filter only provides the Snappy compression format via the rust-snappy crate, but more will be provided in the future.
Metrics
- `quilkin_filter_Compress_packets_dropped_total`

  Total number of packets dropped as they could not be processed.
  - Labels:
    - `action`: The action that could not be completed successfully, thereby causing the packet to be dropped.
      - `Compress`: Compressing the packet with the configured `mode` was attempted.
      - `Decompress`: Decompressing the packet with the configured `mode` was attempted.
- `quilkin_filter_Compress_decompressed_bytes_total`

  Total number of decompressed bytes either received or sent.
- `quilkin_filter_Compress_compressed_bytes_total`

  Total number of compressed bytes either received or sent.
ConcatenateBytes
The ConcatenateBytes filter's job is to add a set of bytes to either the beginning or end of each UDP packet that passes through. This is commonly used to provide an auth token on each packet, so they can be routed appropriately.
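The three strategies can be sketched over a raw packet buffer. The `concatenate` function and the token bytes are illustrative, not the filter's actual API:

```rust
// The three ConcatenateBytes strategies, sketched over a packet buffer.
enum Strategy {
    Append,
    Prepend,
    DoNothing,
}

fn concatenate(packet: &mut Vec<u8>, bytes: &[u8], strategy: Strategy) {
    match strategy {
        // Add the configured bytes at the end of the packet.
        Strategy::Append => packet.extend_from_slice(bytes),
        // Insert the configured bytes at the front of the packet.
        Strategy::Prepend => {
            packet.splice(0..0, bytes.iter().copied());
        }
        Strategy::DoNothing => {}
    }
}

fn main() {
    // e.g. on_read: APPEND attaches an auth token to each packet.
    let mut packet = b"state-update".to_vec();
    concatenate(&mut packet, b"1x7ijy6", Strategy::Append);
    assert_eq!(packet, b"state-update1x7ijy6");
}
```

A downstream TokenRouter (after a matching Capture) can then strip the token back off and use it for routing.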
Filter name
quilkin.filters.concatenate_bytes.v1alpha1.ConcatenateBytes
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.concatenate_bytes.v1alpha1.ConcatenateBytes
    config:
      on_read: APPEND
      on_write: DO_NOTHING
      bytes: MXg3aWp5Ng==
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 1);
quilkin::Proxy::try_from(config).unwrap();
}
```
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
description: Config represents a `ConcatenateBytes` filter configuration.
type: object
required:
  - bytes
properties:
  bytes:
    type: array
    items:
      type: integer
      format: uint8
      minimum: 0.0
  on_read:
    description: Whether or not to `append` or `prepend` or `do nothing` on Filter `Read`
    default: DO_NOTHING
    allOf:
      - $ref: '#/definitions/Strategy'
  on_write:
    description: Whether or not to `append` or `prepend` or `do nothing` on Filter `Write`
    default: DO_NOTHING
    allOf:
      - $ref: '#/definitions/Strategy'
definitions:
  Strategy:
    type: string
    enum:
      - APPEND
      - PREPEND
      - DO_NOTHING
```
Metrics
This filter currently exports no metrics.
Debug
The Debug filter logs all incoming and outgoing packets to standard output.
This filter is useful in debugging deployments where the packets strictly contain valid UTF-8
encoded strings. A generic error message is instead logged if conversion from bytes to UTF-8
fails.
Filter name
quilkin.filters.debug.v1alpha1.Debug
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.debug.v1alpha1.Debug
    config:
      id: debug-1
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 1);
quilkin::Proxy::try_from(config).unwrap();
}
```
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
description: A Debug filter's configuration.
type: object
properties:
  id:
    description: Identifier that will be optionally included with each log message.
    type:
      - string
      - 'null'
```
Metrics
This filter currently exports no metrics.
Drop
The Drop filter always drops any packet passed through it. This is useful in combination with conditional flow filters like Match.
Filter name
quilkin.filters.drop.v1alpha1.Drop
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:26000
          - address: 127.0.0.1:26001
filters:
  - name: quilkin.filters.capture.v1alpha1.Capture
    config:
      metadataKey: myapp.com/token
      prefix:
        size: 3
        remove: false
  - name: quilkin.filters.match.v1alpha1.Match
    config:
      on_read:
        metadataKey: myapp.com/token
        branches:
          - value: abc
            name: quilkin.filters.pass.v1alpha1.Pass
        fallthrough:
          name: quilkin.filters.drop.v1alpha1.Drop
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 2);
quilkin::Proxy::try_from(config).unwrap();
}
```
Configuration
No defined configuration options.
Metrics
This filter currently exports no metrics.
Firewall
The Firewall filter's job is to allow or block traffic depending on whether the incoming traffic's IP and port matches the rules set on the Firewall filter.
Filter name
quilkin.filters.firewall.v1alpha1.Firewall
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.firewall.v1alpha1.Firewall
    config:
      on_read:
        - action: ALLOW
          source: 192.168.51.0/24
          ports:
            - 10
            - 1000-7000
      on_write:
        - action: DENY
          source: 192.168.51.0/24
          ports:
            - 7000
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 1);
quilkin::Proxy::try_from(config).unwrap();
}
```
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
description: Represents how a Firewall filter is configured for read and write operations.
type: object
required:
  - on_read
  - on_write
properties:
  on_read:
    type: array
    items:
      $ref: '#/definitions/Rule'
  on_write:
    type: array
    items:
      $ref: '#/definitions/Rule'
definitions:
  Action:
    description: Whether or not a matching [Rule] should Allow or Deny access
    oneOf:
      - description: Matching rules will allow packets through.
        type: string
        enum:
          - ALLOW
      - description: Matching rules will block packets.
        type: string
        enum:
          - DENY
  PortRange:
    description: Range of matching ports that are configured against a [Rule].
    allOf:
      - $ref: '#/definitions/Range_of_uint16'
  Range_of_uint16:
    type: object
    required:
      - end
      - start
    properties:
      end:
        type: integer
        format: uint16
        minimum: 0.0
      start:
        type: integer
        format: uint16
        minimum: 0.0
  Rule:
    description: Combination of CIDR range, port range and action to take.
    type: object
    required:
      - action
      - ports
      - source
    properties:
      action:
        $ref: '#/definitions/Action'
      ports:
        type: array
        items:
          $ref: '#/definitions/PortRange'
      source:
        description: ipv4 or ipv6 CIDR address.
        type: string
```
Rule Evaluation
The Firewall filter supports DENY and ALLOW actions for access control. When multiple DENY and ALLOW actions are used for a workload at the same time, the evaluation is processed in the order it is configured, with the first matching rule deciding if the request is allowed or denied:
- If a rule action is ALLOW, and it matches the request, then the entire request is allowed.
- If a rule action is DENY and it matches the request, then the entire request is denied.
- If none of the configured rules match, then the request is denied.
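The first-match evaluation above can be sketched with a simplified, IPv4-only rule type. The names here are illustrative, not Quilkin's internal types:

```rust
use std::net::Ipv4Addr;

#[derive(Clone, Copy, PartialEq, Debug)]
enum Action {
    Allow,
    Deny,
}

// A simplified rule: an IPv4 network (address + prefix length)
// and an inclusive port range.
struct Rule {
    action: Action,
    network: (Ipv4Addr, u8),
    ports: std::ops::RangeInclusive<u16>,
}

impl Rule {
    fn matches(&self, ip: Ipv4Addr, port: u16) -> bool {
        let (net, prefix) = self.network;
        // Build the netmask from the prefix length (/24 -> 0xFFFFFF00).
        let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
        u32::from(ip) & mask == u32::from(net) & mask && self.ports.contains(&port)
    }
}

// First matching rule decides; no match means the packet is denied.
fn evaluate(rules: &[Rule], ip: Ipv4Addr, port: u16) -> Action {
    rules
        .iter()
        .find(|rule| rule.matches(ip, port))
        .map(|rule| rule.action)
        .unwrap_or(Action::Deny)
}

fn main() {
    let rules = vec![Rule {
        action: Action::Allow,
        network: (Ipv4Addr::new(192, 168, 51, 0), 24),
        ports: 10..=7000,
    }];
    assert_eq!(evaluate(&rules, Ipv4Addr::new(192, 168, 51, 7), 100), Action::Allow);
    // Outside the CIDR range: falls through to the default deny.
    assert_eq!(evaluate(&rules, Ipv4Addr::new(10, 0, 0, 1), 100), Action::Deny);
}
```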
Metrics
quilkin_filter_Firewall_packets_denied_total
Total number of packets denied.quilkin_filter_Firewall_packets_allowed_total
Total number of packets allowed.
Both metrics have the label event
, with a value of read
or write
which corresponds to either on_read
or
on_write
events within the Filter.
LoadBalancer
The LoadBalancer filter distributes packets received downstream among all upstream endpoints.
Filter name
quilkin.filters.load_balancer.v1alpha1.LoadBalancer
Configuration Examples
```rust
#[tokio::main]
async fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.load_balancer.v1alpha1.LoadBalancer
    config:
      policy: ROUND_ROBIN
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 1);
quilkin::Proxy::try_from(config).unwrap();
}
```
The load balancing policy (the strategy to use to select what endpoint to send traffic to) is configurable. In the example above, packets will be distributed by selecting endpoints in turn, in round robin fashion.
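Round robin selection can be sketched with an atomic counter; this is a simplified stand-in for the policy's semantics, not Quilkin's actual implementation:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// ROUND_ROBIN sketched: each call picks the next endpoint in turn.
struct RoundRobin {
    counter: AtomicUsize,
}

impl RoundRobin {
    fn new() -> Self {
        Self { counter: AtomicUsize::new(0) }
    }

    fn pick<'a>(&self, endpoints: &'a [&'a str]) -> &'a str {
        // Atomic fetch_add keeps the rotation correct under concurrency.
        let index = self.counter.fetch_add(1, Ordering::Relaxed);
        endpoints[index % endpoints.len()]
    }
}

fn main() {
    let lb = RoundRobin::new();
    let endpoints = ["127.0.0.1:7001", "127.0.0.1:7002"];
    assert_eq!(lb.pick(&endpoints), "127.0.0.1:7001");
    assert_eq!(lb.pick(&endpoints), "127.0.0.1:7002");
    assert_eq!(lb.pick(&endpoints), "127.0.0.1:7001"); // wraps around
}
```

The `RANDOM` policy instead picks an arbitrary endpoint per packet, and `HASH` derives the index from a hash of the source IP and port so a given source consistently lands on the same endpoint.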
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
description: The configuration for [`load_balancer`][super].
type: object
properties:
  policy:
    default: ROUND_ROBIN
    allOf:
      - $ref: '#/definitions/Policy'
definitions:
  Policy:
    description: Policy represents how a [`load_balancer`][super] distributes packets across endpoints.
    oneOf:
      - description: Send packets to endpoints in turns.
        type: string
        enum:
          - ROUND_ROBIN
      - description: Send packets to endpoints chosen at random.
        type: string
        enum:
          - RANDOM
      - description: Send packets to endpoints based on hash of source IP and port.
        type: string
        enum:
          - HASH
```
Metrics
This filter currently does not expose any metrics.
LocalRateLimit
The LocalRateLimit filter controls the frequency at which packets received downstream are forwarded upstream by the proxy.
Rate limiting is done independently per source (IP, Port) combination.
Filter name
quilkin.filters.local_rate_limit.v1alpha1.LocalRateLimit
Configuration Examples
```rust
// Wrap this example within an async main function since the
// local_rate_limit filter spawns a task on initialization
#[tokio::main]
async fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.local_rate_limit.v1alpha1.LocalRateLimit
    config:
      max_packets: 1000
      period: 1
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:7001
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 1);
quilkin::Proxy::try_from(config).unwrap();
}
```
To configure a rate limiter, we specify the maximum rate at which the proxy is allowed to forward packets. In the example above, we configured the proxy to forward a maximum of 1000 packets per second.

Be aware that due to some optimizations in the current rate limiter implementation, the enforced maximum number of packets is not always exact. It is in theory possible for the rate limiter to let a few extra packets through; in practice this is a rare occurrence, and the number of such excess packets is at worst `N-1`, where `N` is the number of threads used to process packets. For example, a configuration allowing 1000 packets per second could potentially allow 1003 packets during some time window if we have 4 threads.

Packets that exceed the maximum configured rate are dropped.
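The configured behaviour amounts to a per-source packet budget per period. A simplified fixed-window sketch of those semantics (not the optimized implementation described above):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::time::{Duration, Instant};

// A fixed-window limiter per source address: at most `max_packets`
// are forwarded within each `period`; excess packets are dropped.
struct RateLimiter {
    max_packets: u64,
    period: Duration,
    windows: HashMap<SocketAddr, (Instant, u64)>,
}

impl RateLimiter {
    // Returns true if the packet may be forwarded.
    fn allow(&mut self, source: SocketAddr, now: Instant) -> bool {
        let (start, count) = self.windows.entry(source).or_insert((now, 0));
        if now.duration_since(*start) >= self.period {
            // The window has elapsed: start a fresh one.
            *start = now;
            *count = 0;
        }
        if *count < self.max_packets {
            *count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limiter = RateLimiter {
        max_packets: 2,
        period: Duration::from_secs(1),
        windows: HashMap::new(),
    };
    let source: SocketAddr = "127.0.0.1:9000".parse().unwrap();
    let now = Instant::now();
    assert!(limiter.allow(source, now));
    assert!(limiter.allow(source, now));
    assert!(!limiter.allow(source, now)); // third packet in the window is dropped
    assert!(limiter.allow(source, now + Duration::from_secs(1))); // new window
}
```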
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
description: Config represents a [self]'s configuration.
type: object
required:
  - max_packets
  - period
properties:
  max_packets:
    description: The maximum number of packets allowed to be forwarded by the rate limiter in a given duration.
    type: integer
    format: uint
    minimum: 0.0
  period:
    description: The duration in seconds during which max_packets applies. If none is provided, it defaults to one second.
    type: integer
    format: uint32
    minimum: 0.0
```
Metrics
quilkin_filter_LocalRateLimit_packets_dropped_total
A counter over the total number of packets that have exceeded the configured maximum rate limit and have been dropped as a result.
Match
The Match filter's job is to provide a mechanism to change behaviour based on dynamic metadata. This filter behaves similarly to the `match` expression in Rust or `switch` statements in other languages.
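Branch selection can be sketched as a lookup of the metadata key against each branch's value, with the fallthrough filter used when nothing matches. The names here are illustrative:

```rust
use std::collections::HashMap;

// Match sketched: compare a metadata value against branch values,
// falling through to a default filter when nothing matches.
fn select_branch<'a>(
    metadata: &HashMap<String, Vec<u8>>,
    key: &str,
    branches: &'a [(Vec<u8>, &'a str)],
    fallthrough: &'a str,
) -> &'a str {
    match metadata.get(key) {
        Some(value) => branches
            .iter()
            .find(|(branch_value, _)| branch_value == value)
            .map(|(_, filter)| *filter)
            .unwrap_or(fallthrough),
        // Missing metadata also takes the fallthrough path.
        None => fallthrough,
    }
}

fn main() {
    let mut metadata = HashMap::new();
    metadata.insert("myapp.com/token".to_string(), b"abc".to_vec());
    let branches = [(b"abc".to_vec(), "quilkin.filters.pass.v1alpha1.Pass")];
    let chosen = select_branch(
        &metadata,
        "myapp.com/token",
        &branches,
        "quilkin.filters.drop.v1alpha1.Drop",
    );
    assert_eq!(chosen, "quilkin.filters.pass.v1alpha1.Pass");
}
```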
Filter name
quilkin.filters.match.v1alpha1.Match
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:26000
          - address: 127.0.0.1:26001
filters:
  - name: quilkin.filters.capture.v1alpha1.Capture
    config:
      metadataKey: myapp.com/token
      prefix:
        size: 3
        remove: false
  - name: quilkin.filters.match.v1alpha1.Match
    config:
      on_read:
        metadataKey: myapp.com/token
        branches:
          - value: abc
            name: quilkin.filters.pass.v1alpha1.Pass
        fallthrough:
          name: quilkin.filters.drop.v1alpha1.Drop
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 2);
quilkin::Proxy::try_from(config).unwrap();
}
```
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
description: Configuration for [`Match`][super::Match].
type: object
properties:
  on_read:
    description: Configuration for [`Filter::read`][crate::filters::Filter::read].
    anyOf:
      - $ref: '#/definitions/DirectionalConfig'
      - type: 'null'
  on_write:
    description: Configuration for [`Filter::write`][crate::filters::Filter::write].
    anyOf:
      - $ref: '#/definitions/DirectionalConfig'
      - type: 'null'
additionalProperties: false
definitions:
  Branch:
    description: A specific match branch. The filter is run when `value` matches the value defined in `metadata_key`.
    type: object
    required:
      - name
      - value
    properties:
      config: true
      name:
        type: string
      value:
        description: The value to compare against the dynamic metadata.
        allOf:
          - $ref: '#/definitions/Value'
  DirectionalConfig:
    description: Configuration for a specific direction.
    type: object
    required:
      - branches
      - metadataKey
    properties:
      branches:
        description: List of filters to compare and potentially run if any match.
        type: array
        items:
          $ref: '#/definitions/Branch'
      fallthrough:
        description: The behaviour for when none of the `branches` match.
        default:
          name: quilkin.filters.drop.v1alpha1.Drop
          config: null
        allOf:
          - $ref: '#/definitions/Filter'
      metadataKey:
        description: The key for the metadata to compare against.
        allOf:
          - $ref: '#/definitions/Key'
  Filter:
    description: Filter is the configuration for a single filter
    type: object
    required:
      - name
    properties:
      config: true
      name:
        type: string
    additionalProperties: false
  Key:
    description: A key in the metadata table.
    type: string
  Value:
    anyOf:
      - type: boolean
      - type: integer
        format: uint64
        minimum: 0.0
      - type: array
        items:
          $ref: '#/definitions/Value'
      - type: string
      - type: array
        items:
          type: integer
          format: uint8
          minimum: 0.0
```
View the Match filter documentation for more details.
Metrics
quilkin_filter_Match_packets_matched_total
A counter of the total number of packets where the dynamic metadata matches a branch value.quilkin_filter_Match_packets_fallthrough_total
A counter of the total number of packets that are processed by the fallthrough configuration.
Pass
The Pass filter always passes any packet through it. This is useful in combination with conditional flow filters like Match.
Filter name
quilkin.filters.pass.v1alpha1.Pass
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:26000
          - address: 127.0.0.1:26001
filters:
  - name: quilkin.filters.capture.v1alpha1.Capture
    config:
      metadataKey: myapp.com/token
      prefix:
        size: 3
        remove: false
  - name: quilkin.filters.match.v1alpha1.Match
    config:
      on_read:
        metadataKey: myapp.com/token
        branches:
          - value: abc
            name: quilkin.filters.pass.v1alpha1.Pass
        fallthrough:
          name: quilkin.filters.drop.v1alpha1.Drop
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 2);
quilkin::Proxy::try_from(config).unwrap();
}
```
Configuration
No defined configuration options.
Metrics
This filter currently exports no metrics.
Timestamp
The Timestamp filter accepts a UNIX timestamp from metadata and observes the duration between that timestamp and now. It is mostly useful in combination with other filters, such as Capture, to pull timestamp data from packets.
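The observation amounts to subtracting the captured UNIX timestamp from the current time. A sketch using `std::time` (`seconds_since` is an illustrative helper, not part of Quilkin):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Observe the duration in seconds between a UNIX timestamp
// (e.g. one read from packet metadata) and now.
// Returns None for timestamps in the future.
fn seconds_since(unix_timestamp: u64) -> Option<u64> {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?.as_secs();
    now.checked_sub(unix_timestamp)
}

fn main() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs();
    // A timestamp captured five seconds ago yields roughly a
    // five second duration (allowing for a second-boundary tick).
    let duration = seconds_since(now - 5).unwrap();
    assert!(duration == 5 || duration == 6);
}
```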
Filter name
quilkin.filters.timestamp.v1alpha1.Timestamp
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.capture.v1alpha1.Capture
    config:
      metadataKey: example.com/session_duration
      prefix:
        size: 3
        remove: false
  - name: quilkin.filters.timestamp.v1alpha1.Timestamp
    config:
      metadataKey: example.com/session_duration
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:26000
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
quilkin::Proxy::try_from(config).unwrap();
}
```
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
description: Config represents a [self]'s configuration.
type: object
required:
  - metadataKey
properties:
  metadataKey:
    description: The metadata key to read the UTC UNIX Timestamp from.
    allOf:
      - $ref: '#/definitions/Key'
definitions:
  Key:
    description: A key in the metadata table.
    type: string
```
Metrics
`quilkin_filter_timestamp_seconds{metadata_key, direction}`

A histogram of durations from `metadata_key` to now in the packet `direction`.
TokenRouter
The TokenRouter filter's job is to provide a mechanism to declare which Endpoints a packet should be sent to. It does this by taking a byte array token found in the Filter Dynamic Metadata from a previous Filter, comparing it to each Endpoint's tokens, and sending packets only to those Endpoints where there is a match.
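A sketch of this token matching, including two of the drop reasons the filter reports in its metrics (the types and function names are illustrative, not Quilkin's API):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum DropReason {
    NoTokenFound,
    NoEndpointMatch,
}

// TokenRouter sketched: look up the routing token in dynamic metadata
// and keep only the endpoints whose token list contains it.
fn route<'a>(
    metadata: &HashMap<String, Vec<u8>>,
    endpoints: &'a [(&'a str, Vec<Vec<u8>>)],
) -> Result<Vec<&'a str>, DropReason> {
    let token = metadata
        .get("quilkin.dev/captured")
        .ok_or(DropReason::NoTokenFound)?;
    let matched: Vec<&str> = endpoints
        .iter()
        .filter(|(_, tokens)| tokens.contains(token))
        .map(|(addr, _)| *addr)
        .collect();
    if matched.is_empty() {
        Err(DropReason::NoEndpointMatch)
    } else {
        Ok(matched)
    }
}

fn main() {
    let endpoints = vec![("127.0.0.1:26000", vec![b"abc".to_vec()])];
    let mut metadata = HashMap::new();
    // No upstream Capture filter ran: the packet is dropped.
    assert_eq!(route(&metadata, &endpoints), Err(DropReason::NoTokenFound));
    metadata.insert("quilkin.dev/captured".to_string(), b"abc".to_vec());
    assert_eq!(route(&metadata, &endpoints), Ok(vec!["127.0.0.1:26000"]));
}
```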
Filter name
quilkin.filters.token_router.v1alpha1.TokenRouter
Configuration Examples
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.token_router.v1alpha1.TokenRouter
    config:
      metadataKey: myapp.com/myownkey
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:26000
            metadata:
              quilkin.dev:
                tokens:
                  - MXg3aWp5Ng== # Authentication is provided by these ids, and matched against
                  - OGdqM3YyaQ== # the value stored in Filter dynamic metadata
          - address: 127.0.0.1:26001
            metadata:
              quilkin.dev:
                tokens:
                  - bmt1eTcweA==
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 1);
quilkin::Proxy::try_from(config).unwrap();
}
```
View the CaptureBytes filter documentation for more details.
Configuration Options (Rust Doc)
```yaml
$schema: http://json-schema.org/draft-07/schema#
title: Config
type: object
properties:
  metadataKey:
    description: the key to use when retrieving the token from the Filter's dynamic metadata
    default: quilkin.dev/capture
    allOf:
      - $ref: '#/definitions/Key'
definitions:
  Key:
    description: A key in the metadata table.
    type: string
```
Metrics
`quilkin_filter_TokenRouter_packets_dropped_total`

A counter of the total number of packets that have been dropped. This is also provided with a `Reason` label, as there are differing reasons for packets to be dropped:

- `NoEndpointMatch` - The token provided via the Filter dynamic metadata does not match any Endpoint's tokens.
- `NoTokenFound` - No token has been found in the Filter dynamic metadata.
- `InvalidToken` - The data found for the token in the Filter dynamic metadata is not of the correct data type (`Vec<u8>`).
Sample Applications
Packet Authentication
In combination with several other filters, the TokenRouter can be utilised as an authentication and access control mechanism for all incoming packets.

Capturing the authentication token from an incoming packet can be implemented via the CaptureBytes filter, with an example outlined below, or via any other filter that populates the configured dynamic metadata key in which the authentication token should reside.
It is assumed that the endpoint tokens that are used for authentication are generated by an external system, are appropriately cryptographically random and sent to each proxy securely.
For example, a configuration would look like:
```rust
#![allow(unused)]
fn main() {
let yaml = "
version: v1alpha1
filters:
  - name: quilkin.filters.capture.v1alpha1.Capture # Capture and remove the authentication token
    config:
      suffix:
        size: 3
        remove: true
  - name: quilkin.filters.token_router.v1alpha1.TokenRouter
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:26000
            metadata:
              quilkin.dev:
                tokens:
                  - MXg3aWp5Ng== # Authentication is provided by these ids, and matched against
                  - OGdqM3YyaQ== # the value stored in Filter dynamic metadata
          - address: 127.0.0.1:26001
            metadata:
              quilkin.dev:
                tokens:
                  - bmt1eTcweA==
";
let config = quilkin::config::Config::from_reader(yaml.as_bytes()).unwrap();
assert_eq!(config.filters.load().len(), 2);
quilkin::Proxy::try_from(config).unwrap();
}
```
On the game client side the ConcatenateBytes filter could also be used to add authentication tokens to outgoing packets.
Writing Custom Filters
The full source code used in this example can be found in
examples/
.
Quilkin provides an extensible implementation of Filters that allows us to plug in custom implementations to fit our needs. This document provides an overview of the API and how we can go about writing our own Filters. First we need to create a type and implement two traits for it.
It's not terribly important what the filter in this example does so let's write
a Greet
filter that appends Hello
to every packet in one direction and
Goodbye
to packets in the opposite direction.
struct Greet;
As a convention within Quilkin: Filter names are singular, and they tend to be a verb rather than an adjective.

Examples
- Greet, not "Greets"
- Compress, not "Compressor".
Filter
Represents the actual Filter instance in the pipeline. An implementation provides a `read` and a `write` method (both are passthrough by default) that accept a context object and return a response.

Both methods are invoked by the proxy when it consults the filter chain: `read` is invoked when a packet is received on the local downstream port and is to be sent to an upstream endpoint, while `write` is invoked in the opposite direction, when a packet is received from an upstream endpoint and is to be sent to a downstream client.
```rust
use quilkin::filters::prelude::*;

struct Greet;

impl Filter for Greet {
    fn read(&self, ctx: &mut ReadContext) -> Option<()> {
        ctx.contents.extend(b"Hello");
        Some(())
    }

    fn write(&self, ctx: &mut WriteContext) -> Option<()> {
        ctx.contents.extend(b"Goodbye");
        Some(())
    }
}
```
StaticFilter
Represents metadata needed for your [Filter]; most of it has to do with defining configuration. For now we can use `()` as we have no configuration yet.
```rust
use quilkin::filters::prelude::*;

struct Greet;

impl Filter for Greet {}

impl StaticFilter for Greet {
    const NAME: &'static str = "greet.v1";
    type Configuration = ();
    type BinaryConfiguration = ();

    fn try_from_config(_config: Option<Self::Configuration>) -> Result<Self, Error> {
        Ok(Self)
    }
}
```
Running
We can run the proxy using the `Proxy::try_from` function. Let's add a main function that does that. Quilkin relies on the Tokio async runtime, so we need to import that crate and wrap our main function with it. We can also register custom filters in Quilkin using `FilterRegistry::register`.
Add Tokio as a dependency in `Cargo.toml`:

```toml
[dependencies]
quilkin = "0.2.0"
tokio = { version = "1", features = ["full"] }
```
Add a main function that starts the proxy.
```rust
// src/main.rs
#[tokio::main]
async fn main() -> quilkin::Result<()> {
    quilkin::filters::FilterRegistry::register(vec![Greet::factory()].into_iter());

    let (_shutdown_tx, shutdown_rx) = tokio::sync::watch::channel(());
    let server: quilkin::Proxy = quilkin::Config::builder()
        .port(7001)
        .filters(vec![quilkin::config::Filter {
            name: Greet::NAME.into(),
            config: None,
        }])
        .endpoints(vec![quilkin::endpoint::Endpoint::new(
            (std::net::Ipv4Addr::LOCALHOST, 4321).into(),
        )])
        .build()?
        .try_into()?;

    server.run(shutdown_rx).await
}
```
Now, let's try out the proxy. The following configuration starts our extended version of the proxy at port 7001 and forwards all packets to an upstream server at port 4321.
```yaml
# quilkin.yaml
version: v1alpha1
port: 7001
filters:
  - name: greet.v1
clusters:
  default:
    localities:
      - endpoints:
          - address: 127.0.0.1:4321
```
Next we need to set up our network of services. For this example we're going to use the `netcat` tool to spawn a UDP echo server and an interactive client for us to send packets over the wire.
```shell
# Start the proxy
cargo run -- &
# Start a UDP listening server on the configured port
nc -lu 127.0.0.1 4321 &
# Start an interactive UDP client that sends packets to the proxy
nc -u 127.0.0.1 7001
```
Whatever we pass to the client should now show up with our modification on the listening server's standard output. For example, typing `Quilkin` in the client prints `Hello Quilkin` on the server.
Configuration
Let's extend the Greet filter to have a configuration that contains what greeting to use.
The Serde crate is used to describe static YAML configuration in code while Tonic/Prost is used to describe dynamic configuration as Protobuf messages when talking to a management server.
YAML Configuration
First let's create the type for our configuration:
- Add the yaml parsing crates to `Cargo.toml`:

```toml
# [dependencies]
serde = "1.0"
serde_yaml = "0.8"
```
- Define a struct representing the config:

```rust
// src/main.rs
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, schemars::JsonSchema)]
struct Config {
    greeting: String,
}
```
- Update the `Greet` filter to take in `greeting` as a parameter:

```rust
// src/main.rs
struct Greet {
    config: Config,
}

impl Filter for Greet {
    fn read(&self, ctx: &mut ReadContext) -> Option<()> {
        ctx.contents
            .splice(0..0, format!("{} ", self.config.greeting).into_bytes());
        Some(())
    }

    fn write(&self, ctx: &mut WriteContext) -> Option<()> {
        ctx.contents
            .splice(0..0, format!("{} ", self.config.greeting).into_bytes());
        Some(())
    }
}
```
Protobuf Configuration
Quilkin comes with out-of-the-box support for xDS management, and as such needs to communicate filter configuration over Protobuf with management servers and clients to synchronise state across the network. So let's add the binary version of our `Greet` configuration.
- Add the proto parsing crates to `Cargo.toml`:

```toml
[dependencies]
# ...
tonic = "0.5.0"
prost = "0.7"
prost-types = "0.7"
```
- Create a Protobuf equivalent of our YAML configuration.

```proto
// src/greet.proto
syntax = "proto3";

package greet;

message Greet {
    string greeting = 1;
}
```
- Generate Rust code from the proto file:

There are a few ways to generate Prost code from proto; we will use the `prost_build` crate in this example.

Add the following required crates to `Cargo.toml`, and then add a build script to generate the following Rust code during compilation:

```toml
# [dependencies]
bytes = "1.0"

# [build-dependencies]
prost-build = "0.7"
```

```rust
// build.rs
fn main() {
    prost_build::compile_protos(&["src/greet.proto"], &["src/"]).unwrap();
}
```
To include the generated code, we'll use `tonic::include_proto!`; then we just need to implement `std::convert::TryFrom` for converting the protobuf message to the equivalent configuration.
```rust
// src/main.rs
mod proto {
    tonic::include_proto!("greet");
}

impl TryFrom<proto::Greet> for Config {
    type Error = ConvertProtoConfigError;

    fn try_from(p: proto::Greet) -> Result<Self, Self::Error> {
        Ok(Self {
            greeting: p.greeting,
        })
    }
}

impl From<Config> for proto::Greet {
    fn from(config: Config) -> Self {
        Self {
            greeting: config.greeting,
        }
    }
}
```
Now, let's update Greet's StaticFilter implementation to use the two configurations.
// src/main.rs
use quilkin::filters::StaticFilter;

impl StaticFilter for Greet {
    const NAME: &'static str = "greet.v1";
    type Configuration = Config;
    type BinaryConfiguration = proto::Greet;

    fn try_from_config(
        config: Option<Self::Configuration>,
    ) -> Result<Self, quilkin::filters::Error> {
        Ok(Self {
            config: Self::ensure_config_exists(config)?,
        })
    }
}
That's it! With these changes we have wired up static configuration for our filter. Try it out with the following configuration:
# quilkin.yaml
version: v1alpha1
port: 7001
filters:
  - name: greet.v1
    config:
      greeting: Hey
endpoints:
  - address: 127.0.0.1:4321
Proxy Metrics
The following are metrics that Quilkin provides while in Proxy Mode.
General Metrics
The proxy exposes the following general metrics:
- quilkin_packets_processing_duration_seconds{event} (Histogram)
  The total duration of time in seconds that it took to process a packet.
  - The event label is either:
    - read: when the proxy receives data from a downstream connection on the listening port.
    - write: when the proxy sends data to a downstream connection via the listening port.
- quilkin_packets_dropped_total{reason} (Counter)
  The total number of packets (not associated with any session) that were dropped by the proxy. Note that packets reflected by this metric were dropped at an earlier stage, before they were associated with any session. For session based metrics, see the list of session metrics instead.
  - reason = NoConfiguredEndpoints
    - NoConfiguredEndpoints: No upstream endpoints were available to send the packet to. This can occur, e.g. if the endpoints cluster was scaled down to zero and the proxy is configured via a control plane.
- quilkin_cluster_active
  The number of currently active clusters.
- quilkin_cluster_active_endpoints
  The number of currently active upstream endpoints. Note that this tracks the number of endpoints that the proxy knows of, rather than those that it is connected to (see Session Metrics instead for those).
- quilkin_bytes_total{event}
  The total number of bytes sent or received.
  - The event label is either:
    - read: when the proxy receives data from a downstream connection on the listening port.
    - write: when the proxy sends data to a downstream connection via the listening port.
- quilkin_packets_total{event}
  The total number of packets sent or received.
  - The event label is either:
    - read: when the proxy receives data from a downstream connection on the listening port.
    - write: when the proxy sends data to a downstream connection via the listening port.
- quilkin_errors_total{event}
  The total number of errors encountered while reading a packet from the upstream endpoint.
Session Metrics
The proxy exposes the following metrics around sessions:
- quilkin_session_active{asn}{ip_prefix}
  The number of currently active sessions. If a Maxmind database has been provided, the labels are populated:
  - The asn label is the ASN number of the connecting client.
  - The ip_prefix label is the IP prefix of the connecting client.
- quilkin_session_duration_secs (Histogram)
  A histogram over how long sessions lasted before they were torn down. Note that, by definition, active sessions are not included in this metric.
- quilkin_session_total (Counter)
  The total number of sessions that have been created.
Filter Metrics
- quilkin_filter_read_duration_seconds{filter}
  The duration it took for a filter's read implementation to execute.
  - The filter label is the name of the filter being executed.
- quilkin_filter_write_duration_seconds{filter}
  The duration it took for a filter's write implementation to execute.
  - The filter label is the name of the filter being executed.
Each individual Filter can also expose its own metrics. See the list of built-in Filters for more details.
ASN Maxmind Information
If Quilkin is provided a remote URL or local file path to a Maxmind IP Geolocation database through the mmdb file or command line configuration, Quilkin will log the following information in the maxmind information log.
| Field | Description |
|---|---|
| number | ASN Number |
| organization | The organisation responsible for the ASN |
| country_code | The corresponding country code |
| prefix | The IP prefix CIDR address |
| prefix_entity | The name of the entity for the prefix address |
| prefix_name | The name of the prefix address |
Maxmind databases often require a licence and/or fee, so they aren't included by default with Quilkin.
Dynamic Configuration using xDS Management Servers
In addition to static configuration provided upon startup, a Quilkin proxy's configuration can also be updated at runtime. The proxy can be configured on startup to talk to a set of management servers which provide it with updates throughout its lifecycle.
Communication between the proxy and management server uses the xDS gRPC protocol, similar to an Envoy proxy. xDS is one of the standard configuration mechanisms for software proxies, and as a result Quilkin can be set up to discover configuration resources from any API compatible server. Also, given that the protocol is well specified, it is similarly straightforward to implement a custom server to suit any deployment's needs.
As described within the xDS-api documentation, the xDS API comprises a set of resource discovery APIs, each serving a specific set of configuration resource types, while the protocol itself comes in several variants. Quilkin implements the Aggregated Discovery Service (ADS) State of the World (SotW) variant with gRPC.
Supported APIs
Since the range of resources configurable by the xDS API extends beyond Quilkin's domain (being UDP based, Quilkin does not have a need for HTTP/TCP resources), only a subset of the API is supported. The following lists these relevant parts, and any limitations to the provided support as a result:
-
Cluster Discovery Service (CDS): Provides information about known clusters and their membership information.
- The proxy uses these resources to discover clusters and their endpoints.
- While cluster topology information like locality can be provided in the configuration, the proxy currently does not use this information (support may be included in the future however).
- Any load balancing information included in this resource is ignored. For load balancing, use Quilkin filters instead.
- Only the cluster discovery types STATIC and EDS are supported. Configuration including other discovery types, e.g. LOGICAL_DNS, is rejected.
-
Endpoint Discovery Service (EDS): Provides information about endpoints.
- The proxy uses these resources to discover information about endpoints like their IP addresses.
- Endpoints may provide Endpoint Metadata via the metadata field. These metadata will be visible to filters as part of the corresponding endpoints information when processing packets.
- Only socket addresses are supported in an endpoint's address configuration - i.e. an IP address and port number combination. Configuration including any other type of addressing, e.g. named pipes, will be rejected.
- Any load balancing information included in this resource is ignored. For load balancing, use Quilkin filters instead.
-
Listener Discovery Service (LDS): Provides information about Filters and Filter Chains.
- Only the name and filter_chains fields in the Listener resource are used by the proxy. The rest are ignored.
- Since Quilkin only uses one filter chain per proxy, at most one filter chain can be provided in the resource. Otherwise the configuration is rejected.
- Only the list of filters specified in the filter chain is used by the proxy - i.e. other fields like filter_chain_match are ignored. This list also specifies the order in which the corresponding filter chain is constructed.
- gRPC proto configuration for Quilkin's built-in filters can be found here. They are equivalent to the filters' static configuration.
Connecting to an xDS management server
Connecting a Quilkin proxy to an xDS management server can be achieved by providing one or more URLs to the management_servers command line or file configuration.
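For example, a proxy's static configuration file might point at a locally running management server. This is a sketch only; check the exact field layout against the File Configuration reference for your Quilkin version:

```yaml
# quilkin.yaml (sketch; field names assumed from the text above)
version: v1alpha1
management_servers:
  - address: http://localhost:18000
```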
Quilkin Built-in xDS Providers
To make xDS integration easier, Quilkin can be run in "xDS Provider Mode".
In this mode, rather than run as a proxy, Quilkin will start an xDS management server on the Local Port, with each provider abstracting away the complexity of a full xDS management control plane via integrations with popular projects and architecture patterns.
This is driven by executing Quilkin via the manage subcommand, and specifying which provider to use.
To view all the providers and options for the manage subcommand, run:
$ quilkin manage --help
Runs Quilkin as a xDS management server, using `provider` as a configuration source
Usage: quilkin manage [OPTIONS] <COMMAND>
Commands:
agones Watches Agones' game server CRDs for `Allocated` game server endpoints, and for a `ConfigMap` that specifies the filter configuration
file Watches for changes to the file located at `path`
help Print this message or the help of the given subcommand(s)
Options:
-p, --port <PORT> [env: QUILKIN_PORT=]
-h, --help Print help information
Filesystem xDS Provider
The filesystem provider watches a configuration file on disk and sends updates to proxies whenever that file changes.
It can be started using the manage file subcommand, as follows:
quilkin manage --port 18000 file --config-file-path config.yaml
We run this on port 18000 in this example, in case you are running it locally and the default port is taken up by an existing Quilkin proxy.
After running this command, any proxy that connects to port 18000 will receive updates as configured in the config.yaml file.
You can find the configuration file schema in File Configuration.
Example:
version: v1alpha1
filters:
  - name: quilkin.filters.debug.v1alpha1.Debug
    config:
      id: hello
clusters:
  cluster-a:
    localities:
      - endpoints:
          - address: 123.0.0.1:29
            metadata:
              'quilkin.dev':
                tokens:
                  - 'MXg3aWp5Ng=='
Agones xDS Provider
The Agones xDS Provider is built to simplify Quilkin integration with Agones game server hosting on top of Kubernetes.
This provider watches for changes in Agones
GameServer
resources in a cluster, and
utilises that information to provide Endpoint information to connected Quilkin proxies.
Currently, the Agones provider can only discover resources within the cluster it is running in.
Endpoint Configuration
This provider watches the Kubernetes cluster for Allocated Agones GameServers and exposes their IP address and Port as Endpoints to any connected Quilkin proxies.
Since an Agones GameServer can have multiple ports exposed, if multiple ports are in use, the server will pick the first port in the port list.
By default the Agones xDS provider will look in the default namespace for any GameServer resources, but it can be configured via the --gameservers-namespace argument.
Access Tokens
The set of access tokens for the associated Endpoint can be set by adding comma-separated, standard base64 encoded strings under the quilkin.dev/tokens annotation in the GameServer's metadata.
For example:
annotations:
  # Sets two tokens for the corresponding endpoint with values 1x7ijy6 and 8gj3v2i respectively.
  quilkin.dev/tokens: MXg3aWp5Ng==,OGdqM3YyaQ==
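In context, the annotation sits in the GameServer's metadata like so (a trimmed sketch; the resource name is illustrative and the spec is omitted):

```yaml
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  name: my-game-server  # illustrative name
  annotations:
    # Two tokens, 1x7ijy6 and 8gj3v2i, base64 encoded.
    quilkin.dev/tokens: MXg3aWp5Ng==,OGdqM3YyaQ==
```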
Filter Configuration
The Agones provider watches for a singular ConfigMap that has the label quilkin.dev/configmap: "true", and for any changes to it, and uses its contents to send Filter configuration to any connected Quilkin proxies.
The ConfigMap contents should be a valid Quilkin file configuration, but with no Endpoint data.
For example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: quilkin-xds-filter-config
  labels:
    quilkin.dev/configmap: "true"
data:
  quilkin.yaml: |
    version: v1alpha1
    filters:
      - name: quilkin.filters.capture.v1alpha1.Capture
        config:
          suffix:
            size: 3
            remove: true
      - name: quilkin.filters.token_router.v1alpha1.TokenRouter
By default the Agones xDS provider will look in the default namespace for this ConfigMap, but it can be configured via the --config-namespace argument.
Usage
As an example, the following runs the server with the manage agones subcommand against a cluster (using default kubeconfig authentication) where Quilkin pods run in the quilkin namespace and GameServer pods run in the gameservers namespace:
quilkin manage agones --config-namespace quilkin --gameservers-namespace gameservers
For a full reference of deploying this provider in a Kubernetes cluster, with appropriate Deployments, Services, and RBAC Rules, there is an Agones, xDS and Xonotic example.
xDS Metrics
Proxy Mode
Quilkin exposes the following metrics around the management servers and its resources when running as a UDP Proxy:
- quilkin_xds_connected_state (Gauge)
  A boolean that indicates whether or not the proxy is currently connected to a management server. A value of 1 means that the proxy is connected, while 0 means that it is not connected to any server at that point in time.
- quilkin_xds_update_attempt_total (Counter)
  The total number of attempts made by a management server to configure the proxy. This is equivalent to the total number of configuration updates received by the proxy from a management server.
- quilkin_xds_update_success_total (Counter)
  The total number of successful attempts made by a management server to configure the proxy. This is equivalent to the total number of configuration updates received by the proxy from a management server and successfully applied by the proxy.
- quilkin_xds_update_failure_total (Counter)
  The total number of unsuccessful attempts made by a management server to configure the proxy. This is equivalent to the total number of configuration updates received by the proxy from a management server and rejected by the proxy (e.g. due to a bad/inconsistent configuration).
- quilkin_xds_requests_total (Counter)
  The total number of DiscoveryRequests made by the proxy to management servers. This tracks messages flowing in the direction from the proxy to the management server.
xDS Provider Mode
The following metrics are exposed when Quilkin is running as an xDS provider.
- quilkin_management_server_connected_proxies (Gauge)
  The number of proxies currently connected to the server.
- quilkin_management_server_discovery_requests_total{request_type} (Counter)
  The total number of xDS Discovery requests received across all proxies.
  - The request_type label is the type URL of the requested resource: type.googleapis.com/envoy.config.cluster.v3.Cluster, type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment, or type.googleapis.com/envoy.config.listener.v3.Listener.
- quilkin_management_server_discovery_responses_total{request_type} (Counter)
  The total number of xDS Discovery responses sent back across all proxies in response to Discovery Requests. Each Discovery response sent corresponds to a configuration update for some proxy.
  - The request_type label is the type URL of the requested resource: type.googleapis.com/envoy.config.cluster.v3.Cluster, type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment, or type.googleapis.com/envoy.config.listener.v3.Listener.
- quilkin_management_server_endpoints_total (Gauge)
  The number of active endpoints discovered by the server. The number of active endpoints correlates with the size of the cluster configuration update sent to proxies.
- quilkin_management_server_snapshot_generation_errors_total (Counter)
  The total number of errors encountered while generating a configuration snapshot update for a proxy.
- quilkin_management_server_snapshots_generated_total (Counter)
  The total number of configuration snapshots generated across all proxies. A snapshot corresponds to a point in time view of a proxy's configuration. However, it does not necessarily correspond to a proxy update - a proxy only gets the latest snapshot, so it might miss intermediate snapshots if it lags behind.
- quilkin_management_server_snapshots_cache_size (Gauge)
  The current number of snapshots in the in-memory snapshot cache. This corresponds 1-1 to proxies that connect to the server. However, the number may be slightly higher than the number of connected proxies, since snapshots for disconnected proxies are only periodically cleared from the cache.
Administration Interface
Quilkin exposes an HTTP interface to query different aspects of the server.
It is assumed that the administration interface will only ever be accessible on localhost.
By default, the administration interface is bound to [::]:9091, but it can be configured by the command line flag --admin-address or through the proxy configuration file, like so:
admin:
  address: [::]:9095
Endpoints
The admin interface provides the following endpoints:
/live
This provides a liveness probe endpoint, most commonly used in Kubernetes based systems.
Will return an HTTP status of 200 when all health checks pass.
/ready
This provides a readiness probe endpoint, most commonly used in Kubernetes based systems.
Whether Quilkin is run in Proxy mode, i.e. quilkin run, or in an xDS provider mode, such as quilkin manage agones, dictates how readiness is calculated:
Proxy Mode
Will return an HTTP status of 200 when there is at least one endpoint to send data to. This is primarily to ensure that new proxies that have yet to get configuration information from an xDS server aren't sent data until they are fully populated.
xDS Provider Mode
Will return an HTTP status of 200 when all health checks pass.
/metrics
Outputs Prometheus formatted metrics for this instance.
See the Proxy Metrics documentation for what proxy metrics are available.
See the xDS Metrics documentation for what xDS metrics are available.
/config
Returns a JSON representation of the cluster and filterchain configuration that the instance is running with at the time of invocation.
SDKs
You will learn here about Quilkin's lightweight integration SDKs that can be used as clients for interacting with your hosted Quilkin UDP proxies, for situations where it is not possible to run or integrate Quilkin in your game client.
Quilkin Unreal Engine Plugin
This is an alpha version of the Unreal Engine plugin for Quilkin. Currently it only supports adding a routing token in the following format:
<packet> | token    | version
X bytes  | 16 bytes | 1 byte
How to install
To get this client proxy installed, the SDK should be located in the Engine path for Plugins, so copy the whole ue4 folder (which resides under the sdks folder) into your Unreal Engine path /[UE4 Root]/Engine/Plugins, then you may want to rename the ue4 folder to Quilkin. Unreal Engine will automatically discover the plugin by searching for the .uplugin file.
Examples
See the examples folder on Github for configuration and usage examples.
Depending on which release version of Quilkin you are using, you may need to choose the appropriate release tag from the dropdown, as the API surface for Quilkin is still under development.
Examples include:
- Quilkin running as a sidecar while hosted on Agones.
- Quilkin running as an xDS Agones provider for Quilkin proxies.
- iperf3 throughput.
- Grafana dashboards.
...and more!
FAQ
Just how fast is Quilkin? What sort of performance can I expect?
Our current testing shows that Quilkin processes packets quite fast!
We won't be publishing performance benchmarks, as performance will always change depending on the underlying hardware, number of filters, configurations and more.
We highly recommend you run your own load tests on your platform and configuration, matching your production workload and configuration as close as possible.
Our iperf3 based performance test in the examples folder is a good starting point.
Since this is still an alpha project, we have plans on investigating further performance improvements in upcoming releases, both from an optimisation and observability perspective as well.
Can I integrate Quilkin with C++ code?
Quilkin is also released as a library, so it can be integrated with an external codebase as necessary.
Using Rust code inside a C or C++ project mostly consists of two parts.
- Creating a C-friendly API in Rust
- Embedding your Rust project into an external build system
See A little Rust with your C for more information.
Over time, we will be expanding documentation on how to integrate with specific engines if running Quilkin as a separate binary is not an option.
I would like to run Quilkin as a client side proxy on a console. Can I do that?
This is an ongoing discussion, and since console development is protected by non-disclosure agreements, we can't comment on this directly.
That being said, we have started implementing lean Client SDK versions of certain filters that work with known supported game engines and languages for circumstances where compiling Rust or providing a separate Quilkin binary as an executable is not an option.
Any reason you didn't contribute this into/extend Envoy?
This is an excellent question! Envoy is an amazing project, and has set many of the standards for how proxies are written and orchestrated, and was an inspiration for many of the decisions made on Quilkin.
However, we decided to build this project separately:
- Envoy seems primarily focused on web/mobile network workloads (which makes total sense), whereas we wanted something specialised on gaming UDP communication, so having a leaner, more focused codebase would allow us to move faster.
- We found the Rust and Cargo ecosystem easier to work with than Bazel and C++, and figured our users would as well.