Configuring a Deployment
This guide walks you through some common steps for configuring a Deployment by tweaking metaplay-gameserver Helm values.
You configure Cloud Deployments by providing custom Helm values to our deployment pipeline. Under the hood, we pass them to the metaplay-gameserver Helm chart during a deployment. Once deployed, the values are available to the game server as environment variables.
In practice, your custom Helm values live in a YAML file that you place in Backend/Deployments/ with a file name like <environment>-server.yaml, and then point to in your metaplay-project.yaml file. This means that you can have different values files for different environments, as well as separate values files for the game server and the bot client.
environments:
  - name: Develop
    ...
    # Specify custom Helm values file for the game server to use:
    serverValuesFile: Backend/Deployments/develop-server.yaml
    # Specify custom Helm values file for the botclient to use:
    botclientValuesFile: Backend/Deployments/develop-botclient.yaml
These values files are automatically passed to Helm when deploying with the Metaplay CLI.
Internally, we use the default values from the metaplay-gameserver Helm chart to configure your Cloud Deployments. As your project matures, you can override these default values to fit your needs.
By default, the game server will use the following runtime options files (located in your project's Backend/Server/Config/ directory):
- Options.base.yaml is used in all deployments.
- Options.<envFamily>.yaml is determined by the environment family used:
  - Options.local.yaml when the environment family is Local (game server running locally).
  - Options.dev.yaml when the environment family is Development (all development environments).
  - Options.stage.yaml when the environment family is Staging.
  - Options.prod.yaml when the environment family is Production.
If you want to change this behavior, you can override the default values in your custom values file by defining the config.files key with your own list of runtime options files:
config:
  files:
    - "./Config/Options.base.yaml"
    - "./Config/Options.dev.yaml"
    - "./Config/Options.my-custom.yaml"
Note that you need to specify the full list of runtime options files.
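For example, a production environment that adds a custom options file must also restate the default files for its environment family. The sketch below is illustrative only; Options.my-custom.yaml is a hypothetical file name:
config:
  files:
    - "./Config/Options.base.yaml"
    - "./Config/Options.prod.yaml"
    - "./Config/Options.my-custom.yaml"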
We often include upcoming features in the SDK that are disabled by default until fully tested. To enable specific experimental features for a deployment, you need to explicitly add YAML entries into your custom values file under the experimental key:
# Enable experimental features
experimental:
  <feature name>:
    enabled: true
    # ... other feature-specific settings
  # ... more features
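As a concrete illustration, the experimental Loki integration listed in the chart's default values further below could be enabled like this. This is a sketch only; whether a given experimental feature is safe to enable depends on your SDK and infrastructure versions:
experimental:
  infraLoki:
    enabled: true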
By convention, each Helm chart comes with a values.yaml file under its root directory, providing pre-defined settings that you can override.
We regularly update the metaplay-gameserver Helm chart during SDK updates, so you should look up the version of the chart that matches the version of the SDK you are using and see which options are available for overriding.
You can download all available versions of Metaplay's charts from https://charts.metaplay.dev/. The values.yaml file will be available under the root directory after extraction.
Latest chart at the time of writing: metaplay-gameserver-0.9.0.tgz
For convenience, here is the latest public metaplay-gameserver chart's values.yaml at the time of writing:
# This is a temporary flag used around Metaplay SDK R34 to allow deactivating all the resources from the Helm chart that will be managed by the infrastructure going forward.
infraMigration: false
# Game server's environment name (defaults to environment's display name, shown in LiveOps dashboard to help with environment identification)
environment:
# Game's environment family (triggers changes in behavior for game server)
environmentFamily: # must be one of "Development", "Staging", or "Production"
# Explicit hostname to use for game server endpoint (if not given, will be inferred from other discovered values; the value is used as the default base for service.hostname, admin.hostname, etc.)
#hostname: # e.g. idler-develop.p1.metaplay.io (if not provided, individual values from service.hostname, admin.hostname, etc. need to be set)
# server image to use
image:
#repository: # "some.repository.url/repositoryName"
tag: # "latest"
pullSecrets: null # "aws-ecr"
pullPolicy: "IfNotPresent"
# Metaplay server configurations
config:
# Metaplay game server configuration files to load (relative to working directory of game server image)
files:
- "./Config/Options.base.yaml"
# secrets seeded via the infrastructure (assuming you are running github:metaplay/infra-modules/environments/aws-region compliant infrastructure stacks)
infra:
secretName: "metaplay-config"
secretKeyName: "metaplay-infra-options.yaml"
# List of extra environment variables to load into the game server.
extraEnv: []
# - name: Foo_Bar
# value: "value"
# configs for obtaining Helm details from infrastructure
tenant:
# allow discovering values, if possible, from a secret in the k8s namespace
discoveryEnabled: false
secretName: "metaplay-deployment"
# SDK-related configuration
sdk:
# SDK version, specified from the outside (in R28 and above, this is stored as the label 'io.metaplay.sdk_version' in the docker image metadata)
version:
# Game server shard configurations (i.e. what the different game server shards are, what topologies they run, how many shard replicas there are, etc.)
shards:
- name: all
singleton: true
requests:
cpu: 250m
memory: 500Mi
# or non-singleton shard configuration
# - name: logic
# nodeCount: 2
# public: true
# podAnnotations: {}
# podLabels: {}
# podNodeSelector: {}
# podTolerations: []
# podAffinity: {}
# requests:
# cpu: 1500m
# memory: 1500Mi
# topologyKey: logic
# - name: service
# nodeCount: 1
# adminApi: true
# requests:
# cpu: 1500m
# memory: 3500Mi
# topologyKey: service
# Common node selector to apply to all game server shards
nodeSelector: {}
# Common tolerations to apply to all game server shards
tolerations: []
# Common affinity to apply to all game server shards
affinity: {}
# Common topology spread constraints to apply to all game server shards
topologySpreadConstraints: []
# Use host networking for servers (mainly useful for debugging; if you need publicly accessible server ports, use the `public` switch under the relevant shard config under `shards`).
hostNetwork: false
securityContext:
sysctls: []
# Miscellaneous experimental features
experimental:
# attempt to use the infrastructure provided Loki
infraLoki:
enabled: false
secretName: loki-tenant-proxy
infraPrometheus:
enabled: false
secretName: prometheus-tenant-proxy
# Use grafana-operator based approach for setting up Grafana
grafanaOperator:
enabled: false
# Dictionary of dashboards to deploy on Grafana. Key should be the dashboard name and value is a dashboard spec (as per https://grafana.github.io/grafana-operator/docs/api/#grafanadashboardspec).
dashboards:
metaplay-server:
url: https://dashboards.metaplay.dev/stable/metaplay-server-v0.2.7.json
metaplay-overview:
url: "https://dashboards.metaplay.dev/stable/metaplay-overview.json"
# websocket ports will be served by the same game server k8s service, so port numbers should not collide with .service.ports
websockets:
enabled: false
ports:
- port: 9380
name: websocket
targetPort: 9380
database:
backend: MySql # this is currently deprecated and must be MySql; changing it won't have an effect
rbac:
serviceAccount:
enabled: true
create: false
name: gameserver
annotations: {}
role:
create: false
name: gameserver
annotations: {}
# -- Domain suffix for local cluster traffic.
clusterLocalDomain: cluster.local
# shard config is env-specific
dedicatedShardNodes: false
debug:
enablePerfTools: false
headerEcho: false
# Game server player endpoints
service:
enabled: true
# hostname: # defaults to .Values.hostname
annotations: {}
ports:
- port: 9339
name: game
targetPort: 9339
loadbalancerType: "nlb"
ipv6Enabled: true # requires service.loadbalancerType to be "nlb"
tls:
enabled: true
# sslCertArn: # if TLS is enabled, you can force a specific AWS ACM certificate to be used; otherwise certificate will attempt to be discovered from the infrastructure
# Game server LiveOps dashboard
admin:
enabled: true
# hostname: # defaults to [game server name]-admin.[game server domain]
annotations: {}
# -- Enable StackAPI stack auth authentication for LiveOps dashboard.
stackAuthEnabled: true
tls:
enabled: true
# sslCertArn: # if TLS is enabled, you can force a specific AWS ACM certificate to be used; otherwise certificate will attempt to be discovered from the infrastructure
# -- Configuration options for exposing a public HTTP endpoint from the web server using the PublicWebApi actor in the game server. Using this feature requires that your game server uses Metaplay SDK R32 or later.
publicWebApi:
# -- Enable PublicWebApi.
enabled: true
# -- The public API endpoint will by default be `.Values.hostname` with the first part having a -public suffix (e.g. idler-develop.p1.metaplay.io becomes idler-develop-public.p1.metaplay.io). Setting this value will override the entire hostname.
hostname: null
# -- Dictionary of labels to use for targeting the shard running PublicWebApi.
targetLabels:
metaplay.io/public-web-api: "yes"
# -- Port on the shard running PublicWebApi, default should be 8081, but change this if you have configured your game server to listen on another port.
targetPort: 8081
# -- Additional annotations to set for the publicWebApi Ingress.
ingressAnnotations: {}
# -- Ingress class name to use for publicWebApi Ingress. You should not need to change this.
ingressClassName: nginx
# (Deprecated) Public API endpoints for game server (NOTE: publicApiEndpoints will be deprecated with Metaplay SDK R35, please migrate to publicWebApi instead).
publicApiEndpoints: []
#- hostname: idler-develop-webhook.d1.metaplay.io
# annotations: {}
# target:
# root: "/webhook/"
# shard: "service"
#- hostname: idler-develop-public.d1.metaplay.io
# target:
# root: "/public/"
# Parameters for tweaking `helm test` scripts
test:
timeout: 300 # default number of seconds to give for helm tests before timing out
deleteOnFailure: true # delete the test pods on failure (or leave them hanging for further analysis)
# Grafana
grafana:
enabled: true
adminUser: "admin"
# Starting from metaplay-gameserver Helm chart v0.6.0, the creation of the grafana ConfigMap in the tenant namespace is, by default, handled by this Helm chart (metaplay-gameserver)
# If necessary, you can still delegate the creation of the grafana ConfigMap to the upstream grafana Helm chart by setting this value to true
createConfigmap: false
image:
tag: 10.1.5
# Creates roles allowing for monitoring of in-namespace ConfigMaps and Secrets for data loading.
rbac:
create: true
namespaced: true # Create only namespaced RBAC resources as game server deployment service accounts do not have cluster-wide access
pspEnabled: false
# Used for discovering dashboards and datasources from ConfigMaps and Secrets
sidecar:
dashboards:
enabled: true
datasources:
enabled: true
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: "default"
orgId: 1
folder: ""
type: "file"
disableDeletion: false
editable: true
options:
path: "/var/lib/grafana/dashboards/default"
# Dashboards to load into Grafana using the chart-native methods (see https://github.com/grafana/helm-charts/tree/main/charts/grafana#import-dashboards)
dashboards:
default:
# metaplay-overview:
# url: "https://dashboards.metaplay.dev/stable/metaplay-overview.json"
# metaplay-server:
# url: "https://dashboards.metaplay.dev/stable/metaplay-server.json"
# General Grafana configs
grafana.ini:
auth:
# Handle logouts via external-auth-server compatible query string arguments
signout_redirect_url: "/oauth/logout"
# Use proxy authentication and expect users to come in via the external-auth-server + nginx reverse proxy
auth.proxy:
enabled: true
header_name: "x-auth-request-email"
header_property: "email"
auto_sign_up: true
grafana_net:
url: https://grafana.net
log:
mode: console
paths:
data: /var/lib/grafana/
logs: /var/log/grafana
plugins: /var/lib/grafana/plugins
provisioning: /etc/grafana/provisioning
server:
domain: ""
root_url: "%(protocol)s://%(domain)s:%(http_port)s/grafana/"
serve_from_sub_path: true
users:
allow_sign_up: false
auto_assign_org: true
auto_assign_org_role: "Editor"
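Putting it together, a minimal custom values file for a development environment might look like the sketch below. All keys come from the chart's default values.yaml shown above; the resource requests (500m CPU, 1000Mi memory) and the Options.dev.yaml entry are illustrative only and should be adjusted to your project:
# Backend/Deployments/develop-server.yaml (illustrative sketch)
config:
  files:
    - "./Config/Options.base.yaml"
    - "./Config/Options.dev.yaml"
shards:
  - name: all
    singleton: true
    requests:
      cpu: 500m
      memory: 1000Mi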