Cloud services are frequently updated. This documentation may not reflect the latest changes. Always verify steps and interfaces with the current provider documentation.
This tutorial will show how to run Develocity on Google Kubernetes Engine.
Develocity can be installed into an existing Kubernetes cluster. It can also be installed on a standalone virtual machine, as shown in our Google Compute Engine installation tutorial. This guide shows how to set up a cluster installation on a Google Kubernetes Engine cluster.
This guide is for the latest version of Develocity and may not work with earlier versions.
Develocity can generally be installed on Kubernetes clusters running modern Kubernetes versions. The exact supported versions are listed in Develocity’s Self-Hosted Kubernetes Installation Guide. Later versions may be compatible but have not been verified to work.
The majority of this tutorial is a quick start guide to creating and minimally configuring a cluster in GKE for a Develocity installation. If you already have GKE expertise and are able to provision a cluster, you may wish to skip straight to the Develocity installation instructions.
This tutorial is not guaranteed to work (and has not been tested) using Google Cloud's Assured Workloads for Government. For assistance installing Develocity using Assured Workloads for Government, please contact your account executive or, if you are already a customer, Develocity customer support.
Prerequisites
1. A Google Cloud account
You can create an account if you do not already have one.
2. A Develocity license
You can request a Develocity trial here. If you have purchased Develocity, you will already have a license file.
3. A Google Cloud project with billing enabled
You will need a Google Cloud project that has billing enabled for this tutorial. If you have a project you wish to use, ensure it has billing and the GKE APIs enabled. You can create a project by following this guide. To enable billing, follow these instructions for your project.
If you are using a new project, you may be asked to enable various APIs when running commands. Go ahead and do this; they are necessary for the tutorial.
4. A Google Cloud IAM user with GKE permissions
If you are the project owner, you likely have all permissions already.
Your Google Cloud IAM user needs permission to create and manage various resources (GKE clusters, static IPs, managed certificates, and buckets). You also need permission to create resources in GKE clusters, including RBAC roles (which requires the container.roles.create permission). Your nodes will be created using the Compute Engine default service account, so you also need permission to use it.
You can obtain these permissions using the following predefined roles:
- Kubernetes Engine Admin (roles/container.admin)
- Certificate Manager Editor (roles/certificatemanager.editor)
- Compute Public IP Admin (roles/compute.publicIpAdmin)
- Storage Object Admin (roles/storage.objectAdmin) for bucket creation and administration
- Service Account User (roles/iam.serviceAccountUser) for the Compute Engine default service account
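If these roles must be granted to you, a project owner can bind them with gcloud. The following sketch only prints the commands; the project ID and user email are placeholders, and you would drop the echo to actually apply the bindings:

```shell
PROJECT_ID="my-project"          # placeholder: your project ID
USER_EMAIL="[email protected]"   # placeholder: the IAM user to grant
for ROLE in roles/container.admin \
            roles/certificatemanager.editor \
            roles/compute.publicIpAdmin \
            roles/storage.objectAdmin \
            roles/iam.serviceAccountUser; do
  # Print each binding command; remove 'echo' to run it for real.
  echo gcloud projects add-iam-policy-binding "$PROJECT_ID" \
       --member="user:${USER_EMAIL}" --role="$ROLE"
done
```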
Consult the Develocity installation guide for information about debugging required Kubernetes permissions.
For more details on Google Kubernetes Engine's access control model, consult the GKE IAM guide and the GKE access control guide.
If you choose to follow our Cloud SQL appendix, you will need the permissions described in it, too.
5. Hostname (optional)
Google Cloud Platform machines are provisioned with an external IP address but no hostname. Develocity is not accessible directly by IP, but there are services like nip.io that automatically map hostnames to IPs, so using your own hostname is not required.
If you want to access Develocity by a hostname of your choosing (e.g. develocity.example.com), you will need the ability to create the necessary DNS record to route this name to the instance IP address.
You can change this later: start with a public DNS name based on the IP address, and reconfigure to use a custom hostname later if desired.
Host Requirements
This section outlines cluster and host requirements for the installation.
1. Database
Develocity installations have two database options:
- An embedded database that is highly dependent on disk performance.
- A user-managed database, which can be any PostgreSQL database compatible with versions 14 through 17, including Google Cloud SQL.
By default, Develocity stores its data in a PostgreSQL database that is run as part of the application itself, with data being stored in a directory mounted on its host machine.
Cloud SQL Database
There are instructions for using Google Cloud SQL as a user-managed database in Using Cloud SQL as a Develocity user-managed database. This has several benefits, including easier resource scaling, backup management, and failover support. It allows you to store Build Scans in the Object Storage.
2. Storage
Develocity uses persistent volume claims for storing data, logs, and backups. If your cluster is configured with a default StorageClass, this StorageClass will be used.
If no default StorageClass is configured, or if you want to use different StorageClasses, you will need to provide the name of the StorageClass to use for provisioning persistent volumes.
Different StorageClasses can be specified for the different types of storage used.
It’s recommended to use faster StorageClasses for data and a separate slower, cost-efficient one for backups.
Some Pods are associated with multiple persistent volumes. On Kubernetes platforms with multiple availability zones, a Pod and its persistent volumes must be located in the same zone. In this case, it’s recommended to use a StorageClass with a volumeBindingMode of WaitForFirstConsumer to ensure that all persistent volumes are provisioned in the same zone the Pod is scheduled in.
It’s strongly recommended to use StorageClasses that allow persistent volume claim expansion, if available. This makes it straightforward to expand storage as your use of Develocity grows.
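Both recommendations can be combined in a single custom StorageClass. The manifest below is a sketch: the name develocity-data-ssd is arbitrary, and pd-ssd is one of GKE's SSD-backed persistent disk types.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: develocity-data-ssd           # arbitrary example name
provisioner: pd.csi.storage.gke.io    # GKE persistent disk CSI driver
parameters:
  type: pd-ssd                        # SSD-backed persistent disk
volumeBindingMode: WaitForFirstConsumer  # provision in the Pod's zone
allowVolumeExpansion: true               # permit later PVC expansion
```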
Capacity
The recommended minimum capacities for the persistent volumes are:
Description | Size in GB |
---|---|
Build Scans | 250 |
Build Scans backups | 250 |
Build Cache | 10 |
Test Distribution | 10 |
Logs and Monitoring | 22 |
Embedded Object Storage | 10 |
If you are producing many Build Scans a day (more than 1 GB) or intend to retain Build Scans for long periods (30 days or more), you might want to provision more storage. If your storage class doesn’t allow expanding volumes, you should also consider preparing for future data growth by adding disk capacity upfront.
Performance
For production workloads, the data storage class should exhibit SSD-class disk performance of at least 3000 IOPS (input/output operations per second). The storage classes used for logs and backup volumes might be slower.
Disk performance has a significant impact on Develocity performance. Network file systems (such as Amazon EFS) are not compatible with Develocity due to their performance characteristics.
Object Storage
Develocity can be configured to store Build Scan data in an Object Storage service, such as Google Cloud Storage. This can improve performance in high-traffic installations by reducing the load on the database. See Build Scan Object Storage in the Develocity Administration Manual for a description of the benefits and limitations.
There are instructions on how to configure Develocity to use Google Cloud Storage as an object store in the Object Storage Configuration section of the Kubernetes Helm Chart Configuration Guide.
3. Network Connectivity
Develocity requires network connectivity for periodic license validation.
It is strongly recommended that production installations of Develocity are configured to use HTTPS with a trusted certificate.
When installing Develocity, you will need to provide a hostname, such as develocity.example.com.
Preinstallation
You need to use a number of tools to create Google Cloud resources and install Develocity. You can either install them locally, or use Google Cloud’s Cloud Shell, which comes with the tools you will need preinstalled and mostly preconfigured.
If you decided to use Cloud Shell, complete 2. Configure gcloud (unless you already have the project and zone configured) and then skip to Creating a Google Kubernetes Engine Cluster.
1. Install gcloud
You will be using the gcloud command line tool to provision and configure the Google Kubernetes Engine cluster. To install gcloud on your local machine, follow the instructions in the Google Cloud documentation.
2. Configure gcloud
Cloud Shell does not save gcloud configurations by default. To save your configuration, follow these instructions.
To configure gcloud, run gcloud init and follow the initialization guide. You want to use the project you created or decided on in 3. A Google Cloud project with billing enabled. If your project doesn’t have a default zone, you need to set the zone (and region, which will often be set automatically from the zone) to the zone you wish to install Develocity in. If you don’t know which zone or region to select, consult Google Cloud’s region and zone documentation.
Pick the region geographically closest to you or to any pre-existing compute resources, such as CI agents, to ensure the best performance.
3. Install kubectl
To easily install a gcloud-managed version of kubectl, you can run:
$ gcloud components install kubectl
You can also install kubectl through any other means. The Kubernetes documentation lists some of the most popular options (note that you only need to install kubectl, not any of the other tools listed there).
4. Install the gcloud auth plugin for kubectl
To authenticate with your GKE cluster, kubectl needs a GKE-specific plugin. If you are using an older Kubernetes version, this may not be necessary, but it does no harm.
To install the plugin, run:
$ gcloud components install gke-gcloud-auth-plugin
For more details on installation, and why the plugin is necessary, see Google’s blog post.
5. Miscellaneous command line tools
This guide assumes that various common command line tools such as wget, curl, and jq are already available on your host. Please refer to the official documentation of these tools for installation instructions.
Creating a Google Kubernetes Engine Cluster
In this section you will create a Google Kubernetes Engine cluster to run a Develocity instance, and create an external IP for Develocity to use.
If you’re using Cloud Shell, remember to run these commands there.
1. Create a cluster
For this tutorial, we will use a Standard cluster with the default three nodes, using the e2-standard-4 machine type with 4 vCPUs and 16 GB memory. See Resource requirements in the Self-Hosted Kubernetes Installation Guide for recommendations.
Develocity is fully compatible with Autopilot clusters, which automatically provision nodes based on your workload. There are also GKE solutions for autoscaling nodes horizontally and vertically in Standard clusters.
Name this cluster develocity. To create it, run:
$ gcloud container clusters create develocity \
--machine-type e2-standard-4 \
--workload-pool=$(gcloud config get project).svc.id.goog
This command will take a while to complete. It will automatically add and activate a kubectl configuration.
You should then be able to see your cluster when running the following command:
$ kubectl config get-contexts
CURRENT   NAME                                   CLUSTER                                AUTHINFO
*         gke_project-name_us-west1_develocity   gke_project-name_us-west1_develocity   gke_project-name_us-west1_develocity
For more details, see GKE’s Standard cluster creation documentation or GKE general documentation.
2. Create an external IP
Before installing Develocity, you need to create a static external IP for Develocity to use. While it is possible to configure the IP and hostname later by running helm upgrade with the updated Helm values file, it is easier to configure them now.
To create the static IP, run:
$ gcloud compute addresses create develocity-static-ip --global
You can see the IP address used by the resource by running:
$ gcloud compute addresses describe develocity-static-ip --global --format='value(address)'
3. Configure the hostname
If you intend to use a custom hostname to access your Develocity instance, you now need to add the appropriate DNS records.
Add an A record for your hostname that points to the IP you created in the previous step. For example:
develocity.example.com A 34.110.226.160
Verify that your DNS record works correctly before installing Develocity, for example by running dig develocity.example.com.
If you don’t have a domain, use nip.io
An IP address cannot be used directly as the Develocity hostname. If you do not wish to set up a permanent DNS record at this time, you can instead use a service like nip.io to provide one based on the IP address. Any of the dash-based naming schemes on the nip.io web page should work, but the following command will generate a useful short name tied to that IP address:
$ DV_IP_ADDR=$(gcloud compute addresses describe develocity-static-ip --global --format='value(address)') && \
echo "develocity-$(printf '%02x' $(echo ${DV_IP_ADDR//./ })).nip.io"
develocity-226ee2a0.nip.io
develocity-226ee2a0.nip.io can then be used as the hostname.
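The command builds the name by hex-encoding each octet of the IP address. As a standalone sketch of the mechanics, using the example address from above:

```shell
IP="34.110.226.160"                 # example address from above
# ${IP//./ } replaces dots with spaces, yielding four octets;
# printf reuses the '%02x' format for each argument, hex-encoding each one.
DV_HOSTNAME="develocity-$(printf '%02x' ${IP//./ }).nip.io"
echo "$DV_HOSTNAME"                 # develocity-226ee2a0.nip.io
```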
4. Create a managed SSL certificate
You can provision and use a Google-managed trusted SSL certificate using Kubernetes manifests. This will work even with a hostname you don’t own (such as one from nip.io) as long as it resolves to a Google Cloud load balancer using your certificate.
We will use a managed certificate in this tutorial. If you want to use a custom SSL certificate instead, skip this step and follow the instructions in HTTP or HTTPS when creating your Helm values file below.
To provision a Google-managed SSL certificate, create a managed certificate resource on the cluster. Save the following manifest as managed-cert.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: develocity
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: develocity-cert
namespace: develocity
spec:
domains:
- develocity.example.com (1)
1 | Use the hostname you decided on in 3. Configure the hostname. |
To apply this manifest, run the following command:
$ kubectl apply -f managed-cert.yaml
Note that the certificate won’t start provisioning until Develocity is installed.
For more details, consult Google’s guide to using managed certificates with GKE.
Installing Develocity
In this section you will install Develocity on your newly created instance. For full details on installation options, please see the Develocity Helm Kubernetes Installation Manual.
1. Install helm
To install Helm, run:
$ curl -qs https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
See Helm’s installation documentation for more details, and non-Linux instructions.
2. Prepare a Helm values file
Create a Helm values file named values.yaml as shown below:
global:
hostname: develocity.example.com (1)
externalSSLTermination: true (2)
storage:
data:
class: premium-rwo (3)
backup:
class: standard (4)
ingress:
enabled: true
annotations:
kubernetes.io/ingress.global-static-ip-name: develocity-static-ip (5)
networking.gke.io/managed-certificates: develocity-cert (6)
grpc:
serviceAnnotations:
cloud.google.com/app-protocols: '{"grpc":"HTTP2"}' (7)
1 | Use the hostname you decided on in 3. Configure the hostname or substitute it later as shown below. |
2 | Required to leverage ManagedCertificate provided by Google Cloud in step 4. Create a managed SSL certificate |
3 | Use a high-performance volume for data volumes. See Choosing storage classes. |
4 | Use a low performance volume for backups. See Choosing storage classes. |
5 | Configure the Ingress’s load balancer to use the static IP you created earlier. |
6 | Configure the Ingress’s load balancer to use the managed SSL certificate you created. |
7 | Configure the Ingress’s load balancer to support the gRPC protocol via HTTP2, required for the Bazel build tool. |
When adding things to your Helm values file, merge any duplicate blocks. Alternatively, you can use separate files and pass all of them with --values «file» when running Helm commands.
This file configures Develocity and its installation. For more details on what is configurable, see the Kubernetes Helm Chart Configuration Guide.
If you want to use a nip.io hostname as described in 3. Configure the hostname, you can substitute it into the Helm values file by running:
$ (
DV_IP_ADDR=$(gcloud compute addresses describe develocity-static-ip --global --format='value(address)')
DV_HOSTNAME="develocity-$(printf '%02x' $(echo ${DV_IP_ADDR//./ })).nip.io"
sed -i "s/develocity.example.com/${DV_HOSTNAME}/g" path/to/values.yaml
)
If you want to leverage Cloud SQL PostgreSQL and/or Google Cloud Storage for your installation, please follow the appendixes, and return to this procedure when finished.
Choosing storage classes
In the example Helm values file, we configure Develocity to use a high-performance SSD (the premium-rwo storage class) for its data volumes. This is optional, but recommended for best performance. See the installation manual’s section on storage requirements for more details.
Similarly, we configure the backup storage and log storage to use non-SSD disks (the standard storage class). This is more cost-efficient and avoids Google Cloud’s default 500 GB SSD quota.
You can see the performance characteristics of the different Google Cloud disk types in Google Cloud’s disk performance docs. Note that Develocity has a 250 GB main data volume by default.
If you are using a user-managed database (such as a Cloud SQL database), then this data volume is not created.
Storage classes do not necessarily map to disk types, but you can see which disk type each storage class uses by running:
$ kubectl describe storageclass
The type will be listed as Parameters: type=pd-«disk-type» for the storage classes that map to persistent disks.
Google Cloud’s default standard-rwo storage class maps to the pd-balanced disk type, which is an SSD, despite the storage class’s name. If you don’t explicitly set a storage class for the backup and logs storage, you will end up using SSDs for backup and log storage.
3. Install the gradle-enterprise Helm chart
First, add the https://helm.gradle.com/ Helm repository and update it:
$ helm repo add gradle https://helm.gradle.com/ && \
helm repo update gradle
If you’re using an older Helm version (which Cloud Shell may), you may need to run helm repo update instead of helm repo update gradle.
Second, download post-renderer resources into your current working directory:
$ ( for f in 'backend-config.yaml' 'kustomization.yaml' 'add-develocity-backend-config.sh' 'patches.yaml'
do
curl -qsLO "https://docs.gradle.com/develocity/tutorials/gcp-kubernetes/kustomization/$f"
done
chmod a+x add-develocity-backend-config.sh )
This kustomize script creates additional Google Cloud resources and configures service definitions, neither of which is possible with the Helm chart alone.
Make sure that kustomize is available on your computer. If you have opted for Cloud Shell, kustomize is already pre-installed. Otherwise, you can install it with the following command:
$ curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
Then run helm install with the following command:
$ helm install \
--create-namespace --namespace develocity \
develocity \
gradle/gradle-enterprise \
--values path/to/values.yaml \(1)
--set-file global.license.file=path/to/develocity.license \(2)
--post-renderer=./add-develocity-backend-config.sh (3)
1 | The Helm values file you created in 2. Prepare a Helm values file. |
2 | The license you obtained in 2. A Develocity license. |
3 | The post renderer script you downloaded |
You should see output similar to this:
NAME: develocity
LAST DEPLOYED: Wed Jul 13 04:08:35 2022
NAMESPACE: develocity
STATUS: deployed
REVISION: 1
TEST SUITE: None
If you instead see an error like:
Error: INSTALLATION FAILED: roles.rbac.authorization.k8s.io is forbidden: User "[email protected]" cannot create resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "develocity": requires one of ["container.roles.create"] permission(s).
then your IAM user is missing the container.roles.create permission. Review 4. A Google Cloud IAM user with GKE permissions and retry the installation.
4. Wait for Develocity to start
You can see the status of Develocity starting up by examining its pods.
$ kubectl --namespace develocity get pods
NAME                                               READY   STATUS              RESTARTS   AGE
gradle-database-5f9ddc958b-pn8wm                   0/3     Init:0/3            0          4s
gradle-embedded-object-storage-86745c9cd7-2lpb9    0/1     ContainerCreating   0          4s
gradle-enterprise-app-58b7b75579-4c5hz             0/1     Init:0/4            0          4s
gradle-enterprise-operator-bdb9b67bc-5m9wb         0/1     ContainerCreating   0          4s
gradle-keycloak-64cd768b89-rk9g2                   0/2     Init:0/2            1          4s
gradle-metrics-dbbb75cf5-k86fs                     1/2     Running             0          4s
gradle-monitoring-849994bb56-4wr9t                 0/3     ContainerCreating   0          3s
gradle-proxy-6d7b965c4f-b6jdb                      2/3     Running             0          3s
gradle-test-distribution-broker-5494796d9b-gb289   0/1     Init:0/1            0          3s
If you use the GKE web UI, some deployments or stateful sets will show as Does not have minimum availability while their init containers are running.
Eventually, the pods should all report as Running:
$ kubectl --namespace develocity get pods
NAME                                               READY   STATUS    RESTARTS   AGE
gradle-database-5f9ddc958b-pn8wm                   3/3     Running   0          4m41s
gradle-embedded-object-storage-86745c9cd7-2lpb9    1/1     Running   0          4m41s
gradle-enterprise-app-58b7b75579-4c5hz             1/1     Running   0          4m41s
gradle-enterprise-operator-bdb9b67bc-5m9wb         1/1     Running   0          4m41s
gradle-keycloak-64cd768b89-rk9g2                   2/2     Running   1          4m41s
gradle-metrics-dbbb75cf5-k86fs                     2/2     Running   0          4m41s
gradle-monitoring-849994bb56-4wr9t                 3/3     Running   0          4m41s
gradle-proxy-6d7b965c4f-b6jdb                      3/3     Running   0          4m41s
gradle-test-distribution-broker-5494796d9b-gb289   1/1     Running   0          4m41s
Next, check that all backends are healthy:
$ kubectl get ingress gradle-enterprise-ingress --namespace develocity --output jsonpath="{.metadata.annotations}" | jq -r '.["ingress.kubernetes.io/backends"]' | jq .
{
  "k8s1-6c91367d-develocit-gradle-test-distribution-b-808-e398ea74": "HEALTHY",
  "k8s1-6c91367d-develocity-gradle-enterprise-app-6011-7ee160cb": "HEALTHY",
  "k8s1-6c91367d-develocity-gradle-proxy-80-d2766b81": "HEALTHY",
  "k8s1-6c91367d-kube-system-default-http-backend-80-72022d96": "HEALTHY"
}
Once all pods have a status of Running and the backends are HEALTHY, the system is up, and you can interact with it by visiting its URL in a web browser. You can also visit the URL as soon as the backends are up and healthy; you will see a starting screen, which redirects to the Build Scan list once the app has started.
If the pods do not all start correctly, please see the troubleshooting section in the administration manual.
Once the pods are ready, it may take up to 60 minutes (though usually much less) for Google to provision the managed SSL certificate. The status of the certificate can be checked by running:
$ kubectl describe managedcertificate --namespace develocity develocity-cert
Look for the Status, Certificate Status, and Domain Status fields in the output.
Using Develocity
Many features of Develocity, including access control, database backups, and Build Scan retention, can be configured in Develocity itself once it is running. The administration manual walks you through the various features you can configure post-installation; the section is worth reading.
For instructions on how to start using Develocity in your builds, consult the Getting Started with Develocity guide. See Teardown and Cleanup for instructions on uninstalling Develocity and deleting related resources, such as persistent disk volumes.
Further reading
- Develocity Helm Kubernetes Installation Manual — Full installation description and options for this type of installation.
- Develocity Admin Manual — Admin tasks around Develocity and the build cache server.
- Use cases for the build cache — Use cases for Gradle’s build cache, from local-only development to caching task outputs across large teams.
Appendix A: Using Cloud SQL as a Develocity user-managed database
Develocity can use a user-managed database instead of using its own embedded database. This can have a number of benefits, including easier resource scaling (and even autoscaling), easier backup and snapshot management, and failover support. For details on the pros and cons of using a user-managed database with Develocity, see the Database options section of the Kubernetes Helm Chart Configuration Guide. This appendix will walk you through using Google Cloud SQL as a user-managed database.
Obtain the required permissions
You will need permission to create and manage Cloud SQL instances and service accounts, and to add roles to service accounts.
You can obtain these permissions using the following built-in roles: roles/iam.serviceAccountAdmin, roles/resourcemanager.projectIamAdmin, and roles/cloudsql.admin.
If getting roles/iam.serviceAccountAdmin and roles/resourcemanager.projectIamAdmin is difficult, you can have someone else who has permission to create service accounts and add roles to them complete 2. Bind Kubernetes Service Account to Cloud SQL permissions.
Set up a Cloud SQL instance
Before starting, it is a good idea to review Develocity’s supported Postgres versions and storage requirements.
1. Decide on a root password
Decide on a root password for the database instance. We will refer to it as «db-root-password». This is the password you will use for your database connection, so save it somewhere secure.
The superuser is only used by Develocity to set up the database and create migrator and application users. You can avoid using the superuser from Develocity by setting up the database yourself, as described in the Database options section of the Kubernetes Helm Chart Configuration Guide. Please contact Gradle support for help with this.
2. Create the Cloud SQL instance
Create the Cloud SQL instance:
$ gcloud sql instances create develocity \
--edition=enterprise \
--database-version=POSTGRES_17 \
--cpu=2 \
--memory=8GB \
--storage-size=250GB \
--require-ssl \
--database-flags=max_connections=200 \
--zone=$(gcloud config get compute/zone) \(1)
--root-password=«db-root-password» \
&& \
gcloud sql databases create develocity --instance=develocity (2)
1 | The zone where you created your GKE cluster. |
2 | Creates the develocity database inside the previously created Cloud SQL instance. |
This will create an instance with 2 CPUs and 8 GB of RAM, with 250 GB of storage, without any replication. The storage will automatically increase if necessary, but will not decrease.
Make the database accessible from your GKE cluster
To connect to the instance, you will use the Cloud SQL Auth proxy running as a standalone service. While the official documentation recommends running the proxy as a sidecar container, this does not work for Develocity because some of our init-containers require database access.
1. Create a Kubernetes service account for the Cloud SQL proxy
The Cloud SQL Proxy needs a service account to authenticate with your Cloud SQL instance. Save the following manifest as db-service-account.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: develocity
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: dv-database-service-account
namespace: develocity
To apply this manifest, run the following command:
$ kubectl apply -f db-service-account.yaml
2. Bind Kubernetes Service Account to Cloud SQL permissions
You need to bind the previously created service account to the required role. To do this, run the following commands:
$ (PROJECT_ID=$(gcloud config get project); \
PROJECT_NUMBER=$(gcloud projects list \
--filter="$PROJECT_ID" \
--format="value(PROJECT_NUMBER)"); \
NAMESPACE="develocity"; \
KUBERNETES_SERVICE_ACCOUNT="dv-database-service-account"; \
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--role="roles/cloudsql.client" \
--member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KUBERNETES_SERVICE_ACCOUNT}")
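The --member flag uses the Workload Identity principal identifier format, which scopes the IAM binding to a single Kubernetes service account. The following sketch shows how the identifier is assembled, using placeholder values in place of your real project ID and number:

```shell
PROJECT_ID="my-project"        # placeholder: your project ID
PROJECT_NUMBER="123456789"     # placeholder: your numeric project number
NAMESPACE="develocity"
KSA="dv-database-service-account"
# The principal names the project's Workload Identity pool
# (named after the project ID), then the namespace and service account.
MEMBER="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KSA}"
echo "$MEMBER"
```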
3. Deploy the Cloud SQL Proxy
Get the connection name of your Cloud SQL instance by running:
$ gcloud sql instances describe develocity --format='value(connectionName)'
We will refer to this as «connection-name».
Then deploy the proxy and create a service for it by applying the following manifest:
apiVersion: v1
kind: Namespace
metadata:
name: develocity
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gradle-database-proxy
namespace: develocity
spec:
selector:
matchLabels:
app.kubernetes.io/part-of: gradle-enterprise
app.kubernetes.io/component: database-proxy
template:
metadata:
labels:
app.kubernetes.io/part-of: gradle-enterprise
app.kubernetes.io/component: database-proxy
spec:
serviceAccountName: dv-database-service-account
containers:
- name: cloud-sql-proxy
image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:latest (1)
ports:
- containerPort: 6543
args: (2)
- "--port"
- "6543"
- "--address"
- "0.0.0.0"
- "«connection-name»" (3)
resources:
requests:
cpu: "1" (4)
memory: "2Gi" (5)
limits:
cpu: "2" (4)
memory: "4Gi" (5)
securityContext:
runAsNonRoot: true
---
apiVersion: v1
kind: Service
metadata:
name: gradle-database-proxy
namespace: develocity
spec:
selector:
app.kubernetes.io/part-of: gradle-enterprise
app.kubernetes.io/component: database-proxy
ports:
- port: 5432
targetPort: 6543
1 | This uses the latest version of the Cloud SQL proxy. It is highly recommended to use a specific version for long-lived environments. |
2 | The arguments to the Cloud SQL proxy. |
3 | The connection name of your Cloud SQL instance, which you can substitute as shown below. |
4 | While it is fairly lightweight, the proxy’s CPU use scales linearly with the amount of database IO. |
5 | While it is fairly lightweight, the proxy’s memory use scales linearly with the amount of active connections. |
Easy manifest application with substitution
To easily apply this manifest while substituting in the correct «connection-name», run (verbatim):
$ CONNECTION_NAME=$(gcloud sql instances describe develocity --format='value(connectionName)') && \
sed "s/«connection-name»/${CONNECTION_NAME}/g" | kubectl apply -f -
And then paste the above manifest (verbatim) into stdin
.
When writing or pasting to a shell’s stdin, use EOF (usually ctrl+d) to end the input.
Configure Develocity to use your Cloud SQL instance
Add the following configuration snippet to your Helm values file:
database:
location: user-managed
connection:
host: gradle-database-proxy
port: 5432
databaseName: develocity
params: "?ssl=false"
credentials:
superuser:
username: postgres
password: «db-root-password»
If you skipped to this appendix from 2. Prepare a Helm values file while installing Develocity, continue at 3. Install the gradle-enterprise Helm chart.
Appendix B: Using Google Cloud Storage as user-managed Object Storage
Develocity can use user-managed Object Storage instead of its own embedded version. This has several benefits, from scalable storage to reduced operational burden and better backup and failover management. It allows you to store Build Scans in Object Storage. This appendix will walk you through using Google Cloud Storage as user-managed Object Storage.
1. Create the Google Cloud Storage buckets
Create two buckets using the gcloud CLI:
$ gcloud storage buckets create \
gs://develocity-application-data \(1)
--location=$(gcloud config get compute/zone | sed 's@\(.*\)-[a-z]@\1@') \(3)
--uniform-bucket-level-access \
&& \
gcloud storage buckets create \
gs://develocity-monitoring-data \(2)
--location=$(gcloud config get compute/zone | sed 's@\(.*\)-[a-z]@\1@') \(3)
--uniform-bucket-level-access
1 | The name of the bucket meant to store application data, like Build Scans or Build Cache entries |
2 | The name of the bucket meant to store monitoring data, like logs and metrics collected during application lifetime |
3 | The region where we want to store data. For performance reasons, we recommend using the same region as your cluster |
Storing data in different buckets allows you to apply different strategies for access control, replication, soft-delete, backup, and more. However, you can use one bucket for both application and monitoring data; this is an operational decision based on your practices. |
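The --location flag above derives the bucket region from your configured compute zone by stripping the trailing zone letter with sed. The transformation can be checked in isolation; the zone value below is a hypothetical example standing in for the output of `gcloud config get compute/zone`:

```shell
# A GKE zone such as us-central1-a is a region plus a trailing zone letter.
zone="us-central1-a"  # hypothetical; yours comes from `gcloud config get compute/zone`

# The greedy \(.*\) captures everything up to the final "-<letter>", i.e. the region.
region=$(echo "$zone" | sed 's@\(.*\)-[a-z]@\1@')
echo "$region"  # prints us-central1
```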
2. Bind Kubernetes service accounts to IAM Roles required
To interact with the Object Storage, some Develocity components require read and write access to the previously created buckets via the role roles/storage.objectUser. The association between components and buckets is as follows:
-
gradle-enterprise-app
and gradle-enterprise-app-background-processor
need to access the application bucket, named gs://develocity-application-data
-
gradle-enterprise-operator
and gradle-monitoring
need to access the monitoring bucket, named gs://develocity-monitoring-data
We can achieve those bindings using the following commands:
$ (
PROJECT_ID=$(gcloud config get project)
PROJECT_NUMBER=$(gcloud projects list \
--filter="$PROJECT_ID" \
--format="value(PROJECT_NUMBER)")
NAMESPACE="develocity"
for KUBERNETES_SERVICE_ACCOUNT in 'gradle-enterprise-app' 'gradle-enterprise-app-background-processor' (1)
do
gcloud storage buckets add-iam-policy-binding gs://develocity-application-data \
--role="roles/storage.objectUser" \
--member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KUBERNETES_SERVICE_ACCOUNT}" \
--condition=None (3)
done
for KUBERNETES_SERVICE_ACCOUNT in 'gradle-enterprise-operator' 'gradle-monitoring' (2)
do
gcloud storage buckets add-iam-policy-binding gs://develocity-monitoring-data \
--role="roles/storage.objectUser" \
--member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KUBERNETES_SERVICE_ACCOUNT}" \
--condition=None (3)
done
)
1 | Develocity components requiring access to the application bucket: gradle-enterprise-app and gradle-enterprise-app-background-processor |
2 | Develocity components requiring access to the monitoring bucket: gradle-enterprise-operator and gradle-monitoring |
3 | For each Kubernetes service account in the lists above, we bind the role roles/storage.objectUser on the corresponding bucket; --condition=None makes the binding unconditional and avoids an interactive prompt |
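The long --member value in these commands is a Workload Identity principal identifier. Its structure is easier to read when assembled from its parts; the project values below are hypothetical placeholders for what `gcloud config get project` and `gcloud projects list` would return:

```shell
# Hypothetical project identifiers, standing in for the real gcloud output.
PROJECT_ID="my-project"
PROJECT_NUMBER="123456789012"
NAMESPACE="develocity"
KSA="gradle-enterprise-app"

# principal://…/<workload-identity-pool>/subject/ns/<namespace>/sa/<service-account>
# identifies exactly one Kubernetes service account in the cluster's pool.
MEMBER="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KSA}"
echo "$MEMBER"
```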
3. Configure Develocity to use your Google Cloud Storage bucket
Develocity must now be configured to use Google Cloud Storage. To do this, you must use the unattended configuration mechanism. This section will describe how to extend your Helm values file to include the correct unattended configuration block for Google storage.
First you need to create a minimal unattended configuration file. This requires you to choose a password for the system user and hash it. To do this, install develocityctl.
Then run the following command to hash your password, read from standard input (stdin), and write it to secret.txt:
$ develocityctl config-file hash -o secret.txt -s -
We will refer to the hashed password, available inside secret.txt, as «hashed-system-password».
To use your buckets, add the following to your Helm values file:
global:
unattended:
configuration:
version: 12 (1)
systemPassword: "«hashed-system-password»" (2)
buildScans:
incomingStorageType: objectStorage
objectStorage:
type: googleCloudStorage (3)
googleCloudStorage:
bucket: develocity-application-data (4)
credentials:
type: workloadIdentity (5)
monitoring: (6)
bucket: develocity-monitoring-data (7)
credentials:
type: workloadIdentity (5)
1 | The version of the unattended configuration |
2 | Your hashed system password |
3 | Object Storage type used in this installation, here set to googleCloudStorage |
4 | Bucket created in 1. Create the Google Cloud Storage buckets, used for application data storage (Build Cache, Build Scan) |
5 | Authentication mechanism configured for this installation. A serviceAccount type is also available, but it is less secure and not recommended |
6 | Configuration block dedicated to monitoring data storage (metrics, logs). Its structure is identical to the one in objectStorage.googleCloudStorage |
7 | Bucket created in 1. Create the Google Cloud Storage buckets, used for monitoring data (metrics) |
When adding things to your Helm values file, merge any duplicate blocks. Alternatively, you can use separate files and pass all of them with --values «file» when running Helm commands.
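Within a single YAML file, a duplicate top-level key is not merged automatically: depending on the parser it is either rejected or silently overrides the earlier block. A minimal sketch of the required merge, using illustrative key names alongside the unattended block shown above:

```yaml
# WRONG: two separate global blocks in one file
#   global:
#     unattended: ...
#   global:
#     someOtherSetting: ...   # hypothetical key

# RIGHT: one global block containing both
global:
  unattended:
    configuration:
      version: 12
      # ...
  someOtherSetting: true      # hypothetical key, shown only to illustrate the merge
```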
Switching between embedded Object Storage and user-managed Object Storage is not supported. |
Appendix C: Teardown and Cleanup
This appendix will walk you through tearing down Develocity and deleting any resources created by following this tutorial. Before deleting your cluster, you should uninstall the Develocity Helm chart. Otherwise, the persistent storage disks will not be deleted.
To uninstall Develocity, run:
$ helm uninstall --namespace develocity develocity
After executing this command, wait a minute or so for the disks to be deleted. The disks backing the three log volumes and the nodes' boot disks will remain. The log disks can be deleted manually if you wish, and the node boot disks will be deleted when you delete the cluster. You can list the disks by running:
$ gcloud compute disks list
To delete the managed certificate and static IP address, run:
$ kubectl delete managedcertificate --namespace develocity develocity-cert && \
gcloud compute addresses delete develocity-static-ip --global
To delete the cluster, run:
$ gcloud container clusters delete develocity
This will ask if you want to continue. Enter y to delete the cluster.
If you’re using other resources, like a Cloud SQL database, remember to delete them too. Cloud SQL teardown instructions are in the section below.
Cloud SQL
If you followed Using Cloud SQL as a Develocity user-managed database, you have some additional cleanup to do.
Deleting a Cloud SQL instance also deletes any automated backups of its database. |
To delete your Cloud SQL instance, run:
$ gcloud sql instances delete develocity
To also delete the service account binding you created, run:
$ (
PROJECT_ID=$(gcloud config get project)
PROJECT_NUMBER=$(gcloud projects list \
--filter="$PROJECT_ID" \
--format="value(PROJECT_NUMBER)")
NAMESPACE="develocity"
KUBERNETES_SERVICE_ACCOUNT="dv-database-service-account"
gcloud projects remove-iam-policy-binding "$PROJECT_ID" \
--role="roles/cloudsql.client" \
--member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KUBERNETES_SERVICE_ACCOUNT}" \
--condition=None
)
If you didn’t get full permissions for service account management, you may not be able to do this yourself. |
Cloud Storage
If you followed Using Google Cloud Storage as user-managed Object Storage, you have some additional cleanup to do.
To delete your Cloud Storage buckets, run:
$ gcloud storage rm --recursive \
gs://develocity-application-data \
gs://develocity-monitoring-data
To also delete the service account bindings you created, run:
$ (PROJECT_ID=$(gcloud config get project)
PROJECT_NUMBER=$(gcloud projects list \
--filter="$PROJECT_ID" \
--format="value(PROJECT_NUMBER)")
NAMESPACE="develocity"
for KUBERNETES_SERVICE_ACCOUNT in 'gradle-enterprise-app' 'gradle-enterprise-app-background-processor' (1)
do
gcloud storage buckets remove-iam-policy-binding gs://develocity-application-data \
--role="roles/storage.objectUser" \
--member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KUBERNETES_SERVICE_ACCOUNT}" \
--condition=None (3)
done
for KUBERNETES_SERVICE_ACCOUNT in 'gradle-enterprise-operator' 'gradle-monitoring' (2)
do
gcloud storage buckets remove-iam-policy-binding gs://develocity-monitoring-data \
--role="roles/storage.objectUser" \
--member="principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KUBERNETES_SERVICE_ACCOUNT}" \
--condition=None (3)
done)