Install Sourcegraph with Kubernetes
Deploying Sourcegraph into a Kubernetes cluster is for organizations that need highly scalable and available code search and code intelligence.
The Kubernetes manifests for a Sourcegraph on Kubernetes installation are in the repository deploy-sourcegraph.
Requirements
- Sourcegraph Enterprise license. You can run through these instructions without one, but you must obtain a license for instances of more than 10 users.
- Kubernetes v1.15
- Verify that you have enough capacity by following our resource allocation guidelines
- Sourcegraph requires an SSD backed storage class
- Cluster role administrator access
- kubectl v1.15 or later. Make sure you have configured kubectl to access your cluster.
- If you are using GCP, you'll need to give your user the ability to create roles in Kubernetes (see GCP's documentation):
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
Clone the deploy-sourcegraph repository and check out the version tag you wish to deploy.
git clone https://github.com/sourcegraph/deploy-sourcegraph
cd deploy-sourcegraph
# 🚨 The master branch tracks development. Use the branch of this repository
# corresponding to the version of Sourcegraph you wish to deploy, e.g. git checkout 3.24
SOURCEGRAPH_VERSION="v3.24.1"
git checkout $SOURCEGRAPH_VERSION
Configure the sourcegraph storage class for the cluster by reading through "Configure a storage class".
If you want to add a large number of repositories to your instance, configure the number of gitserver replicas and the number of indexed-search replicas before you continue with the next step. (See "Tuning replica counts for horizontal scalability" for guidelines.)
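For reference, an SSD-backed storage class might look like the sketch below. This is illustrative only: the provisioner and disk-type parameter shown are GKE-specific assumptions, and the exact manifest may differ from what "Configure a storage class" prescribes for your provider.

```yaml
# Sketch of an SSD-backed StorageClass named "sourcegraph" (GKE assumed;
# other providers use a different provisioner and parameters).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sourcegraph
  labels:
    deploy: sourcegraph
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd   # SSD-backed persistent disks, as Sourcegraph requires
```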
Deploy the desired version of Sourcegraph to your cluster:
./kubectl-apply-all.sh
- Monitor the status of the deployment.
watch kubectl get pods -o wide
- Once the deployment completes, verify Sourcegraph is running by temporarily making the frontend port accessible:
kubectl 1.9.x:
kubectl port-forward $(kubectl get pod -l app=sourcegraph-frontend -o template --template="{{(index .items 0).metadata.name}}") 3080
kubectl 1.10.0 or later:
kubectl port-forward svc/sourcegraph-frontend 3080:30080
Open http://localhost:3080 in your browser and you will see a setup page. Congrats, you have Sourcegraph up and running!
Configuration
See the configuration docs.
Troubleshooting
See the Troubleshooting docs.
Updating
See the Upgrading how-to for instructions on upgrading. See the Upgrading docs for details on what changed in each version and whether manual migration steps are necessary.
Cluster-admin privileges
Note: Not all organizations split admin privileges this way. If yours does not, you don't need to change anything and can ignore this section.
The default installation includes a few manifests that require cluster-admin privileges to apply. Every resource carries a label indicating whether it requires cluster-admin privileges, so a cluster admin can apply just the manifests that other users cannot.
- Manifests deployed by cluster-admin
./kubectl-apply-all.sh -l sourcegraph-resource-requires=cluster-admin
- Manifests deployed by non-cluster-admin
./kubectl-apply-all.sh -l sourcegraph-resource-requires=no-cluster-admin
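As a sketch, the label the selectors above match on appears in each resource's metadata roughly like this (the surrounding Deployment fields are illustrative, not copied from the repository):

```yaml
# Illustrative excerpt showing the sourcegraph-resource-requires label;
# the resource kind, name, and other labels are assumptions for the example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sourcegraph-frontend
  labels:
    deploy: sourcegraph
    sourcegraph-resource-requires: no-cluster-admin
```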
We also provide an overlay that generates a version of the manifests that does not require cluster-admin privileges.
Cloud installation guides
Follow the instructions linked in the table below to provision a Kubernetes cluster for the infrastructure provider of your choice, using the recommended node and disk types in the table.
Note: Sourcegraph can run on any Kubernetes cluster, so if your infrastructure provider is not listed, see the “Other” row. Pull requests to add rows for more infrastructure providers are welcome!
Provider | Node type | Boot/ephemeral disk size
---|---|---
Compute nodes | |
Amazon EKS (better than plain EC2) | m5.4xlarge | N/A
AWS EC2 | m5.4xlarge | N/A
Google Kubernetes Engine (GKE) | n1-standard-16 | 100 GB (default)
Azure | D16 v3 | 100 GB (SSD preferred)
Other | 16 vCPU, 60 GiB memory per node | 100 GB (SSD preferred)