Engineering · 4 min read

Connect to Google Kubernetes Engine with GCP credentials and pure Golang

Authenticate to GKE without kubeconfig

Recently I built a small service that periodically connects to all of our Kubernetes clusters in a given Google Cloud Platform project and reports on what is running in each one. As part of building this service, I had to solve a pretty basic problem: how do I connect to GKE cross-cluster (or even cross-project)? A service running in Kubernetes can talk to its own cluster pretty trivially, but going cross-cluster means thinking about authentication. In this article, I’ll share some Go code to allow GKE-hosted services to connect with external clusters using Google Service Account permissions.

At the outset, I figured that what I wanted to do must be possible, since this is exactly how `gcloud container clusters` works. Running a command like:
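```
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE --project PROJECT_ID
```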

creates or updates a local ~/.kube/config file. This config file is how kubectl actually connects to GKE clusters using GCP credentials. I just needed to mimic the same pieces in a Go server. Should be pretty easy, right?

If you just want the answer, feel free to jump down to the end and copy-paste my final solution! But if you’re interested in how I got there, read on.

Searching the internet was my first task, but unfortunately I couldn’t find exactly what I needed

The first answer I found on the subject suggested, as one possible solution, bundling the gcloud binary with my service, and having my service literally run gcloud container clusters at runtime. I found that idea problematic:

  • Bundling gcloud would add significant complication to the service image build; I’d be going off the rails of the streamlined build process we already have in place for Go services.

  • Invoking gcloud is ultimately a stateful operation in the container; I wouldn’t be able to write encapsulated code and forget about it, or analyze multiple clusters in parallel without worrying about mutable config state.

The second answer I found suggested producing a valid ~/.kube/config file in advance, bundling it with my service, and using that to connect to various kube clusters. That didn’t work for me either; I wanted my service to be fully dynamic, not something I’d have to rebuild and redeploy whenever we added or removed a cluster. But the article did give me an idea...

Digging deeper into the library code

If all I needed was a valid ~/.kube/config, then ultimately I just needed to see how the information in ~/.kube/config was used in the Go library code, and try to reproduce the same behavior in memory. With that in mind, I went diving into the code in k8s.io/client-go/tools/clientcmd and k8s.io/client-go/tools/clientcmd/api to see what was going on under the hood. I soon discovered that k8s.io/client-go/tools/clientcmd/api.Config is the Go type that represents ~/.kube/config, and that clientcmd.NewNonInteractiveClientConfig can turn one of those into usable client configuration without ever touching disk. Those two pieces would be the key to creating a usable kubernetes.Interface to a particular cluster.
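For orientation, api.Config mirrors the sections of a kubeconfig file, keyed by name. Paraphrasing the library's definition (see the package docs for the full struct):

```go
// Paraphrased from k8s.io/client-go/tools/clientcmd/api.Config.
type Config struct {
	Clusters       map[string]*Cluster  // "clusters": API server URL + CA data
	AuthInfos      map[string]*AuthInfo // "users": how to authenticate (e.g. the "gcp" auth provider)
	Contexts       map[string]*Context  // "contexts": named (cluster, user) pairs
	CurrentContext string               // "current-context"
	// ...plus Kind, APIVersion, Preferences, and Extensions.
}
```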

Trial and Error

My process for getting there was fairly straightforward, if a bit trial-and-error. My idea was to see if I could produce a valid ~/.kube/config that kubectl would accept and successfully operate with. I figured if I could do that, I’d be most of the way there.

I used a google.golang.org/api/container/v1.Service, calling svc.Projects.Zones.Clusters.List for the GCP project id I was interested in, and passing "-" as the zone (which means all zones). This returned a list of all the k8s clusters in the given project, and blessedly included all of the information I’d need to construct my clientcmd/api.Config. I then iterated on my code, dumping the resulting config to disk and comparing it to the existing ~/.kube/config that gcloud container clusters had produced. Eventually, I was able to produce a minimal ~/.kube/config that kubectl itself could actually use to run commands against the cluster.
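The listing step itself is only a few lines; roughly (function and package names here are my own, and error handling is kept minimal):

```go
package gke

import (
	"context"
	"fmt"

	container "google.golang.org/api/container/v1"
)

// listClusters prints every GKE cluster in a project, across all zones.
func listClusters(ctx context.Context, projectID string) error {
	// Uses Application Default Credentials: the Google Service Account when
	// running on GKE, or your local application-default credentials.
	svc, err := container.NewService(ctx)
	if err != nil {
		return err
	}

	// "-" as the zone means "all zones" in a single call.
	resp, err := svc.Projects.Zones.Clusters.List(projectID, "-").Do()
	if err != nil {
		return err
	}

	for _, c := range resp.Clusters {
		// Name, Zone, Endpoint, and MasterAuth.ClusterCaCertificate are all
		// the pieces needed to build a kubeconfig entry for this cluster.
		fmt.Printf("%s (%s): https://%s\n", c.Name, c.Zone, c.Endpoint)
	}
	return nil
}
```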

Constructing the kubernetes.Interface

Once I got the config working for kubectl, I needed to get it working in Go. It wasn’t too hard to stitch together the right construction. In sketch form (helper names are illustrative and error handling is abbreviated), it looks like this:
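```go
package gke

import (
	"encoding/base64"
	"fmt"

	container "google.golang.org/api/container/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// buildConfig assembles an in-memory kubeconfig from the GKE API response;
// nothing is ever written to disk.
func buildConfig(projectID string, clusters []*container.Cluster) (*clientcmdapi.Config, error) {
	cfg := clientcmdapi.NewConfig()
	for _, c := range clusters {
		name := fmt.Sprintf("gke_%s_%s_%s", projectID, c.Zone, c.Name)
		caCert, err := base64.StdEncoding.DecodeString(c.MasterAuth.ClusterCaCertificate)
		if err != nil {
			return nil, err
		}
		cfg.Clusters[name] = &clientcmdapi.Cluster{
			Server:                   "https://" + c.Endpoint,
			CertificateAuthorityData: caCert,
		}
		cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{
			AuthProvider: &clientcmdapi.AuthProviderConfig{Name: "gcp"},
		}
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	}
	return cfg, nil
}

// clientForContext turns one named context of the shared config into a
// Kubernetes client for that cluster.
func clientForContext(cfg *clientcmdapi.Config, contextName string) (kubernetes.Interface, error) {
	restConfig, err := clientcmd.NewNonInteractiveClientConfig(
		*cfg,
		contextName,
		&clientcmd.ConfigOverrides{CurrentContext: contextName},
		nil, // no on-disk config to fall back to
	).ClientConfig()
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(restConfig)
}
```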

The CurrentContext bit is part of what makes the encapsulation work well: I can specify the active cluster I want per k8s client, while the general config stays immutable and can be shared across any number of k8s clients.

Last missing piece: GCP auth plugin

My first attempt failed with this error: no Auth Provider found for name "gcp". Thankfully, this time around the internet gave me exactly what I needed (https://github.com/kubernetes/client-go/issues/242). All I needed to do was add a magic import to wire up the gcp auth provider:
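```go
import (
	// Importing this package only for its side effects registers the "gcp"
	// auth provider with client-go.
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
)
```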

As soon as I added that import, everything suddenly worked.

Full solution

To put it all together, here’s a fully working sample: a simple Go binary you can run even on your local machine. (You just need to gcloud auth application-default login, so the Go client libraries can find credentials, and then point it at a GCP project you have access to.)
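Here it is in sketch form; the per-cluster pod count at the end is just a stand-in for whatever reporting you actually need.

```go
package main

import (
	"context"
	"encoding/base64"
	"fmt"
	"log"
	"os"

	container "google.golang.org/api/container/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	// Registers the "gcp" auth provider; without this you get
	// `no Auth Provider found for name "gcp"`.
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <gcp-project-id>", os.Args[0])
	}
	projectID := os.Args[1]
	ctx := context.Background()

	// List every GKE cluster in the project ("-" means all zones).
	svc, err := container.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := svc.Projects.Zones.Clusters.List(projectID, "-").Do()
	if err != nil {
		log.Fatal(err)
	}

	// Build one immutable, in-memory kubeconfig covering all clusters.
	cfg := clientcmdapi.NewConfig()
	for _, c := range resp.Clusters {
		name := fmt.Sprintf("gke_%s_%s_%s", projectID, c.Zone, c.Name)
		caCert, err := base64.StdEncoding.DecodeString(c.MasterAuth.ClusterCaCertificate)
		if err != nil {
			log.Fatal(err)
		}
		cfg.Clusters[name] = &clientcmdapi.Cluster{
			Server:                   "https://" + c.Endpoint,
			CertificateAuthorityData: caCert,
		}
		cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{
			AuthProvider: &clientcmdapi.AuthProviderConfig{Name: "gcp"},
		}
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	}

	// Create a separate client per cluster from the shared config.
	for contextName := range cfg.Contexts {
		restConfig, err := clientcmd.NewNonInteractiveClientConfig(
			*cfg, contextName, &clientcmd.ConfigOverrides{CurrentContext: contextName}, nil,
		).ClientConfig()
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(restConfig)
		if err != nil {
			log.Fatal(err)
		}

		// Smoke test: count the pods running in each cluster.
		pods, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d pods\n", contextName, len(pods.Items))
	}
}
```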

Sample output (cluster names and pod counts here are placeholders):
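```
gke_my-project_us-central1-a_cluster-1: 27 pods
gke_my-project_us-east1-b_cluster-2: 112 pods
```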

Conclusion

I hope you find this solution useful! At Fullstory, we always like to go the extra mile to make our services as clean, stateless, and resilient as possible. If you have a passion for building solid backend systems or helping manage production systems, we’re always looking for people like you. :)



Scott Blum

Senior Staff Software Engineer

Systems engineer at Fullstory since 2015. Formerly at Square. Formerly Google Web Toolkit compiler guy. Committer on Apache Solr and Apache Curator.