So you think it would be easy to use Cloud SQL for your Grafana?

Well... I thought so too, and spoiler alert: it wasn't easy (at least for me).

The goal I was trying to achieve was to let Grafana (installed via a Helm chart) running on GKE use Cloud SQL instead of the default sqlite3, for better data persistence and management.

Grafana saves your data in a database. By default, it is configured to use sqlite3 (on local disk), which doesn't give you much control over data safety in various failure scenarios.

(sudden) Prerequisites

Sorry to interrupt, but to follow what I'm talking about in this post, you might need to be familiar with Kubernetes (aka k8s), GCP (or AWS or another cloud provider), Cloud SQL (just a bit), Grafana, Helm, and Terraform.

Just in case you are not familiar with any of those, here are my short descriptions for you:

- k8s lets your (Docker) containers run across multiple hosts for scalability, and it's super cool.
- GCP is Google's version of AWS, and I find it pretty cool and modern (with a few frustrations as well).
- Cloud SQL is a managed relational database service; I think you can compare it to AWS RDS.
- Grafana lets you visualize your time-series data in beautiful dashboards.
- Helm allows you to package k8s manifests, and there are tons of packages (charts) made by many people that you can find and use.
- Terraform allows you to express your infrastructure/system as code, and it also helps you actually create/update/delete the systems/resources to match that code.

And an assumption

Also, in this post I'm assuming that you are familiar with VPC and GKE and have already configured them with Terraform yourself. Even if you haven't, that's a whole other topic, so I won't go into it in this post.

possibly like this:

module "my_vpc" {
  # ...
}

module "my_gke" {
  # ...
}

What I imagined it would be.

Simply spin up a Cloud SQL instance by adding a few resources or a module to my Terraform code, then feed some environment variables to Grafana so it can start storing data in the Cloud SQL instance. I expected this would take maybe half a day, or a full day if I got distracted by other things to do. But that's not what happened, and I admit I was quite naive.
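
For the record, the "feed some environment variables" part I had in mind looks roughly like this in the Grafana Helm chart's values. This is just a sketch: the host, database name, and user below are placeholders of my own, and Grafana picks up any config option via GF_<section>_<key> environment variables:

```yaml
# values.yaml snippet for the grafana Helm chart (host and credentials are placeholders)
env:
  GF_DATABASE_TYPE: postgres
  GF_DATABASE_HOST: "10.x.y.z:5432"   # private IP of the Cloud SQL instance
  GF_DATABASE_NAME: grafana
  GF_DATABASE_USER: grafana
  GF_DATABASE_SSL_MODE: require
# the password is better supplied from a k8s Secret rather than in plain values
```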

What actually happened

I tried to spin up Cloud SQL instance first

Since I'm going to do this with Terraform, there were at least two choices available to me: write the raw google_sql_database_instance resource myself, or use the sql-db module published by GoogleCloudPlatform.
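
For reference, the raw-resource route would look roughly like this (a sketch I'm writing for illustration, reusing the names from my setup; attribute names follow the google provider's google_sql_database_instance schema):

```hcl
# Option 1: the raw resource, instead of the module (sketch)
resource "google_sql_database_instance" "postgresql_db" {
  name             = "my-postgres"
  database_version = "POSTGRES_11"
  region           = "us-central1"
  project          = var.project_id

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      ipv4_enabled    = false # private IP only
      private_network = module.my_vpc.network_self_link
    }
  }
}
```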

I went with the module way. Also, using a private IP (not a public IP) was a requirement (sure, less exposure to the dangerous internet is more secure 😉), and I chose PostgreSQL over MySQL. I wrote some code that looks like this:

module "postgresql_db" {
  source  = "GoogleCloudPlatform/sql-db/google//modules/postgresql"
  version = "3.1.0"

  name             = "my-postgres"
  database_version = "POSTGRES_11"
  project_id       = var.project_id
  zone             = "c"
  region           = "us-central1"
  tier             = "db-f1-micro"

  backup_configuration = {
    enabled = true
  }

  ip_configuration = {
    ipv4_enabled        = false # to disable public IP
    # The VPC network from which the Cloud SQL instance is accessible via private IP.
    private_network     = module.my_vpc.network_self_link
    require_ssl         = true
    authorized_networks = []
  }
}

And when I ran terraform apply with this code, I got an error that looks something like this:

google_sql_database_instance.postgresql_db: Error waiting for Create Instance: Failed to create subnetwork. Please create Service Networking connection with service '' from consumer project 'my-project' network 'my-vpc' again

So I googled and learned that in order to create a private-IP Cloud SQL instance that is reachable from your existing network in GCP (VPC), you need to peer your VPC with Google's service network if you haven't already.

You might also need to enable the Service Networking API.
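
Assuming your project is called my-project like in the error above, enabling it looks like this:

```shell
$ gcloud services enable servicenetworking.googleapis.com --project=my-project
```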

If you already have this configured, you can verify it with a few commands; the output will look like this:

$ gcloud services vpc-peerings list --network=my-vpc
network: projects/xxxx/global/networks/my-vpc
peering: servicenetworking-googleapis-com
reservedPeeringRanges:
- my-peering
service: services/
network: projects/xxxx/global/networks/my-vpc
peering: cloudsql-postgres-googleapis-com
reservedPeeringRanges:
- my-peering
service: services/

$ gcloud compute addresses list
NAME     ADDRESS/RANGE  TYPE      PURPOSE      NETWORK  ...
my-p-ip  172.x.y.0/24   INTERNAL  VPC_PEERING  my-vpc   ...

You can find more details here

Peering to GCP's Service Networking

Since peering wasn't configured for my VPC, I added the Terraform code below, which does essentially that.

I found this on the Terraform Google provider page.

module "postgresql_db" {
  # ...
}

resource "google_compute_global_address" "private_ip_address" {
  name          = "my-p-ip"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 24 # choose your ideal CIDR range
  network       = module.my_vpc.network_self_link
}

resource "google_service_networking_connection" "private_vpc_connection" {
  network                 = module.my_vpc.network_self_link
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

Why not network = google_compute_network.private_network.self_link?

Because I'm using this module's output instead of a raw google_compute_network resource.

This effectively gives you the private IP range and Service Networking connection you need in order to spin up a Cloud SQL instance peered to your VPC, so now the instance is created and running.

This post is already getting a bit long, so I'd like to split it up into multiple posts. The next one is here
