
So you think it would be easy to use Cloud SQL for your Grafana (2)

Heechul Ryu · Mar 29, 2020

This post continues from "So you think it would be easy to use Cloud SQL for your Grafana".

In the first post, I was able to launch a Cloud SQL instance via Terraform.


Does that mean this Terraform code is ready?

Maybe not! I decided to rename the module, and what happened was that Terraform deleted the instance and attempted to recreate it with the same name, which failed: GCP restricts creating a Cloud SQL instance with the same name as a recently deleted one for a certain amount of time. Unfortunately, at the time of writing this post, terraform apply won't give you a helpful error for that; it just times out.

After hitting multiple timeouts with terraform apply, I figured out the real reason for the error by creating, deleting, and recreating instances with the same name in the GCP web console.

You can read about the restriction here.
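As an aside: newer Terraform versions (1.1+) can record a module rename declaratively with a moved block, so the state is moved instead of the instance being destroyed and recreated. A minimal sketch, assuming hypothetical old and new module names (this wasn't available when I hit the problem):

# tell Terraform the module was renamed, not replaced
moved {
  from = module.postgres_db_old_name
  to   = module.postgresql_db
}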


So let's prepare for that case

by adding TF code that looks like this:

resource "random_id" "db_name_suffix" {
  byte_length = 4
}

module "postgresql_db" {
  # ...
  name = "my-postgres-${random_id.db_name_suffix.hex}"
  # ...
}

And that's probably why you see many TF examples using the random_id resource.

A question you might ask in this section

What happens to the backups when the instance is deleted? You can check them with:

$ gcloud sql backups list --instance=<your-instance-name>

More on that here.

Now the Cloud SQL instance seems to be ready. Let's move on to Grafana.


Connecting Grafana to the Cloud SQL instance

Installing a (very simple) Grafana chart with the Helm provider in Terraform looks like this:

resource "helm_release" "grafana" {
  name  = "grafana"
  chart = "stable/grafana"  # https://github.com/helm/charts/tree/master/stable/grafana
}
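One assumption I'm glossing over: the helm provider (and the kubernetes provider used further down) has to be configured to reach the GKE cluster, and the "stable" repo has to be known to Helm. A minimal sketch, assuming a local kubeconfig; your cluster auth will likely differ:

# hypothetical provider wiring, assuming ~/.kube/config points at the GKE cluster
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}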

Injecting environment variables into the Grafana chart

Before injecting the real values, I first tested whether I could inject any environment variable properly:

resource "helm_release" "grafana" {
  # ...

  values = [<<-EOT
    # this will override values.yaml in grafana chart
    env:
    - name: ENV_TEST
      value: env-test
    EOT
  ]
}

And it didn't work, why?

Because I did it wrong and I got this tricky error message:

# Error: template: grafana/templates/secret.yaml:1:58: executing "grafana/templates/secret.yaml" at <.Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE>: can't evaluate field GF_SECURITY_ADMIN_PASSWORD__FILE in type interface {}

And with that, I dug in the wrong place for a while. My fault (it's alright, I do that ~sometimes~ many times).

Figuring out the correct values can be tricky when you can't easily find example code in the chart's values.yaml file (I will contribute this part to the chart later).

Anyway, the correct way was this:

resource "helm_release" "grafana" {
  # ...

  values = [<<-EOT
    env:
      ENV_TEST: env-test
    EOT
  ]
}
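A small aside: Terraform's yamlencode() can generate the values YAML from an HCL map instead of a heredoc, which makes the expected structure (a map, not a list) harder to get wrong. A rough sketch of the same thing:

resource "helm_release" "grafana" {
  # ...

  # yamlencode() renders this HCL map as YAML equivalent to the heredoc above
  values = [yamlencode({
    env = {
      ENV_TEST = "env-test"
    }
  })]
}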

Know what to inject

Now that I know how to deliver pieces of information to the Grafana chart via environment variables, it's time to figure out what to deliver, which you can find out here.

And I figured out that I will have to pass this information, from the [database] section of the page I linked above:

type = postgres
host = <SQL instance's private IP>
user = <database username>
password = <database user password>
ssl_mode = verify-full # because Grafana docs said so 😉
ca_cert_path = <path to the CA cert file>
client_key_path = <path to the client private key file>
client_cert_path = <path to the client cert file>

Wait, wait... what is all this SSL, cert, and key path stuff, and why?

Remember that I put require_ssl = true in the postgresql_db module?

That means we can't connect "insecurely": we need to obtain the SQL instance's CA cert and issue a client cert and client private key to prevent MITM attacks.
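For reference, the relevant bit from the first post looked roughly like this (a sketch; the field names follow the sql-db module's ip_configuration input, and the VPC reference is a placeholder):

module "postgresql_db" {
  # ...

  ip_configuration = {
    ipv4_enabled        = false                   # private IP only
    private_network     = "<your VPC self link>"
    require_ssl         = true                    # the reason for the certs below
    authorized_networks = []
  }
}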

But isn't it already pretty safe with private IP?

Grafana will be running in GKE, inside a VPC (private network) in GCP, and connecting to the Cloud SQL instance via private IP in Google's protected services network. Aren't we safe enough?

No. I mean, yes, you are pretty safe in terms of general security, but other threat scenarios are still possible. You (or someone else) might run untrusted services in your k8s cluster that try to sniff your packets. Your traffic might go through not-necessarily-trusted layers of the network, whether a sidecar container or whatever else sits between the source and destination (especially if you are using a service mesh), either right now or in the future.

So your traffic had better be encrypted (end to end) even inside the private network.

If you are still not convinced, a good article to convince you would be this one.


Delivering information

Grafana picks up configuration from environment variables in the format GF_<SectionName>_<KeyName>,

which I will write like this:

env:
  GF_DATABASE_TYPE: postgres
  GF_DATABASE_HOST: <instance private IP>
  ...

Filling the missing gap

Now that we know what to provide, why we provide it, and what's missing, let me fill the gap.

# adding username and password in the module
module "postgresql_db" {
  # ...

  ip_configuration = {
    # ...
  }

  # not a good idea to hardcode ID and password like this,
  # but for simplicity in this post, I will keep it this way
  user_name     = "my-user"
  user_password = "my-user-pw"
}

# adding client certs
resource "google_sql_ssl_cert" "postgresql" {
  common_name = "my-client"
  instance    = module.postgresql_db.instance_name
}

# generate a k8s secret so the credentials are accessible from Grafana
resource "kubernetes_secret" "grafana_sql_creds" {
  metadata {
    name = "grafana-sql-creds"
  }

  data = {
    # you might want to use `locals` to make this DRY
    username = "my-user"
    password = "my-user-pw"

    "ca.crt"     = google_sql_ssl_cert.postgresql_db.server_ca_cert
    "client.crt" = google_sql_ssl_cert.postgresql_db.cert
    "client.key" = google_sql_ssl_cert.postgresql_db.private_key
  }
}
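As the comment above says, hardcoding credentials isn't great. One hedged alternative, assuming the hashicorp/random provider, is to generate the password once and reference it from both places:

# hypothetical replacement for the hardcoded credentials above
resource "random_password" "db_user" {
  length  = 24
  special = false
}

locals {
  db_username = "my-user"
  db_password = random_password.db_user.result
}

# then use local.db_username / local.db_password in both
# module.postgresql_db and kubernetes_secret.grafana_sql_creds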

Now the gaps are filled, so it's time to deliver all of this to the Grafana chart:

resource "helm_release" "grafana" {
  # ...

  values = [<<-EOT
    # mount secret as file in the container
    extraSecretMounts:
      - name: sql-certs
        secretName: ${kubernetes_secret.grafana_sql_creds.metadata[0].name}
        mountPath: /etc/grafana-certs

    # non-credential envs
    env:
      GF_DATABASE_TYPE: postgres
      GF_DATABASE_HOST: ${module.postgresql_db.private_ip_address}
      GF_DATABASE_SSL_MODE: verify-full
      GF_DATABASE_CA_CERT_PATH: /etc/grafana-certs/ca.crt
      GF_DATABASE_CLIENT_KEY_PATH: /etc/grafana-certs/client.key
      GF_DATABASE_CLIENT_CERT_PATH: /etc/grafana-certs/client.crt

    # get envs from the secret
    envValueFrom:
      GF_DATABASE_USER:
        secretKeyRef:
          name: ${kubernetes_secret.grafana_sql_creds.metadata[0].name}
          key: username
      GF_DATABASE_PASSWORD:
        secretKeyRef:
          name: ${kubernetes_secret.grafana_sql_creds.metadata[0].name}
          key: password
    EOT
  ]
}

A little explanation of the code above:

  1. using envValueFrom: to avoid exposing secrets when installing the chart.
  2. mounting the certs and key files as a volume with extraSecretMounts.

It seems like things look quite right. Let's find out if this works.

This post also got a bit long again, so the rest will be in the next post!