
Wednesday, September 19, 2018

HashiCorp Terraform, Vault, IBM CAM, PostgreSQL on IBM Cloud

Here is the total workflow:

  1. provision a PostgreSQL (Compose for PostgreSQL) database instance on IBM Cloud
  2. create 2 users
  3. set a strong 40-character password for each of the 2 new users
  4. set a connection limit of 200 on the instance
  5. distribute those connections: 90% to one user, 5% to the other user, 5% to admin
  6. whitelist IP addresses for this db
  7. store the secrets of the provisioned instance in HashiCorp Vault
    1. get a secret id by presenting the root token to Vault
    2. using this secret id + role id, get a client token
    3. create a JSON payload containing the secrets
    4. using the client token, write this JSON data to a dynamic path in HashiCorp Vault so that applications/users can consume it as and when needed
  8. send an email to someone with all these details
Tools used:
IBM Cloud Automation Manager (iCAM)
HashiCorp Terraform
HashiCorp Vault
Bash scripts
Now, this post is going to be a TL;DR post for the future me, since I like to keep things as notes; you can't really trust your brain to remember or remind you of the flow and approach you took. The coding part can be figured out later, since improved code for the same approach can be obtained in future, so I don't want to paste the full code and explain the why and what.
brief about iCAM
When you provision stuff with Terraform you provide a vars.json which has all the variables. It is a good cook-once, eat-all-the-time approach: you bake a template and then consume it from your applications, other scripts, or even manually. iCAM makes it even prettier by auto-generating a GUI based on your vars.json. So you upload a vars.json and main.tf to iCAM and you have a GUI-based consumer of your baked cookies, aka Terraform templates.
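For illustration, a minimal vars.json matching the variables used in the snippets below might look like this (all values here are placeholders, and iCAM's real variables file can carry extra metadata per variable):

```json
{
  "instancename": "my-postgres-demo",
  "plan": "Standard",
  "db_version": "9.6",
  "cluster_id": "",
  "database": "compose",
  "admin": "admin"
}
```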

resource "ibm_service_instance" "service" {
  name                        = "${var.instancename}"
  space_guid                  = "${data.ibm_space.spaceData.id}"
  service                     = "compose-for-postgresql"
  plan                        = "${var.plan}"
  parameters                  = { db_version= "${var.db_version}" , cluster_id= "${var.cluster_id}" }
}

You can of course choose the service from one of the following:
  • compose-for-elasticsearch
  • compose-for-etcd
  • compose-for-janusgraph
  • compose-for-mongodb
  • compose-for-mysql
  • compose-for-postgresql
  • compose-for-rabbitmq
  • compose-for-redis
  • compose-for-rethinkdb
  • compose-for-scylladb

At the end it emits some outputs, and we want to use them for the post-provisioning stuff.
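The snippets below read the credentials from ibm_service_key.serviceKey, which is not shown above; a sketch of that resource (the key name here is a placeholder) would be:

```hcl
resource "ibm_service_key" "serviceKey" {
  name                  = "${var.instancename}-key"
  service_instance_guid = "${ibm_service_instance.service.id}"
}
```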


locals {
  uri_cli = "${lookup(ibm_service_key.serviceKey.credentials, "uri_cli", "")}"
  split   = "${split(" ", local.uri_cli)}"
  pass    = "${element(split("=", element(local.split, 0)), 1)}"
  host    = "${element(split("=", element(local.split, 3)), 1)}"
  port    = "${element(split("=", element(local.split, 4)), 1)}"
}

As per the IBM Terraform git docs, you get all the access info via ibm_service_key.serviceKey.credentials. In the locals above we extract some of that info and use it for the next tasks.
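To see what that index math is doing, here is a rough shell equivalent. The shape of uri_cli is an assumption inferred from the element indices above (password in token 0, host in token 3, port in token 4); your actual credentials string may differ:

```shell
# Assumed shape of uri_cli (inferred from the locals block, not from real credentials):
uri_cli='PGPASSWORD=s3cretpass psql "sslmode=require host=sl-us-somewhere.example.net port=26056 dbname=compose user=admin"'

# Terraform's split(" ", ...) is roughly word-splitting here:
set -- $uri_cli
# $1=PGPASSWORD=...  $2=psql  $3="sslmode=require  $4=host=...  $5=port=...
pass="${1#*=}"   # element 0, the part after '='
host="${4#*=}"   # element 3
port="${5#*=}"   # element 4
echo "pass=$pass host=$host port=$port"
```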
The IBM Cloud provider for Terraform is not GA yet, and Terraform is written in Go, a language built for concurrency, so resources are provisioned in parallel. You therefore want to halt everything for a few seconds after the db-creation code, to make sure the instance is ready to take requests from you again.

resource "null_resource" "delay" {
  provisioner "local-exec" {
    command = "sleep 100"
  }
  depends_on = ["ibm_service_key.serviceKey"]
}

You can do that like this. Here I have created a dependency so that it runs only after the "ibm_service_key.serviceKey" resource.

You want to use a provider block like this:

provider "postgresql" {
  version  = "0.1.0"
  host     = "${local.host}"
  port     = "${local.port}"
  database = "${var.database}"
  username = "${var.admin}"
  password = "${local.pass}"
}

This makes the connection to the deployed instance. More info can be found in the Terraform > Providers > PostgreSQL docs.

resource "random_string" "user1" {
  length      = 40
  min_lower   = 10
  min_upper   = 10
  min_numeric = 10
  min_special = 10
}

Generate a password for user2 the same way. You can then feed each generated password into the corresponding user's role:
resource "postgresql_role" "user1" {
  name             = "${var.monitoring_user}"
  login            = true
  password         = "${random_string.user1.result}"
  connection_limit = 5
  depends_on       = ["null_resource.delay"]
}
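For the 90/5/5 split from the workflow (200 connections on the instance), the second role can be sketched the same way; the names var.app_user and random_string.user2 here are placeholders, and 180 is 90% of the 200-connection limit:

```hcl
resource "postgresql_role" "user2" {
  name             = "${var.app_user}"               # placeholder variable name
  login            = true
  password         = "${random_string.user2.result}" # second generated password
  connection_limit = 180                             # 90% of the 200-connection limit
  depends_on       = ["null_resource.delay"]
}
```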

Even though Terraform and Vault are both from HashiCorp, the integration is there to read from Vault, not to write to it the way we want. So we had to fall back to a shell script.


SECRET_ID="$(curl -sSkX POST -H "X-Vault-Token: $ROOT_TOKEN" <fixed address+location at vault>/secret-id | jq -r ".data.secret_id")"
CLIENT_TOKEN="$(curl -sSkX POST -d "{ \"role_id\": \"$ROLE_ID\", \"secret_id\": \"$SECRET_ID\" }" <$VAULT_ADDR>/v1/auth/approle/login | jq -r ".auth.client_token")"
curl -sSkX POST -H "X-Vault-Token: $CLIENT_TOKEN" -d "{ \"data\": { \"instance2\": \"dummy_name2\", \"admin\": \"password\" }}" https://<vaultaddress>/v1/secret/<myfixedpath>/instance_name

Above is an abstract example of how to write to HashiCorp Vault. Vault highly recommends that you configure some of the variables above as environment variables. But running the script directly from Terraform as a local-exec inline command is a pain, since Terraform will complain about special characters and misinterpret many things as its own variables. It is better to write the actual script with dummy values to a shell script file and then use sed to replace those dummy values with actual values.
variable "vault_commands" {
  type    = "string"
  default = <<EOF
some dummy commands with dummy_value1 dummy_value2
EOF
}


resource "null_resource" "shell_file" {
  provisioner "file" {
    connection {
      type     = "ssh"
      user     = "root"
      host     = "1.1.1.1"
      password = "root_pass"
    }

    content     = "${var.vault_commands}"
    destination = "${var.instancename}.sh"
  }

  depends_on = ["null_resource.delay1"]
}

resource "null_resource" "vault_write" {
  provisioner "remote-exec" {
    connection {
      type     = "ssh"
      user     = "root"
      host     = "1.1.1.1"
      password = "root_pass"
    }
    inline = [
      "sed -i 's/dummy_value1/${var.realvalue1}/g' ${var.instancename}.sh",
      "sed -i 's/dummy_value2/${local.realvalue2}/g' ${var.instancename}.sh",
      "chmod +x ${var.instancename}.sh",
      "./${var.instancename}.sh",
      "rm -rf ${var.instancename}.sh",
    ]
  }
  depends_on = ["null_resource.delay2"]
}

In the above snippets I am letting iCAM > Terraform create a shell script on a remote Linux VM, using sed to replace the dummy values in my template shell script with actual values, making it executable, executing it, and then deleting it.
https://www.compose.com/articles/the-ibm-cloud-compose-api/
You can follow a similar approach for other activities like whitelisting IPs etc., using curl: create a remote script, execute it, and delete it. Don't worry too much about the code, as it will become obsolete from version to version, but the approach will help you anytime.
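As a rough sketch of that whitelisting step (the endpoint shape is from the Compose API article linked above; the token, deployment id, and IP are placeholders, and the command is echoed rather than executed so it can be reviewed first):

```shell
# All values below are placeholders, not real credentials.
COMPOSE_TOKEN="dummy_token"
DEPLOYMENT_ID="dummy_deployment_id"
PAYLOAD='{"whitelist":{"ip":"203.0.113.10/32","description":"app server"}}'

# Echoed instead of executed; drop the echo to actually send the request.
echo curl -sS -X POST \
  -H "Authorization: Bearer $COMPOSE_TOKEN" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  "https://api.compose.io/2016-07/deployments/$DEPLOYMENT_ID/whitelist"
```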