
Wednesday, September 19, 2018

Hashicorp Terraform, Vault, IBM CAM, postgresql on IBM Cloud

Here is the total workflow:

  1. provision a postgresql (compose for postgresql) database instance on IBM Cloud
  2. create 2 new users
  3. set a strong 40-character password for each of these 2 users
  4. set a connection limit of 200 on the instance
  5. distribute these connections as follows: 90% to one user, 5% to the other user, 5% to admin
  6. whitelist ip addresses for this db
  7. store the secrets of the provisioned instance in hashicorp vault
    1. get a secret id from vault using the root token
    2. using this secret id + role id, get a client token
    3. create a json payload containing these secrets
    4. using the client token, write this json data to a dynamic path in hashicorp vault so that applications/users can consume it as and when needed
  8. send email to someone with all these details
Tools used:
IBM Cloud Automation Manager (iCAM)
Hashicorp Terraform
Hashicorp Vault
Bash scripts
This post is going to be a TLDR note for the future me, since I like to keep things written down rather than trust my brain to remember the flow and approach I took. The coding part can be figured out later, and improved code for the same approach will be available in future, so I don't want to paste the full code and explain every why and what.
A brief about iCAM
When you provision things with terraform you provide a vars.json which has all the variables. It is a cook-once, eat-all-the-time approach: you bake a template and then consume it from your applications, other scripts, or even humans. iCAM makes it even prettier by auto-generating a GUI based on your vars.json. So you upload a vars.json and main.tf to iCAM and you have a GUI-based consumer of your baked cookies, aka terraform templates.

resource "ibm_service_instance" "service" {
  name                        = "${var.instancename}"
  space_guid                  = "${data.ibm_space.spaceData.id}"
  service                     = "compose-for-postgresql"
  plan                        = "${var.plan}"
  parameters                  = { db_version= "${var.db_version}" , cluster_id= "${var.cluster_id}" }
}

You can of course choose the service from one of the following:
  • compose-for-elasticsearch
  • compose-for-etcd
  • compose-for-janusgraph
  • compose-for-mongodb
  • compose-for-mysql
  • compose-for-postgresql
  • compose-for-rabbitmq
  • compose-for-redis
  • compose-for-rethinkdb
  • compose-for-scylladb

At the end of it, the service key emits some outputs (the credentials), and we want to use them for the post-provisioning work.
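The snippets in this post reference data.ibm_space.spaceData and ibm_service_key.serviceKey without showing them, so here is a minimal sketch of both; the variable names and the key name are my assumptions, not part of the original template:

data "ibm_space" "spaceData" {
  org   = "${var.org}"      # assumed variable: Cloud Foundry org name
  space = "${var.space}"    # assumed variable: space name
}

# the service key is what exposes the credentials map (uri_cli etc.) used below
resource "ibm_service_key" "serviceKey" {
  name                  = "${var.instancename}-key"
  service_instance_guid = "${ibm_service_instance.service.id}"
}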


locals {
  # credentials map exposed by the service key
  uri_cli = "${lookup(ibm_service_key.serviceKey.credentials, "uri_cli", "")}"
  # uri_cli is a space-separated connection command; split it into tokens
  split = "${split(" ", local.uri_cli)}"
  # the password, host and port tokens are key=value pairs, so split again on "="
  pass = "${element(split("=", element(local.split, 0)), 1)}"
  host = "${element(split("=", element(local.split, 3)), 1)}"
  port = "${element(split("=", element(local.split, 4)), 1)}"
}

So as per the IBM terraform docs on GitHub, you get all the access info via ibm_service_key.serviceKey.credentials.
In the locals block we extract some of that info and use it for the next tasks.
Terraform is not GA yet, and it is written in Go, whose main selling point is parallelism, so resources are created concurrently wherever possible. That is why you want to halt the run for a few seconds after the db creation code, to make sure the instance is ready to take requests from you again.

resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 100"
}
depends_on = ["ibm_service_key.serviceKey"]
}

You can do that like this. Here I have created a dependency so that it runs only after the "ibm_service_key.serviceKey" resource.

You want to use
provider "postgresql" {
version="0.1.0"
host = "${local.host}"
port = "${local.port}"
database = "${var.database}"
username = "${var.admin}"
password = "${local.pass}"
}
for making a connection to the deployed instance. More info can be found at terraform>provider>postgresql docs.

resource "random_string" "user1" {
length = 40
min_lower = 10
min_upper = 10
min_numeric = 10
min_special = 10
}

Generate a password for user2 in the same way (the four min_* constraints add up to the full 40-character length). Then use each generated password as the input for the corresponding role; the role for user1 is below, and a sketch for user2 follows it.
resource "postgresql_role" "user1" {
name = "${var.monitoring_user}"
login = true
password = "${random_string.user1.result}"
connection_limit = 5
depends_on = ["null_resource.delay"]
}
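The second (application) user follows exactly the same pattern; a rough sketch, assuming a var.app_user variable and giving this user the 90% share (180) of the 200-connection limit from the workflow above:

resource "random_string" "user2" {
  length      = 40
  min_lower   = 10
  min_upper   = 10
  min_numeric = 10
  min_special = 10
}

resource "postgresql_role" "user2" {
  name             = "${var.app_user}"               # assumed variable name
  login            = true
  password         = "${random_string.user2.result}"
  connection_limit = 180                             # 90% of the 200 connections
  depends_on       = ["null_resource.delay"]
}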

Even though terraform and vault are both from hashicorp, the integration is built for reading from vault, not for writing to it the way we want. So we had to fall back to a shell script.


SECRET_ID="$(curl -sSkX POST -H "X-Vault-Token: $ROOT_TOKEN" <fixed address+location at vault>/secret-id | jq -r ".data.secret_id")"
CLIENT_TOKEN="$(curl -sSkX POST -d "{ \"role_id\": \" $ROLE_ID\", \"secret_id\": \" $SECRET_ID\" }" <$VAULT_ADDR>/v1/auth/approle/login | jq -r ".auth.client_token")"
curl -sSkX POST -H "X-Vault-Token: $CLIENT_TOKEN" -d "{ \"data\": { \"instance2\": \"dummy_name2\", \"admin\": \"password\" }}" https://<vaultaddress>/v1/secret/<myfixedpath>/instance_name

Above is an abstract example of how to write to hashicorp vault. Vault highly recommends that you configure some of the variables above as environment variables. But running the script directly from terraform as an inline local-exec command is a pain, since terraform will complain about special characters and misinterpret many things as its own interpolations. It is better to write the actual script with dummy values into a shell script file and then use sed to replace those dummy values with the actual values.
variable "vault_commands" {
type = "string"
default = <<EOF
some dummy commands with dummy_value1 dummy_value2
EOF
}


resource "null_resource" "shell_file" {
provisioner "file" {
connection {
type = "ssh"
user = "root"
host = "1.1.1.1"
password = "root_pass"
}

content = "${var.vault_commands}"
destination = "${var.instancename}.sh"
}

depends_on = ["null_resource.delay1"]
}

resource "null_resource" "vault_write" {
provisioner "remote-exec" {
connection {
type = "ssh"
user = "root"
host = "1.1.1.1"
password = "root_pass"
}
inline = [
"sed -i 's/dummy_value1/${var.realvalue1}/g' ${var.instancename}.sh",
"sed -i 's/dummy_value2/${local.realvalue2}/g' ${var.instancename}.sh",
"chmod +x ${var.instancename}.sh",
"./${var.instancename}.sh",
"rm -rf ${var.instancename}.sh",
]
}
depends_on = ["null_resource.delay2"]
}

In the above snippet I am letting iCAM/terraform create a shell script on a remote linux VM, then using sed to replace the dummy values in my template script with the actual values, making it executable, executing it and then deleting it.
https://www.compose.com/articles/the-ibm-cloud-compose-api/
You follow a similar approach for other activities like whitelisting IPs etc. using curl against the Compose API linked above: create a remote script, execute it and delete it. Don't worry too much about the code, as it will become obsolete from version to version, but the approach will help you anytime.

Monday, September 17, 2018

DBaaS: postgresql or mysql ?!

So recently I had to spin up some databases on IBM Cloud using IBM Compose for a bank. I noticed that most of the important postgresql settings require access to the OS on which it is installed, which makes it less ideal for DBaaS (database as a service). We are using IBM's very own iCAM aka IBM Cloud Automation Manager, which uses the Compose feature on IBM Cloud. iCAM itself uses terraform for provisioning and auto-generates a GUI based on your vars.json. You should try it, it's free to explore.
ex:-
We need to change the user authentication from ident to md5. That needs access to pg_hba.conf, which will usually be at /opt/postgresql/9.5/data. So if you are someone providing DBaaS (elephantsql, azure, gcp, aws), you will be limiting users on some key configurations, since you cannot give all your users access to the pg_hba.conf file.
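For illustration, this is the kind of pg_hba.conf edit being talked about; the address here is just an example:

# before: OS ident based authentication
host  all  all  127.0.0.1/32  ident
# after: password (md5) based authentication
host  all  all  127.0.0.1/32  md5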
Another ex:
I need to whitelist some ip addresses.
How it is done in postgresql (format below, followed by a concrete example):
local      database  user  auth-method  [auth-options]
host       database  user  address  auth-method  [auth-options]
hostssl    database  user  address  auth-method  [auth-options]
hostnossl  database  user  address  auth-method  [auth-options]
host       database  user  IP-address  IP-mask  auth-method  [auth-options]
hostssl    database  user  IP-address  IP-mask  auth-method  [auth-options]
hostnossl  database  user  IP-address  IP-mask  auth-method  [auth-options]
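So whitelisting the single address used in the mysql example below would mean adding a line roughly like this to pg_hba.conf (auth-method assumed to be md5):

hostssl  all  all  83.141.3.27/32  md5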

How is it done in mysql
GRANT SELECT, SHOW VIEW
ON $db_name.*
TO $userName@`83.141.3.27` IDENTIFIED BY '$userPassword';
I actually like postgresql since it offers a lot of good stuff, including my favorite json type for data, but for a player newer and younger than mysql to keep such configuration options at the OS layer is a bummer. So yes, if you are someone who needs to edit pg_hba.conf for whatever reason, then go for mysql (which has started offering a json data type too).
The only way to get around this is to use the rest api of the cloud provider and try to do it there. If you are doing it on IBM Bluemix then this will help you out:
https://www.compose.com/articles/the-ibm-cloud-compose-api/#availableapicalls

Friday, September 14, 2018

ssh to hosts without credentials

[TLDR]

sudo su
yum install nano -y
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sudo systemctl restart sshd
ssh-keygen -t rsa

ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>

So one of the main requirements for the members of a kubernetes cluster is to be able to ssh to each other without credentials.
I tested this against CentOS 7 virtual machines on AWS and DigitalOcean.
What the lines do:
1. switch to the root user
2. install nano (optional)
3. enable the password login option in sshd_config
4. restart the ssh service
5. generate an rsa ssh key
7-12. copy the ssh key to each of the target machines
[OR]
just copy-paste the whole thing on all the machines.
Note: don't enter a passphrase while generating the key.

Tuesday, August 14, 2018

Deploying a postgresql database instance on aws with terraform

So I needed to get a postgresql instance deployed on aws with terraform. I know this can be done via the aws cli or python boto3, but it is easier with terraform, which is supposed to be non-coder friendly. You do, however, need to search a lot online and in the official github repos and documentation.


provider "aws" {
  access_key = "ACCESS_KEY"
  secret_key = "SECRET_KEY"
  region     = "us-east-1"
}

resource "random_string" "password" {
  length = 30
  special = true
  number = true
  lower = true
  upper = true
}

resource "aws_db_instance" "default" {
  allocated_storage    = 10
  storage_type         = "gp2"
  engine               = "postgres"
  engine_version       = "9.5"
  instance_class       = "db.t2.micro"
  name                 = "postgres"
  username             = "postgres"
  password = "${random_string.password.result}"
}

What the code does:
  • provider block: tells terraform where this should be deployed. The access key and secret key are the ones you took from the aws console while creating the user, and you can choose whatever region you want for this testing purpose.
  • random_string: a terraform feature (https://www.terraform.io/docs/providers/random/r/string.html) that generates a random string of 30 characters and mandates that it contains special characters, numbers, lower case and upper case letters.
  • aws_db_instance: tells terraform what resource to deploy; its password argument uses the password generated by random_string for the postgres instance.
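If you want terraform to hand back the connection details after the apply, you can add a couple of output blocks. A minimal sketch, assuming the resources above (endpoint is a standard aws_db_instance attribute, and sensitive just keeps the password out of the normal apply summary):

output "db_endpoint" {
  value = "${aws_db_instance.default.endpoint}"
}

output "db_password" {
  value     = "${random_string.password.result}"
  sensitive = true
}

You can read them back later with terraform output db_endpoint and terraform output db_password.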
Now I have got to change the provider to IBM cloud and use hashicorp's vault for credentials management. That will be another blog when I figure it out.

Thursday, August 9, 2018

Deploying an instance on AWS via terraform

So I am trying to explore terraform and how to deploy instances on the cloud with it.
First follow this to configure the aws cli on your windows machine (not mandatory, but recommended):
http://www.cloudishes.com/2017/12/amazon-aws-automation.html
Also refer to my previous post on installing terraform on windows.
Now

provider "aws" {
  access_key = "<access_key?"
  secret_key = "<secret_key?"
  region = "ap-south-1"
}
resource "aws_instance" "example" {
  ami           = "ami-cce794a3"
  instance_type = "t2.micro"
}

Create an aws.tf file and paste this code.
Run

terraform init
terraform apply
from the same directory. It might ask you to type yes to approve the plan; do that and you are done. Now you can go and see your instance live on AWS.
You will notice that it asks you to approve a plan every time you do this. Instead, you can create a build plan first and then apply that:

terraform plan -out build-plan
terraform apply build-plan
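If you also want the instance's public IP printed at the end of the apply, a small output block (public_ip is a standard aws_instance attribute) can be added to aws.tf; a minimal sketch:

output "example_public_ip" {
  value = "${aws_instance.example.public_ip}"
}

terraform will then show example_public_ip in the apply output and via terraform output.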