
Saturday, September 29, 2018

Django development to production best practices.

So this is going to be a note, a reminder and a collection of lessons for the future me and others like me who want to learn, develop, plan and deploy a django web server.

1. Start from the bottom
Start with the bare minimum unless you absolutely need something which can't be achieved with the default stuff that comes with django. Ex: unless you need your data to be stored as JSON, you don't need a full-fledged database like PostgreSQL or MySQL until you go to production. Start only with a virtualenv and django in it. A minimal sketch of that starting point follows.
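Something like this, where the project name mysite and the paths are just placeholders:

python3 -m venv venv                # isolated environment for this project
source venv/bin/activate
pip install django                  # nothing else yet
django-admin startproject mysite
cd mysite
python manage.py migrate            # default SQLite database is enough for now
python manage.py runserver          # development server only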

2. Stick to the mature rather than the young and trendy
Ex: I switched to Materialize CSS and later, during production, came to know that the django-allauth package I am using does not work well with Materialize CSS. So I switched back to the time-tested Bootstrap. If I had a lot of previous experience with django development and deployment I might have been more courageous and tried the trendy stuff, but I wanted to be safe rather than sorry. I wanted to deploy my app on GKE (Google Kubernetes Engine), and explored Amazon ECS and Azure AKS too, but it was adding too much overhead for the size of the development team (just I, me and myself). There are a lot of security aspects you need to figure out, and it is expensive too. It is easier to find tutorials, guides and documentation for the mature, old, trusted tools that have been tried and tested by pros than for the new kids on the block.

3. Start small, dream big
I went with a single VM at the initial stage of my project, with a plan to add more VMs and distribute the responsibilities among these cloud servers later (app server, db server, web server etc.). If you bite the bullet and go with a 3-tier setup from the beginning, then
  1. you will have to shell out a lot more
  2. performance issues will arise
  3. you have to monitor a lot more servers now and scale all of them according to their load
  4. you might even hit the network traffic limit your cloud provider has for your server (unlikely but possible)
4. One life is too short to reinvent everything
Reuse the tools and packages developed by pros who have spent the better half of their life and time on them. Ex: django-allauth. It is a well-trusted, widely used package with many stars, forks and followers on GitHub. Build on others' work; they have done the same.

5. Deploy your app before it is perfect.
Most of us fall into this pit. Do not worry about the looks yet. If your application has most of its functionality ready, deploy it to find out what else you need to start implementing now, instead of developing a full-fledged application and then realizing you have to redo a lot of things because it is bombing badly in production.

6. Add only 1 element at a time
So let us say you currently have a django app (with the default SQLite db) ready to be deployed. Add these elements in this order to avoid graying out most of your hair (if you have any). A minimal sketch of steps 1 and 2 follows this list.
  1. Add Gunicorn and NGINX (don't choose alternatives) and make them work together using a shell script.
  2. Now make that script part of a systemd service (yes, use Linux) so that systemd monitors and runs the script. You could alternatively choose supervisor or upstart, but they add another layer of tooling to learn and maintain, and another layer of failure. Try to stick with something embedded in the operating system itself. I found the systemd option easier to learn; I had nothing but failures with supervisor, upstart and the likes of them.
  3. Now replace SQLite with a production-grade database server, preferably a SQL server that also offers NoSQL features. I chose PostgreSQL since I had some experience with it, unlike MySQL which I have not played around with at all.
  4. Now add a domain (if you don't have one, buy one for production and one for development/testing purposes), and start with the one meant for development or testing rather than production.
    1. web browser<-->domain provider<-->cloud provider
  5. Once you are able to access your site via a browser, add Cloudflare to your site (to protect yourself against DDoS attacks).
    1. web browser<-->cloudflare<-->domain provider<-->cloud provider
  6. Now sign up for an SMTP provider to send mails and integrate it with Cloudflare.
  7. Add HTTPS using Certbot and Let's Encrypt. Make sure the certificate auto-renews before it expires.
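Here is a minimal sketch of steps 1 and 2, assuming a project called mysite living in /srv/mysite with its virtualenv at /srv/mysite/venv and a systemd unit named gunicorn; every name and path here is a placeholder, not a prescription.

# Step 1: a hypothetical Gunicorn start script (Gunicorn is installed in the venv)
sudo tee /srv/mysite/start_gunicorn.sh > /dev/null <<'EOF'
#!/usr/bin/env bash
cd /srv/mysite
source venv/bin/activate
exec gunicorn mysite.wsgi:application --bind 127.0.0.1:8000 --workers 3
EOF
sudo chmod +x /srv/mysite/start_gunicorn.sh

# Step 2: wrap the script in a systemd unit so the OS monitors and restarts it
sudo tee /etc/systemd/system/gunicorn.service > /dev/null <<'EOF'
[Unit]
Description=Gunicorn for mysite
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/mysite
ExecStart=/srv/mysite/start_gunicorn.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now gunicorn

NGINX then just reverse-proxies to 127.0.0.1:8000. For item 7, running certbot renew --dry-run is a quick way to confirm that auto-renewal is wired up.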
7. Security
  1. Search for 'security settings for django' and implement as much as possible from the official django documentation page that comes up. A quick first pass is shown right after this list.
  2. Search for 'django vulnerability scan', use the tools you find to scan your site for vulnerabilities, and fix as many of the findings as possible;
    start with https://www.ponycheckup.com.
  3. Security is a journey and not a destination. So keep a list of the packages you use in your project, their versions and their specific vulnerabilities.
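Django itself ships a deployment checklist covering most of those documented settings; running it (with your virtualenv active) is a cheap first scan:

python manage.py check --deploy   # flags DEBUG, ALLOWED_HOSTS, cookie/SSL settings, etc.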
8. Code management
  1. Add a config.py for secrets and environment-specific settings to your production and development servers and to the local repo of every developer, and add it to .gitignore. Check this: http://www.cloudishes.com/2018/09/git-and-ignoring-files.html
  2. Have a master branch and a dev branch. No more than 2 people (or 2 percent of your total developers) should have write access to master. That means making it a protected branch if you are using GitHub Pro/Enterprise, so the only merges into master come from dev. Anyone can create a branch from dev, but before merging it back it should be approved by at least 1 or 2 reviewers.
  3. Once the code is on the dev branch and has been tested on the development server, push it to the remote master branch.
  4. On your production server, do a pull of the dev branch only and see how it goes for a day. That means the old working code lives only on the production server's master branch, while remote master, remote dev, and the development server's dev and master all carry the same latest code. After 24 hours, 2 days, or whatever time you have decided on, if you do not face any major problems, then on the production server switch to master, pull down master from the remote and restart your django server's systemd service, which now makes it run on the latest code. (This flow is sketched right after this list.) I know this is not a proper CI/CD pipeline but, as I said earlier, add only one element at a time.
  5. Once you are comfortable and confident with all the tools you are already using, add other devops tools if you like. Try to choose tools that require the least maintenance, capex and opex. Ex: GitLab has an online auto CI/CD tool if your repository is hosted on their site.
  6. Never merge your code to dev until you have tested it locally and are 100% sure it works well and plays smoothly with everything else in your project. Never push your code to local or remote master unless you are absolutely sure and your peers have individually tested and okayed it. Code, test, review (self and others), then push up.
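A sketch of the promotion flow in item 4, assuming the remote is called origin and the systemd unit from earlier is called gunicorn; both names are placeholders.

# Production server: trial-run the latest dev code for the agreed period
git fetch origin
git checkout dev
git pull origin dev
sudo systemctl restart gunicorn

# After the trial period with no major problems, move back to master
git checkout master
git pull origin master
sudo systemctl restart gunicorn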
9. Back it up
1. Make sure you regularly back up your master branch, your database and the server itself on every cycle (every sprint, on a particular day of each week, or on a predefined date or event). A sketch follows.
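A minimal backup sketch, assuming PostgreSQL and a git repo on the server; the database name, user, paths and backup host below are all placeholders.

pg_dump -U myuser -Fc mydb > /backups/mydb_$(date +%F).dump     # database dump
git bundle create /backups/repo_$(date +%F).bundle master       # snapshot of the master branch
rsync -av /backups/ backup-host:/srv/backups/                   # copy everything off the server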


Friday, September 28, 2018

configuring IBM bluemix provider for terraform on windows

There is no official IBM provider plugin available for Terraform for now. We can easily configure it ourselves:
  1. Download the Windows 64-bit version of the IBM provider plugin from https://github.com/IBM-Cloud/terraform-provider-ibm/releases
  2. Extract the exe from that archive
  3. Create a terraform.d directory at any place of your choosing
  4. Create a plugins directory inside the above directory
  5. Create a windows_amd64 directory inside the above directory
  6. Copy the downloaded exe file into both the windows_amd64 and the plugins directories
  7. Now copy the terraform.d directory to all of the directories below
    1. C:\Users\<username>\AppData
    2. C:\Users\<username>\AppData\Local
    3. C:\Users\<username>\AppData\Local<whatever>
    4. C:\Users\<username>\AppData\Roaming
Now create a main.tf file with just the following line:

provider "ibm" {}

Then run terraform init and it will work.

Wednesday, September 26, 2018

git and ignoring files

So if you thought that creating a .gitignore file in your git repository's root was enough, you may be wrong. That is indeed the official way to ignore files, but here is the problem.

  1. I started working on a project and stored all secrets in secrets.py
  2. 7 more people on my team joined it and we decided that we should not push secrets.py to the dev branch anymore.
  3. Everybody added it to their .gitignore file, but no go.
What to do?

git rm <filepath>/<filename>            # removes the file from the repo AND deletes it from disk
git rm --cached <filepath>/<filename>   # stops tracking the file but keeps your local copy

Since the file was already being tracked, it will continue to be tracked regardless of .gitignore. So remove it from tracking (use --cached if you want to keep your local copy) and you are good to go.


Wednesday, September 19, 2018

Hashicorp Terraform, Vault, IBM CAM, postgresql on IBM Cloud

Here is the total workflow:

  1. provision a postgresql (compose for postgresql) database instance on IBM Cloud
  2. create 2 users
  3. set a 40-character strong password for each of these 2 new users
  4. set a connection limit of 200 on the instance
  5. distribute these connections like this: 90% to one user, 5% to the other user, 5% to admin
  6. whitelist IP addresses for this db
  7. store the secrets for the provisioned instance in HashiCorp Vault
    1. get a secret id by giving the root token to Vault
    2. using this secret id + role id, get a client token
    3. create a JSON payload containing these secrets
    4. using the client token, write this JSON data to a dynamic path in HashiCorp Vault so that applications/users can consume it as and when needed
  8. send an email to someone with all these details
Tools used:
IBM Cloud Automation Manager (iCAM)
Hashicorp Terraform
Hashicorp Vault
Bash scripts
Now, this post is going to be a TL;DR post for the future me, since I like to store things as notes; you can't really trust your brain to remember or remind you of the flow and approach you took. The coding part can be figured out later, since improved code for the same approach can be produced in the future, so I don't want to paste the full code and explain every why and what.
A brief about iCAM
When you provision things with Terraform you provide a vars.json which holds all the variables. It is a good cook-once, eat-all-the-time approach: you bake a template and then consume it from your applications, from other scripts, or even as a human. iCAM makes it even prettier by auto-generating a GUI based on your vars.json. So you upload a vars.json and a main.tf to iCAM and you have a GUI-based consumer of your baked cookies, aka Terraform templates.

resource "ibm_service_instance" "service" {
  name                        = "${var.instancename}"
  space_guid                  = "${data.ibm_space.spaceData.id}"
  service                     = "compose-for-postgresql"
  plan                        = "${var.plan}"
  parameters                  = { db_version= "${var.db_version}" , cluster_id= "${var.cluster_id}" }
}

You can of course choose the service name from one of the following:
  • compose-for-elasticsearch
  • compose-for-etcd
  • compose-for-janusgraph
  • compose-for-mongodb
  • compose-for-mysql
  • compose-for-postgresql
  • compose-for-rabbitmq
  • compose-for-redis
  • compose-for-rethinkdb
  • compose-for-scylladb

At the end it emits some outputs, and we want to use them for the post-provisioning stuff.


locals {
  uri_cli = "${lookup(ibm_service_key.serviceKey.credentials, "uri_cli", "")}"
  split   = "${split(" ", local.uri_cli)}"
  pass    = "${element(split("=", element(local.split, 0)), 1)}"
  host    = "${element(split("=", element(local.split, 3)), 1)}"
  port    = "${element(split("=", element(local.split, 4)), 1)}"
}

So, as per the IBM Terraform git docs, you get all the access info via ibm_service_key.serviceKey.credentials.
In locals we extract some of that info to use for our next tasks.
Terraform is not GA yet, and it is written in Go, whose main reason for existence is parallel processing. So you want to halt everything for a few seconds after the database-creation code, to make sure the instance is ready to take requests from you again.

resource "null_resource" "delay" {
provisioner "local-exec" {
command = "sleep 100"
}
depends_on = ["ibm_service_key.serviceKey"]
}

You can do that like this. Here I have created a dependency so that it runs only after the "ibm_service_key.serviceKey" resource.

You want to use the following to make a connection to the deployed instance. More info can be found in the Terraform > provider > postgresql docs.

provider "postgresql" {
  version  = "0.1.0"
  host     = "${local.host}"
  port     = "${local.port}"
  database = "${var.database}"
  username = "${var.admin}"
  password = "${local.pass}"
}

resource "random_string" "user1" {
length = 40
min_lower = 10
min_upper = 10
min_numeric = 10
min_special = 10
}

Generate a password for user2 the same way. Then you can use the generated password as an input for user1 or user2.
resource "postgresql_role" "user1" {
name = "${var.monitoring_user}"
login = true
password = "${random_string.user1.result}"
connection_limit = 5
depends_on = ["null_resource.delay"]
}

Even though Terraform and Vault are both from HashiCorp, the integration only reads from Vault; it does not write to it the way we want. So we had to fall back to a shell script.


SECRET_ID="$(curl -sSkX POST -H "X-Vault-Token: $ROOT_TOKEN" <fixed address+location at vault>/secret-id | jq -r ".data.secret_id")"
CLIENT_TOKEN="$(curl -sSkX POST -d "{ \"role_id\": \"$ROLE_ID\", \"secret_id\": \"$SECRET_ID\" }" <$VAULT_ADDR>/v1/auth/approle/login | jq -r ".auth.client_token")"
curl -sSkX POST -H "X-Vault-Token: $CLIENT_TOKEN" -d "{ \"data\": { \"instance2\": \"dummy_name2\", \"admin\": \"password\" }}" https://<vaultaddress>/v1/secret/<myfixedpath>/instance_name

Above is an abstract example of how to write to HashiCorp Vault. Vault highly recommends that you configure some of the variables above as environment variables. But running the script directly from Terraform as an inline local-exec command is a pain, since it will complain about characters and misinterpret many things as its own variables. It is better to write the actual script with dummy values to a shell script file and then use sed to replace those dummy values with the actual values.
variable "vault_commands" {
type = "string"
default = <<EOF
some dummy commands with dummy_value1 dummy_value2
EOF
}


resource "null_resource" "shell_file" {
provisioner "file" {
connection {
type = "ssh"
user = "root"
host = "1.1.1.1"
password = "root_pass"
}

content = "${var.vault_commands}"
destination = "${var.instancename}.sh"
}

depends_on = ["null_resource.delay1"]
}

resource "null_resource" "vault_write" {
provisioner "remote-exec" {
connection {
type = "ssh"
user = "root"
host = "1.1.1.1"
password = "root_pass"
}
inline = [
"sed -i 's/dummy_value1/${var.realvalue1}/g' ${var.instancename}.sh",
"sed -i 's/dummy_value2/${local.realvalue2}/g' ${var.instancename}.sh",
"chmod +x ${var.instancename}.sh",
"./${var.instancename}.sh",
"rm -rf ${var.instancename}.sh",
]
}
depends_on = ["null_resource.delay2"]
}

In the above snippet I am letting iCAM/Terraform create a shell script on a remote Linux VM, then using sed to replace the dummy values in my template shell script with actual values, making it executable, executing it, and finally deleting it.
You follow a similar approach for other activities like whitelisting IPs etc. using curl (see https://www.compose.com/articles/the-ibm-cloud-compose-api/): create a remote script, execute it and delete it. Don't worry too much about the code, as it will become obsolete from version to version, but the approach will help you anytime.


Monday, September 17, 2018

DBaaS: postgresql or mysql ?!

So recently I had to spin up some databases on IBM Cloud using IBM Compose for a bank. I noticed that most of the important PostgreSQL settings require access to the OS on which it is installed, which makes it less ideal for DBaaS (database as a service). We are using IBM's very own iCAM, aka IBM Cloud Automation Manager, which uses the Compose feature on IBM Cloud. iCAM itself uses Terraform for provisioning and auto-generates a GUI based on your vars.json. You should try it, it's free to explore.
Example:
We need to change the user authentication method from ident to md5. You need access to pg_hba.conf, which will usually be at /opt/postgresql/9.5/data. So if you are someone providing DBaaS (ElephantSQL, Azure, GCP, AWS), you will be limiting users on some key configurations, since you cannot give all your users access to the pg_hba.conf file.
Another example:
I need to whitelist some IP addresses.
How it is done in PostgreSQL (in pg_hba.conf):
local      database  user  auth-method  [auth-options]
host       database  user  address  auth-method  [auth-options]
hostssl    database  user  address  auth-method  [auth-options]
hostnossl  database  user  address  auth-method  [auth-options]
host       database  user  IP-address  IP-mask  auth-method  [auth-options]
hostssl    database  user  IP-address  IP-mask  auth-method  [auth-options]
hostnossl  database  user  IP-address  IP-mask  auth-method  [auth-options]
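For example, a concrete whitelist entry with md5 auth could look like this (the database, user and address are placeholders), appended to pg_hba.conf and followed by a reload; this is exactly the kind of OS-level access a DBaaS won't give you:

echo 'hostssl  mydb  myuser  203.0.113.27/32  md5' >> /opt/postgresql/9.5/data/pg_hba.conf
pg_ctl reload -D /opt/postgresql/9.5/data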

How it is done in MySQL:
GRANT SELECT, SHOW VIEW
ON $db_name.*
TO $userName@`83.141.3.27` IDENTIFIED BY '$userPassword';
I actually like PostgreSQL, since it offers a lot of good stuff and my favorite JSON-type data input, but for a newer and younger player than MySQL to keep such configuration options at the OS layer is a bummer. So yes, if you are someone who would need to edit pg_hba.conf for whatever reason, then go for MySQL instead. MySQL has started offering JSON-type data input too.
The only way to get around this is to use the REST API of the cloud provider and try to do it there. If you are doing it on IBM Bluemix then this will help you out:
https://www.compose.com/articles/the-ibm-cloud-compose-api/#availableapicalls

Friday, September 14, 2018

ssh to hosts without credentials

[TLDR]

sudo su
yum install nano -y
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sudo systemctl restart sshd
ssh-keygen -t rsa

ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>

So one of the main requirements for the members of a Kubernetes cluster is to be able to ssh to each other without credentials.
I tested this against CentOS 7 virtual machines on AWS and DigitalOcean.
Line by line:
1. Switch to the root user.
2. Install nano (optional).
3. Enable the password login option in sshd_config.
4. Restart the ssh service.
5. Generate an RSA ssh key.
7-12. Copy the ssh key to the target machines.
[OR]
Just copy-paste the whole thing on all machines.
Note: don't enter a passphrase while generating the key. A quick verification is shown below.
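Assuming the key was copied correctly, a quick check from the machine that generated it should run a remote command without any password prompt:

ssh root@<hostname or ip address> 'hostname'   # prints the remote hostname, no password asked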