
Saturday, November 3, 2018

Problem with CentOS 7 VM Guest tools, screen resize on VirtualBox

First of all, I know that most so-called developers care mostly about Mac, Apple and its ecosystem.
The developers who actually build for Linux tend to care nothing about beginners or intermediates, or about anyone who can't write a custom kernel, doesn't prefer to work mostly in the CLI, or likes anything other than C.
Then there are the Windows devs, who care more about quantity than quality (Microsoft).
I am on Windows, unfortunately, since I like gaming and most enterprises still hand their employees Windows laptops at the office.
I would like to use VMware Workstation, but it isn't free, and it doesn't have a windowed mode for VMs the way VirtualBox does. Docker, containers, Kubernetes, Vagrant and all these upcoming devops tools mainly (and often only) support VirtualBox, since it is cross-platform and free. So I am forced to use VirtualBox.
Now, I want the screen resolution of my CentOS guest to be at least 1080p, but in any VM (including VMware) you will never see 1080p offered as a resolution by default. So, hoping that it will show up once the additional guest tools are installed, you go ahead and try to install them.


yum install perl gcc dkms kernel-devel kernel-headers make bzip2  # build prerequisites for the Guest Additions kernel modules
yum update kernel*                                                # bring the kernel up to date so kernel and headers match
yum install -y kernel-devel kernel-devel-$(uname -r)              # headers for the running kernel
ls -l /usr/src/kernels/$(uname -r)                                # verify the kernel sources are actually there

Now mount the guest tools image. It will prompt you to run it; run it, install it and reboot. You will see that the
Auto-resize guest display and
Virtual screen 1
options are all grayed out.
Now god only knows why and how. Once the auto-resize guest display option becomes available again, you get to choose the resolutions under Virtual screen 1. I usually
reboot 3-4 times,
power off,
wait for a minute,
close the VirtualBox application,
start VirtualBox again,
start the VM.
It usually takes 2-3 hours to get a CentOS VM to a stage where I can change its resolution. It is so sad that Oracle, willingly or unwillingly or for whatever messed-up reason, does not let its engineers create a repository from which one could just download and install the guest tools from inside the VM via yum or rpm. Even the paid VMware Workstation doesn't have a repository for Linux guests. What a shame that they still want us to mount an ISO and install from it. I think it is pure arrogance and lack of competition. Look at Intel & AMD, or NVIDIA and Radeon graphics cards: nobody improves unless they have to, unless it is a threat to their well-being.
YOU HAVE TO DO THIS EVERY TIME YOU UPDATE YOUR SYSTEM. :(
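Since the whole dance has to be repeated after every kernel update, here is a minimal sketch of the reinstall steps, assuming the Guest Additions CD image has been inserted via the VirtualBox Devices menu; the mount point and installer path are assumptions and may differ on your setup.

# rough sketch - rerun after every kernel update
sudo yum install -y gcc make perl dkms kernel-devel-$(uname -r)   # build deps for the new kernel
sudo mkdir -p /mnt/cdrom && sudo mount /dev/cdrom /mnt/cdrom      # mount the inserted Guest Additions ISO
sudo /mnt/cdrom/VBoxLinuxAdditions.run                            # rebuild and install the kernel modules
sudo reboot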

Friday, November 2, 2018

Install and Default python 3.x on CentOS 7


TLDR
 1  # install the IUS repository for CentOS 7
 2  sudo yum -y install https://centos7.iuscommunity.org/ius-release.rpm
 3  # install python 3.6
 4  sudo yum -y install python36u
 5  # install pip for python 3.x (plain pip is for python 2.x)
 6  sudo yum -y install python36u-pip
 7  # this shows the default python 2.x
 8  python -V
 9  # this is how you run python 3.x, but it is inconvenient
10  python3.6 -V
11  # so set aliases
12  # so that running python or pip resolves to the 3.6 binaries
13  alias python=python3.6
14  alias pip=pip3.6
15  # persist the aliases in ~/.bashrc and reload it
16  echo -e "alias python=python3.6\nalias pip=pip3.6" >> ~/.bashrc && source ~/.bashrc
17  # install virtualenv
18  pip install virtualenv
19  # how to create a virtual environment for python3?
20  # virtualenv <name of the virtual environment>

When we get our Linux box it ships with Python 2.7.x, but we know that most development and future maintenance is moving towards Python 3.x. So let us do these:

  1. Install the CentOS IUS repository
  2. Install Python 3.x
  3. Install pip 3.x
  4. Make Python 3 your default python
  5. Make pip3 your default pip

Run line 2 to install the IUS repository.
Run line 4 to install Python 3.6.
Run line 6 to install pip for Python 3.x.
Run lines 13-16 for every user account where you want Python 3.x and pip 3.x to be the default.
Log in as that user and run lines 13-16.
Bonus: run line 18 to get virtualenv installed on your system.
Now run python and it defaults to Python 3.6; do exit() to exit out of Python.
Run pip and it will default to pip 3.6.
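Since lines 19-20 only hint at virtualenv usage, here is a minimal sketch of creating and using a Python 3 virtual environment (the environment name myenv is just an example):

virtualenv -p python3.6 myenv     # create a virtual environment using the python 3.6 interpreter
source myenv/bin/activate         # activate it; python and pip now point at the environment's own copies
pip install django                # packages now land inside the environment only
deactivate                        # leave the environment when done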

Saturday, September 29, 2018

Django development to production best practices.

So this is going to be a note, a reminder and a collection of lessons for the future me and for others like me who want to learn, plan, develop and deploy a Django web server.

1. Start from the bottom
Start with the bare minimum unless you absolutely need something that can't be achieved with the defaults that ship with Django. Example: unless you need your data stored as JSON, you don't need a full-fledged database like PostgreSQL or MySQL until you go to production. Start with just a virtualenv and Django in it.

2. Stick to the mature rather than the young and trendy
Example: I switched to Materialize CSS and later, during production, found out that the django-allauth package I use does not work well with Materialize CSS, so I switched back to the time-tested Bootstrap. If I had a lot of prior experience with Django development and deployment I might have been brave enough to try the trendy stuff, but I wanted to be safe rather than sorry. I wanted to deploy my app on GKE (Google Kubernetes Engine), and explored Amazon ECS and Azure AKS too, but it added too much overhead for the size of the development team (just I, me and myself). There are a lot of security aspects you need to figure out, and it is expensive too. It is easier to find tutorials, guides and documentation for things that are mature, old, trusted and tested by pros than for the new kids on the block.

3. Start small, dream big
I went with a single VM at the initial stage of my project, with a plan to add more VMs later and distribute the responsibilities among those cloud servers (app server, DB server, web server, etc.). If you bite the bullet and go with a 3-tier setup from the beginning:
  1. you will have to shell out a lot more
  2. performance issues will arise
  3. you have to monitor a lot more servers and scale each of them according to its load
  4. you might even hit the network traffic limit your cloud provider has for your server (unlikely, but possible)
4. One life is too short to reinvent everything
Reuse the tools and packages developed by pros who have spent the better part of their lives on them. Example: django-allauth. It is a well-trusted, widely used package with many stars, forks and followers on GitHub. Build on others' work, because they built on others' work too.

5. Deploy your app before it is perfect.
Most people fall into this pit. Do not worry about the looks yet. If your application has most of its functionality ready, deploy it, so you learn what you need to start implementing now, instead of developing a full-fledged application and then realizing you have to redo a lot of things because it bombs badly in production.

6. Add only one element at a time
So let us say you currently have a Django app (with the default SQLite DB) ready to be deployed. Add these elements in this order to avoid graying out most of your hair (if you have any).
  1. Add gunicorn and NGINX (don't choose alternatives) and make them work using a shell script.
  2. Now make that script part of a systemd service (yes, use Linux) so that systemd monitors and runs that script (see the sketch after this list). You can alternatively choose supervisor or upstart, but they seem to add another layer of tooling to learn and maintain, and another layer of failure. Try to stick with something embedded in the operating system itself. I found the systemd option easier to learn; I had nothing but failures with supervisor, upstart and the likes of it.
  3. Now replace SQLite with a production-grade database server, preferably an SQL server which also offers NoSQL features. I chose PostgreSQL since I had some experience with it, unlike MySQL which I have not played around with at all.
  4. Now add a domain (if you don't have one, buy one for production and one for development or testing purposes).
    1. web browser <--> domain provider <--> cloud provider
  5. Once you are able to access your site via a browser, add Cloudflare to your site (to protect yourself against DDoS attacks).
    1. web browser <--> cloudflare <--> domain provider <--> cloud provider
  6. Now sign up with an SMTP provider to send mails and integrate that with Cloudflare.
  7. Add HTTPS using certbot and Let's Encrypt. Make sure it auto-renews before it expires.
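As promised in step 2, here is a minimal sketch of what such a systemd unit could look like. Every path, user and module name in it (/srv/myproject, the myproject.wsgi module, the venv location, the django user) is a placeholder you would replace with your own:

# write a basic gunicorn unit file (adjust paths, user and module to your project)
sudo tee /etc/systemd/system/gunicorn.service > /dev/null <<'EOF'
[Unit]
Description=gunicorn daemon for the Django app
After=network.target

[Service]
User=django
WorkingDirectory=/srv/myproject
ExecStart=/srv/myproject/venv/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 myproject.wsgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload    # pick up the new unit
sudo systemctl enable gunicorn  # start on boot
sudo systemctl start gunicorn   # start it now

NGINX then simply proxies requests to 127.0.0.1:8000 (or to a unix socket if you prefer).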
7. Security
  1. Search 'security settings for django' and try to implement as much as possible from the official Django documentation page in the results.
  2. Search 'django vulnerability scan' and use those tools to scan your site for any vulnerabilities, then fix as many of them as possible;
    start with https://www.ponycheckup.com.
  3. Security is a journey and not a destination, so keep a list of the packages in your project, their versions and their specific vulnerabilities.
8. Code management
  1. Add a config.py to your production and development servers and to the local repo of every developer, and add it to .gitignore. Check this: http://www.cloudishes.com/2018/09/git-and-ignoring-files.html
  2. Have a master branch and a dev branch. No more than 2 people (or 2 percent of your total developer count) should have write access to master; make it a protected branch if you are using GitHub Pro/Enterprise, so the only merges into master come from dev. Anyone can create a branch from dev, but before merging back it should be approved by at least 1 or 2 reviewers.
  3. Once the code is on the dev branch and has been tested on the development server, push it to the remote master branch.
  4. On your production server, do a pull of the dev branch only and see how it goes for a day. That means the old working code stays only on the production master branch, while the remote master, remote dev, and the development server's dev and master all have the same latest code. After 24 hours, 2 days, or whatever window you have decided on, if you do not face any major problems, then on the production server switch to master, pull down master from the remote, commit, and restart your Django server's systemd service, which now makes it run the latest code (a rough sketch of this flow follows the list). I know this is not a proper CI/CD pipeline, but as I said earlier: add only one element at a time.
  5. Once you are comfortable and confident with all the tools you are already using, add other devops tools if you like. Try to choose tools which require the least maintenance, capex and opex; for example, GitLab has an online auto CI/CD tool if your repository is hosted on their site.
  6. Never merge your code to dev until you have tested it locally and are 100% sure it works smoothly with everything else in your project. Never push your code to local or remote master unless you are absolutely sure and your peers too have individually tested and okayed it. Code, test, review (self and others), then push up.
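Here is a minimal sketch of that promotion flow on the production box, assuming the systemd service from the earlier sketch is called gunicorn and the remote is origin:

# try the dev branch in production for the trial window
git checkout dev && git pull origin dev
sudo systemctl restart gunicorn

# after the trial window, promote and run master
git checkout master && git pull origin master
sudo systemctl restart gunicorn   # now serving the latest code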
9. Back it up
1. Make sure you regularly back up your master branch, your database and the server itself on every cycle (every sprint, a particular day of each week, or a predefined date or event). A minimal cron sketch follows.
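For the database part, something as simple as a nightly pg_dump in cron goes a long way; the database name, user and backup path below are placeholders:

# crontab entry: dump the DB every night at 02:00 (add via `crontab -e` for a user that can read the DB)
0 2 * * * pg_dump -U myuser mydb | gzip > /var/backups/mydb-$(date +\%F).sql.gz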


Friday, September 28, 2018

configuring IBM bluemix provider for terraform on windows

There is no official IBM provider plugin available for Terraform for now. We can easily configure it manually:
  1. Download the Windows 64-bit version of the IBM provider plugin from https://github.com/IBM-Cloud/terraform-provider-ibm/releases
  2. Extract the exe from that archive
  3. Create a terraform.d directory at any place of your choosing
  4. Create a plugins directory inside the above directory
  5. Create a windows_amd64 directory inside the above directory
  6. Copy the downloaded exe file into both the windows_amd64 and the plugins directory
  7. Now copy the terraform.d directory to all of the directories below that exist on your machine
    1. C:\Users\<username>\AppData
    2. C:\Users\<username>\AppData\Local
    3. C:\Users\<username>\AppData\Local<whatever>
    4. C:\Users\<username>\AppData\Roaming
Now create a main.tf file with just the following line
provider "ibm" {}
and do a terraform init and it will work.

Wednesday, September 26, 2018

git and ignoring files

So if you thought that creating a .gitignore file in your git repository's root was enough, you are wrong, even though that is the official way to ignore files. Here is the problem.

  1. I started working on a project and stored all secrets in secrets.py
  2. 7 more people joined the project and we decided that we should not commit secrets.py to the dev branch anymore.
  3. Everybody added it to their .gitignore file, but no go.
What to do?

git rm <filepath>/<filename>            # removes the file from the repo and deletes it from your working tree
git rm --cached <filepath>/<filename>   # stops tracking the file but keeps it on disk (use this for secrets.py)

Since the file was already being tracked, .gitignore alone will not untrack it. So remove it from tracking and you are good to go.
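Putting it together for the secrets.py case above, the full sequence looks roughly like this (the file name and commit message are just an example):

echo "secrets.py" >> .gitignore          # ignore it going forward
git rm --cached secrets.py               # stop tracking it without deleting the local copy
git commit -m "stop tracking secrets.py"
git push                                 # everyone pulls this and keeps their own local secrets.py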


Wednesday, September 19, 2018

Hashicorp Terraform, Vault, IBM CAM, postgresql on IBM Cloud

Here is the total workflow:

  1. provision a PostgreSQL (Compose for PostgreSQL) database instance on IBM Cloud
  2. create 2 users
  3. set strong 40-character passwords for these 2 new users
  4. set a connection limit of 200 on the instance
  5. distribute these connections like this: 90% to one user, 5% to the other user, 5% to admin
  6. whitelist IP addresses for this DB
  7. store the secrets about the provisioned instance in Hashicorp Vault
    1. get a secret id by giving the root token to Vault
    2. using this secret id + role id, get a client token
    3. create a JSON payload containing these secrets
    4. using the client token, write this JSON data to a dynamic path in Hashicorp Vault so that applications/users can consume it as and when needed
  8. send an email to someone with all these details
Tools used:
IBM Cloud Automation Manager (iCAM)
Hashicorp Terraform
Hashicorp Vault
Bash scripts
Now this post is going to be a TLDR for the future me, since I like to keep things as notes; you can't really trust your brain to remember the flow and approach you took. The coding part can be figured out later, since improved code for the same approach will always be available, so I don't want to paste the full code and explain the why and what of it.
A brief about iCAM:
When you provision stuff with Terraform you provide a vars.json which holds all the variables. It is a good cook-once, eat-all-the-time approach: you bake a template and then consume it from your applications, other scripts or even humans. iCAM makes it even prettier by auto-generating a GUI based on your vars.json. So you upload a vars.json and a main.tf to iCAM and you have a GUI-based consumer of your baked cookies, aka Terraform templates.

resource "ibm_service_instance" "service" {
  name                        = "${var.instancename}"
  space_guid                  = "${data.ibm_space.spaceData.id}"
  service                     = "compose-for-postgresql"
  plan                        = "${var.plan}"
  parameters                  = { db_version= "${var.db_version}" , cluster_id= "${var.cluster_id}" }
}

You can of course choose the service name from one of the following.
  • compose-for-elasticsearch
  • compose-for-etcd
  • compose-for-janusgraph
  • compose-for-mongodb
  • compose-for-mysql
  • compose-for-postgresql
  • compose-for-rabbitmq
  • compose-for-redis
  • compose-for-rethinkdb
  • compose-for-scylladb

At the end it emits some outputs, and we want to use them for the post-provisioning steps.


locals {
  uri_cli = "${lookup(ibm_service_key.serviceKey.credentials, "uri_cli", "")}"
  split   = "${split(" ", local.uri_cli)}"
  pass    = "${element(split("=", element(local.split, 0)), 1)}"
  host    = "${element(split("=", element(local.split, 3)), 1)}"
  port    = "${element(split("=", element(local.split, 4)), 1)}"
}

So, as per the IBM Terraform docs on GitHub, you get all the access info via ibm_service_key.serviceKey.credentials.
In locals we extract some of that info and use it for our next tasks.
Terraform is not GA yet, and it is written in Go, whose main reason of existence is parallel processing. So you want to halt everything for a few seconds after the DB-creation code, to make sure the instance is ready to take requests from you again.

resource "null_resource" "delay" {
  provisioner "local-exec" {
    command = "sleep 100"
  }
  depends_on = ["ibm_service_key.serviceKey"]
}

You can do that like this. Here I have created a dependency so that it runs only after the "ibm_service_key.serviceKey" resource.

You want to use
provider "postgresql" {
  version  = "0.1.0"
  host     = "${local.host}"
  port     = "${local.port}"
  database = "${var.database}"
  username = "${var.admin}"
  password = "${local.pass}"
}
for making a connection to the deployed instance. More info can be found in the Terraform > Providers > PostgreSQL docs.

resource "random_string" "user1" {
  length      = 40
  min_lower   = 10
  min_upper   = 10
  min_numeric = 10
  min_special = 10
}

Generate a password for user2 the same way. Then you can use the generated password as an input for user1 or user2.
resource "postgresql_role" "user1" {
  name             = "${var.monitoring_user}"
  login            = true
  password         = "${random_string.user1.result}"
  connection_limit = 5
  depends_on       = ["null_resource.delay"]
}

Even though Terraform and Vault are both from Hashicorp, the integration is there to read from Vault, not to write to it directly the way we want. So we had to fall back to a shell script.


SECRET_ID="$(curl -sSkX POST -H "X-Vault-Token: $ROOT_TOKEN" <fixed address+location at vault>/secret-id | jq -r ".data.secret_id")"
CLIENT_TOKEN="$(curl -sSkX POST -d "{ \"role_id\": \" $ROLE_ID\", \"secret_id\": \" $SECRET_ID\" }" <$VAULT_ADDR>/v1/auth/approle/login | jq -r ".auth.client_token")"
curl -sSkX POST -H "X-Vault-Token: $CLIENT_TOKEN" -d "{ \"data\": { \"instance2\": \"dummy_name2\", \"admin\": \"password\" }}" https://<vaultaddress>/v1/secret/<myfixedpath>/instance_name

The above is an abstract example of how to write to Hashicorp Vault. Vault strongly recommends configuring some of the variables above as environment variables. Running the script directly from Terraform as a local-exec inline command is a pain, since Terraform will complain about characters and misinterpret many things as its own variables. It is better to write the actual script with dummy values to a shell script file and then use sed to replace those dummy values with actual values.
variable "vault_commands" {
  type    = "string"
  default = <<EOF
some dummy commands with dummy_value1 dummy_value2
EOF
}


resource "null_resource" "shell_file" {
  provisioner "file" {
    connection {
      type     = "ssh"
      user     = "root"
      host     = "1.1.1.1"
      password = "root_pass"
    }

    content     = "${var.vault_commands}"
    destination = "${var.instancename}.sh"
  }

  depends_on = ["null_resource.delay1"]
}

resource "null_resource" "vault_write" {
  provisioner "remote-exec" {
    connection {
      type     = "ssh"
      user     = "root"
      host     = "1.1.1.1"
      password = "root_pass"
    }
    inline = [
      "sed -i 's/dummy_value1/${var.realvalue1}/g' ${var.instancename}.sh",
      "sed -i 's/dummy_value2/${local.realvalue2}/g' ${var.instancename}.sh",
      "chmod +x ${var.instancename}.sh",
      "./${var.instancename}.sh",
      "rm -rf ${var.instancename}.sh",
    ]
  }
  depends_on = ["null_resource.delay2"]
}

In the above snippet I am letting iCAM/Terraform create a shell script on a remote Linux VM, then using sed to replace the dummy values in my template shell script with actual values, making it executable, executing it and then deleting it.
https://www.compose.com/articles/the-ibm-cloud-compose-api/
You follow a similar approach for the other activities, like whitelisting IPs, using curl: create a remote script, execute it and delete it. Don't worry too much about the code, as it will become obsolete from version to version, but the approach will help you anytime.

Monday, September 17, 2018

DBaaS: postgresql or mysql ?!

So recently I had to spin up some databases on IBM Cloud using IBM Compose for a bank. I noticed that most of the important PostgreSQL settings require access to the OS on which it is installed, which makes it less ideal for DBaaS (database as a service). We are using IBM's very own iCAM, aka IBM Cloud Automation Manager, which uses the Compose feature on IBM Cloud. iCAM itself uses Terraform for provisioning and auto-generates a GUI based on your vars.json. You should try it; it's free to explore.
ex:-
We need to change the user authentication from ident to md5. You need access to pg_hba.conf, which will usually be at /opt/postgresql/9.5/data. So if you are someone who is providing DBaaS (ElephantSQL, Azure, GCP, AWS), you will be limiting users from some key configurations, since you cannot give all your users access to the pg_hba.conf file.
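On a self-managed box (where you do have OS access) that change is just an edit plus a reload; this is a rough sketch, and the data directory below simply follows the path mentioned above, so adjust it to your install:

sudo sed -i 's/\bident\b/md5/g' /opt/postgresql/9.5/data/pg_hba.conf   # switch the auth-method column from ident to md5
sudo su - postgres -c "pg_ctl reload -D /opt/postgresql/9.5/data"      # tell postgres to re-read pg_hba.conf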
Another example:
I need to whitelist some IP addresses.
How it is done in PostgreSQL (pg_hba.conf entries):
local      database  user  auth-method  [auth-options]
host       database  user  address  auth-method  [auth-options]
hostssl    database  user  address  auth-method  [auth-options]
hostnossl  database  user  address  auth-method  [auth-options]
host       database  user  IP-address  IP-mask  auth-method  [auth-options]
hostssl    database  user  IP-address  IP-mask  auth-method  [auth-options]
hostnossl  database  user  IP-address  IP-mask  auth-method  [auth-options]

How it is done in MySQL:
GRANT SELECT, SHOW VIEW
ON $db_name.*
TO $userName@`83.141.3.27` IDENTIFIED BY '$userPassword';
I actually like PostgreSQL, since it offers a lot of good stuff, including my favorite JSON-type data input, but the decision to keep such configuration options at the OS layer is a bummer for DBaaS. So yes, if you are someone who needs to edit pg_hba.conf for whatever reason, then go for MySQL. MySQL has started offering JSON-type data input too.
The only way to get around this is to use the REST API of the cloud provider and try to do it there. If you are doing it on IBM Bluemix then this will help you out:
https://www.compose.com/articles/the-ibm-cloud-compose-api/#availableapicalls

Friday, September 14, 2018

ssh to hosts without credentials

[TLDR]

 1  sudo su
 2  yum install nano -y
 3  sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/g' /etc/ssh/sshd_config
 4  sudo systemctl restart sshd
 5  ssh-keygen -t rsa
 6
 7  ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
 8  ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
 9  ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
10  ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
11  ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>
12  ssh-copy-id -i ~/.ssh/id_rsa.pub root@<hostname or ip address>

So one of the main requirements for the members of a Kubernetes cluster is to be able to ssh to each other without credentials.
I tested this against CentOS 7 virtual machines on
AWS and
DigitalOcean.
Lines:
1. switch to the root user
2. install nano (optional)
3. enable the password login option in sshd_config
4. restart the ssh service
5. generate an RSA ssh key
7-12. copy the ssh key to the target machines (one line per target; see the loop sketch below)
[OR]
just copy-paste the whole thing on all machines.
Note: don't enter a passphrase while generating the key.
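If you have more than a handful of targets, a small loop over a host list saves some copy-pasting; the host names below are placeholders:

# push the key to every node in one go (assumes password auth is still enabled on the targets)
for h in node1 node2 node3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h
done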

Tuesday, August 14, 2018

Deploying a postgresql database instance on aws with terraform

So I needed to get a PostgreSQL instance deployed on AWS with Terraform. I know this can also be done via the AWS CLI or Python boto3, but it is easier with Terraform, which is supposed to be non-coder friendly. You do, however, need to search a lot online and in the official GitHub repos and documentation.


 1  provider "aws" {
 2    access_key = "ACCESS_KEY"
 3    secret_key = "SECRET_KEY"
 4    region     = "us-east-1"
 5  }
 6
 7  resource "random_string" "password" {
 8    length  = 30
 9    special = true
10    number  = true
11    lower   = true
12    upper   = true
13  }
14
15  resource "aws_db_instance" "default" {
16    allocated_storage = 10
17    storage_type      = "gp2"
18    engine            = "postgres"
19    engine_version    = "9.5"
20    instance_class    = "db.t2.micro"
21    name              = "postgres"
22    username          = "postgres"
23    password          = "${random_string.password.result}"
24  }

line
1. You are telling Terraform which provider this should be deployed with.
2. The access key, which you can take from the AWS console while creating the user.
3. The secret key, likewise from the AWS console.
4. If you know AWS then you know what this is; you can choose whatever region you want for this testing purpose.
7-13. This is Terraform's feature to generate a random string; here we have used some parameters from https://www.terraform.io/docs/providers/random/r/string.html, which basically creates a random string of 30 characters and mandates that it contain special, numeric, lower-case and upper-case characters.
15-24. Here you are telling Terraform what resource to deploy. In our case it is "aws_db_instance".
23. Uses the password generated by the code on lines 7-13 for the PostgreSQL instance.
Next I have to change the provider to IBM Cloud and use Hashicorp's Vault for credentials management. That will be another blog post when I figure it out.

Thursday, August 9, 2018

Deploying an instance on AWS via terraform

So I am trying to explore Terraform and how to deploy instances on the cloud using it.
First follow this to configure the AWS CLI on your Windows machine:
http://www.cloudishes.com/2017/12/amazon-aws-automation.html
Not mandatory, but recommended.
Refer to my previous post on installing Terraform on Windows.
Now

provider "aws" {
  access_key = "<access_key>"
  secret_key = "<secret_key>"
  region     = "ap-south-1"
}

resource "aws_instance" "example" {
  ami           = "ami-cce794a3"
  instance_type = "t2.micro"
}

Create an aws.tf file and paste this code.
Run

terraform init
terraform apply

from the same directory; it might ask you to type yes to approve the plan. Do that and you are done. Now you can go see your instance on AWS and it is live.
You will notice that it asks you for a plan approval every time you do this. Instead, you can create a build plan first and then apply that.

terraform plan -out build-plan
terraform apply build-plan

Wednesday, August 8, 2018

getting started with terraform

So the Terraform installation instructions on their site are not straightforward. Here is how I did it.
Linux (VM): CentOS 6

Download the binaries
https://www.terraform.io/downloads.html


yum install -y zip unzip          # install zip and unzip if not already present
unzip terraform_*.zip             # unzip the terraform archive into the current directory
echo $PATH                        # list the directories on the runtime path
mv terraform /usr/local/bin/      # move the terraform binary onto the runtime path
terraform -v                      # check the terraform version

windows

  • Download from https://www.terraform.io/downloads.html
  • Unzip it.
  • Create a terraform directory directly under C:\
  • Move terraform.exe into C:\terraform
  • Launch our good old command window and run 'set PATH=%PATH%;C:\terraform'
  • Since we want it to be accessible via PowerShell too, go to Environment Variables and add 'C:\terraform' to your PATH.
  • Run terraform -v
Good luck.

Friday, July 27, 2018

Kubernetes on windows


  1. I hope you have enough rights on your Windows machine to get started.
  2. Turn Windows features on or off > disable Hyper-V; enable Containers
  3. Install VirtualBox
  4. Search for 'docker toolbox for windows' and install it
  5. The usual method of installing the Kubernetes CLI via PowerShell won't work most of the time, so install Chocolatey:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

     6. Install the kubernetes cli (kubectl)

choco install kubernetes-cli

Run minikube now on your Windows machine to get started, and good luck.
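minikube itself can also be installed through Chocolatey. A minimal sketch follows; the VirtualBox driver flag matches the VirtualBox install from step 3, but verify the flag name against your minikube version:

choco install minikube
minikube start --vm-driver=virtualbox
kubectl get nodes   # should list the single minikube node once it is up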

Tuesday, July 24, 2018

Learn with me Python 3.7

1.Python 3.7 Introduction
2.Python - Input Output [3.7]
3.Python - Data Types [3.7]
4.Python - Data Types - Integers [3.7]
5.Python - Data Type - numbers - float and ...[3.7]
6.Python - Data Types - Strings [3.7]
7.Python - Data Types - Lists [3.7]
8.Python - Data Types - Tuple [3.7]
9. Python - Data Types - Sets [3.7]
10.Python - Data Types - Dictionary [3.7]
11.Python - Loops (for and while) [3.7]
12.Python - Operators [3.7]
13.Python - Explore and inspect [3.7]
14.Python - Shallow Copy and Deep Copy [3.7]
15.Python - Package management with pip and VirtualEnv [3.7]
16.Python - Math [3.7]
17.Python - formatting [3.7]
18.Python - Conditional statements (if elif else) (decision making) [3.7]
19.Python - flow control (break, continue, pass) [3.7]
20.Python - Functions, *args, **kwargs, recursions [3.7]
21.Python - Closures [3.7]
22.Python - Decorators [3.7]
23.Python - Exceptions and error handling [3.7]
24.Python - Variables [3.7]
25.Python - List Comprehensions [3.7]
26.Python - Set comprehensions [3.7]
27.Python - Dictionary comprehensions [3.7]
28.Python - Resource or context management [3.7]
29.Python - OOP - class [3.7]

Python 3.7 Introduction

So why do you wanna learn python? or why should you.

  1. Because I love it
  2. Beginners language
  3. easy to learn
  4. easy to teach
  5. readable code
  6. do more with less code
  7. faster development
  8. opensource
  9. cost effective
  10. Career or job security
  11. It is the master of many trades and jack of none.....
I can go on and on. It is one of the top tools used in
  1. web development
  2. application development
  3. Artificial Intelligence, Machine Learning, Deep Learning, Data Science
  4. Healthcare
  5. Internet of things [IOT]
  6. robotics
  7. automation (IT or anything else)
  8. mathematics, physics
  9. statistics
  10. finance
  11. weather, earthquake prediction to name a few
I am pretty sure I have missed many aspects of it, but you get the point. If you are someone who is planning a career and you want something which opens a lot of doors for you, not just one or two, then Python is the answer, since it makes you eligible in a lot of areas. Later you can branch out or up with Python into whichever stream you want.



Monday, July 23, 2018

Why containers and why not VMs?! or Vice Versa


I hope the above gives you a visual representation of what the container stuff is all about. Let me also list some major differences.

Scalability
A VM is limited by the hardware it runs on. The maximum memory, CPU or network resources a VM can have are capped by that hardware, and many a time also by the virtualization layer you are using: if it is VMware, my favorite one, then there is a cap on the maximum CPU and memory you can assign. It is a scale-up architecture, like a huge tall building with many floors; ultimately there is a limitation (call it gravity, sustainability or whatever) and you can't build a skyscraper tall enough to touch the moon.

Containers are scale-out. You can always have a building as wide as the earth allows it to be. Instead of getting big like a VM, a container multiplies or clones itself as and when needed, and the clones disappear automagically when the load is low.
Winner: container

Security
Containers are fairly new, and thus they aren't as secure as a VM architecture yet. It is not the fault of containers but of how the security tools are developed and who their main target audience is: most current IT security tools and software are designed for the traditional datacenter, not for a micro architecture. This can be countered if you develop your applications to be cloud-ready or cloud-native.
Winner: VM

Simplicity
A traditional VM architecture is easier and faster to deploy and get started with than a container-based implementation, unless you are starting off new and begin your development with a container-based architecture as your platform. The skill gap and industry readiness also add to this factor.
Winner: VM

Availability
The maximum availability a VM-based architecture provides is protection against a node failure: node1 goes down with VM1, but node2 has a copy of that VM and takes over without downtime. What if 3 nodes go down? Or 5? Containers offer a never-go-down architecture since they scale out: you can have copies of the container run on different nodes, just declare how many nodes you want to span them across, and that is it. An orchestration engine like Kubernetes or Docker Swarm will take care of the rest.
Winner: container

Portability
Let us say you need AI/ML frameworks for your project. You can spin up a Kubernetes cluster on AKS (Azure Kubernetes Service) to access Azure ML/DL frameworks, or do it on Google to access their AI/ML/DL frameworks, and they can easily span over to your on-premise datacenter. You can migrate your workload/containers between different cloud providers, including your on/off-premise datacenters. You might start off with Azure and tomorrow move to Google or AWS, or back to your on-premise hardware. That is all possible with containers, and this is a big win.
Winner: container

Did I miss anything? Do let me know.

Sunday, July 22, 2018

Django dynamic URLs for employees or users

Let us say you are creating a site, you want your users to sign up, and you want them to be redirected to their own page once they log in. Below are the URL formats of some well-known sites.
linkedin
https://www.linkedin.com/in/<username>/
facebook
https://www.facebook.com/<username>/
How can we do that?
It is extremely inefficient to have an HTML page created for each user, and it would be a very heavy load on your site.
How to achieve this?
By default django and allauth redirect the logged-in user to domain/profile, so let us use that to do some magic. Here is my urls.py:


 1  from django.contrib import admin
 2  from django.urls import path, include
 3  from . import views as core_views
 4  from stars import views as star_views
 5
 6  urlpatterns = [
 7      path('admin/', admin.site.urls),
 8      path('', core_views.welcome, name="welcome"),
 9      path('settings/', core_views.settings, name='settings'),
10
11      # allauth
12      path('accounts/', include('allauth.urls')),
13
14      # stars
15      path('profile/', star_views.profile, name="profile"),
16      path('<slug:pid>/', star_views.rprofile, name='rprofile'),
17  ]

Concentrate on lines 15 and 16. I am letting django/allauth redirect the URL to domain/profile and then using star_views.profile to redirect the request to star_views.rprofile, which is done like this:

from django.shortcuts import render
from django.contrib.auth.models import User
from django.http import HttpResponseRedirect, HttpResponse
from django.shortcuts import redirect
# Create your views here.

def profile(request):
    # dev-only base URL; use your real domain (or request.build_absolute_uri) in production
    site = 'http://127.0.0.1:8000'
    username = request.user.username
    url = f'{site}/{username}'
    return HttpResponseRedirect(url)

def rprofile(request, pid):
    # .filter().first() returns None instead of raising when the user does not exist
    u = User.objects.filter(username=pid).first()
    if u:
        return render(request, 'profile.html')

Now every user has his own url which will be easy to share.

Django Allauth signal to trigger something

So I am still testing the waters of Django and I am still like a baby swimmer with swim tires. I had some trouble using django-allauth signals to do something.
Django 2.0
Django allauth
All in a virtualenv.
So here is how you use signals to trigger some action. Go to https://github.com/pennersr/django-allauth/blob/master/docs/signals.rst to check out the signals allauth emits at various stages. I am interested in the signal which gets dispatched after the user signs up. I am using email+password to sign up and sign in.


 1  from django.db import models
 2  import time
 3  from django.contrib.auth.models import User
 4  from allauth.account.signals import user_signed_up, password_set
 5  from django.dispatch import receiver, Signal
 6
 7  # Create your models here.
 8  @receiver(user_signed_up)
 9  def employeeID(sender=User, **kwargs):
10      old_username = kwargs['user']
11      user = User.objects.get(username = old_username)
12      user.username = str(time.time()).split('.')[0]
13      user.save()

1-5. import modules.
8. mention the signal that we want to use.
As per the allauth docs at https://github.com/pennersr/django-allauth/blob/master/docs/signals.rst, this signal provides the following:
user_signed_up = Signal(providing_args=["request", "user"])
So the entire info received from the signal is passed on to the function at line 9 as **kwargs.
10. extract the signed-up user's username.
11. query the database for that user's record.
12. change the username to a (time-based) random number.
13. save the changes to the database.

So, when we use email+password only to sign up and sign in, allauth still creates a username from the user's email ID (without the @email.com part; so if I signed up with mail@email.com then my username will be mail). So I thought I would assign my own username, and it worked.

Thursday, June 28, 2018

Upgrading your vcenter server made easy with Hybrid Linked Mode

So whenever you upgrade an environment you must start with the HCL [hardware compatibility list]. Let us pretend that you want to go from vSphere 6.5 to 6.7 without any downtime. Hmm... yes, you are asking for a pony, and thanks to VMware you have it too.
I always say that the first point of contact should be upgraded first, because it will be backward compatible with what lies beneath. Anyway, assuming that you have made sure the rest of your stuff, like the versions of your NSX and vRA, is all taken care of, you then have to upgrade your
vCenter > ESXi > VM tools > VM hardware.
Get a new vCenter 6.7 appliance ready.
Create HLM between the old and the new vCenter.
Decommission the old one.
Yes, that is it. If you are not satisfied with this high-level plan, there are so many bloggers wanting to be on the yearly top 100 virtualization blog list (and it's awesome) by http://vsphere-land.com (http://vsphere-land.com/news/top-vblog-2017-full-results.html), and they have detailed posts on how to do Hybrid Linked Mode. It is easy and you don't have to pull out any hair.
I still recommend the old way of having a plan B as a backup: yes, take a backup of your old vCenter before you get on with this plan/task.
What if you have 2 VCSAs already in linked mode and you want to retain their networking information?
Let us say that you have vcsa1 linked with vcsa2.

  1. Decommission vcsa2 from linked mode with vcsa1.
  2. Shut down vcsa2 and disable its network adapter.
  3. Get your new VCSA of the newer desired version, assign it the vcsa2 networking details (hostname, IP, ...) and join it to linked mode with vcsa1.
  4. Make sure all is well and they are in sync.
  5. Decommission vcsa1.
  6. Deploy the newer-versioned VCSA and assign the vcsa1 networking details to it.
Congratulate yourself.

Saturday, May 12, 2018

getting it on with docker

So I use CentOS for most of my experiments and lab work at home. I just have this love-hate relationship with it that I simply can't explain or resist. I needed to get docker, docker swarm and docker-compose working on CentOS.
Optionally, please set up VMware tools on your CentOS 7; I recommend it.
So here is how I set it up.


1  yum install epel-release # install the EPEL yum repository
2  yum install docker
3  sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose # download a docker-compose release that works with this docker version
lines
1. install the EPEL repository
2. install docker, which also gets you docker swarm
3. installs a docker-compose version that works with the docker version installed at line 2.
Get Visual Studio Code if you do not already have it:

sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'
yum check-update
sudo yum install code
You will notice that you cannot just run docker-compose, because you will get a permission denied error. So let us do this final bit.
chown root /usr/local/bin/docker-compose  # give root ownership of the binary
chmod 777 /usr/local/bin/docker-compose   # make it executable for everyone (755 would be sufficient)
We are not done yet.

1  systemctl enable docker
2  service docker start
line 1. enable the docker service to start at boot up.
line 2. start the docker service manually for now.

Wednesday, May 2, 2018

Get that damn VMware tools working on centos 7

So I have realized that even though VirtualBox is better suited for devops activities like Vagrant, Docker, containers, Kubernetes etc., I still somehow like VMware Workstation; maybe because I just like VMware, since I have been doing VMware stuff for a very long time. I just like the grouping of VMs, the folders, the tabs and more.
I keep hitting a small hurdle, though, and that is getting VMware tools installed. I am currently using CentOS for Jenkins, Docker, Kubernetes, Vagrant, OpenStack and more. So here is just a reminder for the future me to throw these lines at the terminal (preferably as the root user).

UNAME=$(uname -r)
yum install gcc gcc-c++ make kernel-headers kernel-devel-${UNAME%.*} -y  # gcc-c++ is the CentOS package that provides g++

  1. Then you can just open the mounted ISO in a terminal.
  2. Copy the archive to a different system folder.
  3. Untar it.
  4. cd into the unarchived folder.
  5. Run the perl installer of VMware tools.
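In practice the five steps above look roughly like this; the mount point and the exact tarball name are assumptions and depend on your Workstation version and desktop environment:

cp /run/media/$USER/"VMware Tools"/VMwareTools-*.tar.gz /tmp/   # 1-2: grab the archive from the mounted ISO
cd /tmp && tar -xzf VMwareTools-*.tar.gz                        # 3: untar it
cd vmware-tools-distrib                                         # 4: cd into the unarchived folder
sudo ./vmware-install.pl                                        # 5: run the perl installer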

Thursday, April 26, 2018

Get set powercli 10

So PowerCLI 10 is out, and PowerCLI 6.5.3+ can only be obtained via the PowerShell Gallery. Here is what you need to do. I assume that you are one of those who are using Windows 10.

  1. Close all command-line windows: cmd, PowerShell, PowerCLI, etc.
  2. Run PowerShell (not ISE, just PowerShell) as administrator.
  3. Run the following command in your PowerShell:
    Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
    Accept or click yes to all and close the PowerShell window.
  4. Now do steps 1 and 2 again.
  5. Run the following to get PowerCLI 10 installed. Just accept whatever prompt it gives, i.e. choose Y for yes and A for all.
    Install-Module -Name VMware.PowerCLI
  6. Now run the following.
    Import-Module VMware.VimAutomation.Core
  7. The following will opt you out of the customer experience improvement program.
    Set-PowerCLIConfiguration -Scope AllUsers -ParticipateInCEIP $false
    If you wish to opt in, change the $false to $true. Here I am using the AllUsers scope to make sure all users get this setting.
  8. Now let us set PowerCLI to ignore invalid/unsigned certificate warnings.
    Set-PowerCLIConfiguration -InvalidCertificateAction ignore -Scope AllUsers -Confirm:$false
Now you are good to use PowerCLI as you are used to.

Saturday, April 7, 2018

Ansible or Chef ? and Why?

First of all, why do you need anything like Ansible/Chef/Puppet/Salt, which can mainly be classified as configuration management and automation tools?
They are today's devops needs of an IT firm: you want to deploy, configure or manage the configuration of many machines across different platforms (local or cloud), so you need one.
So you have 2 types of CMT (configuration management tools).

ANSIBLE
========

  1. You want/need it to be agentless
    If your targets are mostly devices rather than operating systems or applications, then you need this. If you are managing hardware routers, switches or other devices where you can have an SSH connection but cannot install any specific package on them, agentless is the way to go: you can't install your own package or agent on a Cisco Nexus switch or any other vendor's switch, since vendors usually keep a strict lock on what can be installed on these devices for security reasons. Ansible is best known for network automation for this very reason.
  2. Most of your infrastructure is open-source/Linux based.
    All Ansible requires is SSH, and Linux systems are mainly managed via SSH.
  3. You like bash or Python
    Ansible uses Python, and Python 2.x is present by default on your GNU/Linux machines.
  4. You are adventurous and do not mind coming up with your own modules (writing your own playbooks)

CHEF
=====
  1. The need for an agent on the target machine/component isn't a bother.
  2. You want to manage Windows, Linux and Mac seamlessly.
  3. You like/know Ruby more than you like bash/shell/Python.
  4. You need a more mature product and better documentation.
  5. Larger community (which translates to more ready-made modules being available for common IT configuration management).
Currently I am fiddling with Chef and I am digging it.