Saturday, April 7, 2018

Ansible or Chef? And Why?

First of all, why do you need anything like ansible/chef/puppet/salt, which can broadly be classified as configuration management and automation tools?
These are today's devops needs of an IT firm. If you want to deploy, configure, or manage the configuration of many machines across different platforms (local or cloud), then you need one.
So you broadly have two camps of CMTs (configuration management tools): agentless ones like ansible, and agent-based ones like chef or puppet. You would pick ansible if:


  1. You want/need it to be agentless
    So if your targets are mostly devices, and not operating systems or applications, then you need this. Say you are managing hardware routers, switches, or other devices where you can open an SSH connection but cannot install any specific package on them to manage them. You can't install your own package or an agent on a cisco nexus switch, or on any other vendor's switch; the vendors usually keep a strict lock on what can be installed on these devices for security reasons. Ansible is best known for network automation for this very reason.
  2. Most of your infrastructure is mainly opensource/linux based.
    All ansible requires is SSH, and linux systems are mostly managed via ssh anyway.
  3. You like bash or python
    Ansible uses python and python 2.x is present by default on your gnu/linux machines.
  4. You are adventurous and do not mind coming up with your own modules (writing your own playbooks)

On the other hand, you would pick chef if:

  1. The need for an agent to be present on the target machine/component isn't a bother.
  2. You want to manage windows, linux and mac seamlessly
  3. You like/know ruby more than you know bash/shell/python
  4. You need a more mature product and better documentation
  5. You want a larger community (which translates to more ready-made modules available for common IT configuration management)
Currently I am fiddling with chef and I am digging it.
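
To make the agentless point concrete, here is what a minimal ansible playbook looks like; the inventory group name (webservers) and the package are just placeholders I picked for illustration. Ansible pushes this over plain SSH, so nothing has to be installed on the target:

```yaml
# site.yml -- run with: ansible-playbook -i hosts site.yml
- hosts: webservers     # a group defined in your 'hosts' inventory file
  become: yes           # escalate to root with sudo on the target
  tasks:
    - name: make sure ntp is installed
      package:
        name: ntp
        state: present
```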

Wednesday, April 4, 2018

Deploying instances on gcp (google cloud platform) via powershell

This is that time. That time where I put my hands inside the gcp cookie jar and try to see what I find.
Make sure you have done this first though.
You also have to log into your gcp console and enable the google compute API. I think it is a nice touch by gcp: you decide which of your services should have API access and which shouldn't. Maybe you can have some people or applications with API access and some without; this is how you get that configured. More on that later. Maybe... Below is a screen grab of my API board.

And ya... wait for a while after enabling it btw, otherwise you will get this:

Add-GceInstance : Google.Apis.Requests.RequestError
Access Not Configured. Compute Engine API has not been used in project 1234567 before or it is disabled. Enable it by visiting then retry. If you enabled this API 
recently, wait a few minutes for the action to propagate to our systems and retry. [403]
Errors [
 Message[Access Not Configured. Compute Engine API has not been used in project 1234567 before or it is disabled. Enable it by visiting then retry. If you enabled this API 
recently, wait a few minutes for the action to propagate to our systems and retry.] Location[ - ] Reason[accessNotConfigured] 
At line:16 char:20
+ ... ta_Config | Add-GceInstance -Project $project -Zone $zone -Region $re ...
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Add-GceInstance], GoogleApiException
    + FullyQualifiedErrorId : Google.GoogleApiException,Google.PowerShell.ComputeEngine.AddGceInstanceCmdlet
Now let us first list out the images that are available there. We want to deploy a tiny one since I can't afford big cloud bills :|.

PS C:\WINDOWS\system32> Get-GceImage | select Family

Let us select the one which is highlighted.
I also needed to choose a zone and a region associated with that. Check this out.

$vm_name = 'ubuntuDummy' # name of the instance
$machine_type = 'f1-micro' # machine type of the instance; the smallest one, since we want a tiny VM
$project = 'dummies' # name of my project which i created from the gcp console

# go here and choose
$zone = 'us-east1-b' 
$region = 'us-east1'

# choosing an image from the list of google images
$myImage = Get-GceImage | where Family -Match 'ubuntu-1404-lts'
# create a configuration for our instance
$my_insta_Config = New-GceInstanceConfig $vm_name -MachineType $machine_type -DiskImage $myImage -Region $region

# deploy our instance
$my_insta_Config | Add-GceInstance -Project $project -Zone $zone -Region $region
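
Once the deploy goes through, you can confirm it and tear it down from the same shell. This is a sketch using the Get-GceInstance and Remove-GceInstance cmdlets from the same google cloud powershell module, reusing the variables above:

```powershell
# confirm the instance came up in our project/zone
Get-GceInstance -Project $project -Zone $zone

# delete it once done playing, so the billing stops
Remove-GceInstance -Project $project -Zone $zone $vm_name
```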

Good luck.


Tuesday, April 3, 2018

Deploying Vagrant VM on AWS

Let us first make sure we have the vagrant aws plugin ready.

vagrant plugin install vagrant-aws
Create a new directory.
Get inside that directory.
Now let us add a dummy box made just for AWS.
Create a Vagrantfile by running vagrant init.

mkdir lab_aws 
cd lab_aws
vagrant box add aws-dummy
vagrant init
Then open the Vagrantfile with any text editor and populate it with the following.

# Install the below plugin with 'vagrant plugin install vagrant-aws'
require 'vagrant-aws'

# VM config
Vagrant.configure('2') do |config|

  # dummy AWS box
  config.vm.box = 'aws-dummy'
  # settings from aws
  config.vm.provider :aws do |aws, override|
    # aws credentials
    aws.access_key_id = 'xxxxxxxxxxxxxxxxxxxx'
    aws.secret_access_key = 'yyyyyyyyyyyyyyy'
    aws.keypair_name = "vagrant" # ssh key pair name
    aws.ami = 'ami-5cd4a126'
    aws.region = 'us-east-1'
    aws.instance_type = 't2.micro'
    aws.security_groups = "vagrant" # enabled ssh ports in/out
    # the below line will help you avoid asking for username and password for smb if you are doing this from windows
    config.vm.synced_folder ".", "/vagrant", disabled: true
    override.ssh.username = 'vagrant'
    override.ssh.private_key_path = '<path>/vagrant.pem'
  end
end


So the above are the entries for your new Vagrantfile.
I chose the region 'us-east-1'.
I also created a security group called "vagrant" and enabled ssh on it. Make sure this security group is created in the region you chose in the Vagrantfile, and also make sure the AMI ID you picked is available in that region. To be safe, just mimic this Vagrantfile in your aws configuration. Also, have your key pair downloaded and saved somewhere, and give that path in the Vagrantfile. Now just do a vagrant up and your VM will be deployed on AWS.
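
So the whole cycle from this directory looks roughly like this (the provider flag tells vagrant to use aws instead of its default virtualbox):

```shell
vagrant up --provider=aws   # boots the EC2 instance described in the Vagrantfile
vagrant ssh                 # SSH into the new instance using the key pair above
vagrant destroy -f          # terminate the instance so you stop paying for it
```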

Monday, April 2, 2018

Setting up chef + vagrant + virtualbox on windows for aws, azure and google cloud

So spinning up VMs on aws, gcp and azure via GUI, vagrant, SDK and their native cli was okay,
but when you want to bring in configuration management and automation tools like chef, it gets a lot more interesting.
Here is what you need:

  1. chef sdk
  2. vagrant
  3. Virtualbox 5.1 or higher

My plan was/is/will be (mostly)---
step 1. deploy workload via GUI on gcp/aws/azure
step 2. deploy workload via SDK on gcp/aws/azure
step 3. deploy workload via vagrant+virtualbox on gcp/aws/azure
step 4. deploy workload via chef --> vagrant+virtualbox on gcp/aws/azure
So now I have to do step 4.
Download and install chef sdk for windows from
Launch your powershell 5.x (or 6 if you are brave) and run

PS C:\WINDOWS\system32> chef -v
Chef Development Kit Version: 2.5.3
chef-client version: 13.8.5
delivery version: master (73ebb72a6c42b3d2ff5370c476be800fee7e5427)
berks version: 6.3.1
kitchen version: 1.20.0
inspec version: 1.51.21

PS C:\WINDOWS\system32> vagrant --version
Vagrant 2.0.3
And my god, it takes soooooo much time on windows just to report the installed sdk info. On any nix system it is just a second.
PS D:\> mkdir chef

    Directory: D:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----       03-04-2018  12.19 AM                chef

PS D:\> cd chef
PS D:\chef> vagrant box add bento/centos-7.2
==> box: Loading metadata for box 'bento/centos-7.2'
    box: URL:
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
the provider you will be working with.

1) parallels
2) virtualbox
3) vmware_desktop

Enter your choice: 2
==> box: Adding box 'bento/centos-7.2' (v2.3.1) for provider: virtualbox
    box: Downloading:
    box: Progress: 100% (Rate: 1902k/s, Estimated time remaining: --:--:--)
==> box: Successfully added box 'bento/centos-7.2' (v2.3.1) for 'virtualbox'!
PS D:\chef>
I created a chef directory and
added a box to my vagrant from the vagrant cloud. bento/centos-7.2 is a box maintained by chef, so I want to play safe and start with that. Since I did not mention the provider, it gave me 3 choices and I chose virtualbox.
So now vagrant has a box added and ready to deploy on virtualbox.
Now run

vagrant init

vagrant up
 and we have our chef centos box ready to play with.
Let us ssh to our new box with

vagrant ssh
Then go to /home/vagrant and install the downloaded chef rpm:
sudo rpm -ivh chefdk*.rpm
If you get permission errors, make sure the rpm is readable (a chmod 644 is enough; it does not need to be 777) and that you are running the install with sudo.
Run the following to verify what you have
# chef verify
# chef-client -v
# chef --version
# sudo yum install -y vim nano
The above will install some text editors which we will need.
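
Now that the box has chef and an editor, a nice first toy is a single-file recipe; chef-apply runs it without needing any chef server. The file path and content below are just placeholders I made up:

```ruby
# hello.rb -- run inside the box with: sudo chef-apply hello.rb
file '/tmp/hello.txt' do
  content "configured by chef\n"
  mode '0644'
  action :create
end
```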

Setting up your machine for GCP (google cloud platform) with powershell, python and gcloud cli

So I wanted to set up google cloud platform (gcp) access on my desktop. I mean to say I wanted to be able to connect to gcp via powershell, python and the gcloud cli. Apparently the competition between google, amazon and microsoft is benefiting us: they have made it easier than ever. I was however disappointed that you cannot get this to work in a virtual environment if you are on windows. Sad face :(.
It is fairly straight forward.

  1. launch your command prompt as administrator
  2. Download and install gcloud SDK from here.
  3. Once the installation is complete, run gcloud init (it was preselected for me) to set up the credentials. 
  4. Now let us say yes, and a browser opens up asking you for confirmation. 
    I just clicked on allow.
  5. I had a dummy project created already. It also gives you an option to create a project if you don't have one yet. 
  6. I chose 1, and now I can interact with that project via the gcloud sdk with the weapon of my choice: python, powershell or the gcloud cli.
Now I have my playground ready. So let me play. I will try to update here about my endeavors as much as possible.
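
A couple of gcloud commands are handy to confirm the init actually worked before playing further:

```shell
gcloud config list            # shows the account and project gcloud init saved
gcloud projects list          # lists the projects your account can see
gcloud compute zones list     # zones available once the compute API is enabled
```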

Sunday, April 1, 2018

Deploying an ubuntu linux VM on Azure via the azure cloud shell (cli or powershell)

So this is quite interesting. Unlike my previous posts, if you don't want to set up your machine for azure by downloading a python or powershell sdk or any cli locally, there is a quicker and better way.
Here is exactly what it takes to deploy a VM on azure.

  1. Create an azure account if you haven't already. With the pay-as-you-go model you pay for what you use, so no worries. Use it like a lab, where you delete stuff once you are done.
  2. Launch the online azure cloud shell. You can use either powershell or bash, whichever you love.
  3. So first I want to create a resource group.
az group create -n linuxVms -l westus
      This btw is azure cli, not azure powershell, syntax. Here I am creating a group called linuxVms and its location is the west us datacenter of azure.
  4. Now let us deploy a linux vm from an ubuntu image.
az vm create -g linuxVms -n dummyUbuntu -i UbuntuLTS --generate-ssh-keys
      So we are creating a VM where the
      image is UbuntuLTS,
      VM name is dummyUbuntu, and
      group name is linuxVms.
You are done! :)
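
And in the lab spirit from step 1, one command deletes the resource group along with the VM and everything else created inside it:

```shell
az group delete -n linuxVms --yes --no-wait
```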