
Thursday, July 20, 2023

Windows Bluetooth crackling sound [cause, solution]

So I had/have this issue with Windows where Bluetooth speakers start producing crackling/cracking sounds for no reason. Scraping the internet yielded no solution. The Windows troubleshooter has solved no problem in all these years, and that stays the same today. Here are the details:

Type of PC: laptop

Year: 2023

Ram: 32GB

GPU: RTX 4080 (laptop)

OEM: Asus

CPU: AMD

Wireless I/O devices:

  1. Keychron K2 wireless keyboard (Bluetooth)
  2. Logitech M720 mouse (Bluetooth)
  3. Logitech G Pro mouse (2.4 GHz unifying Logi dongle, not Bluetooth)
  4. Harman Kardon Aura Studio 3 (Bluetooth)

What didn't work?

  1. Windows inbuilt troubleshooter
  2. driver update
  3. driver reinstall
  4. reinstall OS
  5. unpair all Bluetooth devices, uninstall, pair them again, update their drivers (via Windows driver update)
  6. external USB Bluetooth 5.0 dongle
What worked?
  1. Apart from the Bluetooth speaker and Bluetooth keyboard, don't use any more Bluetooth devices
Cause:
Theoretically Bluetooth supports 6 to 7 devices, but in practice, due to bandwidth throttling by OEMs and the bandwidth limitations of BT technology, if BT speakers are in use, then use just 1 more BT device and no more.
So currently the 2 Logitech mice I have both come with their proprietary dongle to connect with; that takes up 2 more USB ports on the laptop (so I bought a docking station), but at least it is headache free.
Suggestions to OEMs:
1. Make multi-device Bluetooth keyboards that can be paired with a mouse, so whenever I switch the connection between devices, the mouse and keyboard switch together.
2. Make PCs and laptops support multiple Bluetooth interfaces, just like multiple NIC/PCI/PCIe cards, so we can connect our devices to any of the BT interfaces in the OS.

Friday, May 26, 2023

PostgreSQL on cloud: self-hosted or managed? If self-hosted, monolithic or k8s? Which k8s operator?

So I have been planning to move my Django sites to Kubernetes to get that automated horizontal and vertical scaling, failover, high availability, fault tolerance, and other good stuff. Databases on Kubernetes have been a hugely debated topic: to do or not to do. A year back, I would not have recommended it; operators were just kicking in. Now many seem to have matured, though everyone has their own quirks and problems.

Why not hosted?

There are many DBaaS offerings. All cloud providers provide one, and some, such as ElephantSQL, provide only DBaaS. What I am afraid of is freedom, or the loss of it. We all know what happened to parler.com: they got booted off their cloud account; data gone, applications gone. One day their account just got locked out. And all these DBaaS providers do not offer replication or backup to a target outside their cloud (that is, a standby node on a competing cloud provider, your own on-prem server, an object store from some other provider, a NAS server at home, etc.). Sure, they provide headache-free, automated, regular backups, PITR (point-in-time recovery), base backups, incremental backups, differential backups, standby nodes, etc. You don't have to have a team of DB engineers who maintain it and do all those day-2 operations. You remember what happened to GitLab, don't you? An engineer accidentally deleted their entire primary PostgreSQL database. Yes, they did, and they found out that the backups they had were no good. So you are outsourcing all your headache to someone else. This is good, but not enough.

I want to be able to have my backup, replication, or DR solution on some other site, some other cloud provider, or a target of my choice. 
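As a sketch of what that freedom looks like: a cron-able script that dumps the database and ships the archive to a target of your choice, outside the primary cloud. The database name `appdb` and the rclone remote `offsite` are assumptions; DRY_RUN=1 only prints the commands instead of running them.

```shell
#!/bin/sh
# Off-site PostgreSQL backup sketch: dump locally, copy to an external
# target (another provider's object store, an on-prem NAS, etc.).
# DRY_RUN=1 (the default here) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

STAMP=$(date +%Y-%m-%d)
ARCHIVE="/tmp/appdb-$STAMP.dump"

run pg_dump --format=custom --file="$ARCHIVE" appdb
run rclone copy "$ARCHIVE" "offsite:pg-backups/$STAMP/"
```

Set DRY_RUN=0 once the remote is configured; the same pattern extends to pg_basebackup for physical backups.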

Self hosted: Monolithic or k8s

If you go monolithic, then I highly recommend Citus. It is an extension that allows you to scale and shard at will, and it is a clustering solution, so you get HA. There are of course the OG solutions like Percona, Patroni, and EDB; many paid, free, and open-source options. You do, however, have to learn these implementations, do a POC in house, verify all DR options, and test HA and FT before going with any of them.

Self hosted: k8s operator

I considered and evaluated

CrunchyData

Zalando

KubeDB

StackGres

Percona

but went with cnpg, aka the CloudNativePG operator. Why? Because of the first two words: cloud native. Everyone else is trying to adapt what has been tested in the monolithic world to the Kubernetes world to get that autoscaling, built-in HA, FT, and auto-recovery from k8s; they are adapted to cloud/k8s, whereas cnpg is built for k8s. Also, their documentation is a lot better than any of the above, and up to date, and their example deployment manifests for the different options just work. 
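As a sketch, a minimal CNPG Cluster manifest with a backup target on an S3-compatible store of your choice; the cluster name, sizes, bucket path, endpoint, and credentials secret are all assumptions for illustration:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-main
spec:
  instances: 3          # one primary, two standbys
  storage:
    size: 10Gi
  backup:
    barmanObjectStore:
      # any S3-compatible endpoint works, so the target need not be
      # on the same provider as the cluster
      destinationPath: s3://pg-backups/pg-main
      endpointURL: https://s3.other-provider.example
      s3Credentials:
        accessKeyId:
          name: backup-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: SECRET_ACCESS_KEY
```

The point of the barmanObjectStore block is exactly the freedom argued for earlier: the backup target is any S3-compatible endpoint, not the hosting cloud's.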

So wish me luck. I hope all my django apps stay afloat and sail smoothly because their DB is sailing smoothly.

Tuesday, February 21, 2023

Hacker-proof secret injection in Kubernetes, where none but your apps can use the variables & secrets

 Okay, okay. It is not exactly hacker*proof* but definitely hacker resistant (you know, like water resistant but not waterproof). We usually inject secrets into pods via 

  1. mounting, 
  2. setting environment variables
  3. using a helm values file with secrets and env vars
  4. using CICD to inject during deployment
  5. the most tedious way: reading from vault, but this requires a vault role_id and secret_id to be set in the pod as environment variables or some other way. Circular problem.
We stumbled upon a problem which actually ended up as the solution to our long-term question: how to hide secrets from a hacker or a rogue app which gets a shell on the pod. If I am a hacker and I have shell access to the pod, then I can see all secrets and environment variables. Can I set them up so that only the app can see them and nobody else can?
example:
  1. open up 2 shells
  2. In shell 1, open your bashrc file, set some exports or env vars, and save.
  3. Do `. /path/to/bashrc` (notice the dot and the space after it)
  4. In shell 1 you can access your new environment variable
  5. In shell 2 you can't and won't
example:
  1. open up 2 shells
  2. In shell 1, set an environment variable in the shell (`export xyz=abc`) and see if you can access it. You do.
  3. In shell 2 you can't access the same.
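The two-shell experiments above can be reproduced as a script; `env -i` stands in for the unrelated second shell (the file path and variable name are just illustrative):

```shell
#!/bin/sh
# A variable sourced into one shell is invisible to an unrelated shell
# that never sourced the file.
echo 'export MYVAR=abc' > /tmp/demo_rc

. /tmp/demo_rc                      # "shell 1" sources the file
echo "shell 1 sees: $MYVAR"         # prints: shell 1 sees: abc

# "shell 2": a clean environment that never sourced the file
env -i /bin/sh -c 'echo "shell 2 sees: ${MYVAR:-<nothing>}"'
```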
Solution based on the above:
  1. Mount secrets via vault or some other source to some location in the pod, e.g. /path/to/my/mount/vaultsecrets
  2. In the entrypoint.sh file for your app, somewhere before you start your app, add `. /path/to/my/mount/vaultsecrets`
Now when your pod, on first boot, runs entrypoint.sh, it sources all vars from your mount point and then starts your app in that same shell, which means only this shell (and the app it launches) can access these environment variables, not the entire OS.
I had this secret injection working with a helm YAML file, and I was itching for a more secure option to ensure nobody sees the secrets even if they connect to the pod. When we migrated from GitLab to GitHub and GitHub Actions, my associate opted into this sidecar injection along with the helm YAML we already had. This unintended parallel secret injection via sidecar came as a blessing.
Gotcha: so a hacker can literally run the command in point 2 manually and get the secrets?! Yes, but you can also add another step after point 2: unmount that path. So now you mounted the secrets, sourced them into the particular shell of the app, and then unmounted them. 
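A minimal sketch of such an entrypoint.sh, with the mount simulated by a temp directory; the path and variable names are assumptions, and in a real pod you would umount the path rather than rm it:

```shell
#!/bin/sh
# Entrypoint sketch: source mounted secrets into this shell, remove the
# mount, then launch the app from the same shell so only it (and its
# children) inherit the variables.
SECRETS_DIR=/tmp/vaultsecrets
mkdir -p "$SECRETS_DIR"
echo 'export DB_PASSWORD=s3cr3t' > "$SECRETS_DIR/env"   # simulated mount

. "$SECRETS_DIR/env"     # step 2: source the secrets into this shell
rm -rf "$SECRETS_DIR"    # gotcha fix: in a real pod, umount the path here

# the app is launched from this same shell, so it sees the variables
sh -c 'echo "app sees DB_PASSWORD: ${DB_PASSWORD:+yes}"'   # prints: app sees DB_PASSWORD: yes
```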

Sunday, February 12, 2023

VMs (Virtual Machines) on k8s with KubeVirt versus VMs on OpenStack or hypervisors

Virtual Machines (VMs) have been a popular way of deploying and running applications for many years. With the advent of cloud computing and the need for scalable, highly available infrastructure, VMs have found new homes in platforms like OpenStack and Hypervisors. But with the rise of Kubernetes, VMs on Kubernetes with KubeVirt have become increasingly popular, offering several advantages over traditional VMs on Hypervisors and OpenStack.

  1. Improved Resource Management: Kubernetes, with its powerful scheduler, ensures that VMs are efficiently deployed and resourced according to the requirements of the applications they host. This results in a more optimized and cost-effective deployment, as VMs are only given the resources they actually need, instead of being over-provisioned.

  2. Enhanced Networking: KubeVirt integrates with the Kubernetes networking model, providing a powerful and flexible way to manage network connections between VMs and other components within a cluster. This allows for easy scaling and migration of VMs without having to worry about network configurations.

  3. Improved Security: VMs on Kubernetes with KubeVirt can leverage Kubernetes security features, such as network segmentation, secrets management, and pod security policies, to provide a secure and controlled environment for deploying applications.

  4. Easier Migration: One of the biggest benefits of VMs on Kubernetes with KubeVirt is that they can be easily migrated between clusters and across cloud providers. This makes it easier for organizations to move their applications to new infrastructure as needed, without having to worry about compatibility issues or reconfiguring network connections.

  5. Increased Flexibility: KubeVirt provides a unified way to manage both VMs and containers within a single cluster, giving organizations greater flexibility in choosing the best deployment option for their applications. This allows for a more streamlined and efficient deployment process, as well as the ability to run legacy applications in a modern, scalable infrastructure.

In conclusion, VMs on Kubernetes with KubeVirt offer several key advantages over traditional VMs on Hypervisors and OpenStack, including improved resource management, enhanced networking, improved security, easier migration, and increased flexibility. As organizations look to modernize their infrastructure and move to the cloud, VMs on Kubernetes with KubeVirt are becoming an increasingly popular choice, offering a powerful and scalable platform for deploying applications.

SDLC with k8s

Software Development Life Cycle (SDLC) is a process of designing, developing, and deploying software applications. It involves various phases, including planning, analysis, design, development, testing, and deployment. With the increasing popularity of Kubernetes, organizations are looking for ways to integrate it into their SDLC process. Kubernetes, an open-source container orchestration system, can help simplify the deployment and management of complex applications. In this article, we will discuss the benefits of incorporating Kubernetes into the SDLC process and how it can help improve the overall software development process.

  1. Improved Deployment Process: Kubernetes makes it easier to deploy and manage complex applications. With Kubernetes, you can define and automate your deployment process, making it easier to deploy applications consistently and repeatedly. This helps to minimize downtime and reduces the risk of human error. Additionally, Kubernetes provides features such as rolling updates, which allow you to update your application without affecting the availability of your services.

  2. Faster Testing: With Kubernetes, you can easily spin up test environments in minutes, allowing you to test your application in a variety of scenarios. This speeds up the testing process and helps to catch bugs and issues early on in the development process. Additionally, Kubernetes provides features such as automatic rollbacks, which allow you to revert to a previous version of your application in case of a failure.

  3. Improved Collaboration: Kubernetes makes it easier for development teams to work together by providing a unified platform for deploying and managing applications. This helps to reduce the risk of conflicting changes and improves collaboration between developers, testers, and operations teams. Additionally, Kubernetes provides a centralized management system for applications, making it easier for teams to collaborate and manage their applications.

  4. Scalability and Flexibility: Kubernetes provides a scalable and flexible platform for deploying and managing applications. With Kubernetes, you can easily scale your applications up or down based on demand, making it easier to manage the resources needed for your applications. Additionally, Kubernetes provides features such as automatic scaling, which allows your applications to automatically scale based on usage patterns.

  5. Cost Savings: By integrating Kubernetes into the SDLC process, organizations can reduce the time and cost associated with deploying and managing applications. With Kubernetes, you can automate the deployment and management process, reducing the need for manual intervention. Additionally, Kubernetes provides a unified platform for deploying and managing applications, reducing the need for multiple tools and systems.

In conclusion, incorporating Kubernetes into the SDLC process provides numerous benefits, including improved deployment processes, faster testing, improved collaboration, scalability and flexibility, and cost savings. By integrating Kubernetes into the SDLC process, organizations can improve the overall software development process and deliver better quality applications faster.

Friday, February 10, 2023

Static IP address for a VM on k8s via KubeVirt

KubeVirt is an open-source project that allows you to run virtual machines (VMs) on top of Kubernetes. If you want to assign a static IP to a VM running in KubeVirt, you will need to configure the network settings for the VM.

You might be familiar with the metallb loadbalancer on k8s. You create an IP pool, or multiple IP pools, and when a service requests an IP, metallb auto-assigns one.

Similarly, you create an IP pool here, and when you create VMs, they will automatically get an IP with KubeVirt.
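For reference, a metallb pool of the kind described above looks like this (the pool name and address range are assumptions):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: my-pool
  namespace: metallb-system
spec:
  # services of type LoadBalancer get an address from this range
  addresses:
  - 192.168.1.240-192.168.1.250
```

The NAD below plays the analogous role for VM interfaces: a pool the IPAM plugin hands addresses out of.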

Here are the steps to assign a static IP to a VM in KubeVirt:

  1. Create a Network Attachment Definition (NAD) that defines the network and the IP range from which the VM will get its address. For example: 
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-static-ip-network
spec:
  config: '{
      "cniVersion": "0.3.0",
      "name": "my-static-ip-network",
      "type": "ipvlan",
      "master": "eth0",
      "ipam": {
          "type": "host-local",
          "ranges": [
              [
                  {
                      "subnet": "10.244.0.0/24",
                      "gateway": "10.244.0.1"
                  }
              ]
          ],
          "routes": [
              { "dst": "0.0.0.0/0" }
          ]
      }
  }'

  2. Apply the NAD to your Kubernetes cluster using kubectl apply:
kubectl apply -f my-static-ip-network.yaml 
  3. Update your VM definition to use the NAD. This is done by adding an interface and a matching network to the spec section of your VM definition. For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: my-vm
    spec:
      domain:
        resources:
          requests:
            memory: 64M
        devices:
          interfaces:
          - name: my-static-ip-network
            bridge: {}
            model: virtio
            macAddress: "52:54:00:12:34:56"
      networks:
      - name: my-static-ip-network
        multus:
          networkName: my-static-ip-network
  4. Apply the updated VM definition to your Kubernetes cluster using kubectl apply: 
kubectl apply -f my-vm.yaml
Once the VM is started, it should be assigned the static IP address specified in the NAD. You can verify this by checking the IP address of the VM from within the VM or by using kubectl get pod to inspect the network configuration of the pod that represents the VM in Kubernetes.

Sunday, January 22, 2023

Importance of lift and shift architecture, and migration to Azure in a week or less!

 The architecture for the hardware layer, OS layer, orchestration layer, application layer, and auto DevOps was all in my bucket, along with a lot more, for a project. We wanted to have a system in house, a private cloud; a private cloud for a hardware company. Yes, you heard it right. This is also why the core of what we wanted was not available from any public cloud provider. They in turn depended on us for their CPUs, GPUs, and all the BIOS, drivers, firmware, and frameworks associated with them to work perfectly in their cloud for their customers. Then why did we move to the cloud, and what part?!

I have been in IT since 2005, and I am from that generation which went gaga over Intel Celeron, floppy drives, etc. Building computers, workstations, storage servers, NAS, and compute servers was a side hustle even during the call-center days. So when people were moving from bare metal to hypervisors (the VMware boom), many of us realized how important it is to have a decoupled, well-connected lift and shift architecture, where you can lift your entire stack and shift it to a new platform, whenever and wherever you want. This requires a couple of things which seem trivial, but they are a sweet trap that many people fall into.

  1. Choose technologies/products which are independent, not vendor specific
    1. ex: using Jenkins over GitHub Actions or GitLab Enterprise, because Jenkins works with anyone.
  2. prefer opensource over closed source
    1. ex: Jenkins over GitHub Actions (FYI, we are using GitHub Actions btw; I know I am not practising what I preach, but you will see why later)
  3. Matured over new kid on the block
    1. ex: Jenkins or Jenkins X over Bamboo or CircleCI. [I keep saying Jenkins not because I have married it, but because most people can relate to it]
  4. Don't chase the shiny.
    Some have this habit of chasing the latest, greatest, new shiny thing. Maybe it is the Gen Z fast-fashion habit. It does not matter whether the product or solution is dinosaur old or born yesterday; if it solves most of your problems, then you choose it. If there is a tie or a dilemma, then choose the oldest. The older and more mature a product is, the bigger its user base, the more issues reported, and the more solutions offered per issue. So almost all the time, your issues with it will be old ones, solved by somebody else years ago.
  5. One which can do more over more which do one.
    When you are choosing products, choose those which meet most of your requirements rather than one product per requirement.
    ex: You want a NoSQL and a SQL database, so you go with PostgreSQL and MongoDB. I would go with just PostgreSQL, since it offers SQL and NoSQL, unless there is a specific requirement which is only met by MongoDB and its document structure.
  6. Automate everything except the 1st one
    I had many k8s clusters to deploy, manage, and monitor. So I did the 1st one manually, automated everything, destroyed it, and rebuilt it with the automation. When I was happy with the results, I used the same automation for all k8s clusters. 
  7. Backup and restore
    Automate the backup of your databases; you should be able to restore to 15-30 days prior to a corruption. You should only attend to it if it is not working, and never attend to it, or even know it exists, if it is working. 
  8. DR and SR
    For our use case this was not an issue, but maybe for you it will be. All your backups should ideally be on a different network, different site, different platform. If your DB is in the Azure CA region, then your backups should be in the AWS Alaska region. Your standby site should be in a different location, a different state, and if possible a different timezone. This is from my experience with the many datacenters we built at EMC for some of the world's biggest banks, like State Bank of India and Citibank. Disaster Recovery and Site Recovery are a whole new game; it should never be, and never can be, a one-man show.
The above are the most important, but not all. I was also hosting our own git with GitLab on k8s, which was also our AutoDevOps cluster with GitLab runners.
The rest of the clusters were for different environments and projects. 

AutoDevOps

When you are the one guy who is supposed to install, manage, administer, and monitor multiple k8s clusters, and who is responsible for hosting the codebase for the projects, then you had better also have automated DevOps just like the other pieces, or else you will have insomnia.
  1. Implement a git plan (in our case it was gitflow)
  2. Production branches are always protected
  3. Develop branches are always protected
  4. Nobody can merge directly to the develop branch
  5. Anybody can open a pull request against the develop branch
  6. Nobody merges to develop without a peer-reviewed PR
  7. Depending on which branch the code is pushed to, it gets deployed only to that environment: code merged to the develop branch gets deployed to the develop environment, the main branch to production, etc.
  8. All repos go through SAST, and reports are generated and enforced
  9. No secrets in the code; avoid even putting them in the repo's settings. Let them be injected during CICD.
You will add more to this list as you go along but these should be a must.
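Rule 7 above (branch maps to environment) can be sketched in GitLab CI terms; the job names, chart path, and values files are assumptions:

```yaml
# .gitlab-ci.yml sketch: each branch deploys only to its own environment
stages:
  - deploy

deploy-develop:
  stage: deploy
  environment: develop
  only:
    - develop
  script:
    - helm upgrade --install myapp ./chart -f values-develop.yaml

deploy-production:
  stage: deploy
  environment: production
  only:
    - main
  script:
    - helm upgrade --install myapp ./chart -f values-production.yaml
```

The same shape carries over to GitHub Actions with an `on: push: branches:` trigger per workflow.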
So now we come to the part where we make the applications 'lift and shift' compliant.
  1. 1 application per repo
  2. All applications get dockerized during CICD
  3. Docker images get pushed to a remote registry; you can use a separate namespace per environment. ex: docker.io/myproject/develop/myapp:latest, docker.io/myproject/production/myapp:latest etc.,
  4. All apps get deployed via helm chart and variables injected with values.yaml during cicd.
  5. Always be stateless with your containerized applications unless you really can't, like with databases.
  6. Have Prometheus+Grafana alerts set for each of your apps, for when one goes down or becomes unavailable, linked to your email or office chat application.
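Points 2 and 3 above can be sketched as a small CI step that derives the registry namespace from the branch; the registry, project, and app names are assumptions, and the docker commands are left as echoed dry-run lines:

```shell
#!/bin/sh
# Derive one image path per environment from the branch name.
BRANCH=${CI_COMMIT_BRANCH:-develop}
case "$BRANCH" in
  main|master) ENV=production ;;
  *)           ENV=develop ;;
esac
IMAGE="docker.io/myproject/$ENV/myapp:latest"

# dry run: a real pipeline would execute these instead of echoing them
echo "docker build -t $IMAGE ."
echo "docker push $IMAGE"
```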
The above makes your applications 'lift and shift' ready. Now we want to make what lies underneath, k8s, lift and shift ready too.
Just as I had a gitlab-ci.yaml for each application's AutoDevOps, you now want a gitlab-ci.yaml (or whatever you are using) for your k8s cluster, which will deploy all of the below, in order, with 1 click of one CICD pipeline. I suggest you always have static reserved IP addresses available for your crucial apps, like the nginx ingress controller.
  1. It first deploys storage; I recommend NFS via the open-source community helm chart, most reliable and easy
  2. deploys the nginx ingress controller
  3. deploys an on-prem LB like metallb with an IP pool
  4. deploys databases and backend applications like rabbitmq, vault etc.,
  5. deploys all the middleware and app servers like django, flask, or node etc.,
  6. deploys all the front end apps in series
  7. deploys the ingresses for each of the apps
  8. deploys your monitoring solutions (prometheus, loki, grafana etc.,)
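The deployment order above could look like the following sketch. The chart names are the usual community charts and are assumptions about your setup; HELM defaults to an echo dry run here, so set HELM=helm against a real cluster to actually install.

```shell
#!/bin/sh
# One-click bootstrap sketch, in the order listed above.
set -e
HELM="${HELM:-echo helm}"   # dry run by default

$HELM upgrade --install nfs nfs-provisioner/nfs-subdir-external-provisioner  # 1. storage
$HELM upgrade --install ingress ingress-nginx/ingress-nginx                  # 2. ingress controller
$HELM upgrade --install metallb metallb/metallb                              # 3. on-prem LB + IP pool
$HELM upgrade --install rabbitmq bitnami/rabbitmq                            # 4. backend services
$HELM upgrade --install monitoring prometheus-community/kube-prometheus-stack # 8. monitoring
```

Middleware, front ends, and ingresses (steps 5-7) slot in between as further `$HELM` or `kubectl apply` lines, in the same file.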
If you are wondering what I used for on-prem k8s installation: for the production cluster it was rke2 with kube-vip, with the cluster master nodes offering HA (high availability) and FT (fault tolerance); for all other clusters it was rke1. Now let us make this k8s layer too lift and shift ready.
For rke it is easy: if you have the cluster.yaml file with all the configuration ready, then it is a 1-click shell script to deploy or destroy your whole cluster.
rke2 is a bit more work to create that automation script, but it will be the same.
So now you have 
  1. a 1-click script which can stand up a multi-node k8s cluster
  2. your installed clusters mapped to your CICD engine (jenkins, gitlab ci etc.,)
  3. the CICD from the previously defined stage deploying all the cluster's apps: backend, middleware, front end, monitoring tools etc.,
If you do it right, then even if your cluster goes down or gets destroyed in the evening, you will have it redeployed in less than a few hours. Where else have you heard of an entire project or infrastructure which can be destroyed and redeployed with almost full automation in a few hours? Thanks to k8s and containerization, this is possible.

Azure Migration

    Even though we had an on-prem k8s cluster, the problem was the hardware: network and power outages, drives failing, a motherboard frying, or some other failure would cripple us, and I had to be on my toes, my recently joined colleague too. We realized we were spending more on maintaining the infrastructure than on our actual goal, so we decided to move to Azure: instead of on-prem k8s, we would use AKS, the Azure Kubernetes Service. Just to give you an example, our monitoring, logging, and alerting system consisted of

  1. 1 prometheus server per k8s cluster
  2. 1 log-collecting loki deployment per k8s cluster
  3. a thanos metric aggregator which collects data from all prometheus servers and offers it to grafana
  4. 1 grafana front end
  5. 1 dashboard per k8s environment
  6. 1 dashboard per app per cluster
  7. alerts set per app per cluster/environment
After we moved to Azure, Sysdig and a cloud-hosted ELK stack took care of this part too. More free time, more investment in our actual goal.
I will give you the oversimplified version of migration to AKS from on prem k8s.
  1. Move repos from gitlab to github
  2. Convert gitlab ci to github actions. In almost all github actions, the actual action is a shell script, so the wrapper was different but the shell script was a copy-paste in many jobs
  3. So now github actions deploys all your apps in one go (earlier it was done by gitlab ci)
If we discount the delay in getting access and some delays with IT and networking which were not in our hands, then the actual migration happened in one weekday. The best part is that I got access late, which means my colleague, who joined the organization and the project late, was the one who migrated it in a weekday. Maybe someone else would have taken more time, but nonetheless such migrations used to take weeks of planning and a month or more, with many people involved. In the case of virtualization tech it would take months. Since we moved to Azure and had a good enterprise subscription, I was greedy and did not want to maintain our own codebase anymore, especially as we were already paying.
If your entire infrastructure can be moved in a week by someone who has only second-hand knowledge of it, it shows 2 things: how *lift and shift* ready the architecture was, and how good the guy who did it is.
TLDR
Ensure all of your applications are as decoupled, connected, and containerized as possible, and that you have no vendor lock-in products in your architecture.