
Thursday, July 20, 2023

Windows bluetooth crackling sound [cause, solution]

So I had/have this issue with Windows where Bluetooth speakers start producing a crackling/cracking sound for no reason. Scouring the internet yielded no solution. The Windows troubleshooter has not solved a single problem in all these years, and that stays the same today. Here are the details:

Type of PC: laptop

Model year: 2023

RAM: 32GB

GPU: NVIDIA RTX 4080 (laptop)

OEM: Asus

CPU: AMD

Wireless I/O devices:

  1. Keychron K2 wireless keyboard (Bluetooth)
  2. Logitech M720 mouse (Bluetooth)
  3. Logitech G Pro mouse (2.4GHz Logitech Unifying dongle, not Bluetooth)
  4. Harman Kardon Aura Studio 3 (Bluetooth)

What didn't work?

  1. Windows built-in troubleshooter
  2. Driver update
  3. Driver reinstall
  4. Reinstall OS
  5. Unpair all Bluetooth devices, uninstall them, pair them again, update their drivers (via Windows driver update)
  6. External USB Bluetooth 5.0 dongle
What worked?
  1. Apart from the Bluetooth speaker and the Bluetooth keyboard, don't use any more Bluetooth devices
Cause:
In theory Bluetooth supports 6 to 7 simultaneous devices, but in practice, due to bandwidth throttling by OEMs and the bandwidth limitations of the BT technology itself, if BT speakers are in use, connect at most one more BT device.
So currently the two Logitech mice both connect through their proprietary dongles, which takes up two more USB ports on the laptop (so I bought a docking station), but at least it is headache free.
Suggestions to OEMs:
1. Make multi-device Bluetooth keyboards that can be paired with a mouse, so whenever I switch the connection between devices, the mouse and keyboard switch together.
2. Make PCs and laptops support multiple Bluetooth interfaces, just like multiple NIC/PCI/PCIe cards, so we can connect our devices to any of the BT interfaces in the OS.

Friday, May 26, 2023

PostgreSQL on cloud: hosted or self-hosted? If self, then monolithic or k8s? Which k8s operator?

So I have been planning to move my Django sites to Kubernetes to get that automated horizontal and vertical scaling, failover, high availability, fault tolerance and other good stuff. Databases on Kubernetes have been a huge topic of debate, to do or not to do... Yes, a year back I would not have recommended it; operators were just kicking in. Now many seem to have matured, though each has its own quirks and problems.

Why not hosted?

There are many DBaaS offerings. All cloud providers provide one, and some, such as ElephantSQL, provide only DBaaS. What I am afraid of is freedom, or rather the loss of it. We all know what happened to parler.com: they got booted off their cloud account, data gone, applications gone; one day their account just got locked out. And these DBaaS providers do not offer replication or backup to a target outside their own cloud (that is, a standby node on a competing cloud provider, your own on-prem server, an object storage bucket from some other provider, a NAS server at home, etc.). Sure, they provide headache-free, automated, regular backups, PITR (point-in-time recovery), base backups, incremental backups, differential backups, standby nodes and so on, and you don't have to have a team of DB engineers who maintain it all and do these day-2 operations. You remember what happened to GitLab, don't you? An engineer accidentally deleted their entire primary PostgreSQL database. Yes, they did, and then they found out that the backups they had were no good. So you are outsourcing all your headache to someone else. This is good, but not enough.

I want to be able to have my backup, replication or DR solution on some other site: another cloud provider or any target of my choice.

Self-hosted: monolithic or k8s

If you go monolithic then I highly recommend Citus. It is an extension which lets you scale and shard at will, and it is a clustering solution, so you get HA. There are of course the OG solutions like Percona, Patroni, EDB and many other paid, free and open-source options. You do however have to learn all these implementations, do a POC in house, verify all DR options and test HA and FT before going with any of them.
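As a very rough sketch of what the Citus part of such a POC looks like (assuming the Citus packages are already installed on every node; the database, table, column and host names below are placeholders I made up):

```bash
# Hypothetical Citus setup: enable the extension and shard a table.
# mydb, worker-1/2, events and tenant_id are all placeholder names.

# On the coordinator, enable the extension in the target database
# (the extension also has to be created on each worker's database).
psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS citus;"

# Register the worker nodes with the coordinator.
psql -d mydb -c "SELECT citus_add_node('worker-1', 5432);"
psql -d mydb -c "SELECT citus_add_node('worker-2', 5432);"

# Distribute (shard) an existing table on a chosen distribution column.
psql -d mydb -c "SELECT create_distributed_table('events', 'tenant_id');"
```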

Self-hosted: k8s operator

I considered and evaluated:

  1. CrunchyData
  2. Zalando
  3. KubeDB
  4. StackGres
  5. Percona

but went with CNPG, aka the CloudNativePG operator. Why? Because of the first two words: cloud native. Everyone else is trying to adapt what has been tested in the monolithic world to the Kubernetes world to get that autoscaling, built-in HA, FT and auto-recovery from k8s; they are not built for k8s, they are adapted to it. CNPG is built for k8s. Also, their documentation is a lot better than any of the above and kept updated, and their example deployment manifests for the different options just work.
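For a flavour of why I say the manifests just work, here is a minimal sketch of a CNPG cluster; it assumes the operator is already installed per its docs, and the name and sizes are placeholders rather than what I actually run:

```bash
# Minimal CloudNativePG cluster sketch (operator assumed installed; name and size are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: django-db            # placeholder name
spec:
  instances: 3               # one primary plus two standby replicas, managed by the operator
  storage:
    size: 10Gi               # placeholder volume size
EOF
```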

So wish me luck. I hope all my Django apps stay afloat and sail smoothly because their DB is sailing smoothly.

Tuesday, February 21, 2023

Hackerproof injection of secrets in Kubernetes where none but your apps can use variables & secrets

Okay, okay. It is not exactly hacker*proof* but definitely hacker resistant (you know, like water resistant but not water proof). We usually inject secrets into pods via one of:

  1. mounting them as files (see the sketch after this list),
  2. setting environment variables,
  3. using a Helm values file with secrets and env vars,
  4. using CI/CD to inject them during deployment,
  5. the most tedious way: reading from Vault, but this requires a role_id and secret_id for Vault to be set in the pod as environment variables or in some other way. Circular problem.
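For reference, here is a bare-bones sketch of what methods 1 and 2 look like in a pod spec; the pod, secret, key and image names are placeholders I made up for illustration:

```bash
# Sketch of injection methods 1 and 2 (all names and the image are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx                 # placeholder image
      env:
        - name: DB_PASSWORD        # method 2: secret exposed as an environment variable
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: db-password
      volumeMounts:
        - name: secrets            # method 1: secret mounted as files under /etc/app-secrets
          mountPath: /etc/app-secrets
          readOnly: true
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets
EOF
```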
We stumbled upon a problem which actually ended up as a solution for our long-term question: how to hide secrets from a hacker or a rogue app which gets a shell on the pod. If I am a hacker with shell access to the pod, I can see all secrets and environment variables. Can I set them up so that only the app can see them and nobody else can?
Example:
  1. Open up 2 shells.
  2. In shell 1, open your bashrc file, set some export/env vars and save.
  3. Run `. /path/to/bashrc` (notice the dot and the space after it).
  4. In shell 1 you can access your new environment variable.
  5. In shell 2 you can't and won't.
Another example:
  1. Open up 2 shells.
  2. In shell 1, set an environment variable with `export xyz=abc` and see if you can access it. You can.
  3. In shell 2 you can't access the same variable.
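Put concretely, the transcript of that second experiment looks roughly like this:

```bash
# Shell 1: set a variable and read it back.
export xyz=abc
echo "$xyz"        # prints: abc

# Shell 2 (a separate terminal): the variable does not exist here.
echo "$xyz"        # prints an empty line; xyz lives only in shell 1's process
```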
Solution based on the above:
  1. Mount secrets via Vault or some other source to some location in the pod, e.g. /path/to/my/mount/vaultsecrets.
  2. In the entrypoint.sh file for your app, somewhere before you start your app, add `. /path/to/my/mount/vaultsecrets`.
Now, when your pod runs entrypoint.sh on first boot, it sources all vars from your mount point and then starts your app in that same shell, which means only this shell (and the app it launches) can access these environment variables, not the entire OS.
I had this secret injection employed with a Helm YAML file and I was itching for a more secure option to ensure nobody sees the secrets even if they connect to the pod, but when we migrated from GitLab to GitHub and GitHub Actions, my associate opted into this sidecar injection alongside the Helm YAML we already had. It seems this unintended parallel secret injection via sidecar came as a blessing.
Gotcha: so a hacker can literally run the command from point 2 manually and get the secrets?! Yes, but you can also add another step after point 2: unmount that path. So now you mount the secrets, source them into the particular shell of the app, and then unmount them. A sketch of such an entrypoint follows.
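Here is a rough sketch of such an entrypoint.sh, assuming the sidecar drops the secrets as a single env-style file at /path/to/my/mount/vaultsecrets; the path, the unmount/cleanup step and the app command are illustrative only:

```bash
#!/bin/sh
# Sketch of an entrypoint that sources mounted secrets into this shell only.
# SECRETS_FILE and the app command below are placeholders.
SECRETS_FILE=/path/to/my/mount/vaultsecrets

# Auto-export everything the secrets file defines, so the app inherits it.
set -a
. "$SECRETS_FILE"
set +a

# Optional hardening from the gotcha above: drop the mount (or at least the file)
# so the plain-text secrets are gone before anyone execs into the running pod.
# Unmounting needs the right privileges; otherwise fall back to deleting the file.
umount "$(dirname "$SECRETS_FILE")" 2>/dev/null || rm -f "$SECRETS_FILE"

# exec replaces this shell with the app, so the app keeps the variables
# while no other process in the pod ever sees the mounted file again.
exec python manage.py runserver 0.0.0.0:8000    # placeholder Django app command
```

Whether the cleanup actually succeeds depends on the mount being unmountable or writable from inside the container, so treat that step as a direction to verify in your own setup rather than a guarantee.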

Sunday, February 12, 2023

VMs (Virtual Machines) on k8s with KubeVirt versus VMs on OpenStack or hypervisors

Virtual Machines (VMs) have been a popular way of deploying and running applications for many years. With the advent of cloud computing and the need for scalable, highly available infrastructure, VMs have found new homes on platforms like OpenStack and on traditional hypervisors. But with the rise of Kubernetes, VMs on Kubernetes with KubeVirt have become increasingly popular, offering several advantages over traditional VMs on hypervisors and OpenStack.

  1. Improved Resource Management: Kubernetes, with its powerful scheduler, ensures that VMs are efficiently deployed and resourced according to the requirements of the applications they host. This results in a more optimized and cost-effective deployment, as VMs are only given the resources they actually need, instead of being over-provisioned.

  2. Enhanced Networking: KubeVirt integrates with the Kubernetes networking model, providing a powerful and flexible way to manage network connections between VMs and other components within a cluster. This allows for easy scaling and migration of VMs without having to worry about network configurations.

  3. Improved Security: VMs on Kubernetes with KubeVirt can leverage Kubernetes security features, such as network segmentation, secrets management, and pod security policies, to provide a secure and controlled environment for deploying applications.

  4. Easier Migration: One of the biggest benefits of VMs on Kubernetes with KubeVirt is that they can be easily migrated between clusters and across cloud providers. This makes it easier for organizations to move their applications to new infrastructure as needed, without having to worry about compatibility issues or reconfiguring network connections.

  5. Increased Flexibility: KubeVirt provides a unified way to manage both VMs and containers within a single cluster, giving organizations greater flexibility in choosing the best deployment option for their applications. This allows for a more streamlined and efficient deployment process, as well as the ability to run legacy applications in a modern, scalable infrastructure.
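To make point 5 a bit more concrete, here is a minimal sketch of a KubeVirt VirtualMachine living alongside ordinary pods; it assumes the KubeVirt operator and CRDs are already installed, and the name, sizing and demo image are placeholders:

```bash
# Minimal KubeVirt VM sketch (operator/CRDs assumed installed; values are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true                    # start the VM as soon as the object is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi          # placeholder sizing
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo   # demo image used in the KubeVirt docs
EOF
```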

In conclusion, VMs on Kubernetes with KubeVirt offer several key advantages over traditional VMs on hypervisors and OpenStack, including improved resource management, enhanced networking, improved security, easier migration, and increased flexibility. As organizations look to modernize their infrastructure and move to the cloud, VMs on Kubernetes with KubeVirt are becoming an increasingly popular choice, offering a powerful and scalable platform for deploying applications.

SDLC with k8s

Software Development Life Cycle (SDLC) is a process of designing, developing, and deploying software applications. It involves various phases, including planning, analysis, design, development, testing, and deployment. With the increasing popularity of Kubernetes, organizations are looking for ways to integrate it into their SDLC process. Kubernetes, an open-source container orchestration system, can help simplify the deployment and management of complex applications. In this article, we will discuss the benefits of incorporating Kubernetes into the SDLC process and how it can help improve the overall software development process.

  1. Improved Deployment Process: Kubernetes makes it easier to deploy and manage complex applications. With Kubernetes, you can define and automate your deployment process, making it easier to deploy applications consistently and repeatedly. This helps to minimize downtime and reduces the risk of human error. Additionally, Kubernetes provides features such as rolling updates, which allow you to update your application without affecting the availability of your services (a small sketch follows this list).

  2. Faster Testing: With Kubernetes, you can easily spin up test environments in minutes, allowing you to test your application in a variety of scenarios. This speeds up the testing process and helps to catch bugs and issues early on in the development process. Additionally, Kubernetes provides features such as automatic rollbacks, which allow you to revert to a previous version of your application in case of a failure.

  3. Improved Collaboration: Kubernetes makes it easier for development teams to work together by providing a unified platform for deploying and managing applications. This helps to reduce the risk of conflicting changes and improves collaboration between developers, testers, and operations teams. Additionally, Kubernetes provides a centralized management system for applications, making it easier for teams to collaborate and manage their applications.

  4. Scalability and Flexibility: Kubernetes provides a scalable and flexible platform for deploying and managing applications. With Kubernetes, you can easily scale your applications up or down based on demand, making it easier to manage the resources needed for your applications. Additionally, Kubernetes provides features such as automatic scaling, which allows your applications to automatically scale based on the usage patterns.

  5. Cost Savings: By integrating Kubernetes into the SDLC process, organizations can reduce the time and cost associated with deploying and managing applications. With Kubernetes, you can automate the deployment and management process, reducing the need for manual intervention. Additionally, Kubernetes provides a unified platform for deploying and managing applications, reducing the need for multiple tools and systems.
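As a small illustration of the rolling-update and rollback behaviour mentioned in points 1 and 2 (the deployment and image names are placeholders, and the rollback here is triggered manually rather than automatically):

```bash
# Rolling update and rollback sketch (deployment/image names are placeholders).

# Trigger a rolling update by switching the image of an existing Deployment.
kubectl set image deployment/web web=myrepo/web:v2

# Watch the rollout proceed pod by pod, without taking the service down.
kubectl rollout status deployment/web

# If v2 misbehaves, revert to the previous revision.
kubectl rollout undo deployment/web
```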

In conclusion, incorporating Kubernetes into the SDLC process provides numerous benefits, including improved deployment processes, faster testing, improved collaboration, scalability and flexibility, and cost savings. By integrating Kubernetes into the SDLC process, organizations can improve the overall software development process and deliver better quality applications faster.