Most applications are moving to containers because of the big benefits this approach brings: scalability, immutability, efficiency, and so on. But adding abstraction layers can make other tasks much more difficult. One of them is security. Today, I would like to imagine a scenario where our web server is running in a container. It runs smoothly until we detect that we are under a DDoS attack. While our server is under attack, users can experience long latencies, or they may not be able to access the service at all.
Working on a really cool project with ClearML, I had to figure out why it was not possible to install some Python packages from PyPI. The first feedback I received was "the docker image has a bug", but when I dug deeper, I saw that the behaviour was related neither to the image, nvidia/cuda, nor to the site, PyPI. The strange behaviour affected HTTPS mainly, but not always, and it also appeared with other docker images: ubuntu, nginx, and so on. Therefore, something was happening between the docker daemon and the virtual machine.
Some days ago, my Ubuntu warned me that it could be upgraded to 22.04, the new LTS version, called Jammy Jellyfish. I thought the moment had come and I proceeded. I have to say that I faced several issues, such as some problems with Python 3.10 that affected many applications (you know, Ubuntu uses Python a lot), but those were easy to solve with a few re-installations. However, I was really upset when I saw that my Ethernet interface was not working.
Git is probably the most used version control system out there. It is full of features and gives a lot of flexibility to the people working with the code. One of these features is the ability to push the same code to different remote repositories. The use cases can be really varied: different target audiences, private and public repos, and so on. Trying to keep the same code in several places by hand can be a nightmare; instead, it is really easy thanks to git.
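The trick is a single remote with several push URLs, so one `git push` updates every repository. A minimal sketch, using local bare repositories as stand-ins for the real remotes (paths and names are illustrative):

```shell
# One local repo pushing to two remotes with a single "git push".
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/public.git"    # stand-in for the first remote
git init -q --bare "$tmp/mirror.git"    # stand-in for the second remote

git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "me@example.com"
git config user.name  "Me"
echo "hello" > README.md
git add README.md
git commit -qm "initial commit"

# One remote, two push URLs: fetch from the first, push to both.
git remote add origin "$tmp/public.git"
git remote set-url --add --push origin "$tmp/public.git"
git remote set-url --add --push origin "$tmp/mirror.git"

git push -q origin HEAD:main            # one push updates both remotes
git remote -v                           # lists both push URLs
```

With real hosting services you would use their SSH or HTTPS URLs in the `set-url --add --push` lines; everything else stays the same.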
NOTE: This content is just for testing and experimentation. It can cause many security issues, so do not use it in a production environment. Some days ago, a question was raised in a Slack channel of Clastix, the company I currently work for: could I launch a kind cluster on a remote machine? I immediately thought: yes, you could, but let's see how. Kind is a magnificent tool for people who develop for Kubernetes or test their software directly on Kubernetes.
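The key is making the API server listen on an address reachable from outside the remote machine. A minimal sketch of a kind config doing that (the port is illustrative, and binding to 0.0.0.0 is exactly the kind of security issue the note above warns about):

```yaml
# kind-config.yaml - expose the API server on all interfaces of the host
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  apiServerPort: 6443
```

Then, on the remote machine, `kind create cluster --config kind-config.yaml`, and point the `server:` field of your local kubeconfig at the remote machine's IP. Depending on the kind version, you may also need to deal with the API server certificate not including that IP.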
Some days ago I released openstackcli, a new containerized tool to work with the OpenStack API. The motivation for this project came from a missing feature in OVH's Managed Kubernetes: snapshots of in-use volumes. If you are already working with Kubernetes, you will want to perform everything in a Kubernetes way, including volume snapshots, and this is possible, as you can see in the docs. That would be as easy as:
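A VolumeSnapshot manifest along these lines (the names and the snapshot class here are illustrative; the exact values depend on your CSI driver):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot                         # illustrative name
spec:
  volumeSnapshotClassName: csi-cinder-snapclass   # illustrative class
  source:
    persistentVolumeClaimName: data-pvc           # PVC to snapshot (illustrative)
```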
Having several users in a Git hosting service is really normal: a user for your professional projects, a user for a company that you work with or for, a user for your pet projects, and so on. Different users need different SSH keys registered with the same Git service provider, so your PC can show messages like the following when you try to clone a project: Permission denied, please try again.
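A common way out is per-identity host aliases in your SSH config, so each clone URL picks the right key. A sketch (aliases, key paths, and repo names are illustrative):

```
# ~/.ssh/config
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_personal
    IdentitiesOnly yes
```

Then clone using the alias in place of the real hostname, e.g. `git clone git@github-personal:your-user/pet-project.git`; `IdentitiesOnly yes` stops SSH from trying every loaded key before the right one.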
One of the first steps when you work with data that must be stored is to estimate the disk size. This is really hard, especially when your application is new. Working in a cloud environment gives us the flexibility to use the resources we need at almost any moment, without wasting money on resources we will only need in the future, if everything goes right. If your application runs in a cloud virtual machine and needs to store some data, a good approach is to attach a new disk for that data instead of using the virtual machine's default disk.
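Once the cloud provider has attached the disk, preparing it boils down to formatting and mounting. A sketch of the steps, where a file-backed image stands in for the real device so the format step can be tried without root (device name and mount point are illustrative):

```shell
# Stand-in for the newly attached disk:
truncate -s 512M data-disk.img          # sparse file the size of the "disk"
mkfs.ext4 -q -F data-disk.img           # create the filesystem on it

# On a real VM, with the new disk showing up as /dev/sdb (illustrative):
#   sudo mkfs.ext4 /dev/sdb
#   sudo mkdir -p /mnt/data
#   sudo mount /dev/sdb /mnt/data
#   echo '/dev/sdb /mnt/data ext4 defaults 0 2' | sudo tee -a /etc/fstab
```

The fstab line makes the mount survive reboots; on real cloud disks it is safer to reference the device by UUID (`blkid /dev/sdb`) rather than by name, since device names can change.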
Do you like comics? I am sure that you do. You can find a lot of them on The Eye website, so I have been playing with a project in Go to download comics and read them whenever you wish: Comiccon. Comiccon is a toy project to download comics and keep them updated. It takes advantage of goroutines to download several comics at the same time; the parallelism is bounded by the number of your CPUs.
We all know the potential of having Kubernetes in production, and how easy it is to build distributed systems and maintain them. When you want to test your complex application locally, there are tools like minikube or microk8s, which are really good, but they are single-node clusters. What happens when I want to test my application locally, but closer to a real environment? To help with this, I have been working on a project that uses Vagrant to create a bunch of virtual machines that work as Kubernetes nodes (plus other things we will see later) and Ansible to configure them.