Once you have deployed your Kubernetes infrastructure, you have a control plane and a set of worker nodes. You define how you want Kubernetes to manage your Kubernetes objects through tools that interact with the API. Kubernetes objects are all those persistent entities in the Kubernetes system, such as your Pods, Nodes, Services, Namespaces, ConfigMaps, and Events.
Most operations can be performed through the kubectl command-line interface or other command-line tools, such as kubeadm, which in turn use the API.
kubectl is the command-line tool you use to run most of the commands that manage Kubernetes clusters. Use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
In this post, learn how to use the Kubernetes documentation to discover the objects and how to describe the state you want for your Kubernetes objects. In particular, you will want to know which fields to use in your .yaml files and how to determine what the default values are. You will also learn the basics of reading the API reference.
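As a sketch of the kind of .yaml file the API reference helps you write, here is a minimal Pod manifest; the name and image below are placeholder values, and every field name comes from the Pod object's reference page:

```yaml
# Minimal Pod manifest. Each field (apiVersion, kind, metadata, spec)
# is documented in the kubernetes.io API reference for the Pod object.
apiVersion: v1          # API group/version for core objects like Pod
kind: Pod               # the object type
metadata:
  name: example-pod     # hypothetical name for this walkthrough
spec:
  containers:
  - name: app
    image: nginx:1.25   # any container image you want to run
    ports:
    - containerPort: 80
```

You would apply a file like this with `kubectl apply -f pod.yaml`; fields you omit take the defaults listed in the reference.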
Continue reading “Read and write Kubernetes objects using kubernetes.io API reference documentation”
When you are building your cloud infrastructure, you can think of it as code. Infrastructure as code means that the virtual machines, networking, and storage can all be defined and versioned like code. On Azure, you can build your infrastructure using Azure Resource Manager (ARM) templates and deploy using PowerShell. You could also use PowerShell or Azure CLI to express your infrastructure. Many enterprises use Terraform, an open-source infrastructure-as-code tool by HashiCorp, to build, change, and version cloud infrastructure.
You can use Terraform across multiple platforms, including Amazon Web Services, IBM Cloud (formerly Bluemix), Google Cloud Platform, DigitalOcean, Linode, Microsoft Azure, Oracle Cloud Infrastructure, OVH, Scaleway, VMware vSphere, Open Telekom Cloud, OpenNebula, and OpenStack. In this article, we’ll explore Azure. At a high level, you write the configuration of your infrastructure in Terraform files that can describe the infrastructure of a single application or of your entire data center, and then apply it to the target cloud (in this case Azure).
In this article, you install and configure Terraform, create Terraform configuration plans for two resource groups, an AKS cluster, and an Azure Log Analytics workspace, and apply the plans to Azure. Continue reading “Walkthrough: Create Azure Kubernetes Service (AKS) using Terraform”
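To give a flavor of what such a configuration looks like, here is a minimal sketch of a Terraform plan for a resource group and an AKS cluster; the names, location, node count, and VM size are placeholder values for illustration:

```hcl
# Sketch: a resource group and an AKS cluster with the azurerm provider.
# All names and sizes below are example values, not recommendations.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "aks" {
  name     = "rg-aks-demo"
  location = "eastus"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-demo"
  location            = azurerm_resource_group.aks.location
  resource_group_name = azurerm_resource_group.aks.name
  dns_prefix          = "aksdemo"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"   # managed identity for the cluster
  }
}
```

You would run `terraform init`, review the changes with `terraform plan`, and then create the resources with `terraform apply`.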
Azure Kubernetes Service (AKS) provides a hosted Kubernetes service where Azure handles critical tasks like health monitoring and maintenance for you. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. When you create an AKS cluster, Azure provides the Kubernetes control plane. You need to manage only the agent nodes within your clusters.
There are several ways to deploy to Azure, including using the portal, Azure CLI, Azure PowerShell, and Terraform.
In this walkthrough, you will create an AKS cluster using an ARM template and then use Azure CLI to deploy a simple application to the cluster. You will review the design decisions made for the walkthrough and see how the template supports Kubenet for Kubernetes networking, role-based access control (RBAC), and managed identities for communicating with other Azure resources. Finally, you will use a Kubernetes manifest file to define the desired state of the cluster, and test the application.
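A manifest for the simple application might look like the following sketch: a Deployment that declares the desired state of two replicas. The name is hypothetical; the image is the AKS hello-world sample from the Azure docs, but any container image works:

```yaml
# Sketch: a Deployment manifest declaring the desired state of the app.
# Kubernetes keeps two replicas of the container running for you.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app          # hypothetical application name
spec:
  replicas: 2               # desired number of pods
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1  # sample image
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f manifest.yaml` tells the cluster the state you want; Kubernetes does the rest.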
Continue reading “Walkthrough: Create Azure Kubernetes Service (AKS) using ARM template”
Azure Functions provides serverless computing as Functions-as-a-Service: a platform for you to develop, run, and manage application functionality without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app.
Azure Functions executes code in response to changes in data, incoming messages, a schedule, or an HTTP request.
Typically, you just deploy the function into an existing base container provided by Microsoft. But if you have specific needs, such as a specific language version, you can deploy your Function app as a custom container into the Azure Functions service.
As an alternative to the Azure service, you can deploy Azure Functions into your own Kubernetes deployment and run Functions alongside your other Kubernetes deployments.
With the Azure Functions service, you no longer need to manage disk capacity or memory. The compute requirements are handled automatically. You pay for what you use and when you use it, rather than for the fixed sizes and memory required by other Azure services.
You can use a Docker container to deploy your function app to Azure Functions, or deploy the same container into your own Kubernetes cluster.
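As a rough sketch, a custom Functions container starts from one of Microsoft's Azure Functions base images and copies your app in. The base-image tag below is one example; choose the tag matching your language and runtime version:

```dockerfile
# Sketch of a Dockerfile for a custom Azure Functions container.
# The Python base image is an example; Microsoft also publishes
# images for .NET, Node.js, Java, and PowerShell.
FROM mcr.microsoft.com/azure-functions/python:4-python3.11

# Standard settings for the Functions host inside the container
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true

# Install app dependencies, then copy the function code
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
```

You would build and push this image, then point either the Azure Functions service or a Kubernetes deployment at it.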
In this article, you learn about the key features of Azure Functions with containers.
Let’s get started.
Continue reading “Serverless apps in Kubernetes, Azure Functions”
Azure offers several ways to host your application code. In some recent articles here we described some services and features for App Services and Container Instances. Other alternatives include Azure Batch and Azure Functions.
The Azure Architecture Center provides guidance on how to choose a compute service for your application.
There are tradeoffs between control and ease of management: Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) offer different levels of control, flexibility, and portability.
Microsoft provides guidance for your compute service selection.
Continue reading “When to use Azure Kubernetes Service (AKS) for compute service”
You can run your web applications in Azure App Service, a fully managed service, using either Windows- or Linux-based containers. You may not need the overhead of a Kubernetes deployment. App Service provides security, load balancing, autoscaling, and automated management.
In addition, App Service has DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, package management, staging environments, custom domains, and TLS/SSL certificates.
By fully managed, we mean App Service automatically patches and maintains the OS and language frameworks for you. Spend time writing great apps and let Azure worry about the platform.
With App Service Environment, you deploy your application within a virtual network you define, where you have fine-grained control over inbound and outbound application network traffic.
Continue reading “Alternatives to Azure Kubernetes Service (AKS): Azure App Services”
You may want to use containers for your deployments to Azure, but you may not want the complexity of standing up your own Kubernetes cluster on premises or of running Azure Kubernetes Service (AKS). For example, you may want to run a container for only a short time.
Azure Container Instances has fast startup times and can be accessed using an IP address or a fully qualified domain name (FQDN). You can customize the size and use either Linux or Windows containers. You can schedule Linux containers to use NVIDIA Tesla GPU resources (in preview).
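As a sketch of how simple an ACI deployment is, here is an example container-group spec of the kind you can pass to `az container create --file`; the names are placeholders and the image is the ACI hello-world sample from the Azure docs:

```yaml
# Sketch of an ACI container group. Deploy with:
#   az container create --resource-group <rg> --file aci.yaml
# Names, location, and sizes are illustrative values.
apiVersion: '2019-12-01'
location: eastus
name: aci-demo
type: Microsoft.ContainerInstance/containerGroups
properties:
  osType: Linux
  containers:
  - name: helloworld
    properties:
      image: mcr.microsoft.com/azuredocs/aci-helloworld
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
      ports:
      - port: 80
  ipAddress:
    type: Public            # exposes the group on a public IP
    ports:
    - protocol: tcp
      port: 80
```

There is no cluster to manage: the group starts, runs, and can be deleted when you are done.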
Let’s learn more about Azure Container Instances.
Continue reading “Alternatives to Azure Kubernetes (AKS): Azure Container Instances”
Kubernetes is a portable, extensible, open-source platform for managing your containerized workloads and services. The Kubernetes architecture takes care of scaling and failover for your applications running in containers.
In this post, you will learn about the Kubernetes control plane components and the node components and how they work together. You will learn how a pod hosts multiple containers, how a node runs multiple pods, how a cluster includes several nodes, and how Kubernetes uses a control plane to keep track of what is happening in a cluster.
In short, you will understand the architecture of Kubernetes.
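One concrete piece of that picture, a pod hosting multiple containers, can be sketched as a manifest like this; the names and images are placeholders, and the two containers share the pod's network namespace and a volume:

```yaml
# Sketch: one Pod hosting two containers. The containers share the
# Pod's network namespace and the "shared-data" volume.
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod    # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}             # scratch volume living as long as the Pod
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /pod-data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
```

The control plane schedules this pod onto a node, and the node's kubelet keeps both containers running.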
Continue reading “Understand the architecture of Kubernetes”
In this post, learn how microservices, containers, and Kubernetes are all related. One is an architecture, one is a deployment mechanism, and one orchestrates how those deployments will function in production.
A microservice is a program that runs on a server or a virtual computer and responds to some request. Microservices give you a way to build applications that are resilient, highly scalable, independently deployable, and able to evolve quickly.
Microservices have a narrower scope and focus on doing smaller tasks well.
A container is just a process spawned from an executable file, running on a Linux machine, which has some restrictions applied to it.
Kubernetes (aka K8s) helps you increase your infrastructure utilization through the efficient sharing of computing resources across multiple processes. Kubernetes excels at dynamically allocating computing resources to fill demand. K8s also has side benefits that make the transition to microservices much easier.
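The sharing of computing resources is driven by the resource requests and limits you declare on each container; a minimal sketch, with illustrative values:

```yaml
# Sketch: requests tell the scheduler what to reserve for a container,
# so many containers can be packed efficiently onto shared nodes;
# limits cap what the container may consume. Values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo       # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:             # reserved for scheduling decisions
        cpu: 250m           # a quarter of one CPU core
        memory: 128Mi
      limits:               # hard ceiling at runtime
        cpu: 500m
        memory: 256Mi
```

The scheduler places pods wherever their requests fit, which is how K8s keeps nodes busy without overcommitting them.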
Let’s see how that works.
Continue reading “Understand microservices, containers, Kubernetes”
Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. The name, pronounced “koo-ber-net-eez,” is Greek for “helmsman of a ship.” Build, deliver, and scale containerized apps faster with Kubernetes.
In this blog, you explore what Kubernetes is, along with its advantages and disadvantages.
Let’s start with some definitions.
Continue reading “Get acquainted with Kubernetes”