
14 posts tagged with "KubeVela"


· 4 min read

You may have learned from this blog that we can use vela to manage cloud resources (like S3 buckets, AWS EIPs, and so on) via the Terraform plugin. We can create an application that contains cloud resource components; the application will generate these cloud resources, and we can then use vela to manage them.

Sometimes we already have Terraform cloud resources that were created and are managed by the Terraform binary or by something else. To gain the benefits of using KubeVela to manage cloud resources, or simply to keep the way you manage cloud resources consistent, you may want to import these existing Terraform cloud resources into KubeVela and manage them with vela. However, if we just create an application that describes these cloud resources, the resources will be recreated, which may lead to errors. To solve this problem, we built a simple backup_restore tool. This blog will show you how to use the backup_restore tool to import your existing Terraform cloud resources into KubeVela.

Step 1: Create Terraform Cloud Resources

Since we are going to demonstrate how to import an existing cloud resource into KubeVela, we need to create one first. If you already have such resources, you can skip this step.

Before you start, make sure you have:

Let's get started!

  1. Create an empty directory to start.

    mkdir -p cloud-resources
    cd cloud-resources
  2. Create a file named main.tf which will create an S3 bucket:

    resource "aws_s3_bucket" "bucket-acl" {
      bucket = var.bucket
      acl    = var.acl
    }

    output "RESOURCE_IDENTIFIER" {
      description = "The identifier of the resource"
      value       = aws_s3_bucket.bucket-acl.bucket_domain_name
    }

    output "BUCKET_NAME" {
      value       = aws_s3_bucket.bucket-acl.bucket_domain_name
      description = "The name of the S3 bucket"
    }

    variable "bucket" {
      description = "S3 bucket name"
      default     = "vela-website"
      type        = string
    }

    variable "acl" {
      description = "S3 bucket ACL"
      default     = "private"
      type        = string
    }
  3. Configure the AWS Cloud provider credentials:

    export AWS_ACCESS_KEY_ID="your-accesskey-id"
    export AWS_SECRET_ACCESS_KEY="your-accesskey-secret"
    export AWS_DEFAULT_REGION="your-region-id"
  4. Set values for the variables declared in main.tf via environment variables:

    export TF_VAR_acl="private"; export TF_VAR_bucket="your-bucket-name"
  5. (Optional) Create a backend.tf to configure your Terraform backend. We just use the default local backend in this example.

  6. Run terraform init and terraform apply to create the S3 bucket:

    terraform init && terraform apply
  7. Check the S3 bucket list to make sure the bucket is created successfully.

  8. Run terraform state pull to get the Terraform state of the cloud resource and store it into a local file:

    terraform state pull > state.json
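Before moving on, it's worth sanity-checking the snapshot. The sketch below uses a trimmed stand-in file so it doesn't touch your real state.json; the field names follow the Terraform state v4 schema, and a real pulled state carries many more fields:

```shell
# Trimmed stand-in for a pulled state file (illustrative only).
cat > sample-state.json <<'EOF'
{
  "version": 4,
  "resources": [
    { "type": "aws_s3_bucket", "name": "bucket-acl", "instances": [] }
  ]
}
EOF

# The same check against your real state.json confirms the bucket was captured.
grep -q '"type": "aws_s3_bucket"' sample-state.json && echo "bucket resource captured"
```

If the grep finds nothing, the pull likely ran against the wrong workspace or backend.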

Step 2: Import Existing Terraform Cloud Resources into KubeVela

  1. Create the application.yaml file. Make sure each field of the component is consistent with your cloud resource's actual configuration:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: app-aws-s3
    spec:
      components:
        - name: sample-s3
          type: aws-s3
          properties:
            bucket: vela-website-202110191745
            acl: private
            writeConnectionSecretToRef:
              name: s3-conn
  2. Get the backup_restore tool:

    git clone https://github.com/kubevela/terraform-controller.git
    cd terraform-controller/hack/tool/backup_restore
  3. Run the restore command:

    go run main.go restore --application <path/to/your/application.yaml> --component sample-s3 --state <path/to/your/state.json>

    The above command will first restore the Terraform backend in Kubernetes and then create the application without recreating the S3 bucket.
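To confirm the import didn't touch the bucket, you can check the application's status afterwards. A sketch, assuming the vela CLI is installed and your kubeconfig points at the cluster the restore ran against (it skips quietly when the CLI is absent):

```shell
# Check that KubeVela now tracks the application (without re-provisioning anything).
if command -v vela >/dev/null 2>&1; then
  vela status app-aws-s3 || true   # expect the sample-s3 component to be reported healthy
else
  echo "vela CLI not found; skipping the check"
fi
```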

That's all! You have successfully migrated the management of the S3 bucket to KubeVela!

What's more

For more information about the backup_restore tool, please read the doc. If you run into any problems, issues and pull requests are always welcome.

· 8 min read
Jianbo Sun

Helm Charts are so popular that you can find almost 10K different pieces of software packaged this way. In today's multi-cluster/hybrid-cloud business environment, we often encounter these typical requirements: distributing to multiple specific clusters, grouping distributions according to business needs, and maintaining differentiated configurations across clusters.

In this blog, we'll introduce how to use KubeVela to do multi-cluster delivery for Helm Charts.

If you don't have multiple clusters, don't worry: we'll start from scratch, with only Docker or a Linux system required. You can also refer to the basic Helm chart delivery in a single cluster.
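As a taste of what's ahead, multi-cluster Helm delivery in KubeVela boils down to a single Application that pairs a helm component with a topology policy. A minimal sketch; the chart repo URL, chart name, and cluster name are illustrative, and the helm component type assumes the fluxcd addon is enabled:

```shell
# Illustrative Application: deliver a Helm chart to a chosen cluster via a topology policy.
cat > helm-multicluster-app.yaml <<'EOF'
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: helm-hello
spec:
  components:
    - name: hello
      type: helm            # provided by the fluxcd addon
      properties:
        repoType: helm
        url: https://jhidalgo3.github.io/helm-charts/   # illustrative chart repo
        chart: hello-kubernetes-chart                   # illustrative chart name
  policies:
    - name: deploy-to-prod
      type: topology
      properties:
        clusters: ["cluster-prod"]   # illustrative cluster name
EOF
```

You would then deliver it with `vela up -f helm-multicluster-app.yaml`; adding more cluster names to the policy fans the same chart out to each of them.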

· 8 min read
Jianbo Sun

If you're looking for something to glue the Terraform ecosystem to the Kubernetes world, congratulations! This blog is exactly what you want.

We will introduce how to integrate Terraform modules into KubeVela by fixing a real-world problem -- "Fixing the Developer Experience of Kubernetes Port Forwarding" -- inspired by an article from Alex Ellis.

In general, this article will be divided into two parts:

  • Part 1 will introduce how to glue Terraform with KubeVela; it requires some basic knowledge of both Terraform and KubeVela. You can skip this part if you don't want to extend KubeVela as a developer.
  • Part 2 will introduce how KubeVela can 1) provision a cloud ECS instance with a public IP; 2) use the ECS instance as a tunnel server to provide public access to any container service within an intranet environment.

OK, let's go!

· 13 min read

KubeVela is a modern software delivery control plane. The goal is to make application deployment and operations simpler, more agile, and more reliable in today's hybrid multi-cloud environment. Since the release of version 1.1, the KubeVela architecture has naturally solved the delivery problems enterprises face in hybrid multi-cloud environments and has provided sufficient scalability based on the OAM model, which has won the favor of many enterprise developers and accelerated KubeVela's iteration.

In version 1.2, we released an out-of-the-box visual console that allows end users to publish and manage diverse workloads through a UI. The release of version 1.3 improved the extension system built around the OAM model and provided rich addon capabilities. It also shipped a large number of enterprise-grade features, including LDAP authentication, making enterprise integration more convenient. You can obtain more than 30 addons from the KubeVela community addon registry, including well-known CNCF projects (such as argocd, istio, and traefik), middleware (such as Flink and MySQL), and hundreds of cloud vendor resources.

In version 1.4, we focused on making application delivery safe, foolproof, and transparent. We added core features including multi-cluster authentication and authorization, a topology view for complex resources, and a one-click installation of the control plane. We comprehensively strengthened delivery security in multi-tenancy scenarios, improved the consistency of the application development and delivery experience, and made the delivery process more transparent.

· 8 min read

One of the biggest requests from the KubeVela community is a transparent delivery process for the resources in an application. For example, many users prefer to use a Helm Chart to package a lot of complex YAML, but once any issue arises during deployment -- the underlying storage cannot be provisioned, associated resources are not created, the underlying configuration is incorrect, and so on -- even a small problem becomes a huge troubleshooting hurdle because the Helm chart is a black box. In a modern hybrid multi-cluster environment with a wide range of resources, how do you obtain effective information and solve the problem? This can be a very big challenge.

resource graph

As shown in the figure above, KubeVela now offers a real-time resource topology graph for applications, which further improves KubeVela's application-centric delivery experience. Developers only need to care about simple, consistent APIs when initiating application delivery; when they need to troubleshoot or follow the delivery process, they can use the resource topology graph to quickly see how resources are arranged across clusters, from the application down to the running status of each Pod instance. Resource relationships are obtained automatically, even for complex, black-box Helm Charts.
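The same topology can also be inspected from the CLI with vela status and its tree output (available since v1.4). A sketch, assuming a hypothetical application named my-app and skipping quietly when the CLI is absent:

```shell
# Print the resource tree of an application across clusters from the terminal.
if command -v vela >/dev/null 2>&1; then
  vela status my-app --tree || true   # my-app is a placeholder application name
else
  echo "vela CLI not found; skipping"
fi
```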

In this post, we will describe how this new feature of KubeVela is implemented and works, and the roadmap for this feature.

· 10 min read
Jianbo Sun

This blog will introduce how to use CUE and KubeVela to build your own abstraction APIs to reduce the complexity of Kubernetes resources. As a platform builder, you can customize the abstraction dynamically, build a path from shallow to deep for your developers as needed, adapt to a growing number of scenarios, and meet the iterative demands of your company's long-term business development.

· 16 min read
KubeVela Community

Thanks to the contributions of hundreds of developers in the KubeVela community, including around 500 PRs from more than 30 contributors, KubeVela v1.3 is officially released. Compared to v1.2, released three months ago, this version provides a large number of new features in three areas: the OAM engine (Vela Core), the GUI dashboard (VelaUX), and the addon ecosystem. These features are derived from the in-depth practice of many end users, such as Alibaba, LINE, China Merchants Bank, and iQiyi, and have finally become part of the KubeVela project that everyone can use out of the box.

Pain Points of Application Delivery

So, what challenges have we encountered in cloud-native application delivery?

· 7 min read
Wei Duan

In today's multi-cluster business scenarios, we often encounter these typical requirements: distributing to multiple specific clusters, grouping distributions according to business needs, and maintaining differentiated configurations across clusters.

KubeVela v1.3 iterates on the previous multi-cluster capabilities. This article will show how to use them for swift multi-cluster deployment and management, and put those concerns to rest.

· 6 min read
Xiangbo Ma

The cloud platform development team of China Merchants Bank has been trying out KubeVela internally since 2021 and aims to use it to enhance our primary application delivery and management capabilities. Due to the specific security concerns of the financial and insurance industry, network controls are relatively strict: our intranet cannot pull images directly from Docker Hub, and no Helm chart source is available either. Therefore, to adopt KubeVela on the intranet, you must perform a complete offline installation.

This article takes KubeVela v1.2.5 as an example and introduces our offline installation practice, to help other users complete KubeVela deployments in offline environments more easily.

· 10 min read
Tianxin Dong

Against the backdrop of machine learning going viral, AI engineers not only need to train and debug their models but also need to deploy them online to verify how they perform (although sometimes this part of the work is done by AI platform engineers). This is tedious and draining for AI engineers.

In the cloud-native era, model training and model serving are usually performed on the cloud as well. Doing so improves both scalability and resource utilization, which is very effective for machine learning scenarios that consume a lot of computing resources.

But it is often difficult for AI engineers to use cloud-native techniques, as the concept of cloud native has grown more complex over time. Even to deploy a simple model serving service on a cloud-native architecture, AI engineers may need to learn several additional concepts: Deployment, Service, Ingress, and so on.

As a simple, easy-to-use, and highly extensible cloud-native application management tool, KubeVela enables developers to quickly and easily define and deliver applications on Kubernetes without knowing the details of the underlying cloud-native infrastructure. KubeVela's rich extensibility extends to AI addons that provide functions such as model training, model serving, and A/B testing, covering the basic needs of AI engineers and helping them quickly conduct model training and serving in a cloud-native environment.

This article focuses on how to use KubeVela's AI addons to help engineers complete model training and model serving more easily.
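In practice, the workflow starts by enabling the addons. A sketch of that first step, assuming addon names model-training and model-serving for the KubeVela AI addons, and skipping quietly when the CLI is absent:

```shell
# Enable the AI addons so their component types become available to applications.
if command -v vela >/dev/null 2>&1; then
  vela addon enable model-training || true   # assumed addon name for training
  vela addon enable model-serving  || true   # assumed addon name for serving
else
  echo "vela CLI not found; skipping addon setup"
fi
```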