
3 posts tagged with "CI/CD"


· 4 min read

You may have learned from this blog that we can use KubeVela to manage cloud resources (such as S3 buckets, AWS EIPs, and so on) via the Terraform plugin. We can create an application that contains some cloud resource components; the application will provision these cloud resources, and we can then use vela to manage them.

Sometimes we already have Terraform cloud resources that were created and are managed by the Terraform binary or by something else. To gain the benefits of managing cloud resources with KubeVela, or simply to keep the way you manage cloud resources consistent, you may want to import these existing Terraform cloud resources into KubeVela. However, if we just create an application that describes these cloud resources, the cloud resources will be recreated, which may lead to errors. To solve this problem, we made a simple backup_restore tool. This blog will show you how to use the backup_restore tool to import your existing Terraform cloud resources into KubeVela.

Step 1: Create Terraform Cloud Resources

Since we are going to demonstrate how to import an existing cloud resource into KubeVela, we need to create one first. If you already have such resources, you can skip this step.

Before you start, make sure you have:

Let's get started!

  1. Create an empty directory to start.

    mkdir -p cloud-resources
    cd cloud-resources
  2. Create a Terraform configuration file (for example, main.tf) that will create an S3 bucket:

    resource "aws_s3_bucket" "bucket-acl" {
      bucket = var.bucket
      acl    = var.acl
    }

    output "RESOURCE_IDENTIFIER" {
      description = "The identifier of the resource"
      value       = aws_s3_bucket.bucket-acl.bucket_domain_name
    }

    output "BUCKET_NAME" {
      value       = aws_s3_bucket.bucket-acl.bucket_domain_name
      description = "The name of the S3 bucket"
    }

    variable "bucket" {
      description = "S3 bucket name"
      default     = "vela-website"
      type        = string
    }

    variable "acl" {
      description = "S3 bucket ACL"
      default     = "private"
      type        = string
    }
  3. Configure the AWS Cloud provider credentials:

    export AWS_ACCESS_KEY_ID="your-accesskey-id"
    export AWS_SECRET_ACCESS_KEY="your-accesskey-secret"
    export AWS_DEFAULT_REGION="your-region-id"
  4. Set the Terraform variables declared in the configuration file via TF_VAR_ environment variables:

    export TF_VAR_acl="private"; export TF_VAR_bucket="your-bucket-name"
  5. (Optional) Create a backend configuration file (for example, backend.tf) to configure your Terraform backend. We just use the default local backend in this example.

  6. Run terraform init and terraform apply to create the S3 bucket:

    terraform init && terraform apply
  7. Check the S3 bucket list to make sure the bucket is created successfully.

  8. Run terraform state pull to get the Terraform state of the cloud resource and store it into a local file:

    terraform state pull > state.json
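Before moving on, it can help to sanity-check the pulled state file. A minimal sketch, using a trimmed-down hypothetical sample in place of your real state.json and plain grep so no extra tooling is needed:

```shell
# Hypothetical sample standing in for the state.json pulled above.
cat > sample-state.json <<'EOF'
{
  "version": 4,
  "resources": [
    { "type": "aws_s3_bucket", "name": "bucket-acl" }
  ]
}
EOF

# Confirm the S3 bucket resource is actually recorded in the state;
# if it is missing, the restore step later would have nothing to import.
if grep -q '"aws_s3_bucket"' sample-state.json; then
  echo "bucket resource found in state"
fi
```

Run the same grep against your real state.json; if the resource type does not appear, re-check which workspace and backend you pulled from.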

Step 2: Import Existing Terraform Cloud Resources into KubeVela

  1. Create the application.yaml file. Make sure that each field of the Component is consistent with the configuration of your existing cloud resource:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: app-aws-s3
    spec:
      components:
        - name: sample-s3
          type: aws-s3
          properties:
            bucket: vela-website-202110191745
            acl: private
            writeConnectionSecretToRef:
              name: s3-conn
  2. Get the backup_restore tool:

    git clone
    cd terraform-controller/hack/tool/backup_restore
  3. Run the restore command:

    go run main.go restore --application <path/to/your/application.yaml> --component sample-s3 --state <path/to/your/state.json>

    The above command will first restore the Terraform backend in Kubernetes and then create the application without recreating the S3 bucket.
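The restore only works when the --component flag matches a component name in your application manifest. A quick pre-flight sketch; app-sample.yaml and the sample-s3 name mirror the example above and stand in for your own files:

```shell
# Hypothetical manifest standing in for your application.yaml.
cat > app-sample.yaml <<'EOF'
kind: Application
metadata:
  name: app-aws-s3
spec:
  components:
    - name: sample-s3
      type: aws-s3
EOF

component="sample-s3"
# Check that the component name exists in the manifest before invoking the tool.
if grep -q "name: ${component}" app-sample.yaml; then
  echo "component ${component} present in manifest"
else
  echo "component ${component} NOT found; restore would fail" >&2
fi
```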

That's all! You have successfully migrated the management of the S3 bucket to KubeVela!

What's more

For more information about the backup_restore tool, please read the doc. If you run into any problems, issues and pull requests are always welcome.

· 8 min read
Jianbo Sun

If you're looking for something to glue Terraform ecosystem with the Kubernetes world, congratulations! You're getting exactly what you want in this blog.

We will introduce how to integrate Terraform modules into KubeVela by solving a real-world problem, "Fixing the Developer Experience of Kubernetes Port Forwarding", inspired by an article from Alex Ellis.

In general, this article will be divided into two parts:

  • Part 1 introduces how to glue Terraform with KubeVela; it requires some basic knowledge of both Terraform and KubeVela. You can skip this part if you don't want to extend KubeVela as a developer.
  • Part 2 introduces how KubeVela can 1) provision a cloud ECS instance with a public IP; and 2) use the ECS instance as a tunnel server to provide public access for any container service within an intranet environment.

OK, let's go!

· 13 min read

KubeVela is a modern software delivery control plane. The goal is to make application deployment and O&M simpler, more agile, and more reliable in today's hybrid multi-cloud environments. Since the release of version 1.1, the KubeVela architecture has naturally solved the delivery problems of enterprises in hybrid multi-cloud environments and has provided sufficient extensibility based on the OAM model, earning it the favor of many enterprise developers. This has also accelerated the iteration of KubeVela.

In version 1.2, we released an out-of-the-box visual console that allows end users to publish and manage diverse workloads through the UI. The release of version 1.3 improved the extension system built around the OAM model and provided rich addon capabilities. It also delivered a large number of enterprise-grade features, including LDAP authentication, making enterprise integration more convenient. You can obtain more than 30 addons from the KubeVela community addon registry, covering well-known CNCF projects (such as argocd, istio, and traefik), middleware (such as Flink and MySQL), and hundreds of cloud vendor resources.

In version 1.4, we focused on making application delivery safe, foolproof, and transparent. We added core features including multi-cluster authentication and authorization, resource topology visualization, and one-click installation of the control plane. We comprehensively strengthened delivery security in multi-tenancy scenarios, improved the consistency of the application development and delivery experience, and made the application delivery process more transparent.