
· 8 min read

One of the biggest requests from the KubeVela community is a transparent delivery process for the resources in an application. For example, many users prefer to use a Helm Chart to package a large amount of complex YAML, but once any issue occurs during deployment, such as the underlying storage not being provisioned properly, associated resources not being created, or an incorrect underlying configuration, even a small problem becomes a huge troubleshooting hurdle because the Helm chart is a black box. Especially in modern hybrid, multi-cluster environments with such a wide range of resources, how do you obtain effective information and solve the problem? This can be a very big challenge.

[Figure: application resource topology graph]

As shown in the figure above, KubeVela now offers a real-time resource topology graph for applications, which further improves KubeVela's application-centric delivery experience. Developers only need to care about simple and consistent APIs when initiating application delivery. When they need to troubleshoot problems or follow the delivery process, they can use the resource topology graph to quickly see how resources are arranged across different clusters, from the application down to the running status of each Pod instance. Resource relationships are discovered automatically, even for complex, black-box Helm Charts.

In this post, we will describe how this new feature of KubeVela works, how it is implemented, and the roadmap for it.

· 10 min read
Jianbo Sun

This blog will introduce how to use CUE and KubeVela to build your own abstraction APIs that reduce the complexity of Kubernetes resources. As a platform builder, you can customize the abstraction dynamically, build a path from shallow to deep for your developers as needed, adapt to a growing number of different scenarios, and meet the iterative demands of the company's long-term business development.
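To give a taste of what such an abstraction can look like, here is a minimal sketch of a ComponentDefinition whose CUE template exposes only two parameters to developers and expands into a full Kubernetes Deployment. The definition name, parameters, and defaults are hypothetical examples for illustration, not taken from the article itself.

```yaml
# Minimal sketch: a CUE-based abstraction that hides Deployment details.
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: simple-webserver          # hypothetical name for illustration
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
        parameter: {
          image: string            // the only fields developers fill in
          port:  *80 | int         // defaults to 80 when omitted
        }
        output: {
          apiVersion: "apps/v1"
          kind:       "Deployment"
          spec: {
            selector: matchLabels: "app.oam.dev/component": context.name
            template: {
              metadata: labels: "app.oam.dev/component": context.name
              spec: containers: [{
                name:  context.name
                image: parameter.image
                ports: [{containerPort: parameter.port}]
              }]
            }
          }
        }
```

Developers then reference the component type by name in their Application and only see the two parameters, while the platform team can evolve the underlying template independently.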

· 16 min read
KubeVela Community

Thanks to the contributions of hundreds of developers in the KubeVela community and around 500 PRs from more than 30 contributors, KubeVela version 1.3 is officially released. Compared to v1.2, released three months ago, this version provides a large number of new features in three areas: the OAM engine (Vela Core), the GUI dashboard (VelaUX), and the addon ecosystem. These new features are derived from the in-depth practice of many end users such as Alibaba, LINE, China Merchants Bank, and iQiyi, and have finally become part of the KubeVela project that everyone can use out of the box.

Pain Points of Application Delivery

So, what challenges have we encountered in cloud-native application delivery?

· 7 min read
Wei Duan

In today's multi-cluster business scenarios, we often encounter these typical requirements: distributing an application to multiple specific clusters, grouping distributions according to business needs, and applying differentiated configurations across clusters.

KubeVela v1.3 iterates on the previous multi-cluster capabilities. This article will show how to use them for swift multi-cluster deployment and management, addressing all of these concerns.
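For readers who want a concrete picture before diving in, the sketch below shows roughly how these requirements map onto a single Application: a topology policy selects target clusters, an override policy applies differentiated configuration, and a deploy workflow step ties them together. The application, component, and cluster names are placeholders, not values from the article.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app               # placeholder name
spec:
  components:
    - name: frontend
      type: webservice
      properties:
        image: nginx:1.21
  policies:
    # Distribute to multiple specific clusters.
    - name: target-clusters
      type: topology
      properties:
        clusters: ["cluster-hangzhou", "cluster-beijing"]   # placeholder cluster names
    # Differentiated configuration: scale the component up in these clusters.
    - name: more-replicas
      type: override
      properties:
        components:
          - name: frontend
            traits:
              - type: scaler
                properties:
                  replicas: 3
  workflow:
    steps:
      - name: deploy-to-clusters
        type: deploy
        properties:
          policies: ["target-clusters", "more-replicas"]
```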

· 6 min read
Xiangbo Ma

The cloud platform development team of China Merchants Bank has been trying out KubeVela internally since 2021 and aims to use it to enhance our primary application delivery and management capabilities. Due to the specific security concerns of the financial and insurance industry, network control measures are relatively strict: our intranet cannot pull images from Docker Hub directly, and there is no Helm chart source available either. Therefore, to land KubeVela on the intranet, you must perform a complete offline installation.

This article takes KubeVela v1.2.5 as an example and introduces offline installation practices to help other users complete KubeVela deployments in offline environments more easily.

· 10 min read
Tianxin Dong and Yicai Yu

With the rapid development of cloud-native technology, how can we use the cloud to empower business development? When launching applications, how can developers easily develop and debug them in a multi-cluster, hybrid cloud environment? And during the deployment process, how can we make sure applications are sufficiently verified and reliable?

These crucial issues urgently need to be resolved.

In this article, we will use KubeVela and Nocalhost to provide a solution for cloud debugging and multi-cluster hybrid cloud deployment.

When a new application needs to be developed and launched, we hope that the results of debugging in the local IDE are consistent with the final deployment state in the cloud. Such consistency gives us the greatest confidence in deployment and allows us to iterate on application updates in a more efficient and agile way, such as with GitOps: when new code is pushed to the code repository, the applications in the environment are automatically updated in real time.

Based on KubeVela and Nocalhost, we can deploy an application as shown below:

[Figure: cloud debugging and deployment workflow with KubeVela and Nocalhost]

As shown in the figure: use KubeVela to create an application, deploy it to the test environment, and pause it there. Use Nocalhost to debug the application in the cloud. After debugging, push the debugged code to the code repository, deploy it with KubeVela using GitOps, and then promote it to the production environment after it has been verified in the test environment.
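To make that flow concrete, here is a minimal sketch of what the KubeVela side of the workflow might look like: the application is first deployed to a test environment, then suspended so it can be debugged and verified, and finally promoted to production. The application name, image, cluster names, and step names are placeholders rather than the exact manifest from the article.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app                        # placeholder name
spec:
  components:
    - name: demo-server
      type: webservice
      properties:
        image: example/demo-server:v1   # placeholder image
        port: 8080
  policies:
    - name: test-env
      type: topology
      properties:
        clusters: ["test"]              # placeholder cluster names
    - name: prod-env
      type: topology
      properties:
        clusters: ["prod"]
  workflow:
    steps:
      - name: deploy-to-test
        type: deploy
        properties:
          policies: ["test-env"]
      # The workflow pauses here: debug with Nocalhost, verify in the test
      # environment, then continue with `vela workflow resume demo-app`.
      - name: manual-approval
        type: suspend
      - name: deploy-to-prod
        type: deploy
        properties:
          policies: ["prod-env"]
```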

In the following, we will introduce how to use KubeVela and Nocalhost for cloud debugging and multi-cluster hybrid cloud deployment.

· 10 min read
Tianxin Dong

With machine learning going viral, AI engineers not only need to train and debug their models, but also need to deploy them online to verify how they behave (of course, sometimes this part of the work is done by AI platform engineers). This work is very tedious and draining for AI engineers.

In the cloud-native era, our model training and model serving are also usually performed on the cloud. Doing so not only improves scalability, but also improves resource utilization. This is very effective for machine learning scenarios that consume a lot of computing resources.

But it is often difficult for AI engineers to use cloud-native techniques. The concept of cloud native has become more complex over time. Even to deploy a simple model service on a cloud-native architecture, AI engineers may need to learn several additional concepts: Deployment, Service, Ingress, and so on.

As a simple, easy-to-use, and highly extensible cloud-native application management tool, KubeVela enables developers to quickly and easily define and deliver applications on Kubernetes without knowing any details about the underlying cloud-native infrastructure. KubeVela's rich extensibility extends to AI addons, which provide functions such as model training, model serving, and A/B testing, covering the basic needs of AI engineers and helping them quickly conduct model training and model serving in a cloud-native environment.

This article mainly focuses on how to use KubeVela's AI addon to help engineers complete model training and model serving more easily.

· 6 min read

KubeVela currently supports AWS, Azure, GCP, AliCloud, Tencent Cloud, Baidu Cloud, UCloud, and other cloud vendors, and also provides a quick and easy command-line tool to introduce cloud resources from cloud providers. But adding support for cloud resources one by one in KubeVela is not conducive to quickly satisfying users' needs. This article provides a solution that introduces the top 50 most popular cloud resources from AWS in less than 100 lines of code.

We also expect users to be inspired by this article to contribute cloud resources for other cloud providers.

· 12 min read

As cloud-native technologies continue to grow, more and more infrastructure capabilities are becoming standardized PaaS or SaaS products. Nowadays you no longer need a whole team to build a product, because there are so many services that can take on roles from software development and testing to infrastructure operations. Driven by the culture of agile development and cloud-native technologies, more and more roles, such as testing, monitoring, and security, are being shifted left to developers. As emphasized by DevOps, the work of monitoring, security, and operations can be done in the development phase via open source projects and cloud services. Nonetheless, this also creates huge challenges for developers, as they might lack control over diverse products and complex APIs. Not only do they have to make choices, but they also need to understand and coordinate complex, heterogeneous infrastructure capabilities in order to satisfy the fast-changing requirements of the business.

This complexity and uncertainty has undoubtedly worsened the developer experience, reducing the delivery efficiency of business systems and increasing operational risks. The tenet of developer experience is simplicity and efficiency. Not only developers but also enterprises have to choose better developer tools and platforms to achieve this goal. This is also the focus of KubeVela v1.2 and the upcoming releases: to build a modern platform based on cloud-native technologies that covers development, delivery, and operations. We can see from the following diagram of the KubeVela architecture that developers only need to focus on the applications themselves, using differentiated operational and delivery capabilities organized around those applications.

[Figure: KubeVela architecture]

· 13 min read
Tianxin Dong

KubeVela is a simple, easy-to-use, and highly extensible cloud-native application platform. It enables developers to deliver microservice applications easily, without knowing Kubernetes details.

KubeVela is based on the OAM model, which naturally solves the orchestration problems of complex resources. This means KubeVela can manage complex, large-scale applications with GitOps, keeping delivery manageable as team and system size grow and system complexity increases.