Announcing Smart Karpenter Version 1.1.0
Veena Jayaram
Staff Technical Writer/Program Manager
Contents
Key Highlights in Smart Karpenter Version 1.1.0
Support for LKE Enterprise
Core Upgrade - Smart Karpenter 1.6.2
Oracle Bare Metal GPU Support (Beta)
Key Benefits
How It Works
Getting Started

We are proud to announce Smart Karpenter version 1.1.0, released on September 12, 2025. This next-generation autoscaling solution is designed to simplify Kubernetes scaling, reduce cloud costs, and let DevOps teams focus on building rather than on constant tuning.

Key Highlights in Smart Karpenter Version 1.1.0

This release introduces major features and enhancements across infrastructure, application scaling, and AI-driven decision making.

Support for LKE Enterprise

With this release, Smart Karpenter can now provision and manage nodes in Linode Kubernetes Engine (LKE) Enterprise as a target environment.
For more details, see acquiring a license for LKE Enterprise and configuring Smart Karpenter on LKE.

Core Upgrade - Smart Karpenter 1.6.2

We’ve upgraded the Smart Karpenter core to version 1.6.2, bringing stability, performance enhancements, and security improvements inherited from upstream.

Oracle Bare Metal GPU Support (Beta)

Smart Karpenter now supports the Oracle Bare Metal GPU compute shape BM.GPU.A10, which you can include in your NodePool definitions. Support for this shape is currently in Beta.
Visit the Bare Metal GPU NodePool documentation to get started.
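As a sketch, a NodePool restricted to the BM.GPU.A10 shape might look like the following. Everything here other than the instance type (the pool name, the taint, and the exact schema details) is an illustrative assumption; consult the Bare Metal GPU NodePool documentation for the supported fields.

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gpu-pool            # illustrative name
spec:
  template:
    spec:
      requirements:
        # Limit this pool to the Oracle Bare Metal GPU shape (Beta)
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["BM.GPU.A10"]
      taints:
        # Keep non-GPU workloads off these nodes (assumed convention)
        - key: nvidia.com/gpu
          effect: NoSchedule
```

Pairing the instance-type requirement with a GPU taint is a common pattern so that only pods which explicitly tolerate the taint land on the expensive GPU capacity.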

Key Benefits

With these features, organizations can expect:

  • Up to 70% reduction in cloud costs by ensuring right-sized node and pod provisioning and eliminating over-provisioning.
  • Better SLO (Service Level Objective) compliance, especially during unpredictable traffic spikes.
  • Less manual effort: no more tweaking thresholds, dummy pods, or maintaining large static node pools.

How It Works

Smart Karpenter combines two main layers:

  1. Smart Scaler (AI / Prediction Layer)
    Deployed via Helm in Observation mode initially. It monitors real-time metrics across app services, builds a service graph, forecasts demand, and suggests optimal pod/node counts.
  2. Karpenter (Node Provisioning Layer)
    Once confidence is established, Smart Scaler switches to Optimize (or Run) mode and its predictions feed into Karpenter, which provisions nodes just in time. Nodes start when they are needed, run efficiently, and scale down as demand falls.
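The two-phase rollout described above might look something like the following Helm workflow. The chart name, repository alias, and `mode` value key are assumptions made for illustration only; see the Smart Karpenter documentation for the actual chart and values interface.

```shell
# Phase 1: deploy Smart Scaler in Observation mode so it can watch
# real-time metrics and build its service graph without acting on them.
# (Chart name, repo alias, and value keys below are illustrative.)
helm install smart-scaler smart-karpenter/smart-scaler \
  --namespace smart-karpenter --create-namespace \
  --set mode=observation

# Phase 2: once the forecasts look trustworthy, switch to Optimize mode
# so predictions drive Karpenter's just-in-time node provisioning.
helm upgrade smart-scaler smart-karpenter/smart-scaler \
  --namespace smart-karpenter \
  --set mode=optimize
```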

Getting Started

Smart Karpenter v1.1.0 is already helping teams in production, and new users can get started today.
For detailed installation instructions and documentation, visit our Smart Karpenter documentation.

 
