JCacheX: The Modern Java Caching Library You Need

TL;DR: JCacheX is a high‑performance Java caching library that gets you production‑ready results fast: profile‑based tuning (READ_HEAVY, WRITE_HEAVY, HIGH_PERFORMANCE, ZERO_COPY, MEMORY_EFFICIENT), rich eviction strategies, first‑class Spring Boot integration, Kotlin extensions, detailed stats, and optional Kubernetes‑native distributed caching. It’s designed for low latency, clarity, and real‑world operability.

Why another Java cache?

Teams often face a trade‑off: simple caches that struggle in production vs. complex setups that take weeks to tune. JCacheX aims to be both fast and pragmatic:

  • Profile‑based defaults: Pick a profile that matches your workload and go. No guesswork.
  • Advanced eviction: TinyWindowLFU, LRU, LFU, FIFO, FILO, WeightBased, IdleTime, and more.
  • Low‑latency core: Optimized data structures with a “ZeroCopy” path for hot GETs.
  • Spring Boot native: Annotations, auto‑configuration, health, metrics, management.
  • Kotlin‑first extensions: Operators, coroutines, DSL configuration, collection helpers.
  • Distributed option: Kubernetes discovery + TCP protocol for simple, resilient clustering.
  • Operational visibility: Stats, health indicators, and metrics for real production use.

Main features at a glance

  • Cache Profiles: READ_HEAVY, WRITE_HEAVY, HIGH_PERFORMANCE, ZERO_COPY, MEMORY_EFFICIENT and more.
  • Eviction strategies: TinyWindowLFU, LRU, LFU, FIFO, FILO, WeightBased, IdleTime (see the sketch after this list).
  • Rich API: sync/async operations, remove/clear, collections view, stats.
  • Spring integration: @JCacheXCacheable, @JCacheXCacheEvict, @JCacheXCachePut, @JCacheXCaching, auto‑config, health, metrics.
  • Kotlin extensions: idiomatic operators, coroutine support, DSL config, safe ops.
  • Kubernetes distributed cache: KubernetesDistributedCache, KubernetesNodeDiscovery, TcpCommunicationProtocol.
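
Eviction strategies are normally chosen for you by a profile, but they can matter on their own. A minimal sketch of explicit selection, using the builder shown in the quick start below and assuming an evictionStrategy hook plus an EvictionStrategy constant (both names are assumptions; check the released API):

import io.github.dhruv1110.jcachex.Cache;
import io.github.dhruv1110.jcachex.CacheConfig;

// Hypothetical: pin the eviction policy to TinyWindowLFU instead of relying on a profile.
// The evictionStrategy(...) hook and EvictionStrategy constant are assumed names.
Cache<String, String> cache = Cache.create(
    CacheConfig.builder()
      .maximumSize(5_000L)
      .evictionStrategy(EvictionStrategy.TINY_WINDOW_LFU) // assumed hook and constant
      .build()
);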

Getting started

1) Add dependencies

Use the latest version from the releases page.

Maven

<!-- Core JCacheX library -->
<dependency>
  <groupId>io.github.dhruv1110</groupId>
  <artifactId>jcachex-core</artifactId>
  <version>2.0.1</version>
</dependency>

<!-- Spring Boot integration (optional) -->
<dependency>
  <groupId>io.github.dhruv1110</groupId>
  <artifactId>jcachex-spring</artifactId>
  <version>2.0.1</version>
</dependency>

<!-- Kotlin extensions (optional) -->
<dependency>
  <groupId>io.github.dhruv1110</groupId>
  <artifactId>jcachex-kotlin</artifactId>
  <version>2.0.1</version>
</dependency>

Gradle

dependencies {
  implementation 'io.github.dhruv1110:jcachex-core:2.0.1'
  implementation 'io.github.dhruv1110:jcachex-spring:2.0.1'   // optional
  implementation 'io.github.dhruv1110:jcachex-kotlin:2.0.1'   // optional
}

2) Your first cache in Java

import io.github.dhruv1110.jcachex.Cache;
import io.github.dhruv1110.jcachex.CacheConfig;
import io.github.dhruv1110.jcachex.profiles.CacheProfilesV3;

public class QuickStart {
  public static void main(String[] args) {
    Cache<String, String> cache = Cache.create(
        CacheConfig.builder()
          .applyProfile(CacheProfilesV3.READ_HEAVY()) // pick a tuned profile
          .maximumSize(10_000L)
          .recordStats(true)
          .build()
    );

    cache.put("k", "v");
    String v = cache.get("k");  // low-latency GET
    cache.remove("k");
    cache.clear();

    System.out.println(cache.stats()); // hit/miss counts, rates, etc.
  }
}
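
The feature list above also mentions async operations alongside these sync calls. Continuing from the QuickStart cache, a hedged sketch assuming CompletableFuture‑based variants named putAsync/getAsync (the real method names may differ):

import java.util.concurrent.CompletableFuture;

// Assumed async variants returning CompletableFuture; verify against the released API.
CompletableFuture<Void> putDone = cache.putAsync("k", "v");
CompletableFuture<String> pending = cache.getAsync("k");
pending.thenAccept(value -> System.out.println("async GET -> " + value));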

3) Spring Boot integration

Enable caching and use the annotations—auto‑configuration wires it all up:

@SpringBootApplication
@EnableCaching
public class DemoApplication { }

@Service
public class UserService {
  @JCacheXCacheable(cacheName = "users", profile = "READ_HEAVY")
  public User findById(Long id) { /* ... */ }

  @JCacheXCacheEvict(cacheName = "users", allEntries = true)
  public void evictAllUsers() { /* ... */ }
}
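
The annotation set also includes @JCacheXCachePut for write‑through updates. A small sketch in the same style (only cacheName is used, mirroring the attributes shown above; persistence details are elided):

@Service
public class UserUpdateService {
  // Refreshes the "users" cache with the method's return value after an update.
  @JCacheXCachePut(cacheName = "users")
  public User updateUser(User user) { /* persist, then return the updated user */ return user; }
}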

application.yml

jcachex:
  enabled: true
  autoCreateCaches: true
  default:
    maximumSize: 1000
    expireAfterSeconds: 600
    enableStatistics: true
  caches:
    users:
      profile: READ_HEAVY
      maximumSize: 1000
      expireAfterSeconds: 300

4) Kotlin extensions (optional, but delightful)

import io.github.dhruv1110.jcachex.kotlin.*

val cache = jcachex {
  profile = Profiles.READ_HEAVY
  maximumSize = 10_000
  stats = true
}

cache["k"] = "v"
val v = cache["k"]       // operator get
val value = cache.getOrPut("miss") { computeExpensive() }

cache.stats.prettyPrint()

5) Distributed caching on Kubernetes (optional)

For services that need a shared, in‑cluster cache:

  • KubernetesNodeDiscovery finds peers via the API server.
  • TcpCommunicationProtocol ships updates efficiently.
  • KubernetesDistributedCache coordinates membership and consistency trade‑offs (a wiring sketch follows the manifest below).

# Example K8s manifests (excerpt)
apiVersion: v1
kind: Service
metadata:
  name: jcachex-cache
spec:
  clusterIP: None  # headless service for peer discovery
  selector: { app: your-app }
  ports:
    - name: jcx-tcp
      port: 7600
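
Wiring the pieces together might look roughly like the sketch below. The class names come from the feature list above, but the constructor shapes are assumptions, so treat this as pseudocode against the real API:

// Hypothetical wiring; constructor signatures are assumptions.
KubernetesNodeDiscovery discovery = new KubernetesNodeDiscovery("jcachex-cache"); // headless service name
TcpCommunicationProtocol protocol = new TcpCommunicationProtocol(7600);           // port from the manifest
KubernetesDistributedCache<String, String> shared =
    new KubernetesDistributedCache<>(discovery, protocol);

shared.put("session:42", "state"); // propagated to peers per the cache's consistency settings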

Best practices: prefer small, bounded entries; set clear TTL/size; expose metrics; load‑test with realistic key distributions; and validate failure modes (pod churn, network partitions).

What sets JCacheX apart?

1) Profile‑driven performance, not configuration bloat

Instead of hundreds of dials, you pick an intent (read‑heavy, write‑heavy, zero‑copy, etc.). Profiles encapsulate tuned settings and internal data‑structure choices; you then tweak a few essentials (size, TTL) as needed, as in the sketch below.
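
For example, starting from a profile and overriding only the essentials (expireAfterSeconds as a builder method is an assumption that mirrors the YAML key shown earlier):

Cache<String, String> products = Cache.create(
    CacheConfig.builder()
      .applyProfile(CacheProfilesV3.READ_HEAVY())
      .maximumSize(50_000L)       // essential #1: bound memory
      .expireAfterSeconds(300)    // essential #2: TTL; assumed builder name mirroring the YAML key
      .build()
);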

2) Low‑latency, allocation‑aware core

Hot paths are optimized to reduce allocations and contention. In our internal benchmarks (JMH on modern hardware), the “ZeroCopy” profile delivered single‑digit‑nanosecond GETs for resident data. Your mileage will vary—measure on your hardware and workload.
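
To reproduce that kind of measurement on your own hardware, a minimal JMH harness might look like this (CacheProfilesV3.ZERO_COPY() is assumed by analogy with READ_HEAVY() in the QuickStart):

import io.github.dhruv1110.jcachex.Cache;
import io.github.dhruv1110.jcachex.CacheConfig;
import io.github.dhruv1110.jcachex.profiles.CacheProfilesV3;
import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class HotGetBenchmark {
  private Cache<String, String> cache;

  @Setup
  public void setup() {
    cache = Cache.create(CacheConfig.builder()
        .applyProfile(CacheProfilesV3.ZERO_COPY()) // assumed accessor, by analogy with READ_HEAVY()
        .maximumSize(10_000L)
        .build());
    cache.put("hot", "value"); // resident entry, so we measure the hot GET path
  }

  @Benchmark
  public String hotGet() {
    return cache.get("hot");
  }
}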

3) A complete production story

  • Observability: built‑in cache stats, Spring Actuator health/metrics, and management endpoints.
  • Operational safety: predictable eviction, bounded memory strategies, profile‑level controls.
  • Integration: Spring annotations, Kotlin ergonomics, and optional Kubernetes distribution.
4) How it compares

  • Caffeine: a fantastic in‑JVM cache. JCacheX leans into profile‑based tuning, explicit zero‑copy paths, and an optional Kubernetes distributed mode for simple in‑cluster sharing, which is handy if you want a single library to cover both local and small distributed use cases.
  • Redis: a great remote data store and cache. JCacheX is in‑process (and optionally in‑cluster), avoiding network hops for ultra‑low latency; it’s not a replacement for Redis as a shared, cross‑service datastore—think of JCacheX as your hot path, with Redis remaining your durable/shared layer where needed.
  • Guava: solid but aging caching utilities. JCacheX focuses on modern workloads: profiles, Kotlin, Spring, and deeper observability.

Real‑world tips

  • Pick the profile that matches your access pattern, then validate with load tests.
  • Use bounded sizes and appropriate TTLs. Monitor hit‑rate, eviction rate, and latency (a minimal reporter sketch follows this list).
  • For distributed mode, start small, keep entries compact, and treat the cluster as an optimization, not a source of truth.
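
A tiny scheduled reporter is often enough to start. This sketch reuses the stats() call from the QuickStart; the output format depends on the library, and richer metrics are available via the Spring Actuator integration:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Log cache statistics once a minute so hit-rate and eviction trends stay visible.
ScheduledExecutorService reporter = Executors.newSingleThreadScheduledExecutor();
reporter.scheduleAtFixedRate(
    () -> System.out.println("users cache: " + cache.stats()),
    1, 1, TimeUnit.MINUTES);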

Benchmark notes: All performance numbers are from our controlled JMH runs and are workload/hardware specific. Always benchmark with your data, cardinalities, and concurrency.
