TL;DR: JCacheX is a high‑performance Java caching library that gets you production‑ready results fast: profile‑based tuning (READ_HEAVY, WRITE_HEAVY, HIGH_PERFORMANCE, ZERO_COPY, MEMORY_EFFICIENT), rich eviction strategies, first‑class Spring Boot integration, Kotlin extensions, detailed stats, and optional Kubernetes‑native distributed caching. It’s designed for low latency, clarity, and real‑world operability.
Teams often face a trade‑off: simple caches that struggle in production vs. complex setups that take weeks to tune. JCacheX aims to be both fast and pragmatic:
- Core cache API: put / get / remove / clear, collections view, stats.
- Spring Boot: @JCacheXCacheable, @JCacheXCacheEvict, @JCacheXCachePut, and @JCacheXCaching annotations, plus auto‑config, health, and metrics.
- Distributed caching: KubernetesDistributedCache, KubernetesNodeDiscovery, TcpCommunicationProtocol.

Use the latest version from the releases page.
<!-- Core JCacheX library -->
<dependency>
    <groupId>io.github.dhruv1110</groupId>
    <artifactId>jcachex-core</artifactId>
    <version>2.0.1</version>
</dependency>

<!-- Spring Boot integration (optional) -->
<dependency>
    <groupId>io.github.dhruv1110</groupId>
    <artifactId>jcachex-spring</artifactId>
    <version>2.0.1</version>
</dependency>

<!-- Kotlin extensions (optional) -->
<dependency>
    <groupId>io.github.dhruv1110</groupId>
    <artifactId>jcachex-kotlin</artifactId>
    <version>2.0.1</version>
</dependency>
dependencies {
    implementation 'io.github.dhruv1110:jcachex-core:2.0.1'
    implementation 'io.github.dhruv1110:jcachex-spring:2.0.1' // optional
    implementation 'io.github.dhruv1110:jcachex-kotlin:2.0.1' // optional
}
import io.github.dhruv1110.jcachex.Cache;
import io.github.dhruv1110.jcachex.CacheConfig;
import io.github.dhruv1110.jcachex.profiles.CacheProfilesV3;

public class QuickStart {
    public static void main(String[] args) {
        Cache<String, String> cache = Cache.create(
            CacheConfig.builder()
                .applyProfile(CacheProfilesV3.READ_HEAVY()) // pick a tuned profile
                .maximumSize(10_000L)
                .recordStats(true)
                .build()
        );

        cache.put("k", "v");
        String v = cache.get("k"); // low-latency GET
        cache.remove("k");
        cache.clear();

        System.out.println(cache.stats()); // hit/miss counts, rates, etc.
    }
}
Enable caching and use the annotations—auto‑configuration wires it all up:
@SpringBootApplication
@EnableCaching
public class DemoApplication { }

@Service
public class UserService {

    @JCacheXCacheable(cacheName = "users", profile = "READ_HEAVY")
    public User findById(Long id) { /* ... */ }

    @JCacheXCacheEvict(cacheName = "users", allEntries = true)
    public void evictAllUsers() { /* ... */ }
}
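The feature list above also mentions @JCacheXCachePut and @JCacheXCaching. Here is a minimal sketch of @JCacheXCachePut, assuming it mirrors Spring's @CachePut semantics; the SpEL key attribute is an assumption, not a confirmed part of the annotation:

@Service
public class UserWriteService {

    // Assumed usage: refresh the "users" entry with the method's return value.
    // The key attribute is hypothetical; check the annotation's actual fields.
    @JCacheXCachePut(cacheName = "users", key = "#user.id")
    public User update(User user) { /* persist, then return */ return user; }
}

@JCacheXCaching presumably groups several of these annotations on one method, analogous to Spring's @Caching. Cache definitions themselves go in your Spring configuration (for example application.yml):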
jcachex:
  enabled: true
  autoCreateCaches: true
  default:
    maximumSize: 1000
    expireAfterSeconds: 600
    enableStatistics: true
  caches:
    users:
      profile: READ_HEAVY
      maximumSize: 1000
      expireAfterSeconds: 300
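Because @EnableCaching is on and auto‑configuration registers the declared caches, they should also be reachable through Spring's standard CacheManager abstraction. A minimal sketch, assuming the users cache from the YAML above exists (autoCreateCaches: true):

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class CacheInspector {

    private final CacheManager cacheManager;

    public CacheInspector(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    public void peek() {
        // Spring's org.springframework.cache.Cache, not JCacheX's Cache type.
        Cache users = cacheManager.getCache("users"); // "users" as declared in YAML
        users.put("42", "cached-user");
        Cache.ValueWrapper hit = users.get("42");     // null on a miss
        System.out.println(hit != null ? hit.get() : "miss");
    }
}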
import io.github.dhruv1110.jcachex.kotlin.*

val cache = jcachex {
    profile = Profiles.READ_HEAVY
    maximumSize = 10_000
    stats = true
}

cache["k"] = "v"
val v = cache["k"] // operator get
val value = cache.getOrPut("miss") { computeExpensive() }
cache.stats.prettyPrint()
For services that need a shared, in‑cluster cache:
- KubernetesNodeDiscovery finds peers via the API server.
- TcpCommunicationProtocol ships updates efficiently.
- KubernetesDistributedCache coordinates membership and consistency trade‑offs.

# Example K8s manifests (excerpt)
apiVersion: v1
kind: Service
metadata:
  name: jcachex-cache
spec:
  clusterIP: None # headless service for peer discovery
  selector: { app: your-app }
  ports:
    - name: jcx-tcp
      port: 7600
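The manifest only covers discovery; how the three classes compose in code depends on constructors and builders the library defines. Purely as an illustration, with hypothetical methods (none of these signatures are confirmed):

// Illustrative only: these constructors and builder methods are
// hypothetical placeholders, not the confirmed JCacheX API.
KubernetesNodeDiscovery discovery =
        new KubernetesNodeDiscovery("jcachex-cache");   // headless Service from the manifest
TcpCommunicationProtocol protocol =
        new TcpCommunicationProtocol(7600);             // the jcx-tcp port above
Cache<String, String> shared =
        KubernetesDistributedCache.<String, String>builder()
                .nodeDiscovery(discovery)
                .communicationProtocol(protocol)
                .build();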
Best practices: prefer small, bounded entries; set clear TTL/size; expose metrics; load‑test with realistic key distributions; and validate failure modes (pod churn, network partitions).
Instead of hundreds of dials, you pick an intent (read‑heavy, write‑heavy, zero‑copy, etc.). Profiles encapsulate tuned settings and internal data‑structure choices—then you tweak a few essentials (size, TTL) as needed.
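A sketch of what that looks like in practice: swapping intent is one line, and the only other knobs touched are size and TTL. (Order is a placeholder type, and the expireAfterWrite setter is assumed here; it may be named differently in the real builder.)

import java.time.Duration;

// Same builder as the quick start; only the profile and two essentials change.
Cache<String, Order> orders = Cache.create(
        CacheConfig.builder()
                .applyProfile(CacheProfilesV3.WRITE_HEAVY()) // write-heavy intent
                .maximumSize(5_000L)
                .expireAfterWrite(Duration.ofMinutes(5))     // assumed TTL setter
                .build()
);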
Hot paths are optimized to reduce allocations and contention. In our internal benchmarks (JMH on modern hardware), the “ZeroCopy” profile delivered single‑digit‑nanosecond GETs for resident data. Your mileage will vary—measure on your hardware and workload.
Benchmark notes: All performance numbers are from our controlled JMH runs and are workload/hardware specific. Always benchmark with your data, cardinalities, and concurrency.
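For doing exactly that, here is a minimal JMH harness using the jcachex-core API from the quick start:

import io.github.dhruv1110.jcachex.Cache;
import io.github.dhruv1110.jcachex.CacheConfig;
import io.github.dhruv1110.jcachex.profiles.CacheProfilesV3;
import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class CacheGetBenchmark {

    private Cache<String, String> cache;

    @Setup
    public void setup() {
        cache = Cache.create(
                CacheConfig.builder()
                        .applyProfile(CacheProfilesV3.READ_HEAVY())
                        .maximumSize(10_000L)
                        .build()
        );
        cache.put("k", "v"); // resident entry, so every GET below is a hit
    }

    @Benchmark
    public String get() {
        return cache.get("k"); // the hot-path GET being measured
    }
}

Run it through the standard JMH runner, then extend it with your own key distributions and concurrency levels rather than a single hot key.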