Class ThreadedLRUCacheStrategy<K,V>
- Type Parameters:
  K - the type of keys maintained by this cache
  V - the type of mapped values
- All Implemented Interfaces:
  Closeable, AutoCloseable, Map<K,V>
It implements the Map interface for convenience.
Algorithm: This implementation uses a zone-based eviction strategy with sample-15 approximate LRU:
- Zone A (0 to capacity): Normal operation, no eviction needed
- Zone B (capacity to 1.5x): Background cleanup brings cache back to capacity
- Zone C (1.5x to 2x): Probabilistic inline eviction (probability increases as size approaches 2x)
- Zone D (2x+): Hard cap - evict before insert to maintain bounded memory
Sample-15 Eviction: Instead of sorting all entries (O(n log n)), we sample 15 random entries and evict the oldest one. This provides ~99% accuracy compared to true LRU (based on Redis research) with O(1) cost.
Memory Guarantee: The cache will never exceed 2x the specified capacity, allowing users to size their cache with predictable worst-case memory usage.
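The sample-15 selection can be sketched as follows. This is a simplified, single-map illustration, not the library's actual code; the `lastAccess` bookkeeping map and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

public class SampledEviction {
    static final int SAMPLE_SIZE = 15;

    // Pick up to SAMPLE_SIZE random entries and return the key with the
    // oldest last-access timestamp: the approximate-LRU eviction victim.
    static <K> K pickVictim(Map<K, Long> lastAccess) {
        if (lastAccess.isEmpty()) {
            return null;
        }
        // Snapshot of the keys; a real implementation would avoid this O(n) copy.
        List<K> keys = new ArrayList<>(lastAccess.keySet());
        K victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < SAMPLE_SIZE; i++) {
            K candidate = keys.get(ThreadLocalRandom.current().nextInt(keys.size()));
            Long t = lastAccess.get(candidate);
            if (t != null && t < oldest) {
                oldest = t;
                victim = candidate;
            }
        }
        return victim;
    }
}
```

Sampling a fixed 15 entries makes each eviction constant cost regardless of cache size; the victim is not guaranteed to be the global LRU entry, only very likely to be among the oldest.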
The Threaded strategy allows for O(1) access for get(), put(), and remove() without blocking in the common case.
It uses ConcurrentHashMapNullSafe internally for null key/value support.
LRUCache supports null for both key and value.
Architecture: All ThreadedLRUCacheStrategy instances share a single cleanup thread that runs every 500ms. Each cache registers itself via a WeakReference, allowing garbage collection of unused caches.
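The shared-scheduler registration described above can be sketched like this (a simplified stand-in; the field and method names here are assumptions, not the library's actual internals):

```java
import java.lang.ref.WeakReference;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SharedCleanup {
    // Single registry shared by every cache instance; each cache is held
    // only weakly, so an otherwise-unreferenced cache can be collected.
    static final List<WeakReference<Runnable>> CACHES = new CopyOnWriteArrayList<>();

    static void register(Runnable cacheCleanupTask) {
        CACHES.add(new WeakReference<>(cacheCleanupTask));
    }

    // Body of the single scheduled task (the real one fires every 500ms):
    static void tick() {
        for (WeakReference<Runnable> ref : CACHES) {
            Runnable cleanup = ref.get();
            if (cleanup == null) {
                CACHES.remove(ref);   // cache was garbage collected; drop its slot
            } else {
                cleanup.run();        // trim this cache back toward capacity
            }
        }
    }
}
```

CopyOnWriteArrayList iterates over a snapshot, so removing cleared references during the tick is safe.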
- Author:
- John DeRegnaucourt (jdereg@gmail.com)
Copyright (c) Cedar Software LLC
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Nested Class Summary

Constructor Summary
Constructors:
- ThreadedLRUCacheStrategy(int capacity)
  Create a ThreadedLRUCacheStrategy with the specified capacity.
- ThreadedLRUCacheStrategy(int capacity, int cleanupDelayMillis)
  Deprecated.
Method Summary
- void clear()
- void close()
- V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction)
- boolean containsKey(Object key)
- boolean containsValue(Object value)
- Set<Map.Entry<K,V>> entrySet()
- boolean equals(Object obj)
- void forceCleanup()
  Forces an immediate cleanup of this cache (for testing).
- V get(Object key)
- int getCapacity()
- int hashCode()
- boolean isEmpty()
- Set<K> keySet()
- V put(K key, V value)
  Associates the specified value with the specified key in this cache.
- void putAll(Map<? extends K, ? extends V> m)
- V putIfAbsent(K key, V value)
- V remove(Object key)
- void shutdown()
  Shuts down this cache, removing it from the shared cleanup task.
- static boolean shutdownScheduler()
  Shuts down the shared cleanup scheduler used by all ThreadedLRUCacheStrategy instances.
- int size()
- String toString()
- Collection<V> values()

Methods inherited from class java.lang.Object:
clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Methods inherited from interface java.util.Map:
compute, computeIfPresent, forEach, getOrDefault, merge, remove, replace, replace, replaceAll
Constructor Details
ThreadedLRUCacheStrategy
public ThreadedLRUCacheStrategy(int capacity)
Create a ThreadedLRUCacheStrategy with the specified capacity. The cache uses a zone-based eviction strategy:
- Up to 1.5x capacity: Background cleanup only
- 1.5x to 2x capacity: Probabilistic inline eviction
- At 2x capacity: Hard cap with evict-before-insert
Memory usage is guaranteed to never exceed 2x the specified capacity.
- Parameters:
  capacity - int maximum size for the LRU cache.
- Throws:
  IllegalArgumentException - if capacity is less than 1
ThreadedLRUCacheStrategy
public ThreadedLRUCacheStrategy(int capacity, int cleanupDelayMillis)
Deprecated. Use ThreadedLRUCacheStrategy(int) instead.
Create a ThreadedLRUCacheStrategy with the specified capacity. Note: The cleanupDelayMillis parameter is deprecated and ignored.
- Parameters:
  capacity - int maximum size for the LRU cache.
  cleanupDelayMillis - ignored (formerly: milliseconds before scheduling cleanup)
Method Details
shutdown
public void shutdown()
Shuts down this cache, removing it from the shared cleanup task.
forceCleanup
public void forceCleanup()
Forces an immediate cleanup of this cache (for testing).
shutdownScheduler
public static boolean shutdownScheduler()
Shuts down the shared cleanup scheduler used by all ThreadedLRUCacheStrategy instances.
- Returns:
  true if the scheduler terminated cleanly, false if it timed out or was interrupted
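That return-value contract matches the standard executor shutdown idiom, sketched below. This is an illustration of the contract, not the library's code, and the timeout is an arbitrary choice by the caller:

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulerShutdown {
    // Returns true only if the scheduler terminated cleanly within the
    // timeout; false on timeout or interruption, matching the contract above.
    static boolean shutdownCleanly(ScheduledExecutorService scheduler, long timeoutMillis) {
        scheduler.shutdown();   // accept no new tasks; let running ticks finish
        try {
            return scheduler.awaitTermination(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // preserve interrupt status
            return false;                         // interruption counts as failure
        }
    }
}
```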
close
public void close()
- Specified by:
  close in interface AutoCloseable
- Specified by:
  close in interface Closeable
getCapacity
public int getCapacity()
- Returns:
  the maximum number of entries in the cache.
get
public V get(Object key)
put
public V put(K key, V value)
Associates the specified value with the specified key in this cache.
Zone-based eviction:
- Zone A/B (0 to 1.5x): Insert and return immediately
- Zone C (1.5x to 2x): Insert, then probabilistically evict
- Zone D (2x+): Insert, then evict until under hard cap
Note: We insert first, then enforce limits. This avoids the TOCTOU (time-of-check to time-of-use) race where multiple threads check size, all see "under limit", then all insert. By checking after insert and looping until under hardCap, we guarantee the hard cap is enforced.
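The insert-first ordering can be sketched as follows. This is a simplified, String-keyed stand-in, and eviction here removes an arbitrary entry rather than performing the sample-15 selection:

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class InsertThenEvict {
    final ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
    final int capacity;
    final int hardCap;   // 2x capacity: the absolute memory bound

    InsertThenEvict(int capacity) {
        this.capacity = capacity;
        this.hardCap = 2 * capacity;
    }

    String put(String key, String value) {
        String prior = map.put(key, value);   // insert FIRST: no check-then-act race
        int size = map.size();
        if (size > hardCap) {
            // Zone D: loop until back under the hard cap.
            while (map.size() > hardCap) {
                evictOne();
            }
        } else if (size > capacity * 1.5) {
            // Zone C: eviction probability rises from 0 toward 1 as size nears hardCap.
            double p = (size - capacity * 1.5) / (capacity * 0.5);
            if (ThreadLocalRandom.current().nextDouble() < p) {
                evictOne();
            }
        }
        return prior;
    }

    private void evictOne() {
        Iterator<String> it = map.keySet().iterator();
        if (it.hasNext()) {
            map.remove(it.next());
        }
    }
}
```

Because every put enforces the limit after inserting, concurrent writers may each run the Zone D loop, but the loop's exit condition ensures the map ends under hardCap.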
putAll
public void putAll(Map<? extends K, ? extends V> m)
isEmpty
public boolean isEmpty()
remove
public V remove(Object key)
computeIfAbsent
public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction)
- Specified by:
  computeIfAbsent in interface Map<K,V>
putIfAbsent
public V putIfAbsent(K key, V value)
- Specified by:
  putIfAbsent in interface Map<K,V>
clear
public void clear()
size
public int size()
containsKey
public boolean containsKey(Object key)
- Specified by:
  containsKey in interface Map<K,V>
containsValue
public boolean containsValue(Object value)
- Specified by:
  containsValue in interface Map<K,V>
entrySet
public Set<Map.Entry<K,V>> entrySet()
keySet
public Set<K> keySet()
values
public Collection<V> values()
equals
public boolean equals(Object obj)
hashCode
public int hashCode()
toString
public String toString()