# JAVA O(1) very easy solution using 3 HashMaps and LinkedHashSet

• ``````public class LFUCache {
    HashMap<Integer, Integer> vals;
    HashMap<Integer, Integer> counts;
    HashMap<Integer, LinkedHashSet<Integer>> lists;
    int cap;
    int min = -1;

    public LFUCache(int capacity) {
        cap = capacity;
        vals = new HashMap<>();
        counts = new HashMap<>();
        lists = new HashMap<>();
        lists.put(1, new LinkedHashSet<>());
    }

    public int get(int key) {
        if(!vals.containsKey(key))
            return -1;
        int count = counts.get(key);
        counts.put(key, count+1);
        lists.get(count).remove(key);
        if(count==min && lists.get(count).size()==0)
            min++;
        if(!lists.containsKey(count+1))
            lists.put(count+1, new LinkedHashSet<>());
        lists.get(count+1).add(key);
        return vals.get(key);
    }

    public void set(int key, int value) {
        if(cap<=0)
            return;
        if(vals.containsKey(key)) {
            vals.put(key, value);
            get(key);
            return;
        }
        if(vals.size() >= cap) {
            int evit = lists.get(min).iterator().next();
            lists.get(min).remove(evit);
            vals.remove(evit);
        }
        vals.put(key, value);
        counts.put(key, 1);
        min = 1;
        lists.get(1).add(key);
    }
}
``````

• Very compact solution!
My understanding:

1. The value to evict is the first element of the LinkedHashSet with the lowest count/frequency, i.e. the least recently used among the least frequently used.
2. min tracks the lowest frequency currently present in the cache.
3. lists maps each frequency to a group of keys; every element in the same group has the same count, kept in insertion (LRU) order.
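The eviction-order point above is exactly why a LinkedHashSet is used rather than a plain HashSet. A minimal sketch (class and method names are my own, not from the post) showing that the first element returned by the bucket's iterator is the oldest key at that frequency:

```java
import java.util.LinkedHashSet;

public class EvictionOrderDemo {
    // Returns the key the cache would evict from one frequency bucket:
    // LinkedHashSet preserves insertion order, so iterator().next() is the
    // least recently used key among those sharing the minimum frequency.
    public static int firstEvicted() {
        LinkedHashSet<Integer> bucket = new LinkedHashSet<>();
        bucket.add(3); // touched earliest at this frequency
        bucket.add(7);
        bucket.add(9); // touched latest at this frequency
        return bucket.iterator().next();
    }

    public static void main(String[] args) {
        System.out.println(firstEvicted()); // prints 3
    }
}
```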

• Nice and concise solution.
Just one tip.

for the code

``````if(count==min && lists.get(count).size()==0)
min++;
``````

you do not have to compare count with min: if lists.get(count).size() == 0, count must equal min, so you can just update min to count + 1.

Thank you for your solution. It helps me a lot.

• @shen5630 No, that's not true. There can be cases where a HashSet corresponding to a non-minimum count is empty. In that case, we cannot set the minimum to count + 1. One of the test cases fails, which corroborates this point.
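This objection can be made concrete with a small sketch (the class is my own illustration, not from the thread): a bucket above the minimum can become empty while the minimum bucket stays populated, so `count == min` must be checked before bumping `min`.

```java
import java.util.HashMap;
import java.util.LinkedHashSet;

public class NonMinBucketDemo {
    // Hypothetical trace: key 2 sits at count 1 (so min == 1) while key 1
    // sits at count 3. Touching key 1 empties bucket 3, but min must not
    // become 4 -- the emptiness test alone is not enough.
    public static boolean nonMinBucketCanBeEmpty() {
        HashMap<Integer, LinkedHashSet<Integer>> lists = new HashMap<>();
        int min = 1;
        lists.put(1, new LinkedHashSet<>());
        lists.put(3, new LinkedHashSet<>());
        lists.get(1).add(2); // key 2 has count 1, so min stays 1
        lists.get(3).add(1); // key 1 has count 3
        // get(1): key 1 moves from bucket 3 to bucket 4
        lists.get(3).remove(1);
        lists.computeIfAbsent(4, k -> new LinkedHashSet<>()).add(1);
        // bucket 3 is now empty, yet the true minimum is still 1
        return lists.get(3).isEmpty() && min == 1;
    }

    public static void main(String[] args) {
        System.out.println(nonMinBucketCanBeEmpty()); // prints true
    }
}
```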

• Thanks for the solution, very concise!

• Nice solution 😘

• Why is the frequency updated on put when we only replace the value for an existing key?

• @shen5630

That is not true, I agree with @arjun1610

@BogdanovMK

Because that key is being used again, so its frequency should be increased.

• @aaaeeeo

Nice solution!
Just one question: I think we should remove the key from the counts map as well when the cache is full and we evict, even though it won't affect the result. What do you think?
counts.remove(evit);

``````if(vals.size() >= cap) {
int evit = lists.get(min).iterator().next();
lists.get(min).remove(evit);
vals.remove(evit);
}
``````

• @jun711
You're right

• containsKey followed by get costs two hash lookups. You could get the value from the map once and compare it to null instead.
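A minimal sketch of the single-lookup pattern this comment suggests (class and method names are my own): both variants are O(1) on average for a HashMap, but the second hashes the key only once.

```java
import java.util.HashMap;

public class SingleLookupDemo {
    // Two hash lookups: containsKey and then get both hash the key.
    public static int getTwoLookups(HashMap<Integer, Integer> map, int key) {
        if (!map.containsKey(key)) return -1;
        return map.get(key);
    }

    // One hash lookup: get once, then null-check the boxed result.
    public static int getOneLookup(HashMap<Integer, Integer> map, int key) {
        Integer val = map.get(key);
        return val == null ? -1 : val;
    }

    public static void main(String[] args) {
        HashMap<Integer, Integer> map = new HashMap<>();
        map.put(1, 10);
        System.out.println(getOneLookup(map, 1)); // prints 10
        System.out.println(getOneLookup(map, 2)); // prints -1
    }
}
```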

• @aaaeeeo Great solution!

At first I didn't understand why we didn't need to care about nextMin and could just add 1 to min. Now it makes sense: min always resets to 1 when adding a new item, and min can never jump forward by more than 1, since updating an item only increments its count by 1.

``````public class LFUCache {

    private int min;

    private final int capacity;
    private final HashMap<Integer, Integer> keyToVal;
    private final HashMap<Integer, Integer> keyToCount;
    private final HashMap<Integer, LinkedHashSet<Integer>> countToLRUKeys;

    public LFUCache(int capacity) {
        this.min = -1;
        this.capacity = capacity;
        this.keyToVal = new HashMap<>();
        this.keyToCount = new HashMap<>();
        this.countToLRUKeys = new HashMap<>();
    }

    public int get(int key) {
        if (!keyToVal.containsKey(key)) return -1;

        int count = keyToCount.get(key);
        countToLRUKeys.get(count).remove(key); // remove key from current count (since we will inc count)
        if (count == min && countToLRUKeys.get(count).size() == 0) min++; // nothing in the current min bucket

        putCount(key, count + 1);
        return keyToVal.get(key);
    }

    public void put(int key, int value) {
        if (capacity <= 0) return;

        if (keyToVal.containsKey(key)) {
            keyToVal.put(key, value); // update key's value
            get(key); // update key's count
            return;
        }

        if (keyToVal.size() >= capacity)
            evict(countToLRUKeys.get(min).iterator().next()); // evict LRU from this min count bucket

        min = 1;
        putCount(key, min); // adding new key and count
        keyToVal.put(key, value); // adding new key and value
    }

    private void evict(int key) {
        countToLRUKeys.get(min).remove(key);
        keyToVal.remove(key);
    }

    private void putCount(int key, int count) {
        keyToCount.put(key, count);
        countToLRUKeys.computeIfAbsent(count, ignore -> new LinkedHashSet<>()).add(key);
    }
}
``````

• Thanks for your nice solution. Here is my Python solution following your idea.

``````from collections import OrderedDict

class LFUCache(object):
    def __init__(self, capacity):
        """
        :type capacity: int
        """
        self.capacity = capacity
        self.caches = {}
        self.cachesfre = {}
        self.frekey = {}
        self.minfre = -1

    def get(self, key):
        """
        :type key: int
        :rtype: int
        """
        if key not in self.caches:
            return -1
        fre = self.cachesfre[key]
        self.cachesfre[key] += 1
        del self.frekey[fre][key]
        if fre + 1 not in self.frekey:
            self.frekey[fre + 1] = OrderedDict()
        self.frekey[fre + 1][key] = -1
        if fre == self.minfre and len(self.frekey[fre]) == 0:
            self.minfre += 1
        return self.caches[key]

    def put(self, key, value):
        """
        :type key: int
        :type value: int
        :rtype: void
        """
        if self.capacity == 0:
            return
        if key in self.caches:
            self.caches[key] = value
            self.get(key)
        else:
            if len(self.caches) >= self.capacity:
                minkey = self.frekey[self.minfre].popitem(last=False)[0]
                del self.caches[minkey]
                del self.cachesfre[minkey]
            self.minfre = 1
            self.caches[key] = value
            self.cachesfre[key] = self.minfre
            if self.minfre not in self.frekey:
                self.frekey[self.minfre] = OrderedDict()
            self.frekey[self.minfre][key] = -1
``````

``````public class LFUCache {
    int capacity;
    int size = 0;
    int min = -1;
    Map<Integer, Integer> map;
    Map<Integer, Integer> counts;
    Map<Integer, LinkedHashSet<Integer>> lists;

    public LFUCache(int capacity) {
        map = new HashMap<>();
        counts = new HashMap<>();
        lists = new HashMap<>();
        lists.put(1, new LinkedHashSet<>());
        this.capacity = capacity;
    }

    public int get(int key) {
        if (!map.containsKey(key)) {
            return -1;
        }
        int val = map.get(key);
        int count = counts.get(key);

        counts.put(key, count + 1);
        lists.get(count).remove(key);

        if (min == count && lists.get(count).size() == 0) {
            min++;
        }

        if (!lists.containsKey(count + 1)) {
            lists.put(count + 1, new LinkedHashSet<>());
        }
        lists.get(count + 1).add(key);

        return val;
    }

    public void put(int key, int value) {
        if (capacity <= 0) return;

        if (map.containsKey(key)) {
            map.put(key, value);
            get(key);
            return;
        }

        if (size >= capacity) {
            int key_min = lists.get(min).iterator().next();
            lists.get(min).remove(key_min);
            map.remove(key_min);
            size--;
        }
        size++;
        map.put(key, value);
        counts.put(key, 1);
        min = 1;
        lists.get(1).add(key);
    }
}

/**
 * Your LFUCache object will be instantiated and called as such:
 * LFUCache obj = new LFUCache(capacity);
 * int param_1 = obj.get(key);
 * obj.put(key,value);
 */
``````

• @aaaeeeo
In the following block of code within the `set()` method, `counts` is not updated; I don't know why. In fact, after adding one line, `counts.remove(evit);`, the solution is still accepted.

``````if(vals.size() >= cap) {
int evit = lists.get(min).iterator().next();
lists.get(min).remove(evit);
vals.remove(evit);
}
``````

``````if(count==min && lists.get(count).size()==0)
    min++;
``````

Can someone help me understand this logic?
Why do we need to increment `min` in this case and what does it mean? Thanks.

• @fishercoder
I understood it now.
It means:

``````if (the count of this key equals the current minimumCount
    AND this count does not have any entries in the cache) {
    increment minimumCount by 1, so it points at the next LFU
    entry to evict when we need to.
}
``````


• @aaaeeeo the logic of this solution is very clear and easily understood. It has fewer lines of code relative to the 2-HashMap and doubly-linked-list solution. I wonder if this solution has any potential drawbacks regarding performance and resource overhead. Thanks.

• @aaaeeeo
Hi, with the following test, the `lists` map (of LinkedHashSets) will hold 1000 entries after calling set 1000 times. Can we do anything to keep the size of `lists` consistent with the capacity?

``````public static void main(String[] args) {
    LFUThreeHashMap lfu = new LFUThreeHashMap(5);
    for (int i = 0; i < 1000; i++) {
        lfu.set(1, 20);
    }
}``````
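One option, sketched below (my own addition, not from the original post), is to remove a frequency bucket from `lists` as soon as it becomes empty, so repeatedly touching the same key does not leave a trail of empty LinkedHashSets behind:

```java
import java.util.HashMap;
import java.util.LinkedHashSet;

public class BucketCleanupDemo {
    // Simulates one key being touched `touches` times: on each touch the key
    // moves from bucket `count` to bucket `count + 1`, and the now-empty
    // source bucket is removed so the map stays small.
    public static int bucketCountAfter(int touches) {
        HashMap<Integer, LinkedHashSet<Integer>> lists = new HashMap<>();
        lists.computeIfAbsent(1, k -> new LinkedHashSet<>()).add(42);
        for (int count = 1; count <= touches; count++) {
            LinkedHashSet<Integer> bucket = lists.get(count);
            bucket.remove(42);
            if (bucket.isEmpty()) lists.remove(count); // reclaim empty bucket
            lists.computeIfAbsent(count + 1, k -> new LinkedHashSet<>()).add(42);
        }
        return lists.size();
    }

    public static void main(String[] args) {
        // Without the cleanup the map would hold ~1000 entries; with it, 1.
        System.out.println(bucketCountAfter(1000)); // prints 1
    }
}
```

The extra `remove` does not change the O(1) bound, since each touch still does a constant number of hash-map operations.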
