JAVA - Easy Version To Understand


  • 58
    H

    1. The key to solving this problem is using a doubly linked list, which lets us move nodes quickly.
    2. The LRU cache is a hash table of keys to doubly linked nodes. The hash table makes get() O(1), and the doubly linked list makes adding and removing nodes O(1). (A short usage example follows the code below.)

    import java.util.HashMap;

    class Node {
        int key;
        int value;
        Node pre;
        Node next;

        public Node(int key, int value) {
            this.key = key;
            this.value = value;
        }
    }

    public class LRUCache {

        HashMap<Integer, Node> map;   // key -> node, for O(1) lookup
        int capacity, count;
        Node head, tail;              // dummy sentinels; real nodes live between them

        public LRUCache(int capacity) {
            this.capacity = capacity;
            map = new HashMap<>();
            head = new Node(0, 0);
            tail = new Node(0, 0);
            head.next = tail;
            tail.pre = head;
            head.pre = null;
            tail.next = null;
            count = 0;
        }

        // Unlink a node from the list in O(1).
        public void deleteNode(Node node) {
            node.pre.next = node.next;
            node.next.pre = node.pre;
        }

        // Insert a node right after the dummy head (most recently used position).
        public void addToHead(Node node) {
            node.next = head.next;
            node.next.pre = node;
            node.pre = head;
            head.next = node;
        }

        public int get(int key) {
            if (map.get(key) != null) {
                Node node = map.get(key);
                int result = node.value;
                // Move the accessed node to the front.
                deleteNode(node);
                addToHead(node);
                return result;
            }
            return -1;
        }

        public void set(int key, int value) {
            if (map.get(key) != null) {
                // Key already exists: update its value and move it to the front.
                Node node = map.get(key);
                node.value = value;
                deleteNode(node);
                addToHead(node);
            } else {
                Node node = new Node(key, value);
                map.put(key, node);
                if (count < capacity) {
                    count++;
                    addToHead(node);
                } else {
                    // Cache is full: evict the least recently used node,
                    // which sits just before the dummy tail.
                    map.remove(tail.pre.key);
                    deleteNode(tail.pre);
                    addToHead(node);
                }
            }
        }
    }
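
    A quick usage sketch (illustrative only) of how the cache above behaves with a capacity of 2:

    LRUCache cache = new LRUCache(2);
    cache.set(1, 1);
    cache.set(2, 2);
    cache.get(1);                      // returns 1 and marks key 1 as most recently used
    cache.set(3, 3);                   // evicts key 2, the least recently used entry
    System.out.println(cache.get(2));  // -1 (evicted)
    System.out.println(cache.get(1));  // 1
    System.out.println(cache.get(3));  // 3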


  • 8
    G

    Great solution. One thing that could be improved is to use map.size() directly for the capacity check instead of keeping a separate counter of the nodes.
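
    For illustration, the set() method could then look roughly like this (a sketch of the suggested change, not the original author's code):

    // Sketch: rely on map.size() for the capacity check instead of a separate count field.
    public void set(int key, int value) {
        if (map.containsKey(key)) {
            Node node = map.get(key);
            node.value = value;
            deleteNode(node);
            addToHead(node);
            return;
        }
        if (map.size() >= capacity) {
            // Evict the least recently used entry before inserting the new one.
            map.remove(tail.pre.key);
            deleteNode(tail.pre);
        }
        Node node = new Node(key, value);
        map.put(key, node);
        addToHead(node);
    }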


  • 0
    A

    Could you explain your solution, please? What does new Node(0, 0) do? Is that some kind of dummy? Some more explanation of the various methods would be nice. Thanks.


  • 3
    D

    @aoben10 That's a dummy head and a dummy tail


  • 0
    I

    I think the question should also state that the key cannot be 0; otherwise this would fail, since the keys of both the dummy head and tail are 0. And I'm still confused: once the <key, node> pair has been removed from the HashMap, how can that node still be linked to others? Wouldn't it be collected by the garbage collector? Thanks.


  • 2
    R

    @icezhou0784 You don't have to worry about other keys being 0, since head and tail are never added to the map; they're only dummy nodes used for the linked list operations.
    When you remove a pair from a map, you are only removing the map's references to the key and value objects; the key and value objects are only collected at some later point if there are no other references to them. (It helps to think of these references as C pointers.)
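
    A tiny standalone sketch of that point (the MapReferenceDemo class here is hypothetical, purely for illustration):

    import java.util.HashMap;

    public class MapReferenceDemo {
        public static void main(String[] args) {
            HashMap<Integer, StringBuilder> map = new HashMap<>();
            StringBuilder node = new StringBuilder("payload");
            map.put(1, node);

            map.remove(1);                // drops only the map's reference to the object
            System.out.println(node);     // prints "payload" -- still reachable via `node`
            // The object becomes eligible for garbage collection only once no
            // references to it remain (here, when `node` goes out of scope).
        }
    }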


  • 0
    N

    When I run this answer, it shows a "runtime error". Could anyone explain why?


  • 0
    C

    @nsbdsxh It could be because the method "set" was recently renamed to "put".
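
    If that is the cause, one minimal fix is a thin wrapper added to the LRUCache class above (a sketch; it assumes the original set method is kept unchanged):

    // Hypothetical adapter for the current LeetCode interface, which calls put()
    // instead of set(); it simply delegates to the original method.
    public void put(int key, int value) {
        set(key, value);
    }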


  • 0

    Very concise solution. However, I felt a bit uncomfortable using the two (0, 0) nodes as head and tail, because they don't correspond to real entries. So I modified your code to use ordinary head and tail nodes instead of predefined dummies. Hope it helps.

    class Node {
        private int key;
        private int value;
        Node pre, next;

        public Node(int key, int value) {
            this.key = key;
            this.value = value;
            this.pre = null;
            this.next = null;
        }
    }

    HashMap<Integer, Node> map;
    Node head, tail;      // real head and tail nodes, no dummies
    int capacity, count;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.count = 0;
        this.map = new HashMap<>();
        this.head = null;
        this.tail = null;
    }

    // Unlink a node, updating head/tail when the node sits at either end.
    public void deleteNode(Node node) {
        if (node == head && node == tail) {
            head = tail = null;
        } else if (node == head) {
            head = head.next;
        } else if (node == tail) {
            tail = node.pre;
            tail.next = null;
        } else {
            node.pre.next = node.next;
            node.next.pre = node.pre;
        }
    }

    // Insert a node at the front (most recently used position).
    public void addToHead(Node node) {
        if (head == null && tail == null) {
            head = tail = node;
            return;
        }
        node.next = head;
        node.next.pre = node;
        node.pre = null;
        head = node;
    }

    public int get(int key) {
        if (map.get(key) == null) return -1;
        Node node = map.get(key);
        int res = node.value;
        deleteNode(node);
        addToHead(node);
        return res;
    }

    public void put(int key, int value) {
        if (map.get(key) != null) {
            Node node = map.get(key);
            node.value = value;
            deleteNode(node);
            addToHead(node);
        } else {
            Node node = new Node(key, value);
            map.put(key, node);
            if (count < capacity) {
                count++;
                addToHead(node);
            } else {
                // Evict the least recently used node (the real tail).
                map.remove(tail.key);
                deleteNode(tail);
                addToHead(node);
            }
        }
    }

  • 0
    D

    The solution doesn't seem to meet the requirements of an LRU cache. Order matters, and a plain HashMap does not order entries according to usage.

