Find Duplicate



  • 0

    We may not need to remove the '(' and ')' around the content, since they can be treated as part of the key as well (using "key" or "(key)" as the key gives the same grouping in this algorithm):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    
    public class Solution {
        public List<List<String>> findDuplicate(String[] paths) {
            List<List<String>> result = new ArrayList<>();

            // Map from "(content)" -> full paths of every file with that content.
            HashMap<String, List<String>> map = new HashMap<>();
            for (String path : paths) {
                String[] files = path.split(" ");
                // files[0] is the directory; files[1..] are "name(content)" tokens.
                for (int i = 1; i < files.length; i++) {
                    int p = files[i].indexOf('(');
                    String key = files[i].substring(p); // "(content)", parentheses included
                    if (!map.containsKey(key)) map.put(key, new ArrayList<>());
                    map.get(key).add(files[0] + "/" + files[i].substring(0, p));
                }
            }

            // Only groups with at least two files are actual duplicates.
            for (Map.Entry<String, List<String>> entry : map.entrySet()) {
                if (entry.getValue().size() > 1) result.add(entry.getValue());
            }
            return result;
        }
    }
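
    For reference, a tiny driver (the Demo class name and the sample paths are just illustrative) shows how the method groups files that share the same content; the order of the groups depends on HashMap iteration order:

    public class Demo {
        public static void main(String[] args) {
            // Two files contain "abcd" and two contain "efgh".
            String[] paths = {
                "root/a 1.txt(abcd) 2.txt(efgh)",
                "root/c 3.txt(abcd)",
                "root/c/d 4.txt(efgh)"
            };
            // Prints something like:
            // [[root/a/1.txt, root/c/3.txt], [root/a/2.txt, root/c/d/4.txt]]
            System.out.println(new Solution().findDuplicate(paths));
        }
    }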
    

  • 0

    To save space for really large files, I would store the hashCode of the string content as the key in the HashMap instead of the content itself. Key lookups would also be faster when only the hash is compared for really large files.
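
    A minimal sketch of that idea (the class name SolutionWithHashedKeys is made up for illustration), assuming String.hashCode() is an acceptable compact key; note that hash collisions could group non-duplicate files together, so a robust version would verify the actual contents before reporting a group:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class SolutionWithHashedKeys {
        public List<List<String>> findDuplicate(String[] paths) {
            // Key by the hash of "(content)" instead of the content itself,
            // so very large contents are not kept as map keys.
            Map<Integer, List<String>> map = new HashMap<>();
            for (String path : paths) {
                String[] files = path.split(" ");
                for (int i = 1; i < files.length; i++) {
                    int p = files[i].indexOf('(');
                    int key = files[i].substring(p).hashCode();
                    map.computeIfAbsent(key, k -> new ArrayList<>())
                       .add(files[0] + "/" + files[i].substring(0, p));
                }
            }

            List<List<String>> result = new ArrayList<>();
            for (List<String> group : map.values()) {
                // Caveat: a hash collision could merge files with different
                // contents into one group; re-checking contents would fix that.
                if (group.size() > 1) result.add(group);
            }
            return result;
        }
    }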

