Just want to share my humble opinion for discussion.
If anyone has a better solution, I'd appreciate it if you could correct and enlighten me :-)
In a real-world file system, we usually store a large file in multiple "chunks" (in GFS, one chunk is 64 MB), so we keep metadata recording the file size, the file name, and the index of each chunk, along with each chunk's checksum (e.g., the XOR of the chunk's content).
So when we upload a file, we record the metadata mentioned above.
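Here is a minimal sketch of what such a metadata record might look like (the `FileMeta` name, the `xor_checksum` helper, and the 64 MB chunk size are just my assumptions for illustration, not the real GFS schema):

```python
from dataclasses import dataclass
from typing import List

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB per chunk, as in GFS

def xor_checksum(chunk: bytes) -> int:
    # XOR all bytes of the chunk together: a deliberately simple
    # (and collision-prone) checksum, as described above.
    result = 0
    for b in chunk:
        result ^= b
    return result

@dataclass
class FileMeta:
    name: str
    size: int                   # total file size in bytes
    chunk_checksums: List[int]  # checksum of chunk 0, 1, 2, ...

def build_meta(name: str, path: str) -> FileMeta:
    # Read the file chunk by chunk once at upload time and
    # record size + per-chunk checksums in the metadata.
    checksums = []
    size = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            size += len(chunk)
            checksums.append(xor_checksum(chunk))
    return FileMeta(name=name, size=size, chunk_checksums=checksums)
```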
When we need to check for duplicates, we can simply compare the metadata (a sketch follows this list):
1. Check whether the two files have the same size.
2. If step 1 passes, compare the first chunks' checksums.
3. If step 2 passes, compare the second chunks' checksums,
and so on.
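A sketch of this early-exit comparison, reusing the hypothetical `FileMeta` from above:

```python
def probably_duplicates(a: FileMeta, b: FileMeta) -> bool:
    # Step 1: files of different sizes can never be duplicates.
    if a.size != b.size:
        return False
    # Steps 2, 3, ...: compare checksums chunk by chunk,
    # bailing out at the first mismatch.
    for ca, cb in zip(a.chunk_checksums, b.chunk_checksums):
        if ca != cb:
            return False
    # All checksums match: a *probable* duplicate (see below).
    return True
```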
There might be false positives, because two different files can share the same checksum.
This way, we only read the metadata, which is on the order of kilobytes, instead of the entire file.
Checksums let us quickly and reliably rule out the non-duplicate files. But to eliminate false positives entirely, we still need to compare the actual content chunk by chunk whenever two files look like "duplicates" by checksum.
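A sketch of that final byte-for-byte verification, again assuming the `CHUNK_SIZE` defined above; it only runs on the (rare) pairs that survive the checksum filter:

```python
def confirm_duplicates(path_a: str, path_b: str) -> bool:
    # Byte-for-byte comparison of the two candidate files,
    # reading one chunk at a time to keep memory use bounded.
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            chunk_a = fa.read(CHUNK_SIZE)
            chunk_b = fb.read(CHUNK_SIZE)
            if chunk_a != chunk_b:
                return False
            if not chunk_a:  # both files exhausted at the same point
                return True
```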
I'm sorry to hear that. My English is not good, but I like coding and sharing my solutions. I think the most important thing is to share my idea and code, and using Chinese helps me express them precisely. I will try my best to describe my solution in English. Thanks for the reminder~