I have two text files (TXT) which contain over 2 million distinct file names. I want to loop through all the names in the first file and find those that are also present in the second text file.
I have tried looping through a StreamReader line by line, but it takes a lot of time. I also tried the code below, but it still takes too much time.
StreamReader first = new StreamReader(path);
string strFirst = first.ReadToEnd();
string[] strarrFirst = strFirst.Split('\n');

bool found = false;
StreamReader second = new StreamReader(path2);
string str = second.ReadToEnd();
string[] strarrSecond = str.Split('\n');

// For every name in the first file, scan the entire second file.
for (int j = 0; j < strarrFirst.Length; j++)
{
    found = false;
    for (int i = 0; i < strarrSecond.Length; i++)
    {
        if (strarrFirst[j] == strarrSecond[i])
        {
            found = true;
            break;
        }
    }
    if (!found)
    {
        Console.WriteLine(strarrFirst[j]);
    }
}
What is a good way to compare the files?
How about this:
var commonNames = File.ReadLines(path).Intersect(File.ReadLines(path2));
That's O(N + M), instead of your current solution, which tests every line in the first file against every line in the second file - O(N * M).
That's assuming you're using .NET 4. Otherwise, you could use File.ReadAllLines, but that will read the whole file into memory. Or you could write the equivalent of File.ReadLines yourself - it's not terribly hard.
Ultimately you're likely to be limited by file IO by the time you've got rid of the O(N * M) problem in your current code - there's not much way to get round that.
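As a minimal sketch of consuming that one-liner (assuming using System.Linq; and .NET 4's File.ReadLines), and noting that your sample code actually prints the opposite set - names missing from the second file - for which Except is the mirror image of Intersect:

foreach (var name in commonNames)
{
    Console.WriteLine(name);
}

// Mirror image of Intersect: names in the first file that are
// *not* in the second, which is what your sample code printed.
var missingNames = File.ReadLines(path).Except(File.ReadLines(path2));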
EDIT: For .NET 2, first let's implement something like ReadLines:
// Streams the file one line at a time instead of loading it all into memory.
public static IEnumerable<string> ReadLines(string file)
{
    using (TextReader reader = File.OpenText(file))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            yield return line;
        }
    }
}
Now we really want to use a HashSet<T>, but that wasn't in .NET 2 - so let's use Dictionary<TKey, TValue> instead:
// Build a hash-based lookup from the first file; the value is unused.
Dictionary<string, string> map = new Dictionary<string, string>();
foreach (string line in ReadLines(path))
{
    map[line] = line;
}

// Each ContainsKey call is O(1) on average, so the whole pass is O(N + M).
List<string> intersection = new List<string>();
foreach (string line in ReadLines(path2))
{
    if (map.ContainsKey(line))
    {
        intersection.Add(line);
    }
}
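On .NET 3.5 or later the same idea reads more directly with HashSet<string>; a minimal sketch, reusing the ReadLines helper above:

// HashSet<string> replaces the Dictionary used as a makeshift set.
var set = new HashSet<string>(ReadLines(path));
var intersection = new List<string>();
foreach (string line in ReadLines(path2))
{
    if (set.Contains(line)) // O(1) average lookup
    {
        intersection.Add(line);
    }
}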
Try something like this to speed it up a bit ...
var path = string.Empty;
var path2 = string.Empty;
var strFirst = string.Empty;
var str = string.Empty;
var strarrFirst = new List<string>();
var strarrSecond = new List<string>();

using (var first = new StreamReader(path))
{
    strFirst = first.ReadToEnd();
}

using (var second = new StreamReader(path2))
{
    str = second.ReadToEnd();
}

strarrFirst.AddRange(strFirst.Split('\n'));
strarrSecond.AddRange(str.Split('\n'));

// Sort the second list once, then binary-search it for every name in the first.
strarrSecond.Sort();

foreach (var value in strarrFirst)
{
    var found = strarrSecond.BinarySearch(value) >= 0;
    if (!found) Console.WriteLine(value);
}
Just for fun, I've tried Jon Skeet's method and my own:
var guidArray = Enumerable.Range(0, 1000000).Select(x => Guid.NewGuid().ToString()).ToList();

string path = "first.txt";
File.WriteAllLines(path, guidArray);

// The second file keeps roughly half of the original names and replaces the rest.
string path2 = "second.txt";
File.WriteAllLines(path2, guidArray.Select(x => DateTime.UtcNow.Ticks % 2 == 0 ? x : Guid.NewGuid().ToString()));

// Jon Skeet's method: streaming Intersect.
var start = DateTime.Now;
var commonNames = File.ReadLines(path).Intersect(File.ReadLines(path2)).ToList();
Console.WriteLine((DateTime.Now - start).TotalMilliseconds);

// My method: explicit HashSet lookup.
start = DateTime.Now;
var lines = File.ReadAllLines(path);
var hashset = new HashSet<string>(lines);
var lines2 = File.ReadAllLines(path2);
var result = lines2.Where(hashset.Contains).ToList();
Console.WriteLine((DateTime.Now - start).TotalMilliseconds);

Console.ReadKey();
And Skeet's method was a tiny bit faster (1453.0831 ms vs 1488.0851 ms; iDevForFun's sort-and-binary-search method was quite slow at 12791.7316 ms), so I think the same thing must be happening under the layers as what I was doing manually with the HashSet - and the binary-search variant loses because it pays O(M log M) for the sort plus a string comparison per probe, instead of O(1) hash lookups.
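One caveat on the timings: DateTime.Now has fairly coarse resolution, so Stopwatch is the usual tool for this kind of measurement. A minimal sketch of timing the Intersect variant with Stopwatch, using the same paths as above:

var sw = System.Diagnostics.Stopwatch.StartNew();
var commonNames = File.ReadLines(path).Intersect(File.ReadLines(path2)).ToList();
sw.Stop();
Console.WriteLine(sw.Elapsed.TotalMilliseconds);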