Here's the relevant code:
import System.IO;

var textAsset : TextAsset;
var lineCounter : int = 0;

function Start() {
    readHugeFile();
}

function readHugeFile() {
    var startTimer : float = Time.realtimeSinceStartup;
    if (textAsset == null) return;

    // Read the asset's text line by line, counting the lines.
    var reader : StringReader = new StringReader(textAsset.text);
    var line : String = reader.ReadLine();
    while (line != null) {
        lineCounter++;
        line = reader.ReadLine();
    }

    var timeExpired : float = Time.realtimeSinceStartup - startTimer;
    print("Time Expired: " + timeExpired);
    print("Number of Lines: " + lineCounter);
}
OK, point by point:
I'm not looking for alternatives to reading these files — I have to read them. Smarter code for doing the reading, though, would be super welcome!
This code is just too darn SLOW once the number of lines grows, and it scales weirdly: the time taken isn't linear in the number of lines in the file.
Some random sampling of text files of different sizes.
(Edited: the starred column shows the results after the modification suggested below.)
Number of Lines | Lines Read / second | *Lines Read / second
         57,012 |                 251 |              518,000
         54,818 |                 245 |              550,000
         41,239 |                 359 |              412,000
         38,138 |                 385 |              545,000
         32,305 |                 447 |              538,000
         25,908 |                 551 |              518,000
         21,098 |                 681 |              527,000
         17,668 |                 803 |              589,000
         16,038 |                 891 |              535,000
         13,118 |                1161 |              656,000
         11,616 |                1263 |              581,000
          2,293 |                6551 |              573,000
WTH?
Thanks for any answers, pointers, or directions for better code. Am I doing something silly here?
*Notes: The large variance in the numbers in the far-right column is a result of differences in actual line lengths across the files, rounding error, and my own poor use of significant digits.
With the largest file tested here, the suggested code change represents roughly a 2000X speed increase!
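For reference, the kind of change that produces numbers like the starred column is usually to drop the per-line `ReadLine` calls and split the whole text once. This is only a sketch of that idea, not necessarily the exact modification behind the table above; the function name `readHugeFileFast` is illustrative and not from the original post.

```
var textAsset : TextAsset;

function readHugeFileFast() {
    if (textAsset == null) return;
    var startTimer : float = Time.realtimeSinceStartup;

    // Split makes a single pass over the string and returns one array
    // entry per line, so the total cost stays linear in the text size.
    // "\n"[0] is the UnityScript idiom for the newline character.
    var lines : String[] = textAsset.text.Split("\n"[0]);

    var timeExpired : float = Time.realtimeSinceStartup - startTimer;
    print("Time Expired: " + timeExpired);
    print("Number of Lines: " + lines.Length);
}
```

The trade-off is memory: `Split` materializes every line up front instead of streaming them, which is fine for counting but worth keeping in mind if the files get much larger.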