I have a bit of standard WWW code:
// Start a download of the given URL
www = new WWW(sURL);
// wait until the download is done
while( !www.isDone || www.error != null )
{
    progressMessage += www.progress + " ";
    yield;
}
if( www.error != null )
{
    progressMessage += "\nError" + www.error;
return;
}
Surfing to http://google.com prints out the attached progress message and a final progress value of 17.66516!
It looks to me like you should change the code in your while loop to something like:
while( !www.isDone && www.error == null )
That way it will break if there’s an error or if it’s done. The way you have it now, if there’s an error it will get stuck in an infinite loop. I don’t know if that solves your real problem at all, but, just thought it was worth pointing out.
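To see why the || version hangs on errors: Unity also sets isDone when a download fails, so `!isDone || error != null` stays true forever once error is set. Here's a quick Python stand-in (FakeWWW and poll are made-up stubs for illustration, not Unity APIs) that runs both conditions against a fake download:

```python
class FakeWWW:
    """Stand-in for Unity's WWW object (hypothetical, illustration only).
    Finishes after `done_after` ticks; if `fail` is set, an error string
    appears and isDone is still set, mirroring Unity's behaviour."""
    def __init__(self, done_after=3, fail=False):
        self.done_after = done_after
        self.fail = fail
        self.isDone = False
        self.error = None
        self.ticks = 0

    def tick(self):
        self.ticks += 1
        if self.ticks >= self.done_after:
            self.isDone = True
            if self.fail:
                self.error = "host not found"

def poll(www, keep_waiting, cap=20):
    """Run the wait loop with the given condition; `cap` guards the demo
    against the infinite-loop case. Returns the number of iterations."""
    n = 0
    while keep_waiting(www) and n < cap:
        www.tick()   # stands in for one frame / one `yield`
        n += 1
    return n

buggy = lambda w: (not w.isDone) or w.error is not None
fixed = lambda w: (not w.isDone) and w.error is None

# Successful download: both conditions exit after 3 frames.
print(poll(FakeWWW(), buggy))            # 3
print(poll(FakeWWW(), fixed))            # 3

# Failed download: the buggy condition never exits (it hits the cap),
# the fixed one stops as soon as the error shows up.
print(poll(FakeWWW(fail=True), buggy))   # 20 (cap)
print(poll(FakeWWW(fail=True), fixed))   # 3
```

Same shape of loop, just with the condition swapped, so the only behavioural change is on the error path.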
What’s the value of www.progress after you get out of the loop, BTW? Are you sure it isn’t just hitting a part of the source that reads really quickly? Not sure if that’s the problem either, but it may be worth checking.
Hmm, interesting, you were right, the logic was flawed. But fixing it doesn’t change much except that the last sequence of numbers now is:
1 4.515837 16.71493 16.71493 16.71493 16.71493.
Typing in www.google.com instead of http://google.com returns
0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 10.61539 22.81448