Like @SkaredCreations showed, you can simply iterate through an array and load the pages one by one. If you want to load multiple pages simultaneously you have to use either

- multiple coroutines, one for each download, or
- a single coroutine that starts all downloads right at the beginning (store the WWW objects in a list / array) and then iterates through the array and processes each one when it has finished.
Usually the first method is the more flexible one, as each coroutine runs on its own and finishes on its own.
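A minimal sketch of that first approach could look like this (the class name and the OnPageLoaded callback are just placeholders I made up, not anything from Unity's API):

using System.Collections;
using UnityEngine;

public class ParallelDownloader : MonoBehaviour
{
    public void StartDownloads(string[] URLs)
    {
        // Each URL gets its own coroutine; they all run in parallel.
        foreach (var url in URLs)
            StartCoroutine(DownloadOne(url));
    }

    IEnumerator DownloadOne(string url)
    {
        WWW www = new WWW(url);
        yield return www; // only this coroutine waits for this download
        if (string.IsNullOrEmpty(www.error))
            OnPageLoaded(url, www.text);
        else
            Debug.LogWarning("Download failed: " + url + " (" + www.error + ")");
    }

    void OnPageLoaded(string url, string text)
    {
        // Process the downloaded text here.
    }
}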
When using one coroutine there are several ways to handle each download. It's usually a good idea to wrap the download itself in a separate class. That way you can bundle the WWW object with the URL that was used, maybe a timeout counter and even a callback method. It's also common to implement some kind of retry (combined with a retry counter) in case of an error.
public class WWWDownload
{
    public WWW download; // the running download
    public string URL;   // the URL it was started with

    public WWWDownload(string aURL)
    {
        URL = aURL;
        download = new WWW(aURL);
    }
}
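The timeout / retry idea mentioned above isn't shown in the original wrapper, so here's one possible (untested) extension; the field names, default values and the Retry() helper are my own assumptions:

public class WWWDownload
{
    public WWW download;
    public string URL;
    public float startTime;      // Time.time when the current attempt began
    public float timeout = 10f;  // seconds before an attempt counts as stuck
    public int retriesLeft = 3;  // remaining retry attempts after a failure

    public WWWDownload(string aURL)
    {
        URL = aURL;
        StartAttempt();
    }

    // (Re)start the download and reset the timeout clock.
    void StartAttempt()
    {
        startTime = Time.time;
        download = new WWW(URL);
    }

    // Try again after an error or timeout; returns false when out of retries.
    public bool Retry()
    {
        if (retriesLeft <= 0)
            return false;
        retriesLeft--;
        download.Dispose(); // abort the old request
        StartAttempt();
        return true;
    }

    public bool TimedOut
    {
        get { return !download.isDone && Time.time - startTime > timeout; }
    }
}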
// needs: using System.Collections.Generic; (for List<>)
IEnumerator Download(string[] URLs)
{
    List<WWWDownload> downloads = new List<WWWDownload>(URLs.Length);
    // Start the downloads.
    for (int i = 0; i < URLs.Length; i++)
    {
        downloads.Add(new WWWDownload(URLs[i]));
    }
    // Wait for them to finish.
    while (downloads.Count > 0)
    {
        for (int i = downloads.Count - 1; i >= 0; i--)
        {
            if (downloads[i].download.isDone)
            {
                parser(downloads[i].download.text, downloads[i].URL);
                // The download is finished, remove it from the list.
                downloads.RemoveAt(i);
            }
        }
        yield return null;
    }
}

No matter which approach you use, keep in mind that most web servers don't allow many simultaneous connections from the same client (usually only two).

Edit:
It of course depends on the implementation of your "parser" method. You just asked for "generic downloads", so how to handle the returned data depends on the kind of data and what it's actually downloaded for.
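As a purely hypothetical illustration of such a parser (only the name and signature match the call in the coroutine above), it could be as trivial as:

// Hypothetical parser: counts the lines of the downloaded text and logs them.
void parser(string text, string url)
{
    int lineCount = text.Split('\n').Length;
    Debug.Log(url + " returned " + lineCount + " lines (" + text.Length + " characters)");
}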