May 18th, 2004
verdyp
LimeWire is International
 
Join Date: January 13th, 2002
Location: Nantes, FR; Rennes, FR
Posts: 306

Actually, LimeWire uploads files in fragments of at most 100 KB (with a 10-byte overlap). This allows faster propagation of the download mesh when partial file sharing is enabled in the downloader, which issues successive requests on the same connection to fetch new fragments, advertises its own location, and gives feedback to the uploader that it now shares some fragments of the uploaded file.
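To make the fragment scheme above concrete, here is a minimal sketch (not LimeWire's actual code) of how 100 KB fragments with a 10-byte overlap could be laid out over a file; the function name and structure are illustrative:

```python
# Sketch: split a file into upload fragments of at most 100 KB, where
# each fragment after the first starts 10 bytes before the end of the
# previous one, so consecutive fragments overlap.
FRAGMENT_SIZE = 100 * 1024  # 100 KB per fragment
OVERLAP = 10                # 10-byte overlap between consecutive fragments

def fragment_ranges(file_size):
    """Return a list of (start, end) byte ranges covering the file."""
    ranges = []
    start = 0
    while start < file_size:
        end = min(start + FRAGMENT_SIZE, file_size)
        ranges.append((start, end))
        if end == file_size:
            break
        start = end - OVERLAP  # step back to create the overlap
    return ranges
```

For a 250 KB file this yields three overlapping fragments, the last one shorter than 100 KB.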

If there is something that could be improved, it is interleaving of download requests: for example, after 1 MB (10 fragments) has been uploaded to a downloader, its connection would be rescheduled to the back of the active queue, allowing concurrent uploads to perform their swarmed requests and letting more clients be served.
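The interleaving idea could be sketched as a round-robin scheduler over upload connections; this is hypothetical (the names are not LimeWire's API), assuming each connection gets a burst of 10 fragments before yielding:

```python
from collections import deque

# Sketch: each connection is allowed a burst of 10 fragments (~1 MB at
# 100 KB per fragment), then is moved to the back of the active queue
# so concurrent uploads get their turn.
BURST_FRAGMENTS = 10

def serve_round_robin(connections, send_fragment):
    """connections: list of (conn_id, fragments_remaining) pairs.
    Returns the order of (conn_id, fragments_sent) bursts."""
    order = []
    queue = deque(connections)
    while queue:
        conn_id, remaining = queue.popleft()
        burst = min(BURST_FRAGMENTS, remaining)
        for _ in range(burst):
            send_fragment(conn_id)  # upload one fragment on this connection
        order.append((conn_id, burst))
        if remaining > burst:
            queue.append((conn_id, remaining - burst))  # back of the queue
    return order
```

With two downloaders needing 15 and 5 fragments, the first is interrupted after 10 fragments so the second can be served before the first finishes.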

I think that LimeWire could maintain a higher number of actively queued connections while keeping the number of active uploads low. For example, the soft maximum could be set to 5 for low-bandwidth connections, but there should be no problem maintaining a wait queue of 64 connections.
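A minimal sketch of that split between a small set of active upload slots and a larger wait queue, with illustrative names and the 5/64 limits suggested above:

```python
from collections import deque

SOFT_MAX_ACTIVE = 5   # soft maximum of simultaneous uploads (low bandwidth)
MAX_QUEUED = 64       # much larger wait queue of pending connections

class UploadSlots:
    def __init__(self):
        self.active = set()
        self.waiting = deque()

    def request(self, host):
        """Return 'active', a 1-based queue position, or 'rejected'."""
        if len(self.active) < SOFT_MAX_ACTIVE:
            self.active.add(host)
            return "active"
        if len(self.waiting) < MAX_QUEUED:
            self.waiting.append(host)
            return len(self.waiting)  # queue position reported to the host
        return "rejected"

    def finish(self, host):
        """Free a slot and promote the longest-waiting host."""
        self.active.discard(host)
        if self.waiting and len(self.active) < SOFT_MAX_ACTIVE:
            self.active.add(self.waiting.popleft())
```

Hosts beyond the fifth are told their queue position instead of being refused outright, and are promoted in order as slots free up.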

Even on "symmetric" broadband connections, the upload stream is usually limited compared to the download stream (the effective upload bandwidth is often about a quarter of the download bandwidth). If you're not convinced, look at websites that compare the speed of various ISPs (ZDNet has such an online tool for the US; in France we have grenouille.com): a nominally symmetric 2 Mbit/s cable connection may deliver only about 512 kbit/s upstream. (Cable providers do not publish measurements of this asymmetry, which I consider a lie to their customers; it is more honest on ADSL, where advertised upstream rates are much more accurate and effective.)

Smarter management of upload queues would bound the delay for transfers from hosts with limited upstream bandwidth, so that very large files would keep being uploaded while smaller files are also delivered. In my opinion, file size should not be weighted too heavily; the diversity of sources matters more (because swarmed transfers now work perfectly, with the same performance, on both slow and broadband connections).

For this reason, I think the bandwidth filters in searches should be removed. A swarmed transfer is just as valuable coming from modem users as from broadband users. Queueing should be based on units smaller than whole files.

Gnutella has never been as fast as it is today. If we want even faster transfers, we must maximize the effect of swarming by allowing more hosts to download the same files concurrently, with PFSP enabled.
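One common way swarming with partial file sharing spreads fragments quickly is rarest-first selection; the following is a sketch of that general policy, not a claim about LimeWire's actual implementation:

```python
# Sketch of rarest-first fragment selection: among the fragments a
# downloader still needs, prefer the one held by the fewest known
# alternate locations, so rare fragments get replicated first.
def pick_next_fragment(needed, availability):
    """needed: set of fragment indices still missing.
    availability: dict mapping fragment index -> count of hosts sharing it."""
    return min(needed, key=lambda i: availability.get(i, 0))
```

Requesting rare fragments first means that even if the original uploader disappears, the remaining hosts collectively hold a full copy sooner.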

There is less risk in uploading a single large file to many hosts (which will then collaborate by exchanging their file fragments) than in letting a single host download a very large file and disconnect immediately once the transfer finishes. This would make the mesh much more resistant to single points of failure (a host that disconnects just after its transfer completes). It would really increase the variety of content available on the Gnet, and enlarge the sets of alternate locations for large files.
__________________
LimeWire is international. Help translate LimeWire to your own language.
Visit: http://www.limewire.org/translate.shtml