Thread: 97k chunk
Posted March 10th, 2003 by sberlin (Software Developer)
The better ability to swarm is really the #1 benefit.

Imagine you're downloading from 2 hosts. If you request a range, you MUST download that entire range before picking another -- otherwise you have to disconnect and reconnect, which almost ensures the download won't complete.
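
To make that concrete, here's a minimal Java sketch of a single range request. The host, port, and file path are made up for illustration, and this is not LimeWire's actual downloader code -- the point is just that once the Range header goes out, that whole chunk comes back on the connection before you can ask for a different one.

[code]
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: request one byte range from one host. Once this
// request is issued, the whole range has to be read (or the socket
// dropped) before a different range can be asked for on this connection.
public class RangeRequest {
    public static void main(String[] args) throws IOException {
        // Hypothetical host and file path, for illustration only.
        URL url = new URL("http://example-host:6346/get/1/somefile.mp3");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Ask for one 97k chunk (treating "97k" as 97 * 1024 = 99328 bytes here):
        conn.setRequestProperty("Range", "bytes=0-99327");
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int read;
            while ((read = in.read(buf)) != -1) {
                // write buf[0..read) into the right offset of the target file
            }
        }
    }
}
[/code]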

So if you pick a reasonably small range (97k), then every 97k you can pick a new range to download. If a 3rd or 4th source comes in, you can easily plop it into the swarming algorithm and pick successive 97k chunks from different hosts.
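
Here's a rough sketch of what that chunk-picking bookkeeping could look like -- again plain Java for illustration, not LimeWire's real implementation. The file is split into fixed 97k pieces, and any host that frees up (or any new source that appears) just grabs the next piece nobody has claimed yet.

[code]
import java.util.BitSet;

// Sketch of the chunk-picking idea: the file is divided into fixed 97k
// chunks, and whenever a host finishes a chunk (or a new source appears)
// it simply takes the next chunk that hasn't been handed out yet.
public class ChunkPicker {
    private static final int CHUNK_SIZE = 97 * 1024; // assumed chunk size
    private final BitSet assigned = new BitSet();
    private final int numChunks;

    public ChunkPicker(long fileSize) {
        this.numChunks = (int) ((fileSize + CHUNK_SIZE - 1) / CHUNK_SIZE);
    }

    /** Hands out the next unassigned chunk index, or -1 if none are left. */
    public synchronized int nextChunk() {
        int idx = assigned.nextClearBit(0);
        if (idx >= numChunks) return -1;
        assigned.set(idx);
        return idx;
    }

    /** Byte range (inclusive start, exclusive end) covered by a chunk index. */
    public long[] rangeFor(int chunk, long fileSize) {
        long start = (long) chunk * CHUNK_SIZE;
        long end = Math.min(start + CHUNK_SIZE, fileSize);
        return new long[] { start, end };
    }
}
[/code]

Because every chunk is the same small size, a 3rd or 4th host is just another caller of nextChunk() -- no existing connection has to be torn down to make room for it.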

If you were downloading 500k or 1MB from each host, you would have to disconnect from someone to begin downloading at a new spot.

Small chunks really do help you download faster from more sources.

The exchange of alternate locations is a side benefit that helps the swarming too. In the future it can make verification easier with TigerTree hashing, and further down the road, pipelining the requests would make the latency virtually nonexistent.
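
For the pipelining part, the idea would look roughly like this (hypothetical host and path, and assuming the uploader honors keep-alive): write several range requests back-to-back on one connection, so the next request is already waiting when the current chunk finishes, instead of paying a full round trip between chunks.

[code]
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Rough sketch of pipelined range requests (hypothetical host and path):
// several GETs are written back-to-back on one keep-alive connection, so
// the next chunk's request is already queued while the current one
// downloads, hiding most of the per-request latency.
public class PipelinedRanges {
    public static void main(String[] args) throws IOException {
        try (Socket sock = new Socket("example-host", 6346)) {
            OutputStream out = sock.getOutputStream();
            long chunk = 97 * 1024; // assumed 97k chunk size
            for (int i = 0; i < 3; i++) {
                long start = i * chunk;
                long end = start + chunk - 1;
                String req = "GET /get/1/somefile.mp3 HTTP/1.1\r\n"
                        + "Host: example-host\r\n"
                        + "Range: bytes=" + start + "-" + end + "\r\n"
                        + "Connection: Keep-Alive\r\n\r\n";
                out.write(req.getBytes(StandardCharsets.US_ASCII));
            }
            out.flush();
            // Responses then come back in order on sock.getInputStream().
        }
    }
}
[/code]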