Posted by Moak, November 9th, 2001 (post #4)
More about a Gnutella Swarming idea

Hmm yes, the current Gnutella protocol would be basically okay for swarming.

Surprising, isn't it? Some smaller changes, I guess, but all compatible with older clients; most ideas have already been discussed somewhere or proposed by other users. A new client must provide extra logic to maintain a pool of swarming parts:

Finding the most requested files is the first item: every client could maintain statistics on its most uploaded files (files often downloaded from it by other users) and tell other clients which they are (e.g. once on connect). I think we should not use search queries to build these statistics, because they are too inaccurate, and upcoming query caches or super peers (both of which I highly recommend) would falsify the results. Every client can then calculate which files are highly requested (within its current horizon), try to download a random part from a random client, and add that part to a swarming pool. This pool could be refreshed from time to time and should not grow beyond a specific size (e.g. a few MBs on the hard disk).
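The bounded pool above could be sketched like this. A minimal sketch, assuming details the post leaves open: the class and method names are my own, and "evict parts of the least-requested file first" is one possible policy, not something the proposal specifies.

```python
import collections

class SwarmPool:
    """A bounded pool of downloaded file parts, keyed by (file_id, part_index).

    Parts of highly requested files are kept; once the byte cap is
    exceeded, parts of the least-requested files are evicted first.
    """

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.parts = {}                         # (file_id, part_index) -> bytes
        self.requests = collections.Counter()   # file_id -> upload count

    def record_upload(self, file_id):
        # Track how often other peers download this file from us.
        self.requests[file_id] += 1

    def add_part(self, file_id, part_index, data):
        key = (file_id, part_index)
        if key in self.parts:
            return
        self.parts[key] = data
        self.used += len(data)
        self._evict()

    def _evict(self):
        # Drop parts of the least-requested files until under the cap.
        while self.used > self.max_bytes and self.parts:
            victim = min(self.parts, key=lambda k: self.requests[k[0]])
            self.used -= len(self.parts.pop(victim))
```

A client would call `record_upload` whenever it serves a file and `add_part` after fetching a random part, so the pool naturally tracks what the local horizon is asking for.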

Finding matching partials (I call the small parts of a file "partials") could be solved easily: just run a normal search. Okay, we should add an improvement here: as we know from servants that resume from multiple sources (e.g. Xolox), there is a problem with mismatched partials. Downloaded partials sometimes do NOT match each other, and a lot of bandwidth is wasted on partials that are not from the same file (Xolox and that 80%-90%-99%-50% problem). Oops, to avoid this... I would highly recommend adding hashes to all file-related Gnutella traffic (search and download). A 'hash' is a unique identifier (or call it a kind of checksum) for a file within a typical horizon. So indexing files and exchanging hashes could be the key to improving "automatic re-searches", which are indeed a must-have for parallel or multi-hosted downloads. Why? Once you have downloaded 25% of a file called "Shritney Pears.doc" and the host disappears, you need to download the rest somewhere else. An automatic re-search for "Pears" can help, but only a unique hash makes sure the results match... before you even download them.
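Such a content hash could look like the sketch below. This is only an illustration of the idea, not the protocol's mechanism: I picked SHA-1 as the checksum, and the function name and chunked loop are my own.

```python
import hashlib

def file_hash(data, chunk_size=64 * 1024):
    """Return a hex digest identifying a file by its content.

    Two files with the same digest are (practically) the same file,
    no matter what they are named on different hosts -- so search
    results can be matched to a partial download before any bytes
    are fetched.
    """
    h = hashlib.sha1()
    # Hash in chunks so even large files need little memory at once.
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()
```

A re-search would then keep only those results whose advertised hash equals the hash of the file being resumed, instead of trusting the filename.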

Downloading partials is an already solved item: right now only Xolox provides parallel downloads from multiple peers (fast!), and all FastTrack clients do as well (Morpheus/Kazaa/Grokster)... but wait a while, more will come for sure! Parallel or segmented downloading of one file is a must-have for swarming, and it needs no protocol change at all.
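The segmented download itself can be sketched in a few lines. A hedged sketch under assumed names: the `sources` callables stand in for per-host byte-range requests, which the real clients mentioned above implement over their own wire protocols.

```python
from concurrent.futures import ThreadPoolExecutor

def split_ranges(size, segments):
    """Split [0, size) into roughly equal (start, end) byte ranges."""
    step = -(-size // segments)  # ceiling division
    return [(i, min(i + step, size)) for i in range(0, size, step)]

def segmented_download(sources, size, segments=4):
    """Download one file in parallel segments, spread across sources.

    `sources` is a list of callables taking (start, end) and returning
    the bytes of that range -- stand-ins for range requests to peers.
    """
    ranges = split_ranges(size, segments)
    with ThreadPoolExecutor(max_workers=segments) as pool:
        futures = [
            pool.submit(sources[i % len(sources)], start, end)
            for i, (start, end) in enumerate(ranges)
        ]
        # Futures come back in submission order, so joining their
        # results reassembles the file in the right order.
        return b"".join(f.result() for f in futures)
```

Each segment can come from a different peer, which is exactly why a per-file hash (as argued above) is needed to be sure all the segments belong to the same file.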

As an advantage of swarming I especially see that it turns low-bandwidth users (modem users) into a valuable resource! No more freeloading, and higher bandwidth for all.

Hope it helps, Moak

PS: Another cool feature to improve downloads could be "specialized gnutella horizons"... if you're interested, read this:

Last edited by Moak; November 9th, 2001 at 10:17 PM.