Gnutella Forums


gnutellafan April 18th, 2002 10:36 AM

Segmented Hashes and Partial File Sharing
 
I would like to point the LW dev team to this thread for some ideas on the development of segmented hashes and the ability to share partial files:
http://gnutellaforums.com/showthread...9&pagenumber=4

I think that this is an extremely important feature. There are probably terabytes of partial files that could be tapped as an additional resource to speed up Gnutella downloads and make them more reliable.

Taliban April 18th, 2002 11:16 AM

I don't believe segmented hashes are that important for sharing incomplete files.

I've already played around a little with the source code to make LW share incomplete files (I was too lazy to finish it, though), and implementing it would not be hard compared to the amount of work it takes to implement HUGE properly.

I think it could easily be done once LimeWire supports HUGE, without the need for segmented hashes:

A & B are LimeWire nodes

A could search for a file by hash, and B would answer that it has the file, even though it only has parts of it. A would request the parts of the file it wants (possibly multiple parts when using HTTP 1.1), and B would either send subsections of those parts (A could easily adjust to the ranges sent by B) or respond with the HTTP error "416 Requested Range Not Satisfiable".

There is no point where you need hashes for this process.
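To make the exchange concrete, here is a rough Java sketch of A's side. This is a minimal sketch, not LimeWire code: the host, port, hash, and range are placeholders, and the /uri-res/N2R request style is taken from the HUGE proposal.

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class PartialDownloadSketch {
        public static void main(String[] args) throws IOException {
            // Placeholder host/port and URN; B is assumed to share a partial file.
            URL url = new URL("http://127.0.0.1:6346/uri-res/N2R?urn:sha1:PLACEHOLDERHASH");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Ask for the first 256 KB; HTTP 1.1 also allows multiple ranges.
            conn.setRequestProperty("Range", "bytes=0-262143");
            int status = conn.getResponseCode();
            if (status == 206) {
                // B sent the range (or a subsection of it; Content-Range says which).
                System.out.println("Got " + conn.getHeaderField("Content-Range"));
            } else if (status == 416) {
                // B doesn't have those bytes yet; pick another range or another host.
                System.out.println("Range not satisfiable, trying elsewhere.");
            }
            conn.disconnect();
        }
    }

A 206 response carries a Content-Range header, so A can always tell which subsection B actually sent and adjust accordingly.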

gnutellafan April 18th, 2002 01:02 PM

You're not thinking about malicious nodes
 
One thing many people forget is malicious nodes. Say some company working for the RIAA wants to disrupt Gnutella. This could easily be done by supplying fake data. Segmented downloads make this even easier, since one node can send just a small piece of fake data to corrupt many downloads.

The segmented hash model provides even more protection against malicious nodes than HUGE, since the client could check each segment as it came in instead of waiting for the whole file. The current system could be more easily abused by a malicious client providing the correct hash for the wrong data. The client would never know it had the wrong data until the entire file was downloaded and rehashed (that's a lot of wasted bandwidth for a 700 MB file). Then the whole file would have to be scrapped (you don't know which part is wrong, just that the file is corrupted). With this system you could check it 1 MB at a time. If one of those segments is bad, you throw away just that one, not the whole file.
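As a rough illustration of the per-segment check (a minimal sketch: the 1 MB segment size and the idea that trusted per-segment hashes arrive alongside the search result are assumptions of some segmented-hash scheme, not an existing LW feature):

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class SegmentCheckSketch {
        static final int SEGMENT_SIZE = 1024 * 1024; // verify 1 MB at a time

        // Returns true if a downloaded segment matches its trusted hash.
        // 'expected' would be the published hash for this segment in a
        // segmented-hash scheme.
        static boolean segmentOk(byte[] segment, byte[] expected)
                throws NoSuchAlgorithmException {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            return MessageDigest.isEqual(sha1.digest(segment), expected);
        }
    }

A segment that fails the check just gets re-requested from another host; everything already verified is kept.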

This benefit is in addition to the increased resources, etc.

I thought HUGE support was in the latest LW release? Taliban, or anyone else with some coding know-how, it would be great if someone could implement this so it could be tried. I think once it's seen how much this could help, every modern client will have to have it.

