Gnutella Forums

Thad November 13th, 2002 10:44 AM

97.7k chunk size
 
[Apologies in advance if this isn't the right place to mention this, but... ]

Acquisition 0.67, a Mac OS X Gnutella client, implements the new "chunk" download method, where files are downloaded in 97.7k segments. I understand this is a new Gnutella feature (or possibly just a LimeWire feature?) designed to improve the robustness of the download system. Unfortunately, it does not work very well yet. Most of the time, it results in the client downloading 97.7k worth of data, disconnecting, and then attempting to reconnect to continue the download. Needless to say, this prevents the connection from ramping up to full speed, and it also frequently results in someone else taking the "free" download slot before your client can reconnect.

What is the justification for breaking downloads into these small chunks, and are there any plans to address the serious problems caused by this method of handling downloads?

linuxrocks November 13th, 2002 10:51 AM

Re: 97.7k chunk size
 
This is also used in BearShare and is called 'Partial Content'. The advantages of this are that the chunks are smaller for the client and the download will complete faster. Also, this rotates the connections of downloads. Say you're downloading a file that has 100 sources; your client may only be able to download from 16 sources at a time. What this does is allow the client to rotate the hosts it is downloading from. Hopefully this isn't too wordy! :)
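
In plain HTTP terms (which is what Gnutella file transfers use), each chunk is just a byte-range request, and a host that supports it answers with 206 Partial Content. Here's a rough sketch of one such request -- the host, port, path, and file size are made-up values, not anything a real client hard-codes, and I'm assuming the 97.7k figure is 100,000 bytes shown in kilobytes:

Code:

import http.client

# Ask for one ~97.7k chunk (100,000 bytes), i.e. bytes 0-99999 of the file.
conn = http.client.HTTPConnection("198.51.100.7", 6346, timeout=30)
conn.request("GET", "/get/42/example.mp3",
             headers={"Range": "bytes=0-99999"})
resp = conn.getresponse()
print(resp.status)                       # 206 Partial Content if supported
print(resp.getheader("Content-Range"))   # e.g. "bytes 0-99999/4567890"
chunk = resp.read()                      # the 100,000-byte segment
conn.close()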

Thad November 13th, 2002 11:12 AM

Thanks for the reply, linuxrocks. However, in my experience, downloads do not complete faster with the "partial content" feature. (Like I said, most of them do not complete at all; they simply disconnect at 97.7k, never to resume.) In fact, I'm not sure how it could be faster, as I have a DSL connection that needs a solid, continuous connection in order to ramp up to full speed, and that just doesn't happen in the space of 97.7k.

As for rotating downloads from multiple hosts, that might be great if you are downloading Eminem's latest, but most of the time the file I'm trying to download is only hosted by one source. When I've finally managed to track down a rare file, the last thing I want is to have my connection to the host severed mere seconds after it began.

I'm not sure if the problem is that the client is actually sending a "disconnect" message after each 97.7k chunk, or if some hosts incorrectly interpret the end of a chunk as their cue to close the connection and offer up the download slot to someone else. Regardless, at this point this "feature" is causing far more harm than good, and I hope these problems will be addressed soon.

linuxrocks November 13th, 2002 12:10 PM

Quote:

Originally posted by Thad
I'm not sure if the problem is that the client is actually sending a "disconnect" message after each 97.7k chunk, or if some hosts incorrectly interpret the end of a chunk as their cue to close the connection and offer up the download slot to someone else.

Look in your 'Console' or equivalent. The information you need should be there.

trap_jaw November 13th, 2002 02:43 PM

The smaller chunk size does not reduce download speed as much as the LimeWire/Acquisition UI tells you. (Try a network monitor or something and look at the overall downstream.) Downloads have been a lot more robust since the change, and LimeWire/Acquisition no longer creates large files containing mainly empty data. Corruption should be identified earlier, and overall downloads should be more robust. This new feature might also come in very handy if tree hashes and partial file sharing are implemented one day - although partial file sharing is generally overestimated (except that it can be used to force users to share - although in my experience, freeloaders are becoming rarer as connection speeds go up).
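
To make the corruption point concrete: with per-chunk hashes a mismatch points at one specific chunk, instead of only showing up in a whole-file hash after the last byte has arrived. (The tree hashes being discussed for Gnutella are Tiger-based THEX hashes; the sketch below just uses SHA-256 as a stand-in, and the chunk size and file name are arbitrary.)

Code:

import hashlib

CHUNK = 100_000   # the ~97.7k segment size discussed in this thread

def chunk_digests(path):
    """Hash each fixed-size chunk separately instead of the whole file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

# Comparing these against digests advertised by the uploader pinpoints the
# single bad chunk to re-download, instead of discovering a whole-file hash
# mismatch only after the last byte has arrived.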

Thad November 13th, 2002 03:31 PM

trap_jaw,

Hmm... I'll grant you I may be deceived about the download speed by the UI. So perhaps you are right that the small chunk size will eventually be a good thing overall. But for the moment, it is a serious hindrance, since it doesn't maintain a steady connection -- it downloads 97.7k and then disconnects, leaving your download aborted practically as soon as it has begun. Yes, it tries to reconnect, but 99% of the time that doesn't happen. So while this feature may be good for the network overall, it's terrible for the end users who are stuck with it (at least until the problems with the implementation are resolved).

trap_jaw November 13th, 2002 11:29 PM

This can only happen if you were downloading from a buggy client that claims to support HTTP 1.1 but does not really.

I've seen that happen, too.
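
For what it's worth, the whole scheme leans on HTTP/1.1 persistent connections: a well-behaved uploader keeps the socket open, so the next Range request reuses it, while a client that only claims to speak HTTP 1.1 closes (or announces "Connection: close") after every chunk, forcing a reconnect and a new fight for the upload slot. A sketch of what a downloader sees, using the same made-up host and path as in the earlier example:

Code:

import http.client

HOST, PATH = "198.51.100.7", "/get/42/example.mp3"
CHUNK = 100_000

conn = http.client.HTTPConnection(HOST, 6346, timeout=30)
offset = 0
while True:
    conn.request("GET", PATH,
                 headers={"Range": f"bytes={offset}-{offset + CHUNK - 1}"})
    resp = conn.getresponse()
    if resp.status != 206:        # past end of file, busy, or range refused
        break
    data = resp.read()
    if not data:
        break
    offset += len(data)
    if resp.getheader("Connection", "").lower() == "close":
        # The uploader is dropping the connection after this chunk, so the
        # next request needs a brand-new connection -- and by then someone
        # else may already have grabbed the upload slot.
        conn.close()
        conn = http.client.HTTPConnection(HOST, 6346, timeout=30)
conn.close()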

Thad November 14th, 2002 12:03 AM

That's what I thought, but there are apparently an awful lot of buggy clients out there. It's difficult to know which ones are the culprits since that information is not available to Acquisition users, but I know for a fact that Shareaza doesn't support this (i.e., it boots Acq users after the first 97.7k). There are surely others, given the howls of outrage from users when this "feature" was introduced.

Anyway, Dave Watanabe, the developer of Acquisition, has taken steps to try to fix this on his end, and it seems to be working better now -- so far.

sdsalsero November 15th, 2002 11:17 PM

chunks emulate Ethernet
 
More specifically, it's trying to copy the success of the CS/MA nature of Ethernet -- instead of trying to establish a constant connection (with guaranteed throughput), experience has shown that it's better to have a "collision domain" where everybody competes for temporary access. If two requests collide, they both back off for a random amount of time and try again. As long as the signaling speed is fast and you don't have more than about 25 requestors, you get consistent transfer rates (60-70% of the theoretical maximum).

P.S. I know I've got the acronym for this technique wrong but I'm tired and don't want to try and look it up tonight... :)
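
Roughly, the back-off idea maps onto downloads like this: when all slots are taken, the uploader refuses the request (typically with a 503 busy), and the downloader waits a random, growing interval before retrying so that competing clients don't all hammer the host at the same instant. Host, port, timings, and the helper name here are all made up for illustration:

Code:

import http.client
import random
import time

def fetch_chunk(host, path, offset, size, attempts=5):
    """Try to fetch one byte range, backing off randomly when refused."""
    for attempt in range(attempts):
        conn = http.client.HTTPConnection(host, 6346, timeout=30)
        conn.request("GET", path,
                     headers={"Range": f"bytes={offset}-{offset + size - 1}"})
        resp = conn.getresponse()
        if resp.status == 206:
            data = resp.read()
            conn.close()
            return data
        conn.close()
        # Slot busy (or otherwise refused): wait a random, growing interval
        # so competing downloaders don't all retry at the same instant.
        time.sleep(random.uniform(1, 5 * 2 ** attempt))
    return None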

Thad November 16th, 2002 03:53 AM

I think it's obvious from the experience of many users (not just me -- really, check the Acquisition forums) that this is probably okay for local networks, and maybe even okay for files that are commonly hosted by, well, a host of clients, but it is extremely bad for that most common of Gnutella situations, i.e., when many clients are after a file hosted by a tiny minority of hosts. Especially when the clients play by different rules.

If you have one client that implements the chunk downloads (handing over its download spot to someone else after only a 97.7k transfer) versus another client that hangs on to the connection like a bulldog until it is complete, who do you think is going to end up with the successful download?

