fix for "Awaiting Sources"? Why are there so many invalid sources in Gnutella? At LEAST 50% of my download requests end in "Awaiting Sources". Of those, probably half are sources that are no longer on-line, while the other half are sources that I see "Waiting 60s for Busy" a handful of times before it dies and goes to Awaiting. Sometimes, I can Force Resume on these files and get back into Waiting mode. Some specific questions: 1. What can be done to remove invalid sources from the Ultrapeers? Are there certain clients/versions that are particularly guilty of not clearing their disconnected leafs from their source-list? 2. What can be done to reduce premature 'failure' to Awaiting Sources? Does LW fail if just one PING is ignored, or does it pause/retry several times before assuming the source is now off-line? Some suggestions: - do a test PING on all source-IPs received and then remove them from the search results if they're off-line. This would delay the display of results, and would contribute to overall Internet traffic (but not Gnutella search-traffic), but would eliminate the first type of Awaiting Source messages. - change "Awaiting Sources" message to "2 Sources off-line, retrying in 60min". This would give more explanation and would demonstrate to people that LW hasn't just given-up. |
Re: fix for "Awaiting Sources"?
Today I found a nice fast host with unique files, but I was only able to download one at a time. If I tried to dl two or more, only the first would connect and the others went into "awaiting". I could only browse the host after I removed anything in the dl queue for that host. The host was a LW 3.2.*. Looks like if the host's upload settings allow only one slot and one file per person, anything more will trigger the "awaiting" message.
stief, Good detective work! This is a good example of where it would be useful for the Awaiting situation to be replaced with a 60-min retry.
I think the developers need to understand that I -- and many others -- consider the Awaiting Sources function to be broken. As another example: before going to bed at night, I'll request a dozen TV episodes. Most of them will report Busy, and I'll leave it like that overnight. The next morning, none of them will have downloaded and all of them will report Awaiting, but when I Force Resume each of them, they all resume either Busy or actually downloading. So LW is obviously giving up too early, and it would obviously benefit from some kind of long-term retry. Please don't tell me I'm exaggerating or that I don't understand the issues...
sdsalsero, You are exaggerating - and you don't understand the issues :-). I know you won't believe me that retrying hosts after a failed download attempt is only a small improvement, so I suggest you see for yourself: LimeWire 3.3 beta retries hosts if the first download attempt has failed.
Is a third computer involved here for the firewall/push attempt? --just trying to understand the variables involved when an active host returns "awaiting" one moment and not the next.
I thought about this a little more, and there are a couple of other cases in which LimeWire goes into awaiting sources mode:

* a FileNotFound response, if the uploader rebuilds its library and unshares the file you are looking for. This should happen rarely.
* an uploader sharing popular files or acting as an ultrapeer that did not limit its upstream may be overloaded and unable to respond to any further download requests, so the downloads fail.
* a host that thinks it has a direct connection to the internet although it is really firewalled may return queryhits to a firewalled servent. Firewalled servents cannot download from each other.
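For illustration only: a minimal Java sketch of how a client could sort failure causes like these into "worth retrying later" versus "drop the source". The enum and method names are invented for this example, not LimeWire internals.

```java
// Hypothetical sketch: map a download-failure cause to whether a later
// retry can plausibly help. Names are invented for illustration.
public class RetryPolicy {

    public enum FailureCause {
        FILE_NOT_FOUND,     // uploader rebuilt its library; file no longer shared
        HOST_OVERLOADED,    // upstream saturated; may recover later
        BUSY,               // all upload slots taken; usually temporary
        PUSH_IMPOSSIBLE,    // both sides firewalled; no route will ever work
        CONNECT_FAILED      // host off-line or unreachable right now
    }

    /** True if waiting and retrying could plausibly succeed. */
    public static boolean isRetryable(FailureCause cause) {
        switch (cause) {
            case BUSY:
            case HOST_OVERLOADED:
            case CONNECT_FAILED:
                return true;   // transient: retry with backoff
            case FILE_NOT_FOUND:
            case PUSH_IMPOSSIBLE:
            default:
                return false;  // permanent: drop the source instead
        }
    }
}
```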
Hmmm. Just tried again to trigger the "awaiting" and watched the dl stats. In 10 files from the same source, one went to "awaiting" and the IO error stat went up by one. Not sure which of the situations you mentioned relates to the IO error. (I'm using jum's LimeWire Pro 3.3.0jum184; the host is a LW 3.2.1 Pro showing as a 4 star T3 or better)
trap_jaw, Just so you know, I really do appreciate your efforts to respond to all the requests people make of LW. Having said that, what problem is there with my idea of transforming Awaiting Sources into, e.g., "2 Sources off-line, retrying in 60min"? I'm using 3.3.0-Beta, and I still find the majority of my download requests at Awaiting after 10-15 minutes. Force Resume then puts most of them back to Busy or Downloading.
Fine, so there are a lot of unstable hosts. Why not automate the required workaround?
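For what it's worth, "automating the workaround" could look something like this minimal sketch: a background task that periodically force-resumes anything stuck in Awaiting Sources. Downloader and its methods are stand-ins for illustration, not LimeWire's real classes.

```java
// Hypothetical sketch: hourly auto-resume of downloads stuck in
// Awaiting Sources, matching the "retrying in 60min" idea above.
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AutoResumeTask {
    public static void start(List<Downloader> downloads) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            for (Downloader d : downloads) {
                if (d.isAwaitingSources()) {
                    d.forceResume(); // same effect as the manual button press
                }
            }
        }, 60, 60, TimeUnit.MINUTES); // first run after 60 min, then hourly
    }

    /** Minimal stand-in interface for a queued download. */
    public interface Downloader {
        boolean isAwaitingSources();
        void forceResume();
    }
}
```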
I think you're right about "awaiting" needing some work, sdsalsero. Still, whatever "automated requeries" are called, more efficient gnutella messages are needed. At 15 KB/s, I use up 500 MB of bandwidth as an UP in about 10 hrs (not sure if the stats include compression)--and that's not including http traffic up and down. My ISP gives me a rough limit of 1 GB/day without too much hassle. I could disable UP--but more UPs are needed, no? If gnutella could be more efficient in connecting searches and downloads, I'd be all in favour of automating the connections. [btw--I'm getting double notifications of posts, and resetting my profile hasn't helped]
You do not want to flood hosts with connection requests that really don't accept any connections. I've already read complaints from users who haven't run a Gnutella servent in days or even weeks that they are still being hammered with connection requests. In addition, a malicious attacker could spoof addresses in the download mesh & query hits to use Gnutella as a DDoS tool. If a host is already overloaded, more connection requests won't improve download stability. On the contrary, the host will have even less bandwidth available to upload, and its connections will be even less stable. These are just a few reasons why you have to be careful about retrying unresponsive hosts. There have to be better solutions for improving connection stability than hammering them.
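To make the "be careful" point concrete: a minimal sketch of a polite retry schedule, exponential backoff with a ceiling and random jitter, so an unresponsive or spoofed address sees a handful of probes per day instead of a flood. All class names and constants here are assumptions, not LimeWire settings.

```java
// Hypothetical sketch: exponential backoff with a cap and +/-25% jitter.
// A dead host gets retried quickly at first, then at most every few hours.
import java.util.Random;

public class BackoffSchedule {
    private static final long BASE_DELAY_MS = 60_000;        // start at 1 min
    private static final long MAX_DELAY_MS  = 6 * 3_600_000; // cap at 6 h
    private static final Random RNG = new Random();

    /** Delay before the (attempt+1)-th retry; attempt starts at 0. */
    public static long nextDelayMs(int attempt) {
        long delay = BASE_DELAY_MS << Math.min(attempt, 12); // doubles each time
        delay = Math.min(delay, MAX_DELAY_MS);
        long jitter = (long) (delay * 0.25 * (RNG.nextDouble() * 2 - 1));
        return delay + jitter; // jitter spreads retries so hosts aren't hit in sync
    }
}
```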
Stief: A bit off topic... I do about 2.7 GB/day upstream as an ultrapeer. I did not know that ISPs have upstream and downstream limits for individual users (I know they do for businesses). Any idea how I can find that out?
Blackbird, If your ISP hasn't complained, don't ask or bring their attention to it! Trap_jaw, I'm a network admin, so I appreciate the concerns re network "elegance". However, an extra ping or two every hour from the dozen or so requestors that might be trying to get a popular file isn't going to overwhelm anybody. Now, obviously, there should be some reasonable limit on how long the s/w continues to try. Maybe set it the same as the Days To Keep Incompletes? That way, people could control it. Or just default it to a week? That doesn't seem excessive to me. Insisting that end-users continue to hit Force Resume and/or Repeat Search is turning us into lab rats, continually hitting the button for food (or was it pleasure?). :-)
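A minimal sketch of the limit suggested above: stop retrying once the download is older than the "Days To Keep Incompletes" window. The class and method are hypothetical; only the idea of reusing that existing setting comes from the post.

```java
// Hypothetical sketch: bound the retry window by the user's existing
// "Days To Keep Incompletes" preference instead of a new setting.
public class RetryDeadline {
    /** True while we are still inside the user's retry window. */
    public static boolean shouldKeepRetrying(long downloadStartedMs,
                                             int daysToKeepIncompletes) {
        long windowMs = daysToKeepIncompletes * 24L * 3_600_000L;
        return System.currentTimeMillis() - downloadStartedMs < windowMs;
    }
}
```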
Well, I already sent an e-mail to them, so too late.