Gnutella Forums

Gnutella Forums (https://www.gnutellaforums.com/)
-   LimeWire Beta Archives (https://www.gnutellaforums.com/limewire-beta-archives/)
-   -   fix for "Awaiting Sources"? (https://www.gnutellaforums.com/limewire-beta-archives/21088-fix-awaiting-sources.html)

sdsalsero July 16th, 2003 08:30 AM

fix for "Awaiting Sources"?
 
Why are there so many invalid sources in Gnutella? At LEAST 50% of my download requests end in "Awaiting Sources". Of those, probably half are sources that are no longer on-line, while the other half are sources where I see "Waiting 60s for Busy" a handful of times before the download dies and goes to Awaiting. Sometimes, I can Force Resume on these files and get back into Waiting mode.

Some specific questions:

1. What can be done to remove invalid sources from the Ultrapeers? Are there certain clients/versions that are particularly guilty of not clearing their disconnected leafs from their source-list?

2. What can be done to reduce premature 'failure' to Awaiting Sources? Does LW fail if just one PING is ignored, or does it pause/retry several times before assuming the source is now off-line?

Some suggestions:

- do a test PING on all source-IPs received and then remove them from the search results if they're off-line. This would delay the display of results, and would contribute to overall Internet traffic (but not Gnutella search-traffic), but would eliminate the first type of Awaiting Source messages.

- change "Awaiting Sources" message to "2 Sources off-line, retrying in 60min". This would give more explanation and would demonstrate to people that LW hasn't just given up.
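The first suggestion amounts to a liveness probe on each source before it is displayed. A minimal sketch of such a probe in Python (purely illustrative, not LimeWire code; note that, as pointed out later in the thread, firewalled servents never answer probes, so a filter like this would wrongly drop them):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a plain TCP connection; treat any failure as 'off-line'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def filter_sources(sources):
    """Keep only (host, port) sources that accept a TCP connection right now."""
    return [(h, p) for (h, p) in sources if is_reachable(h, p)]
```

The delay this adds before results appear is roughly one timeout per unresponsive source, which is part of why pre-pinging every result is costly.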

trap_jaw4 July 16th, 2003 09:02 AM

Re: fix for "Awaiting Sources"?
 
Quote:

Originally posted by sdsalsero
1. What can be done to remove invalid sources from the Ultrapeers? Are there certain clients/versions that are particularly guilty of not clearing their disconnected leafs from their source-list?
I don't know any vendor that is particularly unresponsive at the moment.

Quote:

2. What can be done to reduce premature 'failure' to Awaiting Sources? Does LW fail if just one PING is ignored, or does it pause/retry several times before assuming the source is now off-line?
LimeWire goes into Awaiting Sources mode if neither a direct connection nor a push connection (to a firewalled host) could be established.

Quote:

do a test PING on all source-IPs received and then remove them from the search results if they're off-line. This would delay the display of results, and would contribute to overall Internet traffic (but not Gnutella search-traffic), but would eliminate the first type of Awaiting Source messages.
Not possible, firewalled servents don't reply to pings and we don't want to overload hosts sharing popular content with ping messages.

Quote:

change "Awaiting Sources" message to "2 Sources off-line, retrying in 60min". This would give more explanation and would demonstrate to people that LW hasn't just given up.
Most of the time, if a servent is unresponsive, it's better to do another search than to wait another 60 minutes.

stief July 16th, 2003 06:53 PM

today I found a nice fast host with unique files, but I was only able to download one at a time. If I tried to dl two or more, only the first would connect and the others went into "awaiting". I could only browse the host after I removed anything in the dl queue for that host.

The host was a LW 3.2.*.

Looks like if the host upload settings are for only one slot and one file per person, anything more will trigger the "awaiting" message.

sdsalsero July 16th, 2003 10:43 PM

stief,
Good detective work! This is a good example of where it would be useful for the Awaiting situation to be replaced with a 60-min retry.

sdsalsero July 17th, 2003 07:05 AM

I think the developers need to understand that I -- and many others -- consider the Awaiting Sources function to be broken. As another example, before going to bed at night, I'll request a dozen TV episodes. Most of them will report Busy and I'll leave it like that overnight. The next morning, none of them will have downloaded, all of them will report Awaiting, but when I Force Resume each of them, they all resume either Busy or actual downloading.

So, LW is obviously giving up too early. And it would obviously benefit from some kind of long-term retry.

Please don't tell me I'm exaggerating or that I don't understand the issues...

trap_jaw4 July 17th, 2003 08:06 AM

Quote:

Originally posted by stief
today I found a nice fast host with unique files, but I was only able to download one at a time. If I tried to dl two or more, only the first would connect and the others went into "awaiting". I could only browse the host after I removed anything in the dl queue for that host.

The host was a LW 3.2.*.

Looks like if the host upload settings are for only one slot and one file per person, anything more will trigger the "awaiting" message.

No. If a LimeWire host had settings for only one slot per person, it would reply with a 503 message (causing the download on your side to be either queued or waiting for busy). The only case I know where this will not happen is if the host is firewalled and the push attempt fails.
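The distinction trap_jaw4 draws can be sketched as a small classification rule (names are invented for illustration; a real downloader tracks much more state): a 503 from a busy host keeps the download queued or waiting, while "Awaiting Sources" is reserved for the case where no connection is made at all and the push attempt also fails.

```python
def classify(http_status, push_ok):
    """Simplified sketch of the downloader states described in the thread.
    http_status is None when no direct connection could be established."""
    if http_status == 503:
        return "queued_or_waiting_for_busy"   # host busy; retried automatically
    if http_status == 200:
        return "downloading"
    if http_status is None:                    # no direct connection at all
        return "downloading" if push_ok else "awaiting_sources"
    return "error"                             # e.g. 404 FileNotFound
```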

trap_jaw4 July 17th, 2003 08:22 AM

sdsalsero,
You are exaggerating - and you don't understand the issues :-).

I know you won't believe me that retrying hosts if the download attempt failed is only a small improvement, so I suggest you see for yourself.

LimeWire 3.3beta retries hosts if the first download attempt has failed.

stief July 17th, 2003 09:13 AM

Quote:

Originally posted by trap_jaw4 The only case I know where this will not happen, is if the host is firewalled and the push attempt fails.
Thanks for the clarification. Yesterday I was connected directly with the firewall opened to the port I'm using (but force IP was probably checked). Connected to the same host today from my LAN, and saw queuing/busy (and downloads), but occasionally one or two in a batch of downloads would show "awaiting".

Is a third computer involved here for the firewall/push attempt?

--just trying to understand the variables involved when an active host returns "awaiting" one moment and not the next.

trap_jaw4 July 17th, 2003 10:12 AM

Quote:

Originally posted by stief
Is a third computer involved here for the firewall/push attempt?

--just trying to understand the variables involved when an active host returns "awaiting" one moment and not the next.

Newer LimeWires supply the addresses of their ultrapeers in their queryhits, so you can send the push request directly to the ultrapeer to reduce the chances that a push request cannot be transmitted because there is no route to the target host. Traditional pushes are routed the same way the queryhits came. If only one ultrapeer connection along the way failed, the push request might fail.
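The proxy-first push strategy described above could be sketched like this (hypothetical API; in the real protocol the proxies are ultrapeer addresses from the queryhit and the fallback is a PUSH descriptor routed back along the queryhit path):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Source:
    guid: str
    # Each proxy is modeled as a callable that tries to deliver the push
    # request for the given GUID and reports success or failure.
    push_proxies: List[Callable[[str], bool]] = field(default_factory=list)

def send_push(source: Source, route_via_queryhit_path: Callable[[str], bool]) -> bool:
    # Try each advertised ultrapeer (push proxy) directly first.
    for proxy in source.push_proxies:
        if proxy(source.guid):
            return True
    # Fall back to routing the push along the query-hit path; this fails
    # if any ultrapeer hop on that path has since disconnected.
    return route_via_queryhit_path(source.guid)
```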

I thought about this a little more and there are a couple of other cases in which LimeWire goes into awaiting sources mode.

* a FileNotFound response if the uploader rebuilds its library, unsharing the file you are looking for. This should happen rarely.
* an uploader sharing popular files or acting as an ultrapeer without limiting its upstream may be overloaded and unable to respond to any further download requests, so the downloads fail
* a host that thinks it has a direct connection to the internet although it is really firewalled may return queryhits to a firewalled servent. Firewalled servents cannot download from each other.

stief July 17th, 2003 10:59 AM

Hmmm. Just tried again to trigger the "awaiting" and watched the dl stats. In 10 files from the same source, one of them went to "awaiting" and IO error stats went up by one.

Not sure which situation you mentioned relates to the IO error.

(I'm using jum's LimeWire Pro 3.3.0jum184; the host is a LW 3.2.1 Pro showing as a 4 star T3 or better)

sdsalsero July 17th, 2003 11:42 AM

trap_jaw,
Just so you know, I really do appreciate your efforts to respond to all the requests people make of LW.

Having said that, what problem is there with my idea of transforming Awaiting Sources into, e.g. "2 Sources off-line, retrying in 60min"?

I'm using 3.3.0-Beta, and I still find the majority of my download requests at Awaiting after 10-15 minutes. Force Resume then puts most of them back to Busy or Downloading.

trap_jaw4 July 17th, 2003 01:04 PM

Quote:

Originally posted by sdsalsero
Having said that, what problem is there with my idea of transforming Awaiting Sources into, e.g. "2 Sources off-line, retrying in 60min"?

There's nothing wrong with it. I just don't see that it does much good. It may help you complete one download out of ten or twenty that fail (if you are lucky), but it is not a solution for the problem that some connections frequently fail. Those hosts aren't just unstable for a few minutes; they seem to be unstable all the time.

sdsalsero July 17th, 2003 06:26 PM

Fine, so there's a lot of unstable hosts. Why not automate the required workaround?

stief July 17th, 2003 07:03 PM

I think you're right about "awaiting" needing some work sdsalsero.

Still, whatever "automated requeries" are called, more efficient gnutella messages are needed. At 15KB/s, I use up 500 MB of bandwidth as an UP in about 10 hrs (not sure if the stats include compression)--that's not including http traffic up and down. My ISP gives me a rough limit of 1 GB/day without too much hassle.

I could disable UP--but more UP's are needed, no? If gnutella could be more efficient in connecting searches and downloads, I'd be all in favour of automating the connections.

[btw--I'm getting double notifications of posts--and resetting profile hasn't helped]

trap_jaw4 July 17th, 2003 07:20 PM

You do not want to flood hosts with connection requests that really don't accept any connections. I've already read complaints from users who haven't run a Gnutella servent in days or even weeks that they are still hammered with connection requests.
In addition, a malicious attacker could spoof addresses from the download mesh & query hits to use Gnutella as a DDoS tool.
If a host is already overloaded, more connection requests won't improve the download stability. On the contrary, the host will have even less bandwidth available to upload and its connections will be even less stable.

These are just a few reasons why you have to be careful about retrying unresponsive hosts.

There have to be other solutions to improve connection stability than hammering hosts with requests.

Blackbird July 17th, 2003 08:07 PM

Stief:

A bit off topic...I do about 2.7 GB/day upstream as an ultrapeer. I did not know that ISPs have upstream and downstream limits for individual users (I know they do for businesses). Any idea how I can find that out?

sdsalsero July 17th, 2003 10:42 PM

Blackbird,
If your ISP hasn't complained, don't ask or bring their attention to it!

Trap_jaw,
I'm a network admin so I appreciate the concerns re network "elegance". However, an extra ping or two every hour from the dozen or so requestors that might be trying to get a popular file aren't going to overwhelm anybody. Now, obviously, there should be some reasonable limit on how long the s/w continues to try. Maybe set it the same as the Days To Keep Incompletes? That way, people could control it. Or, just default it to a week? That doesn't seem excessive to me. Insisting that end-users continue to hit Force Resume and/or Repeat Search is turning us into lab rats, continually hitting the button for food (or was it pleasure?). :-)

Blackbird July 17th, 2003 11:20 PM

Well, I already sent an e-mail to them, so too late.

trap_jaw4 July 18th, 2003 02:48 AM

Quote:

Originally posted by sdsalsero
However, an extra ping or two every hour from the dozen or so requestors that might be trying to get a popular file aren't going to overwhelm anybody.
One or two requests per hour don't hurt, that's right. The assumption that there are only a dozen requestors is a little optimistic. When I shut down LimeWire, I will get 500-2000 incoming connection requests per hour for quite a while (usually until my ISP resets my IP, which happens every 24 hours).

Quote:

Now, obviously, there should be some reasonable limit on how long the s/w continues to try. Maybe set it the same as the Days To Keep Incompletes? That way, people could control it. Or, just default it to a week? That doesn't seem excessive to me.
I would be very careful about that. The users with static IPs are probably a minority. Besides, I think very few people have the patience to wait a whole week.

Quote:

Insisting that end-users continue to hit Force Resume and/or Repeat Search is turning us into lab rats, continually hitting the button for food (or was it pleasure?). :-)
Well, the Repeat Search button was necessary because all requeries were banned to save the network. Maybe automated requeries will come back in the next version.



Copyright © 2020 Gnutella Forums.
All Rights Reserved.