Gnutella Forums

Gnutella Forums (https://www.gnutellaforums.com/)
-   General Discussion (https://www.gnutellaforums.com/general-discussion/)
-   -   Loosing download candidates... (https://www.gnutellaforums.com/general-discussion/6497-loosing-download-candidates.html)

javier__d December 18th, 2001 02:56 AM

Losing download candidates...
 
Hi.

When downloading multiple files from the same host, it very often happens that a few of them start to download and the rest lose their download candidate for that host. I then have to go to the search window and add the candidate (the same file from the same host) manually. When the first files complete, the ones I added manually start to download without any problem.
When I have, for example, 10 files I did not find anywhere else and the host has its maximum uploads per host set to 2, I have to re-add the same files up to 10 times, depending on download speed. For more files this could be much worse.

:confused: What is going on?

J.

GregorK January 7th, 2002 01:38 AM

Can you check or post the log screen content of a download where this happens?
You can copy it with Ctrl-C and paste it with Ctrl-V.

It might be that the hosts are dropped because Phex can't connect to them directly or because the PUSH route to the host has expired.

Gregor

Unregistered January 10th, 2002 10:27 PM

Same here on the latest version
 
It used to be that the options for each file would keep at least one candidate.

Now with 0.6.1 I have to manually search (and the files seem to be there), and then there is the bug that I need to add them one by one with 'Add download candidate' (selecting groups never worked), and each time it also switches screens (which is no problem on a machine as fast as mine - but it is inconvenient anyway).

At any rate, it used to be that retries would count up into the hundreds - now all candidates are lost after the first try!

How can the new Phex be so sure that a candidate will not answer sooner or later?

I suggest (instead of just deleting the candidate references) devising a system that rates the candidates, retries the poor ones less often, and automatically searches again only when faith in all candidates is lost (only poor options left).
This way the system could use far less search bandwidth and stay in lurk-and-retry mode, even waiting until a candidate that had gone offline comes back the next day, all without generating much extra search traffic!!! This would increase the overall efficiency of the network!
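
A minimal sketch of what such a rating-and-backoff scheme could look like, in Java (hypothetical code, not taken from the Phex source; all class and method names are invented): each candidate keeps a failure count, retries are spaced out further and further for poor candidates, and a new search is only triggered once every remaining candidate is rated poor.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a candidate-rating scheme; none of these classes
// exist in Phex, they only illustrate the idea suggested in the post.
class RatedCandidate {
    final String hostAddress;
    int failures = 0;              // consecutive failed connection attempts
    long nextRetryTime = 0;        // earliest time (ms) we may try again

    RatedCandidate(String hostAddress) { this.hostAddress = hostAddress; }

    void recordFailure() {
        failures++;
        // back off exponentially: 1 min, 2 min, 4 min, ... capped at 1 hour
        long backoffMillis = Math.min(60_000L << Math.min(failures - 1, 6), 3_600_000L);
        nextRetryTime = System.currentTimeMillis() + backoffMillis;
    }

    void recordSuccess() { failures = 0; }

    boolean isPoor() { return failures >= 5; }

    boolean readyForRetry() { return System.currentTimeMillis() >= nextRetryTime; }
}

class CandidateManager {
    private final List<RatedCandidate> candidates = new ArrayList<>();

    void add(RatedCandidate candidate) { candidates.add(candidate); }

    /** Pick the next candidate that is due for a retry, best-rated first. */
    RatedCandidate nextCandidate() {
        return candidates.stream()
                .filter(RatedCandidate::readyForRetry)
                .min((a, b) -> Integer.compare(a.failures, b.failures))
                .orElse(null);
    }

    /** Only fall back to a new (bandwidth-costly) search when every candidate is poor. */
    boolean shouldSearchAgain() {
        return !candidates.isEmpty()
                && candidates.stream().allMatch(RatedCandidate::isPoor);
    }
}

The point of the sketch is that a candidate is never deleted merely for failing; it just moves to the back of the queue, so search traffic is only spent when the whole list has gone stale.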

Anyhow thanks for your time !

Unregistered January 13th, 2002 09:36 PM

OK, I verified it is truly a problem...
 
I had found this series of scientific talks.
Two of them were at the same host.
Luckily the first one started immediately...
The second one, listing the same IP, I guess tried and failed, and was left without a candidate after less than 2 minutes,

while the first was still downloading from the same IP.

That sounds majorly wrong, doesn't it?

Any recommendations as to which file of the source or XML I could fix that in?

cheers

togo

Unregistered January 13th, 2002 09:56 PM

OK, I suspect it is in
src/download/DownloadFile.java

/**
* The HOST_BUSY status is handled like an error status currently
*/
public static final int HOST_BUSY = 6;

But I'd better be honest - I have a lot of other stuff prioritized...
And it is code written by others - I don't know how long it would take me to see through it...

But please, let somebody who already 'gets it' make a fix :)

GregorK January 17th, 2002 08:42 AM

OK guys...

There might be a bug in it that I don't know about. But every removal of a download candidate is clearly logged in the little log screen. If you see any removal that you think is not right, please post that screen content to me.

But you can be sure that no host is removed if it returns a 503 host-busy signal.
Hosts will be removed if they return 404 or 410, which means the file is not shared (anymore).
Hosts will also be removed if they can't be reached with either a direct connection attempt or a PUSH. That is because we don't know whether the host is firewalled; if we kept it in the list, there is a very big chance you would never reach it.
There are also some other severe errors on which we drop the host, since Phex does not understand the host very well. ;-)
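
Put as code, the policy described above would look roughly like this (an illustrative sketch only, not the actual Phex source; the "other severe errors" rule is approximated here as any other error status):

// Illustrative sketch of the removal policy described above;
// not taken from the Phex source, just the stated rules in code form.
enum CandidateAction { RETRY_LATER, REMOVE }

final class RemovalPolicy {

    /** Decide what to do with a candidate based on the HTTP status it returned. */
    static CandidateAction onHttpStatus(int statusCode) {
        switch (statusCode) {
            case 503:               // host busy / upload limit reached -> keep and retry
                return CandidateAction.RETRY_LATER;
            case 404:               // file not found
            case 410:               // file no longer shared
                return CandidateAction.REMOVE;
            default:                // other severe errors the client does not understand
                return statusCode >= 400 ? CandidateAction.REMOVE
                                         : CandidateAction.RETRY_LATER;
        }
    }

    /** A candidate that answers neither a direct connect nor a PUSH is dropped,
     *  because it may be firewalled and might never be reachable again. */
    static CandidateAction onConnectFailure(boolean pushAlsoFailed) {
        return pushAlsoFailed ? CandidateAction.REMOVE : CandidateAction.RETRY_LATER;
    }
}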

OK, there can be some optimizations done here or there. But the thing is, we have to rewrite the whole download engine for swarming anyway. So I suggest: if you find problems, send me the log so we can fix them quickly, or just live with the bugs and the not-so-good rating of download candidates, and we can talk about it again after we release swarming.

Gregor

Unregistered January 22nd, 2002 12:54 AM

OK, I have been checking the log frequently and haven't found anything that indicates a reason for my suspicion - it seems that it is the hosts that did not answer that get removed...

Could it be a question of the timeout - or that their line is too busy?

I mentioned that I had found two individual files on the same host and one started right away - but the other lost that same host and, as a result, had to generate unnecessary search traffic if it wanted to get the file again...

What surely can be said is that the efficiency of Phex has gone down a few hundred percent over the last few versions, and I suspect the network traffic is up by the same factor!!!

It is wrong to drop a host from the list for not responding one to x times...

Also, next I am going to post this in the Host catcher thread:

I would like to see a filter, because I suspect that my default port is blocked here at the university - so why waste so many attempts on hosts with blocked ports from my catcher when reconnecting? When I choose one of the random ports it connects just fine - if I don't, it takes forever to reconnect!
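
A sketch of the kind of filter meant here (hypothetical; the class names are invented and no such option exists in the version discussed): drop host-catcher entries whose port is known to be blocked locally before wasting a connection attempt on them.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical host-catcher filter; the class names are made up and only
// illustrate the suggestion, they are not part of Phex.
final class CaughtHost {
    final String ip;
    final int port;
    CaughtHost(String ip, int port) { this.ip = ip; this.port = port; }
}

final class PortFilter {
    // Ports the local firewall is assumed to block (6346 is the Gnutella default).
    private final Set<Integer> blockedPorts;

    PortFilter(Set<Integer> blockedPorts) { this.blockedPorts = blockedPorts; }

    /** Drop catcher entries with blocked ports so reconnecting doesn't stall on them. */
    List<CaughtHost> filter(List<CaughtHost> catcherEntries) {
        List<CaughtHost> usable = new ArrayList<>();
        for (CaughtHost host : catcherEntries) {
            if (!blockedPorts.contains(host.port)) {
                usable.add(host);
            }
        }
        return usable;
    }
}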

Many thanks, Greg and all!!!

Unregistered January 22nd, 2002 01:07 AM

What would be a luxury
 
If there were a menu to select for which type of bad response a download candidate should be 'forgotten' - maybe even with a counter and the option to say: if error X occurs 1 to n times, then ...

Object-oriented programming is so cool!

I seriously intend to get familiar with the Phex objects too - right after I get my priority project onto that desired plateau!

Everybody - you people are the wildest :) !

GregorK January 22nd, 2002 01:37 AM

Quote:

Originally posted by Unregistered
I mentioned that I had found two individual files on the same host and one started right away - but the other lost that same host and, as a result, had to generate unnecessary search traffic if it wanted to get the file again...

This can happen if the host is firewalled. To find the host, Phex has to reverse the route that the query hit took. Let's assume host B has the file and you send a query to A. A forwards it to B. Now B returns the search result back to A, and A sends it back to you. If the host is firewalled, you have to try a PUSH request, since you can't connect directly to B. So you tell A that you would like to send a PUSH to B, and A forwards it to B. If everything goes right, B now connects to you.
But this whole thing does not work if, in the meantime, either your host decides to drop its neighbor A or host A decides to drop its neighbor B. Then the route is lost and can only be re-established with a new query.
And if the other servent is many hops away, your chance of reaching it gets smaller and smaller with every hop.
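
As an illustration of the route-reversal mechanism described here (a simplified sketch with invented names, not Phex's actual implementation): the client remembers, per servent GUID, which neighbor connection a query hit arrived on, and a later PUSH request can only be forwarded if that entry is still fresh and the neighbor is still connected.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of query-hit route reversal; the class and method names
// are invented for illustration and are not Phex's actual code.
final class PushRouteTable {

    static final class RouteEntry {
        final String neighborConnection;   // e.g. "A" in the example above
        final long lastSeen;
        RouteEntry(String neighborConnection, long lastSeen) {
            this.neighborConnection = neighborConnection;
            this.lastSeen = lastSeen;
        }
    }

    /** Maps the GUID of the servent that produced a query hit to the neighbor
     *  connection the hit arrived on, plus the time we last saw it. */
    private final Map<String, RouteEntry> routes = new ConcurrentHashMap<>();
    private final long routeTimeoutMillis;

    PushRouteTable(long routeTimeoutMillis) { this.routeTimeoutMillis = routeTimeoutMillis; }

    /** Called when a query hit from servent 'serventGuid' arrives via 'neighbor'. */
    void rememberRoute(String serventGuid, String neighbor) {
        routes.put(serventGuid, new RouteEntry(neighbor, System.currentTimeMillis()));
    }

    /** Returns the neighbor to forward a PUSH request to, or null if the route
     *  has expired - in that case it can only be re-established by a new query. */
    String routeForPush(String serventGuid) {
        RouteEntry entry = routes.get(serventGuid);
        if (entry == null) return null;
        if (System.currentTimeMillis() - entry.lastSeen > routeTimeoutMillis) {
            routes.remove(serventGuid);
            return null;
        }
        return entry.neighborConnection;
    }
}

Once the entry expires or the neighbor connection is dropped, routeForPush returns null and the only way back to B is a new query, which is exactly the broken-route case described above.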

In your case, maybe one of the PUSH routes was still valid, but it broke in the meantime, before the request for the second file could get through.

Gregor

Unregistered January 25th, 2002 10:04 PM

Curious behaviour coincidentally noticed!
 
Check this out. This was the situation: I had 2 candidates that stayed in my list for several retries.
Then I manually added one (24.186.174.241:6346),
and when it failed and got removed, the second (http://80.132.186.12:8080) disappeared at the same time, even though it wasn't even the current one (that was the first now (http://213.139.138.213:5635), which continued to retry...). I coincidentally saw that while checking the log...
Here is an excerpt:
===================
Connect http://213.139.138.213:5635/get/173/Final Fantasy DVD rip DivX.avi
Normal connection ok
Send download handshake: GET /get/173/Final Fantasy DVD rip DivX.avi HTTP/1.0
User-Agent: PHEX 0.6.1 (release)
Range: bytes=279687640-

Remote host replies: HTTP 503 UPLOAD LIMIT REACHED
Remote host is busy.
Start download.
position to read=279687640
Download name=D:\Download\Final Fantasy.avi.dl
Connect http://80.132.186.12:8080/get/275/final-fantasy (engl) (divx).avi
Normal connection ok
Send download handshake: GET /get/275/final-fantasy (engl) (divx).avi HTTP/1.0
User-Agent: PHEX 0.6.1 (release)
Range: bytes=279687640-

Remote host replies: HTTP/1.1 503 SERVER BUSY
Remote host is busy.
Start download.
position to read=279687640
Download name=D:\Download\Final Fantasy.avi.dl
Connect http://24.186.174.241:6346/get/7/Fin...p-Divx-TDF.avi
Normal connection failed. Operation timed out: connect
Try push request.
7:1BC9C5076962E89DFF495BC102E3AF00
Wait for connection from sharing host
Time out waiting for connection from sharing host
Error: java.io.IOException: Time out on requesting push transfer.

========================

Also, I am noticing now that the administration people seem to have blocked the default port - I don't get anything on 6346 anymore!
Could you set it up so all future Phex clients default to random ports?
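
For what it's worth, a tiny sketch of how a client could choose a random listening port on first start instead of the well-known default (hypothetical code, not how Phex currently behaves):

import java.util.Random;

// Hypothetical example of choosing a random listening port at first start,
// instead of the well-known Gnutella default 6346; not actual Phex code.
final class RandomPortChooser {
    /** Pick a port in the unprivileged range 1024-65535, avoiding 6346/6347,
     *  which site firewalls often block precisely because they are the defaults. */
    static int chooseListeningPort() {
        Random random = new Random();
        int port;
        do {
            port = 1024 + random.nextInt(65_536 - 1024);
        } while (port == 6346 || port == 6347);
        return port;
    }
}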

Thanks

T

