Gnutella Forums


Scott Drysdale May 24th, 2004 03:13 PM

limewire (3.8.9, 3.8.10, 4.0.4) downloading from bearshare (4.5.0bXX)
 
i'm running bearshare betas (currently 4.5.0b44).

when limewire is downloading from bearshare, it sometimes thinks the file i have is larger than it really is. this causes limewire to ask for a final chunk which is too large. bearshare truncates the request to the actual number of bytes left in the file, and sends those bytes. limewire then requests the same (too large) chunk from bearshare, and round and round it goes until i cancel the upload manually.

1) limewire should pay attention to the file size on the source. in the case of a multi-source download, make sure all sources agree on the file size as well as the hash.

apparently, there is a bug in some versions of bearshare (not the version i'm running) in which a user can, for example, add an ID3 tag to an MP3 file, which changes its size, but bearshare fails to make a new hash for the modified file. this causes the hash on the bad bear and my bear to be the same, but the bad bear's file size is larger than mine, triggering the limewire problem.

there's also (i hear) a bug in shareaza's partial file sharing messages where the list of available ranges it advertises is wrong, leading to the same problem as above.

2) limewire should notice that it's attempting to download the same range more than once in a row and failing, and bail out after some number of attempts to prevent locking up the far end's upload slot.
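
something along these lines would do it - this is only a sketch with made-up names, not actual limewire code:

import java.util.HashMap;
import java.util.Map;

class RangeRetryGuard {
    private static final int MAX_IDENTICAL_ATTEMPTS = 3;   // give up after this many tries
    private final Map<String, Integer> attempts = new HashMap<String, Integer>();

    // returns false once the same host has been asked for the same range
    // too many times without progress, so the downloader can drop the source
    boolean shouldRetry(String host, long start, long end) {
        String key = host + ":" + start + "-" + end;
        Integer count = attempts.get(key);
        int n = (count == null ? 0 : count) + 1;
        attempts.put(key, n);
        return n < MAX_IDENTICAL_ATTEMPTS;
    }

    // call when a range actually completes so stale counters don't pile up
    void rangeCompleted(String host, long start, long end) {
        attempts.remove(host + ":" + start + "-" + end);
    }
}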

analysis of the problem can be found in the bearshare forums at this address:
http://www.bearshare.com/forum/showthread.php?t=28238

verdyp May 24th, 2004 04:50 PM

Limewire DOES pay attention to both the file size and the hashes exposed in query hits.
The problem occurs with some BearShare sources that expose incorrect lengths...
The fact that this bug is produced by Shareaza not monitoring its locally modified files, and by BearShare hosts that spread those files without verifying that what they downloaded from Shareaza matches its supposed SHA1, is not a LimeWire problem.
If it causes LimeWire to hammer BearShare sources, it's certainly a problem, but the origin is still BearShare not verifying the data it sends to the network.
LimeWire will integrate patches to detect the case of a file that has been modified on the BearShare host but that BearShare failed to rehash after the change.
If your BearShare is exposed to such hammering, you have a bug in BearShare that does not detect such changes in local files and still agrees to upload files whose SHA1 hash should have been updated. If BearShare had detected the change, it would not reply successfully to the incoming transfer request; it would return a 404.

LimeWire, by contrast, actively monitors possible changes to the local files being requested: when a transfer request comes in for a URN, the corresponding file is checked to see whether its size or modification date has changed. If a change is detected, the reply is a 404, the old entry is removed from the shared library, and the modified file is immediately rehashed for future query hits.
That's why LimeWire maintains a local cache of shared file properties (name, date, size, precomputed URNs...).
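
In rough Java terms, the check works like this (hypothetical names, not the actual LimeWire classes):

import java.io.File;

class SharedFileEntry {
    File file;            // file on disk
    long cachedSize;      // size recorded when the file was hashed
    long cachedModTime;   // last-modified time recorded when the file was hashed
    String sha1Urn;       // precomputed urn:sha1:... value
}

class StaleFileCheck {
    // true if the file on disk still matches what was hashed; if not, the
    // uploader should answer 404, drop the stale entry, and rehash the file
    static boolean stillValid(SharedFileEntry entry) {
        File f = entry.file;
        return f.exists()
            && f.length() == entry.cachedSize
            && f.lastModified() == entry.cachedModTime;
    }
}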

Scott Drysdale May 24th, 2004 05:27 PM

please read it CAREFULLY
 
please read the entire bearshare forum thread carefully.

the bearshare acting as the source and victim of limewire's infinite retries (my machine) DOES NOT have the file size bug, and even if it did, it wouldn't show up because i don't modify my files. SOME OTHER bearshare/shareaza source has lied to limewire, and limewire is using that incorrect value when trying to download from me.

i care a whole hell of a lot when you occupy my upload slots FOREVER with the SAME REQUEST that CANNOT BE SATISFIED BY ME AS A SOURCE because MY COPY OF THE FILE ISN'T AS LONG AS SOME OTHER SOURCE (and thus limewire) BELIEVES IT IS.

fix the infinite loop problem - that's a limewire bug that cannot be explained away as a bug in some other client. think of it as a lesson in error handling.

look into clipping requests to match THAT SOURCE'S file size, so you are ROBUST and won't be affected by other clients' bugs.
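
the clipping itself is trivial - a sketch with made-up names, assuming you know the size of the local copy:

class RangeClip {
    // clip a requested byte range to the size of THIS source's copy of the file;
    // returns null when the request starts past end-of-file (nothing to serve)
    // e.g. for a 1,000-byte file, clip(990, 2000, 1000) -> {990, 999}
    static long[] clip(long reqStart, long reqEnd, long localFileSize) {
        long lastValidByte = localFileSize - 1;
        if (reqStart > lastValidByte) {
            return null;
        }
        return new long[] { reqStart, Math.min(reqEnd, lastValidByte) };
    }
}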

what's so hard to grasp here?

verdyp May 24th, 2004 05:46 PM

Your arguments would be valid if only you said which Limewire version is causing such hammering. Can't you see that version number in your incoming download requests?
Maybe the looping problem is already fixed in LimeWire 4.0 and only affects previous releases (3.8.9 and 3.8.11 were common before 4.0 was released; all 3.9 versions were betas, which were much less deployed).

Users are upgrading quite fast now, and all 3.8 and 3.9 installations should upgrade soon. As 4.0 includes many more verifications of sources, the looping problem, which occurs only when there are several sources for the same hash, should rapidly disappear completely.

I did not say that we won't check the code and won't correct it; it's just that your analysis of the problem is not enough to isolate and solve the problem, which is caused by older versions of BearShare and Shareaza. We have to live with old or foreign servents, and that includes being able to detect corrupted sources, because it matters for the safety and security of transfers on Gnutella.

So far I have never seen this problem on my LimeWire: all downloads either completed successfully or stopped and could not be resumed simply because the source was no longer available. I did not detect any hammering.

What can be done in LimeWire is to make sure that a successful reply consisting only of the 10 bytes of fragment overhead is interpreted as the source not having the fragment we need.
We just need to check that the source sends more than these 10 bytes before retrying the same fragment.
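
A minimal sketch of that check (the constant and names are assumptions, not actual LimeWire code):

class FragmentProgressCheck {
    static final int OVERLAP_BYTES = 10;   // the fragment overhead described above

    // only worth retrying if the source returned more than the overlap itself,
    // i.e. the transfer actually made progress on new data
    static boolean madeProgress(long bytesReceived) {
        return bytesReceived > OVERLAP_BYTES;
    }
}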

Scott Drysdale May 24th, 2004 07:30 PM

Quote:

Originally posted by verdyp
Your arguments would be valid if only you said which Limewire version is causing such hammering. Can't you see that version number in your incoming download requests?
Maybe the looping problem is already fixed in LimeWire 4.0 and only affects previous releases (3.8.9 and 3.8.11 were common before 4.0 was released; all 3.9 versions were betas, which were much less deployed).

the title of the thread is "limewire (3.8.9, 3.8.10, 4.0.4) downloading from bearshare (4.5.0bXX)." what more limewire version info do you want?
Quote:

Originally posted by verdyp
I did not say that we won't check the code and won't correct it;
from the bearshare forum thread, posted by "pve", whoever that is:
"It won't be fixed. This 10-bytes overhead is not a bug, but needed for checking fast sources that incorrectly return fake SHA1 values."
Quote:

Originally posted by verdyp
it's just that your analysis of the problem is not enough to isolate and solve the problem, caused by older versions of BearShare and Shareaza.
talk to john lindh. he apparently (read the bearshare forum thread) DID determine that limewire is being confused by a shareaza (x-available-ranges) bug. later, we realized that there was a similar bearshare bug (modifying a file that's already being shared can fail to change the file's advertised hash on SOME versions of bearshare - but not the version i'm running).

in other words, one or more (buggy) bearshare sources may be advertising the same hash even though the file is different (longer) on those sources than on my source. limewire should know both the hash and size advertised by all sources. if hash1 == hash2, but size1 != size2, they're obviously not the same file. limewire should realize that. the hash CANNOT uniquely identify files - you need the size. of course, even then, it's possible to have hash1 == hash2 and size1 == size2 and still have different content, because the hash contains less info than the file itself - but different sizes obviously mean different files, regardless of hash.
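
treating (hash, size) as the file identity is a one-class change - a sketch in java, names made up, not actual limewire code:

final class FileIdentity {
    final String sha1;
    final long size;

    FileIdentity(String sha1, long size) {
        this.sha1 = sha1;
        this.size = size;
    }

    // two query hits only count as the same file if BOTH hash and size match
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof FileIdentity)) return false;
        FileIdentity other = (FileIdentity) o;
        return size == other.size && sha1.equals(other.sha1);
    }

    @Override
    public int hashCode() {
        return 31 * sha1.hashCode() + (int) (size ^ (size >>> 32));
    }
}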

or it could simply be some other kind of limewire bug, where when there are exactly 10 bytes left in the source, limewire comes up with some weird-*** end-of-last-chunk value. the limewire request/bearshare response pairs captured and posted in the bearshare forum thread seem to support this theory.

please understand the problem before considering or rejecting potential solutions.

verdyp May 24th, 2004 08:02 PM

You say: please understand the problem(...)

I have understood the problem...

You continue: (...) before considering or rejecting potential solutions.

I have NOT rejected potential solutions. But let's keep looking at the real issues: what is wrong in what LimeWire does, and why it does it. We won't break code that has proven extremely stable across all LimeWires and with most other servents just because of a few old BearShares or some Shareazas that send wrong SHA1 values.

The downloader code is constantly under scrutiny in LimeWire, and it is the part that takes the longest test time and has the most complex test suite, covering many cases (including ensuring that we can detect and cope with bugs and weirdness in other servents). The 10-byte overhead is needed in order to detect such weirdness from some sources.

But please stay calm. Correcting a bug that has not been reported to us before will take at least a few days. Don't flame LimeWire: you admit yourself that BearShare has its own bugs it must still cope with. It's really difficult to manage the case of possibly bogus sources, simply because we have to imagine all the possible errors or wrong assumptions others may have made in code we can't see ourselves.

LimeWire publishes its sources, so it's easy for others to check what LimeWire does. LimeWire, on the other hand, has no access to BearShare's sources.

OK, you report a problem, but this is only a symptom, not the cause and not a cure. What LimeWire does with a source it has detected (from query hits) as matching the size and hash of a searched file is not illogical. It certainly causes LimeWire to hammer some BearShares, but that was not detected before.

One final note: I am not a LimeWire employee. I contribute to LimeWire, audit and test the code, and propose solutions. It's much easier to start a flame with offensive insults and criticism than to try to help find solutions. LimeWire is not Shareaza; it has many internal and external developers working constantly to solve problems, improve network performance, and contribute innovative solutions: look at the most useful contributions that BearShare can also use now. LimeWire has been very active in describing them, documenting them, and discussing them with other developers on the GDF forum.

LimeWire has always considered bug reports carefully and scheduled them in the development agenda even before adding new features (there are lots of pending features that will come later, because solving bugs comes first).

zab May 25th, 2004 08:14 AM

Scott,

could you give us a request-response log of the problem with a limewire 4.0.x version? The code in question has changed significantly between 3.8.10 and 4.0.x, so this should not be happening.

Scott Drysdale May 12th, 2005 05:02 AM

"10-bytes-forever" redux
 
i'm running bearshare (currently 4.8.0b47, same problem with earlier bears). i don't know HOW many times i've reported this.

limewire apparently gets confused about file sizes. when downloading the last chunk of a file, it will do goofy things. currently seeing this with limewire 4.0.10 & 4.8.1, and 360share/4.2.6 (of course, i've seen this or some variation of it since lw 2.8.x).


example: file size = 5,968,335

while (power on)
{
    limewire requests:       5,968,325 - 5,970,621  (2,297 bytes, 2,287 bytes past EOF)
    bearshare responds with: 5,968,325 - 5,968,334  (10 bytes)
}
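
(for the record: with a 5,968,335-byte file the valid byte offsets are 0 through 5,968,334, so a chunk starting at 5,968,325 can never be more than 10 bytes long. bearshare clips the request and sends those 10 bytes, but limewire never updates its idea of where the file ends, so it immediately asks for the same 2,297 bytes again.)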

FIX THIS ALREADY! DAMMIT!

sberlin May 12th, 2005 06:19 AM

That shouldn't be happening with 4.8.1, but for what it's worth, pretty much all of downloading (from THEX to requesting to verifying to selection strategy to method names to variable names to serialization to godknowswhatelse) has been rewritten for the next release.

Scott Drysdale May 12th, 2005 06:47 AM

Quote:

Originally posted by sberlin
That shouldn't be happening with 4.8.1
it shouldn't be happening on ANY version :)

Quote:

but for what it's worth, pretty much all of downloading (from THEX to requesting to verifying to selection strategy to method names to variable names to serialization to godknowswhatelse) has been rewritten for the next release.
i've heard that song before, and the problem persists.

i'm really tired of waking up to find several limewires stuck there downloading the same 10 bytes over and over again. asking bearshare to abort the upload usually stops them, but some are quite persistent and immediately come back and do it again.

i recently went from 768/128 DSL to 5M/2M fiber, so:

1) i have more upload slots open. this means more stuck limewires (bad).

2) i have more upload bandwidth. this means stuck limewires don't take such a big bite out of my outgoing bandwidth (good).

it's looking like the only fix is to prevent limewire from downloading. you cannot begin to understand how tired i am of this stupid, easy-to-fix bug hanging around for years.

