Gnutella Forums (https://www.gnutellaforums.com/)
LimeWire Beta Archives -- LimeWire 4.1.4 Beta
(https://www.gnutellaforums.com/limewire-beta-archives/27366-limewire-4-1-4-beta.html)

et voilà August 15th, 2004 09:54 AM

Sorry Jum, I should have said that no, I am not running as an UP. In fact, this problem has happened since I updated to the Java 1.4.2-05 Developer Preview 3 (the biggest mistake I've made; that Java performed really poorly). I downloaded last week's updated version and the problem is less obvious, but I think this is something about the new Apple Java. This morning LW was using over 200 MB of RAM and 700 MB of virtual memory!!! It took 30 seconds before the Dock appeared (autohide is on).

Ciao

jum August 15th, 2004 02:19 PM

Quote:

Originally posted by et voilà
Sorry Jum, I should have said that no, I am not running as an UP. In fact, this problem has happened since I updated to the Java 1.4.2-05 Developer Preview 3 (the biggest mistake I've made; that Java performed really poorly). I downloaded last week's updated version and the problem is less obvious, but I think this is something about the new Apple Java. This morning LW was using over 200 MB of RAM and 700 MB of virtual memory!!! It took 30 seconds before the Dock appeared (autohide is on).

I do not think that it is related to the Java VM version, as I tried it under Windows as well as OS X and the same thing happens. I ran it under XP and got the same problem, although the way the system tells me that I am out of memory is a bit different under OS X than under XP.

et voilà August 15th, 2004 02:59 PM

So it might be related to the idle code in LW? I don't know; that idle code was merged nine months ago.

Sam, any idea?

Ciao ;)

sberlin August 15th, 2004 04:03 PM

It's likely not related to any idle code -- that code doesn't store anything in memory, it just looks at a single value. Out-of-memory problems tend to arise only after a long delay, because it takes that long for enough items to accumulate.

Perhaps it is related to Sun caching all IPs in memory forever. We'll experiment with changing that setting (if it is changeable?) and see how that helps.
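For reference, the cache in question is most likely the JVM's InetAddress cache, which older Sun VMs keep forever by default. A minimal sketch of tuning it with the standard Sun security and system properties (the 60-second TTL is just an example value):

    import java.security.Security;

    public class DnsCacheTuning {
        public static void main(String[] args) {
            // Successful DNS lookups are cached indefinitely by default on
            // older Sun JVMs; a finite TTL lets stale entries expire and
            // their memory be reclaimed.
            Security.setProperty("networkaddress.cache.ttl", "60");
            Security.setProperty("networkaddress.cache.negative.ttl", "10");
            // The same effect via a system property on Sun JVMs:
            //   java -Dsun.net.inetaddr.ttl=60 ...
        }
    }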

Of course, we're always on the lookout for memory leaks.

Regarding corruption detection -- right now the download code is not structured in a way that lets us easily determine which host caused the corruption. This will be done at some point, but for now LimeWire simply tries the download six times, and if the file is still corrupted after the sixth attempt it gives up.
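A minimal sketch of that retry policy (the Download interface here is illustrative, not LimeWire's actual API):

    /** Illustrative sketch of the six-attempt retry policy described above. */
    interface Download {
        void run();                    // swarm the file from available sources
        boolean matchesExpectedHash(); // whole-file hash check on completion
        void discardCorruptData();     // throw away the corrupt copy
    }

    class RetryPolicy {
        private static final int MAX_ATTEMPTS = 6;

        static boolean downloadWithRetry(Download d) {
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                d.run();
                if (d.matchesExpectedHash())
                    return true;            // verified successfully
                d.discardCorruptData();     // corrupted: start over
            }
            return false;                   // give up after the sixth attempt
        }
    }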

et voilà August 15th, 2004 05:41 PM

Salut Sam, could it simply be that LW is not flushing upload buffers (from the hard disk to RAM and then to virtual memory)? It happens often when there are simultaneous uploads of a single 700 MB file and no other computer activity. It could also happen for downloads, but I'm mostly uploading.

About the corruption, here is what I think LW should do: LW should remember the swarming hosts and quarantine them when there are more sources to try. If there are no more sources, it should split the sources in two and try half of them; if there is still corruption, try the other half.
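A rough sketch of how that bisection could work, assuming the corruption is reproducible and comes from a single hostile host (all names are hypothetical):

    import java.util.ArrayList;
    import java.util.List;

    /** Hypothetical sketch of the proposal above: when a swarmed download
     *  keeps coming back corrupt, retry with half of the sources at a time
     *  to narrow down which host is poisoning it. */
    class SourceBisector {
        interface Host {}
        interface Downloader { boolean downloadsCorrupt(List<Host> sources); }

        /** Returns the single source that still reproduces the corruption. */
        static List<Host> isolate(List<Host> sources, Downloader dl) {
            if (sources.size() <= 1)
                return sources;                    // one suspect left
            int mid = sources.size() / 2;
            List<Host> firstHalf  = new ArrayList<Host>(sources.subList(0, mid));
            List<Host> secondHalf = new ArrayList<Host>(sources.subList(mid, sources.size()));
            return dl.downloadsCorrupt(firstHalf)  // corruption reproduced?
                    ? isolate(firstHalf, dl)
                    : isolate(secondHalf, dl);
        }
    }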

Ciao

sberlin August 15th, 2004 06:00 PM

Hmm -- I don't think there's any flushing of buffers that could cause memory problems. However, it could be something related to uploads. If you can recall anything else of interest that happens before memory is gobbled up, it'd be very helpful.

Splitting the sources into two is a great idea, but one that is hard to implement in practice. If you have four sources and try two of them, what happens if you made some progress with them and then they went offline? Do you fall back to the other two? If so, and corruption is noticed after the download is over, which sources do you retry? It would be much simpler to just do it the way it's supposed to be done -- verify as it's downloaded, but it's not possible to do that right now. It _will_ be done that way at some point, though.

et voilà August 15th, 2004 06:16 PM

Quote:

Originally posted by sberlin
verify as it's downloaded, but it's not possible to do that right now. It _will_ be done that way at some point, though.
I'm not sure what you mean here. :confused: LW isn't checking downloads against the tiger tree before the final hashing? (Well, it looks that way as of now.) Even with LW verifying a tiger tree chunk that was corrupted while downloading, you'll have to quarantine hostile clients to finish the download. Yesterday's Gnucleus host was only 1 of 75 sources, but it always had open upload slots with great speed -- that seems intentional to me. LW has to protect downloads from those hostile clients.

An optimisation would be for LW to request the sub tiger trees for an often-corrupted chunk and disable swarming for those small chunks, so that LW identifies the corrupting host. But LW would have to request the sub tiger trees from a trusted host: that seems difficult, unless you are BearShare (read: non-hijackable because it's closed source).
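For illustration, a hedged sketch of the tiger-tree arithmetic involved, assuming a Tiger digest implementation such as BouncyCastle's TigerDigest. Per the THEX spec used on Gnutella, a leaf hashes a 0x00 byte plus a 1024-byte segment, and an internal node hashes a 0x01 byte plus its two child hashes:

    import java.util.Arrays;

    import org.bouncycastle.crypto.Digest;
    import org.bouncycastle.crypto.digests.TigerDigest;

    /** Hedged THEX-style sketch: verify one downloaded chunk against the
     *  tiger-tree node covering it, assuming BouncyCastle's Tiger digest. */
    class TigerTreeCheck {
        private static final int SEGMENT = 1024;   // THEX base segment size

        static boolean chunkMatches(byte[] chunk, byte[] expectedSubtreeRoot) {
            return Arrays.equals(subtreeRoot(chunk), expectedSubtreeRoot);
        }

        // Leaf hash: Tiger(0x00 || segment)
        private static byte[] leaf(byte[] data, int off, int len) {
            Digest d = new TigerDigest();
            d.update((byte) 0x00);
            d.update(data, off, len);
            byte[] out = new byte[d.getDigestSize()];
            d.doFinal(out, 0);
            return out;
        }

        // Internal node: Tiger(0x01 || left || right)
        private static byte[] internal(byte[] left, byte[] right) {
            Digest d = new TigerDigest();
            d.update((byte) 0x01);
            d.update(left, 0, left.length);
            d.update(right, 0, right.length);
            byte[] out = new byte[d.getDigestSize()];
            d.doFinal(out, 0);
            return out;
        }

        private static byte[] subtreeRoot(byte[] chunk) {
            int n = Math.max(1, (chunk.length + SEGMENT - 1) / SEGMENT);
            byte[][] level = new byte[n][];
            for (int i = 0; i < n; i++)
                level[i] = leaf(chunk, i * SEGMENT,
                                Math.min(SEGMENT, chunk.length - i * SEGMENT));
            while (n > 1) {                 // pair up; an odd node promotes
                int m = 0;
                for (int i = 0; i < n; i += 2)
                    level[m++] = (i + 1 < n) ? internal(level[i], level[i + 1])
                                             : level[i];
                n = m;
            }
            return level[0];
        }
    }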

Ciao ;)

Edit: concerning memory leaks in uploads: I see MagnetMix has America's Army ready to download. Suggestion: set up a LW instance with AA shared and monitor memory consumption after idle time and concurrent uploads. I do not have a console in LW to help you debug!
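Something like this minimal heap logger is all the "console" that experiment would need (a sketch, not actual LimeWire code):

    /** Logs heap usage once a minute so that growth during idle or
     *  concurrent-upload periods shows up without a debugger attached. */
    class MemoryLogger extends Thread {
        public void run() {
            Runtime rt = Runtime.getRuntime();
            while (true) {
                long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
                System.err.println(new java.util.Date()
                                   + "  heap used: " + usedKb + " KB");
                try {
                    Thread.sleep(60 * 1000);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }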

trap_jaw4 August 15th, 2004 11:45 PM

Quote:

Originally posted by sberlin
verify as it's downloaded, but it's not possible to do that right now. It _will_ be done that way at some point, though.
I'm already doing it for my BitTorrent downloads. The last time I tried it with HTTP downloads it didn't really work out, but the infrastructure is basically there and is designed to be used by HTTP downloads as well.

trap_jaw4 August 15th, 2004 11:48 PM

Quote:

Originally posted by et voilà
I'm not sure what you mean here. :confused: LW isn't checking downloads against the tiger tree before the final hashing? (Well, it looks that way as of now.) Even with LW verifying a tiger tree chunk that was corrupted while downloading, you'll have to quarantine hostile clients to finish the download. Yesterday's Gnucleus host was only 1 of 75 sources, but it always had open upload slots with great speed -- that seems intentional to me. LW has to protect downloads from those hostile clients.
I have noticed that there seems to be some correlation between the amount of file corruption and the number of GnucDNA-based clients you download from. I haven't been able to verify it yet because the GnucDNA source code isn't very readable -- plus the source doesn't seem to be available via SourceForge's CVS anymore (although that might just be a temporary server failure).

I have also been working on an improved security feature that allows banning hostile vendors a little more easily. If it makes its way into the mainline, you could simply edit your limewire.props file to ban them instead of editing the sources.
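Purely as an illustration of what such a setting might look like -- the property name BANNED_VENDOR_IDS and the parsing below are assumptions, not trap_jaw4's actual patch:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Properties;
    import java.util.Set;

    /** Hypothetical vendor filter driven by a limewire.props entry such as
     *  BANNED_VENDOR_IDS=GDNA;MUTE (property name and values invented here). */
    class VendorFilter {
        private final Set<String> banned = new HashSet<String>();

        VendorFilter(Properties limewireProps) {
            String raw = limewireProps.getProperty("BANNED_VENDOR_IDS", "");
            if (raw.length() > 0)
                banned.addAll(Arrays.asList(raw.split(";")));
        }

        /** True if a handshake's vendor string is on the ban list. */
        boolean shouldBan(String vendor) {
            return banned.contains(vendor);
        }
    }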

RKHambridge August 16th, 2004 03:56 AM

Facilities "Lost" in 4.1.x
 
FYI

I've recently downloaded the Beta versions 4.1.3 and 4.1.4

However, I find that I can no longer perform the following from the Library:

1. Edit file names;

2. Drag files into sub-folders.

In both cases I have to use the Explore button and do it in the Apple Finder :confused:

I'm using a Mac G4, OS X 10.2.8

