Gnutella Forums


davidf01 January 24th, 2009 01:42 PM

(don't) rebuild the shared files database - virtual volumes would prevent a tedious re-indexing!
 
LW has a brittle database of shared files that cannot 'remember' volumes that are temporarily offline when LW restarts ...

... this fragile approach to persisting the shared-files database wastes a huge amount of time (20-30 hours!) for the user, the cpu, and other users of the network (not to mention the unnecessary wear & tear on the drives) when 10,000 files must be re-indexed from scratch after a volume is re-attached.

please allow the user to (explicitly) instruct LW to retain state info for the shared-files database, so that no re-indexing is required when LW restarts without some of the shared volumes attached.
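to be clear about what i'm asking for, here's a rough sketch (my own illustration with made-up names, not LW's actual code) of an index that keeps its entries when a volume is missing & just flags them offline:

Code:

import java.io.*;
import java.util.*;

// hypothetical sketch: keep hash records when a volume is missing,
// instead of wiping them & re-hashing everything later.
public class SharedFileIndex implements Serializable {

    static class Entry implements Serializable {
        String path;          // e.g. /Volumes/Media/song.mp3
        String sha1;          // previously computed content hash
        long size;
        long lastModified;
        boolean online;       // false while the volume is unattached
    }

    private final List<Entry> entries = new ArrayList<Entry>();

    // on startup: flag entries on missing volumes as offline rather
    // than deleting them; re-hash only files that actually changed.
    public void refresh() {
        for (Entry e : entries) {
            File f = new File(e.path);
            e.online = f.exists()
                && f.length() == e.size
                && f.lastModified() == e.lastModified;
        }
    }

    // persist the whole index, offline entries included, so a restart
    // with a volume unattached loses nothing.
    public void save(File db) throws IOException {
        ObjectOutputStream out =
            new ObjectOutputStream(new FileOutputStream(db));
        try { out.writeObject(entries); } finally { out.close(); }
    }
}

when the volume comes back, refresh() just flips its entries back online - zero re-hashing unless a file's size or mtime actually changed.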

surely spotlight & especially time machine - on the mac - already do most of the heavy lifting required to maintain a virtual file system.

(yes, i realize that cross-platform functionality in LW is an important goal; but surely this kind of forking is an acceptable variance, since it does not affect the actual gnutella protocol itself).

indeed, with a proper session manager it should be possible to load a variety of different shared-files databases into LW (just as disk images can be used to load different virtual machine states into a hypervisor such as vmware). Even simple archiving of the database might be a good enough work-around, compared to a full virtual file system like time machine.

However, perhaps the best approach is an intermediate one (between multiple dumb archives and a full-fledged virtual file system): use a container that is optimized for structured storage. lotus & apple devised Bento for this kind of task in the 90's, and ibm also used it for opendoc; in this decade, sun's ZFS has a snapshot feature which might also be a useful intermediate solution (and ZFS is built in to osx snow leopard).
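even the dumb-archive version of that session manager is almost trivial - something like this (again, just my own hypothetical sketch, not LW's code):

Code:

import java.io.*;

// hypothetical "session manager": each shared-files database is a
// plain file that can be archived & swapped back in, like a vm image.
public class LibrarySessionManager {

    private final File sessionDir;

    public LibrarySessionManager(File sessionDir) {
        this.sessionDir = sessionDir;
    }

    // archive the live database under a session name (dumb copy).
    public void snapshot(File liveDb, String name) throws IOException {
        copy(liveDb, new File(sessionDir, name + ".db"));
    }

    // restore a previously archived database over the live one.
    public void restore(String name, File liveDb) throws IOException {
        copy(new File(sessionDir, name + ".db"), liveDb);
    }

    private static void copy(File src, File dst) throws IOException {
        InputStream in = new FileInputStream(src);
        try {
            OutputStream out = new FileOutputStream(dst);
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            } finally { out.close(); }
        } finally { in.close(); }
    }
}

a structured-storage container or zfs snapshots would do this more elegantly, but even dumb copies beat 20-30 hours of re-hashing.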

bottom line is that a memory feature for temporarily unattached drives would make limewire much more manageable.

anyone else being driven crazy by losing 24 hours of machine time while the shared index is being rebuilt?

Blackhorse 70V January 24th, 2009 02:15 PM

Most users don't share 10,000 files.

Lord of the Rings January 24th, 2009 02:21 PM

LW has always had problems with external drives in my experience. That's why I gave up on the concept. Other p2p programs seem to be able to use external drives much more efficiently.

And yes, if your drive is offline when LW starts up, the library preferences will wipe clean all record of such drives & their files. In my experience LW seems to have problems reading from such drives & remembering them.

I share a lot of files, but they are on internal drives. I have four internal 1 TB drives, so that's plenty. Plus a lot of ram & cpu power to cope with them.

OLDUSER January 24th, 2009 02:41 PM

Hashing
 
Hello,
yes, hashing takes a long time.
if you use ubuntu, for example, you have to mount the internal hard drive that contains the shared folder at startup; otherwise, when you start limewire without that drive mounted, you have to wait through the long hashing operation again.
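for example, an /etc/fstab entry like this mounts the drive at every boot (the uuid and mount point here are only placeholders, use your own):

Code:

# /etc/fstab -- placeholder values, adjust for your drive
UUID=0000-PLACEHOLDER  /media/shared  ext3  defaults  0  2

you can find the real uuid of your drive with the blkid command.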
Hashing was also faster with the older 4.12.x versions of limewire.

Olduser

davidf01 January 24th, 2009 03:11 PM

obscurity is not a good reason for a bad design
 
Quote:

Originally Posted by Blackhorse 70V (Post 336939)
Most users don't share 10,000 files.

wow.

it's hard to imagine someone seriously saying that sound engineering principles (like scalability) should actively be discouraged just because the current load factors don't require them!

that was the same brain-dead contempt for architectural planning that repeatedly took twitter down as it tried to scale up from some lame ruby-on-rails code hacked together with spit & rubber bands.

that engineering team was replaced with some pros who had actually studied SW engineering (god forbid anyone would ever use UML to get it right the first time).

anyways, once short-cuts become the accepted work ethic in one part of the code, this attitude usually ends up percolating through to the rest of it. Any job worth doing is worth doing well.

also, i do realize that there is a certain platform-based work ethic involved in my criticism of such lackadaisical standards: in the windows community (and even in the linux world), the standard is usually 'good enough' ... but mac users expect 'insanely great'.

btw: the re-indexing is brutal even with only 1,000 files!

ps: just what kind of examination is being made of each file to (re)build this index? - is the standard metadata in the headers not sufficient? ... or does LW literally read through every byte inside the file?! (if so, then what else is it scouring for?)
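(my guess at the answer: gnutella identifies files by urn:sha1, so the client presumably has to run every byte of every file through a sha-1 digest - roughly like this, which is just my illustration & not LW's actual code:)

Code:

import java.io.*;
import java.security.*;

// rough illustration of full-content hashing (not LW's actual code).
// gnutella identifies files by urn:sha1, so every byte must be read.
public class FileHasher {
    public static String sha1Hex(File f)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        InputStream in = new FileInputStream(f);
        try {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n); // the whole file goes through the digest
            }
        } finally {
            in.close();
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}

which would explain why re-indexing scales with total bytes rather than file count - a few dvd-sized files can take as long as thousands of mp3s. (the real urns are base32-encoded rather than hex, but the byte-reading cost is the same.)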

Lord of the Rings January 24th, 2009 06:25 PM

I found LW 5 extremely slow at hashing files (the LW 5 alpha/beta requires Java 6/1.6). That 'might' be a mac java issue, since Java 6 has issues on the mac. I ran the LW 5 alpha on a different OSX installation on the same computer. For example, at one point I shared around 10,000 files & hashing took almost forever. Even fewer shared files took ages, as you suggested, & this included very small files being slow to hash.

LW 4 seems to hash much faster under Java 1.5 on a mac. Though yes, it can still take a long time for lots of large files, especially those over 1 GB in size.

At least the gnutella protocol has allowed larger files in recent years, instead of being limited to less than 1 GB. I am still unsure of the exact limit now.

Perhaps the LW programmers should look around more, at Direct Connect for example, which is a program I used to enjoy greatly. They've obviously taken a look at torrents & done their own very limited version of working with them.

In LW 5 they have created a friends concept for sharing files ... I suspect it was more or less copied from the similar concept in torrent clients.

In a very similar fashion to Azureus/Vuze's 3-level set-up ... I hope LW introduces a 2-3 level user set-up. LW 5 seems to be designed for those who know nothing. For those of us who have been long-time users & 'are' interested in the small details, both from a viewing/analyzing point & from a download/upload set-up point, an optional simple-to-advanced set-up giving more options to the user would be good IMHO.

