99% Hurdle & Reusing Fragments

I am seeing the strange problem that a lot of large files (>300 MB) download nicely while there are still big sections left to fetch, then get stuck near 99%. I still haven't figured out how to attach screengrabs, but I'll supply some to show that sometimes a segment is either out of sequence or somehow overrun, because the next one starts at an offset that would be negative relative to the previous one...

That also brings me to the restarting-fragments issue. I tried it: when I copy Initial_Fragment.file to Sg0Initial_Fragment.file, Phex shows that the segment has several hundred percent of the expected 1 megabyte. How do I do it right? Or better, could someone please check the code to see what could be done?

The ideal long-term dream would of course be a feature to edit, review, abort, delete or insert segments. I know I am asking much, so relax - I'd like to help, but I am missing an advanced way of exploring other people's code, and it is just overwhelming when you have a few other projects too...
I think I understand now !

It's the continuous fragmenting that makes it possible to 'use' the download candidates: even a one-megabyte piece is sub-fragmented again and again... At what point does the program decide not to fragment any further and just leave the segment on a first-come basis? Shouldn't that be adjustable?
Re: I think I understand now !

It is adjustable... in the Phex sources. In the file phex/download/swarming/SWDownloadConstants.java you can change:

    public static final int DEFAULT_SEGMENT_LENGTH = 1024 * 1024;

and

    public static final int RESIZE_BOUNDARY_SIZE = 32 * 1024;

I've changed them to 32 MB / 128 KB, with good results for large downloads. After changing them, a full rebuild of Phex is required.

Helge
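Helge's new values would look like this in the source; a minimal sketch, assuming SWDownloadConstants is a plain constants holder (the two field names and original values are from the post, the enclosing declaration is guessed):

```java
// Sketch of the edit described above, in
// phex/download/swarming/SWDownloadConstants.java.
// The enclosing interface is an assumption for illustration.
public interface SWDownloadConstants
{
    // Default size of a newly created download segment,
    // raised from 1 MB (1024 * 1024) to 32 MB.
    public static final int DEFAULT_SEGMENT_LENGTH = 32 * 1024 * 1024;

    // Size below which a segment is not split any further,
    // raised from 32 KB (32 * 1024) to 128 KB.
    public static final int RESIZE_BOUNDARY_SIZE = 128 * 1024;
}
```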
Now that would be a very nice IMPROVEMENT

To make it soft-configurable (maybe even on a file-by-file basis) how big the segments are by default - I would think 32 MB max / 1 MB min to be a fair default for large files.

By the way, some of the 99%-hurdle files finished after all, but not all of them with full functionality!?
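Soft configuration could be as simple as reading the two bounds from a properties file at startup instead of compiling them in. A minimal sketch - the file keys and class name are invented for illustration, since Phex currently has only the compile-time constants shown above:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical soft configuration of the segment bounds. The key
// names "download.maxSegmentLength" / "download.minSegmentLength"
// are invented, not an existing Phex setting.
class SegmentConfig
{
    static int maxSegmentLength = 32 * 1024 * 1024; // 32 MB default max
    static int minSegmentLength = 1024 * 1024;      // 1 MB default min

    static void load( String path )
    {
        Properties props = new Properties();
        try ( FileInputStream in = new FileInputStream( path ) )
        {
            props.load( in );
            maxSegmentLength = Integer.parseInt( props.getProperty(
                "download.maxSegmentLength",
                String.valueOf( maxSegmentLength ) ) );
            minSegmentLength = Integer.parseInt( props.getProperty(
                "download.minSegmentLength",
                String.valueOf( minSegmentLength ) ) );
        }
        catch ( IOException | NumberFormatException e )
        {
            // Keep the defaults if the file is missing or malformed.
        }
    }
}
```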
Re: Now that would be a very nice IMPROVEMENT

http://www.gnutellaforums.com/showth...threadid=13076

Helge
I am using the latest 0.73.

What I am dreaming of is a version that can repair damaged files. Let's start thinking about rational ways to implement that.

For example, the pre-fragmentation - why? The XML file is filled with 600-700 one-megabyte segment entries before the need for them is ever realized; that can't be resource-efficient. I recommend a segment linked list that, when a new candidate is found, can respond with the largest unassigned segment, or with half of the largest segment not currently being downloaded; or, if neither is available, splits the largest segment on the fly, even if it is already being downloaded, as long as the result is not smaller than the minimum size. The linked list could also help repossess segments that never started and were overrun by a well-loading one. Surely something very similar is already implemented; it's just that this fragment-first algorithm seems unnecessarily complicated to me!

OK, and what does this have to do with damaged files? Keep a log of the downloaded segments (even of whom they came from), and later allow the user (initially manually) to question any segment and request that Phex verify it by downloading it again... How about that? Soon enough we might be able to automate this by implementing a specific request-segment-sums protocol!

Thanks all for your Great Work !
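The allocation scheme proposed above can be sketched in a few lines. This is an illustration of the idea only, not Phex code - the Segment and SegmentAllocator names and the 1 MB floor are invented:

```java
import java.util.LinkedList;

// Illustration of the proposed lazy segment list. A new candidate
// gets the largest unassigned segment; if none exists, the largest
// segment already downloading is split on the fly, unless the halves
// would fall below the minimum size.
class Segment
{
    long start;
    long length;
    boolean assigned;

    Segment( long start, long length )
    {
        this.start = start;
        this.length = length;
    }
}

class SegmentAllocator
{
    static final long MIN_SEGMENT_SIZE = 1024 * 1024; // 1 MB floor

    private final LinkedList<Segment> segments = new LinkedList<Segment>();

    SegmentAllocator( long fileSize )
    {
        // One segment covers the whole file; split lazily instead of
        // pre-creating hundreds of 1 MB entries up front.
        segments.add( new Segment( 0, fileSize ) );
    }

    /** Returns a segment for a new candidate, or null if none fits. */
    synchronized Segment allocate()
    {
        Segment largestFree = null;
        Segment largestBusy = null;
        for ( Segment s : segments )
        {
            if ( !s.assigned
                && ( largestFree == null || s.length > largestFree.length ) )
            {
                largestFree = s;
            }
            if ( s.assigned
                && ( largestBusy == null || s.length > largestBusy.length ) )
            {
                largestBusy = s;
            }
        }
        if ( largestFree != null )
        {
            largestFree.assigned = true;
            return largestFree;
        }
        // Split the largest busy segment; the second half goes to the
        // new candidate while the first half keeps downloading.
        if ( largestBusy != null && largestBusy.length / 2 >= MIN_SEGMENT_SIZE )
        {
            long half = largestBusy.length / 2;
            Segment tail = new Segment( largestBusy.start + half,
                largestBusy.length - half );
            largestBusy.length = half;
            tail.assigned = true;
            segments.add( segments.indexOf( largestBusy ) + 1, tail );
            return tail;
        }
        return null;
    }
}
```

Starting from one whole-file segment and splitting lazily would also keep the XML state file down to the segments that actually exist, instead of 600-700 pre-made entries.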
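The verify-a-segment idea boils down to hashing a byte range of the finished file so it can be compared against a hash of the same range fetched again. A minimal sketch - the class and method names are invented, and no such "segment sums" feature exists in Phex:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical helper for the manual verification step: hash the
// bytes of one segment so the result can be compared against a hash
// of the re-downloaded segment.
class SegmentVerifier
{
    /** SHA-1 over file[start, start+length) as a hex string. */
    static String segmentSum( String path, long start, long length )
        throws IOException, NoSuchAlgorithmException
    {
        MessageDigest sha1 = MessageDigest.getInstance( "SHA-1" );
        try ( RandomAccessFile file = new RandomAccessFile( path, "r" ) )
        {
            file.seek( start );
            byte[] buf = new byte[ 64 * 1024 ];
            long remaining = length;
            while ( remaining > 0 )
            {
                int read = file.read( buf, 0,
                    (int) Math.min( buf.length, remaining ) );
                if ( read < 0 )
                {
                    break; // hit end of file early
                }
                sha1.update( buf, 0, read );
                remaining -= read;
            }
        }
        StringBuilder hex = new StringBuilder();
        for ( byte b : sha1.digest() )
        {
            hex.append( String.format( "%02x", b ) );
        }
        return hex.toString();
    }
}
```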