Gnutella Forums

Gnutella Forums (https://www.gnutellaforums.com/)
-   General Gnutella Development Discussion (https://www.gnutellaforums.com/general-gnutella-development-discussion/)
-   -   Which Clients use a PongCache? (https://www.gnutellaforums.com/general-gnutella-development-discussion/6310-clients-use-pongcache.html)

hermaf December 10th, 2001 05:18 AM

Which Clients use a PongCache?
 
Hey there,

I am trying to do some analysis on the Gnutella network and was wondering which clients already use a pong cache as proposed by LimeWire?

I know Qtella does, since I am hacking on that client :) and so does the latest BearShare client.

I am especially interested in whether LimeWire and SwapNut use the PongCache.

(My logs show that these two clients make up about 80% of all incoming query hits ... depending on what you search for, there are about 5% BearShare and some MyNapster!, XoloX and Gnucleus clients around.)

I need the information about pong caches for further analysis :) Thanks

Felix

Moak December 11th, 2001 11:47 PM

None today.

AFAIK only LimeWire's "Sparky" beta version does.

TruStarwarrior December 11th, 2001 11:57 PM

Sparky has been integrated into LimeWire's new UltraPeer network structure.

Moak December 12th, 2001 12:04 AM

Does this mean it is already in a beta version besides "Sparky"?

Christopher Rohrs wrote Wed Dec 12, 2001:
"Actually LW has never released pong-caching in a production version! We did implement it in LW "Sparky", but it never got merged to the main code base." http://groups.yahoo.com/group/the_gdf/message/3892

TruStarwarrior December 12th, 2001 12:08 AM

I can't find the quote now; it would take forever.

But I asked specifically what happened to the Sparky project. He said it underwent some revisions and will be integrated into 1.9.

If I understand this all correctly, UltraPeers keep cache lists so that repetitive/similar/identical searches are NOT repeated. The cache keeps a recent record and sends results back accordingly. So when 50 people search for the same thing, only the initial search is actually made, and the rest receive a copy of the results of the search that has just occurred.
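The caching idea described above can be sketched roughly like this (a hypothetical Python sketch, not LimeWire's actual code; the class and parameter names are made up):

```python
import time

class QueryCache:
    """Hypothetical sketch of result caching at an UltraPeer: identical
    query strings seen within a time window are answered from the cache
    instead of being re-broadcast to the network."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._cache = {}  # query string -> (timestamp, results)

    def lookup(self, query):
        """Return cached results for this query, or None if absent/stale."""
        entry = self._cache.get(query)
        if entry is None:
            return None
        ts, results = entry
        if time.time() - ts > self.ttl:
            del self._cache[query]  # entry too old: evict and miss
            return None
        return results

    def store(self, query, results):
        self._cache[query] = (time.time(), results)
```

So only the first of 50 identical searches would actually be broadcast; the other 49 hit `lookup()`.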

Moak December 12th, 2001 12:24 AM

Query caching in a superpeer (UltraPeer for LW) doesn't necessarily mean there is also pong caching in superpeer mode and/or client mode, does it? Hmm..... I have asked in the LW and Phex forums. :)

TruStarwarrior December 12th, 2001 12:38 AM

I am thinking a little clearer right now. Let me explain a little better. I don't think pong caching would be the correct term, so I don't know of anyone that does this. However, what LW is doing with 1.9 is this:
A client (branch, or 'leaf') connects to an UltraPeer and uploads a file list (it's not really a file list, just a 'representation' - ask afisk for info if you want to know more about it). The actual searches are performed by UltraPeers. Search queries are compared to the file lists of the branches, and if any seem to match, the search is passed along to the corresponding user. So the search will only reach the branch client if it is a relevant search. You can see this in action in the 1.9 beta: the Monitor tab shows incoming searches, and all of the searches have words or phrases matching files in your library.
Multiple identical queries are merely directed to the same clients that previous identical queries were directed to.
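The routing scheme described above might be sketched like this (a simplified, hypothetical Python illustration; LimeWire's real 'representation' is more compact than a plain keyword set, and these names are made up):

```python
class Ultrapeer:
    """Sketch of an UltraPeer forwarding queries only to leaves whose
    shared-file keywords match the query. Not LimeWire's actual code."""

    def __init__(self):
        self.leaf_keywords = {}  # leaf id -> set of filename keywords

    def register_leaf(self, leaf_id, filenames):
        """Leaf connects and uploads a 'representation' of its files."""
        words = set()
        for name in filenames:
            words.update(name.lower().replace(".", " ").split())
        self.leaf_keywords[leaf_id] = words

    def route_query(self, query):
        """Forward the query only to leaves sharing at least one term."""
        terms = set(query.lower().split())
        return [leaf for leaf, kw in self.leaf_keywords.items()
                if terms & kw]
```

This is why the Monitor tab only shows searches relevant to your own library: irrelevant queries never reach the leaf.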

TruStarwarrior December 12th, 2001 12:41 AM

That last part was a mouthful to say...
:-)

hermaf December 12th, 2001 02:23 AM

Thx for the answers, first of all :) Let me take this a little further ...

So if I read this correctly, this means that if I ping my neighbours I will still get the responses as described in the Gnutella Standard Definition (v0.4) - or am I getting something wrong here?

It looks like most clients limit the TTL to 7 or so, so my horizon is usually around 7 hops. But within this horizon I "should" get back as many pongs as there are clients available.

The only reasons why this pinging for servent detection might not work as described in the Gnutella standard are the following, and I am trying to find out (i.e. with this question) whether they affect the number of returned pongs:

1) Pong caches - but it seems that except for Qtella no client uses one yet, right?

2) MIN_TIME_PING, meaning that a client will throw away any ping from a peer if that peer's last ping was not longer ago than this minimum time, to protect the network from ping flooding. - I guess there are some clients that do that. I tried to send 10 pings to the attached servents, but most of them answered only a few of them?!?

3) TTL_MAX values (= 6-7) - my logs show that even if I send a deep-space ping with a TTL of 20, I get back a hop count of at most 10, BUT 8, 9 and 10 are VERY rare! -> This means that most clients limit the TTL of the packets they send (and forward) to a maximum of 7 (respectively they use NEW_TTL = 7 - HOPS).

Is anything wrong with this reasoning?
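For illustration, points 2) and 3) above could be modeled like this (a hypothetical Python sketch; the MAX_TTL and MIN_PING_INTERVAL values are assumptions, and real clients vary):

```python
import time

MAX_TTL = 7              # assumed TTL cap; actual values differ per client
MIN_PING_INTERVAL = 3.0  # assumed MIN_TIME_PING in seconds, per peer

_last_ping = {}  # peer address -> time of last accepted ping

def accept_ping(peer, ttl, now=None):
    """Return the clamped TTL to forward with, or None to drop the ping.
    Models ping-flood protection (point 2) and TTL limiting (point 3)."""
    now = time.time() if now is None else now
    last = _last_ping.get(peer)
    if last is not None and now - last < MIN_PING_INTERVAL:
        return None               # too soon after the last ping: drop it
    _last_ping[peer] = now
    return min(ttl, MAX_TTL)      # clamp deep-space TTLs down to the cap
```

Both mechanisms would reduce the pong counts a measuring client sees, which is consistent with the rarity of hop counts above 7.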

By the way, another thing I found interesting is how Qtella "measures" the number of servents behind an attached servent: it counts every received Pong, Query, QueryHit or Push as a new client (-> Number++) and saves the servent's IP and the time the message was received. Every 3 seconds it deletes from that list all servents that it has not heard from again for 40 seconds.

How accurate do you think that number of clients is? Or would it be more accurate to use pings to estimate/measure exactly the number of clients in your horizon?
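The Qtella heuristic described above amounts to something like this (a hypothetical Python sketch of the described behaviour, not Qtella's actual C++ code):

```python
import time

class HorizonEstimator:
    """Qtella-style horizon estimate: every Pong, Query, QueryHit or Push
    from a new IP counts as one servent; servents not heard from for
    `expiry` seconds are dropped on each periodic sweep."""

    def __init__(self, expiry=40.0):
        self.expiry = expiry
        self.last_seen = {}  # servent IP -> time last heard from

    def on_message(self, ip, now=None):
        """Record any Pong/Query/QueryHit/Push arriving from this IP."""
        self.last_seen[ip] = time.time() if now is None else now

    def count(self, now=None):
        """Sweep out silent servents, then return the current estimate."""
        now = time.time() if now is None else now
        for ip in [ip for ip, t in self.last_seen.items()
                   if now - t > self.expiry]:
            del self.last_seen[ip]
        return len(self.last_seen)
```

Note that this only counts servents that happen to route traffic through you, so it gives a lower bound rather than an exact horizon size.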

BTW: Don't worry, I am not working for a company or anything, just writing my thesis on scalability issues in Gnutella networks.

Thanks again ... I appreciate the feedback :)

Moak December 12th, 2001 02:56 AM

Hi, two things come to mind:

First, BearShare (perhaps other clients too) has pong throttling... which means it will not route a large number of pong descriptors in a short time window, but delays them, and perhaps also throws some away (?).
Second, we had statistics some months ago saying that ping/pong traffic was eating a huge percentage of the Gnutella backbone traffic. It is a good idea not to use broadcast pings in a standard client to measure the horizon (avoid network broadcasts), but for sure it is okay in a rarely used statistics tool. I'm not sure if ping/pong horizon measurement will still work in the future; perhaps pong limiting and pong caching will produce falsified results. See http://www.limewire.com/index.jsp/pingpong and http://www.limewire.com/index.jsp/med_require.

To answer your question (I'm not sure what exactly you want to do): ping/pong is theoretically the most accurate Gnutella protocol v0.4 method to measure the horizon, but it is unhealthy for the network if many clients do it, it is already falsified by anti-broadcast techniques, and it is limited by TTLs. Perhaps you should ask LimeWire how they collect their rolling host count at http://www.limewire.com/index.jsp/size. Other methods to measure Gnutella's size could be: asking query caches how many unique IPs they served, or how many unique pings and pongs they received/transmitted; asking upcoming superpeers how many unique IDs they have routed; or implementing new protocol features to provide a better horizon estimation (some ideas in this thread).

hermaf December 12th, 2001 04:11 AM

Thx Moak.

What I actually intend to do is to assess the scalability of the Gnutella network.

One part of that is analysing the network structure. What I am trying to find out is how the network protection mechanisms of clients may influence the measurements that are supposed to confirm the theoretical results. That is what this is all about.

I send out a ping every 5 minutes with a TTL of 7-20. Interestingly, I get some "strange" results: some attached clients do not answer at all (which could mean that they disconnected, but the logs show they are still there), and some return 1500-2500 pongs, which I also consider too much ?!?

Here are some entries of my log:


Dez 10 20:01:34 export qtella logging[3329]: Servent 128.103.189.195 ## 0:1 ## 1:3 ## 2:1 ## 3:8 ## 4:1 ## 5:50 ## 6:41 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 105
Dez 10 20:01:34 export qtella logging[3329]: Servent 24.253.133.117 ## 0:0 ## 1:1 ## 2:5 ## 3:4 ## 4:9 ## 5:9 ## 6:8 ## 7:53 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 89
Dez 10 20:01:34 export qtella logging[3329]: Servent 80.19.204.186 ## 0:1 ## 1:4 ## 2:6 ## 3:19 ## 4:20 ## 5:92 ## 6:119 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 261
Dez 10 20:01:34 export qtella logging[3329]: Servent 158.252.215.47 ## 0:1 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 1
Dez 10 20:01:34 export qtella logging[3329]: Servent 12.89.79.21 ## 0:1 ## 1:2 ## 2:1 ## 3:32 ## 4:1 ## 5:8 ## 6:37 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 82
Dez 10 20:01:34 export qtella logging[3329]: Servent 128.119.246.197 ## 0:0 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 0
Dez 10 20:01:34 export qtella logging[3329]: Servent 24.81.77.60 ## 0:0 ## 1:2 ## 2:2 ## 3:7 ## 4:4 ## 5:61 ## 6:49 ## 7:24 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 149
Dez 10 20:01:34 export qtella logging[3329]: Servent 24.49.92.170 ## 0:0 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 0
Dez 10 20:01:34 export qtella logging[3329]: Servent 4.61.240.48 ## 0:0 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 0
Dez 10 20:01:34 export qtella logging[3329]: Servent 12.255.135.14 ## 0:0 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 0
Dez 10 20:01:34 export qtella logging[3329]: Servents logged: 10 ## TTL was set to: 20


Dez 10 20:06:34 export qtella logging[3329]: Servent 128.103.189.195 ## 0:1 ## 1:4 ## 2:1 ## 3:13 ## 4:2 ## 5:46 ## 6:59 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 126
Dez 10 20:06:34 export qtella logging[3329]: Servent 24.253.133.117 ## 0:0 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:72 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 72
Dez 10 20:06:34 export qtella logging[3329]: Servent 80.19.204.186 ## 0:1 ## 1:0 ## 2:6 ## 3:1 ## 4:8 ## 5:6 ## 6:34 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 56
Dez 10 20:06:34 export qtella logging[3329]: Servent 158.252.215.47 ## 0:1 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 1
Dez 10 20:06:34 export qtella logging[3329]: Servent 12.89.79.21 ## 0:1 ## 1:1 ## 2:2 ## 3:16 ## 4:4 ## 5:24 ## 6:54 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 102
Dez 10 20:06:34 export qtella logging[3329]: Servent 128.119.246.197 ## 0:0 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 0
Dez 10 20:06:34 export qtella logging[3329]: Servent 24.81.77.60 ## 0:0 ## 1:2 ## 2:2 ## 3:7 ## 4:3 ## 5:26 ## 6:25 ## 7:31 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 96
Dez 10 20:06:34 export qtella logging[3329]: Servent 24.49.92.170 ## 0:0 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 0
Dez 10 20:06:34 export qtella logging[3329]: Servent 4.61.240.48 ## 0:0 ## 1:0 ## 2:0 ## 3:0 ## 4:0 ## 5:0 ## 6:0 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 0
Dez 10 20:06:34 export qtella logging[3329]: Servent 12.255.135.14 ## 0:0 ## 1:0 ## 2:0 ## 3:4 ## 4:3 ## 5:18 ## 6:12 ## 7:0 ## 8:0 ## 9:0 ## 10:0 ## 11:0 ## 12:0 ## 13:0 ## 14:0 ## 15:0 ## 16:0 ## 17:0 ## 18:0 ## 19:0 ## 20:0 ## total: 37
Dez 10 20:06:34 export qtella logging[3329]: Servents logged: 10 ## TTL was set to: 20

As you can see, the servent 128.119.246.197, for example, does not respond but is still connected... I have some thousand log entries showing that it is still connected.

So what I am trying now is to see how some of the network protection mechanisms influence the results, where results like "no replies at all" come from, and from that, how I can use my logs for statistics (in a correct way).
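For what it's worth, log lines in the format shown above can be reduced to per-hop pong counts with a small parser (a Python sketch, assuming the "## hop:count" layout of these lines stays fixed):

```python
import re

def parse_log_line(line):
    """Parse one Qtella log line of the form shown above into
    (servent IP, {hop: pong count}, reported total)."""
    ip = re.search(r"Servent (\S+)", line).group(1)
    # each "## h:n" pair is one hop bucket; "## total: n" does not match
    hops = {int(h): int(n)
            for h, n in re.findall(r"## (\d+):(\d+)", line)}
    total = int(re.search(r"total: (\d+)", line).group(1))
    return ip, hops, total
```

Checking that `sum(hops.values())` equals the reported total per line would also catch logging glitches before doing statistics on the data.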

Moak December 12th, 2001 04:13 AM

Perhaps you want to give a graphical visualization of your log? Sorry, too tired to understand it now (long night)... maybe tomorrow.

Moak December 12th, 2001 05:12 AM

Hmm, the good old "Is Gnutella scalable?" question.

There are some links, but I never investigated that problem, because I personally thought Gnutella was technically never scalable. In reality it is, or it seems to be, because of horizons and host caches. Perhaps someone can explain what Gnutella scalability really means. (We have TTLs and horizons already, which doesn't mean every soul in the universe can't use Gnutella. Everyone can, within a horizon. Together with superpeers, flow control, caching and other improvements, horizons can be increased, improved, made dynamic or crosslinked... but we still have horizons, right?)

http://www.google.com/search?q=Gnutella+scalable
http://www.darkridge.com/~jpr5/doc/gnutella.html
http://www.gnutellameter.com/gnutella-editor.html
http://www.gnutella.com/forums/dev/20

TruStarwarrior December 12th, 2001 06:39 AM

I think it's time that the Gnutella protocol was officially updated. Or perhaps a separate 'Gnutella v0.x' protocol should be created altogether? The network isn't as efficient as other types, and no matter how many subtle changes developers introduce with their clients, there will be chaos, and everyone will be using a different idea or implementation. And seeing the number of new clients appearing everywhere, it would be best to have them adhere to a better standard than the outdated 0.4 protocol.

Moak December 12th, 2001 07:03 PM

About pong caching: it's used in the Sparky beta version only.
Crohrs wrote: "LW 1.9 does not use pong-caching, mainly because we have enough new things in it to keep us busy. :-)" http://www.gnutellaforums.com/showth...2&pagenumber=3

TruStarwarrior December 13th, 2001 02:01 AM

Perhaps in the future LW will work on it. I know that right now things are very hectic at LimeWire. They've got several bugs in their UltraPeer system. They are all fixable, though.
:-D
A major problem they have, too, is the Mac version. It doesn't save files correctly, and people can't open the saved files. Once these things are out of the way, perhaps they'll have time to do pong caching.

:-)

