Mythbustering Peer-to-peer Traffic Localization
Telecom Italia
enrico.marocco@telecomitalia.it
Bell Labs, Alcatel-Lucent
rimac@bell-labs.com
Bell Labs, Alcatel-Lucent
vkg@alcatel-lucent.com
Peer-to-peer traffic optimization techniques that aim at
improving locality in the peer selection process have attracted
great interest in the research community and have been the
subject of much discussion. Some of this discussion has produced
controversial myths: some rooted in reality, others unfounded.
This document evaluates the most prominent myths attributed to
P2P optimization techniques by referencing the most relevant
study (or studies) addressing the facts pertaining to each myth.
Using these studies, we hope to either confirm or refute each
specific myth.
Peer-to-peer (P2P) applications used for file sharing, streaming
and realtime communications exchange large amounts of data over
connections established among the peers themselves and are
responsible for a significant share of Internet traffic.
Since such applications generally have no knowledge of the
underlying network topology, the traffic they generate is a
frequent cause of congestion on inter-domain links and
contributes significantly to the transit costs paid by network
operators and Internet Service Providers (ISPs).
One approach to reducing the congestion and transit costs caused
by P2P applications consists of enhancing the peer selection
process with the introduction of proximity information. This
allows peers to identify the topologically closest instances
among all the instances of the resource they are looking for.
Several solutions following such an approach have recently been
proposed, some of which are now being considered for
standardization in the IETF.
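As an illustration, the kind of locality-biased peer selection these proposals enable can be sketched as follows; the AS-number attribute, the bias parameter and the function name are hypothetical and not taken from any specific proposal:

```python
import random

def select_peers(candidates, local_asn, n, locality_bias=0.75):
    """Pick n peers from candidates, a list of (peer_id, asn) tuples,
    reserving a fraction of the slots for peers in the requester's AS."""
    local = [p for p in candidates if p[1] == local_asn]
    remote = [p for p in candidates if p[1] != local_asn]
    n_local = min(len(local), int(n * locality_bias))
    chosen = random.sample(local, n_local)
    # Fill the remaining slots with random remote peers, preserving
    # some cross-domain diversity in the swarm.
    chosen += random.sample(remote, min(len(remote), n - n_local))
    return chosen
```

With locality_bias set to 0 the function degenerates to random selection; raising the bias trades swarm diversity for reduced cross-domain traffic.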
Despite extensive research based on simulations and field
trials, it is hard to predict how the proposed solutions would
perform in real-world systems made of millions of peers. For
this reason, the possible effects and side effects of
optimization techniques based on P2P traffic localization have
been a matter of frequent debate. This document describes some
of the most interesting effects, referencing relevant studies
which have addressed them, and tries to determine whether and to
what extent they are likely to occur.
Each possible effect -- or Myth -- is examined in three phases:
Facts: in which a list of relevant data is presented,
usually collected from simulations or field trials;
Discussion: in which the reasons for and against the myth
are discussed based on the facts previously listed;
Conclusions: in which the authors try to express a reasonable
measure of the plausibility of the myth.
At the current stage, this document is little more than a
strawman. With the help of the IRTF community, the authors
would like to improve it: in the number of Facts, in the
quality of the Discussion and, particularly, in the
trustworthiness of the Conclusions.
Terminology defined in is reused here;
other definitions should be consistent with it.
Seeder: A peer which has a complete copy of the content it is
sharing, and still offers it for upload. The term "seeder" is
adopted from BitTorrent terminology and is used in this document
to indicate upload-only peers in other kinds of P2P applications
as well.
Leecher: A peer which has not yet completed the download of a
specific content (but has usually already started offering the
parts it possesses for upload). The term "leecher" is adopted
from BitTorrent terminology and is used in this document to
indicate peers which are both uploading and downloading, also in
other kinds of P2P applications.
Swarm: The group of peers that are uploading and/or downloading
pieces of the same content. The term "swarm" is commonly used in
BitTorrent to indicate all seeders and leechers exchanging
chunks of a particular file; however, in this document it is
used more generally, for example to refer to all peers receiving
and/or transmitting the same media stream in P2P streaming
applications.
Tit-for-tat: A content exchange strategy where the amount of
data sent by a leecher to another leecher is roughly equal to
the amount of data received from it. P2P applications, most
notably BitTorrent, adopt such an approach to maximize the
resources shared by users.
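A minimal sketch of such a reciprocity check follows; the tolerance and the bootstrap allowance are illustrative, and this is not BitTorrent's actual choking algorithm:

```python
def keep_unchoked(sent_bytes, received_bytes, tolerance=1.25,
                  allowance=16 * 1024):
    """Keep uploading to a peer only while the exchange stays roughly
    balanced: stop once the bytes we sent exceed the bytes we received
    by more than the tolerance factor. The fixed allowance lets a
    newly joined peer receive its first chunk before reciprocating."""
    return sent_bytes <= received_bytes * tolerance + allowance
```

A leecher applying this rule to each neighbor converges toward the rough upload/download balance described above.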
Surplus mode: The status of a swarm where the number of seeders
is significantly greater than the number of leechers (i.e. the
total download demand is amply satisfied by the available upload
capacity).
Transit: The service through which a network can exchange IP
packets with all other networks it is not directly connected to.
The transit service is always regulated by a contract, according
to which the customer (i.e. a network operator or an ISP) pays
the transit provider per amount of data exchanged.
Peering: The direct interconnection between two separate
networks for the purpose of exchanging traffic without resorting
to a transit provider. Peering is usually regulated by
agreements that take into account the amount of traffic
generated by each party in each direction.
The reduction in cross-domain traffic (and thus in the transit
costs due to it) is one of the positive effects P2P traffic
localization techniques are expected to produce, and also the
main reason why ISPs look at them with interest. Simulations and
field tests have shown reductions varying from 20% to 80%.
Various simulations and initial field trials of the P4P solution show, on average, a 70%
reduction of cross-domain traffic.
Data observed in Comcast's
P4P trial show a 34% reduction of the outgoing P2P
traffic and an 80% reduction of the incoming.
Simulations of the "oracle-based"
approach proposed by researchers at TU Berlin show
an increase in local exchanges from 10% in the unbiased
case to 60%-80% in the localized case.
Tautologically, P2P traffic localization techniques tend to
localize content exchanges, and thus reduce cross-domain
traffic.
Ostensibly, an increase in application performance is the main
reason why P2P traffic localization techniques are being
considered in academia and industry. The expected increase
depends on the specific application: file-sharing applications
see higher download rates, realtime communication applications
observe lower delay and jitter, and streaming applications can
benefit from a higher sustained bitrate.
Various simulations and initial field trials of the P4P solution show an average reduction
of download completion times between 10% and 23%.
Data observed in Comcast's
P4P trial show an increase in download rates
between 13% and 85%. Interestingly, the collected data
also indicate that fine-grained localization is less
effective in improving download performance than
lower levels of localization.
Data collected in the Ono
experiment show a 31% average download rate
improvement.
In networks where the ISP provides higher bandwidth
for in-network traffic (e.g. as in the case of RDSNET,
described in ), the increase
is significantly higher.
In networks with relatively low uplink bandwidth (as
in the case of Easynet, described in ), traffic localization slightly
degrades application performance.
Simulations of the "oracle-based"
approach proposed by researchers at TU Berlin show
a reduction in download times between 16% and 34%.
Simulations by Bell Labs
indicate that localization is not as effective in all
scenarios and that the user experience can suffer in
certain locality-aware swarms based on the actual
implementation of locality.
It seems that traffic localization techniques often improve
application performance. However, it must be noted that such
beneficial effects depend heavily on the network
infrastructure. In some cases, for example in networks with
relatively low uplink bandwidth, localization seems to be
useless if not harmful. Beneficial effects also depend on the
swarm size: imposing locality when only a small set of local
peers is available may even decrease download performance for
those peers.
Very likely, especially for large swarms and in networks with
high capacity.
The increase in uplink bandwidth usage would be a negative
effect, especially in environments where the access network is
based on technologies providing asymmetric upstream/downstream
bandwidth (e.g. DSL or DOCSIS).
Data observed in Comcast's
P4P trial show no increase in the uplink traffic.
Mathematically, average uplink traffic remains the same as
long as the swarm is not in surplus mode. However, in some
particular cases where surplus capacity is available,
localization may lead to local low-bandwidth leechers
connecting to each other instead of trying the external
seeders. Even though such a phenomenon has not been observed in
simulations or field trials, it could occur with applications
that use localization as the only means of optimization when
some content becomes popular in different areas at different
times (as is the case for prime-time TV shows distributed on
BitTorrent networks minutes after airing in North
America).
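The conservation argument above can be illustrated with a toy model: every byte downloaded by one peer is a byte uploaded by another, so aggregate uplink traffic equals aggregate demand under any selection policy, and localization only changes which peers (and which links) carry it. The peers, the demand figures and the policy below are all hypothetical:

```python
def serve_demand(demand, peers, pick):
    """Serve each leecher's byte demand from an uploader chosen by the
    pick(leecher, peers) policy; return bytes uploaded per peer."""
    uploaded = {p: 0 for p in peers}
    for leecher, nbytes in demand.items():
        uploaded[pick(leecher, peers)] += nbytes
    return uploaded

# Two ASes with two peers each; only a1 and b1 are downloading.
asn = {"a1": 1, "a2": 1, "b1": 2, "b2": 2}
peers = list(asn)
demand = {"a1": 100, "b1": 300}

# A fully localized policy: always pick an uploader in the leecher's AS.
local_pick = lambda l, ps: next(p for p in ps if p != l and asn[p] == asn[l])

uploaded = serve_demand(demand, peers, local_pick)
# Aggregate uplink equals aggregate demand regardless of the policy.
assert sum(uploaded.values()) == sum(demand.values())
```

Surplus mode breaks this equivalence only in the sense discussed above: with external seeders absorbing part of the demand, a localized policy can shift that share back onto local leechers' uplinks.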
Peering agreements are usually established on a reciprocity
basis, assuming that the amount of data sent and received by
each party is roughly the same (or, in case of asymmetric
traffic volumes, a compensation fee is paid by the party which
would otherwise obtain the most gain). P2P traffic localization
techniques aim at reducing cross-domain traffic and thus might
also impact peering agreements.
No significant publications, simulations or trials have tried
to understand how traffic localization techniques can
influence factors that rule how peering agreements are
established and maintained.
This is a key topic for network operators and ISPs, and
certainly deserves more thorough analysis. Some preliminary
thoughts follow.
It seems reasonable to expect different effects depending on
the kinds of agreements. For example:
ISPs with different customer bases: the ISP whose
customers generate more P2P traffic can achieve a greater
reduction in cross-domain traffic and would thus probably
be in a position to re-negotiate the contract governing the
peering agreement;
ISPs with similar customer bases:
ISPs with different access technologies: customers of
the ISP which provides higher bandwidth -- and, in
particular, higher uplink bandwidth -- will have more
incentive to keep their P2P traffic
local. Consequently, the ISP with the better
infrastructure will be able to achieve a greater
reduction in cross-domain traffic and will probably
be in a position to re-negotiate the peering agreement;
ISPs with similar access technologies: both ISPs would
achieve roughly the same reduction in cross-domain
traffic and thus the conditions under which the
peering agreement had been established would not
change much.
As a consequence of the reasoning above, it seems reasonable
to expect that the simple fact that one ISP starts localizing
its P2P traffic will be a strong incentive for the ISPs it
peers with to do the same.
One of the main goals of P2P traffic localization techniques is
to allow ISPs to keep local part of the traffic generated by
their customers and thus save on transit costs. However, similar
techniques based on de-localization rather than localization
could be used by ISPs which are also transit providers to
artificially increase the amount of data exchanged with the
networks they provide transit to (i.e. steering the peers run by
their customers toward connections with peers in the networks
that pay them for transit).
No significant publications, simulations or trials have tried
to study effects of traffic localization techniques on the
dynamics of transit provision economics.
It is actually very hard to predict how the economics of
transit provision would be affected by the tricks some transit
providers could play on their customers by making use of P2P
traffic localization -- or, in this particular case,
de-localization -- techniques. This too is a key topic for
ISPs, definitely worth thorough investigation.
Probably, the only lesson disputes over transit and
peering agreements have taught so far is
that, at the end of the day, no economic factor, no matter how
relevant, is able to isolate different networks
from each other.
Peer selection techniques based on locality information are
certainly beneficial in areas where the density of peers is high
enough, but may cause harm otherwise. Some studies have tried
to understand to what extent locality can be pushed without
damaging peers in isolated parts of the network.
Simulations run by researchers at
INRIA have shown that, in BitTorrent, even when
peer selection is heavily based on locality, swarms do not
get damaged.
Simulations by Bell Labs
indicate that the user experience can suffer in certain
locality-aware swarms based on the actual implementation
of locality.
It seems reasonable to expect that excessive traffic
localization will cause some degree of deterioration in P2P
swarms based on the tit-for-tat approach, and that such
deterioration will mostly affect users in networks
with a lower density of peers. However, as shown in , the right balance of randomness and
locality depends on the P2P algorithm.
On the other hand, P2P systems not adopting the tit-for-tat
approach (e.g. the eDonkey network) should not be damaged by
locality-based optimizations.
Plausible, in some circumstances.
No considerations at this time.
This document tries to summarize discussions that took place in
live meetings and on several mailing lists: all those who are
reading this have probably contributed more ideas and more
material than the authors themselves.
Comcast's ISP Experiences In a Recent P4P Technical Trial
Pushing BitTorrent Locality to the Limit
Taming the Torrent: A practical approach to reducing
cross-ISP traffic in P2P systems
Improving User and ISP Experience through ISP-aided P2P
Locality
Applicability and Limitations of Locality-Awareness in
BitTorrent File-Sharing
P4P: Provider Portal for Applications
Application-Layer Traffic Optimization (ALTO) Problem Statement
Application-Layer Traffic Optimization (ALTO) Working Group
Peering Dispute With AOL Slows Cogent Customer Access
Sprint-Cogent Dispute Puts Small Rip in Fabric of Internet