Abstract: The Lovász Local Lemma (LLL) is a powerful tool that can be used to prove, via the probabilistic method, that an object having none of a set of bad properties exists. In many applications of the LLL it is also desirable to explicitly construct the combinatorial object. Recently it was shown that this is possible using a randomized algorithm in the full asymmetric LLL setting [R. Moser and G. Tardos, 2010]. A strengthening of the LLL for the case of dense local neighborhoods, proved in [R. Bissacot et al., 2010], was recently also made constructive in [W. Pegden, 2011]. In another recent work [B. Haeupler, B. Saha, A. Srinivasan, 2010], it was proved that the algorithm of Moser and Tardos remains efficient even when the number of events is exponential. Here we prove that these last two contributions can be combined to yield a new version of the LLL.
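For context, the Moser-Tardos algorithm referenced above is simple to state: sample all variables at random, and while some bad event (here, a violated clause) occurs, resample the variables it depends on. A minimal sketch for 3-SAT, with illustrative names and uniform resampling:

```python
import random

def moser_tardos(clauses, num_vars, rng=random.Random(0)):
    """Resample the variables of violated clauses until none remain.

    clauses: list of clauses, each a list of non-zero ints, where literal
    v means "variable |v| is True" and -v means "variable |v| is False".
    Terminates quickly in expectation when the LLL condition holds.
    """
    # Start from a uniformly random assignment (index 0 unused).
    assign = [rng.random() < 0.5 for _ in range(num_vars + 1)]

    def violated(clause):
        # A clause is violated when every one of its literals is false.
        return all(assign[abs(l)] != (l > 0) for l in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign[1:]
        # Pick a violated clause and resample all of its variables.
        for lit in rng.choice(bad):
            assign[abs(lit)] = rng.random() < 0.5

# A tiny satisfiable 3-SAT instance.
print(moser_tardos([[1, 2, 3], [-1, 2, -3], [1, -2, 3]], num_vars=3))
```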
Abstract: Collecting sensory data using a mobile data sink has been shown to drastically reduce energy consumption at the cost of increased delivery delay. Towards improved energy-latency trade-offs, we propose a biased, adaptive sink mobility scheme that adjusts to local network conditions, such as the surrounding density, the remaining energy and the number of past visits in each network region. The sink moves probabilistically, favoring less visited areas in order to cover the network area faster, while adaptively stopping for more time in network regions that tend to produce more data. We implement and evaluate our mobility scheme via simulation in diverse network settings. Compared to known blind random, non-adaptive schemes, our method achieves significantly reduced latency, especially in networks with nonuniform sensor distribution, without compromising energy efficiency and delivery success.
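As a self-contained illustration of the movement rule (the grid of regions, the inverse-visit-count bias, and the data-rate table are our own stand-ins for the paper's local network conditions), a minimal sketch:

```python
import random

def biased_sink_walk(grid_w, grid_h, data_rate, steps, rng=random.Random(1)):
    """Move a sink over a grid of network regions; return total data collected.

    data_rate[(x, y)] is a hypothetical amount of data that region (x, y)
    produces per round.
    """
    visits = {(x, y): 0 for x in range(grid_w) for y in range(grid_h)}
    pos, collected = (grid_w // 2, grid_h // 2), 0.0
    for _ in range(steps):
        x, y = pos
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < grid_w and 0 <= y + dy < grid_h]
        # Bias: weight neighbors inversely to past visits, so the sink
        # favors less visited regions and covers the area faster.
        weights = [1.0 / (1 + visits[n]) for n in nbrs]
        pos = rng.choices(nbrs, weights=weights, k=1)[0]
        visits[pos] += 1
        # Adaptive stop: collect in proportion to the region's production,
        # a stand-in for pausing longer where more data is produced.
        collected += data_rate.get(pos, 0.0)
    return collected

# Example: a 5x5 grid with one hotspot region producing ten times more data.
rates = {(x, y): 1.0 for x in range(5) for y in range(5)}
rates[(4, 4)] = 10.0
print(biased_sink_walk(5, 5, rates, steps=1000))
```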
Abstract: Wireless sensor networks are comprised of a vast number of devices, situated in an area of interest, that self-organize into a structureless network in order to monitor/record/measure an environmental variable or phenomenon and subsequently disseminate the data to the control center.
Here we present research focused on the development, simulation and evaluation of energy-efficient algorithms; our basic goal is to minimize energy consumption. Despite technological advances, the problem of optimizing energy use remains relevant, since current and emerging hardware solutions fail to solve it.
We aim to reduce communication cost by introducing novel techniques that facilitate the development of new algorithms. We investigate techniques for the distributed adaptation of a protocol's operation using information available locally at every node; thus, through local choices, we improve overall performance. We propose techniques for collecting and exploiting limited local knowledge of the network conditions. In an energy-efficient manner, we collect additional information which is used to achieve improvements such as forming energy-efficient, low-latency and fault-tolerant paths to route data. We investigate techniques for managing mobility in networks where movement is a characteristic of the control center as well as of the sensors. We examine methods for traversing and covering the network field based on probabilistic movement that uses local criteria to favor certain areas.
The algorithms we develop based on these techniques operate a) at a low level, managing devices, b) at the routing layer, and c) network-wide, achieving macroscopic behavior through local interactions. The algorithms are applied in network cases that differ in density, node distribution and available energy, and also in fundamentally different models, such as under faults, with incremental node deployment, and with mobile nodes. In all these settings our techniques achieve significant gains, thus distinguishing their value as tools of algorithmic design.
Abstract: Counting items in a distributed system, and estimating the cardinality of multisets in particular, is important for a large variety of applications and a fundamental building block for emerging Internet-scale information systems. Examples of such applications range from optimizing query access plans in peer-to-peer data sharing, to computing the significance (rank/score) of data items in distributed information retrieval. The general formal problem addressed in this article is computing the network-wide distinct number of items with some property (e.g., distinct files with file name containing “spiderman”) where each node in the network holds an arbitrary subset, possibly overlapping the subsets of other nodes. The key requirements that a viable approach must satisfy are: (1) scalability towards very large network size, (2) efficiency regarding messaging overhead, (3) load balance of storage and access, (4) accuracy of the cardinality estimation, and (5) simplicity and easy integration in applications. This article contributes the DHS (Distributed Hash Sketches) method for this problem setting: a distributed, scalable, efficient, and accurate multiset cardinality estimator. DHS is based on hash sketches for probabilistic counting, but distributes the bits of each counter across network nodes in a judicious manner based on principles of Distributed Hash Tables, paying careful attention to fast access and aggregation as well as update costs. The article discusses various design choices, exhibiting tunable trade-offs between estimation accuracy, hop-count efficiency, and load distribution fairness. We further contribute a full-fledged, publicly available, open-source implementation of all our methods, and a comprehensive experimental evaluation for various settings.
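DHS distributes the bits of Flajolet-Martin hash sketches over a DHT; the distribution itself takes more than a few lines, but the underlying counter is easy to sketch in centralized form (the hash function choice and the 64-sketch averaging below are illustrative):

```python
import hashlib

def rho(h):
    """Position of the least-significant 1-bit of h (0-based; 32 if h == 0)."""
    return (h & -h).bit_length() - 1 if h else 32

def fm_estimate(items, num_sketches=64):
    """Flajolet-Martin estimate of the number of distinct items."""
    phi = 0.77351                      # Flajolet-Martin correction constant
    bitmaps = [0] * num_sketches
    for x in items:
        for i in range(num_sketches):
            h = int.from_bytes(
                hashlib.sha1(f"{i}:{x}".encode()).digest()[:4], "big")
            bitmaps[i] |= 1 << rho(h)  # duplicates just set the same bit again
    # R_i = index of the lowest zero bit of bitmap i; average over sketches.
    R = sum((~b & (b + 1)).bit_length() - 1 for b in bitmaps) / num_sketches
    return (2 ** R) / phi

print(round(fm_estimate(range(10000))))  # roughly 10000
```

In DHS, per the abstract, the bits of such bitmaps are spread across network nodes following DHT principles, so updating and reading a counter become distributed key lookups rather than local bit operations.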
Abstract: Intuitively, Braess’s paradox states that destroying a part of a network may improve the common latency of selfish flows at Nash equilibrium. Such a paradox is a pervasive phenomenon in real-world networks. Any administrator who wants to improve equilibrium delays in selfish networks faces some basic questions:
– Is the network paradox-ridden?
– How can we delete some edges to optimize equilibrium flow delays?
– How can we modify edge latencies to optimize equilibrium flow delays?
Unfortunately, such questions lead to NP-hard problems in general. In this work, we impose some natural restrictions on our networks, e.g. we assume strictly increasing linear latencies. Our target is to formulate efficient algorithms for the three questions above. We manage to provide:
– A polynomial-time algorithm that decides if a network is paradox-ridden, when latencies are linear and strictly increasing.
– A reduction of the problem of deciding if a network with (arbitrary) linear latencies is paradox-ridden to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the edges with constant latency.
– An algorithm for finding a subnetwork that is almost optimal with respect to the equilibrium latency. Our algorithm is subexponential when the number of paths is polynomial and each path is of polylogarithmic length.
– A polynomial-time algorithm for the problem of finding the best subnetwork, which outperforms any known approximation algorithm for the case of strictly increasing linear latencies.
– A polynomial-time method that turns the optimal flow into a Nash flow by deleting the edges not used by the optimal flow, and performing minimal modifications to the latencies of the remaining ones.
Our results provide a deeper understanding of the computational complexity of recognizing the most severe manifestations of Braess’s paradox, and our techniques show novel ways of using the probabilistic method and of exploiting convex separable quadratic programs.
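For intuition, the canonical instance behind the paradox can be checked in a few lines: a unit of selfish flow over two parallel two-edge paths, where adding a zero-latency shortcut raises the common equilibrium latency from 3/2 to 2, so deleting that edge improves the equilibrium. A small sketch of this standard textbook example (not of the paper's algorithms):

```python
# Classic Braess network: unit flow from s to t, paths s->v->t and s->w->t,
# with latencies l(x) = x on edges s->v and w->t, and l(x) = 1 on v->t and s->w.

def equilibrium_without_shortcut():
    x = 0.5            # by symmetry, the unit flow splits evenly over the paths
    return x + 1       # common path latency: 0.5 + 1 = 1.5

def equilibrium_with_shortcut():
    # With a zero-latency edge v->w, the path s->v->w->t is never worse for
    # any single flow particle, so at equilibrium all flow uses it:
    x = 1.0            # both variable-latency edges now carry the full flow
    return x + 0 + x   # 1 + 0 + 1 = 2.0

print(equilibrium_without_shortcut(), equilibrium_with_shortcut())  # 1.5 2.0
```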
Abstract: We study network connection games where the nodes of a network perform edge swaps in order to improve their communication costs. For the model proposed by [2], in which the selfish cost of a node is the sum of all shortest path distances to the other nodes, we use the probabilistic method to provide a new, structural characterization of equilibrium graphs. We show how to use this characterization in order to prove upper bounds on the diameter of equilibrium graphs in terms of the size of the largest k-vicinity (defined as the set of vertices within distance k from a vertex), for any k ≥ 1, and in terms of the number of edges, thus settling positively a conjecture of [2] in the cases of graphs of large k-vicinity size (including graphs of large maximum degree) and of graphs which are dense enough.
Next, we present a new swap-based network creation game, in which selfish costs depend on the immediate neighborhood of each node; in particular, the profit of a node is defined as the sum of the degrees of its neighbors. We prove that, in contrast to the previous model, this network creation game admits an exact potential, and also that any equilibrium graph contains an induced star. The existence of the potential function is exploited in order to show that an equilibrium can be reached in expected polynomial time even in the case where nodes can only acquire limited knowledge concerning non-neighboring nodes.
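To make the second model concrete, here is a small sketch of improving-swap dynamics under the neighborhood profit above; the graph encoding and the swap-search order are our own illustrative choices, and it is the exact potential proved in the paper that rules out cycling of such dynamics:

```python
def profit(adj, u):
    # Profit of u in the second model: sum of its neighbors' degrees.
    return sum(len(adj[v]) for v in adj[u])

def try_improving_swap(adj, u):
    """Swap one incident edge (u, v) to (u, w) if that raises u's profit."""
    base = profit(adj, u)
    for v in list(adj[u]):
        for w in list(adj):
            if w == u or w in adj[u]:
                continue
            adj[u].remove(v); adj[v].remove(u)   # tentative swap ...
            adj[u].add(w); adj[w].add(u)
            if profit(adj, u) > base:
                return True                      # ... kept: strict improvement
            adj[u].remove(w); adj[w].remove(u)   # ... undone otherwise
            adj[u].add(v); adj[v].add(u)
    return False

def run_to_equilibrium(adj):
    # An exact potential increases with every improving swap, so with
    # finitely many graphs these dynamics must reach a swap equilibrium.
    while any(try_improving_swap(adj, u) for u in adj):
        pass
    return adj

# Example: start from a path on five nodes.
adj = {i: set() for i in range(5)}
for i in range(4):
    adj[i].add(i + 1); adj[i + 1].add(i)
print(run_to_equilibrium(adj))
```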
Abstract: In this work we focus on the energy efficiency challenge in wireless sensor networks, from both an on-line perspective (related to routing) and a network design perspective (related to tracking). We investigate a few representative, important aspects of energy efficiency: a) robust and fast data propagation, b) the problem of balancing the energy dissipation among all sensors in the network, and c) the problem of efficiently tracking moving entities in sensor networks. Our work here is a methodological survey of selected results that have already appeared in the related literature.
In particular, we investigate important issues of energy optimization, like minimizing the total energy dissipation, minimizing the number of transmissions, and balancing the energy load to prolong the system's lifetime. We review characteristic protocols and techniques in the recent literature, including probabilistic forwarding and local optimization methods. We study the problem of localizing and tracking multiple moving targets from a network design perspective, i.e. towards estimating the least possible number of sensors, their positions and operation characteristics needed to efficiently perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a covering combinatorial problem and then design and analyze an efficient approximate method for sensor placement and operation.
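As a toy illustration of the probabilistic forwarding idea surveyed above (the chain topology and the single forwarding probability p are our simplifications of the actual protocols), the following sketch exposes the energy-versus-delivery trade-off:

```python
import random

def simulate(n, p, trials=10000, rng=random.Random(2)):
    """Forward a message over n relays, each retransmitting with probability p.

    Returns (delivery success rate, average number of transmissions), i.e.
    the delivery/energy trade-off that probabilistic forwarding tunes via p.
    """
    delivered, transmissions = 0, 0
    for _ in range(trials):
        for hop in range(n):
            if rng.random() >= p:      # relay stays silent: the message dies
                break
            transmissions += 1
        else:
            delivered += 1             # all n relays forwarded the message
    return delivered / trials, transmissions / trials

for p in (0.9, 0.95, 1.0):
    success, energy = simulate(n=20, p=p)
    print(f"p={p}: delivery={success:.2f}, avg transmissions={energy:.1f}")
```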
Abstract: We consider applications of probabilistic techniques in the framework of algorithmic game theory. We focus on three distinct case studies: (i) the exploitation of the probabilistic method to demonstrate the existence of approximate Nash equilibria of logarithmic support sizes in bimatrix games; (ii) the analysis of the statistical conflict that mixed strategies cause in network congestion games; (iii) the effect of coalitions on the quality of congestion games on parallel links.
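Case study (i) rests on a sampling argument: drawing k = O(log n / eps^2) pure strategies from each player's equilibrium strategy and playing the empirical distributions yields, with positive probability, an eps-equilibrium of support at most k. A sketch of that construction and the regret check (the payoff matrices and the choice of k below are illustrative):

```python
import numpy as np

def sampled_profile(A, B, x, y, k, rng=np.random.default_rng(3)):
    """Empirical mixed strategies from k i.i.d. samples of (x, y)."""
    n, m = A.shape
    xs = rng.choice(n, size=k, p=x)            # sampled row strategies
    ys = rng.choice(m, size=k, p=y)            # sampled column strategies
    x_hat = np.bincount(xs, minlength=n) / k   # uniform over a k-multiset,
    y_hat = np.bincount(ys, minlength=m) / k   # so support size is at most k
    return x_hat, y_hat

def regrets(A, B, x_hat, y_hat):
    """Gain of each player from deviating to a best pure response."""
    row = (A @ y_hat).max() - x_hat @ A @ y_hat
    col = (x_hat @ B).max() - x_hat @ B @ y_hat
    return row, col

# Matching pennies: (x, y) below is an exact equilibrium, and the sampled
# profile is an eps-equilibrium with eps shrinking like sqrt(log(n) / k).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])
print(regrets(A, B, *sampled_profile(A, B, x, y, k=200)))
```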
Abstract: We employ here the Probabilistic Method, a way of reasoning which shows the existence of combinatorial structures and properties, in order to prove or refute conjectures. The radiocoloring problem (RCP) is the problem of assigning frequencies to the transmitters of a network so that transmitters at distance one get frequencies that differ by at least two and any two transmitters at distance two get frequencies that differ by at least one. The objective of an assignment may be to minimize the number of frequencies used (order) or the range of them (span). Here, we study the optimization version of RCP where the objective is to minimize the order. In graph theory terms the problem is modelled by a variation of the vertex graph coloring problem. We investigate upper bounds for the minimum number of colors needed in a radiocoloring assignment of a graph G. We first provide an upper bound for the minimum number of colors needed to radiocolor a graph G of girth at most 7. Then, we study whether the minimum order of a radiocoloring assignment is determined by local conditions, i.e. by the minimum order radiocoloring assignment of some small subgraphs of it. We state a related conjecture which is analogous to a theorem of Molloy and Reed for the vertex coloring problem [12]. We then investigate whether the conjecture contradicts a theorem of Molloy and Reed for vertex coloring when applied to the graph G^2.
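To fix ideas, a greedy assignment under exactly these constraints (adjacent transmitters differ by at least two, distance-two transmitters differ by at least one) looks as follows; it produces a valid radiocoloring, but not, in general, one of the minimum order that the bounds above concern:

```python
def radiocolor(adj):
    """Greedy radiocoloring of a graph given as a dict of adjacency sets."""
    freq = {}
    for v in adj:                      # a fixed vertex order; any order is valid
        banned = set()
        for u in adj[v]:
            if u in freq:              # neighbors must differ by at least 2
                banned.update({freq[u] - 1, freq[u], freq[u] + 1})
            for w in adj[u]:           # distance-two vertices must differ by 1
                if w != v and w in freq:
                    banned.add(freq[w])
        f = 0
        while f in banned:             # smallest frequency not ruled out
            f += 1
        freq[v] = f
    return freq

# Example: a 4-cycle 0-1-2-3-0.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(radiocolor(adj))
```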
Abstract: In this paper we solve a particular instance of a class of differential equations that appears not to be widely documented in the literature and results from the theoretical analysis of a probabilistic key agreement protocol parameterized by an integer k > 1. This class consists of differential equations of the form dy(t)/dt = Σ_{i=0}^{k+1} f_i(t) y(t)^i, with k > 1 and f_i(t), 0 ≤ i ≤ k+1, specific real functions of t. This form appears to generalize the class of Abel differential equations of the first kind in that it involves the sought function raised to powers greater than 3, for k > 2. For k > 2 the class of differential equations is not amenable to the general solution methodology of the Abel class of differential equations. To the best of our knowledge, there are no previous efforts to tackle, analytically, differential equations of this form. In this paper we focus on the case k = 3. We show that the solution to the resulting differential equation may be written through the use of a class of Generalized Hyper-Lambert functions. These functions generalize the well known Lambert W function, which frequently arises in the analysis of engineering applications. The solution to the differential equation is achieved, interestingly, through a reduction to a particular real root of the polynomial of degree four that appears in the right-hand side of the differential equation. This technique can be of independent interest and it can be, more generally, applicable to other differential equations of a similar form not amenable to known analytic techniques.
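For readers unfamiliar with it, the Lambert W function is the inverse of the map z e^z; a quick numerical check of the defining identity, using SciPy's implementation (the Hyper-Lambert generalizations in the paper solve richer equations of this flavor):

```python
import numpy as np
from scipy.special import lambertw

# Principal branch W_0: W(z) * exp(W(z)) = z.
w = lambertw(5.0).real                  # lambertw returns a complex value
print(np.isclose(w * np.exp(w), 5.0))   # True: W(5) e^{W(5)} = 5

# Sample use: solve x * e^x = 5 for x directly via x = W(5).
print(w)                                # approximately 1.3267
```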
Abstract: The problem of determining the unsatisfiability threshold for random 3-SAT formulas consists in determining the clause-to-variable ratio that marks the experimentally observed abrupt change from almost surely satisfiable formulas to almost surely unsatisfiable ones. Up to now, there have been rigorously established increasingly better lower and upper bounds for the actual threshold value. In this paper, we consider the problem of bounding the threshold value from above using methods that, we believe, are of interest in their own right. More specifically, we show how the method of local maximum satisfying truth assignments can be combined with results for the occupancy problem in schemes of random allocation of balls into bins in order to achieve an upper bound for the unsatisfiability threshold less than 4.571. In order to obtain this value, we establish a bound on the q-binomial coefficients (a generalization of the binomial coefficients). No such bound was previously known, despite the extensive literature on q-binomial coefficients. Finally, to prove our result we had to establish certain relations among the conditional probabilities of an event in various probabilistic models for random formulas. It turned out that these relations were considerably harder to prove than the corresponding ones for unconditional probabilities, which were previously known.
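For reference, the q-binomial (Gaussian) coefficients mentioned above have a standard product form that is easy to evaluate numerically; the snippet below is purely illustrative and is not the bound proved in the paper:

```python
from math import prod  # Python 3.8+

def q_binomial(n, k, q):
    """Gaussian binomial coefficient, via its standard product form:

    [n choose k]_q = prod_{i=0}^{k-1} (1 - q^(n-i)) / (1 - q^(i+1)).
    """
    return prod((1 - q ** (n - i)) / (1 - q ** (i + 1)) for i in range(k))

print(q_binomial(4, 2, 0.5))      # 2.1875
print(q_binomial(4, 2, 0.9999))   # as q -> 1 this tends to C(4, 2) = 6
```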