Abstract: The management of Grid resources requires scheduling of both computation and communication tasks at various levels. In this study, we consider the two constituent sub-problems of Grid scheduling, namely: (i) the scheduling of computation tasks to processing resources and (ii) the routing and scheduling of the data movement in a Grid network. Regarding computation tasks, we examine two typical online task scheduling algorithms that employ advance reservations and perform full network simulation experiments to measure their performance when implemented in a centralized or distributed manner. Similarly, for communication tasks, we compare two routing and data scheduling algorithms that are implemented in a centralized or a distributed manner. We examine the effect network propagation delay has on the performance of these algorithms. Our simulation results indicate that a distributed architecture with an exhaustive resource utilization update strategy yields better average end-to-end delay performance than a centralized architecture.

Abstract: Core networks of the future will have a translucent and eventually transparent optical structure. Ultra-high-speed end-to-end connectivity with high quality of service and high reliability will be realized through the exploitation of optimized protocols and lightpath routing algorithms. These algorithms will complement a flexible control and management plane integrated in the proposed solution. Physical layer impairments and optical performance are monitored and incorporated in impairment-aware lightpath routing algorithms. These algorithms will be integrated into a novel dynamic network planning tool that will consider dynamic traffic characteristics, a reconfigurable optical layer, and varying physical impairment and component characteristics. The network planning tool along with extended control planes will make it possible to realize the vision of optical transparency. This article presents a novel framework that addresses dynamic cross-layer network planning and optimization while considering the development of a future transport network infrastructure.

Abstract: Load balancing/sharing is a policy that exploits the communication facility between the servers of a distributed system, exchanging status information and jobs between any two servers of the system in order to improve the performance of the whole system. In this work, we propose a new adaptive distributed hierarchical scheme, the Virtual Tree Algorithm (VTA), which creates a virtual binary tree structure over the actual network topology. It uses the Difference-Initiated (DI) technique ([11, 1]) for load balancing/sharing, which needs remote information for the transfer policy and no additional information for the location policy. We demonstrate that the introduced virtual construction keeps the number of exchanged messages comparable to that of previously known efficient algorithms. To support this claim and evaluate the performance of our policy, we use both analytical and simulation results. Using the simulation model that we developed, we compared our results with one of the most representative recent adaptive, symmetrical, distributed, and efficient algorithms, the Variable Threshold (V_THR) algorithm.

Abstract: This work addresses networked embedded systems enabling the seamless interconnection of smart building automations to the Internet and their abstraction as web services. In our approach, such abstractions are used primarily to create a flexible, holistic, and scalable system and to allow external end-users to compose and run their own smart/green building automation application services on top of this system. Towards this direction, in this paper we present a smart building test-bed consisting of several sensor motes and spanning seven rooms. Our test-bed's design and implementation simultaneously addresses several corresponding system layers: from hardware interfaces, embedded IPv6 networking, and energy-balancing routing algorithms to a RESTful architecture and the over-the-web development of sophisticated, smart, green scenarios. In fact, we showcase how IPv6 embedded networking combined with RESTful architectures makes the creation of building automation applications as easy as creating any other Internet web service.

Abstract: In this paper, a new service-oriented networking paradigm is presented, where network nodes (peers) self-organize into individual service entities. The key idea relies on the overlay approach, where there exists a virtual service plane, fragmented into self-organized and self-managed entities called islands of service transparency. The islands are formed in an upstream, ad-hoc mode from the non-networking resources (e.g., VoD, grid server, etc.) towards all ingress routers of the network, using link state advertisements and multi-cost path selection algorithms (e.g., residual bandwidth, server capacity, storage, etc.). Organization and re-organization of nodes around non-network resources is transparent to end-users, and thus any request within a specific service island is transparently routed to the island's resource for execution. A service proxy is commissioned to resolve service addresses and service attributes to QoS metrics. In this paper, we present the main notations and metrics of the proposed architecture as well as node behavior and potential GMPLS extensions for implementation issues.

Abstract: We present a simple parallel algorithm for the single-source shortest path problem in planar digraphs with nonnegative real edge weights. The algorithm runs on the EREW PRAM model of parallel computation in O((n^{2ε} + n^{1−ε}) log n) time, performing O(n^{1+ε} log n) work for any 0 < ε < 1/2. The strength of the algorithm is its simplicity, making it easy to implement and presumably quite efficient in practice. The algorithm improves upon the work of all previous parallel algorithms. Our algorithm is based on a region decomposition of the input graph and uses a well-known parallel implementation of Dijkstra's algorithm. The logarithmic factor in both the work and the time can be eliminated by plugging in a less practical, sequential planar shortest path algorithm together with an improved parallel implementation of Dijkstra's algorithm.
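The sequential building block referenced above is Dijkstra's algorithm. As a point of reference only (this is a minimal sequential sketch, not the parallel EREW PRAM implementation the abstract describes, and the graph encoding is an assumption):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with nonnegative edge weights.

    graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances; unreachable vertices are absent.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The parallel variant works region by region on the planar decomposition; the inner relaxation loop is what the parallel implementation distributes.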

Abstract: We present an improved upper bound on the competitiveness of the online colouring algorithm First-Fit in disk graphs, which
are graphs representing overlaps of disks on the plane. We also show that this bound is best possible for deterministic online
colouring algorithms that do not use the disk representation of the input graph. We also present a related new lower bound for unit
disk graphs.
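First-Fit itself is the simplest online colouring rule: each arriving vertex receives the smallest colour not used by its already-coloured neighbours. A minimal generic sketch (not specific to disk graphs, which is exactly the setting the bound above addresses):

```python
def first_fit_colouring(vertices, adjacency):
    """Online First-Fit: colour vertices in arrival order with the
    smallest colour absent from their already-coloured neighbours.

    adjacency: dict vertex -> set of neighbours.
    Returns dict vertex -> colour (colours are 0, 1, 2, ...).
    """
    colour = {}
    for v in vertices:  # arrival order matters: the algorithm is online
        used = {colour[u] for u in adjacency.get(v, set()) if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

Note that the algorithm never sees the disk representation, only the graph, which is the model in which the lower bound above applies.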

Abstract: We consider the problem of planning a mixed line
rates (MLR) wavelength division multiplexing (WDM) transport
optical network. In such networks, different modulation formats
are usually employed to support the transmission at different line
rates. Previously proposed planning algorithms have used a
transmission reach limit for each modulation format/line rate,
mainly driven by single line rate systems. However, transmission
experiments in MLR networks have shown that physical layer
interference phenomena are more significant between
transmissions that utilize different modulation formats. Thus, the
transmission reach of a connection with a specific modulation
format/line rate also depends on the other connections that co-propagate
with it in the network. To plan an MLR WDM network,
we present routing and wavelength assignment (RWA)
algorithms that take into account the adaptation of the
transmission reach of each connection according to the use of the
modulation formats/line rates in the network. The proposed
algorithms are able to plan the network so as to alleviate
interference effects, enabling the establishment of connections of
acceptable quality over paths that would otherwise be prohibited.

Abstract: For many random Constraint Satisfaction Problems, by now, we have asymptotically tight estimates of the largest constraint density for which they have solutions. At the same time, all known polynomial-time algorithms for many of these problems already fail completely to find solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms already achieve this goal. Given the simplicity of those algorithms, one would expect there is a lot of room for improvement. Yet, to date, no algorithm is known that uses (2 − ε)χ colors, in spite of efforts by numerous researchers over the years.
In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we believe it is natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1, …, k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly, the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ε)χ. We prove that a completely analogous phase transition also occurs both in random k-SAT and in random hypergraph 2-coloring, and that for each problem, its location corresponds precisely with the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to rigorously prove much of the celebrated 1-step Replica-Symmetry-Breaking hypothesis of statistical physics for random CSPs.

Abstract: For a number of optimization problems on random graphs
and hypergraphs, e.g., k-colorings, there is a very big gap between the
largest average degree for which known polynomial-time algorithms can
find solutions, and the largest average degree for which solutions provably
exist. We study this phenomenon by examining how sets of solutions
evolve as edges are added. We prove in a precise mathematical sense that,
for each problem studied, the barrier faced by algorithms corresponds
to a phase transition in the problem's solution-space geometry. Roughly
speaking, at some problem-specific critical density, the set of solutions
shatters and goes from being a single giant ball to exponentially many,
well-separated, tiny pieces. All known polynomial-time algorithms work
in the ball regime, but stop as soon as the shattering occurs. Besides
giving a geometric view of the solution space of random instances, our
results provide novel constructions of one-way functions.

Abstract: As a result of recent significant technological advances, a new computing and communication environment, Mobile Ad Hoc Networks (MANET), is about to enter the mainstream. A multitude of critical aspects, including mobility, severe limitations and limited reliability, create a new set of crucial issues and trade-offs that must be carefully taken into account in the design of robust and efficient algorithms for these environments. The communication among mobile hosts is one among the many issues that need to be resolved efficiently before MANET becomes a commodity.
In this paper, we propose to discuss the communication problem in MANET as well as present some characteristic techniques for the design, the analysis and the performance evaluation of distributed communication protocols for mobile ad hoc networks. More specifically, we propose to review two different design techniques. While the first type of protocols tries to create and maintain routing paths among the hosts, the second set of protocols uses a randomly moving subset of the hosts that acts as an intermediate pool for receiving and delivering messages. We discuss the main design choices for each approach, along with performance analysis of selected protocols.

Abstract: An ever-growing emphasis is nowadays put on developing personalized journey planning and renewable mobility services in smart cities. These services combine means of scheduled-based public transport and electric vehicles or bikes, using crowdsourcing techniques for collecting real-time traffic information and for assessing the recommended routes. The goal is to develop an information system that will allow the fast, real-time computation of best routes.
The main challenges in developing such an information system are both technological and algorithmic. The technological challenge concerns the collection, storage, management, and updating of a huge volume of transport data that are usually time-dependent, and the provision (through these data) of personalized renewable mobility services in smartphones. This challenge is typically confronted by creating a cloud infrastructure that on the one hand will support the storage, management, and updating of data, while on the other hand it will handle the necessary data feed to the smartphone applications for providing the users with the requested best routes.
The algorithmic challenge concerns the development of innovative algorithms for the efficient provision of journey planning services in smartphones, based on data they will receive from the cloud infrastructure. These services guarantee the computation of realistic and useful best routes, as well as the updating of the precomputed (route and timetable) information in case of delays of scheduled public transport vehicles, so that the users can update their routes to destination online. The goal is to develop an algorithmic basis for supporting modern renewable mobility services (information systems), such as "mobility on demand'' (where the next leg of a journey is decided in real-time) and "door-to-door'' personalized mobility, in urban scheduled-based public transport environments. Scheduled-based public transport information systems should not only answer end-user queries requesting best routes in real time, but also update the timetable information in case of delays.
The core algorithmic issues of mobility and journey planning (regarding the computation of optimal routes under certain criteria) in scheduled-based public transport systems concern the efficient solution of the fundamental earliest arrival (EA) problem (compute a journey from station S to station T minimizing the overall traveling time required to complete the journey), the minimum number of transfers (MNT) problem (compute a journey from station S to station T minimizing the number of times a passenger is required to change vehicle), and the efficient updating of the timetable information system in case of vehicle delays. The EA and MNT problems have been extensively studied in the literature under two main approaches: the array-based modeling (where the timetable is represented as an array) and the graph-based modeling (where the timetable is represented as a graph). Experimental results have shown so far that the array-based approaches are faster in terms of query time than graph-based ones, as they are able to better exploit data locality and do not rely on priority queues. On the other hand, the array-based approaches have not been theoretically or experimentally studied as far as the efficient updating of timetable information, in case of delays, is concerned.
In this thesis, new graph-based models are being developed that solve efficiently the aforementioned fundamental algorithmic mobility problems in urban scheduled-based public transport information systems, along with a mobile application (journey planner) running on Android-based smartphones that includes a service for the evaluation of the recommended routes by the users. In particular:
(a) An extensive comparative evaluation was conducted on graph-based dynamic models that represent big data volumes regarding their suitability for representing timetable information. The study confirmed that the realistic time-expanded model is the most suitable for representing timetable information.
(b) Two new graph-based models have been developed for representing timetable information (in a timetable information system), the reduced time-expanded model and the dynamic timetable model (DTM), both of which are more space-efficient with respect to the realistic time-expanded model. For both of the new models, new efficient algorithms were developed for fast answering of EA and MNT queries, as well as for updating the timetable information representation in case of delays.
(c) An experimental evaluation was conducted with the new graph-based models and their associated query and update algorithms on a set of 14 real-world scheduled-based transportation systems, including the metropolitan areas of Berlin, Athens, London, Rome, and Madrid. The experimental results showed that the query algorithms of the reduced time-expanded model are superior to those of the DTM model, while the reverse is true regarding the update algorithms. In addition, the experimental study showed that the query algorithms of the new graph-based models compete favorably with those of the best array-based models.
(d) A mobile, cloud-based, journey planner (information system) was developed whose core algorithmic engine builds upon the new graph-based models. The mobile application is accompanied by a service that allows the users to assess the recommended journeys. The journey planner demonstrates the practicality of the new graph-based models and their associated query and update algorithms.
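The EA problem above (compute a journey minimizing overall traveling time) can be illustrated on a toy timetable. The sketch below is a simple sweep over time-sorted connections, for intuition only; it is not one of the thesis's graph-based models (reduced time-expanded or DTM), and the tuple encoding of a connection is an assumption:

```python
def earliest_arrival(connections, source, target, start_time):
    """Earliest-arrival query on a toy timetable.

    connections: list of (dep_station, arr_station, dep_time, arr_time)
    tuples, assumed sorted by departure time. One linear sweep relaxes
    each connection that is reachable in time.
    Returns the earliest arrival time at target, or inf if unreachable.
    """
    INF = float("inf")
    arrival = {source: start_time}
    for dep_s, arr_s, dep_t, arr_t in connections:
        # the connection is usable if we can be at dep_s by dep_t
        if arrival.get(dep_s, INF) <= dep_t and arr_t < arrival.get(arr_s, INF):
            arrival[arr_s] = arr_t
    return arrival.get(target, INF)
```

A delay would simply shift the affected tuples, after which the sweep is rerun; the thesis's update algorithms avoid such full recomputation.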

Abstract: In this work we study the important problem of colouring squares of planar graphs (SQPG). We design and implement two new algorithms that colour SQPG in different ways. We call these algorithms MDsatur and RC. We have also implemented and experimentally evaluated the performance of most of the known approximation colouring algorithms for SQPG [14, 6, 4, 10]. We compare the quality of the colourings achieved by these algorithms with the colourings obtained by our algorithms and with the results obtained from two well-known greedy colouring heuristics. The heuristics are mainly used for comparison reasons and unexpectedly give very good results. Our algorithm MDsatur outperforms the known algorithms as shown by the extensive experiments we have carried out.
The planar graph instances whose squares are used in our experiments are “non-extremal” graphs obtained by LEDA and hard colourable graph instances that we construct.
The most interesting conclusions of our experimental study are: 1) all colouring algorithms considered here have almost optimal performance on the squares of "non-extremal" planar graphs; 2) all known colouring algorithms especially designed for colouring SQPG give significantly better results, even on hard-to-colour graphs, when the vertices of the input graph are randomly named; on the other hand, the performance of our algorithm, MDsatur, becomes worse in this case, yet it still has the best performance compared to the others. MDsatur colours the tested graphs with 1.1·OPT colours in most of the cases, even on hard instances, where OPT denotes the number of colours in an optimal colouring; 3) we construct worst-case instances for the algorithm of Fotakis et al. [6], which show that its theoretical analysis is tight.
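The greedy heuristics used for comparison operate on the square of the input graph. A minimal sketch (this is neither MDsatur nor RC; the largest-degree-first vertex order is an assumed heuristic choice):

```python
def square(adj):
    """Adjacency of G^2: vertices at distance <= 2 in G become adjacent.

    adj: dict vertex -> set of neighbours in G.
    """
    sq = {v: set(ns) for v, ns in adj.items()}
    for v, ns in adj.items():
        for u in ns:
            sq[v] |= adj[u] - {v}  # add all two-hop neighbours of v
    return sq

def greedy_colouring(adj):
    """Greedy colouring in largest-degree-first order: each vertex gets
    the smallest colour not used by its already-coloured neighbours."""
    colour = {}
    for v in sorted(adj, key=lambda x: -len(adj[x])):
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

For example, the square of a star K_{1,3} is the complete graph K_4, so any proper colouring of it needs four colours.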

Abstract: Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar, and sparse networks. The approach used is to preprocess the input n-vertex network so that, afterward, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an O(n log n) preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time O(n²). This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, γ, of the input network. The parameter γ varies between 1 and Θ(n); the algorithms perform well when γ = o(n). The value of a min-cut can be found in time O(n + γ² log γ) and all-pairs min-cut can be solved in time O(n² + γ⁴ log γ) for sparse networks. The corresponding running times for planar networks are O(n + γ log γ) and O(n² + γ³ log γ), respectively. The latter bounds depend on a result of independent interest: outerplanar networks have small “mimicking” networks that are also outerplanar.

Abstract: In this paper, we consider the problem of energy-balanced data propagation in wireless sensor networks and we generalise previous works by allowing realistic energy assignment. A new modelling of the process of energy consumption as a random walk, along with a new analysis, is proposed. Two new algorithms are presented and analysed. The first one is easy to implement and fast to execute, but it needs a priori assumptions on the process generating the data to be propagated. The second algorithm overcomes this need by inferring information from the observation of the process. Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes. This represents an important contribution to energy-balanced data propagation in wireless sensor networks, due to their highly dynamic nature.

Abstract: We have conducted an extensive experimental study on algorithms for fully dynamic transitive
closure. We have implemented the recent fully dynamic algorithms by King [1999], Roditty [2003],
Roditty and Zwick [2002, 2004], and Demetrescu and Italiano [2000, 2005] along with several variants
and compared them to pseudo fully dynamic and simple-minded algorithms developed in a
previous study [Frigioni et al. 2001]. We tested and compared these implementations on random inputs,
synthetic (worst-case) inputs, and on inputs motivated by real-world graphs. Our experiments
reveal that some of the dynamic algorithms can really be of practical value in many situations.

Abstract: We perform an extensive experimental study of several dynamic algorithms for transitive closure. In particular, we implemented algorithms given by Italiano, Yellin, Cicerone et al., and two recent randomized algorithms by Henzinger and King. We propose a fine-tuned version of Italiano's algorithms as well as a new variant of them, both of which were always faster than any of the other implementations of the dynamic algorithms. We also considered simple-minded algorithms that were easy to implement and likely to be fast in practice. We tested and compared the above implementations on random inputs, on non-random inputs that are worst-case inputs for the dynamic algorithms, and on an input motivated by a real-world graph.

Abstract: The “small world” phenomenon, i.e., the fact that the global social network is strongly connected in the sense that every two persons are inter-related through a small chain of friends, has attracted research attention and has been strongly related to the results of the social psychologist Stanley Milgram's experiments; properties of social networks and relevant problems also emerge in peer-to-peer systems, and their study can shed light on important modern network design properties.
In this paper, we have experimentally studied greedy routing algorithms, i.e., algorithms that route information using “long-range” connections that function as shortcuts connecting “distant” network nodes. In particular, we have implemented greedy routing algorithms and techniques from the recent literature in networks of line and grid topology, using parallelization for increasing efficiency. To the best of our knowledge, no similar attempt has been made so far.
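The greedy routing idea can be illustrated on a ring augmented with long-range shortcut links: at each step the packet moves to the neighbour closest to the target. This toy sketch (the ring topology and the at-most-one-shortcut-per-node layout are assumptions, and it is sequential, not the parallelized implementation of the paper):

```python
def greedy_route(n, shortcuts, source, target):
    """Greedy routing on an n-node ring with long-range links.

    shortcuts: dict node -> extra neighbour reachable via a shortcut.
    At each step, move to the neighbour (ring or shortcut) closest to
    the target in ring distance. Returns the route as a list of nodes.
    """
    def ring_dist(a, b):
        d = abs(a - b)
        return min(d, n - d)

    route = [source]
    current = source
    while current != target:
        neighbours = [(current + 1) % n, (current - 1) % n]
        if current in shortcuts:
            neighbours.append(shortcuts[current])
        # a ring neighbour always reduces the distance, so this terminates
        current = min(neighbours, key=lambda v: ring_dist(v, target))
        route.append(current)
    return route
```

With a shortcut 0 → 8 on a 16-node ring, a packet from node 0 to node 9 jumps across the ring and then walks one hop.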

Abstract: A vertical perspective, ranging from management and routing to physical layer options, concerning dynamic network monitoring and compensation of impairments (M&C), is given. Feasibility, reliability, and performance improvements in reconfigurable transparent networks are expected to arise from the consolidated assessment of network management and control specifications, as well as from a more accurate evaluation of available M&C techniques. In the network layer, physical-parameter-aware algorithms are foreseen to pursue reliable network performance. In the physical layer, some new M&C methods were developed, and a rating of the state of the art reported in the literature is given. Optical monitoring implementation and viability is discussed.

Abstract: Urban road networks are represented as directed graphs, accompanied by a metric which assigns cost functions (rather than scalars) to the arcs, e.g., representing time-dependent arc-traversal times. In this work, we present oracles for providing time-dependent min-cost route plans, and conduct their experimental evaluation on a real-world data set (city of Berlin). Our oracles are based on precomputing all landmark-to-vertex shortest travel-time functions, for properly selected landmark sets. The core of this preprocessing phase is a novel, quite efficient and simple one-to-all approximation method for creating approximations of shortest travel-time functions. We then propose three query algorithms, including a PTAS, to efficiently provide min-cost route plan responses to arbitrary queries. Apart from the purely algorithmic challenges, we also deal with several implementation details concerning the digestion of raw traffic data, and we provide heuristic improvements of both the preprocessing phase and the query algorithms. We conduct an extensive, comparative experimental study with all query algorithms and six landmark sets. Our results are quite encouraging, achieving remarkable speedups (at least two orders of magnitude) and quite small approximation guarantees over the time-dependent variant of Dijkstra's algorithm.

Abstract: We present new combinatorial approximation algorithms for k-set cover. Previous approaches are based on extending the greedy algorithm by efficiently handling small sets. The new algorithms further extend them by utilizing the natural idea of computing large packings of elements into sets of large size. Our results improve the previously best approximation bounds for the k-set cover problem for all values of k ≥ 6. The analysis technique could be of independent interest; the upper bound on the approximation factor is obtained by bounding the objective value of a factor-revealing linear program.
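The baseline that these approaches extend is the classic greedy rule: repeatedly pick the set covering the most still-uncovered elements. A minimal sketch of that rule (not the new packing-based algorithms of the paper):

```python
def greedy_set_cover(universe, sets):
    """Classic greedy set cover.

    universe: set of elements; sets: list of sets whose union must
    contain the universe. Returns the indices of the chosen sets.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the set covering the most still-uncovered elements
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

In k-set cover every set has size at most k; the improvements above come from treating large sets via packing phases before falling back to greedy choices.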

Abstract: We study the following Constrained Bipartite Edge Coloring problem: we are given a bipartite graph G = (U, V, E) of maximum degree Δ with n vertices, in which some of the edges have been legally colored with c colors. We wish to complete the coloring of the edges of G minimizing the total number of colors used. The problem has been proved to be NP-hard even for bipartite graphs of maximum degree three. Two special cases of the problem have been previously considered, and tight upper and lower bounds on the optimal number of colors were proved. The upper bounds led to 3/2-approximation algorithms for both problems. In this paper we present a randomized (1.37 + o(1))-approximation algorithm for the general problem in the case where max{Δ, c} = ω(ln n). Our techniques are motivated by recent works on the Circular Arc Coloring problem and are essentially different and simpler than the existing ones.

Abstract: Motivated by the wavelength assignment problem in WDM optical networks, we study path coloring problems in graphs. Given a set of paths P on a graph G, the path coloring problem is to color the paths of P so that no two paths traversing the same edge of G are assigned the same color and the total number of colors used is minimized. The problem has been proved to be NP-hard even for trees and rings.
Using optimal solutions to fractional path coloring, a natural relaxation of path coloring, on which we apply a randomized rounding technique combined with existing coloring algorithms, we obtain new upper bounds on the minimum number of colors sufficient to color any set of paths on any graph. The upper bounds are either existential or constructive.
The existential upper bounds significantly improve existing ones provided that the cost of the optimal fractional path coloring is sufficiently large and the dilation of the set of paths is small. Our algorithmic results include improved approximation algorithms for path coloring in rings and in bidirected trees. Our results extend to variations of the original path coloring problem arising in multifiber WDM optical networks.

Abstract: We consider approval voting elections in which each voter votes for a (possibly empty) set of candidates and the outcome consists of a set of k candidates for some parameter k, e.g., committee elections. We are interested in the minimax approval voting rule in which the outcome represents a compromise among the voters, in the sense that the maximum distance between the preference of any voter and the outcome is as small as possible. This voting rule has two main drawbacks. First, computing an outcome that minimizes the maximum distance is computationally hard. Furthermore, any algorithm that always returns such an outcome provides
incentives to voters to misreport their true preferences.
In order to circumvent these drawbacks, we consider approximation algorithms, i.e., algorithms that produce an outcome that approximates the minimax distance for any given instance. Such algorithms can be considered as alternative voting rules. We present a polynomial-time 2-approximation algorithm that uses a natural linear programming relaxation for the underlying optimization problem and deterministically
rounds the fractional solution in order to compute the outcome; this result improves upon the previously best known algorithm that has an approximation ratio of 3. We are furthermore interested in approximation algorithms that are resistant to manipulation by (coalitions of) voters, i.e., algorithms that do not motivate voters to misreport their true preferences in order to improve their distance from the outcome. We complement previous results in the literature with new upper and lower bounds on strategyproof and group-strategyproof algorithms.
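The minimax objective can be made concrete with a small sketch: the score of an outcome is its maximum Hamming distance to any ballot, and a brute-force enumeration (exponential in general, which is exactly why approximation algorithms are of interest) finds an exact minimax committee. All names below are illustrative:

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two approval ballots (sets of candidates)."""
    return len(a ^ b)  # size of the symmetric difference

def minimax_score(outcome, ballots):
    """Maximum distance from any voter's ballot to the outcome; the
    quantity the minimax approval rule minimises over size-k outcomes."""
    return max(hamming(outcome, b) for b in ballots)

def brute_force_minimax(candidates, ballots, k):
    """Exact minimax outcome by enumerating all size-k committees."""
    return min((frozenset(c) for c in combinations(candidates, k)),
               key=lambda o: minimax_score(o, ballots))
```

An approximation algorithm replaces the enumeration with a polynomial-time procedure whose score is guaranteed to be within the stated factor of the optimum.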

Abstract: The study of the path coloring problem is motivated by the allocation of optical bandwidth to communication requests in all-optical networks that utilize Wavelength Division Multiplexing (WDM). WDM technology establishes communication between pairs of network nodes by establishing transmitter-receiver paths and assigning wavelengths to each path so that no two paths going through the same fiber link use the same wavelength. Optical bandwidth is the number of distinct wavelengths. Since state-of-the-art technology allows for a limited number of wavelengths, the engineering problem to be solved is to establish communication minimizing the total number of wavelengths used. This is known as the wavelength routing problem. In the case where the underlying network is a tree, it is equivalent to the path coloring problem.
We survey recent advances on the path coloring problem in both undirected and bidirected trees. We present hardness results and lower bounds for the general problem covering also the special case of sets of symmetric paths (corresponding to the important case of symmetric communication). We give an overview of the main ideas of deterministic greedy algorithms and point out their limitations. For bidirected trees, we present recent results about the use of randomization for path coloring and outline approximation algorithms that find path colorings by exploiting fractional path colorings. Also, we discuss upper and lower bounds on the performance of on-line algorithms.
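
The simplest instance of the wavelength routing problem, paths on a chain network rather than a tree, can illustrate the greedy first-fit idea the survey discusses; the instance below is invented for illustration:

```python
def first_fit_path_coloring(paths):
    """Greedy wavelength assignment for paths on a chain network.
    Each path (left, right) uses links left..right-1. Processing paths
    by left endpoint, first-fit assigns the lowest wavelength unused
    by any overlapping path; on a chain this is optimal (it uses
    exactly the maximum link load)."""
    coloring = {}
    for p in sorted(paths, key=lambda p: p[0]):
        used = {coloring[q] for q in coloring
                if q[0] < p[1] and p[0] < q[1]}  # q shares a link with p
        c = 0
        while c in used:
            c += 1
        coloring[p] = c
    return coloring

colors = first_fit_path_coloring([(0, 3), (2, 5), (4, 7), (0, 7)])
```

On trees and bidirected trees the greedy approach is no longer optimal and loses constant factors, which is where the hardness results and randomized algorithms of the survey enter.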

Abstract: We present a 40 Gb/s asynchronous self-routing network and node architecture that exploits bit
and packet level optical signal processing to perform synchronization, forwarding and
switching. Optical packets are self-routed on a hop-by-hop basis through the network by using
stacked optical tags, each representing a specific optical node. Each tag contains control signals
for configuring the switching matrix and forwarding each packet to the appropriate outgoing
link and onto the next hop. Physical layer simulations are performed, modeling each optical
subsystem of the node and showing acceptable signal quality and Bit Error Rates. Resource
reservation-based signaling algorithms are theoretically modeled for the control plane, capable
of providing high performance in terms of blocking probability and holding time.
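
The stacked-tag self-routing idea can be sketched in a few lines of Python (a toy model: node names, ports, and the topology are invented, and all optical-layer detail is abstracted away). Each node pops the top tag of the stack and uses it to select the outgoing link, so no routing-table lookup is needed inside the network:

```python
def forward(packet, topology, source):
    """Self-route a packet hop by hop: at every node the top tag of the
    stack is popped and selects the outgoing link toward the next hop."""
    node, tags = source, list(packet["tags"])
    trace = [node]
    while tags:
        port = tags.pop(0)           # pop the top of the tag stack
        node = topology[node][port]  # switch matrix maps port -> next hop
        trace.append(node)
    return trace

# Toy 4-node topology; the tag stack [1, 0] steers the packet A -> C -> D.
topology = {"A": {0: "B", 1: "C"}, "B": {0: "D"}, "C": {0: "D"}, "D": {}}
trace = forward({"tags": [1, 0]}, topology, "A")
```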

Abstract: In this paper, we present BAD, an application-level multicast infrastructure. BAD is designed to improve the performance of multicast dissemination trees, under both a static and a dynamic environment, where the effective bandwidth of the network links changes with time. Its main goal is to improve the data rate that end users perceive during a multicast operation. BAD can be used for the creation and management of multicast groups. It can be deployed over any DHT while retaining its fundamental advantage of bandwidth improvement. BAD consists of a suite of algorithms for node joins/leaves, bandwidth distribution to heterogeneous nodes, tree rearrangement and reduction of overhead. We have implemented BAD within the FreePastry system. We report on the results of a detailed performance evaluation which testifies to BAD's efficiency and low overhead. Specifically, our experiments show that the improvement on the minimum bandwidth ranges from 40% to 1400% and the improvement on the average bandwidth ranges from 60% to 250%.

Abstract: We study the problem of scheduling permanent jobs on unrelated machines when the objective is to minimize the Lp norm of the machine loads. The problem is known as load balancing under the Lp norm. We present an improved upper bound for the greedy algorithm through simple analysis; this bound is also shown to be best possible within the class of deterministic online algorithms for the problem. We also address the question whether randomization helps online load balancing under Lp norms on unrelated machines; this is a challenging question which has been open for more than a decade even for the L2 norm. We provide a positive answer to this question by presenting the first randomized online algorithms which outperform deterministic ones under any (integral) Lp norm for p = 2, ..., 137. Our algorithms essentially compute in an online manner a fractional solution to the problem and use the fractional values to make random choices. The local optimization criterion used at each step is novel and rather counterintuitive: the values of the fractional variables for each job correspond to flows at an approximate Wardrop equilibrium for an appropriately defined non-atomic congestion game. As corollaries of our analysis, and by exploiting the relation between the Lp norm and the makespan of machine loads, we obtain new competitive algorithms for online makespan minimization, making progress on another longstanding open problem.
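
For contrast with the randomized algorithms of the abstract, a natural greedy algorithm of the kind analyzed there can be sketched as follows (a simplified model with explicit job-size vectors; the instance is invented). Each arriving job goes to the machine whose resulting load vector has the smallest Lp norm:

```python
def greedy_lp(jobs, m, p):
    """Online greedy load balancing on unrelated machines: assign each
    job to the machine that minimizes the resulting Lp norm of loads.
    jobs[i][j] is the size of job i if placed on machine j."""
    loads = [0.0] * m
    assignment = []
    for sizes in jobs:
        def norm_p(j):
            # p-th power of the Lp norm if the job goes to machine j.
            return sum((loads[i] + (sizes[i] if i == j else 0)) ** p
                       for i in range(m))
        j = min(range(m), key=norm_p)
        loads[j] += sizes[j]
        assignment.append(j)
    return assignment, loads

# Two machines, L2 norm: the third job is cheap on machine 1, so greedy
# sends it there even though machine 0 served the first two jobs.
assignment, loads = greedy_lp([[1, 2], [1, 2], [3, 1]], 2, 2)
```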

Abstract: Divisible load scenarios occur in modern media server applications since most multimedia applications typically require access to continuous and discrete data. A high performance Continuous Media (CM) server greatly depends on the ability of its disk IO subsystem to serve both types of workloads efficiently. Disk scheduling algorithms for mixed media workloads, although they play a central role in this task, have been overlooked by related research efforts. These algorithms must satisfy several stringent performance goals, such as achieving low response time and ensuring fairness, for the discrete-data workload, while at the same time guaranteeing the uninterrupted delivery of continuous data, for the continuous-data workload. The focus of this paper is on disk scheduling algorithms for mixed media workloads in a multimedia information server. We propose novel algorithms, present a taxonomy of relevant algorithms, and study their performance through experimentation. Our results show that our algorithms offer drastic improvements in discrete request average response times, are fair, serve continuous requests without interruptions, and that the disk technology trends are such that the expected performance benefits can be even greater in the future.

Abstract: The problem of communication among mobile nodes is one of the most fundamental problems in ad hoc mobile networks and is at the core of many algorithms, such as for counting the number of nodes, electing a leader, data processing, etc. The work of Chatzigiannakis, Nikoletseas and Spirakis provides an exposition of several important problems in ad hoc mobile networks; it focuses on wireless mobile networks that are subject to highly dynamic structural changes created by mobility, channel fluctuations and device failures. These changes affect topological connectivity, occur with high frequency and may not be predictable in advance. Therefore, the environment where the nodes move (in three-dimensional space with possible obstacles) as well as the motion that the nodes perform are input to any distributed algorithm.

Abstract: We study the problem of maintaining connectivity in a wireless
network where the network nodes are equipped with
directional antennas. Nodes correspond to points on the
plane and each uses a directional antenna modeled by a sector
with a given angle and radius. The connectivity problem
is to decide whether or not it is possible to orient the antennas
so that the directed graph induced by the node transmissions
is strongly connected. We present algorithms for
simple polynomial-time-solvable cases of the problem, show
that the problem is NP-complete in the 2-dimensional case
when the sector angle is small, and present algorithms that
approximate the minimum radius to achieve connectivity for
sectors with a given angle. We also discuss several extensions
to related problems. To the best of our knowledge, the
problem has not been studied before in the literature.
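
A sketch of the decision ingredient of the problem: checking strong connectivity once the orientations are fixed (choosing the orientations is exactly what the abstract's algorithms address). The geometry and the example instance below are invented for illustration:

```python
import math

def in_sector(src, dst, direction, angle, radius):
    # Is dst inside src's antenna sector (apex at src)?
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > radius:
        return False
    diff = (math.atan2(dy, dx) - direction + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= angle / 2

def strongly_connected(points, orientations, angle, radius):
    """Build the digraph induced by the transmissions and test strong
    connectivity: all nodes reachable from node 0 in the graph and in
    its reverse."""
    n = len(points)
    adj = [[v for v in range(n) if v != u and
            in_sector(points[u], points[v], orientations[u], angle, radius)]
           for u in range(n)]
    radj = [[] for _ in range(n)]
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)

    def reaches_all(g):
        seen, stack = {0}, [0]
        while stack:
            for v in g[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    return reaches_all(adj) and reaches_all(radj)

# Three nodes on a triangle, each antenna aimed at the next node:
# the induced digraph is a directed 3-cycle, hence strongly connected.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]
ors = [math.atan2(pts[(i + 1) % 3][1] - pts[i][1],
                  pts[(i + 1) % 3][0] - pts[i][0]) for i in range(3)]
ok = strongly_connected(pts, ors, angle=0.5, radius=1.2)
```

Shrinking the radius below the inter-node distances removes all edges, which is why minimizing the radius that still achieves connectivity is the interesting optimization.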

Abstract: We design and implement various algorithms for
solving the static RWA problem with the objective of minimizing
the maximum number of requested wavelengths based on LP
relaxation formulations. We present a link formulation, a path
formulation, and a heuristic that breaks the problem into its two
constituent subproblems and solves them individually and
sequentially. The flow cost functions that are used in these
formulations result in providing integer optimal solutions despite
the absence of integrality constraints for a large subset of RWA
input instances, while also minimizing the total number of used
wavelengths. We present a random perturbation technique that is
shown to increase the number of instances for which we find
integer solutions, and we also present appropriate iterative fixing
and rounding methods to be used when the algorithms do not yield
integer solutions. We comment on the number of variables and
constraints these formulations require and perform extensive
simulations to compare their performance to that of a typical
min-max congestion formulation.
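
The decoupled heuristic mentioned above (route first, then assign wavelengths) can be sketched as follows. This is a first-fit illustration on a toy ring, not the LP-based formulations of the paper:

```python
import heapq

def shortest_path(adj, s, t):
    # Dijkstra with unit link costs; returns the list of links used.
    dist, prev, pq = {s: 0}, {}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist[u]:
            continue
        for v in adj[u]:
            if d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(pq, (d + 1, v))
    links, u = [], t
    while u != s:
        links.append((prev[u], u))
        u = prev[u]
    return links[::-1]

def sequential_rwa(adj, demands):
    """Decoupled RWA heuristic: route each demand on a shortest path,
    then assign the lowest wavelength free on all of its links."""
    used = {}  # link -> set of wavelengths already assigned on it
    out = []
    for s, t in demands:
        links = shortest_path(adj, s, t)
        w = 0
        while any(w in used.get(l, set()) for l in links):
            w += 1
        for l in links:
            used.setdefault(l, set()).add(w)
        out.append((links, w))
    return out

# Two identical demands on a 4-node ring share links, so the second
# demand is forced onto a second wavelength.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
result = sequential_rwa(ring, [(0, 2), (0, 2)])
```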

Abstract: We address an important communication issue arising in wireless cellular networks that utilize frequency division multiplexing (FDM) technology. In such networks, many users within the same geographical region (cell) can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of the available frequencies is limited; thus, efficient solutions to the call control problem are essential. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the benefit, i.e., the number of users that communicate without signal interference. We consider cellular networks of reuse distance k ≥ 2 and we study the online version of the problem using competitive analysis. In cellular networks of reuse distance 2, the previously best known algorithm that beats the lower bound of 3 on the competitiveness of deterministic algorithms works on networks with one frequency, achieves a competitive ratio against oblivious adversaries which is between 2.469 and 2.651, and uses a number of random bits at least proportional to the size of the network. We significantly improve this result by presenting a series of simple randomized algorithms that have competitive ratios significantly smaller than 3, work on networks with arbitrarily many frequencies, and use only a constant number of random bits or a comparably weak random source. The best competitiveness upper bound we obtain is 16/7, using only four random bits. In cellular networks of reuse distance k > 2, we present simple randomized online call control algorithms with competitive ratios which significantly beat the lower bounds on the competitiveness of deterministic ones and use only O(log k) random bits. Also, we show new lower bounds on the competitiveness of online call control algorithms in cellular networks of any reuse distance. In particular, we show that no online algorithm can achieve a competitive ratio better than 2, 25/12, and 2.5 in cellular networks with reuse distance k ∈ {2, 3, 4}, k = 5, and k ≥ 6, respectively.
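
As a baseline for comparison, a deterministic first-fit algorithm of the kind the lower bound of 3 applies to can be sketched on a simplified linear arrangement of cells (the paper's setting is a hexagonal cellular graph; the instance below is invented):

```python
def greedy_call_control(calls, num_frequencies, reuse_distance):
    """First-fit online call control on a line of cells: accept a call
    in cell c on the lowest frequency not used by an accepted call in
    any cell at distance < reuse_distance; otherwise reject."""
    accepted = []   # (cell, frequency) pairs of accepted calls
    decisions = []  # True = accepted, False = rejected, in arrival order
    for cell in calls:
        busy = {f for (c, f) in accepted
                if abs(c - cell) < reuse_distance}
        free = [f for f in range(num_frequencies) if f not in busy]
        if free:
            accepted.append((cell, free[0]))
        decisions.append(bool(free))
    return decisions, accepted

# One frequency, reuse distance 2: calls in cells 0 and 3 are accepted;
# the calls arriving in cells 1 and 0 conflict and are rejected.
decisions, accepted = greedy_call_control([0, 1, 0, 3], 1, 2)
```

The randomized algorithms of the abstract improve on such deterministic greedy strategies precisely because an adversary can always exploit their fixed accept/reject rule.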

Abstract: Clustering is an important research topic for wireless sensor networks (WSNs). A large variety of approaches has been presented focusing on different performance metrics. Even though all of them have many practical applications, an extremely limited number of software implementations is available to the research community. Furthermore, these very few techniques are implemented for specific WSN systems or are integrated in complex applications. Thus it is very difficult to comparatively study their performance and almost impossible to reuse them in future applications under a different scope. In this work we study a large body of well established algorithms. We identify their main building blocks and propose a component-based architecture for developing clustering algorithms that (a) promotes exchangeability of algorithms, thus enabling the fast prototyping of new approaches, (b) allows cross-layer implementations to realize complex applications, (c) offers a common platform to comparatively study the performance of different approaches, and (d) is hardware and OS independent. We implement 5 well known algorithms and discuss how to implement 11 more. We conduct an extended simulation study to demonstrate the faithfulness of our implementations when compared to the original implementations. Our simulations are at very large scale, thus also demonstrating the scalability of the original algorithms beyond their original presentations. We also conduct experiments to assess their practicality in real WSNs. We demonstrate how the implemented clustering algorithms can be combined with routing and group key establishment algorithms to construct WSN applications. Our study clearly demonstrates the applicability of our approach and the benefits it offers to both research & development communities.

Abstract: This chapter is an introduction to the basic concepts and advances of a new field, that of Computational (or Algorithmic) Game Theory. We study the computational complexity of Nash equilibria and review the related algorithms proposed in the literature. Then, given the apparent difficulty of computing exact Nash equilibria, we study the efficient computation of approximate notions of Nash equilibria. Next we deal with several computational issues related to the class of congestion games, which model the selfish behavior of individuals when competing on the usage of a common set of resources. Finally, we study the price of anarchy (in the context of congestion games), which is defined as a measure of the performance degradation due to the lack of coordination among the involved players.

Abstract: This paper addresses the problem of counting the size of a network where (i) processes have the same identifiers (anonymous nodes) and (ii) the network topology constantly changes (dynamic network). Changes are driven by a powerful adversary that can look at internal process states and add and remove edges in order to hinder the convergence of the algorithm to the correct count. The paper proposes two leader-based counting algorithms. Such algorithms are based on a technique that mimics an energy transfer between network nodes. The first algorithm assumes that the adversary cannot generate either disconnected network graphs or network graphs where nodes have degree greater than D. In this algorithm, the leader can count the size of the network and detect the counting termination in finite time (i.e., a conscious counting algorithm). The second algorithm assumes that the adversary only keeps the network graph connected at any time, and we prove that the leader can still converge to a correct count in a finite number of rounds, but it is not conscious of when this convergence happens.
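
The energy-transfer idea can be illustrated with a toy simulation on a static graph (the paper's setting has an adversarially changing topology, and this sketch, unlike the paper's first algorithm, is not conscious of termination). Every non-leader starts with one unit of energy and spreads it over its neighbors each round; whatever reaches the leader is absorbed, so the leader's total converges to n - 1:

```python
def energy_transfer_count(adj, leader, rounds=500):
    """Toy energy-transfer counting on a static graph: non-leaders
    start with one unit of energy and split it equally among their
    neighbors every round; the leader absorbs all energy it receives,
    so its collected total converges to n - 1 (plus itself)."""
    energy = {u: 0.0 if u == leader else 1.0 for u in adj}
    collected = 0.0
    for _ in range(rounds):
        nxt = {u: 0.0 for u in adj}
        for u, e in energy.items():
            if u == leader or not adj[u]:
                continue
            share = e / len(adj[u])
            for v in adj[u]:
                if v == leader:
                    collected += share
                else:
                    nxt[v] += share
        energy = nxt
    return 1 + collected  # the leader counts itself

# 5-node cycle: the leader's estimate converges to the true size, 5.
cycle = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
estimate = energy_transfer_count(cycle, leader=0)
```

Since the leader is an absorbing state of a substochastic diffusion, the residual energy shrinks geometrically with the number of rounds.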

Abstract: DAP (Distributed Algorithms Platform) is a generic and homogeneous simulation environment aiming at the implementation, simulation, and testing of distributed algorithms for wired and wireless networks. In this work, we present its architecture, the most important design decisions, and discuss its distinct features and functionalities. DAP allows the algorithm designer to implement a distributed protocol by creating his own customized environment, and programming in a standard programming language in a style very similar to that of a real-world application. DAP provides a graphical user interface that allows the designer to monitor and control the execution of simulations, visualize algorithms, as well as gather statistics and other information for their experimental analysis and testing.

Abstract: In this work we investigate the problem of communication among mobile hosts, one of the most fundamental problems in ad-hoc mobile networks that is at the core of many algorithms. Our work investigates the extreme case of total absence of any fixed network backbone or centralized administration, instantly forming networks based only on mobile hosts with wireless communication capabilities, where topological connectivity is subject to frequent, unpredictable change.
For such dynamically changing networks we propose a set of protocols which exploit the coordinated (by the protocol) motion of a small part of the network in order to manage network operations. We show that such protocols can be designed to work correctly and efficiently for communication by avoiding message flooding. Our protocols manage to establish communication between any pair of mobile hosts in small, a priori guaranteed expected time bounds. Our results exploit and further develop some fundamental properties of random walks in finite graphs.
Apart from studying the general case, we identify two practical and interesting cases of ad-hoc mobile networks: a) hierarchical ad-hoc networks, b) highly changing ad-hoc networks, for which we propose protocols that efficiently deal with the problem of basic communication.
We have conducted a set of extensive experiments, comprising thousands of mobile hosts, in order to validate the theoretical results and show that our protocols achieve very efficient communication under different scenarios.

Abstract: Wireless sensor networks are comprised of a vast number of devices, situated in an area of interest that self organize in a structureless network, in order to monitor/record/measure an environmental variable or phenomenon and subsequently to disseminate the data to the control center.
Here we present research focused on the development, simulation and evaluation of energy efficient algorithms; our basic goal is to minimize energy consumption. Despite technology advances, the problem of energy use optimization remains valid, since current and emerging hardware solutions fail to solve it.
We aim to reduce communication cost, by introducing novel techniques that facilitate the development of new algorithms. We investigated techniques of distributed adaptation of the operations of a protocol by using information available locally on every node, thus through local choices we improve overall performance. We propose techniques for collecting and exploiting limited local knowledge of the network conditions. In an energy efficient manner, we collect additional information which is used to achieve improvements such as forming energy efficient, low latency and fault tolerant paths to route data. We investigate techniques for managing mobility in networks where movement is a characteristic of the control center as well as the sensors. We examine methods for traversing and covering the network field based on probabilistic movement that uses local criteria to favor certain areas.
The algorithms we develop based on these techniques operate a) at low level managing devices, b) on the routing layer and c) network wide, achieving macroscopic behavior through local interactions. The algorithms are applied in network cases that differ in density, node distribution, available energy and also in fundamentally different models, such as under faults, with incremental node deployment and mobile nodes. In all these settings our techniques achieve significant gains, thus distinguishing their value as tools of algorithmic design.

Abstract: We describe our experiences from implementing and integrating a new job scheduling algorithm in the gLite Grid middleware and present experimental results that compare it to the existing gLite scheduling algorithms. It is the first time that gLite scheduling algorithms are put under test and compared with a new algorithm under the same conditions. We describe the problems that were encountered and solved, going from theory and simulations to practice and the actual implementation of our scheduling algorithm. In this work we also describe the steps one needs to follow in order to develop and test a new scheduling algorithm in gLite. We present the methodology followed and the testbed that was set up for the comparisons. Our research sheds light on some of the problems of the existing gLite scheduling algorithms and makes clear the need for the development of new ones.

Abstract: Direct routing is the special case of bufferless routing where N packets, once injected into the network, must be routed along specific paths to their destinations without conflicts. We give a general treatment of three facets of direct routing: (i) Algorithms. We present a polynomial time greedy algorithm for arbitrary direct routing problems which is worst-case optimal, i.e., there exist instances for which no direct routing algorithm is better than the greedy. We apply variants of this algorithm to commonly used network topologies. In particular, we obtain near-optimal routing time for the tree and the d-dimensional mesh, given arbitrary sources and destinations; for the butterfly and the hypercube, the same result holds for random destinations. (ii) Complexity. By a reduction from Vertex Coloring, we show that Direct Routing is inapproximable, unless P=NP. (iii) Lower Bounds for Buffering. We show the existence of routing problems which cannot be solved efficiently with direct routing. To solve these problems, any routing algorithm needs buffers. We give nontrivial lower bounds on such buffering requirements for general routing algorithms.
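
The greedy principle, giving each packet the earliest conflict-free injection time, can be sketched directly (paths and link names below are invented; the paper's algorithm and analysis are far more refined):

```python
def greedy_direct_routing(paths):
    """Greedy direct routing: packets advance one link per time step
    with no buffering, so a packet injected at time t occupies the
    j-th link of its path at time t + j. Each packet receives the
    earliest injection time that conflicts with no earlier packet."""
    occupied = set()  # (link, time) pairs already claimed
    times = []
    for path in paths:
        t = 0
        while any((link, t + i) in occupied for i, link in enumerate(path)):
            t += 1
        for i, link in enumerate(path):
            occupied.add((link, t + i))
        times.append(t)
    return times

# Two packets share the path a->b->c; a third uses only link b->c and
# slips into the unused time slot on that link.
times = greedy_direct_routing([
    [("a", "b"), ("b", "c")],
    [("a", "b"), ("b", "c")],
    [("b", "c")],
])
```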

Abstract: We present the first approximate distance oracle for sparse directed networks with time-dependent arc-travel-times determined by continuous, piecewise linear, positive functions possessing the FIFO property. Our approach precomputes (1+ε)-approximate distance summaries from selected landmark vertices to all other vertices in the network, and provides two sublinear-time query algorithms that deliver constant-factor and (1+ε)-approximate shortest travel times, respectively, for arbitrary origin-destination pairs in the network. Our oracle is based only on the sparsity of the network, along with two quite natural assumptions about travel-time functions which allow the smooth transition towards asymmetric and time-dependent distance metrics.

Abstract: When one engineers distributed algorithms, some special characteristics arise that are different from conventional (sequential or parallel) computing paradigms. These characteristics include: the need for either a scalable real network environment or a platform supporting a simulated distributed environment; the need to incorporate asynchrony, where arbitrary asynchrony is hard, if not impossible, to implement; and the generation of "difficult" input instances, which is a particular challenge. In this work, we identify some of the methodological issues required to address the above characteristics in distributed algorithm engineering and illustrate certain approaches to tackle them via case studies. Our discussion begins by addressing the need for a simulation environment and how asynchrony is incorporated when experimenting with distributed algorithms. We then proceed by suggesting two methods for generating difficult input instances for distributed experiments, namely a game-theoretic one and another based on simulations of adversarial arguments or lower bound proofs. We give examples of the experimental analysis of a pursuit-evasion protocol and of a shared memory problem in order to demonstrate these ideas. We then address a particularly interesting case of conducting experiments with algorithms for mobile computing and tackle the important issue of motion of processes in this context. We discuss the two-tier principle as well as a concurrent random walks approach on an explicit representation of motions in ad-hoc mobile networks, which allow at least for average-case analysis and measurements and may give worst-case inputs in some cases. Finally, we discuss a useful interplay between theory and practice that arises in modeling attack propagation in networks.

Abstract: Wireless sensor networks are comprised of a vast number of ultra-small fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, which co-operate to accomplish a large sensing task. Such networks can be very useful in practice in applications that require fine-grain monitoring of physical environment subjected to critical conditions (such as inaccessible terrains or disaster places). Very large numbers of sensor devices can be deployed in areas of interest and use self-organization and collaborative methods to form deeply networked environments. Features including the huge number of sensor devices involved, the severe power, computational and memory limitations, their dense deployment and frequent failures, pose new design and implementation aspects. The efficient and robust realization of such large, highly-dynamic, complex, non-conventional environments is a challenging algorithmic and technological task. In this work we consider certain important aspects of the design, deployment and operation of distributed algorithms for data propagation in wireless sensor networks and discuss some characteristic protocols, along with an evaluation of their performance.

Abstract: An ad hoc mobile network is a collection of mobile hosts, with wireless communication capabilities, forming a temporary network without the aid of any established fixed infrastructure. In such networks, topological connectivity is subject to frequent, unpredictable change. Our work focuses on networks with high rate of such changes to connectivity. For such dynamically changing networks we propose protocols which exploit the co-ordinated (by the protocol) motion of a small part of the network. We show that such protocols can be designed to work correctly and efficiently even in the case of arbitrary (but not malicious) movements of the hosts not affected by the protocol. We also propose a methodology for the analysis of the expected behavior of protocols for such networks, based on the assumption that mobile hosts (those whose motion is not guided by the protocol) conduct concurrent random walks in their motion space. In particular, our work examines the fundamental problem of communication and proposes distributed algorithms for it. We provide rigorous proofs of their correctness, and also give performance analyses by combinatorial tools. Finally, we have evaluated these protocols by experimental means.

Abstract: Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially for distributed settings, when the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments, with three different real-life datasets and using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.

Abstract: We address the issue of measuring distribution fairness in Internet-scale networks. This problem has several interesting instances encountered in different applications, ranging from assessing the distribution of load between network nodes for load balancing purposes, to measuring node utilization for optimal resource exploitation, and to guiding autonomous decisions of nodes in networks built with market-based economic principles. Although some metrics have been proposed, particularly for assessing load balancing algorithms, they fall short. We first study the appropriateness of various known and previously proposed statistical metrics for measuring distribution fairness. We put forward a number of required characteristics for appropriate metrics. We propose and comparatively study the appropriateness of the Gini coefficient (G) for this task. Our study reveals as most appropriate the metrics of G, the fairness index (FI), and the coefficient of variation (CV), in this order. Second, we develop six distributed sampling algorithms to estimate metrics online efficiently, accurately, and scalably. One of these algorithms (2-PRWS) is based on two effective optimizations of a basic algorithm, and the other two (the sequential sampling algorithm, LBS-HL, and the clustered sampling one, EBSS) are novel, developed especially to estimate G. Third, we show how these metrics, and especially G, can be readily utilized online by higher-level algorithms, which can now know when to best intervene to correct unfair distributions (in particular, load imbalances). We conclude with a comprehensive experimentation which comparatively evaluates both the various proposed estimation algorithms and the three most appropriate metrics (G, CV, and FI). Specifically, the evaluation quantifies the efficiency (in terms of number of the messages and a latency indicator), precision, and accuracy achieved by the proposed algorithms when estimating the competing fairness metrics.
The central conclusion is that the proposed metric, G, can be estimated with a small number of messages and latency, regardless of the skew of the underlying distribution.
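
The three metrics singled out by the study (G, FI, CV) are easy to state in code. The functions below are the standard centralized definitions, not the paper's distributed sampling estimators:

```python
def gini(xs):
    """Gini coefficient G: 0 for a perfectly even distribution,
    approaching 1 as all load concentrates on one node."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

def coefficient_of_variation(xs):
    """CV: population standard deviation divided by the mean."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / mean

def fairness_index(xs):
    """Jain's fairness index FI: 1 for a perfectly even distribution."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

even, skewed = [4, 4, 4, 4], [0, 0, 0, 16]
```

For the even load vector G = 0 and FI = 1, while concentrating the same total load on one node gives G = 0.75 and FI = 0.25, matching the intuition that G grows and FI shrinks with unfairness.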

Abstract: Using a set of geometric containers to speed up shortest path queries in a weighted graph has been proven a useful tool for dealing with large sparse graphs. Given a layout of a graph G=(V,E), we store, for each edge (u,v) ∈ E, the bounding box of all nodes t ∈ V for which a shortest u-t path starts with (u,v). Shortest path queries can then be answered by Dijkstra's algorithm restricted to edges where the corresponding bounding box contains the target.
In this paper, we present new algorithms as well as an empirical study for the dynamic case of this problem, where edge weights are subject to change and the bounding boxes have to be updated. We evaluate the quality and the time for different update strategies that guarantee correct shortest paths in an interesting application to railway information systems, using real-world data from six European countries.

Abstract: This paper deals with early obstacle recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast convergent routing algorithm with proactive dead-end detection, together with a formal definition and description of dead-ends. Second, we present a generalization of this algorithm which improves performance in all-to-many and all-to-all traffic patterns. Third, we prove that this algorithm produces paths that are optimal up to a constant factor of 2π+1. Fourth, we consider the reactive version of the algorithm, which is an extension of a previously known early obstacle detection algorithm. Finally, we give experimental results to illustrate the efficiency of our algorithms in different scenarios.

Abstract: Two important performance parameters of distributed, rate-based flow control algorithms are their locality and convergence complexity. The former is characterized by the amount of global knowledge that is available to their scheduling mechanisms, while the latter is defined as the number of update operations performed on rates of individual sessions until max-min fairness is reached. Optimistic algorithms allow any session to intermediately receive a rate larger than its max-min fair rate; bottleneck algorithms finalize the rate of a session only if it is restricted by a certain, highly congested link of the network. In this work, we present a comprehensive collection of lower and upper bounds on convergence complexity, under varying degrees of locality, for optimistic, bottleneck, rate-based flow control algorithms. Say that an algorithm is oblivious if its scheduling mechanism uses no information of either the session rates or the network topology. We present a novel, combinatorial construction of a capacitated network, which we use to establish a fundamental lower bound of dn/4 + n/2 on the convergence complexity of any oblivious algorithm, where n is the number of sessions laid out on a network, and d, the session dependency, is a measure of topological dependencies among sessions. Moreover, we devise a novel simulation proof to establish that, perhaps surprisingly, the lower bound of dn/4 + n/2 on convergence complexity still holds for any partially oblivious algorithm, in which the scheduling mechanism is allowed to use information about session rates, but is otherwise unaware of network topology. On the positive side, we prove that the lower bounds for oblivious and partially oblivious algorithms are both tight. We do so by presenting optimal oblivious algorithms, which converge after dn/2 + n/2 update operations are performed in the worst case.
To complete the picture, we show that linear convergence complexity can indeed be achieved if information about both session rates and network topology is available to schedulers. We present a nonoblivious algorithm which converges within an optimal number of n update operations. Our results imply a surprising convergence complexity collapse of oblivious and partially oblivious algorithms, and a convergence complexity separation between (partially) oblivious and nonoblivious algorithms for optimistic, bottleneck rate-based flow control.
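The max-min fairness objective these algorithms converge to can be computed directly by water-filling: repeatedly locate the most congested link, fix its sessions at their fair share, and remove them. A minimal bottleneck-style sketch with an invented network:

```python
# Water-filling computation of max-min fair session rates.
# capacity: link -> capacity; sessions: session name -> set of links used.

def max_min_rates(capacity, sessions):
    rate = {}
    remaining = dict(capacity)
    active = dict(sessions)
    while active:
        # fair share each link could still offer its unfixed sessions
        share = {}
        for link, cap in remaining.items():
            users = [s for s, links in active.items() if link in links]
            if users:
                share[link] = (cap / len(users), users)
        bottleneck = min(share, key=lambda l: share[l][0])
        fair, users = share[bottleneck]
        for s in users:                 # finalize rates at the bottleneck
            rate[s] = fair
            for link in active[s]:
                remaining[link] -= fair
            del active[s]
    return rate

# Three sessions: s1 uses links a and b, s2 uses b, s3 uses a.
rates = max_min_rates({"a": 10, "b": 4},
                      {"s1": {"a", "b"}, "s2": {"b"}, "s3": {"a"}})
print(rates)  # {'s1': 2.0, 's2': 2.0, 's3': 8.0}
```

Link b is the first bottleneck (4 capacity shared by two sessions), after which s3 absorbs the slack left on link a.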

Abstract: In this work we study the tractability of well supported approximate Nash Equilibria (SuppNE in short) in bimatrix games. In view of the apparent intractability of constructing Nash Equilibria (NE in short) in polynomial time, even for bimatrix games, understanding the limitations of the approximability of the problem is of great importance.
We initially prove that SuppNE are immune to the addition of arbitrary real vectors to the rows (columns) of the row (column) player's payoff matrix. Consequently we propose a polynomial time algorithm (based on linear programming) that constructs a 0.5-SuppNE for arbitrary win-lose games.
We then parameterize our technique for win-lose games, in order to apply it to arbitrary (normalized) bimatrix games. Indeed, this new technique leads to a weaker φ-SuppNE for win-lose games, where φ = (√5−1)/2 is the golden ratio. Nevertheless, this parameterized technique extends nicely to a technique for arbitrary [0, 1]-bimatrix games, which assures a 0.658-SuppNE in polynomial time.
To our knowledge, these are the first polynomial time algorithms providing ε-SuppNE of normalized or win-lose bimatrix games, for some nontrivial constant ε ∈ [0, 1), bounded away from 1.
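The object of study can be made concrete with a small checker: a profile (x, y) is an ε-SuppNE when every pure strategy in either support is an ε-best response. The game below (Matching Pennies, a win-lose game) and the tested profiles are illustrative choices, not the paper's construction:

```python
# Checker for eps-well-supported approximate Nash equilibria (eps-SuppNE).
# R, C: payoff matrices of the row and column player; x, y: mixed strategies.

def is_supp_ne(R, C, x, y, eps):
    m, n = len(R), len(R[0])
    row_payoff = [sum(R[i][j] * y[j] for j in range(n)) for i in range(m)]
    col_payoff = [sum(C[i][j] * x[i] for i in range(m)) for j in range(n)]
    best_r, best_c = max(row_payoff), max(col_payoff)
    # every supported pure strategy must be within eps of the best response
    ok_row = all(row_payoff[i] >= best_r - eps for i in range(m) if x[i] > 0)
    ok_col = all(col_payoff[j] >= best_c - eps for j in range(n) if y[j] > 0)
    return ok_row and ok_col

R = [[1, 0], [0, 1]]            # Matching Pennies, a win-lose game
C = [[0, 1], [1, 0]]
uniform = [0.5, 0.5]
print(is_supp_ne(R, C, uniform, uniform, 0.0))    # True: the exact NE
print(is_supp_ne(R, C, [1.0, 0.0], uniform, 0.0))  # False: column deviates
```

With eps = 1 even the second profile passes, illustrating why only constants bounded away from 1 are interesting.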

Abstract: Andrews et al. [Automatic method for hiding latency in high bandwidth networks, in: Proceedings of the ACM Symposium on Theory of Computing, 1996, pp. 257–265; Improved methods for hiding latency in high bandwidth networks, in: Proceedings of the Eighth Annual ACM Symposium on Parallel Algorithms and Architectures, 1996, pp. 52–61] introduced a number of techniques for automatically hiding latency when performing simulations of networks with unit delay links on networks with arbitrary unequal delay links. In their work, they assume that processors of the host network are identical in computational power to those of the guest network being simulated. They further assume that the links of the host are able to pipeline messages, i.e., they are able to deliver P packets in time O(P+d) where d is the delay on the link.
In this paper we examine the effect of eliminating one or both of these assumptions. In particular, we provide an efficient simulation of a linear array of homogeneous processors connected by unit-delay links on a linear array of heterogeneous processors connected by links with arbitrary delay. We show that the slowdown achieved by our simulation is optimal. We then consider the case of simulating cliques by cliques; i.e., a clique of heterogeneous processors with arbitrary delay links is used to simulate a clique of homogeneous processors with unit delay links. We reduce the slowdown from the obvious bound of the maximum delay link to the average of the link delays. In the case of the linear array we consider both links with and without pipelining. For the clique simulation the links are not assumed to support pipelining.
The main motivation of our results (as was the case with Andrews et al.) is to mitigate the degradation of performance when executing parallel programs designed for different architectures on a network of workstations (NOW). In such a setting it is unlikely that the links provided by the NOW will support pipelining and it is quite probable the processors will be heterogeneous. Combining our result on clique simulation with well-known techniques for simulating shared memory PRAMs on distributed memory machines provides an effective automatic compilation of a PRAM algorithm on a NOW.

Abstract: One of the most important features in image analysis and understanding is shape. Mathematical morphology is the image processing branch that deals with shape analysis. The definition of all morphological transformations is based on two primitive operations, i.e. dilation and erosion. Since many applications require the solution of morphological problems in real time, the search for time-efficient algorithms for these two operations is crucial.
In this paper, efficient algorithms for the binary as well as the grey level dilation and erosion are presented and evaluated for an advanced associative processor. It is shown through simulation results that the above architecture is near optimal in the binary case and is also as efficient as the array processor with a 2D-mesh interconnection in the grey level case. Finally, it is proven that the implementation of this image processing machine is economically feasible.
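For reference, the two primitive operations the paper accelerates have simple sequential definitions. The sketch below implements binary dilation and erosion with a 3×3 cross structuring element (an illustrative choice; it does not model the associative processor):

```python
# Binary dilation and erosion on a 0/1 image, with out-of-bounds pixels
# treated as background (0).

CROSS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # 3x3 cross element

def dilate(img, se=CROSS):
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y + dy < h and 0 <= x + dx < w
                     and img[y + dy][x + dx] for dy, dx in se))
             for x in range(w)] for y in range(h)]

def erode(img, se=CROSS):
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w
                     and img[y + dy][x + dx] for dy, dx in se))
             for x in range(w)] for y in range(h)]

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(erode(dilate(img)))  # morphological closing: recovers the square
```

Composing them as erode(dilate(img)) gives morphological closing, which on this convex square simply reproduces the input.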

Abstract: The problem of finding an implicit representation for a graph such that vertex adjacency can be tested quickly is fundamental to all graph algorithms. In particular, it is possible to represent sparse graphs on n vertices using O(n) space such that vertex adjacency is tested in O(1) time. We show here how to construct such a representation efficiently by providing simple and optimal algorithms, both in a sequential and a parallel setting. Our sequential algorithm runs in O(n) time. The parallel algorithm runs in O(log n) time using O(n/log n) CRCW PRAM processors, or in O(log n log* n) time using O(n/(log n log* n)) EREW PRAM processors. Previous results for this problem are based on matroid partitioning and thus have a high complexity.
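One standard way to realize such a representation, sketched here with our own simplifying choices (a sequential greedy degeneracy peeling, not the paper's algorithms): orient each edge so that every vertex keeps only a constant-size out-list, then answer an adjacency query by scanning the two short lists.

```python
# Bounded-out-degree orientation of a sparse graph: repeatedly peel a
# vertex of minimum remaining degree and point its remaining edges outward.
# For a c-degenerate graph every out-list has at most c entries, so the
# membership tests below take O(1) time.

def orient(n, edges):
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = {u: [] for u in range(n)}
    removed = set()
    for _ in range(n):
        u = min((v for v in range(n) if v not in removed),
                key=lambda v: len(adj[v]))
        out[u] = list(adj[u])            # remaining edges point outward
        for v in adj[u]:
            adj[v].discard(u)
        adj[u] = set()
        removed.add(u)
    return out

def adjacent(out, u, v):
    return v in out[u] or u in out[v]    # two constant-size scans

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a small planar graph
out = orient(4, edges)
assert all(adjacent(out, u, v) and adjacent(out, v, u) for u, v in edges)
assert not adjacent(out, 1, 3)
print(max(len(l) for l in out.values()))  # 2: every out-list stays small
```

Total space is one entry per edge, i.e. O(n) for sparse graphs.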

Abstract: We examine a task scheduling and data migration problem for grid networks, which we refer to as the Data Consolidation (DC) problem. DC arises when a task concurrently requests multiple pieces of data, possibly scattered throughout the grid network, that have to be present at a selected site before the task's execution starts. In such a case, the scheduler and the data manager must select (i) the data replicas to be used, (ii) the site where these data will be gathered for the task to be executed, and (iii) the routing paths to be followed; this is assuming that the selected datasets are transferred concurrently to the execution site. The algorithms or policies for selecting the data replicas, the data consolidating site and the corresponding paths comprise a Data Consolidation scheme. We propose and experimentally evaluate several DC schemes of polynomial number of operations that attempt to estimate the cost of the concurrent data transfers, to avoid congestion that may appear due to these transfers and to provide fault tolerance. Our simulation results strengthen our belief that DC is an important problem that needs to be addressed in the design of data grids, and can lead, if performed efficiently, to significant benefits in terms of task delay, network load and other performance parameters.

Abstract: The Time Dependent Team Orienteering Problem with Time Windows (TDTOPTW) can be used to model several real life problems, among them the route planning problem for tourists interested in visiting multiple points of interest (POIs) using public transport. The main objective of this problem is to select POIs that match tourist preferences, while taking into account a multitude of parameters and constraints and respecting the time available for sightseeing on a daily basis. TDTOPTW is NP-hard, while almost the whole body of the related literature addresses the non-time-dependent version of the problem. The only TDTOPTW heuristic proposed so far is based on the assumption of periodic service schedules. Herein, we propose two efficient cluster-based heuristics for the TDTOPTW which yield high quality solutions, take into account time dependency in calculating travel times between POIs, and make no assumption of periodic service schedules. The validation scenario for our prototyped algorithms included the metropolitan transit network and real POI sets compiled from Athens (Greece).

Abstract: Intuitively, Braess's paradox states that destroying a part of a network may improve the common latency of selfish flows at Nash equilibrium. Such a paradox is a pervasive phenomenon in real-world networks. Any administrator, who wants to improve equilibrium delays in selfish networks, is facing some basic questions: (i) Is the network paradox-ridden? (ii) How can we delete some edges to optimize equilibrium flow delays? (iii) How can we modify edge latencies to optimize equilibrium flow delays?
Unfortunately, such questions lead to NP-hard problems in general. In this work, we impose some natural restrictions on our networks, e.g. we assume strictly increasing linear latencies. Our target is to formulate efficient algorithms for the three questions above. We manage to provide:
– A polynomial-time algorithm that decides if a network is paradox-ridden, when latencies are linear and strictly increasing.
– A reduction of the problem of deciding if a network with arbitrary linear latencies is paradox-ridden to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the edges with constant latency.
– An algorithm for finding a subnetwork that is almost optimal wrt equilibrium latency. Our algorithm is subexponential when the number of paths is polynomial and each path is of polylogarithmic length.
– A polynomial-time algorithm for the problem of finding the best subnetwork, which outperforms any known approximation algorithm for the case of strictly increasing linear latencies.
– A polynomial-time method that turns the optimal flow into a Nash flow by deleting the edges not used by the optimal flow, and performing minimal modifications to the latencies of the remaining ones.
Our results provide a deeper understanding of the computational complexity of recognizing the most severe manifestations of Braess's paradox, and our techniques show novel ways of using the probabilistic method and of exploiting convex separable quadratic programs.

Abstract: Intuitively, Braess’s paradox states that destroying a part of a network may improve the common latency of selfish flows at Nash equilibrium. Such a paradox is a pervasive phenomenon in real-world networks. Any administrator who wants to improve equilibrium delays in selfish networks is facing some basic questions:
– Is the network paradox-ridden?
– How can we delete some edges to optimize equilibrium flow delays?
– How can we modify edge latencies to optimize equilibrium flow delays?
Unfortunately, such questions lead to NP-hard problems in general. In this work, we impose some natural restrictions on our networks, e.g. we assume strictly increasing linear latencies. Our target is to formulate efficient algorithms for the three questions above. We manage to provide:
– A polynomial-time algorithm that decides if a network is paradox-ridden, when latencies are linear and strictly increasing.
– A reduction of the problem of deciding if a network with (arbitrary) linear latencies is paradox-ridden to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the edges with constant latency.
– An algorithm for finding a subnetwork that is almost optimal wrt equilibrium latency. Our algorithm is subexponential when the number of paths is polynomial and each path is of polylogarithmic length.
– A polynomial-time algorithm for the problem of finding the best subnetwork which outperforms any known approximation for the case of strictly increasing linear latencies.
– A polynomial-time method that turns the optimal flow into a Nash flow by deleting the edges not used by the optimal flow, and performing minimal modifications on the latencies of the remaining ones.
Our results provide a deeper understanding of the computational complexity of recognizing the most severe manifestations of Braess’s paradox, and our techniques show novel ways of using the probabilistic method and of exploiting convex separable quadratic programs.
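The phenomenon itself is easy to verify numerically on the canonical four-edge instance with strictly increasing linear latencies (this is the textbook example, not taken from the paper): one unit of flow from s to t over paths s-a-t, s-b-t, and, via a zero-latency shortcut a-b, the path s-a-b-t.

```python
# Edge latencies: l(s,a) = x, l(a,t) = 1, l(s,b) = 1, l(b,t) = x,
# l(a,b) = 0. A flow assignment f maps available paths to flow amounts.

def latencies(f):
    x_sa = f.get("sat", 0) + f.get("sabt", 0)   # flow on edge s-a
    x_bt = f.get("sbt", 0) + f.get("sabt", 0)   # flow on edge b-t
    return {"sat": x_sa + 1, "sbt": 1 + x_bt, "sabt": x_sa + x_bt}

def is_wardrop(f):
    """Every used path is a fastest path among the available ones (keys of f)."""
    lat = latencies(f)
    best = min(lat[p] for p in f)
    return all(lat[p] == best for p in f if f[p] > 0)

with_shortcut = {"sat": 0.0, "sbt": 0.0, "sabt": 1.0}   # all flow via a-b
assert is_wardrop(with_shortcut)
without = {"sat": 0.5, "sbt": 0.5}                      # a-b edge deleted
assert is_wardrop(without)
print(latencies(with_shortcut)["sabt"], latencies(without)["sat"])
# 2.0 1.5 -> deleting the shortcut lowers the common equilibrium latency
```

So this network is paradox-ridden in the abstract's sense: removing the a-b edge improves the equilibrium latency from 2 to 3/2.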

Abstract: In this paper we consider communication issues arising in mobile networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of available frequencies is limited; thus, efficient solutions to the frequency allocation and the call control problem are essential. In the frequency allocation problem, given users that wish to communicate, the objective is to minimize the required spectrum of frequencies so that communication can be established without signal interference. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users served. We consider cellular, planar, and arbitrary network topologies. In particular, we study the on-line version of both problems using competitive analysis. For frequency allocation in cellular networks, we improve the best known competitive ratio upper bound of 3 achieved by the folklore Fixed Allocation algorithm, by presenting an almost tight competitive analysis for the greedy algorithm; we prove that its competitive ratio is between 2.429 and 2.5. For the call control problem, we present the first randomized algorithm that beats the deterministic lower bound of 3 achieving a competitive ratio of 2.934 in cellular networks. Our analysis has interesting extensions to arbitrary networks. Also, using Yao's Minimax Principle, we prove two lower bounds of 1.857 and 2.086 on the competitive ratio of randomized call control algorithms for cellular and arbitrary planar networks, respectively.

Abstract: In this paper we consider communication issues arising in cellular (mobile) networks that utilize frequency division multiplexing (FDM) technology. In such networks, many users within the same geographical region can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of available frequencies is limited; thus, efficient solutions to the frequency-allocation and the call-control problems are essential. In the frequency-allocation problem, given users that wish to communicate, the objective is to minimize the required spectrum of frequencies so that communication can be established without signal interference. The objective of the call-control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users served. We consider cellular, planar, and arbitrary network topologies.
In particular, we study the on-line version of both problems using competitive analysis. For frequency allocation in cellular networks, we improve the best known competitive ratio upper bound of 3 achieved by the folklore Fixed Allocation algorithm, by presenting an almost tight competitive analysis for the greedy algorithm; we prove that its competitive ratio is between 2.429 and 2.5. For the call-control problem, we present the first randomized algorithm that beats the deterministic lower bound of 3, achieving a competitive ratio between 2.469 and 2.651 for cellular networks. Our analysis has interesting extensions to arbitrary networks. Also, using Yao's Minimax Principle, we prove two lower bounds of 1.857 and 2.086 on the competitive ratio of randomized call-control algorithms for cellular and arbitrary planar networks, respectively.

Abstract: We consider the problem of computing minimum congestion, fault-tolerant, redundant assignments of messages to faulty parallel delivery channels. In particular, we are given a set M of faulty channels, each having an integer capacity ci and failing independently with probability fi. We are also given a set of messages to be delivered over M, and a fault-tolerance constraint (1 − ε), and we seek a redundant assignment that minimizes congestion, i.e. the maximum channel load, subject to the constraint that, with probability no less than (1 − ε), all the messages have a copy on at least one active channel. We present a 4-approximation algorithm for identical capacity channels and arbitrary message sizes, and a 2⌈ln(|M|/ε)/ln(1/fmax)⌉-approximation algorithm for related capacity channels and unit size messages.
Both algorithms are based on computing a collection of disjoint channel subsets such that, with probability no less than (1 − ε), at least one channel is active in each subset. The objective is to maximize the sum of the minimum subset capacities. Since the exact version of this problem is NP-complete, we present a 2-approximation algorithm for identical capacities, and an (8 + o(1))-approximation algorithm for arbitrary capacities.
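The constraint driving both algorithms can be evaluated directly: a collection of disjoint channel subsets succeeds when every subset keeps at least one active channel, which, with independent failures, happens with probability ∏(1 − ∏ fi). A minimal sketch with invented failure probabilities:

```python
from math import prod

# Probability that every disjoint channel subset has at least one active
# channel, given independent per-channel failure probabilities.

def survival_prob(subsets, fail):
    return prod(1 - prod(fail[c] for c in group) for group in subsets)

fail = {"c1": 0.1, "c2": 0.1, "c3": 0.2, "c4": 0.2}
groups = [{"c1", "c2"}, {"c3", "c4"}]
p = survival_prob(groups, fail)
print(round(p, 4))      # 0.9504 = (1 - 0.01) * (1 - 0.04)
assert p >= 1 - 0.05    # meets the fault-tolerance constraint for eps = 0.05
```

Assigning one copy of every message to each subset then guarantees delivery with at least this probability.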

Abstract: Orthogonal Frequency Division Multiplexing (OFDM)
has recently been proposed as a modulation technique for optical networks, because of its good spectral efficiency, flexibility, and tolerance to impairments. We consider the planning problem of an OFDM optical network, where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers and choosing an appropriate modulation level, taking into account the transmission distance. We introduce the Routing, Modulation Level and Spectrum Allocation (RMLSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, prove that it is also NP-complete, and present various algorithms to solve it. We start by presenting an optimal ILP RMLSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RMLSA into its two constituent subproblems, namely, (i) routing and modulation level, and (ii) spectrum allocation (RML+SA), and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all the connections in the traffic matrix. In the sequential algorithm, we investigate two policies for defining the order in which connections are considered. We also use a simulated annealing meta-heuristic to obtain even better orderings. We examine the performance of the proposed algorithms through simulation experiments and evaluate the spectrum utilization benefits that can be obtained by utilizing OFDM elastic bandwidth allocation, when compared to a traditional WDM network.

Abstract: For a place that gathers millions of people, the Web seems pretty lonely at times. This is mainly due to the current predominant browsing scenario: that of an individual participating in an autonomous surfing session. We believe that people should be seen as an integral part of the browsing and searching activity, towards a concept known as social navigation. In this work, we extend the typical web browser's functionality so as to raise awareness of other people having similar web surfing goals at the current moment. We further present features and algorithms that facilitate online communication and collaboration towards common searching targets. The utility of our system is established by experimental studies. The extensions we present can be easily adopted in a typical web browser.

Abstract: In this work we study energy efficient routing strategies
for wireless ad-hoc networks. In such networks, energy is a scarce resource and its conservation and efficient use is a major issue. Our strategy follows
the multi-cost routing approach, according to which a
cost vector of various parameters is assigned to each
link. The parameters of interest are the number of hops
on a path, and the residual energy and the transmission
power of the nodes on the path. These parameters
are combined in various optimization functions,
corresponding to different routing algorithms, for selecting
the optimal path. We evaluate the routing algorithms
proposed in a number of scenarios, with respect
to energy consumption, throughput and other performance
parameters of interest. From the experiments
conducted we conclude that routing algorithms that take
into account energy related parameters, increase the
lifetime of the network, while achieving better performance
than other approaches, such as minimum hop
routing.
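The multi-cost idea can be illustrated on a toy topology: each candidate path carries a vector (hop count, total transmission power, minimum residual energy), and different optimization functions over this vector yield different routing algorithms. The node values and links below are invented:

```python
# node -> (transmission power, residual energy); links are directed here
# purely to keep the toy path enumeration simple.
nodes = {"s": (2, 10), "a": (1, 1), "b": (1, 8), "c": (1, 8), "t": (0, 10)}
links = {("s", "a"), ("a", "t"), ("s", "b"), ("b", "c"), ("c", "t")}

def cost_vector(path):
    hops = len(path) - 1
    power = sum(nodes[u][0] for u in path[:-1])       # transmitters only
    min_energy = min(nodes[u][1] for u in path[:-1])  # weakest node on path
    return hops, power, min_energy

def paths(src, dst, seen=()):
    if src == dst:
        yield seen + (dst,)
        return
    for u, v in links:
        if u == src and v not in seen:
            yield from paths(v, dst, seen + (src,))

all_paths = list(paths("s", "t"))
min_hop = min(all_paths, key=lambda p: cost_vector(p)[0])
max_min_energy = max(all_paths, key=lambda p: cost_vector(p)[2])
print(min_hop)          # ('s', 'a', 't'): fewest hops, but drains node a
print(max_min_energy)   # ('s', 'b', 'c', 't'): avoids the low-energy node
```

Minimum-hop routing keeps sending traffic through the nearly depleted node a, whereas an energy-aware function steers flows around it, which is exactly the lifetime effect the experiments measure.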

Abstract: In this work, we propose an energy-efficient multicasting algorithm
for wireless networks for the case where the transmission
powers of the nodes are fixed. Our algorithm is
based on the multicost approach and selects an optimal
energy-efficient set of nodes for multicasting, taking into account:
i) the node residual energies, ii) the transmission
powers used by the nodes, and iii) the set of nodes covered.
Our algorithm is optimal, in the sense that it can
optimize any desired function of the total power consumed
by the multicasting task and the minimum of the current
residual energies of the nodes, provided that the optimization
function is monotonic in each of these parameters. Our
optimal algorithm has non-polynomial complexity, thus, we
propose a relaxation producing a near-optimal solution in
polynomial time. The performance results obtained show
that the proposed algorithms outperform established solutions
for energy-aware multicasting, with respect to both
energy consumption and network lifetime. Moreover, it is
shown that the near-optimal multicost algorithm obtains
most of the performance benefits of the optimal multicost
algorithm at a smaller computational overhead.

Abstract: A crucial issue in wireless networks is to support efficiently communication patterns that are typical in traditional (wired) networks. These include broadcasting, multicasting, and gossiping (all-to-all communication). In this work we study such problems in static ad hoc networks. Since, in ad hoc networks, energy is a scarce resource, the important engineering question to be solved is to guarantee a desired communication pattern minimizing the total energy consumption. Motivated by this question, we study a series of wireless network design problems and present new approximation algorithms and inapproximability results.

Abstract: Much effort has been devoted in recent years to modeling public transport timetables in order to
find optimal routes. The proposed models can be classified into two types: those representing the
timetable as an array, and those representing it as a graph. The array-based models have been
shown to be very effective in terms of query time, while the graph-based models usually answer
queries by computing shortest paths, and hence they are suitable to be used in combination with
speed-up techniques developed for road networks.
In this paper, we focus on the dynamic behavior of graph-based models considering the case
where transportation systems are subject to delays with respect to the given timetable. We
make three contributions: (i) we give a simplified and optimized update routine for the well-known
time-expanded model along with an engineered query algorithm; (ii) we propose a new
graph-based model tailored for handling dynamic updates; (iii) we assess the effectiveness of
the proposed models and algorithms by an experimental study, which shows that both models
require negligible update time and a query time which is comparable to that required by some
array-based models.

Abstract: We consider classical linear-time planar separator algorithms, determining for a given planar graph a small subset of the nodes whose removal separates the graph into two components of similar size. These algorithms are based upon Planar Separator Theorems, which guarantee separators of size O(√n) and remaining components of size less than 2n/3. In this work, we present a comprehensive experimental study of the algorithms applied to a large variety of graphs, where the main goal is to find separators that do not only satisfy upper bounds but also possess other desirable qualities with respect to separator size and component balance. We propose the usage of fundamental cycles, whose size is at most twice the diameter of the graph, as planar separators: For graphs of small diameter the guaranteed bound is better than the O(√n) bounds, and it turns out that this simple strategy almost always outperforms the other algorithms, even for graphs with large diameter.

Abstract: Evolutionary Game Theory is the study of strategic interactions
among large populations of agents who base their decisions on simple,
myopic rules. A major goal of the theory is to determine broad classes
of decision procedures which both provide plausible descriptions of selfish
behaviour and include appealing forms of aggregate behaviour. For example,
properties such as the correlation between strategies' growth rates
and payoffs, the connection between stationary states and the well-known
game theoretic notion of Nash equilibria, as well as global guarantees of
convergence to equilibrium, are widely studied in the literature.
Our paper can be seen as a quick introduction to Evolutionary Game
Theory, together with a new research result and a discussion of many
algorithmic and complexity open problems in the area. In particular, we
discuss some algorithmic and complexity aspects of the theory, which
we prefer to view more as Game Theoretic Aspects of Evolution rather
than as Evolutionary Game Theory, since the term “evolution” actually
refers to strategic adaptation of individuals' behaviour through a
dynamic process and not the traditional evolution of populations. We
consider this dynamic process as a self-organization procedure which,
under certain conditions, leads to some kind of stability and assures robustness
against invasion. In particular, we concentrate on the notion of
the Evolutionary Stable Strategies (ESS). We demonstrate their qualitative
difference from Nash Equilibria by showing that symmetric 2-person
games with random payoffs have on average exponentially fewer ESS than
Nash Equilibria. We conclude this article with some interesting areas of
future research concerning the synergy of Evolutionary Game Theory
and Algorithms.
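The ESS condition referred to above can be checked mechanically: x is an ESS against a mutant y ≠ x if either u(x,x) > u(y,x), or u(x,x) = u(y,x) and u(x,y) > u(y,y). For a 2×2 symmetric game with an interior mixed equilibrium, testing the two pure mutants suffices; we use the standard Hawk-Dove game (value V = 2, cost C = 4) as an illustration, not an example from the paper:

```python
# Maynard Smith's ESS condition on a symmetric 2x2 game with payoff matrix A.

def u(x, y, A):
    """Expected payoff of mixed strategy x against mixed strategy y."""
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def is_ess_vs(x, y, A):
    """Does x resist invasion by the mutant strategy y?"""
    a, b = u(x, x, A), u(y, x, A)
    return a > b or (a == b and u(x, y, A) > u(y, y, A))

# Hawk-Dove: hawk vs hawk (V-C)/2 = -1, hawk vs dove V = 2,
# dove vs hawk 0, dove vs dove V/2 = 1. The mixed strategy playing
# Hawk with probability V/C = 1/2 is the unique ESS.
A = [[-1, 2], [0, 1]]
mixed, hawk, dove = [0.5, 0.5], [1, 0], [0, 1]
print(is_ess_vs(mixed, hawk, A), is_ess_vs(mixed, dove, A))  # True True
print(is_ess_vs(hawk, dove, A))  # False: pure Hawk is invaded by Dove
```

Pure Hawk is a counterexample to stability here even though the mixed profile is both a symmetric Nash equilibrium and an ESS, which is the qualitative gap between the two notions that the abstract quantifies.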

Abstract: Energy is a scarce resource in ad hoc wireless networks and it is of paramount importance to use it efficiently when establishing communication patterns. In this work we study algorithms for computing energy-efficient multicast trees in ad hoc wireless networks. Such algorithms either start with an empty solution which is gradually augmented to a multicast tree (augmentation algorithms) or take as input an initial multicast tree and 'walk' on different multicast trees for a finite number of steps until some acceptable decrease in energy consumption is achieved (local search algorithms).
We mainly focus on augmentation algorithms and in particular we have implemented a long list of existing such algorithms in the literature and new ones. We experimentally compare all these algorithms on random geometric instances of the problem and obtain results in terms of the energy efficiency of the solutions obtained. Additional results concerning the running time of our implementations are also presented. We also explore how much the solutions obtained by augmentation algorithms can be improved by local search algorithms. Our results show that one of our new algorithms and its variations achieve the most energy-efficient solutions while being very fast. Our investigations shed some light on those properties of geometric instances of the problem which make augmentation algorithms perform well.

Abstract: Online, real-time counting and estimation of the cardinality of sets is highly desirable for a large variety of applications, representing a foundational block for the efficient deployment and access of emerging internet-scale information systems. In this work we implement three well-known duplicate-insensitive counting algorithms and evaluate their performance in a testbed of resource-limited commercial off-the-shelf hardware devices. We focus on devices that can be used in wireless mobile and sensor applications and evaluate the memory complexity, time complexity and absolute error of the algorithms under different realistic scenarios. Our findings indicate the suitability of each algorithm depending on the application characteristics.
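As an illustration of what "duplicate insensitive" buys, here is a Flajolet-Martin sketch, one of the classic estimators in this family. The 32-map configuration and the hashing scheme are our own illustrative choices, not the paper's benchmarked implementations:

```python
import hashlib

NUM_MAPS = 32          # independent bitmaps averaged for a stabler estimate
PHI = 0.77351          # Flajolet-Martin correction factor

def trailing_zeros(x):
    """Index of the lowest set bit (32 if x == 0)."""
    return (x & -x).bit_length() - 1 if x else 32

def lowest_zero(b):
    """Index of the lowest unset bit of a nonnegative int."""
    i = 0
    while (b >> i) & 1:
        i += 1
    return i

class FMSketch:
    def __init__(self):
        self.bitmaps = [0] * NUM_MAPS

    def add(self, item):
        # setting a bit is idempotent: re-inserting an item changes nothing
        for m in range(NUM_MAPS):
            h = int.from_bytes(
                hashlib.sha256(f"{m}:{item}".encode()).digest()[:4], "big")
            self.bitmaps[m] |= 1 << trailing_zeros(h)

    def estimate(self):
        r = sum(lowest_zero(b) for b in self.bitmaps) / NUM_MAPS
        return 2 ** r / PHI

sk = FMSketch()
for i in range(1000):
    sk.add(i)
    sk.add(i)               # duplicates have no effect on the bitmaps
print(round(sk.estimate()))  # roughly 1000, within FM's typical error
```

Because the sketch is a fixed-size bitmap updated idempotently, it suits the memory-limited devices the study targets, and sketches from different nodes can be merged with a bitwise OR.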

Abstract: We present key aspects (hardware, software, topology, networking) of SenseWall, an experimental sensor network test-bed we have created for the implementation and engineering of distributed sensor network algorithms. We then describe how SenseWall has in particular been used to implement two recent state-of-the-art algorithms for energy-balanced sensor data propagation. We elaborate on the issues and challenges created by the restrictions and particularities of the experimental test-bed and how we dealt with them. We also carry out a detailed performance evaluation comparing the energy balance protocols to two baseline protocols that use only either single-hop or direct data transmissions.

Abstract: We consider the performance of a number of DPLL algorithms on random 3-CNF formulas with n variables and m = rn clauses. A long series of papers analyzing so-called “myopic” DPLL algorithms has provided a sequence of lower bounds for their satisfiability threshold. Indeed, for each myopic algorithm A it is known that there exists an algorithm-specific clause-density, rA, such that if r < rA, then A finds a satisfying assignment with positive probability, whereas for densities above rA the algorithm takes exponential time. Specifically, all extensions of ORDERED-DLL take exponential time for r > 2.78 and the same is true for GENERALIZED UNIT CLAUSE for all r > 3.1. Our results imply exponential lower bounds for many other myopic algorithms for densities similarly close to the corresponding rA.

Abstract: We propose a fair scheduling algorithm for Computational Grids, called the Fair Execution Time Estimation (FETE) algorithm. FETE assigns a task to the computation resource that minimizes what we call its fair execution time estimation. The fair execution time of a task on a certain resource is an estimate of the time by which the task will be executed on the resource, assuming it gets a fair share of the resource's computational power. Though space-shared scheduling is used in practice, the estimates of the fair execution times are obtained assuming that a time-sharing discipline is used. We experimentally evaluate the proposed algorithm and observe that it outperforms other known scheduling algorithms. We also propose a version of FETE, called Simple FETE (SFETE), which requires no a priori knowledge of the tasks' workloads and in most cases has similar performance to that of FETE.
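The core estimate can be sketched as follows, under the idealized time-sharing discipline the abstract describes: all tasks on a resource share its speed equally, and shorter tasks finish (and release their share) first. The function names and resource representation are ours, not the paper's:

```python
def fair_execution_time(task_load, resource_speed, queued_loads):
    """Completion time of a task of size task_load under egalitarian
    processor sharing with the already-queued tasks."""
    loads = sorted(queued_loads + [task_load])
    remaining = len(loads)
    elapsed, processed = 0.0, 0.0  # processed = work done so far per running task
    for w in loads:
        # the 'remaining' unfinished tasks each get an equal share of the speed
        elapsed += (w - processed) * remaining / resource_speed
        processed = w
        if w == task_load:
            return elapsed  # our task (and any tie) finishes here
        remaining -= 1

def fete_select(task_load, resources):
    """FETE-style choice: the resource minimizing the fair execution time estimate."""
    return min(resources,
               key=lambda r: fair_execution_time(task_load, r["speed"], r["queued"]))
```

For example, a task of size 2 sharing a unit-speed resource with a queued task of size 4 runs at speed 1/2 until it completes, giving a fair execution time of 4.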

Abstract: We study the problem of fair resource allocation in a simple cooperative multi-agent setting where we have k agents and a set of n objects to be allocated to those agents. Each object is associated with a weight represented by a positive integer or real number. We would like to allocate all objects to the agents so that each object is allocated to only one agent and the weight is distributed fairly. We adopt the fairness index popularized by the networking community as our measure of fairness, and study centralized algorithms for fair resource allocation. Based on the relationship between our problem and number partitioning, we devise a greedy algorithm for fair resource allocation that runs in polynomial time but is not guaranteed to find the optimal solution, and a complete anytime algorithm that finds the optimal solution but runs in exponential time. Then we study the phase transition behavior of the complete algorithm. Finally, we demonstrate that the greedy algorithm actually performs very well and returns almost perfectly fair allocations.
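As an illustration of the setup above (assuming the "fairness index popularized by the networking community" is Jain's fairness index, and sketching one plausible greedy scheme in the spirit of number-partitioning heuristics; the paper's exact algorithm may differ):

```python
def jain_index(loads):
    """Jain's fairness index: (sum x)^2 / (k * sum x^2); equals 1.0 when all equal."""
    s = sum(loads)
    sq = sum(x * x for x in loads)
    return s * s / (len(loads) * sq)

def greedy_allocate(weights, k):
    """Heaviest-object-first greedy: give each object to the currently lightest agent."""
    loads = [0.0] * k
    alloc = [[] for _ in range(k)]
    for w in sorted(weights, reverse=True):
        i = min(range(k), key=loads.__getitem__)  # lightest agent so far
        loads[i] += w
        alloc[i].append(w)
    return alloc, loads
```

On the instance {5, 4, 3, 2, 1, 1} with two agents this greedy happens to return a perfectly balanced 8/8 split, matching the paper's observation that the greedy often returns almost perfectly fair allocations.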

Abstract: Many WSN algorithms and applications are based on knowledge regarding the position of nodes inside the network area. However, using GPS-based modules to perform localization in WSNs is a rather expensive solution, and in the case of indoor applications, such as smart buildings, it is also not applicable. Therefore, several techniques have been studied to perform relative localization in WSNs; that is, to compute the position of a node inside the network area relative to the position of other nodes. Many such techniques are based on indicators like the Received Signal Strength Indicator (RSSI) and the Link Quality Indicator (LQI). These techniques rely on the assumption that there is a strong correlation between the Euclidean distance of the communicating motes and these indicators: high values of RSSI and LQI should indicate physical proximity of two communicating nodes. However, these indicators do not depend solely on distance. Physical obstacles, ambient electromagnetic noise and interference from other wireless transmissions also affect the quality of wireless communication in a stochastic way. In this paper we propose, implement, experimentally fine-tune and evaluate a localization algorithm that exploits the stochastic nature of interference during wireless communications in order to perform localization in WSNs. Our algorithm is particularly designed for indoor localization of moving people in smart buildings. The localization achieved is fine-grained, i.e., the position of the target mote is successfully computed with approximately one meter accuracy. This fine-grained localization can be used by smart Building Management Systems in many applications, such as room adaptation to presence. In our scenario, the proposed algorithm is used by a smart room to localize the position of people inside the room and adapt room illumination accordingly.

Abstract: The Internet and the Web have arguably surpassed the von Neumann Computer as the most complex computational artifacts of our times. Out of all the formidable characteristics of the Internet/Web, it seems that the most novel is its socio-economic complexity. The Internet and the Web are built, operated and used by a multitude of very diverse economic interests, which compete or collaborate in various degrees. In fact, this suggests that some very important insights about the Web may come from a fusion of ideas from Algorithms with concepts and techniques from Mathematical Economics and Game Theory. Since 2000, a new field has emerged (Algorithmic Game Theory and Computational Internet Economics), which examines exactly such ideas. We feel that this field belongs to Web Science. Some of the main topics of that field examined so far are Internet equilibria, the so-called “Price of Anarchy” (a measure of the loss of optimality due to selfishness [Koutsoupias, Papadimitriou (1999)]), electronic markets and their equilibria, auctions, and algorithmic mechanism design (inverse game theory, or how to design a game in the net in such a clever way that individual players, motivated solely by their selfish interests, actually end up meeting the goals of the game designer!). Our paper here surveys the main concepts and results of this very promising subfield.

Abstract: In this paper we present the design of a simulator platform called FUSE (Fast Universal Simulator Engine). The term Universal means that the Engine can be adapted easily to different domains and be used for varying simulation needs, although our main target is simulation of distributed algorithms in distributed computing environments. The Engine is Fast in the sense that the simulation overhead is minimal and very large systems can be simulated. We discuss the architecture and the design decisions that form the basis of these features. We also describe the functionality that is provided to its users (e.g., monitoring, statistics, etc.).

Abstract: In this work we experimentally study the min order Radiocoloring problem (RCP) on Chordal, Split and Permutation graphs, which are three basic families of perfect graphs. This problem asks to find an assignment using the minimum number of colors to the vertices of a given graph G, so that each pair of vertices which are at distance at most two apart in G have different colors. RCP is an NP-Complete problem on chordal and split graphs [4]. For each of the three families, upper bounds and/or approximation algorithms are known for the minimum number of colors needed to radiocolor such a graph [4,10].
We design and implement radiocoloring heuristics for graphs of the above families, which are based on the greedy heuristic. Also, for each of the above families, we investigate whether there exist graph instances requiring a number of colors close to the best known upper bound for the family in order to be radiocolored. Towards this goal, we present a number of generators that produce graphs of the above families requiring either (i) a large number of colors (compared to the best upper bound) in order to be radiocolored, called “extremal” graphs, or (ii) a small number of colors, called “non-extremal” instances. The experimental evaluation showed that randomly generated graph instances are in most cases “non-extremal” graphs, and that greedy-like heuristics perform very well in most cases, especially for “non-extremal” graphs.
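A minimal sketch of a greedy radiocoloring heuristic of the kind described: each vertex, in some order, receives the smallest color not already used within distance two. The largest-degree-first ordering below is one plausible choice; the paper's heuristics may order vertices differently:

```python
def greedy_radiocoloring(adj):
    """Greedy radiocoloring: vertices at distance <= 2 must get different colors.
    adj maps each vertex to its list of neighbors (undirected graph)."""
    order = sorted(adj, key=lambda v: -len(adj[v]))  # largest degree first
    color = {}
    for v in order:
        forbidden = set()
        for u in adj[v]:                # distance-1 neighbors
            forbidden.add(color.get(u))
            for w in adj[u]:            # distance-2 neighbors
                if w != v:
                    forbidden.add(color.get(w))
        c = 1
        while c in forbidden:           # smallest color not seen within distance 2
            c += 1
        color[v] = c
    return color
```

On a path a–b–c all three vertices are pairwise within distance two, so the heuristic (like any valid radiocoloring) must use three distinct colors.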

Abstract: Geographic routing is becoming the protocol of choice for many sensor network applications. Some very efficient geographic routing algorithms exist, however they require a preliminary planarization of the communication graph. Planarization induces overhead which makes this approach not optimal when lightweight protocols are required. On the other hand, georouting algorithms which do not rely on planarization have fairly low success rates and either fail to route messages around all but the simplest obstacles or have a high topology control overhead (e.g. contour detection algorithms). In this entry we describe the GRIC algorithm which was designed to overcome some of those limitations. The GRIC algorithm was proposed in [PN07a]. It is the first lightweight and efficient on demand (i.e. all-to-all) geographic routing algorithm which does not require planarization, has almost 100% delivery rates (when no obstacles are added), and behaves well in the presence of large communication blocking obstacles.

Abstract: A fundamental approach to finding best routes or optimal itineraries efficiently in traffic information systems is to reduce the search space (the part of the graph visited) of the most commonly used shortest path routine (Dijkstra's algorithm) on a suitably defined graph. We investigate reduction of the search space while simultaneously retaining data structures, created during a preprocessing phase, of size linear (i.e., optimal) in the size of the graph. We show that the search space of Dijkstra's algorithm can be significantly reduced by extracting geometric information from a given layout of the graph and by encapsulating precomputed shortest-path information in the resulting geometric objects (containers). We present an extensive experimental study comparing the impact of different types of geometric containers using test data from real-world traffic networks. We also present new algorithms as well as an empirical study for the dynamic case of this problem, where edge weights are subject to change and the geometric containers have to be updated, and show that our new methods are two to three times faster than recomputing everything from scratch. Finally, in an appendix, we discuss the software framework that we developed to realize the implementations of all of our variants of Dijkstra's algorithm. Such a framework is not trivial to achieve as our goal was to maintain a common code base that is, at the same time, small, efficient, and flexible, as we wanted to enhance and combine several variants in any possible way.
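The container idea can be sketched as follows, using axis-aligned bounding boxes as containers and a full Dijkstra run from every node during preprocessing. This is a simplification of the approach described above; the function names, data layout, and choice of bounding boxes as the container type are our own illustrative assumptions:

```python
import heapq
import itertools

def dijkstra_tree(graph, src):
    """Plain Dijkstra from src. Returns (distances, first-hop-per-target).
    graph: {u: [(v, weight), ...]}."""
    dist, first = {src: 0.0}, {}
    tie = itertools.count()  # heap tie-breaker
    pq = [(0.0, next(tie), src, None)]
    while pq:
        d, _, u, f = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        if f is not None:
            first[u] = f
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, next(tie), v, v if f is None else f))
    return dist, first

def build_containers(graph, pos):
    """Container of edge (u, v): bounding box of all targets whose shortest
    path out of u starts with (u, v). pos maps node -> (x, y) layout."""
    boxes = {}
    for u in graph:
        _, first = dijkstra_tree(graph, u)
        for t, v in first.items():
            x, y = pos[t]
            b = boxes.setdefault((u, v), [x, y, x, y])
            b[0], b[1] = min(b[0], x), min(b[1], y)
            b[2], b[3] = max(b[2], x), max(b[3], y)
    return boxes

def pruned_query(graph, boxes, pos, src, target):
    """Dijkstra that relaxes an edge only if the target lies in its container."""
    tx, ty = pos[target]
    dist = {src: 0.0}
    tie = itertools.count()
    pq = [(0.0, next(tie), src)]
    while pq:
        d, _, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            b = boxes.get((u, v))
            if b is None or not (b[0] <= tx <= b[2] and b[1] <= ty <= b[3]):
                continue  # target cannot be reached optimally via this edge
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, next(tie), v))
    return float("inf")
```

The preprocessed state is one box (four numbers) per edge, i.e., linear in the graph size, and the query-time search simply skips every edge whose container excludes the target.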

Abstract: Building efficient internet-scale data management services is the main focus of this chapter. In particular, we aim to show how to leverage DHT technology and extend it with novel algorithms and architectures in order to (i) improve efficiency and reliability for traditional DHT (exact-match) queries, particularly exploiting the abundance of altruism witnessed in real-life P2P networks, (ii) speedup range queries for data stored on DHTs, and (iii) support efficiently and scalably the publish/subscribe paradigm over DHTs, which crucially depends on algorithms for supporting rich queries on string-attribute data.

Abstract: One of the major problems algorithm designers usually face is knowing in advance whether a proposed optimization algorithm is going to behave as planned, and if not, what changes are to be made to the way new solutions are examined so that the algorithm performs nicely. In this work we develop a methodology for differentiating good neighborhoods from bad ones. As a case study we consider the structure of the space of assignments for random 3-SAT formulas and we compare two neighborhoods: a simple one and a more refined one for which we already know that the corresponding algorithm behaves extremely well. We give evidence that it is possible to tell in advance what neighborhood structure will give rise to a good search algorithm and we show how our methodology could have been used to discover some recent results on the structure of the SAT space of solutions. We use as a tool Go with the winners, an optimization heuristic that uses many particles that independently search the space of all possible solutions. By gathering statistics, we compare the combinatorial characteristics of the different neighborhoods and we show that there are certain features that make a neighborhood better than another, thus giving rise to good search algorithms.

Abstract: We consider the offline version of the routing and
wavelength assignment (RWA) problem in transparent all-optical networks. In such networks and in the absence of regenerators, the signal quality of transmission degrades due to physical layer
impairments. We initially present an algorithm for solving the static RWA problem based on an LP relaxation formulation that tends to yield integer solutions. To account for signal degradation due to physical impairments, we model the effects of the path length, the path hop count, and the interference among lightpaths by imposing additional (soft) constraints on RWA. The objective of the resulting optimization problem is not only to serve the
connection requests using the available wavelengths, but also to minimize the total accumulated signal degradation on the selected lightpaths. Our simulation studies indicate that the proposed RWA algorithms select the lightpaths for the requested connections so as to avoid impairment generating sources, thus dramatically reducing the overall physical-layer blocking when compared to RWA algorithms that do not account for impairments.

Abstract: Dynamic graph algorithms have been extensively studied in the last two decades due to their wide applicability in many contexts. Recently, several implementations and experimental studies have been conducted investigating the practical merits of fundamental techniques and algorithms. In most cases, these algorithms required sophisticated engineering and fine-tuning to be turned into efficient implementations. In this paper, we survey several implementations along with their experimental studies for dynamic problems on undirected and directed graphs. The former case includes dynamic connectivity, dynamic minimum spanning trees, and the sparsification technique. The latter case includes dynamic transitive closure and dynamic shortest paths. We also discuss the design and implementation of a software library for dynamic graph algorithms.

Abstract: In this work we study the implementation of multicost routing in a distributed way in wireless mobile ad hoc networks. In contrast to traditional single-cost routing, where each path is characterized by a scalar, in multicost routing a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated. These parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. Up until now the performance of multicost and multi-constrained routing in wireless ad hoc networks has been evaluated either at a theoretical level or by assuming that nodes are static and have full knowledge of the network topology and nodes' state. In the present paper we assess the performance of multicost routing based on energy-related parameters in mobile ad hoc networks by embedding its logic in the Dynamic Source Routing (DSR) algorithm, which is a well-known fully distributed routing algorithm. We use simulations to compare the performance of the multicost-DSR algorithm to that of the original DSR algorithm and examine their behavior under various node mobility scenarios. The results confirm that the multicost-DSR algorithm improves the performance of the network in comparison to the original DSR algorithm in terms of energy efficiency. The multicost-DSR algorithm enhances the performance of the network not only by reducing overall energy consumption in the network, but also by spreading energy consumption more uniformly across the network, prolonging the network lifetime and reducing the packet drop probability. Furthermore, the delay suffered by the packets reaching their destination is shown to be lower for the multicost-DSR algorithm than for the original DSR algorithm.

Abstract: We describe algorithms for finding shortest paths and distances in outerplanar and planar digraphs
that exploit the particular topology of the input graph. An important feature of our algorithms is that they can
work in a dynamic environment, where the cost of any edge can be changed or the edge can be deleted. In the
case of outerplanar digraphs, our data structures can be updated after any such change in only logarithmic time.
A distance query is also answered in logarithmic time. In the case of planar digraphs, we give an interesting
tradeoff between preprocessing, query, and update times depending on the value of a certain topological
parameter of the graph. Our results can be extended to n-vertex digraphs of genus O(n^(1−ε)) for any ε > 0.

Abstract: We consider the online impairment-aware routing
and wavelength assignment (IA-RWA) problem in transparent
WDM networks. To serve a new connection, the online algorithm,
in addition to finding a route and a free wavelength (a lightpath),
has to guarantee its transmission quality, which is affected by
physical-layer impairments. Due to interference effects, the establishment
of the new lightpath affects and is affected by the other
lightpaths. We present two multicost algorithms that account
for the actual current interference among lightpaths, as well as
for other physical effects, performing a cross-layer optimization
between the network and physical layers. In multicost routing,
a vector of cost parameters is assigned to each link, from which
the cost vectors of the paths are calculated. The first algorithm
utilizes cost vectors consisting of impairment-generating source
parameters, so as to be generic and applicable to different physical
settings. These parameters are combined into a scalar cost
that indirectly evaluates the quality of candidate lightpaths. The
second algorithm uses specific physical-layer models to define
noise variance-related cost parameters, so as to directly calculate
the Q-factor of candidate lightpaths. The algorithms find a set of
so-called non-dominated paths to serve the connection, in the sense
that no path in the set is better than another with respect to all cost parameters.
To select the lightpath, we propose various optimization functions
that correspond to different IA-RWA algorithms. The proposed
algorithms combine the strength of multicost optimization with
low execution times, making them appropriate for serving online
connections.
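The domination relation at the heart of these multicost algorithms, and the Pareto-filtering step that yields the non-dominated path set, can be sketched as follows (cost vectors reduced to plain tuples of minimized parameters; the handling of wavelength-availability vectors is omitted):

```python
def dominates(a, b):
    """Cost vector a dominates b if a is no worse in every parameter and
    strictly better in at least one (all parameters to be minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(vectors):
    """Keep only the Pareto-optimal cost vectors: candidates from which
    different optimization policies then select the final lightpath."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u != v)]
```

For example, among the candidate cost vectors (hops, noise) = (2, 5), (3, 3), (4, 4), (2, 6), the vectors (4, 4) and (2, 6) are dominated and discarded, leaving (2, 5) and (3, 3) as the non-dominated set passed to the optimization function.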

Abstract: Wireless networks are nowadays widely used and constantly evolving; we are surrounded by numerous such networks. A modern idea is to exploit this variety and aggregate all available connections together to improve Internet connectivity. In this paper we present our design of a networking system that enables participating entities to connect with each other and share their broadband connections. Every connected entity gains broadband access from those entities that have and share a connection, independently of how many connections are available and how fast they are. Our goal is to improve the utilization of the broadband connections and to provide fault tolerance on these connections. To this end, the system is built on top of a peer-to-peer network, where the participating entities share their connections according to various scheduling algorithms.

Abstract: With this work we aim to make a three-fold contribution. We first address the issue of efficiently supporting queries over string attributes involving prefix, suffix, containment, and equality operators in large-scale data networks. Our first design decision is to employ distributed hash tables (DHTs) for the data network's topology, harnessing their desirable properties. Our next design decision is to derive DHT-independent solutions, treating the DHT as a black box. Second, we exploit this infrastructure to develop efficient content-based publish/subscribe systems. The main contribution here is a set of algorithms for the efficient processing of queries (subscriptions) and events (publications). Specifically, we show that our subscription processing algorithms require O(log N) messages for an N-node network, and our event processing algorithms require O(l · log N) messages (with l being the average string length). Third, we develop algorithms for optimizing the processing of multi-dimensional events involving several string attributes. Further to our analysis, we provide simulation-based experiments showing promising performance results in terms of number of messages, required bandwidth, load balancing, and response times.

Abstract: This work provides a jitter analysis of size-based
burst assembly algorithms and also discusses other burst assembly
algorithms that use the packet delay as the assembly threshold
to provide a bound on jitter.

Abstract: This paper presents results from the IST Phosphorus project that studies and implements an optical Grid test-bed. A significant part of this project addresses scheduling and routing algorithms and dimensioning problems of optical grids. Given the high costs involved in setting up actual hardware implementations, simulations are a viable alternative. In this paper we present an initial study which proposes models that reflect real-world grid application traffic characteristics, appropriate for simulation purposes. We detail several such models and the corresponding process to extract the model parameters from real grid log traces, and verify that synthetically generated jobs provide a realistic approximation of the real-world grid job submission process.

Abstract: In this paper we present a multicost algorithm for the joint
time scheduling of the communication and computation
resources that will be used by a task. The proposed
algorithm selects the computation resource to execute the
task, determines the path to route the input data, and finds
the starting times for the data transmission and the task
execution, performing advance reservations. We initially
present an optimal scheme of non-polynomial complexity
and by appropriately pruning the set of candidate paths we
also give a heuristic algorithm of polynomial complexity. We
evaluate the performance of our algorithm and compare it to
that of algorithms that handle only the computation or
communication part of the problem separately. We show that
in a Grid network where the tasks are CPU- and data-intensive,
important performance benefits can be obtained by
jointly optimizing the use of the communication and
computation resources.

Abstract: In this work we study the combination of multicost
routing and adjustable transmission power in wireless
ad hoc networks, so as to obtain dynamic energy- and
interference-efficient routes to optimize network performance.
In multicost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated. Only at the end are these parameters combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. The multicost routing problem is a generalization of the multi-constrained problem, where no constraints exist, and is also significantly more powerful than single-cost routing. Since energy is an important
limitation of wireless communications, the cost parameters
considered are the number of hops, the interference caused,
the residual energy and the transmission power of the
nodes on the path; other parameters could also be included,
as desired. We assume that nodes can use power control to
adjust their transmission power to the desired level. The
experiments conducted show that the combination of multicost
routing and adjustable transmission power can lead to
reduced interference and energy consumption, improving
network performance and lifetime.

Abstract: In translucent (or managed reach) WDM optical
networks, regenerators are employed at specific nodes. Some of
the connections in such networks are routed transparently, while
others have to go through a sequence of 3R regenerators that serve
as “refueling stations” to restore their quality of transmission
(QoT). We extend an online multicost algorithm for transparent
networks presented in our previous study [1], to obtain an IA-RWA
algorithm that works in translucent networks and makes use,
when required, of the regenerators present at certain locations
of the network. To characterize a path, the algorithm uses a
multicost formulation with several cost parameters, including the
set of available wavelengths, the length of the path, the number of
regenerators used, and noise variance parameters that account for
the physical layer impairments. Given a new connection request
and the current utilization state of the network, the algorithm calculates
a set of non-dominated candidate paths, meaning that no path in this set is inferior to another with respect to all cost parameters. This set consists of all the cost-effective (in
terms of the domination relation) and feasible (in terms of QoT)
lightpaths for the given source-destination pair, including all the
possible combinations for the utilization of available regenerators
of the network. An optimization function or policy is then applied
to this set in order to select the optimal lightpath. Different optimization
policies correspond to different IA-RWA algorithms.
We propose and evaluate several optimization policies, such as the
most used wavelength, the best quality of transmission, the least
regeneration usage, or a combination of these rules. Our results
indicate that in a translucent network the employed IA-RWA
algorithm has to consider all problem parameters, namely, the
QoT of the lightpaths, the utilization of wavelengths and the
availability of regenerators, to efficiently serve the online traffic.

Abstract: This paper addresses the efficient processing of
top-k queries in wide-area distributed data
repositories where the index lists for the attribute
values (or text terms) of a query are distributed
across a number of data peers and the
computational costs include network latency,
bandwidth consumption, and local peer work.
We present KLEE, a novel algorithmic
framework for distributed top-k queries,
designed for high performance and flexibility.
KLEE makes a strong case for approximate top-k
algorithms over widely distributed data sources.
It shows how great gains in efficiency can be
enjoyed at low result-quality penalties. Further,
KLEE affords the query-initiating peer the
flexibility to trade-off result quality and expected
performance and to trade-off the number of
communication phases engaged during query
execution versus network bandwidth
performance. We have implemented KLEE and
related algorithms and conducted a
comprehensive performance evaluation. Our
evaluation employed real-world and synthetic
large, web-data collections, and query
benchmarks. Our experimental results show that
KLEE can achieve major performance gains in
terms of network bandwidth, query response
times, and much lighter peer loads, all with small
errors in result precision and other result-quality
measures.

Abstract: We investigate the existence and efficient algorithmic
construction of close to optimal independent sets in random models
of intersection graphs. In particular, (a) we propose \emph{a new model} for random intersection graphs
($G_{n, m, \vec{p}}$) which includes the model of
\cite{RIG} (the ``uniform" random intersection graphs model) as an
important special case. We also define an interesting variation of
the model of random intersection graphs, similar in spirit to
random regular graphs. (b) For this model we derive \emph{exact formulae} for the mean
and variance of the number of independent sets of size $k$ (for
any $k$) in the graph. (c) We then propose and analyse \emph{three algorithms} for the
efficient construction of large independent sets in this model.
The first two are variations of the greedy technique while the
third is a totally new algorithm. Our algorithms are analysed for
the special case of uniform random intersection graphs.
Our analyses show that these algorithms succeed in finding
\emph{close to optimal} independent sets for an interesting range
of graph parameters.

Abstract: We address the issue of measuring storage, or query load distribution fairness in peer-to-peer data management systems. Existing metrics may look promising from the point of view of specific peers, while in reality being far from optimal from a global perspective. Thus, first we define the requirements and study the appropriateness of various statistical metrics for measuring load distribution fairness towards these requirements. The metric proposed as most appropriate is the Gini coefficient (G). Second, we develop novel distributed sampling algorithms to compute G on-line, with high precision, efficiently, and scalably. Third, we show how G can readily be utilized on-line by higher-level algorithms which can now know when to best intervene to correct load imbalances. Our analysis and experiments testify for the efficiency and accuracy of these algorithms, permitting the online use of a rich and reliable metric, conveying a global perspective of the distribution.
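For reference, the Gini coefficient G of a load vector can be computed from the sorted loads with a standard rank-weighted formula. The contribution of the paper above is computing G online in a distributed, sampled fashion; the centralized sketch below only illustrates the metric itself:

```python
def gini(loads):
    """Gini coefficient of a non-negative load distribution:
    0.0 = perfectly even, approaching 1.0 = maximally skewed."""
    xs = sorted(loads)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # rank-weighted sum over the sorted values
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n
```

A perfectly balanced system ([1, 1, 1, 1]) yields G = 0, while concentrating all load on one of four peers ([0, 0, 0, 4]) yields G = 0.75, the maximum (n−1)/n for n = 4, which is the kind of global imbalance signal higher-level rebalancing algorithms can act on.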

Abstract: We consider the problem of searching for a piece of
information in a fully interconnected computer network
(also called a complete network or clique) by exploiting
advice about its location from the network nodes. Each
node contains a database that ?knows? what kind of
documents or information are stored in other nodes
(e.g., a node could be a Web server that answers queries
about documents stored on the Web). The databases in
each node, when queried, provide a pointer that leads to
the node that contains the information. However, this
information is up-to-date (or correct) with some
bounded probability. While, in principle, one may always
locate the information by simply visiting the network
nodes in some prescribed ordering, this requires a time
complexity in the order of the number of nodes of the
network. In this paper, we provide algorithms for locating
an information node in the complete communication
network, which take advantage of advice given from
network nodes. The nodes may either give correct advice,
by pointing directly to the information node, or give
wrong advice, by pointing elsewhere. On the lowerbounds?
side, we show that no fixed-memory (i.e., with
memory independent of the network size) deterministic
algorithm may locate the information node in a constant
(independent of the network size) expected number of
steps. Moreover, if p (1/n) is the probability that a
node of an n-node clique gives correct advice, we show
that no algorithm may locate the information node in an
expected number of steps less than 1/p o(1). To study
how the expected number of steps is affected by the
amount of memory allowed to the algorithms, we give a
memoryless randomized algorithm with expected number
of steps 4/p o(1/p) o(1) and a 1-bit randomized
algorithm requiring on the average at most 2/p o(1)
steps. In addition, in the memoryless case, we also
prove a 4/p lower bound for the expected number of
steps in the case where the nodes giving faulty advice
may decide on the content of this advice in any possible
way and not merely at random (adversarial fault model).
Finally, for the case where faulty nodes behave randomly,
we give an optimal, unlimited memory deterministic
algorithm with expected number of steps bounded
from above by 1/p + o(1/p) + 1.
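The search model above is easy to simulate. The sketch below uses an illustrative memoryless strategy (repeatedly jump to a uniformly random node and follow its advice); the node count, probability, and strategy are our own choices for the demonstration, not the paper's exact algorithms, but they show empirically that O(1/p) visits suffice on average.

```python
import random

def search_with_advice(n, p, target, rng, max_steps=100000):
    """Search an n-node clique for `target` using possibly faulty advice.

    Illustrative memoryless strategy: visit a uniformly random node,
    then follow its advice; repeat.  Each queried node points to
    `target` with probability p and to a uniformly random node
    otherwise.  Returns the number of node visits.
    """
    steps = 0
    while steps < max_steps:
        node = rng.randrange(n)          # jump to a random node
        steps += 1
        if node == target:
            return steps
        # ask the node for advice and visit the advised node
        advised = target if rng.random() < p else rng.randrange(n)
        steps += 1
        if advised == target:
            return steps
    return steps

rng = random.Random(0)
n, p, target = 1000, 0.2, 7              # arbitrary illustrative parameters
trials = [search_with_advice(n, p, target, rng) for _ in range(2000)]
avg = sum(trials) / len(trials)
# With advice correct w.p. p, the expected number of visits is O(1/p):
# each two-visit round succeeds with probability roughly p.
```

For p = 0.2 the observed average settles near 2/p visits, consistent with the Θ(1/p) behavior the abstract proves.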

Abstract: We consider the problem of searching for a piece of information in a fully interconnected computer network (clique) by exploiting advice about its location from the network nodes. Each node contains a database that knows what kind of documents or information are stored in other nodes (e.g., a node could be a Web server that answers queries about documents stored on the Web). The databases in each node, when queried, provide a pointer that leads to the node that contains the information. However, this information is up to date (or correct) only with some bounded probability. While, in principle, one may always locate the information by simply visiting the network nodes in some prescribed ordering, this requires a time complexity in the order of the number of nodes of the network. In this paper, we provide algorithms for locating an information node in the complete communication network that take advantage of advice given from network nodes. The nodes may either give correct advice, by pointing directly to the information node, or give wrong advice, by pointing elsewhere. We show that, on average, if the probability p that a node provides correct advice is asymptotically larger than 1/n, where n is the number of the computer nodes, then the average time complexity for locating the information node is asymptotically proportional to 1/p, with the constant depending on the available memory. The probability p may in general be a function of the number of network nodes n. On the lower-bounds side, we prove that no fixed-memory deterministic algorithm can locate the information node in a finite expected number of steps. We also prove a lower bound of 1/p - o(1) for the expected number of steps of any algorithm that locates the information node in the complete network.

Abstract: Flow control is the main technique currently used to prevent some of the offered traffic from entering a communication network, in order to avoid congestion. A challenging aspect of flow control is how to treat all sessions "fairly" when it is necessary to turn traffic away from the network. In this work, we show how to extend the theory of max-min fair flow control to the case where priorities are assigned to different varieties of traffic and are sensitive to traffic levels. We examine priorities expressible in the general form of increasing functions of rates, and also consider, in combination, the more elaborate case of hard upper and lower bounds on the rates of traffic sessions. We offer optimal, priority bottleneck algorithms, which iteratively adjust the session rates in order to meet a new condition of max-min fairness under priorities and rate bounds. In our setting, which is realistic for today's technology of guaranteed quality of service, traffic may be turned away not only to avoid congestion, but also to respect particular minimum requirements on bandwidth. Moreover, we establish lower bounds on the competitiveness of network-oblivious schemes compared to optimal schemes with complete knowledge of the network structure. Our theory significantly extends the classical theory of max-min fair flow control [2]. Moreover, our results on rejected traffic are fundamentally different from those related to call control and bandwidth allocation, since not only do we wish to optimize the number and rates of accepted sessions, but we also require priority fairness.
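The classical, unprioritized max-min fairness that this abstract generalizes can be computed by the well-known bottleneck (progressive-filling) procedure. The sketch below is a minimal illustration of that baseline; the link and session names are invented for the example, and no priorities or rate bounds are modeled.

```python
def max_min_fair(capacities, sessions):
    """Classical max-min fair rate allocation via the bottleneck algorithm.

    `capacities` maps link -> capacity; `sessions` maps session -> set of
    links it traverses.  Repeatedly find the bottleneck link (smallest
    fair share among its still-active sessions), freeze those sessions at
    that share, subtract their rates, and recurse on the rest.
    """
    rate = {}
    remaining = dict(capacities)
    active = {s: set(links) for s, links in sessions.items()}
    while active:
        # fair share each link could give to its active sessions
        share = {}
        for link, cap in remaining.items():
            users = [s for s, ls in active.items() if link in ls]
            if users:
                share[link] = cap / len(users)
        if not share:
            break
        bottleneck = min(share, key=share.get)
        r = share[bottleneck]
        frozen = [s for s, ls in active.items() if bottleneck in ls]
        for s in frozen:
            rate[s] = r
            for link in active[s]:
                remaining[link] -= r   # bandwidth consumed on each link
            del active[s]
    return rate
```

For example, with link l1 of capacity 1 shared by sessions s1 and s2, and link l2 of capacity 2 shared by s2 and s3, l1 is the bottleneck: s1 and s2 each get 0.5, and s3 then receives the remaining 1.5 on l2.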

Abstract: The promise inherent in users coming together to form data-sharing
network communities brings to the foreground new problems formulated
over such dynamic, ever-growing computing, storage, and networking
infrastructures. A key open challenge is to harness these highly
distributed resources toward the development of an ultra scalable, efficient
search engine. From a technical viewpoint, any acceptable solution
must fully exploit all available resources dictating the removal of any
centralized points of control, which can also readily lead to performance
bottlenecks and reliability/availability problems. Equally importantly,
however, a highly distributed solution can also facilitate pluralism in informing
users about Internet content, which is crucial in order to preclude
the formation of information-resource monopolies and the biased visibility
of content from economically-powerful sources. To meet these challenges,
the work described here puts forward MINERVA∞, a novel search
engine architecture, designed for scalability and efficiency. MINERVA∞
encompasses a suite of novel algorithms, including algorithms for creating
data networks of interest, placing data on network nodes, load balancing,
top-k algorithms for retrieving data at query time, and replication algorithms
for expediting top-k query processing. We have implemented the
proposed architecture and we report on our extensive experiments with
real-world, web-crawled, and synthetic data and queries, showcasing the
scalability and efficiency traits of MINERVA∞.

Abstract: We consider the problem of computing minimum congestion, fault-tolerant, redundant assignments of messages to faulty, parallel delivery channels. In particular, we are given a set K of faulty channels, each having an integer capacity ci and failing independently with probability fi. We are also given a set M of messages to be delivered over K, and a fault-tolerance constraint (1 − ε), and we seek a redundant assignment φ that minimizes congestion Cong(φ), i.e. the maximum channel load, subject to the constraint that, with probability no less than (1 − ε), all the messages have a copy on at least one active channel. We present a polynomial-time 4-approximation algorithm for identical capacity channels and arbitrary message sizes, and a 2[ln(|K|/ε)/ln(1/fmax)]-approximation algorithm for related capacity channels and unit size messages. Both algorithms are based on computing a collection {K1, . . . , Kν} of disjoint channel subsets such that, with probability no less than (1 − ε), at least one channel is active in each subset. The objective is to maximize the sum of the minimum subset capacities. Since the exact version of this problem is NP-complete, we provide a 2-approximation algorithm for identical capacities, and a polynomial-time (8+o(1))-approximation algorithm for arbitrary capacities.

Abstract: In this work, we explore context-aware application scenarios that become possible by utilizing semantically rich information derived from real-world mobility and presence traces. Traces produced by people carrying personal mobile devices, capturing social and contextual interactions, serve as enablers for Future Internet applications. We discuss the fundamental concepts, technical issues and related research challenges. We propose a reference architecture for setting up a system that collects such traces in a Smart City environment. We present the algorithms used to process the traces and infer interactions and interests for the observed populations. We conduct two 3-day trial deployments: one in an office environment and the other in the context of a Smart Conference application. We discuss our findings regarding the system's capability to track interactions and the overall efficacy of the application.

Abstract: In this paper we propose an energy-aware broadcast algorithm for wireless networks. Our algorithm is based on the multicost approach and selects the set of nodes that, by transmitting, implement broadcasting in an optimally energy-efficient way. The energy-related parameters taken into account are the node transmission power and the node residual energy. The algorithm's complexity, however, is non-polynomial, and therefore we propose a relaxation producing a near-optimal solution in polynomial time. We also consider a distributed information exchange scheme that can be coupled with the proposed algorithms and examine the overhead introduced by this integration. Using simulations we show that the proposed algorithms outperform other solutions in the literature in terms of energy efficiency. Moreover, it is shown that the near-optimal algorithm obtains most of the performance benefits of the optimal algorithm at a smaller computational overhead.

Abstract: A key problem in Grid networks is how to efficiently manage the available infrastructure, in order to
satisfy user requirements and maximize resource utilization. This is in large part influenced by the
algorithms responsible for the routing of data and the scheduling of tasks. In this paper, we present several
multi-cost algorithms for the joint scheduling of the communication and computation resources that
will be used by a Grid task. We propose a multi-cost scheme of polynomial complexity that performs
immediate reservations and selects the computation resource to execute the task and determines the
path to route the input data. Furthermore, we introduce multi-cost algorithms that perform advance
reservations and thus also find the starting times for the data transmission and the task execution. We
initially present an optimal scheme of non-polynomial complexity and, by appropriately pruning the set
of candidate paths, we also give a heuristic algorithm of polynomial complexity. Our performance results
indicate that in a Grid network in which tasks are either CPU- or data-intensive (or both), it is beneficial
for the scheduling algorithm to jointly consider the computational and communication problems. A
comparison between immediate and advance reservation schemes shows the trade-offs with respect to
task blocking probability, end-to-end delay and the complexity of the algorithms.

Abstract: We propose a class of novel energy-efficient multi-cost routing algorithms for wireless mesh networks, and evaluate their performance. In multi-cost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated using appropriate operators. In the end these parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. We evaluate the performance of the proposed energy-aware multi-cost routing algorithms under two models. In the network evacuation model, the network starts with a number of packets that have to be transmitted and an amount of energy per node, and the objective is to serve the packets in the smallest number of steps, or serve as many packets as possible before the energy is depleted. In the dynamic one-to-one communication model, new data packets are generated continuously and nodes are capable of recharging their energy periodically, over an infinite time horizon, and we are interested in the maximum achievable steady-state throughput, the packet delay, and the energy consumption. Our results show that energy-aware multi-cost routing increases the lifetime of the network and achieves better overall network performance than other approaches.
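The multi-cost machinery described above can be illustrated with a small sketch: per-link parameters combine along a path with additive and min operators, and only at the end does an optimization function rank the candidate paths. The graph, power values, residual energies, and objective below are invented for illustration, and the brute-force path enumeration stands in for the paper's more efficient algorithms.

```python
def path_cost_vector(path, link_power, residual):
    """Combine per-link parameters into a path cost vector, multi-cost
    style: hop count and transmission power add along the path, while
    residual energy combines with a min operator over the transmitting
    nodes (all nodes on the path except the destination)."""
    hops = len(path) - 1
    power = sum(link_power[(u, v)] for u, v in zip(path, path[1:]))
    min_energy = min(residual[u] for u in path[:-1])
    return hops, power, min_energy

def best_path(graph, link_power, residual, src, dst, objective):
    """Enumerate simple paths (fine for tiny example graphs) and pick the
    one minimizing `objective` applied to the path's cost vector."""
    best, best_val = None, float("inf")
    def dfs(node, path):
        nonlocal best, best_val
        if node == dst:
            val = objective(path_cost_vector(path, link_power, residual))
            if val < best_val:
                best, best_val = path[:], val
            return
        for nxt in graph[node]:
            if nxt not in path:
                path.append(nxt)
                dfs(nxt, path)
                path.pop()
    dfs(src, [src])
    return best

# Invented example: two two-hop routes from A to D; the upper route is
# cheaper in power but crosses a node (B) with almost no energy left.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
link_power = {('A', 'B'): 1.0, ('B', 'D'): 1.0, ('A', 'C'): 2.0, ('C', 'D'): 2.0}
residual = {'A': 10.0, 'B': 1.0, 'C': 10.0, 'D': 5.0}
# Energy-aware objective: total transmission power over minimum residual
# energy, so routes through nearly depleted nodes are penalized.
chosen = best_path(graph, link_power, residual, 'A', 'D',
                   lambda v: v[1] / v[2])
```

Under this objective the algorithm avoids the low-energy node B and routes A-C-D, even though that path consumes more transmission power, which is exactly the lifetime-extending behavior the abstract reports.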

Abstract: In this work we study the combination of multi-cost routing and adjustable transmission power in wireless ad-hoc networks, so as to obtain dynamic energy- and interference-efficient routes that optimize network performance. In multi-cost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated. Only at the end are these parameters combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. The multi-cost routing problem is a generalization of the multi-constrained problem, where no constraints exist, and is also significantly more powerful than single-cost routing. Since energy is an important limitation of wireless communications, the cost parameters considered are the number of hops, the interference caused, the residual energy and the transmission power of the nodes on the path; other parameters could also be included, as desired. We assume that nodes can use power control to adjust their transmission power to the desired level. The experiments conducted show that the combination of multi-cost routing and adjustable transmission power can lead to reduced interference and energy consumption, improving network performance and lifetime.

Abstract: We present an architecture for implementing optical
buffers, based on the feed-forward-buffer concept, that can truly
emulate input queuing and accommodate asynchronous packet
and burst operation. The architecture uses wavelength converters
and fixed-length delay lines that are combined to form either a
multiple-input buffer or a shared buffer. Both architectures are
modular, allowing the expansion of the buffer at a cost that grows
logarithmically with the buffer depth, where the cost is measured
in terms of the number of switching elements and wavelength
converters employed. The architectural design also provides
a tradeoff between the number of wavelength converters and their
tunability. The buffer architectures proposed are complemented
with scheduling algorithms that can guarantee lossless communication
and are evaluated using physical-layer simulations to
obtain their performance in terms of bit-error rate and achievable
buffer size.

Abstract: Geographic routing scales well in sensor networks, mainly due to its stateless nature. Still, most algorithms are concerned with finding some path, while optimality of the path is difficult to achieve. In this paper we present a novel geographic routing algorithm with obstacle-avoidance properties. It aims at finding the optimal path from a source to a destination when some areas of the network are unavailable for routing, due to low local density or the presence of obstacles. It locally and gradually (but, as we show, quite fast) evaluates and updates the suitability of previously used paths and excludes non-optimal paths from further routing. By means of extensive simulations, we compare its performance to existing state-of-the-art protocols, showing that it performs much better in terms of path length, thus minimizing latency, storage space, overall traffic, and energy consumption.
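The stateless greedy forwarding that geographic algorithms of this kind build on, and whose local-minimum failures around voids and obstacles motivate the obstacle avoidance above, can be sketched as follows; the coordinates and topology are invented for the example.

```python
import math

def greedy_geographic_route(positions, neighbors, src, dst):
    """Greedy geographic forwarding (the stateless baseline): each node
    forwards to the neighbor geographically closest to the destination.
    Routing fails when no neighbor makes progress, i.e. at a local
    minimum caused by a void or an obstacle.  Returns (path, success)."""
    def dist(a, b):
        (x1, y1), (x2, y2) = positions[a], positions[b]
        return math.hypot(x1 - x2, y1 - y2)
    path, node = [src], src
    while node != dst:
        nxt = min(neighbors[node], key=lambda v: dist(v, dst), default=None)
        if nxt is None or dist(nxt, dst) >= dist(node, dst):
            return path, False      # stuck at a local minimum
        path.append(nxt)
        node = nxt
    return path, True
```

On a simple chain of nodes greedy forwarding succeeds hop by hop; if the only neighbor points away from the destination, the route stalls, which is the failure case that obstacle-avoiding schemes detect and learn to route around.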

Abstract: We consider the offline version of the routing and
wavelength assignment (RWA) problem in transparent all-optical
networks. In such networks and in the absence of regenerators,
the signal quality of transmission degrades due to physical layer
impairments. Because of certain physical effects, routing choices
made for one lightpath affect and are affected by the choices made
for the other lightpaths. This interference among the lightpaths
is particularly difficult to formulate in an offline algorithm since,
in this version of the problem, we start without any established
connections and the utilized lightpaths are the variables of
the problem. We initially present an algorithm for solving the pure
(without impairments) RWA problem, based on an LP-relaxation
formulation that tends to yield integer solutions. Then, we extend
this algorithm and present two impairment-aware (IA) RWA algorithms
that account for the interference among lightpaths in their
formulation. The first algorithm takes the physical layer indirectly
into account by limiting the impairment-generating sources. The
second algorithm uses noise variance-related parameters to directly
account for the most important physical impairments. The
objective of the resulting cross-layer optimization problem is not
only to serve the connections using a small number of wavelengths
(network layer objective), but also to select lightpaths that have
acceptable quality of transmission (physical layer objective).
Simulation experiments using realistic network, physical layer,
and traffic parameters indicate that the proposed algorithms can
solve real problems within acceptable time.

Abstract: The voting rules proposed by Dodgson and Young are both
designed to find the alternative closest to being a Condorcet
winner, according to two different notions of proximity; the
score of a given alternative is known to be hard to compute
under either rule.
In this paper, we put forward two algorithms for approximating
the Dodgson score: an LP-based randomized
rounding algorithm and a deterministic greedy algorithm,
both of which yield an O(logm) approximation ratio, where
m is the number of alternatives; we observe that this result
is asymptotically optimal, and further prove that our greedy
algorithm is optimal up to a factor of 2, unless problems in
NP have quasi-polynomial time algorithms. Although the
greedy algorithm is computationally superior, we argue that
the randomized rounding algorithm has an advantage from
a social choice point of view.
Further, we demonstrate that computing any reasonable
approximation of the ranking produced by Dodgson's rule
is NP-hard. This result provides a complexity-theoretic
explanation of sharp discrepancies that have been observed
in the Social Choice Theory literature when comparing
Dodgson elections with simpler voting rules.
Finally, we show that the problem of calculating the
Young score is NP-hard to approximate by any factor. This
leads to an inapproximability result for the Young ranking.

Abstract: An ad-hoc mobile network is a collection of mobile hosts, with
wireless communication capabilities, forming a temporary network
without the aid of any established fixed infrastructure.
In such networks, topological connectivity is subject to frequent,
unpredictable change. Our work focuses on networks with high
rate of such changes to connectivity. For such dynamic changing
networks we propose protocols which exploit the co-ordinated
(by the protocol) motion of a small part of the network.
We show that such protocols can be designed to work
correctly and efficiently even in the case of arbitrary (but not
malicious) movements of the hosts not affected by the protocol.
We also propose a methodology for the analysis of the expected
behaviour of protocols for such networks, based on the assumption that mobile hosts (whose motion is not guided by
the protocol) conduct concurrent random walks in their
motion space.
Our work examines some fundamental problems such as pair-wise
communication, election of a leader and counting, and proposes
distributed algorithms for each of them. We provide
proofs of correctness, and give rigorous analyses both via
combinatorial tools and via experiments.

Abstract: Understanding the graph structure of the Internet is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining this graph structure can be a surprisingly difficult task, as edges cannot be explicitly queried. For instance, empirical studies of the network of Internet Protocol (IP) addresses typically rely on indirect methods like traceroute to build what are approximately single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a paper by Lakhina et al. [2003] found empirically that the resulting sample is intrinsically biased. Further, in simulations, they observed that the degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson.
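The sampling bias discussed above is easy to reproduce: a single-source shortest-path (BFS) tree, used here as a crude stand-in for traceroute, observes only a small fraction of a Poisson random graph's edges, and never more than one incoming edge per node. The graph size and mean degree below are arbitrary illustrative choices.

```python
import random
from collections import deque

def bfs_tree_edges(adj, source):
    """Single-source shortest-path (BFS) tree: only the tree's edges are
    'observed', mimicking what traceroute-style probing reveals."""
    parent = {source: None}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return [(parent[v], v) for v in parent if parent[v] is not None]

rng = random.Random(1)
n, c = 400, 6.0                       # G(n, p) with mean degree ~c (Poisson)
adj = {u: set() for u in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < c / n:
            adj[u].add(v)
            adj[v].add(u)
total_edges = sum(len(s) for s in adj.values()) // 2
src = max(adj, key=lambda u: len(adj[u]))   # start from a well-connected node
tree = bfs_tree_edges(adj, src)
# The tree contains at most n-1 of the roughly c*n/2 edges, and
# high-degree nodes are over-represented among internal tree nodes,
# which is the source of the bias the abstract describes.
```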

Abstract: In future transparent optical networks, it is
important to consider the impact of physical impairments in the
routing and wavelength assignment process, in order to achieve efficient
connection provisioning. In this paper, we use classical multi-objective
optimization (MOO) strategies, and particularly genetic
algorithms, to jointly solve the impairment-aware RWA (IA-RWA)
problem. Fiber impairments are indirectly considered
through the insertion of the path length and the number of
common hops in the optimization process. It is shown that
blocking is greatly improved, while the obtained solutions truly
converge towards the Pareto front that constitutes the set of
globally optimal solutions. We have evaluated our findings using
a Q-factor estimator tool that calculates the signal quality of each path
analytically.
Index Terms: RWA, Genetic Algorithm, All-Optical
Networks, Multi-Objective Optimization.

Abstract: We study the on-line versions of two fundamental graph problems, maximum independent set and minimum coloring, for the case of disk graphs which are graphs resulting from intersections of disks on the plane. In particular, we investigate whether randomization can be used to break known lower bounds for deterministic on-line independent set algorithms and present new upper and lower bounds; we also present an improved upper bound for on-line coloring.

Abstract: We propose and evaluate an impairment-aware multi-parametric routing and wavelength assignment algorithm for online traffic in transparent optical networks. In such networks the signal quality of transmission degrades due to physical layer impairments. In the multiparametric approach, a vector of cost parameters is assigned to each link, from which the cost vectors of candidate lightpaths are calculated. In the proposed scheme the cost vector includes impairment generating source parameters, such as the path length, the number of hops, the number of crosstalk sources and other inter-lightpath interfering parameters, so as to indirectly account for the physical layer effects. For a requested connection the algorithm calculates a set of candidate lightpaths, whose quality of transmission is validated using a function that combines the impairment generating parameters. For selecting the lightpath we propose and evaluate various optimization functions that correspond to different IA-RWA algorithms. Our performance results indicate that the proposed algorithms utilize efficiently the available resources and minimize the total accumulated signal degradation on the selected lightpaths, while having low execution times.

Abstract: In this paper we propose an energy-efficient broadcast algorithm for wireless networks for the case where the transmission powers of the nodes are fixed. Our algorithm is based on the multicost approach and selects an optimal energy-efficient set of nodes for broadcasting, taking into account: i) the node residual energies, ii) the transmission powers used by the nodes, and iii) the set of nodes that are covered by a specific schedule. Our algorithm is optimal, in the sense that it can optimize any desired function of the total power consumed by the broadcasting task and the minimum of the current residual energies of the nodes, provided that the optimization function is monotonic in each of these parameters. Our algorithm has non-polynomial complexity, thus, we propose a relaxation producing a near-optimal solution in polynomial time. Using simulations we show that the proposed algorithms outperform other established solutions for energy-aware broadcasting with respect to both energy consumption and network lifetime. Moreover, it is shown that the near-optimal multicost algorithm obtains most of the performance benefits of the optimal multicost algorithm at a smaller computational overhead.

Abstract: This paper studies the data gathering problem in wireless networks, where data generated at the nodes has to be collected at a single sink. We investigate the relationship between routing optimality and fair resource management. In particular, we prove that for energy balanced data propagation, Pareto optimal routing and flow maximization are equivalent, and also prove that flow maximization is equivalent to maximizing the network lifetime. We algebraically characterize the network structures in which energy balanced data flows are maximal. Moreover, we algebraically characterize communication links which are not used by an optimal flow. This leads to the characterization of minimal network structures supporting the maximal flows.
We note that energy balance, although implying global optimality, is a local property that can be computed efficiently and in a distributed manner. We suggest online distributed algorithms for energy balance in different optimal network structures and numerically show their stability in particular settings. We remark that although the results obtained in this paper have a direct consequence in energy saving for wireless networks, they are not limited to this type of network, nor to energy as a resource. In fact, the results are much more general and can be used for any type of network and different types of resources.

Abstract: This paper studies the data gathering problem in wireless networks, where data generated at the nodes has to be collected at a single sink. We investigate the relationship between routing optimality and fair resource management. In particular, we prove that for energy-balanced data propagation, Pareto optimal routing and flow maximization are equivalent, and also prove that flow maximization is equivalent to maximizing the network lifetime. We algebraically characterize the network structures in which energy-balanced data flows are maximal. Moreover, we algebraically characterize communication links which are not used by an optimal flow. This leads to the characterization of minimal network structures supporting the maximal flows.
We note that energy-balance, although implying global optimality, is a local property that can be computed efficiently and in a distributed manner. We suggest online distributed algorithms for energy-balance in different optimal network structures and numerically show their stability in particular settings. We remark that although the results obtained in this paper have a direct consequence in energy saving for wireless networks, they are not limited to this type of network, nor to energy as a resource. In fact, the results are much more general and can be used for any type of network and different types of resources.

Abstract: Two simple and work-efficient parallel algorithms for the minimum spanning tree problem are presented. Both algorithms perform O(m log n) work. The first algorithm runs in O(log^2 n) time on an EREW PRAM, while the second runs in O(log n) time on a Common CRCW PRAM.
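A standard building block behind parallel MST algorithms of this kind is the Borůvka step: every component selects its lightest incident edge, and all selections within a round can be processed in parallel. The sketch below simulates those O(log n) rounds sequentially; it is an illustration of the technique, not the paper's PRAM algorithms, and assumes distinct edge weights.

```python
def boruvka_mst(n, edges):
    """Borůvka-style MST on nodes 0..n-1.  Each round, every component
    picks its lightest incident edge and components merge; O(log n)
    rounds suffice.  `edges` is a list of (weight, u, v) triples with
    distinct weights.  The per-round work is what parallel MST
    algorithms distribute across processors; here it runs sequentially.
    """
    parent = list(range(n))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, components = [], n
    while components > 1:
        cheapest = {}                 # component root -> lightest incident edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, u, v)
        if not cheapest:
            break                     # graph is disconnected
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((w, u, v))
                components -= 1
    return mst
```

On a 4-cycle with a chord, weights 1..5, the algorithm returns the three lightest edges that connect all nodes (total weight 6).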

Abstract: We consider path protection in the routing and
wavelength assignment (RWA) problem for impairment
constrained WDM optical networks. The proposed multicost
RWA algorithms select the primary and the backup lightpaths by
accounting for physical layer impairments. The backup lightpath
may either be activated (1+1 protection) or be reserved but not
activated, with activation taking place when/if needed (1:1
protection). In the case of 1:1 protection, the period of time over which
the backup's quality of transmission (QoT) remains valid, despite the
possible establishment of future connections, should be preserved, so
that the backup can be used if the primary lightpath fails. We show that, by using
the multicost approach for solving the RWA with protection
problem, great benefits can be achieved both in terms of the
connection blocking rate and in terms of the validity period of the
backup lightpath. Moreover the multicost approach, by providing
a set of candidate lightpaths for each source-destination pair,
instead of a single one, offers ease and flexibility in selecting the
primary and the backup lightpaths.

Abstract: In this paper we present an implementation and performance evaluation of a descent algorithm that was proposed in \cite{tsaspi} for the computation of approximate Nash equilibria of non-cooperative bi-matrix games. This algorithm, which achieves the best polynomially computable \epsilon-approximate equilibria to date, is applied here to several problem instances designed so as to avoid the existence of easy solutions. Its performance is analyzed in terms of the quality of approximation and the speed of convergence. The results demonstrate significantly better performance than the theoretical worst-case bounds, both for the quality of approximation and for the speed of convergence. This motivates further investigation into the intrinsic characteristics of descent algorithms applied to bi-matrix games. We discuss these issues and provide some insights about possible variations and extensions of the algorithmic concept that could lead to further understanding of the complexity of computing equilibria. We also prove a new, significantly better bound on the number of loops required for convergence of the descent algorithm.

Abstract: We consider the problem of planning a mixed line rates (MLR) WDM transport optical network. In an MLR network, the interference between connections of different modulation formats/line rates affects the transmission reach of these connections. We present algorithms for planning an MLR network that take into account the variation of the transmission reach according to the use of the modulation formats/line rates in the network.

Abstract: We consider in this paper the problem of scheduling a set of independent parallel tasks (jobs) with respect to two criteria, namely, the makespan (completion time of the last finishing job) and the minsum (average completion time). There exist several algorithms with a good performance guarantee for one of these criteria. We are interested here in optimizing both criteria simultaneously. The numerical values are given for the moldable task model, where the execution time of a task depends on the number of processors allotted to it. The main result of this paper is to derive explicitly a family of algorithms with guarantees for both the minsum and the makespan. The performance guarantees of these algorithms are better than those of the best algorithms known so far. The guarantee curve of the family is the set of all points (x, y) such that there is an algorithm with guarantee x on the makespan and y on the minsum. When the ratio on the minsum increases, the curve tends to the best ratio known for the makespan for moldable tasks (3/2). One extremal point of the curve is a (3, 6)-approximation algorithm. Finally, a randomized version is given, which improves these results to (3, 4.08).

Abstract: Wireless sensor network research usually focuses on the reliable and efficient collection of data. Here, we address the next step in the traces' lifetime: we aim at investigating and evaluating, by qualitative and quantitative means, data repositories of already collected measurements. We propose the use of a set of new metrics, which enable reliable evaluation of algorithms using traces (both in average cases and "stressful" setups), removing the need for running algorithms in a real testbed, at least in the development stage.

Abstract: Wireless sensor network research usually focuses on the reliable and efficient collection of data. In this paper we focus on the next step in the lifetime of traces: we aim at investigating and evaluating, by qualitative and quantitative means, data repositories of already collected measurements. Concerning the collected datasets, several important topics arise, such as the need to exchange traces between researchers using a common representation, and the need for a common classification of the traces, based on a commonly agreed set of statistical characteristics, for utilization in retrospect. In order to address these issues qualitatively, we propose the use of a novel set of metrics focusing on the in-network data aggregation problem class. These metrics enable reliable evaluation of algorithms using the same benchmark traces (both in average cases and "stressful" setups), removing the need for running algorithms in a real testbed, at least in the initial development stage. We present the results of our research as a first approach for addressing this problem and, in order to confirm our method, we characterized several traces with the proposed metrics. We validate the metrics by predicting the performance of three data-aggregation schemes using the available traces and checking the results by actually running the algorithms.

Abstract: The content-based publish/subscribe (pub/sub) paradigm for system design is becoming increasingly popular, offering unique benefits for a large number of data-intensive applications. Coupled with peer-to-peer technology, it can serve as a central building block for such applications
deployed over a large-scale network infrastructure. A key problem toward the creation of large-scale content-based pub/sub infrastructures relates to dealing efficiently with continuous queries (subscriptions) with rich predicates on string attributes; in particular, efficiently and accurately
matching substring queries to incoming events is an open problem. In this work we study this problem. We provide and analyze novel algorithms for processing subscriptions with substring predicates and events in a variety of environments. We provide experimental data demonstrating the
relative performance behavior of the proposed algorithms using as key metrics the network bandwidth requirements, number of messages, load balancing, as well as the requirements for extra routing state (and related maintenance) and design flexibility.
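A common centralized baseline for matching substring subscriptions, which may help ground the problem above, is a q-gram index: a subscription is a candidate for an event only if all of its q-grams occur in the event, and candidates are then verified exactly. This is an illustrative sketch only (the class and names are ours; the paper's distributed algorithms are more elaborate):

```python
def grams(s, q=3):
    # the set of length-q substrings of s (empty for short strings)
    return {s[i:i + q] for i in range(len(s) - q + 1)}

class SubstringIndex:
    """Toy q-gram index for substring subscriptions."""
    def __init__(self, q=3):
        self.q = q
        self.subs = {}    # subscription id -> substring pattern

    def subscribe(self, sid, pattern):
        self.subs[sid] = pattern

    def match(self, event):
        egrams = grams(event, self.q)
        # prune: every q-gram of the pattern must occur in the event
        cands = {sid for sid, pat in self.subs.items()
                 if grams(pat, self.q) <= egrams}
        # verify candidates exactly
        return {sid for sid in cands if self.subs[sid] in event}
```

In a pub/sub overlay, the pruning step is what would be distributed (e.g., by hashing q-grams to nodes); the exact verification happens only on the surviving candidates.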

Abstract: Grids offer a transparent interface to geographically scattered computation, communication, storage and
other resources. In this chapter we propose and evaluate QoS-aware and fair scheduling algorithms for
Grid Networks, which are capable of optimally or near-optimally assigning tasks to resources, while taking
into consideration the task characteristics and QoS requirements. We categorize Grid tasks according to
whether or not they demand hard performance guarantees. Tasks with one or more hard requirements are
referred to as Guaranteed Service (GS) tasks, while tasks with no hard requirements are referred to as Best
Effort (BE) tasks. For GS tasks, we propose scheduling algorithms that provide deadline or computational
power guarantees, or offer fair degradation in the QoS such tasks receive in case of congestion. Regarding
BE tasks, our objective is to allocate resources in a fair way, where fairness is interpreted in the max-min fair
share sense. Although we mainly address scheduling problems on computation resources, we also look at
the joint scheduling of communication and computation resources and propose routing and scheduling
algorithms aiming at co-allocating both resource types so as to satisfy their respective QoS requirements.
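The max-min fair share notion invoked for BE tasks has a standard single-resource computation, progressive filling: repeatedly split the leftover capacity equally among unsatisfied demands, fixing any demand that falls below its equal share. A minimal sketch (function name and setup are ours, not from the chapter):

```python
def max_min_fair(capacity, demands):
    """Progressive-filling computation of the max-min fair allocation
    of a single resource of size `capacity` among the given demands."""
    alloc = [0.0] * len(demands)
    remaining = list(range(len(demands)))   # indices of unsatisfied demands
    cap = float(capacity)
    while remaining and cap > 1e-12:
        share = cap / len(remaining)        # equal split of leftover capacity
        satisfied = [i for i in remaining if demands[i] <= share]
        if not satisfied:
            # nobody can be fully satisfied: everyone gets the equal share
            for i in remaining:
                alloc[i] += share
            break
        for i in satisfied:                 # small demands are met exactly
            alloc[i] = demands[i]
            cap -= demands[i]
        remaining = [i for i in remaining if i not in satisfied]
    return alloc
```

For example, capacity 10 with demands [2, 8, 8] yields [2, 4, 4]: the small demand is met, and the rest share what remains equally.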

Abstract: We study the on-line version of the maximum independent set problem, for the case of disk graphs which are graphs resulting
from intersections of disks on the plane. In particular, we investigate whether randomization can be used to break known lower
bounds for deterministic on-line independent set algorithms and present new upper and lower bounds.

Abstract: In this work we address the issue of efficient processing of range queries in DHT-based P2P data networks. The novelty of the proposed approach lies in the architectures, algorithms, and mechanisms for identifying and appropriately exploiting powerful nodes in such networks. The existence of such nodes has been well documented in the literature and plays a key role in the architecture of most successful real-world P2P applications. However, until now, this heterogeneity has not been taken into account when architecting solutions for complex query processing, especially in DHT networks. With this work we attempt to fill this gap for optimizing the processing of range queries. Significant performance improvements are achieved due to (i) ensuring a much smaller hop count performance for range queries, and (ii) avoiding the dangers and inefficiencies of relying for range query processing on weak nodes, with respect to processing, storage, and communication capacities, and with intermittent connectivity. We present detailed experimental results validating our performance claims.

Abstract: We consider the problem of planning a mixed line rate
(MLR) wavelength division multiplexing (WDM) transport
optical network. In such networks, different modulation formats
are usually employed to support transmission at different line
rates. Previously proposed planning algorithms have used a
transmission reach bound for each modulation format/line rate,
mainly driven by single line rate systems. However, transmission
experiments in MLR networks have shown that physical layer
interference phenomena are more severe among transmissions
that utilize different modulation formats. Thus, the transmission
reach of a connection with a specific modulation format/line rate
depends also on the other connections that co-propagate with it
in the network. To plan an MLR WDM network, we present
routing and wavelength assignment (RWA) algorithms that
adapt the transmission reach of each connection according to the
use of the modulation formats/line rates in the network. The
proposed algorithms are able to plan the network so as to
alleviate cross-rate interference effects, enabling the
establishment of connections of acceptable quality over paths that
would otherwise be prohibited.

Abstract: A key problem in networks that support advance reservations is the routing and time scheduling of connections with flexible starting time and known data transfer size. In this paper we present a multicost routing and scheduling algorithm for selecting the path to be followed by such a connection and the time the data should start and end transmission at each link so as to minimize the reception time at the destination, or optimize some other performance criterion. The utilization profiles of the network links, the link propagation delays, and the parameters of the connection to be scheduled form the inputs to the algorithm. We initially present a scheme of non-polynomial complexity to compute a set of so called non-dominated candidate paths, from which the optimal path can be found. We then propose two mechanisms to appropriately prune the set of candidate paths in order to find multicost routing and scheduling algorithms of polynomial complexity. We examine the performance of the algorithms in the special case of an Optical Burst Switched network. Our results indicate that the proposed polynomial-time algorithms have performance that is very close to that of the optimal algorithm. We also study the effects network propagation delays and link-state update policies have on performance.
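The non-dominated path set at the core of the multicost approach rests on standard Pareto dominance between cost vectors: a path is kept only if no other path is at least as good in every cost component and strictly better in one. A minimal sketch of that pruning step (data layout is ours; lower is better in every component):

```python
def dominates(a, b):
    """Cost vector a dominates b if a is no worse in every component
    and strictly better in at least one (lower is better)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(paths):
    """Keep only the Pareto-optimal (non-dominated) cost vectors."""
    return [p for p in paths
            if not any(dominates(q, p) for q in paths if q is not p)]
```

For instance, among the vectors (3, 5), (2, 7), (4, 4), (3, 6), only (3, 6) is pruned, since (3, 5) is at least as good in both components. The polynomial-time variants in the paper prune more aggressively than this exact rule.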

Abstract: We propose QoS-aware scheduling algorithms for Grid Networks that are capable of optimally or near-optimally
assigning computation and communication tasks to grid resources. The routing and scheduling algorithms to be
presented take as input the resource utilization profiles and the task characteristics and QoS requirements, and
co-allocate resources while accounting for the dependencies between communication and computation tasks.
Keywords: communication and computation utilization profiles, multicost routing and scheduling, grid
computing.

Abstract: A key problem in networks that support advance reservations is the routing and time scheduling of
connections with flexible starting time and known data transfer size. In this paper we present a multicost
routing and scheduling algorithm for selecting the path to be followed by such a connection and the time the
data should start and end transmission at each link so as to minimize the reception time at the destination,
or optimize some other performance criterion. The utilization profiles of the network links, the link
propagation delays, and the parameters of the connection to be scheduled form the inputs to the algorithm.
We initially present a scheme of non-polynomial complexity to compute a set of so-called non-dominated
candidate paths, from which the optimal path can be found. We then propose two mechanisms to
appropriately prune the set of candidate paths in order to find multicost routing and scheduling algorithms of
polynomial complexity. We examine the performance of the algorithms in the special case of an Optical
Burst Switched network. Our results indicate that the proposed polynomial time algorithms have performance that is very close to that of the optimal algorithm. We also study the effects network
propagation delays and link-state update policies have on performance.

Abstract: A key problem in networks that support advance
reservations is the routing and time scheduling of connections
with flexible starting time. In this paper we present a multicost
routing and scheduling algorithm for selecting the path to be
followed by such a connection and the time the data should start
so as to minimize the reception time at the destination, or some
other QoS requirement. The utilization profiles of the network
links, the link propagation delays, and the parameters of the
connection to be scheduled form the inputs to the algorithm. We
initially present a scheme of non-polynomial complexity to
compute a set of so-called non-dominated candidate paths, from
which the optimal path can be found. By appropriately pruning
the set of candidate paths using path pseudo-domination
relationships, we also find multicost routing and scheduling
algorithms of polynomial complexity. We examine the
performance of the algorithms in the special case of an Optical
Burst Switched network. Our results indicate that the proposed
polynomial-time algorithms have performance that is very close
to that of the optimal algorithm.

Abstract: Orthogonal Frequency Division Multiplexing (OFDM)
has been recently proposed as a modulation technique for optical
networks, due to its good spectral efficiency and impairment
tolerance. Optical OFDM is much more flexible compared to
traditional WDM systems, enabling elastic bandwidth
transmissions. We consider the planning problem of an OFDM-based optical network, where we are given a traffic matrix that
includes the requested transmission rates of the connections to be
served. Connections are provisioned for their requested rate by
elastically allocating spectrum using a variable number of OFDM
subcarriers. We introduce the Routing and Spectrum Allocation
(RSA) problem, as opposed to the typical Routing and
Wavelength Assignment (RWA) problem of traditional WDM
networks, and present various algorithms to solve the RSA. We
start by presenting an optimal ILP RSA algorithm that minimizes
the spectrum used to serve the traffic matrix, and also present a
decomposition method that breaks RSA into two constituent
subproblems, namely, (i) routing and (ii) spectrum allocation
(R+SA) and solves them sequentially. We also propose a heuristic
algorithm that serves connections one-by-one and use it to solve
the planning problem by sequentially serving all traffic matrix
connections. To feed the sequential algorithm, two ordering
policies are proposed; a simulated annealing meta-heuristic is also
proposed to obtain even better orderings. Our results indicate
that the proposed sequential heuristic with appropriate ordering
yields close to optimal solutions in low running times.

Abstract: We study a problem of scheduling client requests to servers. Each client has a particular latency requirement at each server and may choose either to be assigned to some server in order to get serviced provided that her latency requirement is met, or not to participate in the assignment at all. From a global perspective, in order to optimize the performance of such a system, one would aim to maximize the number of clients that participate in the assignment. However, clients may behave selfishly in the sense that, each of them simply aims to participate in an assignment and get serviced by some server where her latency requirement is met with no regard to overall system performance. We model this selfish behavior as a strategic game, show how to compute pure Nash equilibria efficiently, and assess the impact of selfishness on system performance. We also show that the problem of optimizing performance is computationally hard to solve, even in a coordinated way, and present efficient approximation and online algorithms.

Abstract: We study a problem of scheduling client requests to servers.
Each client has a particular latency requirement at each server and may
choose either to be assigned to some server in order to get serviced provided
that her latency requirement is met or not to participate in the
assignment at all. From a global perspective, in order to optimize the
performance of such a system, one would aim to maximize the number
of clients that participate in the assignment. However, clients may behave
selfishly in the sense that each of them simply aims to participate
in an assignment and get serviced by some server where her latency requirement
is met with no regard to the overall system performance. We
model this selfish behavior as a strategic game, show how to compute
equilibria efficiently, and assess the impact of selfishness on system performance.
We also show that the problem of optimizing performance is
computationally hard to solve, even in a coordinated way, and present
efficient approximation and online algorithms.

Abstract: Embedded computing devices dominate our everyday activities, from cell phones to wireless sensors that collect and process data for various applications. Although desktop and high-end server security seems to be under control by the use of current security technology, securing the low-end embedded computing systems is a difficult long-term problem. This is mainly due to the fact that the embedded systems are constrained by their operational environment and the limited resources they are equipped with. Recent research activities focus on the deployment of lightweight cryptographic algorithms and security protocols that are well suited to the limited resources of low-end embedded systems. Elliptic Curve Cryptography (ECC) offers an interesting alternative to the classical public key cryptosystems (e.g., RSA and ElGamal) for embedded systems, since it uses smaller key sizes for achieving the same security level, thus making ECC an attractive and efficient choice for deployment in embedded systems. In this chapter, the processing requirements and architectures for secure network access, communication functions, storage, and high availability of embedded devices are discussed. In addition, ECC-based state-of-the-art lightweight cryptographic primitives for the deployment of security protocols in embedded systems that fulfill the requirements are presented.

Abstract: In this work we tackle the open problem of self-join size (SJS) estimation in a large-scale distributed data system, where tuples of a relation are distributed over data nodes which comprise an overlay network. Our contributions include adaptations of five well-known SJS estimation centralized techniques (coined sequential, cross-sampling, adaptive, bifocal, and sample-count) to the network environment and a novel technique which is based on the use of the Gini coefficient. We develop analyses showing how Gini estimations can lead to estimations of the underlying Zipfian or power-law value distributions. We further contribute distributed sampling algorithms that can estimate accurately and efficiently the Gini coefficient. Finally, we provide detailed experimental evidence testifying for the claimed increased accuracy, precision, and efficiency of the proposed SJS estimation method, compared to the other methods. The proposed approach is the only one to ensure high efficiency, precision, and accuracy regardless of the skew of the underlying data.
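Two quantities underlie the approach above: the exact self-join size of a relation, which is the sum of squared value frequencies, and the Gini coefficient of the frequency distribution, which measures its skew. A minimal local sketch of both (the estimators in the paper work on samples over the overlay; names here are ours):

```python
def self_join_size(freqs):
    """Exact self-join size: sum of squared value frequencies."""
    return sum(f * f for f in freqs)

def gini(freqs):
    """Gini coefficient of a frequency distribution:
    0 for a uniform distribution, approaching 1 under heavy skew."""
    xs = sorted(freqs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * sum(xs)) - (n + 1.0) / n
```

A uniform distribution such as [1, 1, 1, 1] has Gini 0, while the skewed [1, 9] has Gini 0.4; it is this link between skew and the frequency moments that the paper's analysis exploits to turn Gini estimates into self-join size estimates.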

Abstract: We consider the problem of preprocessing an n-vertex digraph with real edge weights so that
subsequent queries for the shortest path or distance between any two vertices can be efficiently answered. We
give algorithms that depend on the treewidth of the input graph. When the treewidth is a constant, our algorithms
can answer distance queries in O(α(n)) time after O(n) preprocessing, where α(n) is the inverse of Ackermann's
function. This improves upon previously known results for the same problem. We also give a dynamic algorithm
which, after a change in an edge weight, updates the data structure in O(n^β) time, for any constant 0 < β < 1.
Furthermore, an algorithm of independent interest is given: computing a shortest path tree, or finding a negative
cycle, in linear time.

Abstract: We consider the problem of preprocessing an n-vertex digraph with real edge weights so that subsequent queries for the shortest path or distance between any two vertices can be efficiently answered. We give parallel algorithms for the EREW PRAM model of computation that depend on the treewidth of the input graph. When the treewidth is a constant, our algorithms can answer distance queries in O(α(n)) time using a single processor, after a preprocessing of O(log² n) time and O(n) work, where α(n) is the inverse of Ackermann's function. The class of constant treewidth graphs contains outerplanar graphs and series-parallel graphs, among others. To the best of our knowledge, these are the first parallel algorithms which achieve these bounds for any class of graphs except trees. We also give a dynamic algorithm which, after a change in an edge weight, updates our data structures in O(log n) time using O(n^β) work, for any constant 0 < β < 1. Moreover, we give an algorithm of independent interest: computing a shortest path tree, or finding a negative cycle, in O(log² n) time using O(n) work.

Abstract: Geographic routing is becoming the protocol of choice for
many sensor network applications. The current state of the art is
unsatisfactory: some algorithms are very efficient; however, they
require a preliminary planarization of the communication graph.
Planarization induces overhead and is thus not realistic for some
scenarios, such as the case of highly dynamic network topologies. On
the other hand, georouting algorithms which do not rely on planarization
have fairly low success rates and fail to route messages around all but
the simplest obstacles.
To overcome these limitations, we propose the GRIC geographic routing
algorithm. It has absolutely no topology maintenance overhead, almost
100% delivery rates (when no obstacles are added), bypasses large convex
obstacles, finds short paths to the destination, resists link failure
and is fairly simple to implement. The case of hard concave obstacles
is also studied; such obstacles are hard instances for which performance
diminishes.

Abstract: In this work, we study protocols (i.e. distributed algorithms) so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol (i.e. the system is homogeneous). Moreover, we assume pairwise interactions between the processes that are scheduled by an adversary. The only constraint on the adversary scheduler is that it must be fair, intuitively meaning that it must assign to every reachable configuration of the system a non-zero probability to occur. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. In particular, in every interaction, the protocol may activate an inactive connection, deactivate an active one, or leave the state of a connection unchanged. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network (i.e. one that does not change any more). We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. We provide proofs of correctness for all of our protocols and analyze the expected time to convergence of most of them under a uniform random scheduler that selects the next pair of interacting processes uniformly at random from all such pairs. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. 
Our universality protocols use a subset of the population (the waste) in order to construct there, in a distributed manner, a TM able to decide a graph class in some given space. Then the protocols repeatedly construct, in the rest of the population (the useful space), a graph drawn equiprobably from all possible graphs. The TM works on this graph and accepts if it is in the class. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such k. This amount of local memory is sufficient for the supernodes to obtain unique names and to exploit their names and memory to realize nontrivial constructions. Delicate composition and reinitialization issues have to be solved for these general constructions to work.
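The interaction model can be made concrete with a small simulation of a spanning-star construction: every process starts as a center; when two centers meet, one is demoted and their connection is activated; center-peripheral connections are activated, and peripheral-peripheral ones deactivated. The rule set below is our illustrative reconstruction in the spirit of the paper, not its exact protocol:

```python
import random

def spanning_star(n, steps=20000, seed=1):
    """Simulate pairwise interactions under a uniform random scheduler
    with a toy spanning-star rule set (states: 'c' center, 'p' peripheral)."""
    state = ['c'] * n                    # all processes start identically
    edge = {}                            # frozenset{u, v} -> connection active?
    rng = random.Random(seed)
    for _ in range(steps):
        u, v = rng.sample(range(n), 2)   # scheduler picks a random pair
        key = frozenset((u, v))
        active = edge.get(key, False)
        if state[u] == 'c' and state[v] == 'c':
            state[v] = 'p'               # one center survives; connect them
            edge[key] = True
        elif {state[u], state[v]} == {'c', 'p'} and not active:
            edge[key] = True             # attach peripheral to the center
        elif state[u] == 'p' and state[v] == 'p' and active:
            edge[key] = False            # drop peripheral-peripheral edges
    return state, edge
```

With enough interactions the configuration stabilizes: a single center remains, connected to every peripheral, with no other active connections, i.e. a spanning star that no further interaction can change.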

Abstract: We address an important communication issue in wireless cellular networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region (cell) can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of the available frequencies is limited; thus, efficient solutions to the call control problem are essential. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users that communicate without signal interference. We consider cellular networks of reuse distance k ≥ 2 and we study the on-line version of the problem using competitive analysis.
In cellular networks of reuse distance 2, the previously best known algorithm that beats the lower bound of 3 on the competitiveness of deterministic algorithms works on networks with one frequency, achieves a competitive ratio against oblivious adversaries which is between 2.469 and 2.651, and uses a number of random bits at least proportional to the size of the network. We significantly improve this result by presenting a series of simple randomized algorithms that have competitive ratios smaller than 3, work on networks with arbitrarily many frequencies, and use only a constant number of random bits or a comparable weak random source. The best competitiveness upper bound we obtain is 7/3.
In cellular networks of reuse distance k > 2, we present simple randomized on-line call control algorithms with competitive ratios which significantly beat the lower bounds on the competitiveness of deterministic ones and use only a small number of random bits. Furthermore, we show a new lower bound on the competitiveness of on-line call control algorithms in cellular networks of reuse distance k ≥ 5.

Abstract: In 1876 Charles Lutwidge Dodgson suggested the intriguing voting rule that today bears his name. Although Dodgson's rule is one of the most well-studied voting rules, it suffers from serious deficiencies, both from the computational point of view (it is NP-hard even to approximate the Dodgson score within sublogarithmic factors) and from the social choice point of view (it fails basic social choice desiderata such as monotonicity and homogeneity).
In a previous paper [Caragiannis et al., SODA 2009] we have asked whether there are approximation algorithms for Dodgson's rule that are monotonic or homogeneous. In this paper we give definitive answers to these questions. We design a monotonic exponential-time algorithm that yields a 2-approximation to the Dodgson score, while matching this result with a tight lower bound. We also present a monotonic polynomial-time O(log m)-approximation algorithm (where m is the number of alternatives); this result is tight as well due to a complexity-theoretic lower bound. Furthermore, we show that a slight variation of a known voting rule yields a monotonic, homogeneous, polynomial-time O(m log m)-approximation algorithm, and establish that it is impossible to achieve a better approximation ratio even if one just asks for homogeneity. We complete the picture by studying several additional social choice properties; for these properties, we prove that algorithms with an approximation ratio that depends only on m do not exist.

Abstract: In 1876 Charles Lutwidge Dodgson suggested the intriguing
voting rule that today bears his name. Although Dodgson's rule is one
of the most well-studied voting rules, it suffers from serious
deficiencies, both from the computational point of view (it is NP-hard
even to approximate the Dodgson score within sublogarithmic factors)
and from the social choice point of view (it fails basic social choice
desiderata such as monotonicity and homogeneity).
In a previous paper [Caragiannis et al., SODA 2009] we have asked
whether there are approximation algorithms for Dodgson's rule that are
monotonic or homogeneous. In this paper we give definitive answers to
these questions. We design a monotonic exponential-time algorithm that
yields a 2-approximation to the Dodgson score, while matching this
result with a tight lower bound. We also present a monotonic
polynomial-time O(log m)-approximation algorithm (where m is the number
of alternatives); this result is tight as well due to a
complexity-theoretic lower bound. Furthermore, we show that a slight
variation of a known voting rule yields a monotonic, homogeneous,
polynomial-time O(m log m)-approximation algorithm, and establish that
it is impossible to achieve a better approximation ratio even if one
just asks for homogeneity. We complete the picture by studying several
additional social choice properties; for these properties, we prove
that algorithms with an approximation ratio that depends only on m do
not exist.

Abstract: Efficient task scheduling is fundamental for the success of Grids,
since it directly affects the Quality of Service (QoS) offered to the users.
Efficient scheduling policies should be evaluated based not only on
performance metrics that are of interest to the infrastructure side, such as
the Grid resource utilization efficiency, but also on user satisfaction
metrics, such as the percentage of tasks served by the Grid without violating
their QoS requirements. In this paper, we propose a scheduling algorithm for
tasks with strict timing requirements, given in the form of a desired start
and finish time. Our algorithm aims at minimizing the violations of the time
constraints, while at the same time minimizing the number of processors used.
The proposed scheduling method exploits concepts derived from spectral
clustering and groups together the tasks to be assigned to a computing
resource so as to a) minimize the time overlapping of the tasks assigned to a
given processor and b) maximize the degree of time overlapping among tasks
assigned to different processors. Experimental results show that our proposed
strategy outperforms greedy scheduling algorithms for different values of the
submitted task load.
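The time-overlap notion that drives the clustering can be made concrete: the pairwise affinity between two tasks is the length of the intersection of their requested time intervals. An illustrative sketch (interval representation and names are ours):

```python
def overlap(a, b):
    """Length of the time overlap of two tasks given as
    (start, finish) intervals; 0 if they are disjoint."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def overlap_matrix(tasks):
    """Pairwise overlap (affinity) matrix fed to the clustering step,
    with zeros on the diagonal."""
    n = len(tasks)
    return [[overlap(tasks[i], tasks[j]) if i != j else 0
             for j in range(n)] for i in range(n)]
```

Spectral clustering on this matrix then tends to place heavily overlapping tasks in different clusters (processors), which is precisely objectives a) and b) of the abstract.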

Abstract: In wireless communication, the signal of a typical broadcast station is transmitted from a broadcast center p and reaches objects at a distance, say r, from it. In addition, there is a radius r0, r0 < r, such that the signal originating from the center p should be avoided; in other words, points within distance r0 from the station comprise a hazardous zone. We consider the following station layout problem: cover a given planar region that includes a collection of buildings with a minimum number of stations, so that every point in the region is within the reach of a station, while at the same time no interior point of any building is within the hazardous zone of a station. We give algorithms for computing such station layouts in both the one- and two-dimensional cases.

Abstract: Efficient query processing in traditional database management systems relies on statistics on base
data. For centralized systems, there is a rich body of research results on such statistics, from simple
aggregates to more elaborate synopses such as sketches and histograms. For Internet-scale distributed
systems, on the other hand, statistics management still poses major challenges. With the work in this paper
we aim to endow peer-to-peer data management over structured overlays with the power associated with such
statistical information, with emphasis on meeting the scalability challenge. To this end, we first contribute
efficient, accurate, and decentralized algorithms that can compute key aggregates such as Count,
CountDistinct, Sum, and Average. We show how to construct several types of histograms, such as simple
Equi-Width, Average Shifted Equi-Width, and Equi-Depth histograms. We present a full-fledged open-source
implementation of these tools for distributed statistical synopses, and report on a comprehensive
experimental performance evaluation, evaluating our contributions in terms of efficiency, accuracy, and
scalability.
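Of the synopses listed, the simplest is the equi-width histogram: fixed-width buckets over a known value range. A local, non-distributed sketch for reference (the paper's contribution is constructing such synopses in a decentralized way over the overlay):

```python
def equi_width_histogram(values, lo, hi, buckets):
    """Equi-width histogram over [lo, hi): `buckets` buckets of equal
    width; values at or beyond hi fall into the last bucket."""
    width = (hi - lo) / buckets
    counts = [0] * buckets
    for v in values:
        i = min(int((v - lo) / width), buckets - 1)
        counts[i] += 1
    return counts
```

In the distributed setting each peer would compute such bucket counts locally, with the per-bucket Sum/Count aggregation performed by the decentralized aggregate algorithms of the paper.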

Abstract: We consider the important problem of energy balanced data propagation in wireless sensor networks and we extend and generalize
previous works by allowing adaptive energy assignment. We consider the data gathering problem where data are generated by the sensors and
must be routed toward a unique sink. Sensors route data by either sending the data directly to the sink or in a multi-hop fashion by delivering
the data to a neighbouring sensor. Direct and neighbouring transmissions require different levels of energy consumption. Basically, the protocols balance the energy consumption among the sensors by computing the adequate ratios of direct and neighbouring transmissions. An abstract model of energy dissipation as a random walk is proposed, along with rigorous performance analysis techniques. Two efficient distributed algorithms are presented and analysed, by both rigorous means and simulation.
The first one is easy to implement and fast to execute. The protocol assumes that sensors know a priori the rate of data they generate.
The sink collects and processes all this information in order to compute the relevant value of the protocol parameter. This value is transmitted
to the sensors, which individually compute their optimal ratios of direct and neighbouring transmissions. The second protocol avoids the necessary a priori knowledge of the data rate generated by sensors by inferring the relevant information from the observation of the data paths.
Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes.

Abstract: In emerging pervasive scenarios, data is collected by sensing devices in streams that occur at several distributed points of observation. The size of the data typically far exceeds the storage and computational capabilities of the tiny devices that have to collect and process them. A general and challenging task is to allow (some of) the nodes of a pervasive network to collectively perform monitoring of a neighbourhood of interest by issuing continuous aggregate queries on the streams observed in its vicinity. This class of algorithms is fully decentralized and diffusive in nature: collecting all the data at a few central nodes of the network is unfeasible in networks of low capability devices or in the presence of massive data sets. Two main problems arise in this scenario: (i) the intrinsic complexity of maintaining statistics over a data stream whose size greatly exceeds the capabilities of the device that performs the computation; (ii) composing the partial outcomes computed at different points of observation into an accurate, global statistic over a neighbourhood of interest, which entails coping with several problems, last but not least the receipt of duplicate information along multiple paths of diffusion.
Streaming techniques have emerged as powerful tools to achieve the general goals described above, primarily because they assume a computational model in which the amount of data far exceeds the available computational and storage resources. In this contribution, we review the main streaming techniques and provide a classification of the computational problems and the applications they effectively address, with an emphasis on decentralized scenarios, which are of particular interest in pervasive networks.
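One classic streaming synopsis of the kind surveyed here is the Count-Min sketch, which maintains approximate frequency counts of stream items in space far below the stream size. The following is an illustrative minimal sketch, not tied to any particular system discussed in the survey; the class name and parameters are chosen for this example.

```python
import hashlib

class CountMinSketch:
    """Count-Min sketch: a fixed-size synopsis for approximate item
    frequencies over a stream. Estimates never undercount; they may
    overcount due to hash collisions, bounded by width and depth."""
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # one independent hash function per row, derived from SHA-256
        h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item):
        # the minimum over rows limits collision-induced overcounting
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for token in ["a", "b", "a", "c", "a"]:
    cms.add(token)
```

The estimate for "a" is at least its true count of 3 and at most the stream length of 5, regardless of collisions.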

Abstract: In this paper, we present and study solutions for the efficient processing of queries over string attributes in a large P2P data network implemented with DHTs. The proposed solutions support queries with equality, prefix, suffix, and containment predicates over string attributes. To our knowledge, no previous solution to this problem exists.
We propose and study algorithms for processing such queries and their optimizations. As event-based, Publish/Subscribe information systems are a champion application class where string attribute (continuous) queries are very common, we pay particular attention to this type of data networks, formulating our solution in terms of this environment. A major design decision behind the proposed solution is our intention to provide a solution that is general (DHT-independent), capable of being implemented on top of any particular DHT.

Abstract: A key issue when designing and implementing large-scale publish/subscribe systems is how to efficiently propagate subscriptions among the brokers of the system. Brokers require this information in order to forward incoming events only to interested users, filtering out unrelated events, which can save significant overheads (particularly network bandwidth and processing time at the brokers). In this paper we contribute the notion of subscription summaries, a mechanism appropriately compacting subscription information. We develop the associated data structures and matching algorithms. The proposed mechanism can handle event/subscription schemata that are rich in terms of their attribute types and powerful in terms of the allowed operations on them. Our major results are that the proposed mechanism (i) is scalable, with the bandwidth required to propagate subscriptions increasing only slightly, even at huge-scales, and (ii) is significantly more efficient, up to orders of magnitude, depending on the scale, with respect to the bandwidth requirements for propagating subscriptions.

Abstract: In this work we consider temporal networks, i.e. networks defined by a labeling $\lambda$ assigning to each edge of an underlying graph G a set of discrete time-labels. The labels of an edge, which are natural numbers, indicate the discrete time moments at which the edge is available. We focus on path problems of temporal networks. In particular, we consider time-respecting paths, i.e. paths whose edges are assigned by $\lambda$ a strictly increasing sequence of labels. We begin by giving two efficient algorithms for computing shortest time-respecting paths on a temporal network. We then prove that there is a natural analogue of Menger’s theorem holding for arbitrary temporal networks. Finally, we propose two cost minimization parameters for temporal network design. One is the temporality of G, in which the goal is to minimize the maximum number of labels of an edge, and the other is the temporal cost of G, in which the goal is to minimize the total number of labels used. Optimization of these parameters is performed subject to some connectivity constraint. We prove several lower and upper bounds for the temporality and the temporal cost of some very basic graph families such as rings, directed acyclic graphs, and trees.
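To make the notion of time-respecting paths concrete, the sketch below computes earliest-arrival times in an undirected temporal graph with a single scan of the time-labeled edges. It illustrates the concept only and is not one of the paper's two algorithms; the function name and input encoding are assumptions of this example.

```python
def earliest_arrival(n, labeled_edges, source):
    """Earliest arrival over time-respecting paths. labeled_edges is a
    list of triples (u, v, t): the undirected edge (u, v) is available
    at discrete time t. A path must use strictly increasing labels.
    Returns the earliest label at which each node is reachable."""
    arrival = [float("inf")] * n
    arrival[source] = 0
    # one pass over edge availabilities in increasing label order
    for u, v, t in sorted(labeled_edges, key=lambda e: e[2]):
        # the edge extends a path only if its endpoint was reached
        # strictly before time t (labels must strictly increase)
        if arrival[u] < t and t < arrival[v]:
            arrival[v] = t
        elif arrival[v] < t and t < arrival[u]:
            arrival[u] = t
    return arrival

times = earliest_arrival(4, [(0, 1, 1), (1, 2, 2), (0, 2, 5), (2, 3, 3)], 0)
```

Here node 2 is reached at time 2 via the path 0-1-2 (labels 1, 2), earlier than the direct edge labeled 5.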

Abstract: We investigate the existence and efficient algorithmic construction
of close to optimal independent sets in random models of intersection
graphs. In particular, (a) we propose a new model for random
intersection graphs (Gn,m,p) which includes the model of [10] (the “uniform”
random intersection graphs model) as an important special case.
We also define an interesting variation of the model of random intersection
graphs, similar in spirit to random regular graphs. (b) For this
model we derive exact formulae for the mean and variance of the number
of independent sets of size k (for any k) in the graph. (c) We then propose
and analyse three algorithms for the efficient construction of large
independent sets in this model. The first two are variations of the greedy
technique while the third is a totally new algorithm. Our algorithms are
analysed for the special case of uniform random intersection graphs.
Our analyses show that these algorithms succeed in finding close to optimal
independent sets for an interesting range of graph parameters.
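The greedy technique mentioned in (c) can be sketched, for an arbitrary graph, as the classic minimum-degree greedy below. The paper analyses such variations probabilistically on random intersection graphs; this is only an illustrative baseline, with hypothetical names.

```python
def greedy_independent_set(n, edges):
    """Minimum-degree greedy for independent sets: repeatedly pick a
    vertex of minimum residual degree, add it to the set, then delete
    it together with all of its neighbours."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    independent = []
    while alive:
        # vertex of minimum degree in the surviving subgraph
        v = min(alive, key=lambda x: len(adj[x] & alive))
        independent.append(v)
        alive -= adj[v] | {v}   # remove v and its neighbours
    return independent

# 5-cycle: the greedy finds a maximum independent set of size 2
iset = greedy_independent_set(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```

On the 5-cycle every vertex has degree 2, so the first pick eliminates three vertices and the second pick finishes, yielding an optimal set of size 2.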

Abstract: We consider a security problem on a distributed network.
We assume a network whose nodes are vulnerable to infection
by threats (e.g. viruses), the attackers. A system security
software, the defender, is available in the system. However,
due to the network's size, and for economic and performance reasons,
it is capable of providing safety, i.e. cleaning nodes of
the possible presence of attackers, only for a limited part of
it. The objective of the defender is to place itself in such a
way as to maximize the number of attackers caught, while
each attacker aims not to be caught.
In [7], a basic case of this problem was modeled as a
non-cooperative game, called the Edge model. There, the
defender could protect a single link of the network. Here,
we consider a more general case of the problem where the
defender is able to scan and protect a set of k links of the
network, which we call the Tuple model. It is natural to expect
that this increased power of the defender should result
in a better quality of protection for the network. Ideally,
this would be achieved at little expense on the existence and
complexity of Nash equilibria (profiles where no entity can
improve its local objective unilaterally by switching placements
on the network).
In this paper we study pure and mixed Nash equilibria
in the model. In particular, we propose algorithms for computing
such equilibria in polynomial time and we provide a
polynomial-time transformation of a special class of Nash
equilibria, called matching equilibria, between the Edge
model and the Tuple model, and vice versa. Finally, we
establish that the increased power of the defender results in
higher-quality protection of the network.

Abstract: Let M be a single s-t network of parallel links with load dependent latency functions shared by an infinite number of selfish users. This may yield a Nash equilibrium with unbounded Coordination Ratio [E. Koutsoupias, C. Papadimitriou, Worst-case equilibria, in: 16th Annual Symposium on Theoretical Aspects of Computer Science, STACS, vol. 1563, 1999, pp. 404-413; T. Roughgarden, E. Tardos, How bad is selfish routing? in: 41st IEEE Annual Symposium of Foundations of Computer Science, FOCS, 2000, pp. 93-102]. A Leader can decrease the coordination ratio by assigning flow αr on M, and then all Followers assign selfishly the (1-α)r remaining flow. This is a Stackelberg Scheduling Instance (M, r, α), 0 ≤ α ≤ 1. It was shown [T. Roughgarden, Stackelberg scheduling strategies, in: 33rd Annual Symposium on Theory of Computing, STOC, 2001, pp. 104-113] that it is weakly NP-hard to compute the optimal Leader's strategy. For any such network M we efficiently compute the minimum portion β_M of flow r > 0 needed by a Leader to induce M's optimum cost, as well as her optimal strategy. This shows that the optimal Leader's strategy on instances (M, r, α ≥ β_M) is in P. Unfortunately, Stackelberg routing in more general nets can be arbitrarily hard. Roughgarden presented a modification of Braess's Paradox graph, such that no strategy controlling αr flow can induce ≤ 1/α times the optimum cost. However, we show that our main result also applies to any s-t net G. We take care of Braess's graph explicitly, as a convincing example. Finally, we extend this result to k commodities. A conference version of this paper has appeared in [A. Kaporis, P. Spirakis, The price of optimum in Stackelberg games on arbitrary single commodity networks and latency functions, in: 18th Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA, 2006, pp. 19-28]. Some preliminary results have also appeared as a technical report in [A.C. Kaporis, E. Politopoulou, P.G. Spirakis, The price of optimum in Stackelberg games, in: Electronic Colloquium on Computational Complexity, ECCC, (056), 2005].

Abstract: We review some recent advances for solving core algorithmic problems encountered in public
transportation systems. We show that efficient algorithms can make a great difference both in
efficiency and in optimality, thus contributing significantly to improving the quality and
service-efficiency of public transportation systems.

Abstract: In this work, we study the combinatorial structure and the
computational complexity of Nash equilibria for a certain game that
models selfish routing over a network consisting of m parallel links. We
assume a collection of n users, each employing a mixed strategy, which
is a probability distribution over links, to control the routing of its own
assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic
on those links that minimize its expected latency cost, given the network
congestion caused by the other users. The social cost of a Nash equilibrium
is the expectation, over all random choices of the users, of the
maximum, over all links, latency through a link.
We embark on a systematic study of several algorithmic problems related
to the computation of Nash equilibria for the selfish routing game we consider.
In a nutshell, these problems relate to deciding the existence of a
Nash equilibrium, constructing a Nash equilibrium with given support
characteristics, constructing the worst Nash equilibrium (the one with
maximum social cost), constructing the best Nash equilibrium (the one
with minimum social cost), or computing the social cost of a (given) Nash
equilibrium. Our work provides a comprehensive collection of efficient algorithms,
hardness results (both as NP-hardness and #P-completeness
results), and structural results for these algorithmic problems. Our results
span and contrast a wide range of assumptions on the syntax of the
Nash equilibria and on the parameters of the system.
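The social cost defined above can be evaluated directly, for small instances, by enumerating all pure outcomes of the users' mixed strategies. The brute-force sketch below does exactly that for identical parallel links of unit speed (an assumption of this example; the model also covers related links), and is illustrative only since the abstract notes that computing the social cost is #P-hard in general.

```python
from itertools import product

def social_cost(probs, weights, m):
    """Social cost of a mixed profile in the parallel-links routing
    game: the expectation, over all users' independent random link
    choices, of the maximum link load. probs[i][l] is the probability
    that user i picks link l; weights[i] is user i's traffic;
    m is the number of identical links."""
    n = len(weights)
    cost = 0.0
    for choice in product(range(m), repeat=n):  # every pure outcome
        p = 1.0
        load = [0.0] * m
        for i, link in enumerate(choice):
            p *= probs[i][link]      # probability of this outcome
            load[link] += weights[i]
        cost += p * max(load)        # maximum latency over all links
    return cost

# two unit-weight users, each uniform over two links
cost = social_cost([[0.5, 0.5], [0.5, 0.5]], [1.0, 1.0], 2)
```

With probability 1/2 the users collide on one link (maximum load 2) and with probability 1/2 they split (maximum load 1), so the social cost is 1.5.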

Abstract: In this paper we study the threshold behavior of the fixed radius random graph model and its applications to the key management problem of sensor networks and, generally, for mobile ad-hoc networks. We show that this random graph model can realistically model the placement of nodes within a certain region and their interaction/sensing capabilities (i.e. transmission range, light sensing sensitivity etc.). We also show that this model can be used to define key sets for the network nodes that satisfy a number of good properties, allowing them to set up secure communication with each other depending on randomly created sets of keys related to their current location. Our work hopes to inaugurate a study of key management schemes whose properties are related to properties of an appropriate random graph model and, thus, use the rich theory developed in the random graph literature in order to transfer 'good' properties of the graph model to the key sets of the nodes.
Partially supported by the IST Programme of the European Union under contract number IST-2005-15964 (AEOLUS) and the INTAS Programme under contract with Ref. No 04-77-7173 (Data Flow Systems: Algorithms and Complexity (DFS-AC)).

Abstract: We consider the frugal coverage problem, an interesting variation of set cover defined as follows. Instances of the problem consist of
a universe of elements and a collection of sets over these elements; the
objective is to compute a subcollection of sets so that the number of
elements it covers plus the number of sets not chosen is maximized. The
problem was introduced and studied by Huang and Svitkina [7] due to
its connections to the donation center location problem. We prove that
the greedy algorithm has approximation ratio at least 0.782, improving
a previous bound of 0.731 in [7]. We also present a further improvement
that is obtained by adding a simple corrective phase at the end of the
execution of the greedy algorithm. The approximation ratio achieved in
this way is at least 0.806. Our analysis is based on the use of linear
programs which capture the behavior of the algorithms in worst-case
examples. The obtained bounds are proved to be tight.
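A natural greedy for frugal coverage stops picking sets as soon as the best remaining set would cover fewer than two new elements, since choosing such a set (profit +1 at most for coverage, -1 for being chosen) cannot beat leaving it unchosen. The sketch below illustrates the objective; it is not necessarily the exact greedy variant analysed in the paper, and the names are chosen for this example.

```python
def greedy_frugal_coverage(universe, sets):
    """Greedy for frugal coverage. Profit = (# covered elements)
    + (# sets not chosen). Repeatedly pick the set covering the most
    still-uncovered elements, while that number is at least 2."""
    covered = set()
    chosen = []
    remaining = list(range(len(sets)))
    while remaining:
        best = max(remaining, key=lambda i: len(sets[i] - covered))
        if len(sets[best] - covered) < 2:
            break  # a set covering < 2 new elements never pays off
        chosen.append(best)
        covered |= sets[best]
        remaining.remove(best)
    profit = len(covered) + (len(sets) - len(chosen))
    return chosen, profit

universe = {1, 2, 3, 4, 5, 6}
family = [{1, 2, 3}, {3, 4}, {5}, {4, 5, 6}]
chosen, profit = greedy_frugal_coverage(universe, family)
```

On this instance the greedy picks the two large sets, covering all 6 elements while leaving 2 sets unchosen, for a profit of 8.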

Abstract: We present an improved upper bound on the competitiveness of the online colouring algorithm First-Fit in disk graphs, which are graphs representing overlaps of disks on the plane. We also show that this bound is best possible for deterministic online colouring algorithms that do not use the disk representation of the input graph. We also present a related new lower bound for unit disk graphs.

Abstract: We give an overview of models and efficient algorithms for
optimally solving timetable information problems like “given a departure
and an arrival station as well as a departure time, which is the
connection that arrives as early as possible at the arrival station?” Two
main approaches that transform the problems into shortest path problems
are reviewed, including issues like the modeling of realistic details
(e.g., train transfers) and further optimization criteria (e.g., the number
of transfers). An important topic is also multi-criteria optimization,
where in general all attractive connections with respect to several criteria
shall be determined. Finally, we discuss the performance of the described
algorithms, which is crucial for their application in a real system.
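As a toy illustration of reducing a timetable query to a label-propagation computation, one can scan elementary connections in departure-time order. This is a deliberate simplification (instantaneous transfers, no transfer counting) and not the time-expanded or time-dependent graph models the survey reviews; the names below are assumptions of this example.

```python
def earliest_arrival_query(connections, source, target, start_time):
    """Answer 'departing source at start_time, when is the earliest
    arrival at target?'. connections is a list of elementary
    connections (dep_stop, dep_time, arr_stop, arr_time), scanned in
    order of departure time. Transfers are assumed instantaneous."""
    best = {source: start_time}   # earliest known arrival per stop
    for dep, dep_t, arr, arr_t in sorted(connections, key=lambda c: c[1]):
        # the connection is usable if we can be at its departure stop
        # no later than its departure time, and it improves the target
        if best.get(dep, float("inf")) <= dep_t and \
           arr_t < best.get(arr, float("inf")):
            best[arr] = arr_t
    return best.get(target, float("inf"))

timetable = [("A", 8, "B", 9), ("A", 9, "C", 13), ("B", 10, "C", 11)]
```

Departing A at time 8, the connection via B arrives at C at 11, beating the direct train arriving at 13; departing at 9, the train via B is missed and the answer becomes 13.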

Abstract: In this work we study how to process complex queries in DHT-based
Peer-to-Peer (P2P) data networks. Queries are made over tuples and relations
and are expressed in a query language, such as SQL. We describe existing
research approaches for query processing in P2P systems, we suggest
improvements and enhancements, and propose a unifying framework that
consists of a modified DHT architecture, data placement and search algorithms,
and provides efficient support for processing a variety of query types, including
queries with one or more attributes, queries with selection operators (involving
equality and range queries), and queries with join operators. To our knowledge,
this is the first work that puts forth a framework providing support for all these
query types.

Abstract: We examine the problem of transmitting in minimum time a given amount of data between a
source and a destination in a network with finite channel capacities and nonzero propagation delays. In
the absence of delays, the problem has been shown to be solvable in polynomial time. In this paper, we
show that the general problem is NP-complete. In addition, we examine transmissions along a single path,
called the quickest path, and present algorithms for general and special classes of networks that improve
upon previous approaches. The first dynamic algorithm for the quickest path problem is also
given.

Abstract: In this work, we introduce the notion of time to some well-known combinatorial optimization problems. In particular, we study problems defined on temporal graphs. A temporal graph D=(V,A) may be viewed as a time-sequence G_1,G_2,...,G_l of static graphs over the same (static) set of nodes V. Each G_t = D(t) = (V,A(t)) is called the instance of D at time t and l is called the lifetime of D. Our main focus is on analogues of traveling salesman problems in temporal graphs. A sequence of time-labeled edges (e.g. a tour) is called temporal if its labels are strictly increasing. We begin by considering the problem of exploring the nodes of a temporal graph as soon as possible. In contrast to the positive results known for the static case, we prove that it cannot be approximated within cn, for some constant c > 0, in general temporal graphs and within (2 − \varepsilon), for every constant \varepsilon > 0, in the special case in which D(t) is connected for all 1 <= t <= l, both unless P = NP. We then study the temporal analogue of TSP(1,2), abbreviated TTSP(1,2), where, for all 1 <= t <= l, D(t) is a complete weighted graph with edge-costs from {1,2} and the cost of an edge may vary from instance to instance. The goal is to find a minimum cost temporal TSP tour. We give several polynomial-time approximation algorithms for TTSP(1,2). Our best approximation is (1.7 + \varepsilon) for the generic TTSP(1,2) and (13/8 + \varepsilon) for its interesting special case in which the lifetime of the temporal graph is restricted to n. Along the way, we also introduce temporal versions of Maximum Matching, Path Packing, Max-TSP, and Minimum Cycle Cover, for which we obtain polynomial-time approximation algorithms and hardness results.

Abstract: In this work we add a training phase to an Impairment Aware Routing and Wavelength Assignment (IA-RWA) algorithm so as to improve its performance. The initial IA-RWA algorithm is a multi-parametric algorithm where a vector of physical impairment parameters is assigned to each link, from which the impairment vectors of candidate lightpaths are calculated. The important issue here is how to combine these impairment parameters into a scalar that would reflect the true transmission quality of a path. The training phase of the proposed IA-RWA algorithm is based on an optimization approach, called Particle Swarm Optimization (PSO), inspired by animal social behavior. The training phase gives the ability to the algorithm to be aware of the physical impairments even though the optical layer is seen as a black box. Our simulation studies show that the performance of the proposed scheme is close to that of algorithms that have explicit knowledge of the optical layer and the physical impairments.

Abstract: We study computationally hard combinatorial problems arising from the important engineering question of how to maximize the number of connections that can be simultaneously served in a WDM optical network. In such networks, WDM technology can satisfy a set of connections by computing a route and assigning a wavelength to each connection so that no two connections routed through the same fiber are assigned the same wavelength. Each fiber supports a limited number of w wavelengths and in order to fully exploit the parallelism provided by the technology, one should select a set of connections of maximum cardinality which can be satisfied using the available wavelengths. This is known as the maximum routing and path coloring problem (maxRPC).
Our main contribution is a general analysis method for a class of iterative algorithms for a more general coloring problem. A lower bound on the benefit of such an algorithm in terms of the optimal benefit and the number of available wavelengths is given by a benefit-revealing linear program. We apply this method to maxRPC in both undirected and bidirected rings to obtain bounds on the approximability of several algorithms. Our results also apply to the problem maxPC where paths instead of connections are given as part of the input. We also study the profit version of maxPC in rings where each path has a profit and the objective is to satisfy a set of paths of maximum total profit.

Abstract: Numerous research efforts have produced a large number of algorithms and mechanisms for web proxy caches. In order to build powerful web proxies and understand their performance, one must be able to appreciate the impact and significance of earlier contributions and how they can be integrated. To do this we employ a cache replacement algorithm, 'CSP', which integrates key knowledge from previous work. CSP utilizes the communication Cost to fetch web objects, the objects' Sizes, their Popularities, an auxiliary cache, and a cache admission control algorithm. We study the impact of these components with respect to hit ratio, latency, and bandwidth requirements. Our results show that there are clear performance gains when utilizing the communication cost, the popularity of objects, and the auxiliary cache. In contrast, the size of objects and the admission controller have a negligible performance impact. Our major conclusions going against those in related work are that (i) LRU is preferable to CSP for important parameter values, (ii) accounting for the objects' sizes does not improve latency and/or bandwidth requirements, and (iii) the collaboration of nearby proxies is not very beneficial. Based on these results, we chart the problem solution space, identifying which algorithm is preferable and under which conditions. Finally, we develop a dynamic replacement algorithm that continuously utilizes the best algorithm as the problem-parameter values (e.g., the access distributions) change with time.

Abstract: In view of the apparent intractability of constructing Nash Equilibria (NE
in short) in polynomial time, even for bimatrix games, understanding the limitations
of the approximability of the problem is an important challenge.
In this work we study the tractability of a notion of approximate equilibria in bimatrix
games, called well supported approximate Nash Equilibria (SuppNE in short).
Roughly speaking, while the typical notion of approximate NE demands that each
player gets a payoff at least an additive term less than the best possible payoff, in a
SuppNE each player is assumed to adopt with positive probability only approximate
pure best responses to the opponent's strategy.
As a first step, we demonstrate the existence of SuppNE with small supports and
at the same time good quality. This is a simple corollary of Althöfer's Approximation
Lemma, and implies a subexponential time algorithm for constructing SuppNE of
arbitrary (constant) precision.
We then propose algorithms for constructing SuppNE in win lose and normalized
bimatrix games (i.e., whose payoff matrices take values from {0, 1} and [0, 1] respectively).
Our methodology for attacking the problem is based on the solvability of zero sum bimatrix games (via its connection to linear programming) and provides a
0.5-SuppNE for win lose games and a 0.667-SuppNE for normalized games.
To our knowledge, this paper provides the first polynomial time algorithms constructing
ε-SuppNE for normalized or win lose bimatrix games, for any nontrivial
constant 0 ≤ ε < 1, bounded away from 1.

Abstract: There exists a great number of algorithms for wireless sensor networks (WSNs) that have never been tried in practice. This is due to the fact that programming sensor nodes still happens on a very technical level. We remedy the situation by introducing our algorithm library Wiselib, which allows for simple implementations of algorithms. It can adapt to a large variety of hardware and software. This is achieved by employing advanced C++ techniques such as templates and inline functions, which allow writing generic code that is resolved and bound at compile time, resulting in virtually no memory or computation overhead at run time. The Wiselib runs on different host operating systems such as Contiki, iSense OS, and ScatterWeb. Furthermore, it runs on virtual nodes simulated by Shawn. The Wiselib provides an algorithm with data structures that suit the specific properties of the target platform. Algorithm code does not contain any platform-specific specializations, allowing a single implementation to run natively on heterogeneous networks. In this paper, we describe the building blocks of the Wiselib, analyze the overhead, and show how cryptographically secured routing algorithms can be implemented. We also report on results from experiments with real sensor node hardware.

Abstract: In this paper we describe a new simulation platform for complex wireless sensor networks that operate a collection of distributed algorithms and network protocols. Simulating such systems is complicated because of the need to coordinate different network layers and debug protocol stacks, often with very different interfaces, options, and fidelities. Our platform (which we call WSNGE) is a flexible and extensible environment that provides a highly scalable simulator with unique characteristics. It focuses on user friendliness, providing every function in both a scriptable and a visual way, allowing the researcher to define simulations and view results in an easy-to-use graphical environment. Unlike other solutions, WSNGE does not distinguish between different scenario types, allowing multiple different protocols to run at the same time. It enables rich online interaction with running simulations, allowing parameters, topologies, or the whole scenario to be altered at any point in time.