[RACTI-RU1-2016-38] Michail, Othon and Spirakis, Paul, How Many Cooks Spoil the Soup?, in: Structural Information and Communication Complexity - 23rd International Colloquium, SIROCCO 2016, Helsinki, Finland, July 19-21, 2016, Revised Selected Papers, pages 3-18, Helsinki, Finland, 2016. [DOI]
Tools and Technologies for eParticipation and eVoting, Trust and Security: provisions and instruments, eDemocracy and eParticipation Challenges, Risk Assessment, Cryptographic Protocol, Security Architecture
Mobile tourism, Mobile recommender systems, Personalization, Points of interest, Pull-based, Reactive, Proactive, Location awareness, Context-awareness, Route planning, Tour planning
Abstract: In recent years, applications that continuously stream information over the Internet have become particularly popular. With the ever-increasing rate at which new information items arrive, it becomes increasingly imperative to develop information systems that can offer their users only the information that interests them, filtering out huge volumes of information that is irrelevant to each user. An information dissemination model capable of incorporating such properties is the content-based publish/subscribe model.
Our main contribution in this area is the application of the content-based publish/subscribe model on top of peer-to-peer networks, so as to offer users high expressiveness when declaring their interests while operating in a fully distributed and scalable environment. The core of the solutions proposed in this dissertation consists of: (a) algorithms for storing the keys of publications at appropriate network nodes, based on the content conditions that have been declared, and (b) algorithms for routing publications through the network so that they "meet" the nodes holding subscriptions that are satisfied by the information in the publication.
The proposed algorithms were implemented and thoroughly evaluated by simulation, studying their performance with respect to metrics such as: the fair distribution of the message-processing load across the network nodes during subscription/publication handling, the total number of messages exchanged, the total volume of extra information the algorithms inject into the network (network bandwidth), and the time required to find the subscriptions matching each publication.
Abstract: Simultaneous oscillation of 23 wavelengths, spaced at 100 GHz, is demonstrated from a single
source using a semiconductor optical amplifier linear cavity. The wavelength comb is generated from an
intra cavity, fiber implemented Lyot filter. Each oscillating wavelength has a linewidth of 12.5 GHz and
the maximum power variation between the 23 wavelengths was less than 3 dB.
Abstract: We demonstrate a 40 Gb/s self-synchronizing, all-optical packet
clock recovery circuit designed for efficient packet-mode traffic. The circuit
locks instantaneously and enables sub-nanosecond packet spacing due to the
low clock persistence time. A low-Q Fabry-Perot filter is used as a passive
resonator tuned to the line-rate that generates a retimed clock-resembling
signal. As a reshaping element, an optical power-limiting gate is
incorporated to perform bitwise pulse equalization. Using two preamble
bits, the clock is captured instantly and persists for the duration of the data
packet increased by 16 bits. The performance of the circuit suggests its
suitability for future all-optical packet-switched networks with reduced
transmission overhead and fine network granularity.
Abstract: We demonstrate instantaneous 40 Gb/s clock extraction from 1 ns long data packets
separated by 750 ps. The circuit comprises a Fabry-Perot filter and an all-optical power limiting
gate and requires very short inter-packet guardbands.
Abstract: In this paper, we survey the current state-of-the-art in middleware and systems for Wireless Sensor Networks (WSN). We provide a discussion on the definition of WSN middleware, design issues associated with it, and the taxonomies commonly used to categorize it. We also present a categorization of a number of such middleware platforms, using middleware functionalities and challenges which we think will play a crucial role in developing software for WSN in the near future. Finally, we provide a short discussion on WSN middleware trends.
Abstract: We demonstrate a highly stable, all-polarization-maintaining
fiber semiconductor ring laser source. It uses a semiconductor optical
amplifier (SOA) to provide both gain and gain modulation from an external
5-GHz optical pulse train. The laser source generates 3.5-ps pulses
up to a 50-GHz repetition rate with negligible amplitude pattern.
Abstract: Based on our experience in designing, building and maintaining an information system for supporting a large scale electronic lottery, we present in this paper a unified approach to the design and implementation of electronic lotteries with the focus on pragmatic trust establishment. This approach follows closely the methodologies commonly employed in the development of general information systems. However, central to the proposed approach is the decomposition of a security critical system into layers containing basic trust components so as to facilitate the management of trust, first along the layers, and then as we move from layer to layer. We believe that such a structured approach, based on layers and trust components, can help designers of security critical applications produce demonstrably robust and verifiable systems that people will not hesitate to use.
Abstract: In the maximum cover problem, we are given a collection of
sets over a ground set of elements and a positive integer w, and we are
asked to compute a collection of at most w sets whose union contains the
maximum number of elements from the ground set. This is a fundamental
combinatorial optimization problem with applications to resource allo-
cation. We study the simplest APX-hard variant of the problem where
all sets are of size at most 3 and we present a 6/5-approximation algo-
rithm, improving the previously best known approximation guarantee.
Our algorithm is based on the idea of first computing a large packing of
disjoint sets of size 3 and then augmenting it by performing simple local
improvements.
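The two-phase idea described in the abstract (first pack disjoint triples, then augment) can be sketched in a few lines. This is only an illustrative greedy sketch under our own assumptions; the paper's actual local-improvement steps are more involved, and the function name is ours.

```python
def pack_then_improve(sets, w):
    """Illustrative two-phase heuristic for maximum coverage with sets of
    size at most 3: greedily pack pairwise-disjoint 3-sets, then spend any
    leftover budget on sets that add the most uncovered elements."""
    chosen, covered = [], set()
    # Phase 1: pick pairwise-disjoint sets of size 3 while budget remains.
    for s in sets:
        if len(chosen) == w:
            break
        if len(s) == 3 and not (s & covered):
            chosen.append(s)
            covered |= s
    # Phase 2: simple augmentation with any remaining budget.
    while len(chosen) < w:
        best = max((s for s in sets if s not in chosen),
                   key=lambda s: len(s - covered), default=None)
        if best is None or not (best - covered):
            break
        chosen.append(best)
        covered |= best
    return chosen, covered
```

For example, with sets {1,2,3}, {3,4,5}, {6,7,8}, {4,5} and budget w = 2, the first phase already selects the two disjoint triples, covering six elements.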
Abstract: The efficient use of resources and the lossless transfer of data bursts in future optical
networks require accurate knowledge of the available bandwidth of each network
link. Such information is important in monitoring congestion and can be used by
appropriate load-balancing and congestion-avoidance mechanisms. In this paper we
propose a mechanism for monitoring and subsequently managing bandwidth resources,
using the Simple Network Management Protocol (SNMP). In the proposed mechanism,
link bandwidth availability is not a scalar parameter, but a function of time that records
the future utilization of the link. For every output port, each agent-node maintains a
simple data structure in the form of a table that records the utilization profile of that
outgoing link. With the addition of new objects in the Management Information Base
(MIB) of each agent-node and proper synchronization, SNMP can be used to update
and retrieve the reservations made on the links in order to obtain an instant picture of
the network traffic situation.
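The per-port utilization profile described above can be illustrated with a small time-slotted reservation table. This is a hypothetical sketch: the class, its methods, and the slot representation are our own illustration, not the actual MIB objects proposed in the paper.

```python
class LinkUtilizationProfile:
    """Illustrative per-output-port table: link availability is not a scalar
    but a function of time, kept here as reserved bandwidth per time slot."""

    def __init__(self, capacity, num_slots):
        self.capacity = capacity              # link capacity, e.g. in Gb/s
        self.reserved = [0.0] * num_slots     # bandwidth reserved per slot

    def available(self, slot):
        """Free bandwidth on the link during the given time slot."""
        return self.capacity - self.reserved[slot]

    def reserve(self, start, end, bw):
        """Admit a reservation of bw over slots [start, end) only if every
        slot in the interval has enough free capacity."""
        if all(self.available(t) >= bw for t in range(start, end)):
            for t in range(start, end):
                self.reserved[t] += bw
            return True
        return False
```

Under this sketch, an agent-node would answer SNMP queries about a link by reading this table, and a reservation request updates it atomically across the requested interval.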
Abstract: We demonstrate a novel approach in the implementation of a broadband Optical Network Unit
using a Semiconductor Optical Amplifier-based Optical Loop Mirror, both experimentally and via
simulations for up to 50 km and 2.5 Gb/s.
Abstract: Implementation of a commercial application on a
grid infrastructure introduces new challenges in managing the
quality-of-service (QoS) requirements, most of which stem from the fact
that the QoS negotiated between the user and the service provider
should be strictly satisfied. An interesting commercial application
with a wide impact on a variety of fields, which can benefit from
the computational grid technologies, is three–dimensional (3-D)
rendering. In order, however, to implement 3-D rendering on a
grid infrastructure, we should develop appropriate scheduling
and resource allocation mechanisms so that the negotiated QoS
requirements are met. Efficient scheduling schemes require
modeling and prediction of rendering workload. In this paper
workload prediction is addressed based on a combined fuzzy
classification and neural network model. Initially, appropriate
descriptors are extracted to represent the synthetic world. The
descriptors are obtained by parsing RIB formatted files, which
provides a general structure for describing computer-generated
images. Fuzzy classification is used for organizing the rendering
descriptors so that a reliable representation is achieved, which
increases the prediction accuracy. A neural network performs
workload prediction by modeling the nonlinear input-output
relationship between rendering descriptors and the respective
computational complexity. To increase prediction accuracy, a
constructive algorithm is adopted in this paper to train the neural
network so that network weights and size are simultaneously
estimated. Then, a grid scheduler scheme is proposed to estimate
the order in which the queued tasks should be executed and the
most appropriate processor assignment so that the demanded
QoS is satisfied as much as possible. A fair scheduling policy is
considered as the most appropriate. Experimental results on a real
grid infrastructure are presented to illustrate the efficiency of the
proposed workload prediction and scheduling algorithm compared
to other approaches presented in the literature.
Abstract: Smart Dust is comprised of a vast number of ultra-small, fully autonomous computing and communication devices, with very restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Smart Dust can be very useful in practice, e.g. in the local detection of a remote crucial event and the propagation of data reporting its realization to a control center.
In this work, we have implemented and experimentally evaluated four protocols (PFR, LTP and two variations of LTP which we here introduce) for local detection and propagation in smart dust networks, under new, more general and realistic modelling assumptions. We comparatively study, by using extensive experiments, their behavior highlighting their relative advantages and disadvantages. All protocols are very successful. In the setting we considered here, PFR seems to be faster while the LTP based protocols are more energy efficient.
Abstract: The management of Grid resources requires scheduling of both computation and communication tasks at various levels. In this study, we consider the two constituent sub-problems of Grid scheduling, namely: (i) the scheduling of computation tasks to processing resources and (ii) the routing and scheduling of the data movement in a Grid network. Regarding computation tasks, we examine two typical online task scheduling algorithms that employ advance reservations and perform full network simulation experiments to measure their performance when implemented in a centralized or distributed manner. Similarly, for communication tasks, we compare two routing and data scheduling algorithms that are implemented in a centralized or a distributed manner. We examine the effect network propagation delay has on the performance of these algorithms. Our simulation results indicate that a distributed architecture with an exhaustive resource utilization update strategy yields better average end-to-end delay performance than a centralized architecture.
Abstract: In this work we propose a new unified PON RAN architecture for LTE mobile backhaul networks, employing ring-based WDM PONs. The proposed architecture supports dynamic setup of virtual circuits for inter-base-station communication over a dedicated λ-LAN channel. The reservation mechanism is arbitrated by the OLT, which also monitors the traffic imbalances of downstream channels. The proposed architecture also supports load balancing, by dynamically reallocating and sharing the capacity of the downstream wavelengths.
Abstract: We propose a simple and intuitive cost mechanism which assigns
costs for the competitive usage of m resources by n selfish agents.
Each agent has an individual demand; demands are drawn according to
some probability distribution. The cost paid by an agent for a resource
she chooses is the total demand put on the resource divided by the number
of agents who chose that same resource. So, resources charge costs
in an equitable, fair way, while each resource makes no profit out of the
agents. We call our model the Fair Pricing model. Its fair cost mechanism
induces a non-cooperative game among the agents. To evaluate the Nash
equilibria of this game, we introduce the Diffuse Price of Anarchy, as an
extension of the Price of Anarchy that takes into account the probability
distribution on the demands. We prove:
– Pure Nash equilibria may not exist, unless all chosen demands are
identical; in contrast, a fully mixed Nash equilibrium exists for all
possible choices of the demands. Further on, the fully mixed Nash
equilibrium is the unique Nash equilibrium in case there are only two
agents.
– In the worst-case choice of demands, the Price of Anarchy is Θ(n);
for the special case of two agents, the Price of Anarchy is less than
2 − 1/m.
– Assume now that demands are drawn from a bounded, independent
probability distribution, where all demands are identically distributed
and each is at most a (universal for the class) constant times its expectation.
Then, the Diffuse Price of Anarchy is at most that same
constant, which is just 2 when each demand is distributed symmetrically
around its expectation.
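The fair cost-sharing rule stated in the abstract (the cost an agent pays for a resource is the total demand placed on that resource divided by the number of agents that chose it) is simple enough to state directly in code. The function name and the list-based representation below are our own illustration.

```python
def fair_costs(demands, choices, m):
    """Costs charged by the Fair Pricing mechanism: resource r collects
    total demand load[r] and splits it equally among the count[r] agents
    that chose it, so no resource makes a profit."""
    load = [0.0] * m    # total demand put on each resource
    count = [0] * m     # number of agents that chose each resource
    for d, r in zip(demands, choices):
        load[r] += d
        count[r] += 1
    # Each agent i pays load[r]/count[r] for its chosen resource r.
    return [load[r] / count[r] for r in choices]
```

For instance, with demands (4, 2, 3) and two resources, if the first two agents share resource 0 and the third uses resource 1 alone, every agent happens to pay 3: resource 0 splits a load of 6 between two agents, and resource 1 charges its single agent its full load of 3.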
Abstract: We consider the Railway Traveling Salesman Problem. We
show that this problem can be reduced to a variant of the generalized
traveling salesman problem, defined on an undirected graph G = (V,E)
with the nodes partitioned into clusters, which consists in finding a mini-
mum cost cycle spanning a subset of nodes with the property that exactly
two nodes are chosen from each cluster. We describe an exact exponential
time algorithm for the problem, and we present two mixed integer
programming models of the problem. Based on one of the proposed models,
we present an efficient solution procedure based on a cutting plane
algorithm. Extensive computational results for instances taken from the
railroad company of the Netherlands Nederlandse Spoorwegen and involv-
ing graphs with up to 2182 nodes and 38650 edges are reported.
Abstract: Large-scale sensor networks, monitoring an environment at close range with high spatial and temporal resolutions are expected to play an important role in various applications, e.g., assessing the ``health'' of machines; environmental, medical, food-safety, and habitat monitoring; inventory control, building automation, etc. Ensuring the security of these complex and yet resource-constrained systems has emerged as one of the most pressing challenges for researchers. In this paper (i) we present the major threats and some characteristic countermeasures, (ii) we propose a way to classify existing systems for intrusion detection in wireless sensor networks and (iii) we present a new approach for decentralized energy efficient intrusion detection that can be used to improve security from both external and internal adversaries.
Abstract: The inference problem for propositional circumscription is known to
be highly intractable and, in fact, harder than the inference problem for classical
propositional logic. More precisely, in its full generality this problem is $\Pi_2^p$-complete,
which means that it has the same inherent computational complexity
as the satisfiability problem for quantified Boolean formulas with two alternations
(universal-existential) of quantifiers. We use Schaefer's framework of generalized
satisfiability problems to study the family of all restricted cases of the inference
problem for propositional circumscription. Our main result yields a complete classification
of the "truly hard" ($\Pi_2^p$-complete) and the "easier" cases of this problem
(reducible to the inference problem for classical propositional logic). Specifically,
we establish a dichotomy theorem which asserts that each such restricted case either
is $\Pi_2^p$-complete or is in coNP. Moreover, we provide efficiently checkable criteria
that tell apart the "truly hard" cases from the "easier" ones. We show our results both
when the formulas involved are and are not allowed to contain constants. The present
work complements a recent paper by the same authors, where a complete classification
into hard and easy cases of the model-checking problem in circumscription
was established.
Abstract: Many of the network security protocols employed today utilize symmetric block ciphers (DES, AES, CAST, etc.). The majority of symmetric block ciphers implement the crucial substitution operation using look-up tables, called substitution boxes. These structures should be highly nonlinear and have bit-dispersal, i.e. avalanche, properties in order to render the cipher resistant to cryptanalysis attempts, such as linear and differential cryptanalysis. Highly secure substitution boxes can be constructed using, as components, particular Boolean functions that have certain mathematical properties which enhance the robustness of the whole cryptoalgorithm. However, enforcing these properties on S-boxes is a highly computationally intensive task. In this paper, we present a distributed algorithm and its implementation on a computing cluster that accelerates the construction of substitution boxes with good security properties. It is fully parametric, since it can employ any class of Boolean functions with algorithmically definable properties and can construct S-boxes of arbitrary sizes. We demonstrate the efficiency of the distributed algorithm implementation compared to its sequential counterpart in a number of experiments.
Abstract: Core networks of the future will have a
translucent and eventually transparent optical
structure. Ultra-high-speed end-to-end connectiv-
ity with high quality of service and high reliability
will be realized through the exploitation of opti-
mized protocols and lightpath routing algorithms.
These algorithms will complement a flexible con-
trol and management plane integrated in the
proposed solution. Physical layer impairments
and optical performance are monitored and
incorporated in impairment-aware lightpath rout-
ing algorithms. These algorithms will be integrat-
ed into a novel dynamic network planning tool
that will consider dynamic traffic characteristics,
a reconfigurable optical layer, and varying physi-
cal impairment and component characteristics.
The network planning tool along with extended
control planes will make it possible to realize the
vision of optical transparency. This article pre-
sents a novel framework that addresses dynamic
cross-layer network planning and optimization
while considering the development of a future
transport network infrastructure.
Abstract: We here present the Forward Planning Situated Protocol (FPSP) for scalable, energy-efficient and fault-tolerant data propagation in situated wireless sensor networks. To deal with the increased complexity of such deeply networked sensor systems, instead of emphasizing a particular aspect of the services provided, i.e. either low-energy periodic, or low-latency event-driven, or high-success query-based sensing, FPSP uses two novel mechanisms that allow the network operator to adjust the performance of the protocol in terms of energy, latency and success rate on a per-task basis. We emphasize distributedness, direct or indirect interactions among relatively simple agents, flexibility and robustness.
The protocol operates by employing a series of plan & forward phases through which devices self-organize into forwarding groups that propagate data over discovered paths. FPSP performs a limited number of long-range, high-power data transmissions to collect information regarding the neighboring devices. The acquired information allows planning a (parameterizable, of length λ) sequence of short-range, low-power transmissions between nearby particles, based on certain optimization criteria. All particles that decide to respond (based on local criteria) to these long-range transmissions enter the forwarding phase, during which information is propagated via the acquired plan. Clearly, the duration of the forwarding phases is characterized by the parameter λ, the transmission medium and the processing speed of the devices. In fact, the parameter λ provides a mechanism to adjust the protocol performance in terms of the latency--energy trade-off. By reducing λ the latency is reduced at the cost of spending extra energy, while by increasing λ the energy dissipation is reduced but the latency is increased.
To control the success rate--energy trade-off, particles react locally to environment and context changes by using a set of rules based on response thresholds that relate individual-level plasticity to network-level resiliency, motivated by the nature-inspired method for dividing labor, a metaphor of social insect behavior for solving problems [1]. Each particle has an individual response threshold θ that is related to the "local" density (as observed by the particle, [2]); particles engage in propagation of events when the level of the task-associated stimuli exceeds their thresholds. Let s be the intensity of a stimulus associated with a particular sensing task, set by the human authorities. We adopt the response function $T_\theta(s) = s^n / (s^n + \theta^n)$, the probability of performing the task as a function of s, where n > 1 determines the steepness of the threshold. Thus, when θ is small (i.e. the network is sparse) the response probability increases; when s increases (i.e. for critical sensing tasks) the response probability increases as well.
This role-based approach, where a selective number of devices do the high-cost planning and the rest of the network operates in a low-cost state, leads to systems with increased energy efficiency and high fault tolerance, since the long-range planning phases allow bypassing obstacles (where no sensors are available) or faulty sensors (that have been disabled due to power failure or other natural events).
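A minimal sketch of the response-threshold rule referenced above, assuming the standard division-of-labor form T_θ(s) = s^n / (s^n + θ^n); the function name and default steepness are ours.

```python
def response_probability(s, theta, n=2):
    """Response-threshold function from the division-of-labor metaphor:
    probability that a particle with individual threshold theta engages in
    a task whose associated stimulus has intensity s; n > 1 sets how sharp
    the transition around s = theta is."""
    return s ** n / (s ** n + theta ** n)
```

The two qualitative properties claimed in the abstract follow directly: the probability is exactly 1/2 at s = θ, rises towards 1 as the stimulus s grows (critical tasks), and rises as θ shrinks (sparse networks).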
Abstract: eVoting is considered to be one of the most challenging domains of modern eGovernment and one of the main vehicles for increasing eParticipation among citizens. One of the main obstacles to its wide adoption is the reluctance of citizens to participate in electronic voting procedures. This reluctance can be partially attributed to the low penetration of technology among citizens. However, the main reason behind this reluctance is the lack of trust, which stems from the belief of citizens that systems implementing an eVoting process will violate their privacy. The departure point of this approach is that the emergence of such a belief can be considerably facilitated by designing and building systems in a way that evidence about the system's properties is produced during the design process. In this way, the designers can demonstrate the respect of privacy using this evidence, which can be understood and checked by the specialist and the informed layman. These tools and models should provide sufficient evidence that the target system handles privacy concerns and requirements, enough to remove the fears towards eVoting. This paper presents the efforts of the authors' organization, the Computer Technology Institute and Press "Diophantus" (CTI), towards the design and implementation of an eVoting system, called PNYKA, with demonstrable security properties. This system was based on a trust-centered engineering approach for building general security-critical systems. The authors' approach is pragmatic rather than theoretical, in that it sidesteps the controversy that besets the nature of trust in information systems and starts with a working definition of trust as people's positive attitude towards a system that transparently and demonstrably performs its operations, respecting their privacy. The authors also discuss the social side of eVoting, i.e. how one can help boost its acceptance by large social groups, targeting the whole population of the country.
The authors view eVoting as an innovation that must be diffused to a population and then employ a theoretical model that studies diffusion of innovations in a social network, delineating structural properties of the network that help diffuse the innovation fast. Furthermore, the authors explain how CTI's current situation empowers CTI to realize its vision to implement a privacy-preserving discussion and public consultation forum in Greece. This forum will link together all Greek educational institutes in order to provide a privacy-preserving discussion and opinion-gathering tool useful in decision making within the Greek educational system.
Abstract: Future Grid Networks should be able to provide Quality
of Service (QoS) guarantees to their users. In this work
we propose a framework for Grid Networks that provides
deterministic delay guarantees to its Guaranteed Service
(GS) users and best effort service to its Best Effort (BE)
users. The proposed framework is theoretically and experimentally
analyzed. We also define four types of computational
resources based on the type of users (GS, BE) these
resources serve and the priority they give them. We implement
the proposed QoS framework for Grids and verify that
it not only satisfies the delay guarantees given to GS users,
but also improves performance in terms of deadlines missed
and resource use. In our simulations, data from a real Grid
Network are used, validating in this way the appropriateness
and usefulness of the proposed framework.
Abstract: We give an efficient local search
algorithm that computes a good vertex coloring of a graph $G$. In
order to better illustrate this local search method, we view local
moves as selfish moves in a suitably defined game. In particular,
given a graph $G=(V,E)$ of $n$ vertices and $m$ edges, we define
the \emph{graph coloring game} $\Gamma(G)$ as a strategic game
where the set of players is the set of vertices and the players
share the same action set, which is a set of $n$ colors. The
payoff that a vertex $v$ receives, given the actions chosen by all
vertices, equals the total number of vertices that have chosen the
same color as $v$, unless a neighbor of $v$ has also chosen the
same color, in which case the payoff of $v$ is 0. We show:
\begin{itemize}
\item The game $\Gamma(G)$ has always pure Nash equilibria. Each
pure equilibrium is a proper coloring of $G$. Furthermore, there
exists a pure equilibrium that corresponds to an optimum coloring.
\item We give a polynomial time algorithm $\mathcal{A}$ which
computes a pure Nash equilibrium of $\Gamma(G)$. \item The total
number, $k$, of colors used in \emph{any} pure Nash equilibrium
(and thus achieved by $\mathcal{A}$) is $k\leq\min\{\Delta_2+1,
\frac{n+\omega}{2}, \frac{1+\sqrt{1+8m}}{2}, n-\alpha+1\}$, where
$\omega, \alpha$ are the clique number and the independence number
of $G$ and $\Delta_2$ is the maximum degree that a vertex can have
subject to the condition that it is adjacent to at least one
vertex of equal or greater degree. ($\Delta_2$ is no more than the
maximum degree $\Delta$ of $G$.) \item Thus, in fact, we propose
here a \emph{new}, \emph{efficient} coloring method that achieves
a number of colors \emph{satisfying (together) the known general
upper bounds on the chromatic number $\chi$}. Our method is also
an alternative general way of \emph{proving},
\emph{constructively}, all these bounds. \item Finally, we show
how to strengthen our method (staying in polynomial time) so that
it avoids ``bad'' pure Nash equilibria (i.e. those admitting a
number of colors $k$ far away from $\chi$). In particular, we show
that our enhanced method colors \emph{optimally} dense random
$q$-partite graphs (of fixed $q$) with high probability.
\end{itemize}
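The payoff rule of the graph coloring game $\Gamma(G)$ defined above can be stated compactly in code. The following helper is our own illustration of that rule, not code from the paper, using an adjacency-list representation.

```python
def payoff(v, coloring, adj):
    """Payoff of vertex v in the graph coloring game Gamma(G): the number
    of vertices that chose v's color, unless some neighbor of v chose that
    same color, in which case the payoff is 0 (an improper choice)."""
    c = coloring[v]
    # A color conflict with any neighbor zeroes out the payoff.
    if any(coloring[u] == c for u in adj[v]):
        return 0
    # Otherwise v earns one unit per vertex (itself included) of color c.
    return sum(1 for u in coloring if coloring[u] == c)
```

On the path 0-1-2 with coloring (0, 1, 0), vertex 0 earns 2 (it shares color 0 with the non-adjacent vertex 2), while recoloring vertex 1 to color 0 would give it payoff 0; a pure Nash equilibrium of this game is therefore always a proper coloring.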
Abstract: Paul Spirakis is an eminent, talented, and influential researcher who has contributed significantly to computer science. This article is a modest attempt at a biographical sketch of Paul, which we drafted with extreme love and honor.
Abstract: We present a new methodology for computing approximate
Nash equilibria for two-person non-cooperative games, based upon certain
extensions and specializations of an existing optimization approach
previously used for the derivation of fixed approximations for this problem.
In particular, the general two-person problem is reduced to an indefinite
quadratic programming problem of special structure involving the
n x n adjacency matrix of an induced simple graph specified by the
input data of the game, where n is the number of players' strategies.
Abstract: Consider a network vulnerable to viral infection. The system security software can guarantee
safety only to a limited part of the network. Such limitations result from economy costs or processing
costs. The problem raised is to decide on which part of the network the security software should
be installed, so as to secure the network as much as possible. We model this practical network
scenario as a non-cooperative multi-player game on a graph, with two kinds of players, a set
of attackers and a protector player, representing the viruses and the system security software,
respectively. Each attacker player chooses a node of the graph (or a set of them, via a probability
distribution) to infect. The protector player chooses independently, in a basic case of the
problem, a simple path or an edge of the graph (or a set of them, via a probability distribution)
and cleans this part of the network from attackers. Each attacker wishes to maximize the probability
of escaping its cleaning by the protector. In contrast, the protector aims at maximizing
the expected number of cleaned attackers. We call the two games obtained from the two basic
cases considered, as the Path and the Edge model, respectively. For these two games, we are
interested in the associated Nash equilibria, where no network entity can unilaterally improve
its local objective. We obtain the following results:
• The problem of existence of a pure Nash equilibrium is NP-complete for the Path model.
This opposed to that, no instance of the Edge model possesses a pure Nash equilibrium,
proved in [7].
• In [7] a characterization of mixed Nash equilibria for the Edge model is provided. However,
that characterization only implies an exponential time algorithm for the general case.
Here, combining it with clever exploration of properties of various practical families of
graphs, we compute, in polynomial time, mixed Nash equilibria on corresponding graph
instances. These graph families include, regular graphs, graphs that can be decomposed, in
polynomially time, into vertex disjoint r-regular subgraphs, graphs with perfect matchings
and trees.
• We utilize the notion of social cost [6] for measuring system performance on such scenario;
here is defined to be the utility of the protector. We prove that the corresponding Price of
Anarchy in any mixed Nash equilibria of the game is upper and lower bounded by a linear
function of the number of vertices of the graph.
Abstract: Consider a network vulnerable to viral infection, where the security software can guarantee safety only to a limited part of it. We model this practical network scenario as a non-cooperative multi-player game on a graph, with two kinds of players, a set of attackers and a protector player, representing the viruses and the system security software, respectively. We are interested in the associated Nash equilibria, where no network entity can unilaterally improve its local objective. We obtain the following results: for certain families of graphs, mixed Nash equilibria can be computed in polynomial time. These families include, among others, regular graphs, graphs with perfect matchings, and trees. The corresponding price of anarchy at any mixed Nash equilibrium of the game is upper and lower bounded by a linear function of the number of vertices of the graph. (We define the price of anarchy to reflect the utility of the protector.) Finally, we introduce a generalised version of the game. We show that the existence problem of pure Nash equilibria here is NP-complete.
Abstract: In this paper we present a CPU scavenging
architecture suitable for desktop resources,
and we study its appropriateness for exploiting the
PC laboratory resources of the Greek School
Network and their integration into the existing
HellasGrid national infrastructure. School laboratories
form an extensive network equipped with
computational systems and fast Internet connections.
As this infrastructure is utilized at most 8 hours per day and 5 days per week, it could be
made available during its remaining idle time for
computational purposes through the use of Grid
technology. The structure and organization of the
school laboratories and backbone network enables
the CPU scavenging service as an independent,
additional service that does not violate
the operational rules and policies of the school
network, while adding resources
to the current HellasGrid infrastructure at a low
adaptation cost.
Abstract: Load balancing/sharing is a policy which exploits the communication facility between the servers of a distributed system, by exchanging status information and jobs between any two servers of the system, in order to improve the performance of the whole system. In this work, we propose a new adaptive distributed hierarchical scheme, the Virtual Tree Algorithm (VTA), which creates a virtual binary tree structure over the actual network topology. It uses the Difference-Initiated (DI) technique ([11, 1]) for load balancing/sharing, which needs remote information for the transfer policy and no additional information for the location policy. We demonstrate that the introduced virtual construction can keep the number of exchanged messages competitive with that of previously known efficient algorithms. To support this claim and evaluate the performance of our policy, we make use of both analytical and simulation results. Using the simulation model that we developed, we compared our results with one of the most representative new adaptive, symmetrical, distributed, and efficient algorithms, the Variable Threshold (V_THR) algorithm.
Abstract: This work addresses networked embedded systems enabling the seamless interconnection of smart building automations to the Internet and their abstraction as web services. In our approach, such abstractions are used primarily to create a flexible, holistic and scalable system and to allow external end-users to compose and run their own smart/green building automation application services on top of this system.
Towards this direction, in this paper we present a smart building test-bed consisting of several sensor motes and spanning seven rooms. Our test-bed's design and implementation simultaneously addresses several corresponding system layers: from hardware interfaces, embedded IPv6 networking and energy-balancing routing algorithms to a RESTful architecture and the over-the-web development of sophisticated smart/green scenarios. In fact, we showcase how IPv6 embedded networking combined with RESTful architectures makes the creation of building automation applications as easy as creating any other Internet web service.
Abstract: With the proliferation of wireless sensor networks and mobile technologies in general, it is possible to provide improved medical services and also to reduce costs as well as to manage the shortage of specialized personnel. Monitoring a person's health condition using sensors provides a lot of benefits but also exposes personal sensitive information to a number of privacy threats. By recording user-related data, it is often feasible for a malicious or negligent data provider to expose these data to an unauthorized user. One solution is to protect the patient's privacy by making it difficult to link specific measurements to a patient's identity. In this paper we present a privacy-preserving architecture which builds upon the concept of k-anonymity; we present a clustering-based anonymity scheme for effective network management and data aggregation, which also protects a user's privacy by making an entity indistinguishable from other k similar entities. The presented algorithm is resource-aware, as it minimizes energy consumption with respect to other, more costly, cryptography-based approaches. The system is evaluated from an energy-consumption and network-performance perspective, under different simulation scenarios.
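The clustering-based idea lends itself to a compact sketch. The grouping strategy below (sort readings by value, cut into groups of at least k, report only the group average) is our illustrative assumption, not the paper's exact algorithm:

```python
def k_anonymize(readings, k):
    """Partition (entity, value) readings into groups of size >= k and
    report only the group average, so each entity's reported value is
    shared with at least k - 1 other entities."""
    readings = sorted(readings, key=lambda r: r[1])  # cluster similar values
    groups = [readings[i:i + k] for i in range(0, len(readings), k)]
    if len(groups) > 1 and len(groups[-1]) < k:      # merge an undersized tail
        groups[-2].extend(groups.pop())
    anonymized = []
    for group in groups:
        avg = sum(v for _, v in group) / len(group)
        anonymized.extend((entity, avg) for entity, _ in group)
    return anonymized
```

For example, five readings anonymized with k = 2 yield one group of two and one of three, each entity reporting its group's mean rather than its own measurement.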
Abstract: The efficient use of resources and the lossless transfer of data bursts in future optical networks require accurate knowledge of the available bandwidth on each network link. Such information is important for monitoring congestion and can be used by appropriate load balancing and congestion avoidance mechanisms. In this paper we propose a mechanism for monitoring and subsequently managing bandwidth resources, using the Simple Network Management Protocol (SNMP). In the proposed mechanism, link bandwidth availability is not a scalar parameter, but a function of time that records the future utilization of the link. For every output port, each agent-node maintains a simple data structure in the form of a table that records the utilization profile of that outgoing link. With the addition of new objects in the Management Information Base (MIB) of each agent-node and proper synchronization, SNMP can be used to update and retrieve the reservations made on the links in order to obtain an instant picture of the network traffic situation.
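The per-port utilization table can be sketched as a time-slotted reservation profile; the slot granularity, method names and capacity check below are our assumptions for illustration, not the MIB objects themselves:

```python
class LinkProfile:
    """Per-outgoing-link table recording reserved bandwidth as a
    function of (slotted) future time, as kept by each agent-node."""

    def __init__(self, capacity, horizon):
        self.capacity = capacity
        self.reserved = [0.0] * horizon  # reserved bandwidth per time slot

    def available(self, slot):
        """Bandwidth still free on this link at the given future slot."""
        return self.capacity - self.reserved[slot]

    def reserve(self, start, duration, bw):
        """Reserve bw over slots [start, start + duration); refuse if
        any slot would exceed the link capacity."""
        slots = range(start, start + duration)
        if any(self.reserved[s] + bw > self.capacity for s in slots):
            return False
        for s in slots:
            self.reserved[s] += bw
        return True
```

An SNMP manager querying such tables across agent-nodes would obtain exactly the "instant picture" of future traffic the abstract describes.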
Abstract: Motivated by the problem of supporting energy-efficient broadcasting in ad hoc wireless networks, we study the Minimum
Energy Consumption Broadcast Subgraph (MECBS) problem. We present the first logarithmic approximation algorithm for the
problem which uses an interesting reduction to Node-Weighted Connected Dominating Set.
Abstract: A considerable part of recent research in smart cities and IoT has focused on achieving energy savings in buildings and supporting aspects related to sustainability. In this context, the educational community is one of the most important ones to consider, since school buildings constitute a large part of non-residential buildings, while also educating students on sustainability matters is an investment for the future. In this work, we discuss a methodology for achieving energy savings in schools based on the utilization of data produced by an IoT infrastructure installed inside school buildings and related educational scenarios. We present the steps comprising this methodology in detail, along with a set of tangible results achieved within the GAIA project. We also showcase how an IoT infrastructure can support activities in an educational setting and produce concrete outcomes, with typical levels of 20% energy savings.
Abstract: In mobile ad-hoc networks (MANETs), the mobility of the nodes is a complicating factor that significantly affects the effectiveness and performance of the routing protocols. Our work builds upon recent results on the effect of node mobility on the performance of available routing strategies (i.e.~path based, using support) and proposes a protocol framework that exploits the usually different mobility rates of the nodes by adapting the routing strategy during execution. We introduce a metric for the relative mobility of the nodes, according to which the nodes are classified into mobility classes. These mobility classes determine, for any pair of an origin and destination, the routing technique that best corresponds to their mobility properties. Moreover, special care is taken for nodes remaining almost stationary or moving with high (relative) speeds. Our key design goal is to limit the necessary implementation changes required to incorporate existing routing protocols into our framework. We provide extensive evaluation of the proposed framework, using a well-known simulator (NS2). Our first findings demonstrate that the proposed framework improves, in certain cases, the performance of the existing routing protocols.
Abstract: In ad-hoc mobile networks (MANET), the mobility of the nodes is a complicating factor that significantly affects the effectiveness and performance of the routing protocols. Our work builds upon recent results on the effect of node mobility on the performance of available routing strategies (i.e.~path based, using support) and proposes a protocol framework that exploits the usually different mobility rates of the nodes by adapting the routing strategy during execution. We introduce a metric for the relative mobility of the nodes, according to which the nodes are classified into mobility classes. These mobility classes determine, for any pair of an origin and destination, the routing technique that best corresponds to their mobility properties. Moreover, special care is taken for nodes remaining almost stationary or moving with high (relative) speeds. Our key design goal is to limit the necessary implementation changes required to incorporate existing routing protocols into our framework. We provide extensive evaluation of the proposed framework, using a well-known simulator (NS2). Our first findings demonstrate that the proposed framework improves, in certain cases, the performance of the existing routing protocols.
Abstract: We propose a simple obstacle model to be used while simulating wireless sensor networks. To the best of our knowledge, this is the first time such an integrated and systematic obstacle model for these networks has been proposed. We define several types of obstacles that can be found inside the deployment area of a wireless sensor network and provide a categorization of these obstacles based on their nature (physical and communication obstacles, i.e. obstacles that have physical presence or are formed out of node distribution patterns, respectively), their shape and their change of nature over time. We make an extension to a custom-made sensor network simulator (simDust) and conduct a number of simulations in order to study the effect of obstacles on the performance of some representative (in terms of their logic) data propagation protocols for wireless sensor networks. Our findings confirm that obstacle presence has a significant impact on protocol performance, and also that different obstacle shapes and sizes may affect each protocol in different ways. This provides an insight into how a routing protocol will perform in the presence of obstacles and highlights possible protocol shortcomings. Moreover, our results show that the effect of obstacles is not directly related to the density of a sensor network, and cannot be emulated only by changing the network density.
Abstract: We design and implement a multicost impairment- aware routing and wavelength assignment algorithm for online traffic. In transparent optical networks the quality of a transmission degrades due to physical layer impairments. To serve a connection, the proposed algorithm finds a path and a free wavelength (a lightpath) that has acceptable signal quality performance by estimating a quality of transmission measure, called the Q factor. We take into account channel utilization in the network, which changes as new connections are established or released, in order to calculate the noise variances that correspond to physical impairments on the links. These, along with the time invariant eye impairment penalties of all candidate network paths, form the inputs to the algorithm. The multicost algorithm finds a set of so called non-dominated Q paths from the given source to the given destination. Various objective functions are then evaluated in order to choose the optimal lightpath to serve the connection. The proposed algorithm combines the strength of multicost optimization with low execution time, making it appropriate for serving online connections.
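The set of "non-dominated Q paths" above is the Pareto-optimal front over multi-component path costs. A generic dominance filter looks like the sketch below (assuming, for illustration, that every cost component is to be minimized; the actual cost components and their combination in the algorithm are the paper's):

```python
def non_dominated(paths):
    """Keep the cost vectors not dominated by any other vector.

    A vector u dominates v if u <= v component-wise and u != v
    (lower is assumed better for every component)."""
    def dominates(u, v):
        return all(a <= b for a, b in zip(u, v)) and u != v

    return [p for p in paths
            if not any(dominates(q, p) for q in paths)]
```

The objective functions mentioned in the abstract would then be evaluated only over this (typically much smaller) front to pick the lightpath that serves the connection.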
Abstract: We propose and evaluate a new burst assembly algorithm based on the average delay of the packets comprising a burst. This
method fixes the average delay of the packets belonging to an assembled burst to a desired value TAVE that may be different for
each forwarding equivalence class (FEC). We show that the proposed method significantly improves the delay jitter experienced
by the packets during the burst assembly process, when compared to that of timer-based and burst length-based assembly policies.
Minimizing packet delay jitter is important in a number of applications, such as real-audio and streaming-video applications. We
also find that the improvement in the packet delay jitter yields a corresponding significant improvement in the performance of TCP,
whose operation depends critically on the ability to obtain accurate estimates of the round-trip times (RTT).
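Under the simplifying assumptions that the whole queue is released as one burst and no packet arrives at the release instant, the release time implied by the average-delay criterion has a closed form (the function name and symbols are ours, not the paper's notation):

```python
def release_time(arrival_times, t_ave):
    """Earliest time t at which the mean waiting delay of the queued
    packets, mean(t - a_i) over arrival times a_i, reaches the target
    t_ave. Since the mean delay grows linearly in t, this is simply
    t = t_ave + mean(a_i)."""
    return t_ave + sum(arrival_times) / len(arrival_times)
```

Unlike a timer-based policy, which fires a fixed interval after the first packet of a burst, this pins the average delay of every burst to the same value, which is why delay jitter across bursts shrinks.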
Abstract: We present a new dynamic graph structure specifically suited
for large-scale transportation networks that provides simultaneously three
unique features: compactness, agility and dynamicity. We demonstrate
its practicality and superiority by conducting an experimental study for
shortest route planning in large-scale European and US road networks
with a few dozen millions of nodes and edges. Our approach is the first
one that concerns the dynamic maintenance of a large-scale graph with
ordered elements using a contiguous memory part, and which allows an
arbitrary online reordering of its elements.
Abstract: Smart Dust is a special case of wireless sensor networks, comprised of a vast number of ultra-small fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Smart Dust can be very useful in practice, e.g. in the local detection of remote crucial events and the propagation of data reporting their realization to a control center.
In this paper, we propose a new energy efficient and fault tolerant protocol for data propagation in smart dust networks, the Variable Transmission Range Protocol (VTRP). The basic idea of data propagation in VTRP is the varying range of data transmissions, i.e. we allow the transmission range to increase in various ways. Thus, data propagation in our protocol exhibits high fault-tolerance (by bypassing obstacles or faulty sensors) and increases network lifetime (since critical sensors, i.e. those close to the control center, are not overused). As far as we know, this is the first time a varying transmission range is used.
We implement the protocol and perform an extensive experimental evaluation and comparison to a representative protocol (LTP) of several important performance measures with a focus on energy consumption. Our findings indeed demonstrate that our protocol achieves significant improvements in energy efficiency and network lifetime.
Abstract: In this work we propose a new energy efficient and fault tolerant protocol for data propagation in wireless sensor networks, the Variable Transmission Range Protocol (VTRP). The basic idea of data propagation in VTRP is the varying range of data transmissions, i.e. we allow the transmission range to increase in various ways. Thus, data propagation in our protocol exhibits high fault-tolerance (by bypassing obstacles or faulty sensors) and increases network lifetime (since critical sensors, i.e. those close to the control center, are not overused). As far as we know, this is the first time a varying transmission range is used.
We implement the protocol and perform an extensive experimental evaluation and comparison to a representative protocol (LTP) of several important performance measures with a focus on energy consumption. Our findings indeed demonstrate that our protocol achieves significant improvements in energy efficiency and network lifetime.
Abstract: In this paper, we present a new hybrid optical burst switch architecture (HOBS) that takes advantage of the pre-transmission idle time during lightpath establishment. In dynamic circuit switching (wavelength routing) networks, capacity is immediately hard-reserved upon the arrival of a setup message at a node, but it is used at least a round-trip time later. This waste of resources is significant in optical multi-gigabit networks and can be used to transmit traffic of a lower class of service in a non-competing way. The proposed hybrid OBS architecture takes advantage of this idle time to transmit one-way optical bursts of a lower class of service, while high-priority data explicitly requests and establishes end-to-end lightpaths. In the proposed scheme, the two control planes (two-way and one-way OBS reservation) are merged, in the sense that each SETUP message, used for the two-way lightpath establishment, is associated with a one-way burst transmission and is therefore modified to carry routing and overhead information for the one-way traffic as well. In this paper, we present the main architectural features of the proposed hybrid scheme and further assess its performance by conducting simulation experiments on the NSFNET backbone topology. The extensive network study revealed that the proposed hybrid architecture can achieve and sustain an adequate burst transmission rate with a finite worst-case delay.
Abstract: Known general proofs of Nash's Theorem (about the existence of Nash Equilibria (NEa) in finite strategic games) rely on the use of a fixed point theorem (e.g. Brouwer's or Kakutani's). While it seems that there is no general way of proving the existence of Nash equilibria without the use of a fixed point theorem, there do however exist some (not so common in the CS literature) proofs that seem to indicate alternative proof paths, for games of two players. This note discusses two such proofs.
Abstract: We present a parallel priority queue that supports the following operations in constant time: parallel insertion of a sequence of elements ordered according to key, parallel decrease key for a sequence of elements ordered according to key, deletion of the minimum key element, and deletion of an arbitrary element. Our data structure is the first to support multi-insertion and multi-decrease key in constant time. The priority queue can be implemented on the EREW PRAM and can perform any sequence of n operations in O(n) time and O(m log n) work, m being the total number of keys inserted and/or updated. A main application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in O(n) time and O(m log n) work on a CREW PRAM on graphs with n vertices and m edges. This is a logarithmic factor improvement in the running time compared with previous approaches.
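To see where multi-decrease-key pays off, consider a sequential emulation of the Dijkstra application: each round performs one delete-min followed by a batch of decrease-keys for the relaxed neighbors — the batch the parallel queue absorbs in constant time. The sketch below uses a lazy-deletion binary heap (an assumption replacing the paper's EREW PRAM structure) and illustrates only the operation pattern, not the parallel bounds:

```python
import heapq

def dijkstra(adj, src):
    """Dijkstra where each round is one delete-min followed by a
    *batch* of decrease-keys, emulated here with lazy heap inserts.
    adj maps a vertex to a list of (neighbor, weight) pairs."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)                 # delete-min
        if d > dist.get(u, float("inf")):
            continue                             # stale (lazily deleted) entry
        batch = []                               # multi-decrease-key batch
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                batch.append((nd, v))
        for item in batch:                       # one O(1) op in the parallel queue
            heapq.heappush(pq, item)
    return dist
```

With the binary heap each batched push costs O(log n), which is exactly the logarithmic factor the constant-time multi-decrease-key structure removes from the critical path.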
Abstract: In this work we present the basic concepts in the architecture of a peer-to-peer environment for monitoring multiple wireless sensor networks, called ShareSense. ShareSense, which is currently under development, uses JXTA as a peer-to-peer substrate. We demonstrate its basic functionalities using a simple application scenario, which utilizes multiple disparate wireless sensor networks. This application scenario involves monitoring of such networks using a separate management environment and a custom application GUI, as well as using Google Earth as an additional user interface.
Abstract: In this work we present the architecture and implementation of WebDust, a software platform for managing multiple, heterogeneous (both in terms of software and hardware), geographically disparate sensor networks. We describe in detail the main concepts behind its design, and basic aspects of its implementation, including the services provided to end-users and developers. WebDust uses a peer-to-peer substrate, based on JXTA, in order to unify multiple sensor networks installed in various geographic areas. We aim at providing a software framework that will permit developers to deal with the new and critical aspects that networks of sensors and tiny devices bring into global computing, and to provide a coherent set of high-level services, design rules and technical recommendations, in order to be able to develop the envisioned applications of global sensor networks. Furthermore, we give an overview of a deployed distributed testbed, consisting of a total of 56 nodes, and describe in more detail two specific testbed sites and the integration with our platform of the related software and hardware technologies used for their operation. Finally, we describe the design and implementation of an interface option provided to end-users, based on the popular Google Earth application.
Abstract: Tourists become increasingly dependent on mobile city guides to locate tourist services and retrieve information about nearby points of interest (POIs) when visiting unknown destinations. Although several city guides support the provision of personalized tour recommendations to assist tourists in visiting the most interesting attractions, existing tour planners only consider walking tours. Herein, we introduce eCOMPASS, a context-aware mobile application which also considers the option of using public transit for moving around. Far beyond just providing navigational aid, eCOMPASS incorporates multimodality (i.e. time dependency) within its routing logic, aiming at deriving near-optimal sequencing of POIs along recommended tours so as to best utilize the time available for sightseeing and minimize waiting time at transit stops. Further advancing the state of the art, eCOMPASS allows users to define arbitrary start/end locations (e.g. the current location of a mobile user) rather than choosing among a fixed set of locations. This paper describes the routing algorithm which comprises the core functionality of eCOMPASS and discusses the implementation details of the mobile application, using the metropolitan area of Berlin (Germany) as a case study.
Abstract: We propose a priority-based balanced routing scheme, called the priority STAR routing scheme, which leads to optimal throughput and average delay at the same time for random broadcasting and routing. In particular, the average reception delay for random broadcasting required in n_1 × n_2 × ... × n_d tori with n_i = O(1), n-ary d-cubes with n = O(1), or d-dimensional hypercubes is O(d + 1/(1 − ρ)). We also study the case where multiple communication tasks for random 1-1 routing and/or random broadcasting are executed at the same time. When a constant fraction of the traffic is contributed by broadcast requests, the average delay for random 1-1 routing required in any d-dimensional hypercube, any n-ary d-cube with n = O(1), and most n_1 × n_2 × ... × n_d tori with n_i = O(1) is O(d) based on priority STAR. Our simulation results show that the priority-based balanced routing scheme considerably outperforms the best previous routing schemes for these networks.
Abstract: We study the problem of data propagation in sensor networks,
comprised of a large number of very small and low-cost nodes,
capable of sensing, communicating and computing. The distributed
co-operation of such nodes may lead to the accomplishment of large
sensing tasks, having useful applications in practice. We present
a new protocol for data propagation towards a control center
(``sink") that avoids flooding by probabilistically favoring
certain (``close to optimal") data transmissions.
This protocol is very simple to implement in sensor devices
and operates under total absence
of co-ordination between sensors. We consider a network model of randomly deployed sensors of sufficient density.
As shown by a geometric analysis,
the protocol is correct, since it always propagates data
to the sink, under ideal network conditions (no failures). Using
stochastic processes, we show that the protocol is very energy efficient. Also, when part of the network is inoperative, the
protocol manages to propagate data very close to the sink, thus in
this sense it is robust. We finally present and discuss
large-scale experimental findings validating the analytical
results.
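One plausible instantiation of "probabilistically favoring certain (close to optimal) data transmissions" is an angle-based forwarding rule; the rule below is an illustrative assumption, not necessarily the paper's exact criterion. A node's forwarding probability grows with the angle φ it forms with the previous hop and the sink, so nodes lying on the straight line toward the sink (φ = π) always forward:

```python
import math

def forward_prob(prev_hop, node, sink):
    """Forwarding probability phi / pi, where phi is the angle at
    `node` in the triangle (prev_hop, node, sink), computed via the
    law of cosines. Points are (x, y) tuples."""
    a = math.dist(node, sink)
    b = math.dist(prev_hop, node)
    c = math.dist(prev_hop, sink)
    cos_phi = (a * a + b * b - c * c) / (2 * a * b)
    phi = math.acos(max(-1.0, min(1.0, cos_phi)))  # clamp rounding error
    return phi / math.pi
```

Nodes far off the source-sink line form a small angle and thus rarely forward, which is how flooding is avoided while data still progresses toward the sink.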
Abstract: We study the problem of data propagation in sensor networks,
comprised of a large number of very small and low-cost nodes,
capable of sensing, communicating and computing. The distributed
co-operation of such nodes may lead to the accomplishment of large
sensing tasks, having useful applications in practice. We present a new protocol for data propagation towards a control center ("sink") that avoids flooding by probabilistically favoring certain ("close to optimal") data transmissions. Motivated by certain applications and also as a starting point for a rigorous analysis, we study here lattice-shaped sensor networks. We however show that this lattice shape emerges even in randomly deployed sensor networks of sufficient sensor density. Our work is inspired and builds upon the directed diffusion paradigm.
This protocol is very simple to implement in sensor devices, uses only local information and operates under total absence of co-ordination between sensors. We consider a network model of randomly deployed sensors of sufficient density. As shown by a geometric analysis, the protocol is correct, since it always propagates data to the sink, under ideal network conditions (no failures). Using stochastic processes, we show that the protocol is very energy efficient. Also, when part of the network is inoperative, the protocol manages to propagate data very close to the sink, thus in this sense it is robust. We finally present and discuss large-scale experimental findings validating the analytical results.
Abstract: We demonstrate a simple method for upgrading the repetition rate of 10 GHz optical sources to 40 GHz. It employs a Fabry-Perot filter and the saturation properties of a Semiconductor Optical Amplifier.
Abstract: Raising awareness among young people of the relevance of behaviour change for achieving energy savings is widely considered a key approach towards long-term and cost-effective energy efficiency policies. The GAIA Project aims to deliver a comprehensive solution for both increasing awareness of energy efficiency and achieving energy savings in school buildings. In this framework, we present a novel rule engine that, leveraging a resource-based graph model encoding relevant application domain knowledge, accesses IoT data to produce energy-saving recommendations. The engine supports configurability, extensibility and ease-of-use requirements, so it can be easily applied and customized to different buildings. The paper introduces the main design and implementation details and presents a set of preliminary performance results.
Abstract: We describe the design and implementation
of a secure and robust architectural data management
model suitable for cultural environments. Usage and exploitation
of the World Wide Web is a critical requirement
for a series of administrative tasks such as collecting, managing
and distributing valuable cultural and artistic information.
This requirement introduces a great number of
Internet threats for cultural organizations that may cause
huge data and/or financial losses, harm their reputation
and public acceptance as well as people’s trust in them.
Our model addresses a list of fundamental operational
and security requirements. It utilizes a number of cryptographic
primitives and techniques that provide data safety
and secure user interaction on especially demanding online
collaboration environments. We provide a reference
implementation of our architectural model and discuss
the technical issues. It is designed as a standalone solution
but it can be flexibly adapted in broader management
infrastructures.
Abstract: Usage and exploitation of the Internet is a critical requirement for managing and distributing valuable digital assets. This requirement introduces a great number of threats for commercial (or not) organizations that may cause huge data and financial losses and harm their reputation as well as people's trust in them. In this paper we present the research challenges for secure digital asset management over the web by proposing a model that provides data safety and secure user interaction in especially demanding on-line collaboration environments.
Abstract: In this paper, a new service-oriented networking paradigm is presented, where network nodes (peers) are self-organized into individual service entities. The key idea relies on the overlay approach: there exists a virtual service plane, fragmented into self-organized and self-managed entities called islands of service transparency. The islands are formed in an upstream, ad-hoc mode from the non-networking resources (e.g. VoD, grid servers, etc.) towards all ingress routers of the network, using link state advertisements and multi-cost path selection algorithms (over metrics such as residual bandwidth, server capacity, storage, etc.). Organization and re-organization of nodes around non-network resources is transparent to end-users, and thus any request within a specific service island is transparently routed to the island's resource for execution. A service proxy is commissioned to resolve service addresses and service attributes to QoS metrics. In this paper, we present the main notions and metrics of the proposed architecture as well as node behavior and potential GMPLS extensions for implementation.
Abstract: We present a simple parallel algorithm for the single-source shortest path
problem in planar digraphs with nonnegative real edge weights. The algorithm runs
on the EREW PRAM model of parallel computation in O((n^{2ε} + n^{1-ε}) log n)
time, performing O(n^{1+ε} log n) work for any 0 < ε < 1/2. The strength of the
algorithm is its simplicity, making it easy to implement and presumably quite
efficient in practice. The algorithm improves upon the work of all previous
parallel algorithms. Our algorithm is based on a region decomposition of the
input graph and uses a well-known parallel implementation of Dijkstra's
algorithm. The logarithmic factor in both the work and the time can be
eliminated by plugging in a less practical, sequential planar shortest path
algorithm together with an improved parallel implementation of Dijkstra's
algorithm.
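The building block being parallelized above is Dijkstra's algorithm; a minimal sequential sketch in Python (the region decomposition and EREW PRAM machinery of the paper are omitted):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with nonnegative edge weights.

    adj: dict mapping node -> list of (neighbour, weight) pairs.
    Returns a dict of shortest distances from source.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; u was already settled via a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In the paper's scheme, an instance like this is run (in parallel) on a region decomposition of the planar input graph rather than on the whole graph at once.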
Abstract: Smart Dust is a set of a vast number of ultra-small fully
autonomous computing and communication devices, with very
restricted energy and computing capabilities, that co-operate to
quickly and efficiently accomplish a large sensing task. Smart
Dust can be very useful in practice, e.g., in the local detection of
a remote crucial event and the propagation of data reporting its
realization. In this work we continue (see [POMC02]) our
effort towards the research on smart dust from a basic algorithmic
point of view. Under a simple but realistic model for smart dust
we present an interesting problem, which is how to propagate
efficiently information on an event detected locally. Then we
present a new smart dust protocol, which we call the
"Sleep-Awake" protocol, for information propagation, which explicitly uses the energy-saving features (i.e., the alternation of sleeping and awake time periods) of the smart dust particles. Using both a first probabilistic analysis and extensive
experiments, we provide some first concrete results for the
success probability and the time and energy efficiency of the
protocol, in terms of parameters of the smart dust network. We
note that the study of the interplay of these parameters allows us
to program the smart dust network characteristics accordingly.
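The interplay of the sleep/awake parameters can be illustrated with two elementary formulas; the function names and the independence assumption are ours, for illustration only:

```python
def awake_fraction(sleep_ms, awake_ms):
    """Fraction of time a particle is awake under an alternating
    sleep/awake duty cycle."""
    return awake_ms / (sleep_ms + awake_ms)

def hop_success_probability(awake_frac, candidates):
    """Chance that at least one of `candidates` potential next-hop
    particles is awake, assuming independent schedules (a deliberate
    simplification for illustration)."""
    return 1.0 - (1.0 - awake_frac) ** candidates
```

Such back-of-the-envelope relations show how tuning the duty cycle trades energy savings against the propagation success probability studied in the paper.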
Abstract: We propose a MAC protocol for mobile ad hoc networks that
uses power control for the RTS/CTS and DATA frame
transmissions in order to improve energy and capacity
utilization efficiency. Unlike IEEE 802.11, in our scheme the
RTS frames are not sent using the maximum transmission
power to silence neighbouring nodes, and the CTS frames do
not silence all receiving nodes to the same degree. In contrast,
the transmission power of the RTS frames follows a slow
start principle, while the CTS frames, which are sent at
maximum transmission power, prevent the neighbouring
nodes from transmitting their DATA frames with power more
than a computed threshold, while allowing them to transmit at
power levels less than that threshold. This is done by
including in the RTS and the CTS frames additional
information, such as the power of the transmissions, and the
interference tolerance of the nodes. Moreover the DATA
frames are sent at the minimum required transmission power
increased by a small margin to ensure connectivity with the
intended receiver, so as to cause minimal interference to
neighbouring nodes and allow for future interference to be
added to the receiver of the DATA frames. The power to be
used by the transmitter is computed by the recipient of the
RTS frame and is included in the CTS frame. It is expected
that a network with such a power management scheme would
achieve a better throughput performance and more power
savings than a network without such a scheme.
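A toy sketch of the power computation performed by the RTS recipient, assuming a simple symmetric path-loss estimate; the names, units and margin value are illustrative assumptions, not the paper's actual formulas:

```python
def data_tx_power(rts_rx_power, rts_tx_power, min_rx_threshold, margin_db=3.0):
    """Illustrative DATA-frame power computed by the RTS recipient
    (all values in dB/dBm).

    The channel loss is estimated from the received RTS, and the DATA
    power is the minimum needed at the receiver plus a small margin,
    which is then reported back inside the CTS frame.
    """
    path_loss = rts_tx_power - rts_rx_power   # estimated symmetric channel loss
    return min_rx_threshold + path_loss + margin_db
```

Keeping the DATA power near this minimum is what limits interference to neighbouring nodes and leaves headroom for future interference at the receiver.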
Abstract: As water supplies become scarce and polluted, there is an urgent
need to irrigate more efficiently in order to optimize water
use. In this paper, we present a WSN-based smart home-irrigation
system that consists of heterogeneous motes, special
sensors and actuators. The system is fully adaptive not
only to environmental conditions but also to the specific water
needs that different plants may have. This way, it manages
to perform efficient home irrigation, while it provides
an IPv6-capable managing system.
Abstract: We present an implementation of an EC cryptographic library
targeting three main objectives: portability, modularity, and ease
of use. Our goal is to provide a fully-equipped library of portable source
code with clearly separated modules that allows for easy development of
EC cryptographic protocols, and which can be readily tailored to suit
different requirements and user needs. We discuss several implementation
issues regarding the development of the library and report on some
preliminary experiments.
Abstract: In this paper we present a platform for developing mobile, locative and collaborative distributed games, comprising small programmable object technologies (e.g., wireless sensor networks) and traditional networked processors.
The platform is implemented using a combination of JAVA
Standard and Mobile editions, targeting also mobile phones
that have some kind of sensors installed. We briefly present
the architecture of our platform and demonstrate its capabilities by reporting two pervasive multiplayer games. The key
characteristic of these games is that players interact with each
other and their surrounding environment by moving, running
and gesturing as a means to perform game related actions, using small programmable object technologies.
Abstract: This paper presents an overview of Quality of Service (QoS) differentiation mechanisms proposed for Optical Burst Switching (OBS) networks. OBS has been proposed to couple the benefits of both circuit and packet switching for the “on demand” use of capacity in the future optical Internet. In such a case, QoS support imposes some important challenges before this technology is deployed. This paper takes a broader view on QoS, including QoS differentiation not only at the burst but also at the transport levels for OBS networks. A classification of existing QoS differentiation mechanisms for OBS is given and their efficiency and complexity are comparatively discussed. We provide numerical examples on how QoS differentiation with respect to burst loss rate and transport layer throughput can be achieved in OBS networks.
Abstract: We present an improved upper bound on the competitiveness of the online colouring algorithm First-Fit in disk graphs, which
are graphs representing overlaps of disks on the plane. We show that this bound is best possible for deterministic online
colouring algorithms that do not use the disk representation of the input graph, and we present a related new lower bound for unit
disk graphs.
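First-Fit itself is the simple online rule of assigning each arriving vertex the smallest colour not used by its already-coloured neighbours; a minimal sketch:

```python
def first_fit_colouring(neighbours_of):
    """Online First-Fit graph colouring.

    neighbours_of: list where entry i is the set of earlier-presented
    vertices adjacent to vertex i (vertices arrive in index order).
    Returns the list of assigned colours (0-based).
    """
    colours = []
    for i, nbrs in enumerate(neighbours_of):
        used = {colours[j] for j in nbrs if j < i}
        c = 0
        while c in used:   # smallest colour free among earlier neighbours
            c += 1
        colours.append(c)
    return colours
```

Note that the algorithm never sees the disk representation, which is exactly the class of online algorithms the lower bound applies to.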
Abstract: The paper is concerned with the time efficient
processing of spatiotemporal predicates, i.e. spatial
predicates associated with an exact temporal
constraint. A set of such predicates forms a buffer
query or a Spatio-temporal Pattern (STP) Query with
time. In the more general case of an STP query, the
temporal dimension is introduced via the relative order
of the spatial predicates (STP queries with order).
Therefore, the efficient processing of a spatiotemporal
predicate is crucial for the efficient implementation of
more complex queries of practical interest. We propose
an extension of a known approach, suitable for
processing spatial predicates, which has been used for
the efficient manipulation of STP queries with order.
The extended method is supported by efficient indexing
structures. We also provide experimental results that
show the efficiency of the technique.
Abstract: eVoting is a challenging approach for increasing eParticipation. However, lack of citizens' trust seems to be a main obstacle that hinders its successful realization. In this paper we propose a trust-centered engineering approach for building eVoting systems that people can trust, based on transparent design and implementation phases. The approach is based on three components: the decomposition of eVoting systems into “layers of trust” for reducing the complexity of managing trust issues in smaller manageable layers, the application of a risk analysis methodology able to identify and document security critical aspects of the eVoting system, and a cryptographically secure eVoting protocol. Our approach is pragmatic rather than theoretical in the sense that it sidesteps the controversy that besets the nature of trust in information systems and starts with a working definition of trust as people's positive attitude towards a system that performs its operations transparently.
Abstract: Designing wireless sensor networks is inherently complex; many aspects such as energy efficiency, limited resources, decentralized collaboration, fault tolerance have to be tackled. To be effective and to produce applicable results, fundamental research has to be tested, at least as a proof-of-concept, in large scale environments, so as to assess the feasibility of the new concepts, verify their large scale effects (not only at technological level, but also as for their foreseeable implications on users, society and economy) and derive further requirements, orientations and inputs for the research. In this paper we focus on the problems of interconnecting existing testbed environments via the Internet and providing a virtual unifying laboratory that will support academia, research centers and industry in their research on networks and services. In such a facility important issues of trust, security, confidentiality and integrity of data may arise especially for commercial (or not) organizations. In this paper we investigate such issues and present the design of a secure and robust architectural model for interconnecting testbeds of wireless sensor networks.
Abstract: In this paper, we propose a new scheme for the on-demand
use of capacity in OBS networks, combining a two-way
reservation protocol with an assembly process that incorporates an aggressive and forward resource reservation mechanism. The key idea is to tune the assembly timer to be equal to the
time associated with the establishment of the end-to-end connection (round-trip time) and, thus, synchronize the resource reservation with the assembly process. In this way, upon the arrival of
the first packet in the queue, reservation of resources may start simultaneously based on an aggressive prediction of the burst length.
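The timer-tuning idea can be sketched as follows; the function and its inputs are illustrative assumptions, not the paper's exact model:

```python
def assembly_timer(link_delays_ms, setup_processing_ms=0.0):
    """Set the burst assembly timer equal to the end-to-end round-trip
    time, so that the two-way resource reservation (started when the
    first packet arrives in the queue) and the burst assembly
    complete together.

    link_delays_ms: one-way propagation delays of the links on the path.
    """
    one_way_ms = sum(link_delays_ms) + setup_processing_ms
    return 2.0 * one_way_ms   # round-trip time of the setup message
```

With this choice, no extra delay is added by reservation: the burst is ready exactly when the acknowledgement of the end-to-end connection returns.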
Abstract: In this paper we consider the problem of web page usage prediction in a web site by modeling users' navigation history and web page content with weighted suffix trees. This navigation prediction can be exploited either in an on-line recommendation system in a web site or in a web page cache system. The proposed method has the advantage that it demands a constant amount of computational effort per user action and consumes a relatively small amount of extra memory space. These features make the method ideal for an on-line working environment. Finally, we have evaluated the proposed scheme with experiments on various web site log files and web pages, and we have found that its prediction quality is fairly good and in many cases outperforming.
Abstract: In this work, we present a concrete framework, based on web services-oriented architecture, for integrating small programmable objects in the web of things.
Functionality and data gathered by the Small Programmable Objects (SPO) are exposed using Web Services. Based on this, by exploiting XML encoding, SPO can be comprehensible by any web application. The proposed architecture is focused on providing secure and efficient interoperability between SPO and the web.
Additionally, the proposed architecture provides management capabilities for deploying, maintaining and operating SPO applications across multiple networks.
We present the multilayer architecture of our system and its implementation, which uses a combination of Java Standard and Micro Editions. Finally, we present a case study demonstrating our implementation. In this application we use SunSPOTs, which are wireless network motes developed by Sun Microsystems.
Abstract: One frequently employed way of propagation exploited by worms is through the victim's contact book. The contact book, which reflects the acquaintance profiles of people, is used as a “hit-list”, to which the worm can send itself in order to spread fast. In this paper we propose a discrete worm propagation model that relies upon a combined email and Instant Messaging (IM) communication behaviour of users. We also model user reaction against infected email as well as the rate at which antivirus software is installed. User acquaintance is perceived as a “network” connecting users based on their contact book links. We then propose a worm propagation formulation based on a token propagation algorithm, further analyzed with the use of a system of continuous differential equations, as dictated by Wormald's theorem on approximating “well-behaving” random processes with deterministic functions.
Abstract: We study the problem of fast and energy-efficient data collection of sensory data using a mobile sink, in wireless sensor networks in which both the sensors and the sink move. Motivated by relevant applications, we focus on dynamic sensory mobility and heterogeneous sensor placement. Our approach basically suggests to exploit the sensor motion to adaptively propagate information based on local conditions (such as high placement concentrations), so that the sink gradually “learns” the network and accordingly optimizes its motion. Compared to relevant solutions in the state of the art (such as the blind random walk, biased walks, and even optimized deterministic sink mobility), our method significantly reduces latency (the improvement ranges from 40% for uniform placements, to 800% for heterogeneous ones), while also improving the success rate and keeping the energy dissipation at very satisfactory levels.
Abstract: We investigate the problem of efficient data collection in wireless sensor networks where both the sensors and the sink move. We especially study the important, realistic case where the spatial distribution of sensors is non-uniform and their mobility is diverse and dynamic. The basic idea of our protocol is for the sink to benefit from the local information that sensors spread in the network as they move, in order
to extract current local conditions and accordingly adjust its trajectory. Thus, sensory motion anyway present in the network serves as a low cost replacement of network information propagation. In particular, we investigate two variations of our method: a) the greedy motion of the sink towards the region of highest density each time and b) taking into account the aggregate density in wider network regions. An extensive comparative evaluation to relevant data collection methods (both randomized and optimized deterministic) demonstrates that our approach achieves significant performance gains, especially in non-uniform placements (but also in uniform ones). In fact, the greedy version of our approach is more suitable in networks where the concentration regions appear in a spatially balanced manner, while the aggregate scheme is more appropriate in networks where the concentration areas are geographically correlated.
Abstract: We consider the problem of planning a mixed line
rates (MLR) wavelength division multiplexing (WDM) transport
optical network. In such networks, different modulation formats
are usually employed to support the transmission at different line
rates. Previously proposed planning algorithms have used a
transmission reach limit for each modulation format/line rate,
mainly driven by single line rate systems. However, transmission
experiments in MLR networks have shown that physical layer
interference phenomena are more significant between
transmissions that utilize different modulation formats. Thus, the
transmission reach of a connection with a specific modulation
format/line rate depends also on the other connections that co-propagate
with it in the network. To plan a MLR WDM network,
we present routing and wavelength assignment (RWA)
algorithms that take into account the adaptation of the
transmission reach of each connection according to the use of the
modulation formats/line rates in the network. The proposed
algorithms are able to plan the network so as to alleviate
interference effects, enabling the establishment of connections of
acceptable quality over paths that would otherwise be prohibited.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves (not just the sinks) are mobile. Furthermore, we focus on mobility scenarios characterized by heterogeneous, highly changing mobility roles in the network. To capture these high dynamics of diverse sensory motion we propose a novel network parameter,
the mobility level, which, although simple and local, quite accurately takes into account both the spatial and speed characteristics of motion. We then propose adaptive data dissemination protocols that use the mobility level estimation to optimize performance, by basically exploiting high mobility (redundant message ferrying) as a cost-effective replacement of flooding, e.g. the sensors tend to dynamically propagate less data in the presence
of high mobility, while nodes of high mobility are favored for moving data around. These dissemination schemes are enhanced by a distance-sensitive probabilistic message flooding inhibition mechanism that further reduces communication cost, especially for fast nodes of high mobility level, and as distance to data destination decreases. Our simulation findings
demonstrate significant performance gains of our protocols compared to non-adaptive protocols, i.e. adaptation increases the success rate and reduces latency (even by 15%) while at the same time significantly reducing energy dissipation (in most cases by even 40%). Also, our adaptive schemes achieve significantly higher message delivery ratio and
satisfactory energy-latency trade-offs when compared to flooding when sensor nodes have
limited message queues.
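One way to picture the adaptive rule (propagate less under high mobility, inhibit flooding near the destination) is the toy forwarding probability below; the linear combination and normalizations are our illustrative assumptions, not the paper's actual estimator:

```python
def forwarding_probability(mobility_level, distance_to_sink, max_distance):
    """Illustrative dissemination rule: a highly mobile node ferries
    data itself, so it forwards fewer copies; additionally, flooding
    is probabilistically inhibited as the message nears its
    destination (distance-sensitive inhibition)."""
    mobility_level = min(max(mobility_level, 0.0), 1.0)   # assumed in [0, 1]
    distance_factor = distance_to_sink / max_distance     # 1 far away, 0 at sink
    return max(0.0, min(1.0, (1.0 - mobility_level) * distance_factor))
```

A static node far from the sink forwards with probability 1, while a fast-moving node close to the sink hardly forwards at all, matching the qualitative behaviour described above.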
Abstract: We introduce a new modelling assumption for wireless sensor networks, that of node redeployment (addition of sensor devices during protocol evolution) and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks and adaptation becomes a powerful technique for protocol design. Under these modelling assumptions, we design, implement and evaluate a new power conservation scheme for efficient data propagation. Our scheme is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards improved operation choices. The scheme is simple, distributed and does not require exchange of control messages between nodes.
Implementing our protocol in software we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of the network simulator ns-2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-offs between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation protocol) and good trade-offs achieved. Furthermore, the redeployment of additional sensors during network evolution and/or the heterogeneous deployment of sensors drastically improve (when compared to "equal total power" simultaneous deployment of identical sensors at the start) the protocol performance (i.e., the success rate increases up to four times while reducing energy dissipation and, interestingly, keeping latency low).
Abstract: Clustering is a crucial network design approach to enable large-scale wireless sensor networks (WSNs) deployments. A large variety of clustering approaches has been presented focusing on different performance metrics. Such protocols usually aim at minimizing communication overhead, evenly distributing roles among the participating nodes, as well as controlling the network topology. Simulations on such protocols are performed using theoretical models that are based on unrealistic assumptions like the unit disk graph communication model, ideal wireless communication channels and perfect energy consumption estimations. With these assumptions taken for granted, theoretical models claim various performance milestones that cannot be achieved in realistic conditions. In this paper, we design a new clustering protocol that adapts to the changes in the environment and the needs and goals of the user applications. We address the issues that hinder its performance due to the real environment conditions and provide a deployable protocol. The implementation, integration and experimentation of this new protocol and its optimizations were performed using the WISEBED framework. We apply our protocol in multiple indoors wireless sensor testbeds with multiple experimental scenarios to showcase scalability and trade-offs between network properties and configurable protocol parameters. By analysis of the real world experimental output, we present results that depict a more realistic view of the clustering problem, regarding adapting to environmental conditions and the quality of topology control. Our study clearly demonstrates the applicability of our approach and the benefits it offers to both research & development communities.
Abstract: Wireless Sensor Networks are by nature highly dynamic and communication between sensors is completely ad hoc, especially when mobile devices are part of the setup. Numerous protocols and applications proposed for such networks
operate on the assumption that knowledge of the neighborhood is a priori available to all nodes. As a result, WSN deployments need to use or implement from scratch a neighborhood discovery mechanism. In this work we present a new protocol based on adaptive periodic beacon exchanges. We totally avoid continuous beaconing by adjusting the rate of broadcasts using the concept of consistency over the understanding of neighborhood that nearby devices share. We propose, implement and evaluate our adaptive neighborhood discovery protocol over our experimental testbed and using large scale simulations. Our results indicate that the
new protocol operates more efficiently than existing reference implementations while it provides valid information to applications that use it. Extensive performance evaluation indicates that it successfully reduces generated network traffic by 90% and increases network lifetime by 20% compared to existing mechanisms that rely on continuous beaconing.
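The consistency-driven idea (back off while the neighbourhood view stays stable, reset on change) can be sketched as a Trickle-like rule; the concrete doubling policy and bounds below are illustrative assumptions, not the protocol's actual parameters:

```python
def next_beacon_interval(interval, neighbourhood_changed,
                         min_interval=1.0, max_interval=64.0):
    """Illustrative adaptive beaconing rule.

    While the local view of the neighbourhood stays consistent with
    what nearby devices report, the beacon interval doubles (up to a
    cap), avoiding continuous beaconing; any detected change resets
    the interval to its minimum for fast re-discovery.
    """
    if neighbourhood_changed:
        return min_interval        # topology changed: beacon aggressively again
    return min(interval * 2.0, max_interval)
```

In a stable neighbourhood this exponential back-off is what drives the traffic reduction reported above, while the reset preserves discovery accuracy under mobility.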
Abstract: We study the problem of secure routing in wireless sensor networks where the sensors and the sink can move during the execution of remote monitoring applications and communication is not necessarily directed towards the sink. We present a new routing protocol that builds upon a collection of mechanisms so that the integrity and confidentiality of the information reported to the controlling authorities is secured. The mechanisms are simple to implement, rely only on local information and require O(1) storage per sensor. The protocol adapts to mobility and security challenges that may arise throughout the execution of the application. We take special care for wireless sensor networks that monitor dynamically changing environments and applications that require its operation for extended periods of time. APSR can detect when the current network conditions are about to change and becomes ready for adaptation to the new conditions. We demonstrate how to deal with inside and outside attacks even when the network is adapting to internal and/or external events.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves
(not just the sinks) are mobile. Furthermore, we focus on mobility
scenarios characterized by heterogeneous, highly changing mobility
roles in the network.
To capture these high dynamics of diverse sensory motion
we propose a novel network parameter, the mobility level, which, although
simple and local, quite accurately takes into account both the
spatial and speed characteristics of motion. We then propose
adaptive data dissemination protocols that use the
mobility level estimation to optimize performance, by basically
exploiting high mobility (redundant message ferrying) as a cost-effective
replacement of flooding, e.g., the sensors tend to dynamically propagate
less data in the presence of high mobility, while nodes of high mobility
are favored for moving data around.
These dissemination schemes are enhanced by a distance-sensitive
probabilistic message flooding inhibition mechanism that
further reduces communication cost, especially for fast nodes
of high mobility level, and as distance to data destination
decreases. Our simulation findings demonstrate significant
performance gains of our protocols compared to non-adaptive
protocols, i.e., adaptation increases the success rate and reduces
latency (even by 15\%) while at the same time significantly
reducing energy dissipation (in most cases by even 40\%).
Also, our adaptive schemes achieve significantly
higher message delivery ratio and satisfactory energy-latency
trade-offs when compared to flooding when sensor nodes have limited message queues.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves
(not just the sinks) are mobile. We focus on mobility
scenarios characterized by heterogeneous, highly changing mobility
roles in the network.
To capture these high dynamics
we propose a novel network parameter, the mobility level, which, although
simple and local, quite accurately takes into account both the
spatial and speed characteristics of motion. We then propose
adaptive data dissemination protocols that use the
mobility level estimation to improve performance. By basically
exploiting high mobility (redundant message ferrying) as a cost-effective
replacement of flooding, e.g., the sensors tend to dynamically propagate
less data in the presence of high mobility, while nodes of high mobility
are favored for moving data around.
These dissemination schemes are enhanced by a distance-sensitive
probabilistic message flooding inhibition mechanism that
further reduces communication cost, especially for fast nodes
of high mobility level, and as distance to data destination
decreases. Our simulation findings demonstrate significant
performance gains of our protocols compared to non-adaptive
protocols, i.e., adaptation increases the success rate and reduces
latency (even by 15\%) while at the same time significantly
reducing energy dissipation (in most cases by even 40\%).
Also, our adaptive schemes achieve significantly
higher message delivery ratio and satisfactory energy-latency
trade-offs when compared to flooding when sensor nodes have limited message queues.
Abstract: In the last few years we have been witnessing a growing trend towards a stronger connection between the natural
and the digital domain; the digital domain steadily becomes a part of our everyday lives, while we monitor
natural processes and activities in greater detail with the aid of the digital domain. Wireless Sensor Networks
are a recent technological advancement that attempts to deal specifically with both trends.
Abstract: Data propagation in wireless sensor networks can be performed either by hop-by-hop single transmissions or by multi-path broadcast of data. Although several energy-aware MAC layer protocols exist that operate very well in the case of single point-to-point transmissions, none is especially designed and suitable for multiple broadcast transmissions. The key idea of our protocols is the passive monitoring of local network conditions and the adaptation of the protocol operation accordingly. The main contribution of our adaptive method is to proactively avoid collisions by implicitly and early enough sensing the need for collision avoidance. Using the above ideas, we design, implement and evaluate three different, new strategies for proactive adaptation. We show, through a detailed and extended simulation evaluation, that our parameter-based family of protocols for multi-path data propagation significantly reduce the number of collisions and thus increase the rate of successful message delivery (to above 90%) by achieving satisfactory trade-offs with the average propagation delay. At the same time, our protocols are shown to be very energy efficient, in terms of the average energy dissipation per delivered message.
Abstract: We consider sensor networks where the sensor nodes are attached on entities that move in a highly dynamic, heterogeneous manner. To capture this mobility diversity we introduce a new network parameter, the direction-aware mobility
level, which measures how fast and close each mobile node is expected to get to the data destination (the sink). We then provide local, distributed data dissemination protocols
that adaptively exploit the node mobility to improve performance. In particular, "high" mobility is used as a low cost replacement for data dissemination (due to the ferrying of data), while in the case of "low" mobility either a) data propagation redundancy is increased (when highly mobile neighbors exist) or b) long-distance data transmissions are used (when the entire neighborhood is of low mobility) to accelerate data dissemination towards the sink. An extensive performance comparison to relevant methods from
the state of the art demonstrates significant improvements, i.e., latency is reduced by even 4 times while keeping energy dissipation and delivery success at very satisfactory levels.
Abstract: We investigate the problem of efficient wireless energy recharging in Wireless Rechargeable Sensor Networks (WRSNs). In
such networks a special mobile entity (called the Mobile Charger) traverses the network and wirelessly replenishes the energy
of sensor nodes. In contrast to most current approaches, we envision methods that are distributed, adaptive and use limited
network information. We propose three new, alternative protocols for efficient recharging, addressing key issues which we
identify, most notably (i) to what extent each sensor should be recharged, (ii) what is the best split of the total energy between
the charger and the sensors and (iii) what are good trajectories the MC should follow. One of our protocols (LRP) performs
some distributed, limited sampling of the network status, while another one (RTP) reactively adapts to energy shortage alerts
judiciously spread in the network. As detailed simulations demonstrate, both protocols significantly outperform known state
of the art methods, while their performance gets quite close to the performance of the global knowledge method (GKP) we
also provide, especially in heterogeneous network deployments.
Abstract: Energy consumption accounts for a large portion of the budget of school buildings. At the same time, the students that use such facilities are the adults of the years to come; thus, should they embrace energy-aware behaviors, sustainable results with respect to energy efficiency can be anticipated. GAIA is a research project targeting this user domain, proposing a set of applications that a) aims at raising awareness, prompting action and fostering engagement in energy efficiency enhancement, and b) is adaptable to the needs of each facility/community. This application set relies on an IoT sensing infrastructure, as well as on the use of humans as sensors, to create situational awareness.
Abstract: In this paper, we propose an efficient non-linear task workload prediction mechanism incorporated with a fair scheduling algorithm for task allocation and resource management in Grid computing. Workload prediction is accomplished in a Grid middleware approach using a non-linear model expressed as a series of finite known functional components, using concepts of functional analysis. The coefficients of the functional components are obtained using a training set of appropriate samples, the pairs of which are estimated based on a runtime estimation model that relies on a least squares approximation scheme. The advantages of the proposed non-linear task workload prediction scheme are that (i) it is not constrained by analysis of source code (analytical methods), which is practically impossible to implement in complicated real-life applications, and (ii) it does not rely on the variations of the workload statistics as the statistical approaches do. The predicted task workload is then exploited by a novel scheduling algorithm, enabling a fair, Quality of Service oriented resource management so that some tasks are not favored over others. The algorithm is based on estimating the adjusted fair completion times of the tasks for task order selection and on an earliest completion time strategy for the grid resource assignment. Experimental results and comparisons with traditional scheduling approaches, as implemented in the framework of the European Union funded research projects GRIA and GRIDLAB grid infrastructures, reveal that the proposed method outperforms them.
Abstract: Smart cities are becoming a vibrant application domain for a number of science fields. As such, service providers and stakeholders are beginning to integrate co-creation aspects into current implementations to shape the future smart city solutions. In this context, holistic solutions are required to test such aspects in real city-scale IoT deployments, considering the complex city ecosystems. In this work, we discuss OrganiCity's implementation of an Experimentation-as-a-Service framework, presenting a toolset that allows developing, deploying and evaluating smart city solutions in a one-stop shop manner. This is the first time such an integrated toolset is offered in the context of a large-scale IoT infrastructure, which spans across multiple European cities. We discuss the design and implementation of the toolset, presenting our view on what Experimentation-as-a-Service should provide, and how it is implemented. We present initial feedback from 25 experimenter teams that have utilized this toolset in the OrganiCity project, along with a discussion on two detailed actual use cases to validate our approach. Learnings from all experiments are discussed as well as architectural considerations for platform scaling. Our feedback from experimenters indicates that Experimentation-as-a-Service is a viable and useful approach.
Abstract: We argue the case for a new paradigm for architecting structured P2P overlay networks, coined AESOP. AESOP consists of 3 layers: (i) an architecture, PLANES, that ensures significant performance speedups, assuming knowledge of altruistic peers; (ii) an accounting/auditing layer, AltSeAl, that identifies and validates altruistic peers; and (iii) SeAledPLANES, a layer that facilitates the coordination/collaboration of the previous two components. We briefly present these components along with experimental and analytical data of the promised significant performance gains and the related overhead. In light of these very encouraging results, we put this three-layer architecture paradigm forth as the way to structure the P2P overlay networks of the future.
Abstract: In this paper, we propose simple protocols for enabling two communicating agents that may have never met before to extract common knowledge out of any initial knowledge that each of them possesses. The initial knowledge from which the agents start may even be independent of each other, implying that the two agents need not have had previous access to common information sources. In addition, the common knowledge extracted upon the termination of the protocols depends, in a fair way, on the (possibly independent) information items initially known, separately, by the two agents. It is fair in the sense that there is a negotiation between the two agents instead of one agent forcing the other to conform to its own knowledge. These protocols may be extended in order to support security applications where the establishment of a common knowledge is required. Moreover, the implementation of the protocols leads to reasonably small code that can also fit within resource-limited devices involved in any communication network while, at the same time, it is efficient as simulation results demonstrate.
Abstract: Wireless sensor networks are comprised of a vast number of
ultra-small autonomous computing, communication and sensing devices,
with restricted energy and computing capabilities, that co-operate
to accomplish a large sensing task. Such networks can be very useful
in practice, e.g.~in the local monitoring of ambient conditions and
reporting them to a control center. In this paper we propose a
distributed group key establishment protocol that uses mobile agents
(software) and is particularly suitable for energy constrained,
dynamically evolving ad-hoc networks. Our approach totally avoids
the construction and the maintenance of a distributed structure that
reflects the topology of the network. Moreover, it trades off
complex message exchanges for a small amount of additional
local computation, making it applicable to dense and dynamic
sensor networks. The extra computations are simple for the devices
to implement and are evenly distributed across the participants of
the network leading to good energy balance. We evaluate the
performance of our protocol in a simulated environment and compare
our results with existing group key establishment protocols. The
security of the protocol is based on the Diffie-Hellman problem;
in our experiments we used its elliptic curve analog. Our findings
basically indicate the feasibility of implementing our protocol in
real sensor network devices and highlight the advantages and
disadvantages of each approach given the available technology and
the corresponding efficiency (energy, time) criteria.
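The contributory flavor of such group key establishment can be illustrated with a toy sketch (my own illustration, not the paper's actual protocol or message flow; the paper's experiments use the elliptic curve analogue, while this sketch uses classic modular Diffie-Hellman with deliberately undersized parameters):

```python
import secrets

# Toy parameters only -- a real deployment would use the elliptic-curve
# analogue over a standardized group, not this small Mersenne prime.
P = 2**127 - 1   # far too small for real security; illustration only
G = 3

def group_key(exponents):
    """Fold each member's secret exponent into a shared group value.

    Since (G^a)^b = G^(a*b) mod P, the result is independent of the
    order in which members contribute, so all orderings agree.
    """
    val = G
    for s in exponents:
        val = pow(val, s, P)
    return val

# Three members pick secret exponents; any contribution order agrees.
a, b, c = (secrets.randbelow(P - 2) + 1 for _ in range(3))
assert group_key([a, b, c]) == group_key([c, a, b])
```

The commutativity of the exponentiations is the property that lets participants converge on a common key without building or maintaining a distributed structure over the network topology.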
Abstract: We investigate the problem of efficient data collection in wireless sensor networks where both the sensors and the sink move. We especially study the important, realistic case where the spatial distribution of sensors is non-uniform and their mobility is diverse and dynamic. The basic idea of our protocol is for the sink to benefit from the local information that sensors spread in the network as they move, in order to extract current local conditions and accordingly adjust its trajectory. Thus, sensor motion, anyway present in the network, serves as a low cost replacement of network information propagation. In particular, we investigate two variations of our method: a) the greedy motion of the sink towards the region of highest density each time and b) taking into account the aggregate density in wider network regions. An extensive comparative evaluation against relevant data collection methods (both randomized and optimized deterministic) demonstrates that our approach achieves significant performance gains, especially in non-uniform placements (but also in uniform ones). In fact, the greedy version of our approach is more suitable in networks where the concentration regions appear in a spatially balanced manner, while the aggregate scheme is more appropriate in networks where the concentration areas are geographically correlated. We also investigate the case of multiple sinks by suggesting appropriate distributed coordination methods.
Abstract: Game theory offers models and analytical tools for understanding phenomena that arise when selfish entities (players), each with its own strategies and interests, interact. The goal of this dissertation is the study of problems connected both to game theory and to the theory of computation. The specific topics we worked on and the results of our research are summarized below. Computation of Approximate Nash Equilibria. One of the most fundamental algorithmic problems in game theory is computing a Nash equilibrium of a given game. A Nash equilibrium is a combination of strategies, one for each player, such that no player can increase their payoff by deviating unilaterally, i.e., by changing their strategy. It was recently proved, however, that computing a Nash equilibrium is complete for the complexity class PPAD, a fact that turned research towards the search for ε-approximate Nash equilibria, i.e., strategy combinations with the property that no player can increase their payoff by more than ε by deviating unilaterally. In this dissertation we developed two of the first algorithms for computing an ε-approximate Nash equilibrium for the case where ε is a constant. The approximations achieved by our algorithms are ε = 3/4 and ε = (2+λ)/4 respectively, where λ is the minimum, over all Nash equilibria, payoff of a player. Moreover, we studied a broad class of random two-player games, for which we computed a very good ε-approximate Nash equilibrium, with ε tending to 0 as the number of available strategies of the players tends to infinity. Load Balancing and Incomplete Information.
The principles of game theory are useful for analyzing the effect that the selfish and competitive behavior of users has on the overall performance of a shared-resource system. In this direction, we focused on the load balancing problem. Specifically, each of n users, each with a different load, must choose one of two available servers to which to assign its load. The goal of each user is to minimize its cost, defined as the expected total load on the server it selects. In the case where every user knows both the size of its own load and the sizes of the other users' loads, the induced game is essentially a weighted congestion game, i.e., it belongs to a class of games that have been studied extensively in the recent literature. In this dissertation we studied the case where users have incomplete information about the exact sizes of their loads. We focused on the so-called threshold strategies, i.e., strategies of the form.
Abstract: Optical network design problems fall in the broad
category of network optimization problems. We give a short
introduction on network optimization and general algorithmic
techniques that can be used to solve complex and difficult
network design problems. We apply these techniques to address
the static Routing and Wavelength Assignment problem that is
related to the planning phase of a WDM optical network. We present
simulation results to evaluate the performance of the proposed
algorithmic solution.
Abstract: For many random Constraint Satisfaction Problems, by now, we have asymptotically tight estimates of
the largest constraint density for which they have solutions. At the same time, all known polynomial-time algorithms
for many of these problems already completely fail to find solutions at much smaller densities. For example, it is
well-known that it is easy to color a random graph using twice as many colors as its chromatic number χ. Indeed, some of the simplest possible coloring algorithms already achieve this goal. Given the simplicity of those algorithms, one would expect there is a lot of room for improvement. Yet, to date, no algorithm is known that uses (2 − ε)χ colors, in spite of efforts by numerous researchers over the years.
In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we believe it is natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1, …, k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly, the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ε)χ. We prove that a completely analogous phase transition also occurs both in random k-SAT and in random hypergraph 2-coloring, and that for each problem, its location corresponds precisely with the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to prove rigorously much of the celebrated 1-step Replica-Symmetry-Breaking hypothesis of statistical physics for random CSPs.
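The "factor of 2" benchmark above can be made concrete with the simplest of those coloring algorithms, a first-fit greedy pass over a random graph (an illustrative sketch with parameters of my choosing, not code from the paper):

```python
import random

def greedy_colour(n, p, seed=0):
    """Colour a G(n, p) random graph greedily: scan vertices in index
    order and give each the smallest colour unused by its already
    coloured neighbours; return the number of colours used."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    colour = {}
    for v in range(n):
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return max(colour.values()) + 1

# On dense G(n, 1/2), greedy needs about n / log2(n) colours, roughly
# twice the chromatic number n / (2 log2(n)).
colours_used = greedy_colour(300, 0.5)
```

Beating this naive bound by any constant factor below 2, as the abstract explains, is exactly what no known polynomial-time algorithm achieves.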
Abstract: For a number of optimization problems on random graphs
and hypergraphs, e.g., k-colorings, there is a very big gap between the
largest average degree for which known polynomial-time algorithms can
find solutions, and the largest average degree for which solutions provably
exist. We study this phenomenon by examining how sets of solutions
evolve as edges are added. We prove in a precise mathematical sense that,
for each problem studied, the barrier faced by algorithms corresponds
to a phase transition in the problem's solution-space geometry. Roughly
speaking, at some problem-specific critical density, the set of solutions
shatters and goes from being a single giant ball to exponentially many,
well-separated, tiny pieces. All known polynomial-time algorithms work
in the ball regime, but stop as soon as the shattering occurs. Besides
giving a geometric view of the solution space of random instances our
results provide novel constructions of one-way functions.
Abstract: As a result of recent significant technological advances, a new computing and communication environment, Mobile Ad Hoc Networks (MANET), is about to enter the mainstream. A multitude of critical aspects, including mobility, severe limitations and limited reliability, create a new set of crucial issues and trade-offs that must be carefully taken into account in the design of robust and efficient algorithms for these environments. The communication among mobile hosts is one among the many issues that need to be resolved efficiently before MANET becomes a commodity.
In this paper, we propose to discuss the communication problem in MANET as well as present some characteristic techniques for the design, the analysis and the performance evaluation of distributed communication protocols for mobile ad hoc networks. More specifically, we propose to review two different design techniques. While the first type of protocols tries to create and maintain routing paths among the hosts, the second set of protocols uses a randomly moving subset of the hosts that acts as an intermediate pool for receiving and delivering messages. We discuss the main design choices for each approach, along with performance analysis of selected protocols.
Abstract: The Lov{\'a}sz Local Lemma (LLL) is a powerful tool that can be used to prove that an object
having none of a set of bad properties exists, using the probabilistic method. In many applications
of the LLL it is also desirable to explicitly construct the combinatorial object. Recently it was
shown that this is possible using a randomized algorithm in the full asymmetric LLL setting [R.
Moser and G. Tardos, 2010]. A strengthening of the LLL for the case of dense local neighborhoods
proved in [R. Bissacot et al., 2010] was recently also made constructive in [W. Pegden, 2011]. In
another recent work [B. Haupler, B. Saha, A. Srinivasan, 2010], it was proved that the algorithm
of Moser and Tardos is still efficient even when the number of events is exponential. Here we
prove that these last two contributions can be combined to yield a new version of the LLL.
Abstract: We discuss some new algorithmic and complexity issues in
coalitional and dynamic/evolutionary games, related to the understanding
of modern selfish and complex networks.
In particular: (a) We examine the achievement of equilibria via natural
distributed and greedy approaches in networks. (b) We present a model
of a coalitional game in order to capture the anarchy cost and complexity
of constructing equilibria in such situations. (c) We propose a stochastic
approach to some kinds of local interactions in networks, that can be
viewed also as extensions of the classical evolutionary game theoretic
setting.
Abstract: This state-of-the-art survey features papers that were selected after an open call following the International Dagstuhl Seminar on Algorithmic Methods for Railway Optimization held in Dagstuhl Castle, Germany, in June 2004. The second part of the volume constitutes the refereed proceedings of the 4th International Workshop on Algorithmic Methods and Models for Optimization of Railways held in Bergen, Norway, in September 2004.
The volume covers algorithmic methods for analyzing and solving problems arising in railway optimization, with a special focus on the interplay between railway and other public transportation systems. Besides algorithmics and mathematical optimization, the relevance of formal models and the influence of applications on problem modeling are also considered. In addition, the papers address experimental studies and useful prototype implementations.
The 17 full papers presented here were carefully reviewed and selected from numerous submissions and are organized into topical sections covering network and line planning, timetabling and timetable information, rolling stock and crew scheduling, and real-time operations.
Abstract: An ever growing emphasis is put nowadays in developing personalized journey planning and renewable mobility services in smart cities. These services combine means of scheduled-based public transport and electric vehicles or bikes, using crowdsourcing techniques for collecting real-time traffic information and for assessing the recommended routes. The goal is to develop an information system that will allow the fast, real-time computation of best routes.
The main challenges in developing such an information system are both technological and algorithmic. The technological challenge concerns the collection, storage, management, and updating of a huge volume of transport data that are usually time-dependent, and the provision (through these data) of personalized renewable mobility services in smartphones. This challenge is typically confronted by creating a cloud infrastructure that on the one hand will support the storage, management, and updating of data, while on the other hand it will handle the necessary data feed to the smartphone applications for providing the users with the requested best routes.
The algorithmic challenge concerns the development of innovative algorithms for the efficient provision of journey planning services in smartphones, based on data they will receive from the cloud infrastructure. These services guarantee the computation of realistic and useful best routes, as well as the updating of the precomputed (route and timetable) information, in case of delays of scheduled public transport vehicles, so that the users can update their routes to destination online. The goal is to develop an algorithmic basis for supporting modern renewable mobility services (information systems), such as "mobility on demand'' (where the next leg of a journey is decided in real-time) and "door-to-door'' personalized mobility, in urban scheduled-based public transport environments. Scheduled-based public transport information systems should not only compute in real-time end-user queries requesting best routes, but also update the timetable information in case of delays.
The core algorithmic issues of mobility and journey planning (regarding the computation of optimal routes under certain criteria) in scheduled-based public transport systems concern the efficient solution of the fundamental earliest arrival (EA) problem (compute a journey from station S to station T minimizing the overall traveling time required to complete the journey), the minimum number of
transfers (MNT) problem (compute a journey from station S to station T minimizing the number of times a passenger is required to change vehicle), and the efficient updating of timetable information system in case of vehicle delays. The EA and MNT problems have been extensively studied in the literature under two main approaches: the array-based modeling (where the timetable is represented as an array) and the graph-based modeling (where the timetable is represented as a graph). Experimental results have shown so far that the array-based approaches are faster in terms of query time than graph-based ones, as they are able to better exploit data locality and do not rely on priority queues. On the other hand, the array-based approaches have not been theoretically or experimentally studied as far as the efficient updating of timetable information, in case of delays, is concerned.
In this thesis, new graph-based models are being developed that solve efficiently the aforementioned fundamental algorithmic mobility problems in urban scheduled-based public transport information systems, along with a mobile application (journey planner) running on Android-based smartphones that includes a service for the evaluation of the recommended routes by the users. In particular:
(a) An extensive comparative evaluation was conducted on graph-based dynamic models that represent big data volumes regarding their suitability for representing timetable information. The study confirmed that the realistic time-expanded model is the most suitable for representing timetable information.
(b) Two new graph-based models have been developed for representing timetable information (in a timetable information system), the reduced time-expanded model and the dynamic timetable model (DTM), both of which are more space-efficient than the realistic time-expanded model. For both new models, efficient algorithms were developed for fast answering of EA and MNT queries, as well as for updating the timetable information representation in case of delays.
(c) An experimental evaluation was conducted with the new graph-based models and their associated query and update algorithms on a set of 14 real-world scheduled-based transportation systems, including the metropolitan areas of Berlin, Athens, London, Rome, and Madrid. The experimental results showed that the query algorithms of the reduced time-expanded model are superior to those of the DTM model, while the reverse is true regarding the update algorithms. In addition, the experimental study showed that the query algorithms of the new graph-based models compete favorably with those of the best array-based models.
(d) A mobile, cloud-based, journey planner (information system) was developed whose core algorithmic engine builds upon the new graph-based models. The mobile application is accompanied by a service that allows the users to assess the recommended journeys. The journey planner demonstrates the practicality of the new graph-based models and their associated query and update algorithms.
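For intuition on the earliest-arrival (EA) query discussed above, here is a minimal connection scan over a timetable stored as a sorted array, in the spirit of the array-based approaches the thesis compares against (the toy timetable and the function name are mine, not the thesis code):

```python
def earliest_arrival(connections, source, target, start_time):
    """Scan elementary connections (dep_stop, dep_time, arr_stop, arr_time)
    in departure-time order, relaxing the best-known arrival time of each
    stop; exploits data locality instead of a priority queue."""
    best = {source: start_time}          # earliest known arrival per stop
    for dep, t_dep, arr, t_arr in sorted(connections, key=lambda c: c[1]):
        if best.get(dep, float("inf")) <= t_dep and t_arr < best.get(arr, float("inf")):
            best[arr] = t_arr
    return best.get(target)              # None if target is unreachable

# Toy timetable: a direct A->C trip and a faster A->B->C transfer.
timetable = [("A", 0, "B", 10), ("A", 5, "C", 40), ("B", 15, "C", 25)]
```

Starting at A at time 0, the scan finds C reachable at time 25 via the transfer at B, whereas a later departure can only catch the slower direct trip.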
Abstract: In this work, we study the Population Protocol model of Angluin et al. from the perspective of protocol verification. In particular, we are interested in algorithmically solving the problem of determining whether a given population protocol conforms to its specifications. Since this is the first work on verification of population protocols, we redefine most notions of population protocols in order to make them suitable for algorithmic verification. Moreover, we formally define the general verification problem and some interesting special cases. All these problems are shown to be NP-hard. We next propose some first algorithmic solutions for a natural special case. Finally, we conduct experiments and algorithmic engineering in order to improve our verifiers' running times.
Abstract: The 3-coloring problem is well known to be NP-complete. It is also well known that it remains NP-complete when the input is restricted to graphs with diameter 4. Moreover, assuming the Exponential Time Hypothesis (ETH), 3-coloring cannot be solved in time 2^{o(n)} on graphs with n vertices and diameter at most 4. In spite of the extensive studies of the 3-coloring problem with respect to several basic parameters, the complexity status of this problem on graphs with small diameter, i.e. with diameter at most 2, or at most 3, has been a longstanding and challenging open question. In this paper we investigate graphs with small diameter. For graphs with diameter at most 2, we provide the first subexponential algorithm for 3-coloring, with complexity 2^{O(√(n log n))}, which is asymptotically the same as the currently best known time complexity for the graph isomorphism problem. Furthermore we extend the notion of an articulation vertex to that of an articulation neighborhood, and we provide a polynomial algorithm for 3-coloring on graphs with diameter 2 that have at least one articulation neighborhood. For graphs with diameter at most 3, we establish the complexity of 3-coloring, even for the case of triangle-free graphs. Namely we prove that for every ε ∈ [0, 1), 3-coloring is NP-complete on triangle-free graphs of diameter 3 and radius 2 with n vertices and minimum degree δ = Θ(n^ε). Moreover, assuming ETH, we use three different amplification techniques of our hardness results, in order to obtain for every ε ∈ [0, 1) subexponential asymptotic lower bounds for the complexity of 3-coloring on triangle-free graphs with diameter 3 and minimum degree δ = Θ(n^ε). Finally, we provide a 3-coloring algorithm with running time 2^{O(min{δΔ, (n/δ) log δ})} for arbitrary graphs with diameter 3, where n is the number of vertices and δ (resp. Δ) is the minimum (resp. maximum) degree of the input graph. To the best of our knowledge, this algorithm is the first subexponential algorithm for graphs with δ = ω(1) and for graphs with δ = O(1) and Δ = o(n). Due to the above lower bounds of the complexity of 3-coloring, the running time of this algorithm is asymptotically almost tight when the minimum degree of the input graph is δ = Θ(n^ε), where ε ∈ [1/2, 1).
Abstract: In this work we study the important problem of colouring squares of planar graphs (SQPG). We design and implement two new algorithms that colour SQPG in a different way. We call these algorithms MDsatur and RC. We have also implemented and experimentally evaluated the performance of most of the known approximation colouring algorithms for SQPG [14, 6, 4, 10]. We compare the quality of the colourings achieved by these algorithms with the colourings obtained by our algorithms and with the results obtained from two well-known greedy colouring heuristics. The heuristics are mainly used for comparison reasons and unexpectedly give very good results. Our algorithm MDsatur outperforms the known algorithms as shown by the extensive experiments we have carried out.
The planar graph instances whose squares are used in our experiments are “non-extremal” graphs obtained by LEDA and hard colourable graph instances that we construct.
The most interesting conclusions of our experimental study are:
1) all colouring algorithms considered here have almost optimal performance on the squares of “non-extremal” planar graphs. 2) all known colouring algorithms especially designed for colouring SQPG give significantly better results, even on hard to colour graphs, when the vertices of the input graph are randomly named. On the other hand, the performance of our algorithm, MDsatur, becomes worse in this case; however, it still has the best performance compared to the others. MDsatur colours the tested graphs with 1.1 OPT colours in most of the cases, even on hard instances, where OPT denotes the number of colours in an optimal colouring. 3) we construct worst case instances for the algorithm of Fotakis et al. [6], which show that its theoretical analysis is tight.
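MDsatur builds on the classical DSATUR heuristic; for reference, a compact sketch of plain DSATUR (the textbook heuristic, not the paper's MDsatur variant) looks like this:

```python
def dsatur(adj):
    """DSATUR greedy colouring: repeatedly pick the uncoloured vertex
    whose neighbourhood already uses the most distinct colours (ties
    broken by degree) and give it the smallest colour absent there."""
    n = len(adj)
    colour = [None] * n
    sat = [set() for _ in range(n)]      # colours seen around each vertex
    for _ in range(n):
        v = max((u for u in range(n) if colour[u] is None),
                key=lambda u: (len(sat[u]), len(adj[u])))
        c = 0
        while c in sat[v]:
            c += 1
        colour[v] = c
        for w in adj[v]:
            sat[w].add(c)
    return colour

# A 5-cycle needs 3 colours; DSATUR finds an optimal colouring here.
assert max(dsatur([[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]])) + 1 == 3
```

Prioritising the most saturated vertex is what distinguishes DSATUR from the naive first-fit greedy, and it is the starting point the paper's algorithm refines for squares of planar graphs.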
Abstract: This work focuses on the computational power of the Mediated Population Protocol model on complete communication graphs and initially identical edges (SMPP). In particular, we investigate the class MPS of all predicates that are stably computable by the SMPP model. It is already known that MPS is in the symmetric subclass of NSPACE(n^2). Here we prove that this inclusion holds with equality, thus, providing the following exact characterization for MPS: A predicate is in MPS iff it is symmetric and is in NSPACE(n^2).
Abstract: We present an all-optical circuit that performs header separation and insertion to 10 Gb/s variable
length optical packets using a packet clock recovery circuit to control a 2x2 optical exchange bypass switch.
Abstract: In this paper, we demonstrate clock extraction from
10 Gb/s asynchronous short data packets. Successful clock
acquisition is achieved from data packets arriving at time
intervals of only 1.5 ns, irrespective of their precise phase
relation. The clock recovery circuit used consists of a Fabry-
Perot filter (FPF) and a non-linear UNI gate and requires very
short time for synchronization.
Abstract: We present a novel all-optical half-adder using only two semiconductor interferometric gates. The first gate
simultaneously performs logical AND/OR Boolean operations and the second outputs SUM and CARRY bits of 10
Gb/s pseudorandom data patterns.
Abstract: Packet clock generation from flag bits is demonstrated, using a Fabry-Perot filter followed by a semiconductor optical amplifier. Ten clock pulses are generated from a single pulse with less than 0.45 dB amplitude modulation.
Abstract: This paper reviews the work performed under the
European ESPRIT project DO_ALL (Digital OpticAL Logic
modules) spanning from advanced devices (semiconductor optical
amplifiers) to all-optical modules (laser sources and gates) and
from optical signal processing subsystems (packet clock recovery,
optical write/store memory, and linear feedback shift register) to
their integration in the application level for the demonstration of
nontrivial logic functionality (all-optical bit-error-rate tester and
a 2x2 exchange-bypass switch). The successful accomplishment
of the project's goals has opened the road for the implementation
of more complex ultra-high-speed all-optical signal processing
circuits that are key elements for the realization of all-optical
packet switching networks.
Abstract: We demonstrate a simple all-optical technique to detect the beginning of a packet for packet-rate
synchronization. It employs a packet clock recovery unit and a SOA and it requires no special data encoding to operate.
Abstract: Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar, and sparse networks. The approach used is to preprocess the input n-vertex network so that afterward, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an O(n log n) preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time O(n^2). This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, ã, of the input network. The parameter ã varies between 1 and Θ(n); the algorithms perform well when ã = o(n). The value of a min-cut can be found in time O(n + ã^2 log ã) and all-pairs min-cut can be solved in time O(n^2 + ã^4 log ã) for sparse networks. The corresponding running times for planar networks are O(n + ã log ã) and O(n^2 + ã^3 log ã), respectively. The latter bounds depend on a result of independent interest; outerplanar networks have small "mimicking" networks that are also outerplanar.
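The preprocessing scheme itself is not reproduced here, but the quantity being queried, the value of a min-cut between two vertices, can always be computed directly via the max-flow min-cut theorem. A minimal Edmonds–Karp sketch (our illustration, not the paper's algorithm):

```python
from collections import deque

def min_cut_value(n, cap, s, t):
    """Value of a minimum s-t cut, computed as the max s-t flow
    (Edmonds-Karp: augment along shortest residual paths)."""
    res = {}                                   # residual capacities
    adj = [set() for _ in range(n)]
    for (u, v), c in cap.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
        adj[u].add(v); adj[v].add(u)
    flow = 0
    while True:
        parent = [-1] * n; parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:           # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if parent[v] == -1 and res[(u, v)] > 0:
                    parent[v] = u; q.append(v)
        if parent[t] == -1:                    # no augmenting path: done
            return flow
        path, v = [], t
        while v != s:
            path.append((parent[v], v)); v = parent[v]
        aug = min(res[e] for e in path)        # bottleneck capacity
        for (u, v) in path:
            res[(u, v)] -= aug; res[(v, u)] += aug
        flow += aug
```

For example, on the undirected path 0–1–2 with edge capacities 3 and 2, the min-cut value between 0 and 2 is 2; the bounded tree-width result above answers such queries in constant time after preprocessing.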
Abstract: In this paper, we consider the problem of energy-balanced data propagation in wireless sensor networks and we generalise previous works by allowing realistic energy assignment. A new modelling of the process of energy consumption as a random walk, along with a new analysis, is proposed. Two new algorithms are presented and analysed. The first one is easy to implement and fast to execute. However, it needs a priori assumptions on the process generating the data to be propagated. The second algorithm overcomes this need by inferring information from the observation of the process. Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes. This represents an important contribution for energy-balanced data propagation in wireless sensor networks due to their highly dynamic nature.
Abstract: In this paper we study the problem of basic communication
in ad-hoc mobile networks where the deployment area changes in a
highly dynamic way and is unknown. We call such networks
highly changing ad-hoc mobile networks.
For such networks we investigate an efficient communication protocol which extends
the idea (introduced in [WAE01,POMC01]) of exploiting the co-ordinated
motion of a small part of an ad-hoc mobile
network (the ``runners support'') to achieve
very fast communication between any two mobile users of the network.
The basic idea of the new protocol presented here is, instead
of using a fixed sized support for the whole duration of the protocol,
to employ a support of some initial (small) size which
adapts (given some time which can be made fast enough) to the
actual levels of traffic and the
(unknown and possibly rapidly changing) network area by
changing its size in order to converge to an optimal size,
thus satisfying certain Quality of Service criteria.
We provide here some proofs of correctness and fault tolerance
of this adaptive approach and we also provide analytical results
using Markov Chains and random walk techniques to show that such
an adaptive approach is, for this class of ad-hoc mobile networks, significantly more efficient than a simple non-adaptive
implementation of the basic ``runners support'' idea.
Abstract: We introduce a new modelling assumption in wireless sensor networks, that of node redeployment (addition of sensor devices during the protocol evolution), and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks, and adaptation becomes a powerful technique for protocol design. Under this model, we design, implement and evaluate a power conservation scheme for efficient data propagation. Our protocol is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards the best operation choices. Our protocol does not require the exchange of control messages between nodes to coordinate. We combine our protocol with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of Ns2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-off between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation paradigm) and good trade-offs. Furthermore, redeployment of sensors during network evolution and/or heterogeneous deployment of sensors drastically improve the protocol performance when compared to simultaneous deployment, at the start, of identical sensors of equal total "power": the success rate increases up to four times, while energy dissipation is reduced and, interestingly, latency is kept low.
Abstract: Grid Infrastructures have been used to solve large scale scientific problems that do not have special requirements on QoS. However, the introduction and success of the Grids in commercial applications as well, entails the provision of QoS mechanisms which will allow for meeting the special requirements of the users-customers. In this paper we present an advanced Grid Architecture which incorporates appropriate mechanisms so as to allow guarantees of the diverse and contradictory users' QoS requirements. We present a runtime estimation model, which is the heart of any scheduling and resource allocation algorithm, and we propose a scheme able to predict the runtime of submitted jobs for any given application on any computer by introducing a general prediction model. Experimental results are presented which indicate the robustness and reliability of the proposed architecture. The scheme has been implemented in the framework of the GRIA IST project (Grid Resources for Industrial Applications).
Abstract: In this paper, a novel configuration is proposed for
the implementation of an almost all-optical switch architecture
called the scheduling switch, which when combined with appropriate
wait-for-reservation or tell-and-go connection and flow
control protocols provides lossless communication for traffic
that satisfies certain smoothness properties. An all-optical 2×2
exchange/bypass (E/B) switch based on the nonlinear operation
of a semiconductor optical amplifier (SOA) is considered as the
basic building block of the scheduling switch as opposed to active
SOA-based space switches that use injection current to switch
between ON and OFF states. The experimental demonstration of
the optically addressable 2×2 E/B, which is summarized for
10-Gb/s data packets as well as synchronous digital hierarchy
(SDH)/STM-64 data frames, ensures the feasibility of the proposed
configuration at high speeds, with low switching energy and low
losses during the scheduling process. In addition, it provides
reduction of the number of required components for the construction
of the scheduling switch, which is calculated to be 50% in the
number of active elements and 33% in the fiber length.
Abstract: We present an almost all-optical node
architecture suitable for on-the-fly packet and burst
switching without losses. The operation of the node is based
on wavelength converters for mapping the incoming to the
outgoing links, and for intra-node contention resolution. The
node can be built out of commercially available equipment,
and is easily scalable, with respect to the number of its
incoming and outgoing links, by simple addition of
components.
Abstract: The use of Augmented Reality (AR) technologies is currently being investigated in numerous and diverse application domains. In this work, we discuss the ways in which we are integrating AR into educational in-class activities for the GAIA project, aiming to enhance existing tools that target behavioral changes towards energy efficiency in schools. We combine real-time IoT data from a sensing infrastructure inside a fleet of school buildings with AR software running on tablets and smartphones, as companions to a set of educational lab activities aimed at promoting energy awareness in a STEM context. We also utilize this software as a means to ease access to IoT data and simplify device maintenance. We report on the design and current status of our implementation, describing functionality in the context of our target applications, while also relaying our experiences from the use of such technologies in this application domain.
Abstract: The use of maker community tools and IoT technologies inside classrooms is spreading to an ever-increasing number of education and science fields. GAIA is a European research project focused on achieving behavior change for sustainability and energy awareness in schools. In this work, we report on how a large IoT deployment in a number of educational buildings, and real-world data from this infrastructure, are utilized to support a “maker” lab kit activity inside the classroom. We also provide some insights into the integration of these activities in the school curriculum, along with a discussion on feedback produced through a series of workshop activities in a number of schools in Greece. Moreover, we discuss the application of the lab kit framework towards implementing an interactive installation. We also report on how the lab kit is paired with a serious game and an augmented reality app for smartphones and tablets, supporting the in-class activities. Our initial evaluation results show a very positive first reaction by the school community.
Abstract: This article presents a novel crawling and clustering method for extracting and processing cultural data from the web in a fully automated fashion. Our architecture relies upon a focused web crawler to download web documents relevant to culture. The focused crawler is a web crawler that searches and processes only those web pages that are relevant to a particular topic. After downloading the pages, we extract from each document a number of words for each thematic cultural area, filtering out the documents with non-cultural content; we then create multidimensional document vectors comprising the most frequent cultural term occurrences. We calculate the dissimilarity between the cultural-related document vectors and, for each cultural theme, we use cluster analysis to partition the documents into a number of clusters. Our approach is validated via a proof-of-concept application which analyzes hundreds of web pages spanning different cultural thematic areas.
Abstract: We investigate the problem of communication in an ad-hoc mobile network, that is, we assume the extreme case of a total absence of any fixed network infrastructure (for example, a case of rapid deployment of a set of mobile hosts in an unknown terrain). We propose, in such a case, that a small subset of the deployed hosts (which we call the support) should be used for network operations, while the vast majority of the hosts move arbitrarily according to application needs.
We then provide a simple, correct and efficient protocol for communication that avoids message flooding. Our protocol manages to establish communication between any pair of mobile hosts in small, a-priori guaranteed expected time bounds, even in the worst case of arbitrary motions of the hosts that are not in the support (provided that they do not deliberately try to avoid the support). These time bounds, interestingly, do not depend on the number of mobile hosts that do not belong to the support. They depend only on the size of the area of motions. Our protocol can be implemented in very efficient ways by exploiting knowledge of the space of motions or by adding more power to the hosts of the support.
Our results exploit and further develop some fundamental properties of random walks in finite graphs.
Abstract: We introduce a new model of
ad-hoc mobile networks, which we call hierarchical,
that are comprised of dense subnetworks of mobile
users (corresponding to highly populated
geographical areas, such as cities),
interconnected across access ports
by sparse but frequently used connections
(such as highways).
For such networks, we present
an efficient routing protocol which extends
the idea (introduced in WAE00) of exploiting the co-ordinated
motion of a small part of an ad-hoc mobile
network (the ``support'') to achieve
very fast communication between any two mobile users of the network.
The basic idea of the new protocol presented here is, instead
of using a unique (large) support for the whole network,
to employ a hierarchy of (small) supports (one for each city)
and also take advantage of the regular traffic
of mobile users across the interconnection highways to communicate
between cities.
We combine here theoretical analysis (average case estimations based on random walk properties) and experimental implementations (carried out using the LEDA platform) to claim and validate results showing that such a hierarchical routing approach is,
for this class of ad-hoc mobile networks, significantly more efficient than a simple extension of the
basic ``support'' idea presented in WAE00.
Abstract: Wireless Sensor Networks are complex systems consisting of a number of relatively simple autonomous sensing devices spread over a geographical area. The peculiarity of these devices lies in the constraints they face in relation to their energy reserves and their computational, storage and communication capabilities. The utility of these sensors is to measure certain environmental conditions and to detect critical events in relation to these measurements. Those events thereupon have to be reported to a specific central station, namely the “sink”. This data propagation generally has the form of a hop-by-hop transmission. In this framework we work on distributed data propagation protocols which take into account the energy reserves of the sensors. In particular, following the work of Chatzigiannakis et al. on the Probabilistic Forwarding Protocol (PFR), we present the distributed probabilistic protocol EFPFR, which favors transmission from the less depleted sensors in addition to favoring transmissions close to the “optimal line”. This protocol is simple and relies only on local information for propagation decisions. Its main goal is to limit the total amount of energy dissipated per event and therefore to extend the network’s operation duration.
Abstract: We have conducted an extensive experimental study on algorithms for fully dynamic transitive
closure. We have implemented the recent fully dynamic algorithms by King [1999], Roditty [2003],
Roditty and Zwick [2002, 2004], and Demetrescu and Italiano [2000, 2005] along with several variants
and compared them to pseudo fully dynamic and simple-minded algorithms developed in a
previous study [Frigioni et al. 2001]. We tested and compared these implementations on random inputs,
synthetic (worst-case) inputs, and on inputs motivated by real-world graphs. Our experiments
reveal that some of the dynamic algorithms can really be of practical value in many situations.
Abstract: We investigate basic communication protocols in ad-hoc mobile networks. We follow the semi-compulsory approach, according to which a small part of the mobile users, the support, moves in a predetermined way and is used as an intermediate pool for receiving and delivering messages. Under this approach, we present a new semi-compulsory protocol called the runners, in which the members of the support perform concurrent and continuous random walks and exchange any information given to them by senders when they meet. We also conduct a comparative experimental study of the runners protocol with another existing semi-compulsory protocol, called the snake, in which the members of the support move in a coordinated way and always remain pairwise adjacent. The experimental evaluation has been carried out in a new generic framework that we developed to implement protocols for mobile computing. Our experiments showed that for both protocols only a small support is required for efficient communication, and that the runners protocol outperforms the snake protocol in almost all types of inputs we considered.
Abstract: We conduct an experimental study for the timetabling problem in a public railway network under disruptions. We investigate three bicriteria optimization problems introduced recently that model the robustness of a timetable towards delays. We experimentally evaluated these models against various waiting time rules at stations. Our results constitute the first proofs-of-concept for these new robust timetabling approaches.
Abstract: We conduct an experimental study for a fundamental case of the timetabling problem in a public railway network under disruptions. Three bicriteria optimization problems, modeling the robustness of the timetable towards delays, are experimentally evaluated against various waiting time rules at stations. Our results constitute the first proofs-of-concept for these models.
Abstract: We perform an extensive experimental study of several dynamic algorithms for transitive closure. In particular, we implemented algorithms given by Italiano, Yellin, Cicerone et al., and two recent randomized algorithms by Henzinger and King. We propose a fine-tuned version of Italiano's algorithms as well as a new variant of them, both of which were always faster than any of the other implementations of the dynamic algorithms. We also considered simple-minded algorithms that were easy to implement and likely to be fast in practice. We tested and compared the above implementations on random inputs, on non-random inputs that are worst-case inputs for the dynamic algorithms, and on an input motivated by a real-world graph.
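For a flavour of what such dynamic algorithms maintain, here is a minimal sketch (ours, in the spirit of Italiano's insertion-only update, not any of the compared implementations) that keeps an n×n reachability matrix transitively closed under edge insertions:

```python
def insert_edge(reach, u, v):
    """Update a transitively closed boolean reachability matrix after
    inserting edge (u, v): every node that reaches u gains everything
    reachable from v. Assumes reach[i][i] is True for all i."""
    n = len(reach)
    if reach[u][v]:
        return                      # closure already accounts for (u, v)
    for i in range(n):
        if reach[i][u] and not reach[i][v]:
            for j in range(n):
                if reach[v][j]:
                    reach[i][j] = True
```

Each insertion costs O(n^2) in the worst case, versus recomputing the whole closure from scratch; the deletions-capable algorithms in the study are considerably more involved.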
Abstract: The “small world” phenomenon, i.e., the fact that the
global social network is strongly connected in the sense
that every two persons are inter-related through a small
chain of friends, has attracted research attention and has
been strongly related to the results of the social psychologist Stanley Milgram's experiments; properties
of social networks and relevant problems also emerge in
peer-to-peer systems and their study can shed light on
important modern network design properties.
In this paper, we have experimentally studied greedy
routing algorithms, i.e., algorithms that route information
using “long-range” connections that function as
shortcuts connecting “distant” network nodes. In
particular, we have implemented greedy routing
algorithms, and techniques from the recent literature in
networks of line and grid topology using parallelization
for increasing efficiency. To the best of our knowledge, no
similar attempt has been made so far.
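The routing rule being studied is easy to state in code. A minimal sketch of such a greedy router on a ring (our illustration; the node layout and one-shortcut-per-node assumption are ours, in the style of Kleinberg-type small-world models):

```python
def greedy_route(n, shortcut, s, t):
    """Greedy routing on an n-node cycle where each node also owns one
    long-range shortcut: always forward to the known contact closest
    to the target in ring distance. Returns the hop count."""
    def ring_dist(a, b):
        d = abs(a - b)
        return min(d, n - d)
    hops, cur = 0, s
    while cur != t:
        contacts = [(cur - 1) % n, (cur + 1) % n, shortcut[cur]]
        cur = min(contacts, key=lambda v: ring_dist(v, t))
        hops += 1
    return hops
```

The loop always terminates because a ring neighbour strictly decreases the distance to the target; the experimental question is how much the shortcuts shorten the route.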
Abstract: In this paper we present a new approximation algorithm for
the Minimum Energy Broadcast Routing (MEBR) problem in ad hoc
wireless networks that has an exponentially better approximation factor
than the well-known Minimum Spanning Tree (MST) heuristic. Namely,
for any instance where a minimum spanning tree of the set of stations
is guaranteed to cost at most ρ times the cost of an optimal solution
for MEBR, we prove that our algorithm achieves an approximation ratio
bounded by 2 ln ρ − 2 ln 2 + 2. This result is particularly relevant for
its consequences on Euclidean instances, where we significantly improve
previous results.
Abstract: A vertical perspective, ranging from management and routing to physical layer options, concerning dynamic network monitoring and compensation of impairments (M&C), is given. Feasibility, reliability, and performance improvements on reconfigurable transparent networks are expected to arise from the consolidated assessment of network management and control specifications, as well as a more accurate evaluation of available M&C techniques. In the network layer, physical-parameter-aware algorithms are foreseen to pursue reliable network performance. In the physical layer, some new M&C methods were developed, and a rating of the state of the art reported in the literature is given. Optical monitoring implementation and viability is discussed.
Abstract: A temporal graph is, informally speaking, a graph that changes with time. When time is discrete and only the relationships between the participating entities may change and not the entities themselves, a temporal graph may be viewed as a sequence G1, G2, …, Gl of static graphs over the same (static) set of nodes V. Though static graphs have been extensively studied, for their temporal generalization we are still far from having a concrete set of structural and algorithmic principles. Recent research shows that many graph properties and problems become radically different and usually substantially more difficult when an extra time dimension is added to them. Moreover, there is already a rich and rapidly growing set of modern systems and applications that can be naturally modeled and studied via temporal graphs. This further motivates the need for the development of a temporal extension of graph theory. We survey here recent results on temporal graphs and temporal graph problems that have appeared in the Computer Science community.
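As a small illustration of how the time dimension changes things, the following sketch (ours) computes which nodes are reachable from a source by a time-respecting journey in a temporal graph given as a sequence of undirected edge sets, with at most one hop per time step:

```python
def temporal_reachable(snapshots, source):
    """Nodes reachable from `source` via a time-respecting journey in a
    temporal graph G1, G2, ..., Gl (each Gi an undirected edge set).
    At each time step a journey may wait at its node or cross one
    edge of the current snapshot Gi."""
    reached = {source}
    for edges in snapshots:
        step = set()
        for u, v in edges:
            if u in reached:
                step.add(v)
            if v in reached:
                step.add(u)
        reached |= step            # one hop per snapshot, waiting allowed
    return reached
```

Unlike in the static case, the same edges presented in a different temporal order yield a different reachable set, which is one source of the extra difficulty the survey discusses.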
Abstract: Raising awareness among young people and changing their behaviour and habits concerning energy usage is key to achieving sustained energy saving. Additionally, young people are very sensitive to environmental protection, so raising awareness among children is much easier than with any other group of citizens. This work examines ways to create an innovative Information & Communication Technologies (ICT) ecosystem (including web-based, mobile, social and sensing elements) tailored specifically for school environments, taking into account both the users (faculty, staff, students, parents) and school buildings, thus motivating and supporting young citizens' behavioural change to achieve greater energy efficiency. A mixture of open-source IoT hardware and proprietary platforms on the infrastructure level is currently being utilized for monitoring a fleet of 18 educational buildings across 3 countries, comprising over 700 IoT monitoring points. Hereon presented is the system's high-level architecture, as well as several aspects of its implementation, related to the application domain of educational building monitoring and energy efficiency. The system is developed based on open-source technologies and services in order to make it capable of providing open IT-infrastructure and support from different commercial hardware/sensor vendors as well as open-source solutions. The system presented can be used to develop and offer new app-based solutions that can be used either for educational purposes or for managing the energy efficiency of the building. The system is replicable and adaptable to settings that may be different than the scenarios envisioned here (e.g., targeting different climate zones) and different IT infrastructures, and can be easily extended to accommodate integration with other systems. The overall performance of the system is evaluated in a real-world environment in terms of scalability, responsiveness and simplicity.
Abstract: We present the conceptual basis and the initial planning for an open source management architecture for wireless sensor networks (WSN). Although there is an abundance of open source tools serving the administrative needs of WSN deployments, there is a lack of tools or platforms for high level integrated WSN management. This is because of a variety of factors, including the lack of open source management tools, the immaturity of tools that offer manageability for WSNs, the limited high level management capabilities of sensor devices and architectures, and the lack of standardization. The current work is, to our knowledge, the first effort to conceptualize, formalize and design a remote, integrated management platform for the support of WSN research laboratories. The platform is based on the integration and extension of two innovative platforms: jWebDust, a WSN operation and management platform, and OpenRSM, an open source integrated remote systems and network management platform. The proposed system architecture can support several levels of integration (infrastructure management, functionality integration, firmware management), corresponding to different use-cases and application settings.
Abstract: In this paper we propose a new methodology for determining
approximate Nash equilibria of non-cooperative bimatrix games and,
based on that, we provide an efficient algorithm that computes 0.3393-
approximate equilibria, the best approximation till now. The methodology
is based on the formulation of an appropriate function of pairs of
mixed strategies reflecting the maximum deviation of the players' payoffs
from the best payoff each player could achieve given the strategy
chosen by the other. We then seek to minimize such a function using
descent procedures. As it is unlikely to be able to find global minima
in polynomial time, given the recently proven intractability of the problem,
we concentrate on the computation of stationary points and prove
that they can be approximated arbitrarily close in polynomial time and
that they have the above mentioned approximation property. Our result
provides the best till now for polynomially computable ε-approximate
Nash equilibria of bimatrix games. Furthermore, our methodology for
computing approximate Nash equilibria is, to the best of our knowledge, new.
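The function being minimized can be written down directly. A sketch of the deviation measure as we read it from the abstract (the payoff matrices R and C here are hypothetical inputs, not from the paper):

```python
import numpy as np

def deviation(R, C, x, y):
    """f(x, y): the larger of the two players' incentives to deviate
    from the mixed profile (x, y) in the bimatrix game (R, C).
    (x, y) is an eps-approximate Nash equilibrium iff f(x, y) <= eps."""
    row_regret = (R @ y).max() - x @ R @ y    # row player's best possible gain
    col_regret = (x @ C).max() - x @ C @ y    # column player's best possible gain
    return max(row_regret, col_regret)
```

For matching pennies, for instance, the uniform profile has zero deviation while a pure profile does not; the paper's descent procedure drives this function towards stationary points.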
Abstract: An ad-hoc mobile network is a collection of mobile hosts, with wireless communication capability, forming a temporary network without the aid of any established fixed infrastructure. In such a (dynamically changing) network it is not at all easy to avoid broadcasting (and flooding).
In this paper we propose, theoretically analyse and experimentally validate a new and efficient protocol for pairwise communication. The protocol exploits the co-ordinated motion of a small part of the network (i.e. it is a semi-compulsory protocol) in order to provide to various senders and receivers an efficient support for message passing. Our implementation platform is the LEDA system and we have tested the protocol for three classes of graphs (grids, random graphs and bipartite multi-stage graphs), each representing a different ``motion topology''.
Our theoretical analysis (based on properties of random walks) and our experimental measurements indicate that only a small fraction of the mobile stations are enough to be exploited by the support in order to achieve very fast communication between any pair of mobile stations.
Abstract: Urban road networks are represented as directed graphs, accompanied by a metric which assigns cost functions (rather than scalars) to the arcs, e.g. representing time-dependent arc-traversal times. In this work, we present oracles for providing time-dependent min-cost route plans, and conduct their experimental evaluation on a real-world data set (city of Berlin). Our oracles are based on precomputing all landmark-to-vertex shortest travel-time functions, for properly selected landmark sets. The core of this preprocessing phase is based on a novel, quite efficient and simple one-to-all approximation method for creating approximations of shortest travel-time functions. We then propose three query algorithms, including a PTAS, to efficiently provide min-cost route plan responses to arbitrary queries. Apart from the purely algorithmic challenges, we deal also with several
implementation details concerning the digestion of raw traffic data, and we provide heuristic improvements of both the preprocessing phase and the query algorithms. We conduct an extensive, comparative experimental study with all query algorithms and six landmark sets. Our results are quite encouraging, achieving remarkable speedups (at least by two orders of magnitude) and quite small approximation guarantees, over the time-dependent variant of Dijkstra's algorithm.
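For intuition on the baseline being sped up, here is a minimal time-dependent Dijkstra (our sketch, not the paper's oracle): each arc carries an arrival-time function, and the FIFO (non-overtaking) property is assumed so that label-setting remains correct.

```python
import heapq

def td_dijkstra(graph, source, t0):
    """Earliest-arrival times from `source` when departing at time t0.
    graph[u] is a list of (v, f) pairs, where f(t) gives the arrival
    time at v when leaving u at time t. Assumes FIFO arcs
    (departing later never makes you arrive earlier)."""
    arrival = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if t > arrival.get(u, float("inf")):
            continue                          # stale queue entry
        for v, f in graph.get(u, []):
            tv = f(t)
            if tv < arrival.get(v, float("inf")):
                arrival[v] = tv
                heapq.heappush(pq, (tv, v))
    return arrival
```

The landmark oracles in the abstract precompute (approximate) travel-time functions so that queries avoid running this search over the whole graph.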
Abstract: We study the problem of greedy, single-path data propagation in wireless sensor networks, aiming mainly to minimize the energy dissipation. In particular, we first mathematically analyze and experimentally evaluate the energy efficiency and latency of three characteristic protocols, each one selecting the next hop node with respect to a different criterion (minimum projection, minimum angle and minimum distance to the destination). Our analytic and simulation findings suggest that no single criterion simultaneously satisfies both energy efficiency and low latency. Towards parameterized energy-latency trade-offs we provide as well hybrid combinations of the two criteria (direction and proximity to the sink). Our hybrid protocols achieve significant performance gains and allow fine-tuning of desired performance. Also, they have nice energy balance properties, and can prolong the network lifetime.
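The three criteria can be sketched as follows; the geometric definitions here are our reading of the abstract, not the paper's exact formulas. Nodes are 2-D points:

```python
import math

def next_hop(cur, sink, neighbours, criterion):
    """Pick the next hop among candidate neighbours of `cur` by one of
    three greedy criteria: minimum Euclidean distance to the sink,
    minimum angle off the cur->sink direction, or minimum perpendicular
    offset ("projection") from the cur-sink line."""
    cx, cy = cur
    dx, dy = sink[0] - cx, sink[1] - cy        # direction towards the sink
    def distance(p):
        return math.hypot(p[0] - sink[0], p[1] - sink[1])
    def angle(p):
        vx, vy = p[0] - cx, p[1] - cy
        return abs(math.atan2(vx * dy - vy * dx, vx * dx + vy * dy))
    def projection(p):
        vx, vy = p[0] - cx, p[1] - cy
        return abs(vx * dy - vy * dx) / math.hypot(dx, dy)
    return min(neighbours, key={"distance": distance,
                                "angle": angle,
                                "projection": projection}[criterion])
```

A distance-greedy choice can differ from an angle-greedy one for the same neighbourhood, which is exactly the energy-latency trade-off the hybrid protocols tune.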
Abstract: We introduce a new model of ad-hoc mobile networks,
which we call hierarchical, that are comprised of
dense subnetworks of mobile users (corresponding to highly
populated geographical areas), interconnected across access
ports by sparse but frequently used connections.
To implement communication in such a case, a possible
solution would be to install a very fast (yet limited) backbone
interconnecting such highly populated mobile user areas, while
employing a hierarchy of (small) supports (one for each lower level
site). This fast backbone provides a limited number of access
ports within these dense areas of mobile users.
We provide here theoretical analysis (average case estimations based on
random walk properties) to claim and validate
results showing that such a hierarchical routing approach is,
for this class of ad-hoc mobile networks, significantly
more efficient than a simple extension of the
basic ``support'' idea presented in [WAE00,DISC01].
Abstract: We present new combinatorial approximation algorithms for k-set cover. Previous approaches are based on extending the greedy algorithm by efficiently handling small sets. The new algorithms further extend them by utilizing the natural idea of computing large packings of elements into sets of large size. Our results improve the previously best approximation bounds for the k-set cover problem for all values of k ≥ 6. The analysis technique could be of independent interest; the upper bound on the approximation factor is obtained by bounding the objective value of a factor-revealing linear program.
Abstract: Consider k particles, 1 red and k–1 white, chasing each other on the nodes of a graph G. If the red one catches one of the white, it "infects" it with its color. The newly red particles are now available to infect more white ones. When is it the case that all white will become red? It turns out that this simple question is an instance of information propagation between random walks and has important applications to mobile computing where a set of mobile hosts acts as an intermediary for the spread of information.
In this paper we model this problem by k concurrent random walks, one corresponding to the red particle and k–1 to the white ones. The infection time Tk of infecting all the white particles with red color is then a random variable that depends on k, the initial position of the particles, the number of nodes and edges of the graph, as well as on the structure of the graph.
We easily get that an upper bound on the expected value of Tk is the worst case (over all initial positions) expected meeting time m* of two random walks multiplied by Θ(log k). We demonstrate that this is, indeed, a tight bound; i.e., there is a graph G (a special case of the "lollipop" graph), a range of values k, and an initial position of the particles achieving this bound.
When G is a clique or has nice expansion properties, we prove much smaller bounds for Tk. We have evaluated and validated all our results by large scale experiments which we also present and discuss here. In particular, the experiments demonstrate that our analytical results for these expander graphs are tight.
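The infection process described above is straightforward to simulate (an illustrative Python sketch; lazy steps are used to avoid parity issues on bipartite graphs, which is an implementation choice, not part of the paper's model):

```python
import random

def infection_time(adj, k, seed=0):
    """Simulate k concurrent lazy random walks (each stays put with
    probability ~1/2) on a graph given as an adjacency dict. Walk 0 is
    the red particle; a white particle turns red whenever it shares a
    node with a red one. Returns the number of rounds until all are red."""
    rng = random.Random(seed)
    nodes = list(adj)
    pos = [rng.choice(nodes) for _ in range(k)]
    red = [i == 0 for i in range(k)]
    t = 0
    while not all(red):
        pos = [rng.choice(adj[v] + [v]) for v in pos]   # lazy step
        red_nodes = {pos[i] for i in range(k) if red[i]}
        red = [red[i] or pos[i] in red_nodes for i in range(k)]
        t += 1
    return t

# infection time of 3 particles on a 6-cycle
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
t = infection_time(cycle, k=3)
```

Averaging this quantity over many seeds gives a Monte Carlo estimate of E[Tk] for a given graph.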
Abstract: In this contribution instances of a problem introduced by the differential cryptanalysis of Feistel cryptosystems are formulated as optimization tasks. The performance of Evolutionary Computation methods on these tasks is studied for a representative Feistel cryptosystem, the Data Encryption Standard. The results indicate that the proposed methodology is efficient in handling this type of problems and furthermore, that its effectiveness depends mainly on the construction of the objective function. This approach is applicable to all Feistel cryptosystems that are amenable to differential cryptanalysis.
Abstract: We focus on the problem of computing approximate Nash equilibria and well-supported approximate Nash equilibria in random bimatrix games, where each player's payoffs are bounded and independent random variables, not necessarily identically distributed, but with common expectations. We show that the completely mixed uniform strategy profile, i.e. the combination of mixed strategies (one per player) where each player plays with equal probability each one of her available pure strategies, is an almost Nash equilibrium for random bimatrix games, in the sense that it is, with high probability, an ε-well-supported Nash equilibrium where ε tends to zero as n tends to infinity.
Abstract: We study the following Constrained Bipartite Edge Coloring problem: We are given a bipartite graph G=(U,V,E) of maximum degree Δ with n vertices, in which some of the edges have been legally colored with c colors. We wish to complete the coloring of the edges of G minimizing the total number of colors used. The problem has been proved to be NP-hard even for bipartite graphs of maximum degree three. Two special cases of the problem have been previously considered and tight upper and lower bounds on the optimal number of colors were proved. The upper bounds led to 3/2-approximation algorithms for both problems. In this paper we present a randomized (1.37+o(1))-approximation algorithm for the general problem in the case where max{Δ,c} = ω(ln n). Our techniques are motivated by recent works on the Circular Arc Coloring problem and are essentially different and simpler than the existing ones.
Abstract: We consider selfish routing over a network consisting of m parallel links through which n selfish users route their traffic trying to minimize their own expected latency. We study the class of mixed strategies in which the expected latency through each link is at most a constant multiple of the optimum maximum latency had global regulation been available. For the case of uniform links it is known that all Nash equilibria belong to this class of strategies. We are interested in bounding the coordination ratio (or price of anarchy) of these strategies, defined as the worst-case ratio of the maximum (over all links) expected latency over the optimum maximum latency. The load balancing aspect of the problem immediately implies a lower bound Ω(ln m / ln ln m) on the coordination ratio. We give a tight (up to a multiplicative constant) upper bound. To show the upper bound, we analyze a variant of the classical balls and bins problem, in which balls with arbitrary weights are placed into bins according to arbitrary probability distributions. At the heart of our approach is a new probabilistic tool that we call ball fusion; this tool is used to reduce the variant of the problem where balls bear weights to the classical version (with no weights). Ball fusion applies to more general settings such as links with arbitrary capacities and other latency functions.
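The balls-and-bins variant at the heart of that analysis is easy to simulate (an illustrative Python sketch of the random process only; the paper's "ball fusion" is an analytical tool, not code):

```python
import random

def max_load(weights, dists, m, seed=0):
    """Throw weighted balls into m bins: ball j, of weight weights[j],
    lands in bin i with probability dists[j][i] (arbitrary per-ball
    distributions). Returns the maximum total bin load."""
    rng = random.Random(seed)
    loads = [0.0] * m
    for w, dist in zip(weights, dists):
        i = rng.choices(range(m), weights=dist)[0]
        loads[i] += w
    return max(loads)

# three balls of weights 3, 2, 1 thrown uniformly into 2 bins
load = max_load([3, 2, 1], [[0.5, 0.5]] * 3, m=2)
```

Repeating the experiment over many seeds gives the distribution of the maximum load that the coordination-ratio bound controls.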
Abstract: Motivated by the wavelength assignment problem in WDM optical networks, we study path coloring problems in graphs. Given a set of paths P on a graph G, the path coloring problem is to color the paths of P so that no two paths traversing the same edge of G are assigned the same color and the total number of colors used is minimized. The problem has been proved to be NP-hard even for trees and rings.
Using optimal solutions to fractional path coloring, a natural relaxation of path coloring, on which we apply a randomized rounding technique combined with existing coloring algorithms, we obtain new upper bounds on the minimum number of colors sufficient to color any set of paths on any graph. The upper bounds are either existential or constructive.
The existential upper bounds significantly improve existing ones provided that the cost of the optimal fractional path coloring is sufficiently large and the dilation of the set of paths is small. Our algorithmic results include improved approximation algorithms for path coloring in rings and in bidirected trees. Our results extend to variations of the original path coloring problem arizing in multifiber WDM optical networks.
Abstract: We give a deterministic polynomial-time algorithm which for any given average degree d and asymptotically almost all random graphs G in G(n, m = [d/2 n]) outputs a cut of G whose ratio (in cardinality) with the maximum cut is at least 0.952. We remind the reader that it is known that unless P=NP, for no constant ε>0 is there a Max-Cut approximation algorithm that for all inputs achieves an approximation ratio of (16/17) + ε (16/17 < 0.94118).
Abstract: We consider the Moran process, as generalized by Lieberman, Hauert and Nowak (Nature, 433:312--316, 2005). A population resides on the vertices of a finite, connected, undirected graph and, at each time step, an individual is chosen at random with probability proportional to its assigned 'fitness' value. It reproduces, placing a copy of itself on a neighbouring vertex chosen uniformly at random, replacing the individual that was there. The initial population consists of a single mutant of fitness r>0 placed uniformly at random, with every other vertex occupied by an individual of fitness 1. The main quantities of interest are the probabilities that the descendants of the initial mutant come to occupy the whole graph (fixation) and that they die out (extinction); almost surely, these are the only possibilities. In general, exact computation of these quantities by standard Markov chain techniques requires solving a system of linear equations of size exponential in the order of the graph so is not feasible. We show that, with high probability, the number of steps needed to reach fixation or extinction is bounded by a polynomial in the number of vertices in the graph. This bound allows us to construct fully polynomial randomized approximation schemes (FPRAS) for the probability of fixation (when r≥1) and of extinction (for all r>0).
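The generalized Moran process described above can be simulated directly, which is also how fixation probabilities are estimated in practice when exact computation is infeasible (an illustrative Python sketch under the abstract's model, not the paper's FPRAS):

```python
import random

def moran_fixated(adj, r, seed=0):
    """One run of the Moran process on an undirected graph given as an
    adjacency dict: a reproducer is picked with probability proportional
    to fitness (r for mutants, 1 otherwise) and its offspring replaces a
    uniformly random neighbour. Returns True on fixation, False on
    extinction."""
    rng = random.Random(seed)
    nodes = list(adj)
    mutants = {rng.choice(nodes)}              # single initial mutant
    while 0 < len(mutants) < len(nodes):
        weights = [r if v in mutants else 1.0 for v in nodes]
        parent = rng.choices(nodes, weights=weights)[0]
        child = rng.choice(adj[parent])
        if parent in mutants:
            mutants.add(child)
        else:
            mutants.discard(child)
    return len(mutants) == len(nodes)

# Monte Carlo estimate of the fixation probability on K4 for r = 2
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
est = sum(moran_fixated(k4, 2.0, seed=s) for s in range(200)) / 200
```

On the complete graph the estimate should be close to the classical value (1 - 1/r)/(1 - 1/r^n).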
Abstract: We consider approval voting elections in which each voter votes for a (possibly empty) set of candidates and the outcome consists of a set of k candidates for some parameter k, e.g., committee elections. We are interested in the minimax approval voting rule in which the outcome represents a compromise among the voters, in the sense that the maximum distance between the preference of any voter and the outcome is as small as possible. This voting rule has two main drawbacks. First, computing an outcome that minimizes the maximum distance is computationally hard. Furthermore, any algorithm that always returns such an outcome provides
incentives to voters to misreport their true preferences.
In order to circumvent these drawbacks, we consider approximation algorithms, i.e., algorithms that produce an outcome that approximates the minimax distance for any given instance. Such algorithms can be considered as alternative voting rules. We present a polynomial-time 2-approximation algorithm that uses a natural linear programming relaxation for the underlying optimization problem and deterministically
rounds the fractional solution in order to compute the outcome; this result improves upon the previously best known algorithm that has an approximation ratio of 3. We are furthermore interested in approximation algorithms that are resistant to manipulation by (coalitions of) voters, i.e., algorithms that do not motivate voters to misreport their true preferences in order to improve their distance from the outcome. We complement previous results in the literature with new upper and lower bounds on strategyproof and group-strategyproof algorithms.
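For intuition, the minimax objective itself can be evaluated by brute force on toy instances (an illustrative Python sketch; the paper's contribution is the polynomial-time LP-rounding 2-approximation, not this exponential search):

```python
from itertools import combinations

def minimax_committee(ballots, candidates, k):
    """Brute force (tiny instances only): find the size-k committee
    minimizing the maximum Hamming distance to any approval ballot."""
    def hamming(a, b):
        return len(a ^ b)          # symmetric difference of two sets
    best, best_val = None, None
    for comm in combinations(candidates, k):
        s = set(comm)
        val = max(hamming(s, b) for b in ballots)
        if best_val is None or val < best_val:
            best, best_val = s, val
    return best, best_val

ballots = [{'a', 'b'}, {'b', 'c'}, {'b'}]
comm, dist = minimax_committee(ballots, ['a', 'b', 'c'], k=1)
# the committee {'b'} is within distance 1 of every ballot
```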
Abstract: The study of the path coloring problem is motivated by the allocation of optical bandwidth to communication requests in all-optical networks that utilize Wavelength Division Multiplexing (WDM). WDM technology establishes communication between pairs of network nodes by establishing transmitter-receiver paths and assigning wavelengths to each path so that no two paths going through the same fiber link use the same wavelength. Optical bandwidth is the number of distinct wavelengths. Since state-of-the-art technology allows for a limited number of wavelengths, the engineering problem to be solved is to establish communication minimizing the total number of wavelengths used. This is known as the wavelength routing problem. In the case where the underlying network is a tree, it is equivalent to the path coloring problem.
We survey recent advances on the path coloring problem in both undirected and bidirected trees. We present hardness results and lower bounds for the general problem covering also the special case of sets of symmetric paths (corresponding to the important case of symmetric communication). We give an overview of the main ideas of deterministic greedy algorithms and point out their limitations. For bidirected trees, we present recent results about the use of randomization for path coloring and outline approximation algorithms that find path colorings by exploiting fractional path colorings. Also, we discuss upper and lower bounds on the performance of on-line algorithms.
Abstract: We examine the problem of assigning n independent jobs to m unrelated parallel machines, so that each job is processed without interruption on one of the machines, and at any time, every machine processes at most one job. We focus on the case where m is a fixed constant, and present a new rounding approach that yields approximation schemes for multi-objective minimum makespan scheduling with a fixed number of linear cost constraints. The same approach gives approximation schemes for covering problems like maximizing the minimum load on any machine, and for assigning specific or equal loads to the machines.
Abstract: We present a new architecture for a bufferless, asynchronous, all-optical self-routing network, combining an efficient physical-layer structure and a conflict-preventing signaling protocol to provide lossless communication with optimum resource utilization and QoS differentiation.
Abstract: We present a 40 Gb/s asynchronous self-routing network and node architecture that exploits bit
and packet level optical signal processing to perform synchronization, forwarding and
switching. Optical packets are self-routed on a hop-by-hop basis through the network by using
stacked optical tags, each representing a specific optical node. Each tag contains control signals
for configuring the switching matrix and forwarding each packet to the appropriate outgoing
link and onto the next hop. Physical layer simulations are performed, modeling each optical subsystem
of the node, showing acceptable signal quality and Bit Error Rates. Resource reservation-based
signaling algorithms are theoretically modeled for the control plane, capable of providing
high performance in terms of blocking probability and holding time.
Abstract: The Greek School Network (GSN) has developed and put into production a number of e-learning services, including synchronous and asynchronous tele-education, electronic class management, blogs, video-on-demand, podcasts and multimedia libraries. These new services complement established and accepted e-learning services, such as teleconferencing, user wikis, forums, email, electronic publishing, and e-magazines. This report presents the most prominent digital e-learning services offered by GSN, with emphasis on the asynchronous tele-education service, which is presented in detail. Its implementation platform, the Moodle course management system, is compared against well-known asynchronous open source tele-education platforms such as COSE, Claroline, Fle3, ILIAS, Manhattan, KEWL, Comentor, e-Class and Eledge. The evaluation of the asynchronous tele-education platforms is based on detailed comparisons of their characteristics and of the methodology they adopt in order to deliver educational services. The comparison is based on evaluation criteria derived from the documented experiences of research institutes and educational bodies and also from the experience of GSN itself. The paper concludes with the presentation of an extension to Moodle for implementing communities of practice (CoPs) that facilitate the creation and delivery of electronic educational open content for teachers in a synergetic manner.
Abstract: We consider algorithmic questions concerning the existence, tractability and quality of Nash equilibria in atomic congestion games, among users that are considered to participate in (static) selfish coalitions. We carefully define a coalitional congestion model among atomic players.
Our findings in this model are quite interesting, in the sense that we demonstrate many similarities with the non-cooperative case. For example, there exist potentials proving the existence of Pure Nash Equilibria (PNE) in the (even unrelated) parallel links setting; the Finite Improvement Property collapses as soon as we depart from linear delays, but there is an exact potential (and thus PNE) for the case of linear delays, in the network setting; the Price of Anarchy on identical parallel links demonstrates a quite surprising threshold behavior: it persists on being asymptotically equal to that in the case of the non-cooperative KP-model, unless we enforce a sublogarithmic number of coalitions.
We also show crucial differences, mainly concerning the hardness of algorithmic problems that are solved efficiently in the non-cooperative case. Although we demonstrate convergence to robust PNE, we also prove the hardness of computing them. On the other hand, we can easily construct a generalized fully mixed Nash Equilibrium. Finally, we propose a new improvement policy that converges to PNE that are robust against (even dynamically forming) coalitions of small size, in pseudo-polynomial time.
Keywords. Game Theory, Atomic Congestion Games, Coalitions, Convergence to Equilibria, Price of Anarchy.
Abstract: We consider algorithmic questions concerning the existence, tractability and quality of Nash equilibria, in atomic congestion games among users participating in selfish coalitions.
We introduce a coalitional congestion model among atomic players and demonstrate many interesting similarities with the non-cooperative case. For example, there exists a potential function proving the existence of Pure Nash Equilibria (PNE) in the unrelated parallel links setting; in the network setting, the Finite Improvement Property collapses as soon as we depart from linear delays, but there is an exact potential (and thus PNE) for linear delays; the Price of Anarchy on identical parallel links demonstrates a quite surprising threshold behavior: it persists on being asymptotically equal to that in the case of the non-cooperative KP-model, unless the number of coalitions is sublogarithmic.
We also show crucial differences, mainly concerning the hardness of algorithmic problems that are solved efficiently in the non-cooperative case. Although we demonstrate convergence to robust PNE, we also prove the hardness of computing them. On the other hand, we propose a generalized fully mixed Nash Equilibrium, that can be efficiently constructed in most cases. Finally, we propose a natural improvement policy and prove its convergence in pseudo-polynomial time to PNE which are robust against (even dynamically forming) coalitions of small size.
Abstract: We study here the effect of concurrent greedy moves of players in atomic congestion games where n selfish agents (players) wish to select a resource each (out of m resources) so that her selfish delay there is not much. Such games usually admit a global potential that decreases by sequential and selfishly improving moves. However, concurrent moves may not always lead to global convergence. On the other hand, concurrent play is desirable because it might essentially improve the system convergence time to some balanced state. The problem of "maintaining" global progress while allowing concurrent play is exactly what is examined and answered here. We examine two orthogonal settings: (i) A game where the players decide their moves without global information, each acting "freely" by sampling resources randomly and locally deciding to migrate (if the new resource is better) via a random experiment. Here, the resources can have quite arbitrary latency that is load dependent. (ii) An "organised" setting where the players are prepartitioned into selfish groups (coalitions) and where each coalition does an improving coalitional move.
Here the concurrency is among the members of the coalition. In this second setting, the resources have latency functions that are only linearly dependent on the load, since this is the only case so far where a global potential exists. In both cases (i), (ii) we show that the system converges to an "approximate" equilibrium very fast (in logarithmic rounds, where the logarithm is taken on the maximum value of the global potential). This is interesting, since two quite orthogonal settings lead to the same result. Our work considers concurrent selfish play for arbitrary latencies for the first time. Also, this is the first time where fast coalitional convergence to an approximate equilibrium is shown. All our results refer to atomic games (i.e., players are finite and distinct).
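One round of the "free" setting (i) can be sketched as follows (an illustrative Python sketch; the exact migration probability used in the random experiment is an assumption here, not the paper's rule):

```python
import random

def concurrent_round(assignment, m, latency, beta=1.0, seed=None):
    """One concurrent round: every player samples a random resource and,
    based only on the current loads, migrates with a probability that
    grows with the observed improvement. All decisions use the loads
    from before the round, i.e. moves happen concurrently."""
    rng = random.Random(seed)
    loads = [0] * m
    for r in assignment:
        loads[r] += 1
    new = []
    for r in assignment:
        cand = rng.randrange(m)                       # sample a resource
        cur, alt = latency(loads[r]), latency(loads[cand] + 1)
        if alt < cur and rng.random() < beta * (cur - alt) / cur:
            new.append(cand)                          # migrate
        else:
            new.append(r)                             # stay
    return new

# 10 players initially crowded on resource 0 of m = 2 identical resources
state = concurrent_round([0] * 10, m=2, latency=lambda load: load, seed=0)
```

Iterating such rounds is what the abstract's logarithmic-round convergence statement is about.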
Abstract: We study here the effect of concurrent greedy moves of players in atomic
congestion games where n selfish agents (players) wish to select a resource each (out
of m resources) so that her selfish delay there is not much. The problem of “maintaining”
global progress while allowing concurrent play is exactly what is examined
and answered here. We examine two orthogonal settings: (i) A game where the players
decide their moves without global information, each acting “freely” by sampling
resources randomly and locally deciding to migrate (if the new resource is better)
via a random experiment. Here, the resources can have quite arbitrary latency that is
load dependent. (ii) An “organised” setting where the players are pre-partitioned into
selfish groups (coalitions) and where each coalition does an improving coalitional
move. Our work considers concurrent selfish play for arbitrary latencies for the first
time. Also, this is the first time where fast coalitional convergence to an approximate
equilibrium is shown.
Abstract: In this survey we present some recent advances in the literature
of atomic (mainly network) congestion games. The algorithmic
questions that we are interested in have to do with the existence of pure
Nash equilibria, the efficiency of their construction when they exist, as
well as the gap of the best/worst (mixed in general) Nash equilibria from
the social optima in such games, typically called the Price of Anarchy
and the Price of Stability respectively.
Abstract: A new model for intrusion and its propagation through various attack
schemes in networks is considered. The model is characterized by the number of
network nodes n, and two parameters f and g. Parameter f represents the probability
of failure of an attack to a node and is a gross measure of the level of security of
the attacked system and perhaps of the intruder's skills; g represents a limit on
the number of attacks that the intrusion software can ever try, due to the danger
of being discovered, when it issues them from a particular (broken) network node.
The success of the attack scheme is characterized by two factors: the number of
nodes captured (the spread factor) and the number of virtual links that a defense
mechanism has to trace from any node where the attack is active to the origin of
the intrusion (the traceability factor). The goal of an intruder is to maximize both
factors. In our model we present four different ways (attack schemes) by which an
intruder can organize his attacks. Using analytic and experimental methods, we first
show that for any 0 < f < 1, there exists a constant g for which any of our attack
schemes can achieve a Θ(n) spread and traceability factor with high probability,
given sufficient propagation time. We also show for three of our attack schemes
that the spread and the traceability factors are, with high probability, linearly related
during the whole duration of the attack propagation. This implies that it will not be
easy for a detection mechanism to trace the origin of the intrusion, since it will have
to trace a number of links proportional to the nodes captured.
Abstract: We design and implement an algorithm for solving the static RWA problem based on an LP relaxation formulation. This formulation is capable of providing integer optimal solutions, despite the absence of integrality constraints, for a large subset of RWA input instances. In static RWA there is no a priori knowledge of the channel usage, and the interference among channels cannot be avoided once the solution has been found. To take into consideration adjacent channel interference, we extend our formulation and model the interference by a set of analytical formulas as additional constraints on RWA.
Abstract: In this paper, we present BAD, an application-level multicast infrastructure. BAD is designed to improve the performance of multicast dissemination trees, under both a static and a dynamic environment, where the effective bandwidth of the network links changes with time. Its main goal is to improve the data rate that end users perceive during a multicast operation. BAD can be used for the creation and management of multicast groups. It can be deployed over any DHT, retaining its fundamental advantages of bandwidth improvement. BAD consists of a suite of algorithms for node joins/leaves, bandwidth distribution to heterogeneous nodes, tree rearrangement and reduction of overhead. We have implemented BAD within the FreePastry system. We report on the results of a detailed performance evaluation which testifies to BAD's efficiency and low overhead. Specifically, our experiments show that the improvement on the minimum bandwidth ranges from 40% to 1400% and the improvement on the average bandwidth ranges from 60% to 250%.
Abstract: In this paper we address the problem of estimating the number of stations in a wireless network. Under the assumption that each station can detect collisions, we show that it is possible to estimate the number of stations in the network within a factor of 2 from the correct value in time O(log n log log n). We further show that if no station can detect collisions, the same task can be accomplished within a factor of 3 in time O(log² n) and maximum energy O(log n) per node, with high probability. Finally, we present an algorithm that computes the minimum value held by the stations in the wireless network in time O(log² n).
Abstract: We study the problem of scheduling permanent jobs on unrelated machines when the objective is to minimize the Lp norm of the machine loads. The problem is known as load balancing under the Lp norm. We present an improved upper bound for the greedy algorithm through simple analysis; this bound is also shown to be best possible within the class of deterministic online algorithms for the problem. We also address the question whether randomization helps online load balancing under Lp norms on unrelated machines; this is a challenging question which has been open for more than a decade even for the L2 norm. We provide a positive answer to this question by presenting the first randomized online algorithms which outperform deterministic ones under any (integral) Lp norm for p = 2, ..., 137. Our algorithms essentially compute in an online manner a fractional solution to the problem and use the fractional values to make random choices. The local optimization criterion used at each step is novel and rather counterintuitive: the values of the fractional variables for each job correspond to flows at an approximate Wardrop equilibrium for an appropriately defined non-atomic congestion game. As corollaries of our analysis, and by exploiting the relation between the Lp norm and the makespan of machine loads, we obtain new competitive algorithms for online makespan minimization, making progress in another longstanding open problem.
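The deterministic greedy baseline analyzed in the abstract has a simple form: each arriving job goes to the machine that minimizes the resulting Lp norm of the load vector (an illustrative Python sketch of the greedy rule only; the randomized algorithms are more involved):

```python
def greedy_lp_assign(jobs, m, p=2):
    """Online greedy for unrelated machines: jobs arrive one by one as
    vectors of per-machine processing times; each is placed where the
    resulting sum of load**p (hence the Lp norm) is smallest."""
    loads = [0.0] * m
    schedule = []
    for times in jobs:                 # times[i] = size of the job on machine i
        def norm_p(i):
            return sum((loads[k] + (times[i] if k == i else 0)) ** p
                       for k in range(m))
        i = min(range(m), key=norm_p)
        loads[i] += times[i]
        schedule.append(i)
    return schedule, loads

# two jobs cheap on machine 0, one cheap on machine 1
schedule, loads = greedy_lp_assign([(1, 10), (1, 10), (10, 1)], m=2, p=2)
```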
Abstract: We study the problem of scheduling permanent jobs on unrelated machines when the objective is to minimize the Lp norm of the machine loads. The problem is known as load balancing under the Lp norm. We present an improved upper bound for the greedy algorithm through simple analysis; this bound is also shown to be best possible within the class of deterministic online algorithms for the problem. We also address the question of whether randomization helps online load balancing under Lp norms on unrelated machines; this is a challenging question which has been open for more than a decade even for the L2 norm. We provide a positive answer to this question by presenting the first randomized online algorithms which outperform deterministic ones under any (integral) Lp norm for p = 2, ..., 137. Our algorithms essentially compute in an online manner a fractional solution to the problem and use the fractional values to make random choices. The local optimization criterion used at each step is novel and rather counterintuitive: the values of the fractional variables for each job correspond to flows at an approximate Wardrop equilibrium for an appropriately defined non-atomic congestion game. As corollaries of our analysis, and by exploiting the relation between the Lp norm and the makespan of machine loads, we obtain new competitive algorithms for online makespan minimization, making progress in another longstanding open problem.
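The greedy rule analyzed in this abstract admits a compact sketch: each arriving job is placed on the machine that minimizes the resulting Lp norm of the machine loads. The code below is an illustrative rendering of that deterministic rule only (not the randomized, Wardrop-based algorithms), with hypothetical input conventions.

```python
def lp_norm(loads, p):
    """Lp norm of a load vector."""
    return sum(l ** p for l in loads) ** (1.0 / p)

def greedy_assign(jobs, num_machines, p):
    """Online greedy for unrelated machines: jobs[j][i] is the processing
    time of job j on machine i; each job goes to the machine that yields
    the smallest Lp norm of the resulting load vector."""
    loads = [0.0] * num_machines
    assignment = []
    for times in jobs:
        best = min(
            range(num_machines),
            key=lambda i: lp_norm(loads[:i] + [loads[i] + times[i]] + loads[i + 1:], p),
        )
        loads[best] += times[best]
        assignment.append(best)
    return assignment, loads
```

For example, under the L2 norm, `greedy_assign([[1, 2], [2, 1], [3, 3]], 2, 2)` sends the first two jobs to their fast machines before the tie-breaking third.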
Abstract: Collecting sensory data using a mobile data sink has been shown to drastically reduce energy consumption at the cost of increased delivery delay. Towards improved energy-latency trade-offs, we propose a biased, adaptive sink mobility scheme that adjusts to local network conditions, such as the surrounding density, remaining energy and the number of past visits in each network region. The sink moves probabilistically, favoring less visited areas in order to cover the network area faster, while adaptively pausing longer in network regions that tend to produce more data. We implement and evaluate our mobility scheme via simulation in diverse network settings. Compared to known blind random, non-adaptive schemes, our method achieves significantly reduced latency, especially in networks with nonuniform sensor distribution, without compromising energy efficiency and delivery success.
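The biased, adaptive movement idea can be sketched as follows: the sink chooses its next region with probability inversely proportional to past visits (favoring less visited areas), and pauses longer in regions with higher data production. Both weighting functions here are illustrative assumptions, not the exact scheme evaluated in the paper.

```python
import random

def next_region(neighbors, visits):
    """Pick the next region to visit, biased towards less visited regions."""
    weights = [1.0 / (1 + visits[r]) for r in neighbors]
    x = random.random() * sum(weights)
    for r, w in zip(neighbors, weights):
        x -= w
        if x <= 0:
            return r
    return neighbors[-1]

def pause_time(base, data_rate, max_rate):
    """Stay longer in regions that tend to produce more data."""
    return base * (1 + data_rate / max_rate)
```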
Abstract: We focus on range query processing on large-scale, typically distributed infrastructures. In this work we present the ART (Autonomous Range Tree) structure, which outperforms the most popular decentralized structures, including Chord (and some of its successors), BATON (and its successor) and Skip Graphs. ART supports the join/leave and range query operations in O(log log N) and O(log^2_b log N + |A|) expected w.h.p. number of hops respectively, where the base b is a double-exponentially growing power of two, N is the total number of peers and |A| is the answer size.
Abstract: We work on an extension of the Population Protocol model
of Angluin et al. [1] that allows edges of the communication graph, G, to
have states that belong to a constant size set. In this extension, the so
called Mediated Population Protocol model (MPP) [2,3], both uniformity
and anonymity are preserved. We here study a simplified version of MPP,
the Graph Decision Mediated Population Protocol model (GDM), in
order to capture MPP's ability to decide graph languages. We also prove
some first impossibility results both for weakly connected and possibly
disconnected communication graphs.
Abstract: We examine multi-player pervasive games that rely on the
use of ad-hoc mobile sensor networks. The unique feature in
such games is that players interact with each other and their
surrounding environment by using movement and presence
as a means of performing game-related actions, utilizing sensor devices. We briefly discuss the fundamental issues and challenges related to these types of games and the scenarios associated with them. We have also developed a framework, called Fun in Numbers (FinN), that handles a number of these issues, such as neighbor discovery, localization, synchronization and delay-tolerant communication. FinN is developed using Java and is based on a multilayer architecture, which provides developers with a set of templates and services for building and operating new games.
Abstract: We study the fundamental naming and counting problems in networks that are anonymous, unknown, and possibly dynamic. Network dynamicity is modeled by the 1-interval connectivity model [KLO10]. We first prove that on static networks with broadcast counting is impossible to solve without a leader and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. With a leader we solve counting in linear time. Then we focus on dynamic networks with broadcast. We show that if nodes know an upper bound on the maximum degree that will ever appear then they can obtain an upper bound on n. Finally, we replace broadcast with one-to-each, in which a node may send a different message to each of its neighbors. This variation is then proved to be computationally equivalent to a full-knowledge model with unique names.
Abstract: Purpose – To examine broadband competition and broadband penetration in a set of countries that employ the same regulation framework. To define the policy and strategy required to promote broadband in weak markets that do not employ alternative infrastructures.
Design/methodology/approach – Study penetration and competition level statistics from 2002 to 2005 in a set of countries with different infrastructures deployed, services provided as well as in their social-economic structures but employing the same regulation framework. Measure the level of inter-platform and intra-platform competition as well as the availability of bitstream access versus the incumbents' shares.
Findings – The paper concludes that a mature broadband market is the one that exhibits a high penetration ratio in combination with a high competition level. Bitstream access can counterbalance the inexistence of alternative broadband infrastructures, especially in weak markets. In particular the availability of numerous bitstream access types in combination with the proper price differentiation can fuel broadband adoption in relatively weak broadband markets.
Originality/value – The paper challenges the general rule that only platform (also known as facility) based competition guarantees long-term growth of the broadband market. Bitstream and resale access do not lag local loop unbundling and can be used in weak markets that do not employ alternative infrastructures to fuel competition in the relevant markets. Different policies and strategies must be followed, in that case, on behalf of the local NRA.
Abstract: In this work we present a platform-agnostic framework for integrating heterogeneous Smart Objects in the Web of Things. Our framework consists of 4 different hardware platforms: Arduino, SunSPOT, TelosB and iSense. These hardware platforms are the most representative ones used by the relevant research community. A first contribution of our work is a careful description of the necessary steps to make such a heterogeneous network interoperate, and the implementation of a network stack, in the form of a software library named mkSense, which enables their intercommunication. Moreover, we describe the design and implementation of a software library which can be used for building “intelligent software” for the Web of Things.
Abstract: We propose and evaluate fast reservation (FR)
protocols for Optical Burst Switched (OBS) networks. The
proposed reservation schemes aim at reducing the end-to-end
delay of a data burst, by sending the Burst Header Packet (BHP)
in the core network before the burst assembly is completed at the
ingress node. We use linear prediction filters to estimate the
expected length of the burst and the time needed for the
burstification process to complete. A BHP packet carrying these
estimates is sent before burst completion, in order to reserve
bandwidth at each intermediate node for the time interval the
burst is expected to pass from that node. Reducing the total time
needed for a packet to be transported over an OBS network is
important, especially for real-time applications. Reserving
bandwidth only for the time interval it is actually going to be used
by a burst is important for network utilization efficiency. In the
simulations conducted we evaluate the proposed extensions and
prove their usefulness.
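The estimation step can be illustrated with a fixed-coefficient linear prediction filter over the most recent completed bursts; the coefficient values below are hypothetical, and the protocols may use adaptive filters instead.

```python
def predict_next(history, coeffs):
    """Linear prediction of the next burst length.
    history: past burst lengths, newest last; coeffs: weights, newest first."""
    recent = history[-len(coeffs):][::-1]  # newest first, matching coeffs
    return sum(c * x for c, x in zip(coeffs, recent))
```

For instance, `predict_next([100, 120, 110], [0.5, 0.3, 0.2])` weights the latest burst most heavily.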
Abstract: Ever since the creation of the first human society, people have understood that the only way of sustaining and improving their societies is to rely on each other for exchanging services. This reliance has traditionally been built on developing, among them, trust: a vague, largely intuitive and hard to define concept that brought together people who worked towards the progress we all witness around us today. Today's society is, however, becoming increasingly massive, collective, and complex, and includes not only people but huge numbers of machines as well. It is no overstatement to say that machines interconnected into a complex communication fabric, as well as communicating directly with people, will very soon outnumber people by several orders of magnitude. Thus trust, already a difficult concept to define and measure when applied to a few people that form a cooperating group or a set of acquaintances, is far more difficult to pinpoint when applied to large communities whose members may hardly know each other in person, or to interconnected machines employed by these communities. In this paper we attempt to take a pragmatic position with regard to trust definition and measurement. We employ several formalisms, in each of which we define a reasonable notion of trust, and show that inherent weaknesses of these formalisms result in an inability to have a concrete and fully measurable trust concept. We then argue that trust in the modern intertwined WWW society must, necessarily, incorporate to some degree non-formalizable elements, such as common sense and intuition. Although this may sound pessimistic, our view is that it is not: understanding these limitations of formalism with regard to trust increases self-awareness and caution on people's part and helps avoid the problems that result if one relies only on automation in order to deduce trust.
Abstract: In this work, we study the propagation of influence and computation in dynamic networks that are possibly disconnected at every instant. We focus on a synchronous message passing communication model with broadcast and bidirectional links. To allow for bounded end-to-end communication we propose a set of minimal temporal connectivity conditions that bound from the above the time it takes for information to make progress in the network. We show that even in dynamic networks that are disconnected at every instant information may spread as fast as in networks that are connected at every instant. Further, we investigate termination criteria when the nodes know some upper bound on each of the temporal connectivity conditions. We exploit our termination criteria to provide efficient protocols (optimal in some cases) that solve the fundamental counting and all-to-all token dissemination (or gossip) problems. Finally, we show that any protocol that is correct in instantaneous connectivity networks can be adapted to work in temporally connected networks.
Abstract: In this work, we study the propagation of influence and computation in dynamic distributed computing systems that are possibly disconnected at every instant. We focus on a synchronous message-passing communication model with broadcast and bidirectional links. Our network dynamicity assumption is a worst-case dynamicity controlled by an adversary scheduler, which has received much attention recently. We replace the usual (in worst-case dynamic networks) assumption that the network is connected at every instant by minimal temporal connectivity conditions. Our conditions only require that another causal influence occurs within every time window of some given length. Based on this basic idea, we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate termination criteria in networks in which an upper bound on any of these metrics is known. We exploit our termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental counting and all-to-all token dissemination (or gossip) problems.
Abstract: In this paper we present a protocol for Certified E-Mail that ensures temporal authentication. We first slightly modify a previously known three-message optimistic protocol in order to obtain a building block that meets some properties. We then extend this basic protocol, enhancing it with temporal authentication by adding a single message, improving the message complexity of known protocols. The fairness of the protocol is ensured by an off-line Trusted Third Party that joins the protocol only in case one of the players misbehaves. In order to guarantee temporal authentication we assume the existence of an on-line time stamping server.
Abstract: We demonstrate an all-optical clock and data recovery
circuit for short asynchronous data packets at 10-Gb/s line
rate. The technique employs a Fabry–Pérot filter and an ultrafast
nonlinear interferometer (UNI) to generate the local packet
clock, followed by a second UNI gate to act as decision element,
performing a logical AND operation between the extracted clocks
and the incoming data packets. The circuit can handle short
packets arriving at time intervals as short as 1.5 ns and arbitrary
phase alignment.
Abstract: We present optical clock and data recovery for 10 Gb/s asynchronous packets. The original data are sampled with an optical gate powered by a clock packet, generated with a novel packet clock recovery circuit.
Abstract: We study the problem of energy-balanced data propagation in wireless sensor networks. The energy balance property is crucial for maximizing the time the network is functional, by avoiding early energy depletion of a large portion of sensors. We propose a distributed, adaptive data propagation algorithm that exploits limited, local network density information for achieving energy-balance while at the same time
minimizing energy dissipation.
We investigate both uniform and heterogeneous sensor placement distributions. By a detailed experimental evaluation and comparison with well-known energy-balanced protocols, we show that our density-based protocol improves energy efficiency significantly while also having better energy balance properties.
Furthermore, we compare the performance of our protocol with a centralized, off-line optimum solution derived by a linear program which maximizes the network lifetime, and show that it achieves near-optimal performance for uniform sensor deployments.
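The energy-balance idea behind this family of protocols can be sketched as a per-slice probabilistic choice: a sensor either relays data one hop towards the sink (cheap) or transmits directly to the sink (expensive), so that relay load is spread evenly. The fixed probability argument below stands in for the density-based, adaptive choice of the proposed protocol.

```python
import random

def next_destination(slice_index, p_hop):
    """Return the slice the data is sent to: one hop closer with
    probability p_hop, otherwise directly to the sink (slice 0)."""
    if slice_index <= 1:
        return 0  # adjacent to the sink: deliver directly
    return slice_index - 1 if random.random() < p_hop else 0
```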
Abstract: The Team Orienteering Problem with Time Windows (TOPTW) deals with deriving a number of tours comprising a subset of candidate nodes (each associated with a "profit" value and a visiting time window) so as to maximize the overall "profit", while respecting a specified time span. TOPTW has been used as a reference model for the Tourist Trip Design Problem (TTDP) in order to derive near-optimal multiple-day tours for tourists visiting a destination featuring several points of interest (POIs), taking into account a multitude of POI attributes. TOPTW is an NP-hard problem and the most efficient known heuristic is based on Iterated Local Search (ILS). However, ILS treats each POI separately; hence it tends to overlook highly profitable areas of POIs situated far from the current location, considering them too time-expensive to visit. We propose two cluster-based extensions to ILS addressing the aforementioned weakness by grouping POIs into disjoint clusters (based on geographical criteria), thereby making visits to such POIs more attractive. Our approaches improve on ILS with respect to solution quality, while executing at comparable time and reducing the frequency of overly long transfers among POIs.
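The geographic grouping step can be illustrated by a simple grid partition of POIs into disjoint clusters; the actual extensions may use different geographical criteria, so the cell-snapping rule below is only an assumption.

```python
def grid_clusters(pois, cell_size):
    """Partition POIs, given as (x, y) coordinates, into disjoint
    clusters by snapping each one to a coarse grid cell."""
    clusters = {}
    for i, (x, y) in enumerate(pois):
        cell = (int(x // cell_size), int(y // cell_size))
        clusters.setdefault(cell, []).append(i)
    return clusters
```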
Abstract: Divisible load scenarios occur in modern media server applications since most multimedia applications typically require access to continuous and discrete data. A high performance Continuous Media (CM) server greatly depends on the ability of its disk IO subsystem to serve both types of workloads efficiently. Disk scheduling algorithms for mixed media workloads, although they play a central role in this task, have been overlooked by related research efforts. These algorithms must satisfy several stringent performance goals, such as achieving low response time and ensuring fairness, for the discrete-data workload, while at the same time guaranteeing the uninterrupted delivery of continuous data, for the continuous-data workload. The focus of this paper is on disk scheduling algorithms for mixed media workloads in a multimedia information server. We propose novel algorithms, present a taxonomy of relevant algorithms, and study their performance through experimentation. Our results show that our algorithms offer drastic improvements in discrete request average response times, are fair, serve continuous requests without interruptions, and that the disk technology trends are such that the expected performance benefits can be even greater in the future.
Abstract: This paper presents ongoing work on using data mining clustering to support the evaluation of software
systems' maintainability. As input for our analysis we employ software measurement data extracted from
Java source code. We propose a two-step clustering process which facilitates the assessment of a system's maintainability at first, and subsequently an in-cluster analysis in order to study the evolution of each
cluster as the system's versions pass by. The process is evaluated on Apache Geronimo, a J2EE 1.4 open
source Application Server. The evaluation involves analyzing several versions of this software system in
order to assess its evolution and maintainability over time. The paper concludes with directions for future
work.
Abstract: In recent years, the evolution of urban environments, jointly with the progress of the Information and Communication sector, have enabled the rapid adoption of new solutions that contribute to the growth in popularity of Smart Cities. Currently, the majority of the world population
lives in cities, encouraging different stakeholders within these innovative ecosystems to seek new solutions guaranteeing the sustainability and efficiency of such complex environments. In this work, it is discussed how the experimentation with IoT technologies and other data sources from the cities can be utilized to co-create in the OrganiCity project, where key actors like citizens, researchers and other stakeholders shape smart city services and applications in a collaborative fashion. Furthermore,
a novel architecture is proposed that enables this organic growth of the future cities, facilitating the experimentation that tailors the adoption of new technologies and services for a better quality of life, as well as agile and dynamic mechanisms for managing cities. In this work, the different components and enablers of the OrganiCity platform are presented and discussed in detail and include, among others, a portal to manage the experiment life cycle, an Urban Data Observatory to explore data assets,
and an annotations component to indicate quality of data, with a particular focus on the city-scale opportunistic data collection service operating as an alternative to traditional communications.
Abstract: This work proposes a methodology for source code quality and static behaviour evaluation of a software
system, based on the standard ISO/IEC-9126. It uses elements automatically derived from source code
enhanced with expert knowledge, in the form of quality characteristic rankings, allowing software
engineers to assign weights to source code attributes. It is flexible in terms of the set of metrics and
source code attributes employed, even in terms of the ISO/IEC-9126 characteristics to be assessed. We
applied the methodology to two case studies, involving five open source and one proprietary system.
Results demonstrated that the methodology can capture software quality trends and express expert
perceptions concerning system quality in a quantitative and systematic manner.
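The expert-weighting step can be illustrated as a weighted aggregation of normalized source code attribute scores into a single characteristic score; the attribute names and weights below are illustrative, not values from the case studies.

```python
def characteristic_score(attribute_scores, weights):
    """Weighted mean of normalized attribute scores (higher is better)."""
    total = sum(weights[a] for a in attribute_scores)
    return sum(s * weights[a] for a, s in attribute_scores.items()) / total
```

For example, with hypothetical scores `{"complexity": 0.8, "size": 0.6}` and expert weights `{"complexity": 3.0, "size": 1.0}`, the characteristic score is 0.75.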
Abstract: One problem that frequently arises is the establishment of a
secure connection between two network nodes. There are many key
establishment protocols that are based on Trusted Third Parties or
public key cryptography which are in use today. However, in the
case of networks with frequently changing topology and size
composed of nodes of limited computation power, such as the ad-hoc
and sensor networks, such an approach is difficult to apply. One
way of attacking this problem for such networks is to have the two
nodes share some piece of information that will, subsequently,
enable them to transform this information into a shared
communication key.
Having each pair of network nodes share some piece of information
is, often, achieved through appropriate {\em key pre-distribution}
schemes. These schemes work by equipping each network node with a
set of candidate key values, some of which are shared with other
network nodes possessing other key sets. Later, when two nodes
meet, they can employ a suitable key establishment protocol in
order to locate shared values and use them for the creation of
the communication key.
In this paper we give a formal definition of collusion resistant
key predistribution schemes and then propose such a scheme based
on probabilistically created set systems. The resulting key sets
are shown to have a number of desirable properties that ensure the
confidentiality of communication sessions against collusion
attacks by other network nodes.
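For context, the generic key pre-distribution pattern described above can be sketched as follows: each node is equipped with a random key ring drawn from a global pool, and two meeting nodes look for shared identifiers from which a communication key is derived. This is the classic probabilistic scheme, not the paper's collusion-resistant set-system construction.

```python
import random

def assign_key_ring(pool_size, ring_size, rng):
    """Equip a node with a random subset of the global key pool."""
    return set(rng.sample(range(pool_size), ring_size))

def shared_keys(ring_a, ring_b):
    """Key identifiers two meeting nodes can build a session key from."""
    return ring_a & ring_b
```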
Abstract: An intersection graph of n vertices assumes that each vertex is equipped with a subset of a global label set. Two vertices share an edge when their label sets intersect. Random Intersection Graphs (RIGs) (as defined in [18, 31]) consider label sets formed by the following experiment: each vertex, independently and uniformly, examines all the labels (m in total) one by one. Each examination is independent and the vertex succeeds to put the label in her set with probability p. Such graphs nicely capture interactions in networks due to sharing of resources among nodes. We study here the problem of efficiently colouring (and of finding upper bounds to the chromatic number of) RIGs. We concentrate on a range of parameters not examined in the literature, namely: (a) m = n^α for α less than 1 (in this range, RIGs differ substantially from the Erdős–Rényi random graphs) and (b) the selection probability p is quite high (e.g. at least ln² n / m in our algorithm) and disallows direct greedy colouring methods. We manage to get the following results:
• For the case mp ≤ β ln n, for any constant β < 1 − α, we prove that np colours are enough to colour most of the vertices of the graph with high probability (whp). This means that even for quite dense graphs, using the same number of colours as those needed to properly colour the clique induced by any label suffices to colour almost all of the vertices of the graph. Note also that this range of values of m, p is quite wider than the one studied in [4].
• We propose and analyze an algorithm CliqueColour for finding a proper colouring of a random instance of G_{n,m,p}, for any mp ≥ ln² n. The algorithm uses information of the label sets assigned to the vertices of G_{n,m,p} and runs in O(n² m p² / ln n) time, which is polynomial in n and m. We also show, by a reduction to the uniform random intersection graphs model, that the number of colours required by the algorithm is of the correct order of magnitude with respect to the actual chromatic number of G_{n,m,p}.
• We finally compare the problem of finding a proper colouring for G_{n,m,p} to that of colouring hypergraphs so that no edge is monochromatic. We show how one can find in polynomial time a k-colouring of the vertices of G_{n,m,p}, for any integer k, such that no clique induced by only one label in G_{n,m,p} is monochromatic. Our techniques are novel and try to exploit as much as possible the hidden structure of random intersection graphs in this interesting range.
(This work was partially supported by the ICT Programme of the European Union under contract number ICT-2008-215270 (FRONTS). Also supported by Research Training Group GK-693 of the Paderborn Institute for Scientific Computation (PaSCo).)
Abstract: We investigate random intersection graphs, a combinatorial model that quite accurately abstracts distributed networks with local interactions between nodes blindly sharing critical resources from a limited globally available domain. We study important combinatorial properties (independence and hamiltonicity) of such graphs. These properties relate crucially to algorithmic design for important problems (like secure communication and frequency assignment) in distributed networks characterized by dense, local interactions and resource limitations, such as sensor networks. In particular, we prove that, interestingly, a small constant number of random, resource selections suffices to make the graph hamiltonian and we provide tight evaluations of the independence number of these graphs.
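The random intersection graph experiment used in the abstracts above can be sampled directly: each of n vertices keeps each of m labels independently with probability p, and two vertices are adjacent exactly when their label sets intersect. A minimal sketch:

```python
import random

def sample_rig(n, m, p, rng):
    """Sample G_{n,m,p}: returns per-vertex label sets and the edge set."""
    labels = [{l for l in range(m) if rng.random() < p} for _ in range(n)]
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if labels[u] & labels[v]}
    return labels, edges
```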
Abstract: This chapter aims at presenting certain important aspects of the design of lightweight, event-driven algorithmic solutions for data dissemination in wireless sensor networks that provide support for reliable, efficient and concurrency-intensive operation. We wish to emphasize that efficient solutions at several levels are needed, e.g. higher level energy efficient routing protocols and lower level power management schemes. Furthermore, it is important to combine such different level methods into integrated protocols and approaches. Such solutions must be simple, distributed and local. Two useful algorithmic design principles are randomization (to trade off efficiency and fault-tolerance) and adaptation (to adjust to high network dynamics towards improved operation). In particular, we provide a) a brief description of the technical specifications of state-of-the-art sensor devices, b) a discussion of possible models used to abstract such networks, emphasizing heterogeneity, c) some representative power management schemes, and d) a presentation of some characteristic protocols for data propagation. Crucial efficiency properties of these schemes and protocols (and their combinations, in some cases) are investigated by both rigorous analysis and performance evaluations through large scale simulations.
Abstract: The problem of communication among mobile nodes is one of the most fundamental problems in ad hoc mobile networks and is at the core of many algorithms, such as those for counting the number of nodes, electing a leader, and data processing. The work of Chatzigiannakis, Nikoletseas and Spirakis, which offers an exposition of several important problems in ad hoc mobile networks, focuses on wireless mobile networks that are subject to highly dynamic structural changes created by mobility, channel fluctuations and device failures. These changes affect topological connectivity, occur with high frequency and may not be predictable in advance. Therefore, the environment where the nodes move (in three-dimensional space with possible obstacles) as well as the motion that the nodes perform are input to any distributed algorithm.
Abstract: In this work, we overview some results concerning combinatorial properties relevant to communication in random intersection graphs and uniform random intersection graphs. These properties relate crucially to algorithmic design for important problems (like secure communication and frequency assignment) in distributed networks characterized by dense, local interactions and resource limitations, such as sensor networks. In particular, we present and discuss results concerning the existence of large independent sets of vertices whp in random instances of each of these models. As the main contribution of our paper, we introduce a new, general model, which we denote G(V, χ, f). In this model, V is a set of vertices and χ is a set of m vectors in ℝ^m. Furthermore, f is a probability distribution over the powerset 2^χ of subsets of χ. Every vertex selects a random subset of vectors according to the distribution f, and two vertices are connected according to a general intersection rule depending on their assigned sets of vectors. This new general model appears able to simulate other known random graph models by carefully choosing its intersection rule.
Abstract: We study the problem of maintaining connectivity in a wireless
network where the network nodes are equipped with
directional antennas. Nodes correspond to points on the
plane and each uses a directional antenna modeled by a sector
with a given angle and radius. The connectivity problem
is to decide whether or not it is possible to orient the antennas
so that the directed graph induced by the node transmissions
is strongly connected. We present algorithms for
simple polynomial-time-solvable cases of the problem, show
that the problem is NP-complete in the 2-dimensional case
when the sector angle is small, and present algorithms that
approximate the minimum radius to achieve connectivity for
sectors with a given angle. We also discuss several extensions
to related problems. To the best of our knowledge, the
problem has not been studied before in the literature.
Abstract: We present and evaluate a compact, all-optical Clock and Data
Recovery (CDR) circuit based on integrated Mach Zehnder interferometric
switches. Successful operation for short packet-mode traffic of variable
length and phase alignment is demonstrated. The acquired clock signal rises
within 2 bits and decays within 15 bits, irrespective of packet length and
phase. Error-free operation is demonstrated at 10 Gb/s.
Abstract: We design and implement various algorithms for
solving the static RWA problem with the objective of minimizing
the maximum number of requested wavelengths based on LP
relaxation formulations. We present a link formulation, a path
formulation and a heuristic that breaks the problem in the two
constituent subproblems and solves them individually and
sequentially. The flow cost functions that are used in these
formulations result in providing integer optimal solutions despite
the absence of integrality constraints for a large subset of RWA
input instances, while also minimizing the total number of used
wavelengths. We present a random perturbation technique that is
shown to increase the number of instances for which we find
integer solutions, and we also present appropriate iterative fixing
and rounding methods to be used when the algorithms do not yield
integer solutions. We comment on the number of variables and
constraints these formulations require and perform extensive
simulations to compare their performance to that of a typical
min-max congestion formulation.
Abstract: We address an important communication issue arising in
wireless cellular networks that utilize frequency division
multiplexing (FDM) technology. In such networks, many
users within the same geographical region (cell) can communicate
simultaneously with other users of the network
using distinct frequencies. The spectrum of the available
frequencies is limited; thus, efficient solutions to the call
control problem are essential. The objective of the call control
problem is, given a spectrum of available frequencies
and users that wish to communicate, to maximize the benefit,
i.e., the number of users that communicate without
signal interference. We consider cellular networks of reuse
distance k ≥ 2 and we study the online version of the
problem using competitive analysis. In cellular networks
of reuse distance 2, the previously best known algorithm
that beats the lower bound of 3 on the competitiveness
of deterministic algorithms works on networks with one
frequency, achieves a competitive ratio against oblivious
adversaries between 2.469 and 2.651, and uses
a number of random bits at least proportional to the size
of the network. We significantly improve this result by
presenting a series of simple randomized algorithms that have
competitive ratios significantly smaller than 3, work on networks
with arbitrarily many frequencies, and use only a
constant number of random bits or a comparably weak
random source. The best competitiveness upper bound
we obtain is 16/7, using only four random bits. In cellular
networks of reuse distance k > 2, we present simple
randomized online call control algorithms with competitive
ratios that significantly beat the lower bounds on
the competitiveness of deterministic ones and use only
O(log k) random bits. Also, we show new lower bounds on
the competitiveness of online call control algorithms in cellular
networks of any reuse distance. In particular, we show
that no online algorithm can achieve a competitive ratio better
than 2, 25/12, and 2.5 in cellular networks with reuse
distance k ∈ {2, 3, 4}, k = 5, and k ≥ 6, respectively.
Abstract: Clustering is an important research topic for wireless sensor
networks (WSNs). A large variety of approaches has been
presented focusing on different performance metrics. Even
though all of them have many practical applications, an
extremely limited number of software implementations is
available to the research community. Furthermore, these very few
techniques are implemented for specific WSN systems or are
integrated in complex applications. Thus it is very difficult
to comparatively study their performance and almost impossible
to reuse them in future applications under a different
scope. In this work we study a large body of well established
algorithms. We identify their main building blocks
and propose a component-based architecture for developing
clustering algorithms that (a) promotes exchangeability of
algorithms, thus enabling the fast prototyping of new
approaches, (b) allows cross-layer implementations to realize
complex applications, (c) offers a common platform to
comparatively study the performance of different approaches,
and (d) is hardware and OS independent. We implement 5 well
known algorithms and discuss how to implement 11 more.
We conduct an extended simulation study to demonstrate
the faithfulness of our implementations when compared to
the original implementations. Our simulations are at very
large scale, thus also demonstrating the scalability of the
original algorithms beyond their original presentations. We
also conduct experiments to assess their practicality in real
WSNs. We demonstrate how the implemented clustering
algorithms can be combined with routing and group key
establishment algorithms to construct WSN applications. Our
study clearly demonstrates the applicability of our approach
and the benefits it offers to both research & development
communities.
Abstract: This chapter is an introduction to the basic concepts and advances of a new field, that of Computational (or Algorithmic) Game Theory. We study the computational complexity of Nash equilibria and review the related algorithms proposed in the literature. Then, given the apparent difficulty of computing exact Nash equilibria, we study the efficient computation of approximate notions of Nash equilibria. Next we deal with several computational issues related to the class of congestion games, which model the selfish behavior of individuals when competing on the usage of a common set of resources. Finally, we study the price of anarchy (in the context of congestion games), which is defined as a measure of the performance degradation due to the lack of coordination among the involved players.
Abstract: We survey here some recent computational models for networks of tiny artifacts. In particular, we focus on networks consisting of artifacts with sensing capabilities. We first imagine the artifacts moving passively, that is, being mobile but unable to control their own movement. This leads us to the population protocol model of Angluin et al. (2004) [16]. We survey this model and some of its recent enhancements. In particular, we also present the mediated population protocol model, in which the interaction links are capable of storing states, and the passively mobile machines model, in which the finite-state nature of the agents is relaxed and the agents become multitape Turing machines that use a restricted space. We next survey the sensor field model, a general model capturing some identifying characteristics of many sensor network settings. A sensor field is composed of various kinds of devices that can communicate with one another, and also with the environment, through input/output data streams. Finally, we present simulation results between sensor fields and population protocols and analyze the capability of their variants to decide graph properties.
Abstract: Here we survey various computational models for Wireless Sensor Networks (WSNs). The population protocol model (PP) considers networks of tiny mobile finite-state artifacts that can sense the environment and communicate in pairs to perform a computation. The mediated population protocol model (MPP) enhances the previous model by allowing the communication links to have a constant size buffer, providing more computational power. The graph decision MPP model (GDM) is a special case of MPP that focuses on the MPP's ability to decide graph properties of the network. Another direction towards enhancing the PP is followed by the PALOMA model in which the artifacts are no longer finite-state automata but Turing Machines of logarithmic memory in the population size. A different approach to modeling WSNs is the static synchronous sensor field model (SSSF) which describes devices communicating through a fixed communication graph and interacting with their environment via input and output data streams. In this survey, we present the computational capabilities of each model and provide directions for further research.
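The pairwise-interaction dynamics that the population protocol model (PP) formalizes can be illustrated with a small simulation. The sketch below is illustrative only and not a protocol from the surveyed papers: it runs the well-known three-state approximate-majority protocol under a uniformly random scheduler (the function name and parameters are our own).

```python
import random

def run_approximate_majority(n_yes, n_no, seed=1, max_steps=200_000):
    """Simulate the 3-state approximate-majority population protocol.

    States: 'Y' and 'N' (opinions) and 'b' (blank). Transition rules:
      (Y, N) -> (b, b)   opposing opinions cancel out
      (Y, b) -> (Y, Y)   an opinion recruits a blank agent
      (N, b) -> (N, N)
    A uniformly random scheduler picks a pair of agents each step.
    """
    agents = ['Y'] * n_yes + ['N'] * n_no
    rng = random.Random(seed)
    for _ in range(max_steps):
        if not ('Y' in agents and 'N' in agents):
            break  # stabilized: at most one opinion (plus blanks) remains
        i, j = rng.sample(range(len(agents)), 2)
        a, b = agents[i], agents[j]
        if {a, b} == {'Y', 'N'}:
            agents[i] = agents[j] = 'b'
        elif a in 'YN' and b == 'b':
            agents[j] = a
        elif b in 'YN' and a == 'b':
            agents[i] = b
    return agents

final = run_approximate_majority(30, 10)
```

With a clear initial majority, the protocol converges with high probability to the majority opinion; the population size never changes, only the states of the (anonymous, finite-state) agents do.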
Abstract: In this chapter, our focus is on computational network analysis from a theoretical point of view. In particular, we study the \emph{propagation of influence and computation in dynamic distributed computing systems}. We focus on a \emph{synchronous message passing} communication model with bidirectional links. Our network dynamicity assumption is a \emph{worst-case dynamicity} controlled by an adversary scheduler, which has received much attention recently. We first study the fundamental \emph{naming} and \emph{counting} problems (and some variations) in
networks that are \emph{anonymous}, \emph{unknown}, and possibly dynamic. Network dynamicity is modeled here by the \emph{1-interval connectivity model}, in which communication is synchronous and a (worst-case) adversary
chooses the edges of every round subject to the condition that each instance is connected. We then replace this quite strong assumption by minimal \emph{temporal connectivity} conditions. These conditions only require that \emph{another causal influence occurs within every time-window of some given length}. Based on this basic idea we define several novel metrics for capturing the speed of information spreading in a dynamic network. We present several results that correlate these metrics. Moreover, we investigate \emph{termination criteria} in networks in which an upper bound on any of these metrics is known. We exploit these termination criteria to provide efficient (and optimal in some cases) protocols that solve the fundamental \emph{counting} and \emph{all-to-all token dissemination} (or \emph{gossip}) problems. Finally, we propose another model of worst-case temporal connectivity, called \emph{local
communication windows}, that assumes a fixed underlying communication network and restricts the adversary to allow communication between local neighborhoods in every time-window of some fixed length. We prove some basic properties and provide a protocol for counting in this model.
Abstract: A mimicking network for a k-terminal network, N, is one whose realizable external flows are
the same as those of N. Let S(k) denote the minimum size of a mimicking network for a k-terminal network.
In this paper we give new constructions of mimicking networks and prove the following results (the values
in brackets are the previously best known results): S(4) = 5 [2^16], S(5) = 6 [2^32]. For bounded-treewidth
networks we show S(k) = O(k) [2^(2^k)], and for outerplanar networks we show S(k) ≤ 10k − 6 [k·2^(2k+2)].
Abstract: We study the partially eponymous model of distributed computation, which simultaneously
generalizes the anonymous and the eponymous models. In this model, processors have
identities, which are neither necessarily all identical (as in the anonymous model) nor
necessarily unique (as in the eponymous model). In a decision problem formalized as a
relation, processors receive inputs and seek to reach outputs respecting the relation. We
focus on the partially eponymous ring, and we shall consider the computation of circularly
symmetric relations on it. We consider sets of rings where all rings in the set have the same
multiset of identity multiplicities.
We distinguish between solvability and computability: in solvability, processors are
required to always reach outputs respecting the relation; in computability, they must
do so whenever this is possible, and must otherwise report impossibility.
We present a topological characterization of solvability for a relation on a set of rings,
which can be expressed as an efficiently checkable, number-theoretic predicate.
We present a universal distributed algorithm for computing a relation on a set of
rings; it runs any distributed algorithm for constructing views, followed by local steps.
We derive, as our main result, a universal upper bound on the message complexity to
compute a relation on a set of rings; this bound demonstrates a graceful degradation
with the Least Minimum Base, a parameter indicating the degree of least possible
eponymity for a set of rings. Thereafter, we identify two cases where a relation can be
computed on a set of rings, with rings of size n, with an efficient number of O(n lg n)
messages.
Abstract: This paper addresses the problem of counting the size of a network where (i) processes have the same identifiers (anonymous nodes) and (ii) the network topology constantly changes (dynamic network). Changes are driven by a powerful adversary that can look at internal process states and add and remove edges in order to obstruct the convergence of the algorithm to the correct count. The paper proposes two leader-based counting algorithms. Such algorithms are based on a technique that mimics an energy transfer between network nodes. The first algorithm assumes that the adversary cannot generate either disconnected network graphs or network graphs where nodes have degree greater than D. In this algorithm, the leader can count the size of the network and detect the counting termination in a finite time (i.e., a conscious counting algorithm). The second algorithm assumes that the adversary only keeps the network graph connected at any time; we prove that the leader can still converge to a correct count in a finite number of rounds, but it is not conscious of when this convergence happens.
Abstract: This paper addresses stability issues in incremental induction of decision trees. Stability problems arise when an induction algorithm must revise a decision tree very often and oscillations between similar concepts decrease learning speed. We review a heuristic that solves this problem and subsequently employ asymptotic analysis to approximate the basic parameters related to the estimation of computational effort in incremental learning of decision trees. We then use these approximations to simplify the heuristic, deliver insight into its amortizing behavior, and argue how they can also speed up its execution and enhance its applicability, also providing experimental evidence to support these claims.
Abstract: In this article, we present a detailed performance
evaluation of a hybrid optical switching (HOS)
architecture called Overspill Routing in Optical Networks
(ORION). The ORION architecture combines
(optical) wavelength and (electronic) packet switching,
so as to obtain the individual advantages of both switching
paradigms. In particular, ORION exploits the possible
idle periods of established lightpaths to transmit
packets destined to the next common node, or even
directly to their common end-destination. Depending
on whether all lightpaths are allowed to simultaneously
carry and terminate overspill traffic or overspill is restricted
to a subset of wavelengths, the architecture
limits itself to constrained or unconstrained ORION. To
evaluate both cases, we developed an extensive network
simulator where the basic features of the ORION architecture were modeled,
including suitable edge/core node
switches and load-varying sources to simulate overloading
traffic conditions. Further, we have assessed various
aspects of the ORION architecture, including two
basic routing/forwarding policies and various buffering
schemes. The complete network study shows that
ORION can absorb temporal traffic overloads, as intended,
provided sufficient buffering is present. We also
demonstrate that the restriction of simultaneous packet
insertions/extractions, to reduce the necessary interfaces,
does not deteriorate performance, and thus the use of
traffic concentrators assures ORION's economic viability.
Abstract: All-optical gate control signal generation is demonstrated
from flag pulses, using a Fabry–Pérot filter followed by
a semiconductor optical amplifier. Ten control pulses are generated
from a single flag pulse having less than 0.45-dB amplitude
modulation. By doubling or tripling the number of flag pulses, the
number of control pulses increases approximately by a factor of
two or three. The circuit can control the switching state of all-optical
switches, on a packet-by-packet basis, and can be used for
nontrivial network functionalities such as self-routing.
Abstract: In this work, we expanded the Arduino's
capabilities by adding an 802.15.4 wireless module, in order to
expose its functionality as a Web of Things node. The second
contribution of our work is a careful description of the necessary
steps to make a heterogeneous network interoperate and the
implementation of a network stack for the 4 most representative
hardware platforms, as used by the relevant research community
(Arduino, SunSPOT, TelosB, iSense), in the form of a software
library, named mkSense, which enables their
intercommunication. Moreover, we describe the design and
implementation of a software library which can be used for
building “intelligent software” for the Web of Things.
Abstract: Ever-increasing bandwidth demands and higher flexibility are the main challenges for the next generation optical core networks. A new trend in order to address these challenges is to consider the impairments of the lightpaths during the design of optical networks. In our work, we focus on translucent optical networks, where some lightpaths are routed transparently, whereas others go through a number of regenerators. We present a cost analysis of design strategies, which are based either on an exact Quality of Transmission (QoT) validation or on a relaxed one, and attempt to reduce the number of regenerators used. In the exact design strategy, regenerators are required if the QoT of a candidate lightpath is below a predefined threshold, assuming empty network conditions. In the relaxed strategy, this predefined threshold is lower, while it is assumed that the network is fully loaded. We evaluate techno-economically the suggested design solutions and also show that adding more flexibility to the optical nodes has a large impact on the total infrastructure cost.
Abstract: We propose a simple and intuitive cost mechanism which assigns costs for the competitive
usage of m resources by n selfish agents. Each agent has an individual demand; demands are
drawn according to some probability distribution. The cost paid by an agent for a resource
it chooses is the total demand put on the resource divided by the number of agents who
chose that same resource. So, resources charge costs in an equitable, fair way, while each
resource makes no profit out of the agents.
We call our model the Fair Pricing model. Its fair cost mechanism induces a noncooperative
game among the agents. To evaluate the Nash equilibria of this game, we
introduce the Diffuse Price of Anarchy, as an extension of the Price of Anarchy that takes
into account the probability distribution on the demands. We prove:
- Pure Nash equilibria may not exist, unless all chosen demands are identical.
- A fully mixed Nash equilibrium exists for all possible choices of the demands. Furthermore,
the fully mixed Nash equilibrium is the unique Nash equilibrium in case there are
only two agents.
- In the worst-case choice of demands, the Price of Anarchy is Θ(n); for the special case
of two agents, the Price of Anarchy is less than 2 − 1/m.
- Assume now that demands are drawn from a bounded, independent probability distribution,
where all demands are identically distributed, and each demand may not exceed
some (universal for the class) constant times its expectation. The constant
is just 2 when each demand is distributed symmetrically around its expectation.
We prove that, for asymptotically large games where the number of agents tends to
infinity, the Diffuse Price of Anarchy is at most that universal constant. This implies
the first separation between Price of Anarchy and Diffuse Price of Anarchy.
Towards the end, we consider two closely related cost sharing models, namely the Average
Cost Pricing and the Serial Cost Sharing models, inspired by Economic Theory. In contrast
to the Fair Pricing model, we prove that pure Nash equilibria do exist for both these models.
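The fair cost mechanism described above (each agent pays the total demand placed on its chosen resource divided by the number of agents sharing that resource) can be sketched in a few lines. This is our own illustrative implementation, with hypothetical function and variable names:

```python
from collections import defaultdict

def fair_costs(demands, choices):
    """Cost charged to each agent under the Fair Pricing mechanism.

    demands[i] -- demand of agent i
    choices[i] -- index of the resource that agent i selects
    Each agent pays (total demand on its resource) / (number of agents
    sharing that resource), so each resource recovers exactly its load
    and makes no profit out of the agents.
    """
    load = defaultdict(float)   # total demand placed on each resource
    users = defaultdict(int)    # number of agents using each resource
    for d, r in zip(demands, choices):
        load[r] += d
        users[r] += 1
    return [load[r] / users[r] for r in choices]

# Three agents with demands 4, 2, 6; the first two share resource 0.
costs = fair_costs([4.0, 2.0, 6.0], [0, 0, 1])
```

Here agents 1 and 2 each pay (4 + 2)/2 = 3, and agent 3 pays 6/1 = 6; the payments collected on each resource sum exactly to its load, which is the equitable, no-profit property of the mechanism.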
Abstract: We investigate the existence of optimal tolls for atomic symmetric
network congestion games with unsplittable traffic and arbitrary non-negative and
non-decreasing latency functions. We focus on pure Nash equilibria and a natural
toll mechanism, which we call cost-balancing tolls. A set of cost-balancing tolls
turns every path with positive traffic on its edges into a minimum cost path. Hence
any given configuration is induced as a pure Nash equilibrium of the modified
game with the corresponding cost-balancing tolls. We show how to compute in
linear time a set of cost-balancing tolls for the optimal solution such that the total
amount of tolls paid by any player in any pure Nash equilibrium of the modified
game does not exceed the latency on the maximum latency path in the optimal
solution. Our main result is that for congestion games on series-parallel networks
with increasing latencies, the optimal solution is induced as the unique pure Nash
equilibrium of the game with the corresponding cost-balancing tolls. To the best
of our knowledge, only linear congestion games on parallel links were known to
admit optimal tolls prior to this work. To demonstrate the difficulty of computing
a better set of optimal tolls, we show that even for 2-player linear congestion
games on series-parallel networks, it is NP-hard to decide whether the optimal
solution is the unique pure Nash equilibrium or there is another equilibrium of
total cost at least 6/5 times the optimal cost.
Abstract: Counting in general, and estimating the cardinality of (multi-) sets in particular, is highly desirable for a large variety of applications, representing a foundational block for the efficient deployment and access of emerging internet-scale information systems. Examples of such applications range from optimizing query access plans in internet-scale databases, to evaluating the significance (rank/score) of various data items in information retrieval applications. The key constraints that any acceptable solution must satisfy are: (i) efficiency: the number of nodes that need be contacted for counting purposes must be small in order to enjoy small latency and bandwidth requirements; (ii) scalability, seemingly contradicting the efficiency goal: arbitrarily large numbers of nodes may need to add elements to a (multi-) set, which dictates the need for a highly distributed solution, avoiding server-based scalability, bottleneck, and availability problems; (iii) access and storage load balancing: counting and related overhead chores should be distributed fairly to the nodes of the network; (iv) accuracy: tunable, robust (in the presence of dynamics and failures) and highly accurate cardinality estimation; (v) simplicity and ease of integration: special, solution-specific indexing structures should be avoided. In this paper, first we contribute a highly-distributed, scalable, efficient, and accurate (multi-) set cardinality estimator. Subsequently, we show how to use our solution to build and maintain histograms, which have been a basic building block for query optimization for centralized databases, facilitating their porting into the realm of internet-scale data networks.
Abstract: In this paper we study the notion of the Evolutionary Stable
Strategies (ESS) in evolutionary games and we demonstrate their qualitative
difference from the Nash Equilibria, by showing that a random
evolutionary game has, on average, exponentially fewer ESS than
Nash Equilibria in the underlying symmetric 2-person
game with random payoffs.
Abstract: We consider a synchronous distributed system with n processes that communicate through a dynamic network guaranteeing 1-interval connectivity, i.e., the network topology graph might change at each interval while keeping the graph connected at any time. The processes belonging to the distributed system are identified through a set of labels L = {l1, l2, . . . , lk} (with 1 ≤ k < n). In this challenging system model, the paper addresses the following problem: counting the number of processes with the same label. We provide a distributed algorithm that is able to solve the problem based on the notion of energy transfer. Each process owns a fixed energy charge and tries to discharge itself by exchanging, at each round, at most half of its charge with neighbors. The paper also discusses when such counting is possible in the presence of failures. Counting processes with the same label in dynamic networks with homonyms is of great importance because it is as difficult as computing generic aggregation functions.
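The energy-transfer idea can be sketched with a toy simulation. This is a simplified illustration, not the paper's algorithm: a single distinguished leader only absorbs charge, while the other nodes, arranged on a ring that an adversary reshuffles every round (so the topology changes but stays connected, as in 1-interval connectivity), each send half of their charge to their current neighbors. All names and parameters below are our own.

```python
import random

def leader_count(n, rounds=1500, seed=7):
    """Toy energy-transfer counting in a dynamic (1-interval connected) network.

    Node 0 is the leader and starts with charge 0; every other node starts
    with one unit of charge. Each round the adversary rewires the nodes into
    a fresh ring; every non-leader sends half of its charge, split equally
    between its two current neighbors, while the leader only absorbs.
    Total charge is conserved, so the leader's accumulated charge converges
    to n - 1, the number of other processes.
    """
    charge = [0.0] + [1.0] * (n - 1)
    rng = random.Random(seed)
    order = list(range(n))
    for _ in range(rounds):
        rng.shuffle(order)  # adversarial-but-connected topology change
        nxt = charge[:]
        for pos, v in enumerate(order):
            if v == 0:
                continue  # the leader never discharges
            left = order[(pos - 1) % n]
            right = order[(pos + 1) % n]
            give = charge[v] / 2
            nxt[v] -= give
            nxt[left] += give / 2
            nxt[right] += give / 2
        charge = nxt
    return charge[0]
```

After enough rounds the residual charge outside the leader is negligible, and reading off the leader's charge yields the count of the other processes; detecting *when* this has happened is exactly the conscious/unconscious distinction the abstract draws.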
Abstract: Random walks in wireless sensor networks can serve as fully
local, very simple strategies for sink motion that reduce energy dissipation
a lot but increase the latency of data collection. To achieve satisfactory
energy-latency trade-offs the sink walks can be made adaptive,
depending on network parameters such as density and/or history of past
visits in each network region; but this increases the memory requirements.
Towards better balances of memory/performance, we propose two
new random walks: the Random Walk with Inertia and the Explore-and-Go
Random Walk; we also introduce a new metric (Proximity Variation)
that captures the different way each walk gets close to the network
nodes. We implement the new walks and experimentally compare them
to known ones. The simulation findings demonstrate that the new walks'
performance (cover time) gets close to the one of the (much stronger)
biased walk, while in some other respects (partial cover time, proximity
variation) they even outperform it. We note that the proposed walks
have been fine-tuned in the light of experimental findings.
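The Random Walk with Inertia idea can be sketched in a few lines. This is our own illustrative implementation on a torus grid, not the authors' fine-tuned walk; the function name and parameters are hypothetical: with probability p_keep the walker repeats its previous direction, otherwise it re-samples a direction uniformly at random.

```python
import random

def inertia_walk(n, steps, p_keep=0.9, seed=3):
    """Sketch of a Random Walk with Inertia on an n x n torus grid.

    At each step the walker repeats its previous direction with
    probability p_keep (the 'inertia'); otherwise it picks a fresh
    direction uniformly at random. Higher p_keep yields straighter
    trajectories, which tends to spread the walk across the network
    faster than a simple (memoryless) random walk.
    """
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    rng = random.Random(seed)
    x, y = 0, 0
    direction = rng.choice(moves)
    path = [(x, y)]
    for _ in range(steps):
        if rng.random() >= p_keep:
            direction = rng.choice(moves)  # lose inertia, re-sample
        x, y = (x + direction[0]) % n, (y + direction[1]) % n
        path.append((x, y))
    return path
```

Counting the distinct cells in the returned path for different values of p_keep gives a quick, informal feel for the cover-time behavior the abstract discusses.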
Abstract: We present a new overlay, called the Deterministic Decentralized
tree (D2-tree). The D2-tree compares favourably to other overlays for
the following reasons: (a) it provides matching and better complexities,
which are deterministic for the supported operations; (b) the management
of nodes (peers) and elements are completely decoupled from each
other; and (c) an efficient deterministic load-balancing mechanism is
presented for the uniform distribution of elements into nodes, while at the
same time probabilistic optimal bounds are provided for the congestion
of operations at the nodes.
Abstract: DAP (Distributed Algorithms Platform) is a generic and homogeneous simulation environment aiming at the implementation, simulation, and testing of distributed algorithms for wired and wireless networks. In this work, we present its architecture, the most important design decisions, and discuss its distinct features and functionalities. DAP allows the algorithm designer to implement a distributed protocol by creating his own customized environment, and programming in a standard programming language in a style very similar to that of a real-world application. DAP provides a graphical user interface that allows the designer to monitor and control the execution of simulations, visualize algorithms, as well as gather statistics and other information for their experimental analysis and testing.
Abstract: In this work we examine a task scheduling and data migration problem for Grid Networks, which we refer to as the Data Consolidation (DC) problem. DC arises when a task requires, for its execution, two or more pieces of data, possibly scattered throughout the Grid Network. In such a case, the scheduler and the data manager must select the data replicas to be used and the site where these will accumulate for the task to be executed. The policies for selecting the data replicas and the data consolidating site comprise the Data Consolidation problem. We propose and experimentally evaluate a number of DC techniques. Our simulation results reinforce our belief that DC is an important technique for Data Grids, since it can substantially improve task delay, network load and other performance-related parameters.
Abstract: In this work, we develop an IPv6 enabled smart building test-bed facility, by combining sensing and communication devices and functionalities. We address the Internet of Things paradigm by using diverse heterogeneous devices such as smartphones, sensor motes, NFC technology and traditional electrical devices, each one serving a specific role in the test-bed facility. Also, we extend a basic actuation component by making it self-aware, in terms of supported resources. Those enhancements allow us to enrich the test-bed's capabilities in terms of M2M communication, portability and decentralization of the actuation process. Finally, we provide a simple smart room scenario for a tunable combination of energy efficiency and comfort, which automatically adjusts the room's light level based on ambient conditions and user preferences and demonstrate the feasibility of our system.
Abstract: We work on an extension of the Population Protocol model of Angluin et al. that allows edges of the communication graph, G, to have states that belong to a constant size set. In this extension, the so called Mediated Population Protocol model (MPP), both uniformity and anonymity are preserved. We here study a simplified version of MPP, the Graph Decision Mediated Population Protocol model (GDM), in order to capture MPP's ability to decide (stably compute) graph languages (sets of communication graphs). To understand properties of the communication graph is an important step in almost any distributed system. We prove that any graph language is undecidable if we allow disconnected communication graphs. As a result, we focus on studying the computational limits of the GDM model in (at least) weakly connected communication graphs only and give several examples of decidable graph languages in this case. To do so, we also prove that the class of decidable graph languages is closed under complement, union and intersection operations. Node and edge parity, bounded out-degree by a constant, existence of a node with more incoming than outgoing neighbors and existence of some directed path of length at least k=O(1) are some examples of properties whose decidability is proven. To prove the decidability of graph languages we provide protocols (GDMs) for them and exploit the closure results. Finally, we prove the existence of symmetry in two specific communication (sub)graphs which we believe is the first step towards the proof of impossibility results in the GDM model. In particular, we prove that there exists no GDM, whose states eventually stabilize, to decide whether G contains some directed cycle of length 2 (2-cycle).
Abstract: The existence of good probabilistic models for the job
arrival process and the delay components introduced at
the different stages of job processing in a Grid
environment is important for the improved understanding
of the computing concept envisioned by the Grid. In this
study, we present a thorough analysis of the job arrival
process in the EGEE infrastructure and the time
durations a job spends at different states in the EGEE
environment. We define four delay components of the
total job delay and model each component separately.
We observe that the job inter-arrival times at the Grid
level can be adequately modeled by a rounded
exponential distribution, while the total job delay (from
the time it is generated until the time it completes
execution) is dominated by the Computing Element’s
queuing and Worker Node’s execution times.
Abstract: The purpose of the first Student Workshop on Wireless Sensor Networks is to bring together both graduate and undergraduate students working in the area of wireless sensor networks, with focus on applications, real-world experiments or deployments of wireless sensor networks. Students will have the opportunity to interact with their peers and publicize and get feedback on their work, exchange experiences, make contacts, and learn what other students are doing in the Wireless Sensor Networks area. The workshop would be a day-long program organized in such a way to promote lively discussions.
Abstract: The Greek School Network (GSN) is the nationwide network that connects all units of primary and secondary education in Greece. GSN offers a significant set of diverse services to more than 15,000 schools and administrative units, and more than 60,000 teachers, placing GSN second in infrastructure size nationwide. GSN has relied on the emerging power of open source software to build cutting-edge services capable of covering internal administrative and monitoring needs, end user demands, and, foremost, modern pedagogical requirements for tools and services. GSN provides a wide set of advanced services, varying from web mail to virtual classrooms and synchronous/asynchronous tele-education. This paper presents an evaluation of GSN open source services based on the opinions of users who use GSN for educational purposes, and on usage and traffic measurement statistics. The paper reaches the conclusion that open source software provides a sound technological platform that meets the needs for cutting edge educational services deployment, and innovative, competitive software production for educational networks.
Abstract: In this technical report we present a throughput performance analysis of a heterogeneous wireless sensor network which consists of 4 different sensor devices (TelosB, SunSPOT, Arduino, iSense), and the results of a set of experiments during which we gathered LQI/RSSI measurements.
Abstract: In this work we introduce two practical and interesting models of ad-hoc mobile networks: (a) hierarchical ad-hoc networks, comprised of dense subnetworks of mobile users interconnected by a very fast yet limited backbone infrastructure, (b) highly changing ad-hoc networks, where the deployment area changes in a highly dynamic way and is unknown to the protocol. In such networks, we study the problem of basic communication, i.e., sending messages from a sender node to a receiver node. For highly changing networks, we investigate an efficient communication protocol exploiting the coordinated motion of a small part of an ad-hoc mobile network (the "runners support") to achieve fast communication. This protocol, instead of using a fixed-size support for the whole duration of the protocol, employs a support of some initial (small) size which adapts (given some time which can be made fast enough) to the actual levels of traffic and the (unknown and possibly rapidly changing) network area, by changing its size in order to converge to an optimal size, thus satisfying certain Quality of Service criteria. Using random walks theory, we show that such an adaptive approach is, for this class of ad-hoc mobile networks, significantly more efficient than a simple non-adaptive implementation of the basic "runners support" idea, introduced in [9,10]. For hierarchical ad-hoc networks, we establish communication by using a "runners" support in each lower level of the hierarchy (i.e., in each dense subnetwork), while the fast backbone provides interconnections at the upper level (i.e., between the various subnetworks). We analyze the time efficiency of this hierarchical approach. This analysis indicates that the hierarchical implementation of the support approach significantly outperforms a simple implementation of it in hierarchical ad-hoc networks.
Finally, we discuss a possible combination of the two approaches above (the hierarchical and the adaptive ones) that can be useful in ad-hoc networks that are both hierarchical and highly changing. Indeed, in such cases the hierarchical nature of these networks further supports the possibility of adaptation.
Abstract: In this work we investigate the problem of communication among mobile hosts, one of the most fundamental problems in ad-hoc mobile networks that is at the core of many algorithms. Our work investigates the extreme case of total absence of any fixed network backbone or centralized administration, instantly forming networks based only on mobile hosts with wireless communication capabilities, where topological connectivity is subject to frequent, unpredictable change.
For such dynamically changing networks we propose a set of protocols which exploit the coordinated (by the protocol) motion of a small part of the network in order to manage network operations. We show that such protocols can be designed to work correctly and efficiently for communication by avoiding message flooding. Our protocols manage to establish communication between any pair of mobile hosts in small, a-priori guaranteed expected time bounds. Our results exploit and further develop some fundamental properties of random walks in finite graphs.
Apart from studying the general case, we identify two practical and interesting cases of ad-hoc mobile networks: a) hierarchical ad-hoc networks, b) highly changing ad-hoc networks, for which we propose protocols that efficiently deal with the problem of basic communication.
We have conducted a set of extensive experiments, comprising thousands of mobile hosts, in order to validate the theoretical results and show that our protocols achieve very efficient communication under different scenarios.
Abstract: Evaluating target tracking protocols for wireless sensor networks that can localize multiple mobile devices can be a very challenging task. Such protocols usually aim at minimizing communication overhead and data processing for the participating nodes, as well as delivering adequate tracking information of the mobile targets in a timely manner. Simulations of such protocols are performed using theoretical models that are based on unrealistic assumptions like the unit disk graph communication model, ideal network localization and perfect distance estimations. With these assumptions taken for granted, theoretical models claim various performance milestones that cannot be achieved in realistic conditions. In this paper we design a new localization protocol, where mobile assets can be tracked passively via software agents. We address the issues that hinder its performance under real environment conditions and provide a deployable protocol. The implementation, integration and experimentation of this new protocol and its optimizations were performed using the WISEBED framework. We apply our protocol in multiple indoor wireless sensor testbeds with multiple experimental scenarios to showcase scalability and trade-offs between network properties and configurable protocol parameters. By analyzing the real-world experimental output, we present results that depict a more realistic view of the target tracking problem, regarding power consumption and the quality of tracking information. Finally, we also conduct some very focused simulations to assess the scalability of our protocol in very large networks and with multiple mobile assets.
Abstract: We have designed and implemented a platform enabling monitoring and actuation in multiple buildings, which has been utilised in the context of a research project in Greece focusing on public school buildings. The Green Mindset project has installed IoT devices in 12 Greek public schools to monitor energy consumption, along with indoor and outdoor environmental parameters. We present the architecture and actual deployment of our system, along with a first set of findings.
Abstract: In this paper, we present a Programmable Packet Processing Engine suitable for deep header processing in high-speed networking systems. The engine, which has been fabricated as part of a complete network processor, consists of a typical RISC CPU, whose register file has been modified in order to support efficient context switching, and two simple special-purpose processing units. The engine can be used in a number of network processing units (NPUs), as an alternative to the typical design practice of employing a large number of simple general-purpose processors, or in any other embedded system designed to process mainly network protocols. To assess the performance of the engine, we have profiled typical networking applications and carried out a series of experiments. Further, we have compared the performance of our processing engine to that of two widely used NPUs and show that our proposed packet-processing engine can run specific applications up to three times faster. Moreover, the engine is simpler to fabricate and less complex in terms of hardware, while it can still be very easily programmed.
Abstract: Pervasive games are a new type of digital games that combines game and physical reality within the gameplay. This novel game type raises unprecedented research and design challenges for developers and urges the exploration of new technologies and methods to create high quality game experiences and design novel and compelling forms of content for the players. This chapter follows a systematic approach to explore the landscape of pervasive gaming. First, the authors approach pervasive games from a theoretical point of view, defining the four axes of pervasive games design, introducing the concept of game world persistency, and describing aspects of spatially/temporally/socially expanded games. Then, they present ten pervasive game projects, classified in five genres based on their playing environment and features. Following that, the authors present a comparative view of those projects with respect to several design aspects: communication and localization, context and personal awareness aspects, information model, player equipment, and game space visualization. Last, the authors highlight current trends, design principles, and future directions for pervasive games development.
Abstract: Wireless sensor networks are comprised of a vast number of ultra-small autonomous computing, communication and sensing devices, with restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Such networks can be very useful in practice, e.g. in the local monitoring of ambient conditions and reporting them to a control center. In this paper we propose a new lightweight, distributed group key establishment protocol suitable for such energy-constrained networks. Our approach essentially trades off complex message exchanges for some amount of additional local computations. The extra computations are simple for the devices to implement and are evenly distributed across the participants of the network, leading to good energy balance. We evaluate the performance of our protocol in comparison to existing group key establishment protocols in both simulated and real environments. The intractability of all these protocols is based on the Diffie-Hellman problem, and we used its elliptic curve analog in our experiments. Our findings indicate the feasibility of implementing our protocol on real sensor network devices and highlight the advantages and disadvantages of each approach given the available technology and the corresponding efficiency (energy, time) criteria.
Abstract: In recent years we have seen an increased popularity of game development for Smartphones, which provide an increasingly ubiquitous platform for designing games. In this paper we investigate the use of a modern Smartphone's capabilities in game development by implementing and evaluating a classic game on the iPhone platform. We identify the limitations and possibilities that this field offers to the different aspects of game design.
Abstract: Wireless Sensor Networks consist of a large number of small, autonomous devices that are able to interact with their environment by sensing, and that collaborate to fulfill their tasks, as, usually, a single node is incapable of doing so; they use wireless communication to enable this collaboration. Each device has limited computational and energy resources, thus a basic issue in the applications of wireless sensor networks is low energy consumption and, hence, the maximization of network lifetime.
The collected data is disseminated to a static control point (the data sink) in the network, using node-to-node, multi-hop data propagation. However, this makes sensor devices consume significant amounts of energy and increases implementation complexity, since a routing protocol must be executed. Also, a point of failure emerges in the area near the control center, where nodes relay the data arriving from nodes that are farther away. Recently, a new approach has been developed that shifts the burden from the sensor nodes to the sink. The main idea is that the sink has significant and easily replenishable energy reserves and can move inside the area where the sensor network is deployed, in order to acquire the data collected by the sensor nodes at very low energy cost. However, the need to visit all the regions of the network may result in large delivery delays.
In this work we have developed protocols that control the movement of the sink in wireless sensor networks with non-uniform deployment of the sensor nodes, in order to achieve efficient (with respect to both energy and latency) data collection. More specifically, a graph formation phase is executed by the sink during initialization: the network area is partitioned into equal square regions, at which the sink pauses for a certain amount of time during the network traversal, in order to collect data.
We propose two network traversal methods, a deterministic and a random one. When the sink moves in a random manner, the selection of the next area to visit is done in a biased random manner depending on the frequency of visits to its neighboring areas; thus, less frequently visited areas are favored. Moreover, our method locally determines the stop time needed to serve each region with respect to some global network resources, such as the initial energy reserves of the nodes and the density of the region, stopping for a greater time interval at regions with higher density, and hence more traffic load. In this way, we achieve accelerated coverage of the network as well as fairness in the service time of each region. Besides randomized mobility, we also propose an optimized deterministic trajectory without visit overlaps, including direct (one-hop) sensor-to-sink data transmissions only.
We evaluate our methods via simulation, in diverse network settings and comparatively to related state of the art solutions. Our findings demonstrate significant latency and energy consumption improvements, compared to previous research.
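A minimal sketch of the biased traversal and density-scaled stop time described above, assuming a toy grid of regions and a simple 1/(1+visits) weighting (both are our illustrative choices, not the paper's exact rules):

```python
import random

def next_region(current, neighbors, visits, rng):
    """Pick the next region among the current region's neighbors,
    biased towards less frequently visited ones."""
    options = neighbors[current]
    weights = [1.0 / (1 + visits[r]) for r in options]
    return rng.choices(options, weights=weights)[0]

def pause_time(base, density, max_density):
    """Stop longer in denser regions, which carry more traffic load."""
    return base * (1 + density / max_density)

# Toy deployment: a 2x2 grid of regions, labelled 0..3.
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
visits = {r: 0 for r in neighbors}
rng = random.Random(42)
region = 0
for _ in range(1000):
    visits[region] += 1
    region = next_region(region, neighbors, visits, rng)
print(visits)                     # visit counts end up roughly balanced
print(pause_time(10, density=5, max_density=10))
```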
Abstract: Wireless sensor networks are a recently introduced category of ad hoc computer networks, comprised of nodes of small size and limited computing and energy resources. Such nodes are capable of measuring physical properties such as temperature, humidity, etc., of communicating wirelessly with each other and, in some cases, of interacting with their surrounding environment (through the use of electromechanical parts).
As these networks have begun to be widely available (in terms of cost and commercial hardware availability), their field of application and philosophy of use is constantly evolving. We have numerous examples of their applications, ranging from monitoring the biodiversity of a specific outdoor area to structural health monitoring of bridges, and also networks ranging from few tens of nodes to even thousands of nodes.
In this PhD thesis we investigated the following basic research lines related to wireless sensor networks:
a) their simulation,
b) the development of data propagation protocols suited to such networks and their evaluation through simulation,
c) the modelling of ``hostile'' circumstances (obstacles) during their operation and evaluation of their impact through simulation,
d) the development of a sensor network management application.
Regarding simulation, we initially placed emphasis on issues such as the effective simulation of networks of several thousands of nodes, and in that respect we developed a network simulator (simDust), which is extendable through the addition of new data propagation protocols and visualization capabilities. This simulator was used to evaluate the performance of a number of characteristic data propagation protocols for wireless sensor networks. Furthermore, we developed a new protocol (VRTP) and evaluated its performance against other similar protocols. Our studies show that the new protocol, which uses dynamic changes of the transmission range of the network nodes, performs better in certain cases than other related protocols, especially in networks containing obstacles and in the case of non-homogeneous placement of nodes.
Moreover, we emphasized the addition of ``realistic'' conditions to the simulation of such protocols, conditions that have an adverse effect on their operation. Our goal was to introduce a model for obstacles that adds little computational overhead to a simulator, and also to study the effect of the inclusion of such a model on data propagation protocols that use geographic information (absolute or relative). Such protocols are relatively sensitive to dynamic topology changes and network conditions. Through our experiments, we show that the inclusion of obstacles during simulation can have a significant effect on these protocols.
Finally, regarding applications, we initially proposed an architecture (WebDust/ShareSense) for the management of such networks, providing basic capabilities for managing them and developing applications on top. Features that set it apart are the capability of managing multiple heterogeneous sensor networks, openness, and the use of a peer-to-peer architecture for the interconnection of multiple sensor networks. A large part of the proposed architecture was implemented, while the overall architecture was extended to also include additional visualization capabilities.
Abstract: Wireless sensor networks are comprised of a vast number of devices, situated in an area of interest that self organize in a structureless network, in order to monitor/record/measure an environmental variable or phenomenon and subsequently to disseminate the data to the control center.
Here we present research focused on the development, simulation and evaluation of energy-efficient algorithms; our basic goal is to minimize energy consumption. Despite technology advances, the problem of optimizing energy use remains valid, since current and emerging hardware solutions fail to solve it.
We aim to reduce communication cost by introducing novel techniques that facilitate the development of new algorithms. We investigated techniques for the distributed adaptation of a protocol's operation using information available locally on every node; thus, through local choices we improve overall performance. We propose techniques for collecting and exploiting limited local knowledge of the network conditions. In an energy-efficient manner, we collect additional information which is used to achieve improvements such as forming energy-efficient, low-latency and fault-tolerant paths to route data. We investigate techniques for managing mobility in networks where movement is a characteristic of the control center as well as of the sensors. We examine methods for traversing and covering the network field based on probabilistic movement that uses local criteria to favor certain areas.
The algorithms we develop based on these techniques operate (a) at a low level, managing devices, (b) at the routing layer and (c) network-wide, achieving macroscopic behavior through local interactions. The algorithms are applied in network cases that differ in density, node distribution and available energy, and also in fundamentally different models, such as under faults, with incremental node deployment and with mobile nodes. In all these settings our techniques achieve significant gains, thus distinguishing their value as tools of algorithmic design.
Abstract: We study here the problem of determining the majority type in an arbitrary connected network, each vertex of which initially has one of two possible types. The vertices may have a few additional possible states and can interact in pairs only if they share an edge. Any (population) protocol is required to stabilize in the initial majority. We first present and analyze a protocol with 4 states per vertex that always computes the initial majority value, under any fair scheduler. As we prove, this protocol is optimal, in the sense that there is no population protocol that always computes majority with fewer than 4 states per vertex. However, this does not rule out the existence of a protocol with 3 states per vertex that is correct with high probability. To this end, we examine a very natural majority protocol with 3 states per vertex, introduced in [Angluin et al. 2008], where its performance has been analyzed for the clique graph. We study the performance of this protocol in arbitrary networks. We prove that, when the two initial states are put uniformly at random on the vertices, this protocol converges to the initial majority with probability higher than the probability of converging to the initial minority. In contrast, we present an infinite family of graphs on which the protocol can fail whp, even when the difference between the initial majority and the initial minority is n − Θ(ln n). We also present another infinite family of graphs in which the protocol of Angluin et al. takes an expected exponential time to converge. These two negative results build upon a very positive result concerning the robustness of the protocol on the clique. Surprisingly, the resistance of the clique to failure causes the failure in general graphs. Our techniques use new domination and coupling arguments for suitably defined processes whose dynamics capture the antagonism between the states involved.
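The 3-state protocol of Angluin et al. can be simulated along the following lines (an illustrative sketch; the state names, the random pairwise scheduler and the clique example are our choices):

```python
import random

def run_majority(states, edges, rng, max_steps=200_000):
    """Three-state approximate-majority dynamics: repeatedly activate
    a random edge with a random orientation; the initiator u updates
    the responder v by the rules
      A,B -> A,_   B,A -> B,_   A,_ -> A,A   B,_ -> B,B
    where '_' is the blank state. Returns the consensus value, or
    None if the process did not stabilize within max_steps."""
    for _ in range(max_steps):
        u, v = rng.choice(edges)
        if rng.random() < 0.5:
            u, v = v, u
        if states[u] != "_":
            if states[v] == "_":
                states[v] = states[u]    # blank adopts the opinion
            elif states[v] != states[u]:
                states[v] = "_"          # conflicting opinion is blanked
        if len(set(states)) == 1:
            return states[0]
    return None

rng = random.Random(1)
n = 50
states = ["A"] * 40 + ["B"] * 10                 # clear initial majority: A
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # clique
result = run_majority(states, edges, rng)
print(result)
```

On the clique with a large initial gap, the dynamics converge to the initial majority with overwhelming probability; the abstract's graph families show this is not the case in general.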
Abstract: A central problem in distributed computing and telecommunications is the establishment of common knowledge between two computing entities. An immediate use of such common knowledge is in the initiation of a secure communication session between two entities, since the two entities may use this common knowledge in order to produce a secret key for use with some symmetric cipher. The dynamic establishment of shared information (e.g. a secret key) between two entities is particularly important in networks with no predetermined structure, such as wireless mobile ad-hoc networks. In such networks, nodes establish and terminate communication sessions dynamically with other nodes which may have never been encountered before, in order to somehow exchange information which will enable them to subsequently communicate in a secure manner. In this paper we give and theoretically analyze a protocol that enables two entities, initially possessing a string each, to securely eliminate inconsistent bit positions, obtaining strings with a larger percentage of similarities. This can help the nodes establish a shared set of bits and use it as a key with some shared-key encryption scheme.
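The following generic block-parity sketch is not the paper's protocol; it only illustrates how eliminating inconsistent positions raises the similarity of the surviving strings, at the cost of discarded bits and one leaked parity bit per block (a leakage a real protocol must account for):

```python
import random

def keep_agreeing_blocks(a, b, block=4):
    """Split both bit strings into blocks, publicly compare one parity
    bit per block, and keep only blocks whose parities agree. Surviving
    positions are more likely to be identical, so similarity rises."""
    ka, kb = [], []
    for i in range(0, len(a) - block + 1, block):
        if sum(a[i:i + block]) % 2 == sum(b[i:i + block]) % 2:
            ka += a[i:i + block]
            kb += b[i:i + block]
    return ka, kb

def similarity(x, y):
    return sum(p == q for p, q in zip(x, y)) / len(x)

rng = random.Random(7)
a = [rng.randint(0, 1) for _ in range(4000)]
b = [bit if rng.random() < 0.9 else 1 - bit for bit in a]  # ~90% agreement
ka, kb = keep_agreeing_blocks(a, b)
print(similarity(a, b), similarity(ka, kb))  # second value is higher
```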
Abstract: In this paper, we discuss key findings, technological challenges and socioeconomic opportunities in Smart City era. Most of the conclusions were gathered during SmartSantander project, an EU project that is developing a city-scale testbed for IoT and Future Internet experimentation, providing an integrated framework for implementing Smart City services.
Abstract: We here present Fun in Numbers (FinN), a framework for developing pervasive applications and interactive installations for entertainment and educational purposes. Using ad hoc mobile wireless sensor network nodes as the enabling devices, FinN allows for the quick prototyping of applications that utilize input from multiple physical sources (sensors and other means of interfacing), by offering a set of programming templates and services, such as topology discovery, localization and synchronization, that hide the underlying complexity. We present the target application domains of FinN, along with a set of multiplayer games and interactive installations. We describe the overall architecture of our platform and discuss some key implementation issues of the application domains. Finally, we present the experience gained by deploying the applications developed with our platform.
Abstract: We describe our experiences from implementing and integrating a new job scheduling algorithm in the gLite Grid middleware and present experimental results that compare it to the existing gLite scheduling algorithms. It is the first time that gLite scheduling algorithms are put under test and compared with a new algorithm under the same conditions. We describe the problems that were encountered and solved, going from theory and simulations to practice and the actual implementation of our scheduling algorithm. In this work we also describe the steps one needs to follow in order to develop and test a new scheduling algorithm in gLite. We present the methodology followed and the testbed that was set up for the comparisons. Our research sheds light on some of the problems of the existing gLite scheduling algorithms and makes clear the need for the development of new ones.
Abstract: The domain of smart cities is currently burgeoning, with a lot of potential for scientific and socio-economic innovation gradually being revealed. It is also becoming apparent that cross-discipline research will be instrumental in designing and building smarter cities, where IoT technology is becoming omnipresent. SmartSantander is an FP7 project that built a massive city-scale IoT testbed aiming to provide both a tool for the research community and a functional system for the local government to implement operational city services. In this work, we present key smart city projects, main application domains and representative smart city frameworks that reflect the latest advances in the smart cities domain and our own experience through SmartSantander. The project has deployed 51,910 IoT endpoints, offering a massive infrastructure to the community, as well as functional system services and a number of end-user applications. Based on these aspects, we identify and discuss a number of key scientific and technological challenges. We also present an overview of the developed system components and applications, and discuss the ways that current smart city challenges were handled in the project.
Abstract: Content integration of web data sources is becoming increasingly important for the formation of the next generation of information systems. A common performance bottleneck faced by all proposed solutions is the network overhead incurred when contacting the integrated e-sites. With this paper we contribute ongoing work on D.I.C.E. and Co.In.S.: a domain-independent Content Integration System and its Data Integration Cache Engine. DICE constitutes a cache engine utilizing novel techniques for operating as a fully active semantic cache. We show how our contributions can be applied in the field of content integration in order to improve the response time of content integration systems, representative of a large class of e-commerce applications. We have implemented the proposed architecture and are currently developing a number of applications.
Abstract: Direct routing is the special case of bufferless routing where N packets, once injected into the network, must be routed along specific paths to their destinations without conflicts. We give a general treatment of three facets of direct routing: (i) Algorithms. We present a polynomial-time greedy algorithm for arbitrary direct routing problems which is worst-case optimal, i.e., there exist instances for which no direct routing algorithm is better than the greedy. We apply variants of this algorithm to commonly used network topologies. In particular, we obtain near-optimal routing time for the tree and the d-dimensional mesh, given arbitrary sources and destinations; for the butterfly and the hypercube, the same result holds for random destinations. (ii) Complexity. By a reduction from Vertex Coloring, we show that Direct Routing is inapproximable, unless P=NP. (iii) Lower Bounds for Buffering. We show the existence of routing problems which cannot be solved efficiently with direct routing. To solve these problems, any routing algorithm needs buffers. We give nontrivial lower bounds on such buffering requirements for general routing algorithms.
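A minimal sketch of the greedy idea in (i), under a simplified model where the only conflicts are two packets crossing the same edge at the same time step (the example paths and this conflict model are our illustrative assumptions, not the paper's full algorithm):

```python
def greedy_direct_routing(paths):
    """Greedy direct routing: scan packets in order and assign each the
    earliest injection time t such that the packet, crossing the i-th
    edge of its path at time t + i, never shares an (edge, time) slot
    with an already scheduled packet."""
    used = set()                      # occupied (edge, time) slots
    times = []
    for path in paths:
        t = 0
        while any((e, t + i) in used for i, e in enumerate(path)):
            t += 1
        used.update((e, t + i) for i, e in enumerate(path))
        times.append(t)
    return times

# Three packets on the path graph 0-1-2-3 (edges as sorted vertex pairs).
p1 = [(0, 1), (1, 2), (2, 3)]
p2 = [(1, 2), (2, 3)]
p3 = [(0, 1)]
print(greedy_direct_routing([p1, p2, p3]))  # [0, 0, 1]
```

Here p2 can start at time 0 because it trails p1 by one edge, while p3 must wait one step for edge (0, 1) to clear.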
Abstract: Peer-to-Peer (P2P) search requires intelligent decisions for query routing: selecting the best peers to which a given query, initiated at some peer, should be forwarded for retrieving additional search results. These decisions are based on statistical summaries for each peer, which are usually organized on a per-keyword basis and managed in a distributed directory of routing indices. Such architectures disregard the possible correlations among keywords. Together with the coarse granularity of per-peer summaries, which are mandated for scalability, this limitation may lead to poor search result quality.
This paper develops and evaluates two solutions to this problem, sk-STAT based on single-key statistics only, and mk-STAT based on additional multi-key statistics. For both cases, hash sketch synopses are used to compactly represent a peer's data items and are efficiently disseminated in the P2P network to form a decentralized directory. Experimental studies with Gnutella and Web data demonstrate the viability and the trade-offs of the approaches.
Abstract: We present the first approximate distance oracle for sparse directed networks with time-dependent arc-travel-times determined by continuous, piecewise linear, positive functions possessing the FIFO property. Our approach precomputes (1 + ε)-approximate distance summaries from selected landmark vertices to all other vertices in the network, and provides two sublinear-time query algorithms that deliver constant and (1 + σ)-approximate shortest-travel-times, respectively, for arbitrary origin-destination pairs in the network. Our oracle is based only on the sparsity of the network, along with two quite natural assumptions about travel-time functions which allow the smooth transition towards asymmetric and time-dependent distance metrics.
Abstract: When one engineers distributed algorithms, some special characteristics arise that are different from conventional (sequential or parallel) computing paradigms. These characteristics include: the need for either a scalable real network environment or a platform supporting a simulated distributed environment; the need to incorporate asynchrony, where arbitrary asynchrony is hard, if not impossible, to implement; and the generation of “difficult” input instances, which is a particular challenge. In this work, we identify some of the methodological issues required to address the above characteristics in distributed algorithm engineering and illustrate certain approaches to tackle them via case studies. Our discussion begins by addressing the need for a simulation environment and how asynchrony is incorporated when experimenting with distributed algorithms. We then proceed by suggesting two methods for generating difficult input instances for distributed experiments, namely a game-theoretic one and another based on simulations of adversarial arguments or lower bound proofs. We give examples of the experimental analysis of a pursuit-evasion protocol and of a shared memory problem in order to demonstrate these ideas. We then address a particularly interesting case of conducting experiments with algorithms for mobile computing and tackle the important issue of motion of processes in this context. We discuss the two-tier principle as well as a concurrent random walks approach on an explicit representation of motions in ad-hoc mobile networks, which allow at least for average-case analysis and measurements and may give worst-case inputs in some cases. Finally, we discuss a useful interplay between theory and practice that arises in modeling attack propagation in networks.
Abstract: In this survey, we describe the state of the art in experimentally-driven research on networks of tiny artifacts. The main topics are existing and planned practical testbeds, software simulations, and hybrid approaches; in addition, we describe a number of current studies undertaken by the authors.
Abstract: Wireless sensor networks are comprised of a vast number of ultra-small fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, which co-operate to accomplish a large sensing task. Such networks can be very useful in practice in applications that require fine-grain monitoring of physical environment subjected to critical conditions (such as inaccessible terrains or disaster places). Very large numbers of sensor devices can be deployed in areas of interest and use self-organization and collaborative methods to form deeply networked environments. Features including the huge number of sensor devices involved, the severe power, computational and memory limitations, their dense deployment and frequent failures, pose new design and implementation aspects. The efficient and robust realization of such large, highly-dynamic, complex, non-conventional environments is a challenging algorithmic and technological task. In this work we consider certain important aspects of the design, deployment and operation of distributed algorithms for data propagation in wireless sensor networks and discuss some characteristic protocols, along with an evaluation of their performance.
Abstract: This paper deals with systems of multiple mobile robots each of which observes the positions of the other robots and moves to a new position so that eventually the robots form a circle. In the model we study, the robots are anonymous and oblivious, in the sense that they cannot be distinguished by their appearance and do not have a common x-y coordinate system, while they are unable to remember past actions.
We propose a new distributed algorithm for circle formation on the plane. We prove that our algorithm is correct and provide an upper bound for its performance. In addition, we conduct an extensive and detailed comparative simulation experimental study with the DK algorithm described in [7]. The results show that our algorithm is very simple and takes considerably less time to execute than algorithm DK.
Abstract: An ad hoc mobile network is a collection of mobile hosts, with wireless communication capabilities, forming a temporary network without the aid of any established fixed infrastructure. In such networks, topological connectivity is subject to frequent, unpredictable change. Our work focuses on networks with high rate of such changes to connectivity. For such dynamically changing networks we propose protocols which exploit the co-ordinated (by the protocol) motion of a small part of the network. We show that such protocols can be designed to work correctly and efficiently even in the case of arbitrary (but not malicious) movements of the hosts not affected by the protocol. We also propose a methodology for the analysis of the expected behavior of protocols for such networks, based on the assumption that mobile hosts (those whose motion is not guided by the protocol) conduct concurrent random walks in their motion space. In particular, our work examines the fundamental problem of communication and proposes distributed algorithms for it. We provide rigorous proofs of their correctness, and also give performance analyses by combinatorial tools. Finally, we have evaluated these protocols by experimental means.
Abstract: We exploit the game-theoretic ideas presented in [12] to study the vertex coloring problem in a distributed setting. The vertices of the graph are seen as players in a suitably defined strategic game, where each player has to choose some color, and the payoff of a vertex is the total number of players that have chosen the same color as its own. We extend here the results of [12] by showing that, if any subset of non-neighboring vertices perform a selfish step (i.e., change their colors in order to increase their payoffs) in parallel, then a (Nash equilibrium) proper coloring, using a number of colors within several known upper bounds on the chromatic number, can still be reached in polynomial time. We also present an implementation of the distributed algorithm in wireless networks of tiny devices and evaluate the performance in simulated and experimental environments. The performance analysis indicates that it is the first practically implementable distributed algorithm.
Abstract: Counting items in a distributed system, and estimating the cardinality of multisets in particular, is important for a large variety of applications and a fundamental building block for emerging Internet-scale information systems. Examples of such applications range from optimizing query access plans in peer-to-peer data sharing, to computing the significance (rank/score) of data items in distributed information retrieval. The general formal problem addressed in this article is computing the network-wide distinct number of items with some property (e.g., distinct files with file name containing “spiderman”) where each node in the network holds an arbitrary subset, possibly overlapping the subsets of other nodes. The key requirements that a viable approach must satisfy are: (1) scalability towards very large network size, (2) efficiency regarding messaging overhead, (3) load balance of storage and access, (4) accuracy of the cardinality estimation, and (5) simplicity and easy integration in applications. This article contributes the DHS (Distributed Hash Sketches) method for this problem setting: a distributed, scalable, efficient, and accurate multiset cardinality estimator. DHS is based on hash sketches for probabilistic counting, but distributes the bits of each counter across network nodes in a judicious manner based on principles of Distributed Hash Tables, paying careful attention to fast access and aggregation as well as update costs. The article discusses various design choices, exhibiting tunable trade-offs between estimation accuracy, hop-count efficiency, and load distribution fairness. We further contribute a full-fledged, publicly available, open-source implementation of all our methods, and a comprehensive experimental evaluation for various settings.
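The hash-sketch building block behind DHS is the Flajolet–Martin probabilistic counter: hashing sends each item to a geometrically distributed bit position, and the run of set low-order bits estimates the distinct count, independently of duplicates. Below is a minimal single-node sketch of that idea (class and parameter names are ours for illustration; DHS itself distributes the bits of each such counter across DHT nodes):

```python
import hashlib

def rho(x: int) -> int:
    """1-based position of the least significant 1-bit, as in Flajolet-Martin."""
    if x == 0:
        return 32
    r = 1
    while x & 1 == 0:
        x >>= 1
        r += 1
    return r

class HashSketch:
    """Single-node Flajolet-Martin sketch with several bitmaps for accuracy."""
    PHI = 0.77351  # Flajolet-Martin correction constant

    def __init__(self, num_maps: int = 64):
        self.num_maps = num_maps
        self.bitmaps = [0] * num_maps

    def add(self, item: str) -> None:
        # Each bitmap uses an independent hash; duplicates set the same bits again.
        for i in range(self.num_maps):
            h = int.from_bytes(hashlib.sha256(f"{i}:{item}".encode()).digest()[:4], "big")
            self.bitmaps[i] |= 1 << (rho(h) - 1)

    def estimate(self) -> float:
        # Average, over all bitmaps, of the position of the lowest unset bit.
        total = 0
        for bm in self.bitmaps:
            r = 0
            while bm & (1 << r):
                r += 1
            total += r
        return (2 ** (total / self.num_maps)) / self.PHI
```

Because adding an item is a bitwise OR, re-inserting duplicates never changes the estimate, which is what makes the sketch suitable for distinct counting over overlapping subsets.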
Abstract: Swarms of small devices, that bridge the physical with the digital domain and communicate with each other, have become essential parts of our daily activity, forming networks to support myriads of new and exciting applications. Technology requires such systems to be dependable and adaptive to the user needs, to sudden changes of the environment, and to specific applications characteristics.
Abstract: In this paper we describe a new simulation platform for heterogeneous distributed systems comprised of small programmable objects (e.g., wireless sensor networks) and traditional networked processors. Simulating such systems is complicated because of the need to coordinate compilers and simulators, often with very different interfaces, options, and fidelities.
Our platform (which we call ADAPT) is a flexible and extensible environment that provides a highly scalable simulator with unique characteristics. While the platform provides advanced functionality such as real-time simulation monitoring, custom topologies and scenarios, mixing real and simulated nodes, etc., the effort required by the user and the impact to her code is minimal. We here present its architecture, the most important design decisions, and discuss its distinct features and functionalities. We integrate our simulator to the Sun SPOT platform to enable simulation of sensing applications that employ both low-end and high-end devices programmed with different languages that are internetworked with heterogeneous technologies. We believe that ADAPT will make the development of applications that use small programmable objects more widely accessible and will enable researchers to conduct a joint research approach that combines both theory and practice.
Abstract: Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially for distributed settings, when the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments, with three different real-life datasets and using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
Abstract: We address the issue of measuring distribution fairness in Internet-scale networks. This problem has several interesting instances encountered in different applications, ranging from assessing the distribution of load between network nodes for load balancing purposes, to measuring node utilization for optimal resource exploitation, and to guiding autonomous decisions of nodes in networks built with market-based economic principles. Although some metrics have been proposed, particularly for assessing load balancing algorithms, they fall short. We first study the appropriateness of various known and previously proposed statistical metrics for measuring distribution fairness. We put forward a number of required characteristics for appropriate metrics. We propose and comparatively study the appropriateness of the Gini coefficient (G) for this task. Our study reveals as most appropriate the metrics of G, the fairness index (FI), and the coefficient of variation (CV) in this order. Second, we develop six distributed sampling algorithms to estimate metrics online efficiently, accurately, and scalably. One of these algorithms (2-PRWS) is based on two effective optimizations of a basic algorithm, and the other two (the sequential sampling algorithm, LBS-HL, and the clustered sampling one, EBSS) are novel, developed especially to estimate G. Third, we show how these metrics, and especially G, can be readily utilized online by higher-level algorithms, which can now know when to best intervene to correct unfair distributions (in particular, load imbalances). We conclude with a comprehensive experimentation which comparatively evaluates both the various proposed estimation algorithms and the three most appropriate metrics (G, CV, and FI). Specifically, the evaluation quantifies the efficiency (in terms of number of the messages and a latency indicator), precision, and accuracy achieved by the proposed algorithms when estimating the competing fairness metrics.
The central conclusion is that the proposed metric, G, can be estimated with a small number of messages and latency, regardless of the skew of the underlying distribution.
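For reference, the Gini coefficient G that the abstract's sampling algorithms estimate online can be computed exactly from the full value set with the standard rank-based formula; a minimal centralized sketch (the function name is ours):

```python
def gini(values):
    """Gini coefficient G of a non-negative distribution:
    0 = perfectly fair (all equal), approaching 1 = maximally unfair."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Rank-based formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    # where i is the 1-based rank over the sorted values.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n
```

A perfectly balanced load vector yields G = 0, while a load concentrated on a single node out of four yields G = 0.75; a distributed estimator only needs to approximate this value from sampled node loads.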
Abstract: We propose a general policy to allocate subcarriers to time-varying traffic in a flexible OFDM optical
network. We compare the OFDM network performance to that of a fixed-grid WDM network using simulations.
Abstract: A new dynamic Interpolation Search (IS) data structure is presented that achieves O(log log n) search time with high probability on unknown continuous or even discrete input distributions with measurable probability of key collisions, including power law and Binomial distributions. No such previous result holds for IS when the probability of key collisions is measurable. Moreover, our data structure exhibits O(1) expected search time with high probability for a wide class of input distributions that contains all those for which o(log log n) expected search time was previously known.
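For context, the classic static interpolation search that the abstract's dynamic structure generalizes probes the array position suggested by linear interpolation between the endpoint keys, which gives the O(log log n) expected probe count on smooth distributions. A minimal sketch on a sorted Python list (illustrative only; not the paper's dynamic data structure):

```python
def interpolation_search(arr, key):
    """Search a sorted list of numbers; return an index of key, or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[hi] == arr[lo]:
            pos = lo  # all keys in range are equal; avoid division by zero
        else:
            # Probe the position predicted by linear interpolation.
            pos = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == key:
            return pos
        if arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

On uniformly spread keys each probe shrinks the search range to roughly its square root, whereas handling key collisions (repeated keys) is exactly the regime the abstract's structure addresses.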
Abstract: Using a set of geometric containers to speed up shortest path queries in a weighted graph has been proven a useful tool for dealing with large sparse graphs. Given a layout of a graph G=(V,E), we store, for each edge (u,v) ∈ E, the bounding box of all nodes t ∈ V for which a shortest u-t-path starts with (u,v). Shortest path queries can then be answered by Dijkstra's algorithm restricted to edges where the corresponding bounding box contains the target.
In this paper, we present new algorithms as well as an empirical study for the dynamic case of this problem, where edge weights are subject to change and the bounding boxes have to be updated. We evaluate the quality and the time for different update strategies that guarantee correct shortest paths in an interesting application to railway information systems, using real-world data from six European countries.
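A minimal sketch of the container idea on a small digraph (function names and the 2D layout are illustrative): precompute, for each edge (u, v), the bounding box of all targets whose shortest path from u starts with (u, v), then answer queries with a Dijkstra that skips every edge whose box does not contain the target's coordinates.

```python
import heapq

def shortest_first_edges(graph, src):
    """Dijkstra from src; return {t: first edge of a shortest src->t path}."""
    dist, first = {src: 0.0}, {}
    pq = [(0.0, src, None)]
    while pq:
        d, u, fe = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        if fe:
            first[u] = fe
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v, fe or (src, v)))
    return first

def build_containers(graph, layout):
    """box[(u, v)] = bounding box of all targets t whose shortest
    u->t path starts with edge (u, v), in the given 2D layout."""
    box = {}
    for u in graph:
        for t, e in shortest_first_edges(graph, u).items():
            x, y = layout[t]
            x0, y0, x1, y1 = box.get(e, (x, y, x, y))
            box[e] = (min(x0, x), min(y0, y), max(x1, x), max(y1, y))
    return box

def pruned_dijkstra(graph, layout, box, s, t):
    """Query-time Dijkstra relaxing only edges whose container contains t."""
    tx, ty = layout[t]
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            b = box.get((u, v))
            if b is None or not (b[0] <= tx <= b[2] and b[1] <= ty <= b[3]):
                continue  # container cannot hold t: this edge never starts a shortest path to t
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

Pruning stays exact because, from every vertex on a shortest s-t path, some outgoing edge starting a shortest path to t has t inside its box by construction; the dynamic problem studied in the paper is then how to keep these boxes valid when edge weights change.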
Abstract: Service Oriented Computing and its most famous implementation technology Web Services (WS) are becoming an important enabler of networked business models. Discovery mechanisms are a critical factor to the overall utility of Web Services. So far, discovery mechanisms based on the UDDI standard rely on many centralized and area-specific directories, which poses information stress problems such as performance bottlenecks and fault tolerance. In this context, decentralized approaches based on Peer to Peer overlay networks have been proposed by many researchers as a solution. In this paper, we propose a new structured P2P overlay network infrastructure designed for Web Services Discovery. We present theoretical analysis backed up by experimental results, showing that the proposed solution outperforms popular decentralized infrastructures for web discovery, Chord (and some of its successors), BATON (and its successor) and Skip-Graphs.
Abstract: We present a new overlay, called the Deterministic Decentralized tree (D2-tree). The D2-tree compares favorably to other overlays for the following reasons: (a) it provides matching and better complexities, which are deterministic for the supported operations; (b) the management of nodes (peers) and elements are completely decoupled from each other; and (c) an efficient deterministic load-balancing mechanism is presented for the uniform distribution of elements into nodes, while at the same time probabilistic optimal bounds are provided for the congestion of operations at the nodes. The load-balancing scheme of elements into nodes is deterministic and general enough to be applied to other hierarchical tree-based overlays. This load-balancing mechanism is based on an innovative lazy weight-balancing mechanism, which is interesting in its own right.
Abstract: This paper deals with early obstacle recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast convergent routing algorithm with proactive dead-end detection, together with a formal definition and description of dead-ends. Secondly, we present a generalization of this algorithm which improves performance in all-to-many and all-to-all traffic patterns. In a third part, we prove that this algorithm produces paths that are optimal up to a constant factor of 2đ+1. In a fourth part, we consider the reactive version of the algorithm, which is an extension of a previously known early obstacle detection algorithm. Finally, we give experimental results to illustrate the efficiency of our algorithms in different scenarios.
Abstract: In this paper we present the efficient burst reservation protocol (EBRP), suitable for bufferless optical burst switching (OBS) networks. The EBRP protocol is a two-way reservation scheme that employs timed and in-advance reservation of resources. In the EBRP protocol, timed reservations are relaxed, introducing a reservation time duration parameter that is negotiated during the call setup phase. This feature allows bursts to reserve resources beyond their actual size to increase their successful forwarding probability and can be used to provide quality-of-service (QoS) differentiation. The EBRP protocol is suitable for OBS networks and can guarantee a low blocking probability for bursts that can tolerate the round-trip delay associated with the two-way reservation. We present the main features of the proposed protocol and describe in detail the timing considerations regarding the call setup phase and the actual reservation process. Furthermore, we show evaluation results and compare the EBRP performance against two other typical reservation schemes, a tell-and-wait and a tell-and-go (just-enough-time) like protocol. EBRP has been developed for the control plane of the IST-LASAGNE project.
Abstract: We investigate the Vehicle Routing Problem with Time Windows (VRPTW) under an eco-friendly framework that demands the delivery of balanced and compact customer clusters. We present a new approach consisting of three major phases: (i) a first clustering of customers with compatible time windows; (ii) a second clustering of customers with close geographic proximity based on various methods (natural cuts, KaHIP, quad trees); (iii) a refinement phase that either splits a cluster into smaller ones, or merges clusters to form a bigger compact cluster. Our approach turns out to be beneficial when used in an on-line environment, where changes to the initial tour are requested (add a new customer to the tour or drop some customers). The new method serves as a warm starting point for re-evaluating and further optimizing the solution of VRPTW. Experiments with real data sets demonstrate that our approach compares favorably with standard approaches that start from a basic (cold) solution.
Abstract: Two important performance parameters of distributed, rate-based flow control algorithms are their locality and convergence complexity. The former is characterized by the amount of global knowledge that is available to their scheduling mechanisms, while the latter is defined as the number of update operations performed on rates of individual sessions until max-min fairness is reached. Optimistic algorithms allow any session to intermediately receive a rate larger than its max-min fair rate; bottleneck algorithms finalize the rate of a session only if it is restricted by a certain, highly congested link of the network. In this work, we present a comprehensive collection of lower and upper bounds on convergence complexity, under varying degrees of locality, for optimistic, bottleneck, rate-based flow control algorithms. Say that an algorithm is oblivious if its scheduling mechanism uses no information of either the session rates or the network topology. We present a novel, combinatorial construction of a capacitated network, which we use to establish a fundamental lower bound of dn/4 + n/2 on the convergence complexity of any oblivious algorithm, where n is the number of sessions laid out on a network, and d, the session dependency, is a measure of topological dependencies among sessions. Moreover, we devise a novel simulation proof to establish that, perhaps surprisingly, the lower bound of dn/4 + n/2 on convergence complexity still holds for any partially oblivious algorithm, in which the scheduling mechanism is allowed to use information about session rates, but is otherwise unaware of network topology. On the positive side, we prove that the lower bounds for oblivious and partially oblivious algorithms are both tight. We do so by presenting optimal oblivious algorithms, which converge after dn/2 + n/2 update operations are performed in the worst case.
To complete the picture, we show that linear convergence complexity can indeed be achieved if information about both session rates and network topology is available to schedulers. We present a counterexample, nonoblivious algorithm, which converges within an optimal number of n update operations. Our results imply a surprising convergence complexity collapse of oblivious and partially oblivious algorithms, and a convergence complexity separation between (partially) oblivious and nonoblivious algorithms for optimistic, bottleneck rate-based flow control.
Abstract: In this work we study the tractability of well-supported approximate Nash Equilibria (SuppNE in short) in bimatrix games. In view of the apparent intractability of constructing Nash Equilibria (NE in short) in polynomial time, even for bimatrix games, understanding the limitations of the approximability of the problem is of great importance. We initially prove that SuppNE are immune to the addition of arbitrary real vectors to the rows (columns) of the row (column) player's payoff matrix. Consequently we propose a polynomial time algorithm (based on linear programming) that constructs a 0.5-SuppNE for arbitrary win-lose games. We then parameterize our technique for win-lose games, in order to apply it to arbitrary (normalized) bimatrix games. Indeed, this new technique leads to a weaker φ-SuppNE for win-lose games, where φ = (√5 − 1)/2 is the golden ratio. Nevertheless, this parameterized technique extends nicely to a technique for arbitrary [0, 1]-bimatrix games, which assures a 0.658-SuppNE in polynomial time. To our knowledge, these are the first polynomial time algorithms providing ε-SuppNE of normalized or win-lose bimatrix games, for some nontrivial constant ε ∈ [0, 1), bounded away from 1.
Abstract: We propose a new data dissemination protocol for wireless sensor networks, that basically pulls some additional knowledge about the network in order to subsequently improve data forwarding towards the sink. This extra information is still local, limited and obtained in a distributed manner. This extra knowledge is acquired by only a small fraction of sensors thus the extra energy cost only marginally affects the overall protocol efficiency. The new protocol has low latency and manages to propagate data successfully even in the case of low densities. Furthermore, we study in detail the effect of failures and show that our protocol is very robust. In particular, we implement and evaluate the protocol using large scale simulation, showing that it significantly outperforms well known relevant solutions in the state of the art.
Abstract: Smart Dust is a set of a vast number of ultra-small fully autonomous computing and communication devices, with very restricted energy and computing capabilities, that co-operate to quickly and efficiently accomplish a large sensing task. Smart Dust can be very useful in practice, i.e., in the local detection of a remote crucial event and the propagation of data reporting its realization. In this work we make an effort towards the research on smart dust from an algorithmic point of view. We first provide a simple but realistic model for smart dust and present an interesting problem, which is how to propagate efficiently information on an event detected locally. Then we present various smart dust protocols for local detection and propagation that are simple enough to be implemented on real smart dust systems, and perform, under some simplifying assumptions, a rigorous average case analysis of their efficiency and energy consumption (and their interplay). This analysis leads to concrete results showing that our protocols are very efficient and robust. We also validate the analytical results by extensive experiments.
Abstract: Andrews et al. [Automatic method for hiding latency in high bandwidth networks, in: Proceedings of the ACM Symposium on Theory of Computing, 1996, pp. 257–265; Improved methods for hiding latency in high bandwidth networks, in: Proceedings of the Eighth Annual ACM Symposium on Parallel Algorithms and Architectures, 1996, pp. 52–61] introduced a number of techniques for automatically hiding latency when performing simulations of networks with unit delay links on networks with arbitrary unequal delay links. In their work, they assume that processors of the host network are identical in computational power to those of the guest network being simulated. They further assume that the links of the host are able to pipeline messages, i.e., they are able to deliver P packets in time O(P + d) where d is the delay on the link.
In this paper we examine the effect of eliminating one or both of these assumptions. In particular, we provide an efficient simulation of a linear array of homogeneous processors connected by unit-delay links on a linear array of heterogeneous processors connected by links with arbitrary delay. We show that the slowdown achieved by our simulation is optimal. We then consider the case of simulating cliques by cliques; i.e., a clique of heterogeneous processors with arbitrary delay links is used to simulate a clique of homogeneous processors with unit delay links. We reduce the slowdown from the obvious bound of the maximum delay link to the average of the link delays. In the case of the linear array we consider both links with and without pipelining. For the clique simulation the links are not assumed to support pipelining.
The main motivation of our results (as was the case with Andrews et al.) is to mitigate the degradation of performance when executing parallel programs designed for different architectures on a network of workstations (NOW). In such a setting it is unlikely that the links provided by the NOW will support pipelining and it is quite probable the processors will be heterogeneous. Combining our result on clique simulation with well-known techniques for simulating shared memory PRAMs on distributed memory machines provides an effective automatic compilation of a PRAM algorithm on a NOW.
Abstract: One of the most important features in image analysis and understanding is shape. Mathematical morphology is the image processing branch that deals with shape analysis. The definition of all morphological transformations is based on two primitive operations, i.e. dilation and erosion. Since many applications require the solution of morphological problems in real time, researching time efficient algorithms for these two operations is crucial.
In this paper, efficient algorithms for the binary as well as the grey level dilation and erosion are presented and evaluated for an advanced associative processor. It is shown through simulation results that the above architecture is near optimal in the binary case and is also as efficient as the array processor with a 2D-mesh interconnection in the grey level case. Finally, it is proven that the implementation of this image processing machine is economically feasible.
Abstract: The problem of finding an implicit representation for a graph such that vertex adjacency can be tested quickly is fundamental to all graph algorithms. In particular, it is possible to represent sparse graphs on n vertices using O(n) space such that vertex adjacency is tested in O(1) time. We show here how to construct such a representation efficiently by providing simple and optimal algorithms, both in a sequential and a parallel setting. Our sequential algorithm runs in O(n) time. The parallel algorithm runs in O(log n) time using O(n/log n) CRCW PRAM processors, or in O(log n log* n) time using O(n/log n log* n) EREW PRAM processors. Previous results for this problem are based on matroid partitioning and thus have a high complexity.
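The standard idea behind such representations is to orient every edge so that out-degrees stay bounded (for instance by the graph's degeneracy, which is constant for planar and similarly sparse families); each vertex then stores only its short out-list, and adjacency is tested by scanning two constant-size lists. A sketch of that folklore technique via greedy min-degree peeling (illustrative; not necessarily the exact algorithm of the paper):

```python
from collections import defaultdict

def orient_by_degeneracy(n, edges):
    """Orient each edge out of whichever endpoint is peeled first by
    repeated minimum-degree removal; the resulting out-degree is bounded
    by the degeneracy d of the graph, giving O(n) total space for d = O(1)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in range(n)}
    removed = [False] * n
    out = [[] for _ in range(n)]
    for _ in range(n):
        # Pick a remaining vertex of minimum degree and peel it.
        u = min((v for v in range(n) if not removed[v]), key=lambda v: deg[v])
        removed[u] = True
        for w in adj[u]:
            if not removed[w]:
                out[u].append(w)  # orient edge u -> w
                deg[w] -= 1
    return out

def adjacent(out, u, v):
    """Adjacency test by scanning two out-lists of size at most d."""
    return v in out[u] or u in out[v]
```

Each edge is stored exactly once, so the representation uses O(n) space on graphs of bounded degeneracy, and the two-list scan runs in O(d) = O(1) time.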
Abstract: In large-scale or evolving networks, such as the Internet,
there is no authority possible to enforce a centralized traffic management.
In such situations, Game Theory and the concepts of Nash equilibria
and Congestion Games [8] are a suitable framework for analyzing
the equilibrium effects of selfish routes selection to network delays.
We focus here on layered networks where selfish users select paths to
route their loads (represented by arbitrary integer weights). We assume
that individual link delays are equal to the total load of the link. We
focus on the algorithm suggested in [2], i.e. a potential-based method
for finding pure Nash equilibria (PNE) in such networks. A superficial
analysis of this algorithm gives an upper bound on its time which is
polynomial in n (the number of users) and the sum of their weights. This
bound can be exponential in n when some weights are superpolynomial.
We provide strong experimental evidence that this algorithm actually
converges to a PNE in strongly polynomial time in n (independent of the
weights values). In addition we propose an initial allocation of users
to paths that dramatically accelerates this algorithm, compared to an
arbitrary initial allocation. A by-product of our research is the discovery
of a weighted potential function when link delays are exponential in their
loads. This asserts the existence of PNE for these delay functions and
extends the result of
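The potential-based method discussed above can be illustrated with a minimal sketch (not the paper's exact algorithm from [2]): weighted users on identical parallel links, each link's delay equal to its total load, and best-response moves repeated until no user can improve. The function name and the small instance below are hypothetical.

```python
# Best-response dynamics for a weighted congestion game on parallel links.
# Each user's delay on a link is the link's total load (including itself).
# Convergence to a pure Nash equilibrium is guaranteed by a Rosenthal-style
# potential argument; the number of rounds observed is typically small.

def best_response_dynamics(weights, num_links):
    """Repeatedly move any user to a link minimising its delay until no
    user can improve; the resulting assignment is a pure Nash equilibrium."""
    assign = [0] * len(weights)      # initial allocation: everyone on link 0
    load = [0.0] * num_links
    load[0] = sum(weights)
    improved = True
    while improved:
        improved = False
        for i, w in enumerate(weights):
            cur = assign[i]
            # delay user i would experience on link l
            best = min(range(num_links),
                       key=lambda l: load[l] + (w if l != cur else 0))
            if load[best] + (w if best != cur else 0) < load[cur]:
                load[cur] -= w
                load[best] += w
                assign[i] = best
                improved = True
    return assign, load
```

A better-chosen initial allocation (e.g. sorted greedy instead of all users on one link) is exactly the kind of modification the abstract reports as dramatically accelerating convergence.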
Abstract: We present three new coordination mechanisms for scheduling n selfish jobs on m unrelated machines. A coordination mechanism aims to mitigate the impact of selfishness of jobs on the efficiency of schedules by defining a local scheduling policy on each machine. The scheduling policies induce a game among the jobs and each job prefers to be scheduled on a machine so that its completion time is minimum given the assignments of the other jobs. We consider the maximum completion time among all jobs as the measure of the efficiency of schedules. The approximation ratio of a coordination mechanism quantifies the efficiency of pure Nash equilibria (price of anarchy) of the induced game. Our mechanisms are deterministic, local, and preemptive in the sense that the scheduling policy does not necessarily process the jobs in an uninterrupted way and may introduce some idle time. Our first coordination mechanism has approximation ratio O(log m) and always guarantees that the induced game has pure Nash equilibria to which the system converges in at most n rounds. This result improves a recent bound of O(log^2 m) due to Azar, Jain, and Mirrokni and, similarly to their mechanism, our mechanism uses a global ordering of the jobs according to their distinct IDs. Next we study the intriguing scenario where jobs are anonymous, i.e., they have no IDs. In this case, coordination mechanisms can only distinguish between jobs that have different load characteristics. Our second mechanism handles anonymous jobs and has approximation ratio O(log m / log log m), although the game induced is not a potential game and, hence, the existence of pure Nash equilibria is not guaranteed by potential function arguments. However, it provides evidence that the known lower bounds for non-preemptive coordination mechanisms could be beaten using preemptive scheduling policies. Our third coordination mechanism also handles anonymous jobs and has a nice "cost-revealing" potential function. Besides proving the existence of equilibria, we use this potential function in order to upper-bound the price of stability of the induced game by O(log m), the price of anarchy by O(log^2 m), and the convergence time to O(log^2 m)-approximate assignments by a polynomial number of best-response moves. Our third coordination mechanism is the first that handles anonymous jobs and simultaneously guarantees that the induced game is a potential game and has bounded price of anarchy.
Abstract: In this thesis we investigate the problems of data routing and data collection in wireless sensor networks characterised by intense and highly diverse mobility. We propose a set of protocols that exploits the motion of the sensors in order to inform the sink about the network topology. We experimentally evaluate these protocols in a wide range of topologies, including both homogeneous and heterogeneous ones.
We also investigate random walks as simple motion strategies for mobile sinks that perform data collection from static WSNs. We propose three new random walks that improve latency compared to already known ones, as well as a new metric called Proximity Variation. This metric captures the different way each random walk traverses the network area.
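The simplest baseline behind such sink mobility strategies can be sketched as follows. This is an illustrative toy, not one of the thesis' three proposed walks: a uniform random walk on a hypothetical grid-deployed WSN, with latency proxied by the cover time (steps until every sensor has been visited at least once).

```python
# Simple random walk of a mobile sink over an n x n grid of sensors.
# At each step the sink moves to a uniformly random grid neighbour and
# collects data from the sensor it lands on.
import random

def random_walk_cover_time(n, seed=0):
    """Return the number of steps until every one of the n*n sensors
    has been visited (i.e., its data collected) at least once."""
    rng = random.Random(seed)
    pos = (0, 0)
    visited = {pos}
    steps = 0
    while len(visited) < n * n:
        x, y = pos
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        pos = rng.choice(nbrs)
        visited.add(pos)
        steps += 1
    return steps
```

Walks biased toward rarely-visited regions reduce this cover time, which is the intuition a metric like Proximity Variation is meant to capture.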
Abstract: We examine a task scheduling and data migration problem for grid networks, which we refer to as the Data Consolidation (DC) problem. DC arises when a task concurrently requests multiple pieces of data, possibly scattered throughout the grid network, that have to be present at a selected site before the task's execution starts. In such a case, the scheduler and the data manager must select (i) the data replicas to be used, (ii) the site where these data will be gathered for the task to be executed, and (iii) the routing paths to be followed; this is assuming that the selected datasets are transferred concurrently to the execution site. The algorithms or policies for selecting the data replicas, the data consolidating site and the corresponding paths comprise a Data Consolidation scheme. We propose and experimentally evaluate several DC schemes of polynomial number of operations that attempt to estimate the cost of the concurrent data transfers, to avoid congestion that may appear due to these transfers and to provide fault tolerance. Our simulation results strengthen our belief that DC is an important problem that needs to be addressed in the design of data grids, and can lead, if performed efficiently, to significant benefits in terms of task delay, network load and other performance parameters.
Abstract: Data collection is usually performed in wireless sensor networks by the sensors
relaying data towards a static control center (sink). Motivated by important
applications (mostly related to ambient intelligence and remote monitoring)
and as a first step towards introducing mobility, we propose the basic
idea of having a sink moving in the network area and collecting
data from sensors. We propose four characteristic mobility patterns
for the sink along with different data collection strategies. Through a
detailed simulation study, we evaluate several important performance properties of
each approach. Our findings demonstrate that by taking advantage
of the sink's mobility and shifting work from sensors to the powerful sink,
we can significantly reduce the energy spent in relaying traffic and thus greatly
extend the lifetime of the network.
Abstract: Through recent technology advances in the field of wireless energy transmission, Wireless Rechargeable Sensor Networks (WRSN) have emerged. In this new paradigm for WSNs a mobile entity called Mobile Charger (MC) traverses the network and replenishes the dissipated energy of sensors. In this work we first provide a formal definition of the charging dispatch decision problem and prove its computational hardness. We then investigate how to optimize the trade-offs of several critical aspects of the charging process such as a) the trajectory of the charger, b) the different charging policies and c) the impact of the ratio of the energy the MC may deliver to the sensors over the total available energy in the network. In the light of these optimizations, we then study the impact of the charging process on the network lifetime for three characteristic underlying routing protocols: a greedy protocol, a clustering protocol and an energy balancing protocol. Finally, we propose a Mobile Charging Protocol that locally adapts the circular trajectory of the MC to the energy dissipation rate of each sub-region of the network. We compare this protocol against several MC trajectories for all three routing families by a detailed experimental evaluation. The derived findings demonstrate significant performance gains, both with respect to the no charger case as well as the different charging alternatives; in particular, the performance improvements include the network lifetime, as well as connectivity, coverage and energy balance properties.
Abstract: In many cryptographic applications it is
necessary to generate elliptic curves (ECs) whose order
possesses certain properties. The method that is usually
employed for the generation of such ECs is the
so-called Complex Multiplication method. This method
requires the use of the roots of certain class field polynomials
defined on a specific parameter called the discriminant.
The most commonly used polynomials are
the Hilbert and Weber ones. The former can be used
to generate directly the EC, but they are characterized
by high computational demands. The latter have usually
much lower computational requirements, but they
do not directly construct the desired EC. This can be
achieved if transformations of their roots to the roots of the corresponding (generated by the same discriminant)
Hilbert polynomials are provided. In this paper we present a variant of the Complex Multiplication method that generates ECs of cryptographically strong order. Our variant is based on the computation of Weber polynomials. We present in a simple and unifying manner a complete set of transformations of the roots of a Weber polynomial to the roots of its corresponding Hilbert polynomial for all values of the discriminant. In addition, we prove a theoretical estimate of the precision required for the computation of Weber polynomials for all values of the discriminant. We present an extensive experimental assessment of the computational efficiency of the Hilbert and Weber polynomials along with their precision requirements for various discriminant values and we compare them with the theoretical estimates. We further investigate the time efficiency of the new Complex Multiplication variant under different implementations of a crucial step of the variant. Our results can serve as useful guidelines to potential implementers of EC cryptosystems involving generation of ECs of a desirable order on resource limited hardware devices or in systems operating under strict timing response constraints.
Abstract: We call radiation at a point of a wireless network the total amount of electromagnetic quantity (energy or power density) the point is exposed to. The impact of radiation can be high and we believe it is worth studying and controlling; towards radiation aware wireless networking we take (for the first time in the study of this aspect) a distributed computing, algorithmic approach. We exemplify this line of research by focusing on sensor networks, studying the minimum radiation path problem of finding the lowest radiation trajectory of a person moving from a source to a destination point in the network region. For this problem, we sketch the main ideas behind a linear program that can provide a tight approximation of the optimal solution, and then we discuss three heuristics that can lead to low radiation paths. We also plan to investigate the impact of diverse node mobility on the heuristics' performance.
Abstract: The Time Dependent Team Orienteering Problem with Time Windows (TDTOPTW) can be used to model several real life problems. Among them, the route planning problem for tourists interested in visiting multiple points of interest (POIs) using public transport. The main objective of this problem is to select POIs that match tourist preferences, while taking into account a multitude of parameters and constraints and respecting the time available for sightseeing in a daily basis. TDTOPTW is NP-hard while almost the whole body of the related literature addresses the non time dependent version of the problem. The only TDTOPTW heuristic proposed so far is based on the assumption of periodic service schedules. Herein, we propose two efficient cluster-based heuristics for the TDTOPTW which yield high quality solutions, take into account time dependency in calculating travel times between POIs and make no assumption on periodic service schedules. The validation scenario for our prototyped algorithms included the metropolitan transit network and real POI sets compiled from Athens (Greece).
Abstract: Intuitively, Braess's paradox states that destroying a part of a network may improve the common latency of selfish flows at Nash equilibrium. Such a paradox is a pervasive phenomenon in real-world networks. Any administrator, who wants to improve equilibrium delays in selfish networks, is facing some basic questions: (i) Is the network paradox-ridden? (ii) How can we delete some edges to optimize equilibrium flow delays? (iii) How can we modify edge latencies to optimize equilibrium flow delays?
Unfortunately, such questions lead to NP-hard problems in general. In this work, we impose some natural restrictions on our networks, e.g. we assume strictly increasing linear latencies. Our target is to formulate efficient algorithms for the three questions above. We manage to provide:
– A polynomial-time algorithm that decides if a network is paradox-ridden, when latencies are linear and strictly increasing.
– A reduction of the problem of deciding if a network with arbitrary linear latencies is paradox-ridden to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the edges with constant latency.
– An algorithm for finding a subnetwork that is almost optimal wrt equilibrium latency. Our algorithm is subexponential when the number of paths is polynomial and each path is of polylogarithmic length.
– A polynomial-time algorithm for the problem of finding the best subnetwork, which outperforms any known approximation algorithm for the case of strictly increasing linear latencies.
– A polynomial-time method that turns the optimal flow into a Nash flow by deleting the edges not used by the optimal flow, and performing minimal modifications to the latencies of the remaining ones.
Our results provide a deeper understanding of the computational complexity of recognizing the most severe manifestations of Braess's paradox, and our techniques show novel ways of using the probabilistic method and of exploiting convex separable quadratic programs.
Abstract: Intuitively, Braess’s paradox states that destroying a part of a network may improve the common latency of selfish flows at Nash equilibrium. Such a paradox is a pervasive phenomenon in real-world networks. Any administrator who wants to improve equilibrium delays in selfish networks, is facing some basic questions:
– Is the network paradox-ridden?
– How can we delete some edges to optimize equilibrium flow delays?
– How can we modify edge latencies to optimize equilibrium flow delays?
Unfortunately, such questions lead to NP-hard problems in general. In this work, we impose some natural restrictions on our networks, e.g. we assume strictly increasing linear latencies. Our target is to formulate efficient algorithms for the three questions above. We manage to provide:
– A polynomial-time algorithm that decides if a network is paradox-ridden, when latencies are linear and strictly increasing.
– A reduction of the problem of deciding if a network with (arbitrary) linear latencies is paradox-ridden to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the edges with constant latency.
– An algorithm for finding a subnetwork that is almost optimal wrt equilibrium latency. Our algorithm is subexponential when the number of paths is polynomial and each path is of polylogarithmic length.
– A polynomial-time algorithm for the problem of finding the best subnetwork which outperforms any known approximation for the case of strictly increasing linear latencies.
– A polynomial-time method that turns the optimal flow into a Nash flow by deleting the edges not used by the optimal flow, and performing minimal modifications on the latencies of the remaining ones.
Our results provide a deeper understanding of the computational complexity of recognizing the most severe manifestations of Braess’s paradox, and our techniques show novel ways of using the probabilistic method and of exploiting convex separable quadratic programs.
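The paradox itself is easy to verify on the classical four-node textbook instance, sketched below (edge names are the standard ones, not from the paper): one unit of selfish flow from s to t over paths s-v-t (latency x + 1) and s-w-t (latency 1 + x), with an optional zero-latency shortcut v-w.

```python
# Worked example of Braess's paradox with linear latencies and one unit
# of flow. Without the shortcut the flow splits evenly; with it, the path
# s-v-w-t dominates every unilateral deviation, so all flow takes it and
# the common equilibrium latency *increases* from 1.5 to 2.

def equilibrium_latency(with_shortcut):
    """Common latency at Wardrop equilibrium for the classic network."""
    if not with_shortcut:
        # By symmetry the flow splits 1/2 : 1/2; each path costs 0.5 + 1.
        return 0.5 + 1.0
    # All flow uses s-v-w-t: latency x + 0 + x with x = 1 on both ends.
    return 1.0 + 0.0 + 1.0

print(equilibrium_latency(False))  # 1.5
print(equilibrium_latency(True))   # 2.0 -- worse after *adding* an edge
```

Deleting the shortcut here is exactly the edge-removal question (ii) above: this tiny network is paradox-ridden.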
Abstract: We consider two approaches that model timetable information in public transportation systems
as shortest-path problems in weighted graphs. In the time-expanded approach, every event at
a station, e.g., the departure of a train, is modeled as a node in the graph, while in the time-dependent
approach the graph contains only one node per station. Both approaches have been
recently considered for (a simplified version of) the earliest arrival problem, but little is known
about their relative performance. Thus far, there are only theoretical arguments in favor of the
time-dependent approach. In this paper, we provide the first extensive experimental comparison of
the two approaches. Using several real-world data sets, we evaluate the performance of the basic
models and of several new extensions towards realistic modeling. Furthermore, new insights on
solving bicriteria optimization problems in both models are presented. The time-expanded approach
turns out to be more robust for modeling more complex scenarios, whereas the time-dependent
approach shows a clearly better performance.
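The time-dependent approach admits a compact shortest-path routine: one node per station and edge costs that depend on the arrival time at the tail. The following is a minimal sketch under a FIFO (non-overtaking) assumption; station names and the tiny timetable are hypothetical.

```python
# Time-dependent earliest-arrival search: Dijkstra where relaxing an edge
# (u, v) only considers departures from u not earlier than the current
# arrival time at u.
import heapq

def earliest_arrival(timetable, source, target, start):
    """timetable[u] -> list of (v, departure_time, travel_time).
    Returns the earliest possible arrival time at target."""
    dist = {source: start}
    pq = [(start, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, dep, travel in timetable.get(u, ()):
            if dep >= t and dep + travel < dist.get(v, float("inf")):
                dist[v] = dep + travel
                heapq.heappush(pq, (dist[v], v))
    return dist.get(target, float("inf"))
```

The time-expanded alternative would instead create one graph node per (station, event) pair, which is larger but, as the abstract notes, more robust for complex scenarios such as transfer rules.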
Abstract: In this paper we consider communication issues arising in mobile networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of available frequencies is limited; thus, efficient solutions to the frequency allocation and the call control problem are essential. In the frequency allocation problem, given users that wish to communicate, the objective is to minimize the required spectrum of frequencies so that communication can be established without signal interference. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users served. We consider cellular, planar, and arbitrary network topologies. In particular, we study the on-line version of both problems using competitive analysis. For frequency allocation in cellular networks, we improve the best known competitive ratio upper bound of 3 achieved by the folklore Fixed Allocation algorithm, by presenting an almost tight competitive analysis for the greedy algorithm; we prove that its competitive ratio is between 2.429 and 2.5. For the call control problem, we present the first randomized algorithm that beats the deterministic lower bound of 3 achieving a competitive ratio of 2.934 in cellular networks. Our analysis has interesting extensions to arbitrary networks. Also, using Yao's Minimax Principle, we prove two lower bounds of 1.857 and 2.086 on the competitive ratio of randomized call control algorithms for cellular and arbitrary planar networks, respectively.
Abstract: In this paper we consider communication issues arising in cellular (mobile) networks that utilize frequency division multiplexing (FDM) technology. In such networks, many users within the same geographical region can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of available frequencies is limited; thus, efficient solutions to the frequency-allocation and the call-control problems are essential. In the frequency-allocation problem, given users that wish to communicate, the objective is to minimize the required spectrum of frequencies so that communication can be established without signal interference. The objective of the call-control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users served. We consider cellular, planar, and arbitrary network topologies.
In particular, we study the on-line version of both problems using competitive analysis. For frequency allocation in cellular networks, we improve the best known competitive ratio upper bound of 3 achieved by the folklore Fixed Allocation algorithm, by presenting an almost tight competitive analysis for the greedy algorithm; we prove that its competitive ratio is between 2.429 and 2.5 . For the call-control problem, we present the first randomized algorithm that beats the deterministic lower bound of 3 achieving a competitive ratio between 2.469 and 2.651 for cellular networks. Our analysis has interesting extensions to arbitrary networks. Also, using Yao's Minimax Principle, we prove two lower bounds of 1.857 and 2.086 on the competitive ratio of randomized call-control algorithms for cellular and arbitrary planar networks, respectively.
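The greedy algorithm analysed above can be sketched in a few lines: each arriving call is assigned the smallest frequency not already in use in its own cell or in any interfering cell. The interference graph below is a hypothetical three-cell row, not a full hexagonal cellular grid, and the function names are illustrative.

```python
# On-line greedy frequency allocation: first-fit colouring of calls
# against the cell interference graph, processing calls in arrival order.

def greedy_assign(calls, adjacent):
    """calls: sequence of cell ids in arrival order.
    adjacent: dict cell -> set of interfering cells.
    Returns the frequency (1, 2, ...) assigned to each call."""
    in_use = {}  # cell -> set of frequencies currently used there
    out = []
    for cell in calls:
        blocked = set(in_use.get(cell, set()))
        for nb in adjacent.get(cell, ()):
            blocked |= in_use.get(nb, set())
        f = 1
        while f in blocked:  # first-fit: smallest non-blocked frequency
            f += 1
        in_use.setdefault(cell, set()).add(f)
        out.append(f)
    return out
```

The competitive analysis in the abstract bounds how much spectrum this first-fit rule uses relative to the optimal offline assignment on cellular topologies.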
Abstract: We consider the problem of computing minimum congestion, fault-tolerant, redundant assignments of messages to faulty parallel delivery channels. In particular, we are given a set M of faulty channels, each having an integer capacity c_i and failing independently with probability f_i. We are also given a set of messages to be delivered over M, and a fault-tolerance constraint (1 − δ), and we seek a redundant assignment σ that minimizes congestion Cong(σ), i.e. the maximum channel load, subject to the constraint that, with probability no less than (1 − δ), all the messages have a copy on at least one active channel. We present a 4-approximation algorithm for identical capacity channels and arbitrary message sizes, and a 2⌈ln(|M|/δ)/ln(1/f_max)⌉-approximation algorithm for related capacity channels and unit size messages.
Both algorithms are based on computing a collection of disjoint channel subsets such that, with probability no less than (1 − δ), at least one channel is active in each subset. The objective is to maximize the sum of the minimum subset capacities. Since the exact version of this problem is NP-complete, we present a 2-approximation algorithm for identical capacities, and a (8 + o(1))-approximation algorithm for arbitrary capacities.
Abstract: We present a set of three new time-dependent models with increasing flexibility for realistic route planning in flight networks. By these means, we obtain small graph sizes while modeling airport procedures in a realistic way. With these graphs, we are able to efficiently compute a set of best connections with multiple criteria over a full day. It even turns out that due to the very limited graph sizes it is feasible to precompute full distance tables between all airports. As a result, best connections can be retrieved in a few microseconds on real world data.
Abstract: We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective, i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps.
Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves a Θ(log n) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios.
Abstract: We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps.
Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves a Θ(log n) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios.
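The covering abstraction above can be sketched with the standard greedy rule for set multicover, which gives the usual O(log n) approximation guarantee: repeatedly deploy the sensor set covering the most still-deficient field points until every point is covered at least three times. The universe and sets below are hypothetical, and this is an illustration of the abstraction, not the paper's randomized method.

```python
# Greedy set multicover: choose sets until every element of `universe`
# appears in at least k chosen sets (k = 3 for triangulation-style
# localization). Each set can be deployed at most once.

def greedy_multicover(universe, sets, k=3):
    """Return indices of chosen sets; raises if the instance is infeasible."""
    need = {e: k for e in universe}   # remaining coverage deficit per element
    chosen = []
    avail = dict(enumerate(sets))
    while any(v > 0 for v in need.values()):
        # pick the available set covering the most still-deficient elements
        i, s = max(avail.items(),
                   key=lambda kv: sum(1 for e in kv[1] if need.get(e, 0) > 0))
        if sum(1 for e in s if need.get(e, 0) > 0) == 0:
            raise ValueError("instance is not k-coverable")
        for e in s:
            if need.get(e, 0) > 0:
                need[e] -= 1
        chosen.append(i)
        del avail[i]
    return chosen
```

Here each "set" stands for the region a candidate sensor placement would cover; the deployment cost minimized is the number of chosen sensors.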
Abstract: We study the important problem of tracking moving
targets in wireless sensor networks. We try to overcome the
limitations of standard state of the art tracking methods based on
continuous location tracking, i.e. the high energy dissipation and
communication overhead imposed by the active participation of
sensors in the tracking process and the low scalability, especially
in sparse networks. Instead, our approach uses sensors in a
passive way: they only record and judiciously spread information
about observed target presence in their vicinity; this information
is then used by the (powerful) tracking agent to locate the target
by just following the traces left at sensors. Our protocol is greedy,
local, distributed, energy efficient and very successful, in the
sense that (as shown by extensive simulations) the tracking agent
manages to quickly locate and follow the target; also, we achieve
good trade-offs between the energy dissipation and latency.
Abstract: We investigate the problem of efficient wireless energy recharging in Wireless Rechargeable Sensor Networks (WRSNs). In such networks special mobile entities (called the Mobile Chargers) traverse the network and wirelessly replenish the energy of sensor nodes. In contrast to most current approaches, we envision methods that are distributed and use limited network information. We propose four new protocols for efficient recharging, addressing key issues which we identify, most notably (i) what are good coordination procedures for the Mobile Chargers and (ii) what are good trajectories for the Mobile Chargers. Two of our protocols (DC, DCLK) perform distributed, limited network knowledge coordination and charging, while two others (CC, CCGK) perform centralized, global network knowledge coordination and charging. As detailed simulations demonstrate, one of our distributed protocols outperforms a known state of the art method, while its performance gets quite close to the performance of the powerful centralized global knowledge method.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM)
has recently been proposed as a modulation technique for optical networks, because of its good spectral efficiency, flexibility, and tolerance to impairments. We consider the planning problem of an OFDM optical network, where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers and choosing an appropriate modulation level, taking into account the transmission distance. We introduce the Routing, Modulation Level and Spectrum Allocation (RMLSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, prove that it is also NP-complete and present various algorithms to solve it. We start by presenting an optimal ILP RMLSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RMLSA into its two constituent subproblems, namely, (i) routing and modulation level, and (ii) spectrum allocation (RML+SA), and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all the connections in the traffic matrix. In the sequential algorithm, we investigate two policies for defining the order in which connections are considered. We also use a simulated annealing meta-heuristic to obtain even better orderings. We examine the performance of the proposed algorithms through simulation experiments and evaluate the spectrum utilization benefits that can be obtained by utilizing OFDM elastic bandwidth allocation, when compared to a traditional WDM network.
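The spectrum-allocation step of such a sequential, one-by-one heuristic can be sketched as a first-fit rule: each connection gets the lowest-indexed contiguous block of subcarriers that is free on every link of its path, respecting the spectrum continuity and contiguity constraints of elastic OFDM allocation. This is an illustrative sketch, not the paper's algorithm; link names and demands are hypothetical, and routing/modulation-level selection is assumed already done.

```python
# First-fit contiguous spectrum allocation along precomputed paths.

def first_fit_spectrum(demands, total_subcarriers):
    """demands: list of (path_links, num_subcarriers).
    Returns the starting subcarrier index chosen for each demand."""
    used = {}  # link -> set of occupied subcarrier indices
    starts = []
    for links, width in demands:
        placed = False
        for s in range(total_subcarriers - width + 1):
            block = range(s, s + width)
            # continuity: the same block must be free on *every* link
            if all(i not in used.get(l, set()) for l in links for i in block):
                for l in links:
                    used.setdefault(l, set()).update(block)
                starts.append(s)
                placed = True
                break
        if not placed:
            raise ValueError("no contiguous block fits this demand")
    return starts
```

The order in which demands are fed to this routine matters, which is why the abstract investigates ordering policies and a simulated annealing meta-heuristic over orderings.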
Abstract: We describe the design and implementation of a secure and robust protocol and system for a national electronic lottery. Electronic lotteries at a national level are a viable cost effective alternative to mechanical ones when there is a business need to support many types of "games of chance" and to allow increased drawing frequency. Electronic lotteries are, in fact, an extremely high-risk financial application: if one discovers a way to predict or otherwise claim the winning numbers (even once) the result is huge financial damages. Moreover, the e-lottery process is complex, which increases the possibility of fraud or costly accidental failures. In addition, a national lottery must adhere to auditability and (regulatory) fairness requirements regarding its drawings. Our mechanism, which we believe is the first one of its kind to be described in the literature, builds upon a number of cryptographic primitives that ensure the unpredictability of the winning numbers, the prevention of their premature leakage and the prevention of fraud. We also provide measures for auditability, fairness, and trustworthiness of the process. Besides cryptography, we incorporate security mechanisms that eliminate various risks along the entire process. Our system, which was commissioned by a national organization, was implemented in the field and has been operational and active for a while now.
Abstract: As the Internet of Things (IoT) arises, the use of low-end devices on a daily basis increases. The wireless nature of communication that these devices provide raises security and privacy issues. For protecting a user's privacy, cryptography offers the tool of zero knowledge proofs (ZKP). In this paper, we study well-established ZKP protocols based on the discrete logarithm problem and we adapt them to the Elliptic Curve Cryptography (ECC) setting, which constitutes an ideal candidate for embedded implementations. Then, we implement the proposed protocols on Wiselib, a generic and open source algorithmic library. For the first time, we present a thorough evaluation of the protocols on two popular hardware platforms equipped with low-end microcontrollers (Jennic JN5139, TI MSP430) and 802.15.4 RF transceivers, in terms of code size, execution time, message size and energy requirements. This work's results can be used by developers who wish to achieve certain levels of privacy in their applications.
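The canonical discrete-logarithm ZKP that such work adapts to elliptic curves is the Schnorr identification protocol. A minimal sketch follows; for brevity it runs in a multiplicative subgroup of Z_p* rather than an EC group, and the parameters p, q, g are tiny illustrative values, nowhere near cryptographic strength.

```python
# Schnorr identification: prover convinces verifier it knows x with
# y = g^x, without revealing x.
import random

# Toy group: g = 2 has prime order q = 11 in Z_23* (11 divides 23 - 1).
p, q, g = 23, 11, 2

def schnorr_prove(rng):
    """Commitment phase: pick a random nonce r, send a = g^r mod p."""
    r = rng.randrange(1, q)
    return r, pow(g, r, p)

def schnorr_respond(x, r, c):
    """Response phase: z = r + c*x mod q (r blinds the secret x)."""
    return (r + c * x) % q

def schnorr_verify(y, a, c, z):
    """Verifier accepts iff g^z == a * y^c (mod p)."""
    return pow(g, z, p) == (a * pow(y, c, p)) % p
```

In the ECC setting the exponentiations become scalar point multiplications, which is what makes the protocol attractive on the constrained microcontrollers evaluated in the paper.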
Abstract: One of the most important applications of wireless sensor networks is building monitoring and, more specifically, the early detection of emergency events and the provision of guidance for safe evacuation of the building. In this paper, we describe a demo application that, in the event of a fire inside a monitored building, uses the information from the deployed sensor network in order to find the shortest safest path away from the emergency and provides navigation guidance to the occupants (modelled by a mobile robot), in order to safely evacuate the building. For this demo, we developed our own ad-hoc robot-sensor interconnection using expansion connectors and programming in a low-level language.
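A "shortest safest path" can be sketched as Dijkstra's algorithm over a corridor graph whose edge costs blend physical length with sensed hazard. The graph layout and the hazard weighting below are assumptions for illustration, not the demo's actual data model:

```python
import heapq

def safest_shortest_path(graph, hazard, src, exit_node):
    """Dijkstra where each edge's cost is its length inflated by the
    hazard level at the edge's far end (one illustrative weighting)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == exit_node:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length in graph.get(u, []):
            nd = d + length * (1.0 + hazard.get(v, 0.0))
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [exit_node], exit_node
    while node != src:              # walk the predecessor chain back
        node = prev[node]
        path.append(node)
    return path[::-1]

# A fire near room B makes the route through C preferable despite equal length.
rooms = {"A": [("B", 1), ("C", 1)], "B": [("Exit", 1)], "C": [("Exit", 1)]}
print(safest_shortest_path(rooms, {"B": 5.0}, "A", "Exit"))  # → ['A', 'C', 'Exit']
```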
Abstract: Modern Wireless Sensor Networks offer an easy, low-cost and reliable alternative to the back-end for monitoring and controlling large geographical areas such as buildings and industries. We present the design and implementation details of an open and efficient prototype system as a solution for a low-cost BMS that comprises heterogeneous, small-factor wireless devices. Placing it in the context of the Internet of Things, we arrive at a solution that can cooperate with other systems installed on the same site to lower power consumption and costs, as well as benefit the humans that use its services in a transparent way. We evaluate and assess key aspects of the performance of our prototype. Our findings indicate specific approaches to reduce operation costs and allow the development of open applications.
Abstract: Urban ecosystems are becoming one of the most attractive scenarios for innovating new services and technologies. In parallel, city managers, urban utilities and other stakeholders are fostering the intensive use of advanced technologies aimed at improving present city performance and its sustainability. The deployment of such technology entails the generation of massive amounts of information which in many cases may become useful for other services and applications. Hence, aiming at taking advantage of such massive amounts of information and deployed technology, as well as breaking the potential digital barrier that can be raised, some easy-to-use tools have to be made available to the urban stakeholders. These tools, integrated in a platform operated directly or indirectly by the city, provide a singular opportunity for exploiting the concept of the connected city, fostering innovation in all city dimensions, making the co-creation concept a reality and eventually impacting government policies.
Abstract: This work presents and describes the design and implementation of a system for the cooperative multiplayer control of gaming and entertainment-related software, based on the use of mobile devices with wireless networking capabilities. We are currently using wireless sensor networking devices as the enabling platform, and our prototype application is based on Google Earth's integrated flight simulator.
Abstract: For a place that gathers millions of people, the Web seems pretty lonely at times. This is mainly due to the current predominant browsing scenario: that of an individual participating in an autonomous surfing session. We believe that people should be seen as an integral part of the browsing and searching activity, towards a concept known as social navigation. In this work, we extend the typical web browser's functionality so as to raise awareness of other people having similar web surfing goals at the current moment. We further present features and algorithms that facilitate online communication and collaboration towards common searching targets. The utility of our system is established by experimental studies. The extensions we present can be easily adopted in a typical web browser.
Abstract: Few IoT systems monitoring energy consumption in buildings have focused on the educational community. IoT in the educational domain can jump-start a process of sustainability awareness and behavioral change towards energy savings, as well as provide tangible financial savings. We present a real-world multi-site IoT deployment, comprising 19 school buildings, aiming at enabling IoT-based energy awareness and sustainability lectures, promoting energy-saving behaviors supported by IoT data. We discuss scenarios where IoT-enabled applications are integrated into school life, providing an engaging and hands-on approach, based on real data, generating value in terms of educational and energy savings outcomes. We also present a set of first results, based on the analysis of school-building data, which highlight potential ways to identify irregularities and inefficiencies.
Abstract: In this work we study the problem of scheduling tasks with dependencies in multiprocessor architectures where processors have different speeds.
We present the preemptive algorithm "Save-Energy", which, given a schedule of tasks, post-processes it to improve the energy efficiency without any deterioration of the makespan. In terms of time efficiency, we show that preemptive scheduling in an asymmetric system can achieve the same or better optimal makespan than in a symmetric system. Motivated by real multiprocessor systems, we investigate architectures that exhibit limited asymmetry: there are two essentially different speeds. Interestingly, this special case has not been studied in the field of parallel computing and scheduling theory; only the general case, where processors have K essentially different speeds, has been studied. We present the non-preemptive algorithm "Remnants", which achieves almost optimal makespan. We provide a refined analysis of a recent scheduling method. Based on this analysis, we specialize the scheduling policy and provide an algorithm with a (3 + o(1)) expected approximation factor. Note that this improves the previous best factor (6 for two speeds). We believe that our work will convince researchers to revisit this well-studied scheduling problem for these simple, yet realistic, asymmetric multiprocessor architectures.
Abstract: We study the problem of energy-balanced data propagation in wireless sensor networks. The energy balance property guarantees that the average per-sensor energy dissipation is the same for all sensors in the network during the entire execution of the data propagation protocol. This property is important since it prolongs the network's lifetime by avoiding early energy depletion of sensors.
We propose a new algorithm that in each step decides whether to propagate data one hop towards the final destination (the sink), or to send data directly to the sink. This randomized choice balances the (cheap) one-hop transmissions with the direct transmissions to the sink, which are more expensive but "bypass" the sensors lying close to the sink. Note that, in most protocols, these close-to-the-sink sensors tend to be overused and die out early.
By a detailed analysis we precisely estimate the probabilities for each propagation choice in order to guarantee energy balance. The needed estimation can easily be performed by current sensors using simple-to-obtain information. Under some assumptions, we also derive a closed form for these probabilities.
The fact (shown by our analysis) that direct (expensive) transmissions to the sink are needed only rarely shows that our protocol, besides being energy-balanced, is also energy efficient.
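The one-hop/direct randomized choice can be illustrated with a small simulation over ring "slices" around the sink. The quadratic direct-transmission cost model and the per-slice probabilities below are placeholder assumptions, not the closed-form probabilities derived in the paper:

```python
import random

def propagate(n_slices, p_direct, energy, events=10_000, seed=0):
    """Simulate the protocol on a network divided into `n_slices` ring
    slices around the sink. A message at slice i is either sent straight
    to the sink (cost grows quadratically with distance, an assumed
    model) or relayed one hop inwards (unit cost)."""
    rng = random.Random(seed)
    for _ in range(events):
        i = rng.randrange(1, n_slices + 1)     # event at a random slice
        while i > 0:
            if rng.random() < p_direct[i]:
                energy[i] += i * i             # expensive direct shot
                i = 0
            else:
                energy[i] += 1                 # cheap one-hop relay
                i -= 1
    return energy

# With no direct transmissions, slices nearer the sink relay every
# message and drain fastest -- exactly the imbalance the protocol avoids.
drained = propagate(3, {1: 0.0, 2: 0.0, 3: 0.0}, {1: 0, 2: 0, 3: 0})
print(drained[1] >= drained[2] >= drained[3])  # → True
```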
Abstract: The energy balance property (i.e., all nodes having the same energy throughout the network evolution) contributes significantly (along with energy efficiency) to the maximization of the network lifespan and network connectivity. The problem of achieving energy-balanced propagation in static networks is well studied and has attracted a lot of research attention.
Recent technological advances have enabled sensor devices to be attached to mobile entities of our everyday life (e.g., smartphones, cars, PDAs, etc.), thus introducing the formation of highly mobile sensor networks.
Inspired by the aforementioned applications, this work is (to the best of our knowledge) the first to study the energy balance property in wireless networks where the nodes are highly and dynamically mobile. In particular, we propose a new diverse mobility model which is easily parameterized, and we present a new protocol which tries to adaptively exploit the inherent node mobility in order to achieve energy balance in the network in an efficient way.
Abstract: Wireless Sensor Networks are comprised of a vast number of ultra-small, autonomous computing and communication devices, with restricted energy, that co-operate to accomplish a large sensing task. In this work: a) We propose extended versions of two data propagation protocols for such networks: the Sleep-Awake Probabilistic Forwarding Protocol (SW-PFR) and the Hierarchical Threshold sensitive Energy Efficient Network protocol (H-TEEN). These non-trivial extensions improve the performance of the original protocols, by introducing sleep-awake periods in the PFR protocol to save energy, and introducing a hierarchy of clustering in the TEEN protocol to better cope with large networks, b) We implemented the two protocols and performed an extensive simulation comparison of various important measures of their performance with a focus on energy consumption, c) We investigate in detail the relative advantages and disadvantages of each protocol, d) We discuss a possible hybrid combination of the two protocols towards optimizing certain goals.
Abstract: In this work we study energy-efficient routing strategies for wireless ad-hoc networks. In this kind of network, energy is a scarce resource and its conservation and efficient use is a major issue. Our strategy follows the multi-cost routing approach, according to which a cost vector of various parameters is assigned to each link. The parameters of interest are the number of hops on a path, and the residual energy and the transmission power of the nodes on the path. These parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. We evaluate the proposed routing algorithms in a number of scenarios, with respect to energy consumption, throughput and other performance parameters of interest. From the experiments conducted we conclude that routing algorithms that take into account energy-related parameters increase the lifetime of the network, while achieving better performance than other approaches, such as minimum-hop routing.
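The multi-cost idea can be sketched by enumerating candidate paths and scoring each by one possible combining function of its cost vector (hop count, maximum transmission power, minimum residual energy). The combining function and toy topology below are illustrative assumptions, not the paper's exact optimization functions:

```python
def all_simple_paths(graph, src, dst, prefix=None):
    """Yield every simple src->dst path (fine for small topologies)."""
    prefix = (prefix or []) + [src]
    if src == dst:
        yield prefix
        return
    for v in graph.get(src, []):
        if v not in prefix:
            yield from all_simple_paths(graph, v, dst, prefix)

def multicost_route(graph, tx_power, residual, src, dst):
    """Pick the path minimizing hops * max-tx-power / min-residual-energy,
    one monotonic combination of the per-path cost-vector entries."""
    def score(path):
        senders = path[:-1]                      # nodes that transmit
        hops = len(path) - 1
        pmax = max(tx_power[u] for u in senders)
        rmin = min(residual[u] for u in senders)
        return hops * pmax / rmin
    return min(all_simple_paths(graph, src, dst), key=score)

net = {"s": ["a", "b"], "a": ["d"], "b": ["d"]}
# Node b must transmit at high power, so the chosen route avoids it.
print(multicost_route(net, {"s": 1, "a": 1, "b": 10},
                      {"s": 5, "a": 5, "b": 5}, "s", "d"))  # → ['s', 'a', 'd']
```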
Abstract: In this work, we propose an energy-efficient multicasting algorithm for wireless networks for the case where the transmission powers of the nodes are fixed. Our algorithm is based on the multicost approach and selects an optimal energy-efficient set of nodes for multicasting, taking into account: i) the node residual energies, ii) the transmission powers used by the nodes, and iii) the set of nodes covered. Our algorithm is optimal, in the sense that it can optimize any desired function of the total power consumed by the multicasting task and the minimum of the current residual energies of the nodes, provided that the optimization function is monotonic in each of these parameters. Our optimal algorithm has non-polynomial complexity; thus, we propose a relaxation producing a near-optimal solution in polynomial time. The performance results obtained show that the proposed algorithms outperform established solutions for energy-aware multicasting, with respect to both energy consumption and network lifetime. Moreover, it is shown that the near-optimal multicost algorithm obtains most of the performance benefits of the optimal multicost algorithm at a smaller computational overhead.
Abstract: A crucial issue in wireless networks is to support efficiently communication patterns that are typical in traditional (wired) networks. These include broadcasting, multicasting, and gossiping (all-to-all communication). In this work we study such problems in static ad hoc networks. Since, in ad hoc networks, energy is a scarce resource, the important engineering question to be solved is to guarantee a desired communication pattern minimizing the total energy consumption. Motivated by this question, we study a series of wireless network design problems and present new approximation algorithms and inapproximability results.
Abstract: Much effort has been devoted in recent years to modelling public transport timetables in order to find optimal routes. The proposed models can be classified into two types: those representing the timetable as an array, and those representing it as a graph. The array-based models have been shown to be very effective in terms of query time, while the graph-based models usually answer queries by computing shortest paths, and hence they are suitable to be used in combination with speed-up techniques developed for road networks.
In this paper, we focus on the dynamic behavior of graph-based models, considering the case where transportation systems are subject to delays with respect to the given timetable. We make three contributions: (i) we give a simplified and optimized update routine for the well-known time-expanded model, along with an engineered query algorithm; (ii) we propose a new graph-based model tailored for handling dynamic updates; (iii) we assess the effectiveness of the proposed models and algorithms by an experimental study, which shows that both models require negligible update time and a query time which is comparable to that required by some array-based models.
Abstract: We consider classical linear-time planar separator algorithms, determining for a given planar graph a small subset of the nodes whose removal separates the graph into two components of similar size. These algorithms are based upon Planar Separator Theorems, which guarantee separators of size $O(\sqrt{n})$ and remaining components of size less than 2n/3. In this work, we present a comprehensive experimental study of the algorithms applied to a large variety of graphs, where the main goal is to find separators that not only satisfy upper bounds but also possess other desirable qualities with respect to separator size and component balance. We propose the usage of fundamental cycles, whose size is at most twice the diameter of the graph, as planar separators: for graphs of small diameter the guaranteed bound is better than the $O(\sqrt{n})$ bounds, and it turns out that this simple strategy almost always outperforms the other algorithms, even for graphs with large diameter.
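A fundamental cycle arises from a spanning tree plus one non-tree edge; with a BFS tree, its length is at most twice the tree depth plus one, which is what bounds it by roughly twice the diameter. A sketch of extracting such a cycle (adjacency-list representation assumed) is:

```python
from collections import deque

def bfs_tree(graph, root):
    """Parent pointers of a BFS (shortest-path) spanning tree."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def fundamental_cycle(parent, u, v):
    """Cycle induced by non-tree edge (u, v): the two tree paths up to
    the lowest common ancestor, joined by the edge itself."""
    def ancestors(x):
        path = []
        while x is not None:
            path.append(x)
            x = parent[x]
        return path
    au, av = ancestors(u), ancestors(v)
    seen = set(au)
    lca = next(x for x in av if x in seen)   # first common ancestor
    return au[:au.index(lca) + 1] + av[:av.index(lca)][::-1]

# 4-cycle 0-1-2-3-0: BFS tree from 0 plus non-tree edge (2, 3).
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(fundamental_cycle(bfs_tree(square, 0), 2, 3))  # → [2, 1, 0, 3]
```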
Abstract: The present work considers the following computational problem: given any finite game in normal form G and the corresponding infinitely repeated game G∞, determine in polynomial time (wrt the representation of G) a profile of strategies for the players in G∞ that is an equilibrium point wrt the limit-of-means payoff. The problem has been solved for two players [10], based mainly on the implementability of the threats for this case. Nevertheless, [4] demonstrated that the traditional notion of threats is a computationally hard problem for games with at least 3 players (see also [8]). Our results are the following: (i) We propose an alternative notion of correlated threats, which is polynomial-time computable (and therefore credible). Our correlated threats are also more severe than the traditional notion of threats, but not overwhelming for any individual player. (ii) When for the underlying game G there is a correlated strategy with payoff vector strictly larger than the correlated threats vector, we efficiently compute a polynomial-size (wrt the description of G) equilibrium point for G∞, for any constant number of players. (iii) Otherwise, we demonstrate the construction of an equilibrium point for an arbitrary number of players and up to 2 concurrently positive payoff coordinates in any payoff vector of G. This completely resolves the cases of 3 players, and provides a direction towards handling the cases of more than 3 players. It is mentioned that our construction is not a Nash equilibrium point, because the correlated threats we use are implemented via not only full synchrony (as in [10]), but also coordination of the other players' actions. But this seems to be a fair trade-off between efficiency of the construction and players' coordination, in particular because it only affects the punishments (which are anticipated never to be used).
Abstract: In this paper, we demonstrate the significant impact of (a) the mobility rate and (b) the user density on the performance of routing protocols in ad-hoc mobile networks. In particular, we study the effect of these parameters on two different approaches for designing routing protocols: (a) the route creation and maintenance approach and (b) the support approach that forces few hosts to move, acting as helpers for message delivery. We study one representative protocol for each approach, i.e. AODV for the first approach and RUNNERS for the second. We have implemented the two protocols and performed a large scale and detailed simulation study of their performance. The main findings are: the AODV protocol behaves well in networks of high user density and low mobility rate, while its performance drops for sparse networks of highly mobile users. On the other hand, the RUNNERS protocol seems to tolerate well (and in fact benefit from) high mobility rates and low densities.
Abstract: Evolutionary Game Theory is the study of strategic interactions among large populations of agents who base their decisions on simple, myopic rules. A major goal of the theory is to determine broad classes of decision procedures which both provide plausible descriptions of selfish behaviour and include appealing forms of aggregate behaviour. For example, properties such as the correlation between strategies' growth rates and payoffs, the connection between stationary states and the well-known game-theoretic notion of Nash equilibria, as well as global guarantees of convergence to equilibrium, are widely studied in the literature.
Our paper can be seen as a quick introduction to Evolutionary Game Theory, together with a new research result and a discussion of many algorithmic and complexity open problems in the area. In particular, we discuss some algorithmic and complexity aspects of the theory, which we prefer to view more as Game Theoretic Aspects of Evolution rather than as Evolutionary Game Theory, since the term "evolution" actually refers to strategic adaptation of individuals' behaviour through a dynamic process and not the traditional evolution of populations. We consider this dynamic process as a self-organization procedure which, under certain conditions, leads to some kind of stability and assures robustness against invasion. In particular, we concentrate on the notion of Evolutionary Stable Strategies (ESS). We demonstrate their qualitative difference from Nash Equilibria by showing that symmetric 2-person games with random payoffs have on average exponentially fewer ESS than Nash Equilibria. We conclude this article with some interesting areas of future research concerning the synergy of Evolutionary Game Theory and Algorithms.
Abstract: Evolutionary Game Theory is the study of strategic interactions among large populations of agents who base their decisions on simple, myopic rules. A major goal of the theory is to determine broad classes of decision procedures which both provide plausible descriptions of selfish behaviour and include appealing forms of aggregate behaviour. For example, properties such as the correlation between strategies' growth rates and payoffs, the connection between stationary states and Nash equilibria and global guarantees of convergence to equilibrium, are widely studied in the literature. In this paper we discuss some computational aspects of the theory, which we prefer to view more as Game Theoretic Aspects of Evolution than Evolutionary Game Theory, since the term "evolution" actually refers to strategic adaptation of individuals' behaviour through a dynamic process and not the traditional evolution of populations. We consider this dynamic process as a self-organization procedure, which under certain conditions leads to some kind of stability and assures robustness against invasion.
Abstract: We investigate important combinatorial and algorithmic properties of $G_{n, m, p}$ random intersection graphs. In particular, we prove that with high probability (a) random intersection graphs are expanders, (b) random walks on such graphs are ``rapidly mixing" (in particular they mix in logarithmic time) and (c) the cover time of random walks on such graphs is optimal (i.e. it is $\Theta(n \log{n})$). All results are proved for $p$ very close to the connectivity threshold and for the interesting, non-trivial range where random intersection graphs differ from classical $G_{n, p}$ random graphs.
Abstract: Raising awareness among young people, and especially students, on the relevance of behavior change for achieving energy savings is increasingly being considered as a key enabler towards long-term and cost-effective energy efficiency policies. However, the way to successfully apply educational interventions focused on such targets inside schools is still an open question. In this paper, we present our approach for enabling IoT-based energy savings and sustainability awareness lectures and promoting data-driven energy-saving behaviors focused on a high school audience. We present our experiences toward the successful application of sets of educational tools and software over a real-world Internet of Things (IoT) deployment. We discuss the use of gamification and competition as a very effective end-user engagement mechanism for school audiences. We also present the design of an IoT-based hands-on lab activity, integrated within a high school computer science curriculum utilizing IoT devices and data produced inside the school building, along with the Node-RED platform. We describe the tools used, the organization of the educational activities and related goals. We report on the experience carried out in both directions in a high school in Italy and conclude by discussing the results in terms of achieved energy savings within an observation period.
Abstract: Several networking technologies targeting the IoT application space currently compete within the smart city domain, both in outdoor and indoor deployments. However, until now, there is no clear winner, and results from real-world deployments have only recently started to surface. In this paper, we present a comparative study of two popular IoT networking technologies, LoRa and IEEE 802.15.4, within the context of a research-oriented IoT deployment inside school buildings in Europe, targeting energy efficiency in education. We evaluate the actual performance of these two technologies in real-world settings, presenting a comparative study on the effect of parameters like the built environment, network quality, or data rate. Our results indicate that both technologies have their advantages, and while in certain cases both are perfectly adequate, in our use case LoRa exhibits a more robust behavior. Moreover, LoRa's characteristics make it a very good choice for indoor IoT deployments such as in educational buildings, and especially in cases where there are low bandwidth requirements.
Abstract: Energy is a scarce resource in ad hoc wireless networks and it is of paramount importance to use it efficiently when establishing communication patterns. In this work we study algorithms for computing energy-efficient multicast trees in ad hoc wireless networks. Such algorithms either start with an empty solution which is gradually augmented to a multicast tree (augmentation algorithms) or take as input an initial multicast tree and "walk" on different multicast trees for a finite number of steps until some acceptable decrease in energy consumption is achieved (local search algorithms).
We mainly focus on augmentation algorithms, and in particular we have implemented a long list of existing such algorithms from the literature, as well as new ones. We experimentally compare all these algorithms on random geometric instances of the problem and obtain results in terms of the energy efficiency of the solutions obtained. Additional results concerning the running time of our implementations are also presented. We also explore how much the solutions obtained by augmentation algorithms can be improved by local search algorithms. Our results show that one of our new algorithms and its variations achieve the most energy-efficient solutions while being very fast. Our investigations shed some light on those properties of geometric instances of the problem which make augmentation algorithms perform well.
Abstract: An enhanced impairment-aware path computation element (EPCE) for dynamic transparent optical networks is proposed and experimentally evaluated. The obtained results show that, by using the EPCE, lightpath setup times of a few seconds are achieved.
Abstract: Core optical networks using reconfigurable optical switches and tunable lasers appear to be on the road towards widespread deployment and could evolve into all-optical mesh networks in the near future. Considering the impact of physical-layer impairments in the planning and operation of all-optical (and translucent) networks is the main focus of the DICONET project. The impairment-aware network planning and operation tool (NPOT) is the main outcome of the DICONET project and is explained in detail in this paper. The key building blocks of the NPOT, consisting of the network description repositories, the physical-layer performance evaluator, the impairment-aware routing and wavelength assignment engines, the component placement modules, failure handling, and the integration of the NPOT in the control plane, are the main contributions of this work. In addition, the experimental results of the DICONET proposal for centralized and distributed control-plane integration schemes, and the performance of failure handling in terms of restoration time, are presented in this work.
Abstract: A centralized impairment-aware lightpath restoration scheme is experimentally demonstrated. Through the implementation of the QoT estimator module on FPGA technology, lightpath restoration times of around 1.3 s are obtained for the high-priority traffic class.
Abstract: Online, real-time counting and estimation of the cardinality of sets is highly desirable for a large variety of applications, representing a foundational block for the efficient deployment and access of emerging internet-scale information systems. In this work we implement three well-known duplicate-insensitive counting algorithms and evaluate their performance in a testbed of resource-limited, commercial off-the-shelf hardware devices. We focus on devices that can be used in wireless mobile and sensor applications, and we evaluate the memory complexity, time complexity and absolute error of the algorithms under different realistic scenarios. Our findings indicate the suitability of each algorithm depending on the application characteristics.
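A classic duplicate-insensitive counter of this kind is the Flajolet-Martin sketch; since the abstract does not name the three algorithms evaluated, the sketch below is a representative example rather than the paper's exact choice. Re-inserting an item leaves the state unchanged, which is what makes the estimator duplicate-insensitive:

```python
import hashlib

def _rho(item):
    """1-based position of the least-significant set bit of the hash."""
    h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return (h & -h).bit_length()

class FMSketch:
    """Flajolet-Martin sketch: a coarse cardinality estimate from the
    run of consecutive low bits observed (single sketch, no averaging
    across hash functions, so the variance is high)."""
    PHI = 0.77351            # Flajolet-Martin correction constant

    def __init__(self):
        self.bitmap = 0      # bit k set iff some item had rho == k+1

    def add(self, item):
        self.bitmap |= 1 << (_rho(item) - 1)

    def estimate(self):
        r = 0                # length of the run of set bits from bit 0
        while self.bitmap >> r & 1:
            r += 1
        return 2 ** r / self.PHI

sketch = FMSketch()
for word in ["alpha", "beta", "alpha", "alpha"]:   # duplicates collapse
    sketch.add(word)
print(sketch.estimate() > 0)  # → True
```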
Abstract: We present key aspects (hardware, software, topology, networking) of SenseWall, an experimental sensor network test-bed we have created for the implementation and engineering of distributed sensor network algorithms. We then describe how SenseWall has been in particular used to implement two recent state of the art algorithms for energy balanced sensor data propagation. We elaborate on the issues and challenges created by the restrictions and particularities of the experimental test-bed and how we dealt with them. We also carry out a detailed performance evaluation comparing the energy balance protocols to two baseline protocols that include only either single hop or direct data transmissions.
Abstract: This paper evaluates a centralised impairment-aware path restoration approach for GMPLS-controlled transparent optical networks. Experimental results on a 14-node network test-bed show successful QoT-compliant path restoration in around 3.6 seconds.
Abstract: This paper presents experimental measurements of bulk data transfer in a wireless multi-hop sensor network environment. We investigate the effect of the number of hops and the conditions of the surrounding environment on the performance of the network in terms of achieved transfer rates. Our findings validate the theoretically established results on the relation between throughput and network diameter, i.e., throughput is inversely proportional to the network diameter, and in particular to the number of hops needed for data to reach its destination. Furthermore, we show how throughput is (significantly) affected by the type of physical environment, i.e., it drops as the harshness of the ambient conditions increases.
Abstract: In large-scale networks users often behave selfishly, trying to minimize their routing cost. Modelling this as a non-cooperative game may yield a Nash equilibrium with unboundedly poor network performance. To measure this inefficiency, the Coordination Ratio or Price of Anarchy (PoA) was introduced. It equals the ratio of the cost induced by the worst Nash equilibrium to the corresponding cost induced by the overall optimum assignment of the jobs to the network. To improve the PoA of a given network, a series of papers model this selfish behavior as a Stackelberg or Leader-Followers game.
We consider random tuples of machines, with either linear or M/M/1 latency functions, and PoA at least a tuning parameter c. We validate a variant (NLS) of the Largest Latency First (LLF) Leader's strategy on tuples with PoA ≥ c. NLS experimentally improves on LLF for systems with inherently high PoA, where the Leader is constrained to control a low portion α of the jobs. This suggests even better performance for systems with arbitrary PoA. We also bounded experimentally the least Leader's portion α0 needed to induce optimum cost. Unexpectedly, as parameter c increases, the corresponding α0 decreases, for M/M/1 latency functions. All of this is implemented in an extensive Matlab toolbox.
Abstract: We investigate the efficiency of protocols for basic communication in ad-hoc mobile networks. All protocols are classified in the semi-compulsory, relay category, according to which communication is achieved through a small team of mobile hosts, called the support, which moves in a predetermined way and serves as an intermediate pool for receiving and delivering messages. We implement a new semi-compulsory, relay protocol in which the motion of the support is based on a so-called "hunter" strategy developed for a pursuit-evasion game. We conduct an extensive, comparative experimental study of this protocol against two other existing protocols, each possessing a different motion for its support. We considered several types of inputs, including among others two kinds of motion patterns (random and adversarial) for the mobile hosts not in the support. Our experiments showed that for all protocols the throughput scales almost linearly with the number of mobile hosts in the network, and that a small support size suffices for efficient communication. An interesting outcome is that in most cases the new protocol is inferior to the other two, although it has global knowledge of the motion space.
Abstract: We study the fundamental problem of computing an arbitrary Nash equilibrium in bimatrix games.
We start by proposing a novel characterization of the set of Nash equilibria, via a bijective map to the solution set of a (parameterized) quadratic program, whose feasible space is the (highly structured) set of correlated equilibria.
We then proceed by proposing new subclasses of bimatrix games for which either an exact polynomial-time construction, or at least an FPTAS, is possible. In particular, we introduce the notion of mutual (quasi-) concavity of a bimatrix game, which assures (quasi-) convexity of our quadratic program, for at least one value of the parameter. For mutually concave bimatrix games, we provide a polynomial-time computation of a Nash equilibrium, based on the polynomial tractability of convex quadratic programming. For mutually quasi-concave games, we provide (to our knowledge) the first FPTAS for the construction of a Nash equilibrium.
Of course, for these new polynomially tractable subclasses of bimatrix games to be useful, polynomial-time certificates are also necessary that allow us to efficiently identify them. Towards this direction, we provide various characterizations of mutual concavity, which allow us to construct such a certificate. Interestingly, these characterizations also shed light on some structural properties of the bimatrix games satisfying mutual concavity. This subclass entirely contains the most popular subclass of polynomial-time solvable bimatrix games, namely, all the constant-sum games (rank-0 games). It is, though, incomparable to the subclass of games with fixed rank [KT07]: even rank-1 games may not be mutually concave (e.g., Prisoner's Dilemma), but, on the other hand, there exist mutually concave games of arbitrary (even full) rank. Finally, we prove closedness of mutual concavity under (Nash equilibrium preserving) positive affine transformations of bimatrix games having the same scaling factor for both payoff matrices. For different scaling factors the property is not necessarily preserved.
Abstract: In the near future, it is reasonable to expect that new types of systems will appear, of massive scale, expansive and permeating their environment, of very heterogeneous nature, and operating in a constantly changing networked environment. We expect that most such systems will have the form of a very large society of unimpressive networked artefacts. Yet, by cooperation, they will be organized in large societies to accomplish tasks that are difficult for, or beyond the capabilities of, today's conventional centralized systems.
The Population Protocol model of Angluin et al. introduced a novel approach to the study of such systems by assuming that each artefact is an agent so limited that it can be represented as a finite-state sensor of constant (O(1)) total storage capacity. Such agents are passively mobile and communicate in pairs using a low-power wireless signal. It has been proven that, although such systems consist of extremely limited, cheap and bulk-produced hardware devices, they are still capable of carrying out very useful, nontrivial computations. Based on this approach we investigate many new intriguing directions.
Abstract: We consider the performance of a number of DPLL algorithms on random 3-CNF formulas with n variables and m = rn clauses. A long series of papers analyzing so-called "myopic" DPLL algorithms has provided a sequence of lower bounds for their satisfiability threshold. Indeed, for each myopic algorithm A it is known that there exists an algorithm-specific clause-density rA such that, for r < rA, A finds a satisfying assignment with high probability; rA equals 8/3 = 2.66... for ordered-dll and 3.003... for generalized unit clause. We prove that for densities well within the provably satisfiable regime, every backtracking extension of either of these algorithms takes exponential time. Specifically, all extensions of ordered-dll take exponential time for r > 2.78, and the same is true of generalized unit clause for all r > 3.1. Our results imply exponential lower bounds for many other myopic algorithms for densities similarly close to the corresponding rA.
Abstract: We study the combinatorial structure and computational complexity of extreme Nash equilibria, ones that maximize or minimize a certain objective function, in the context of a selfish routing game. Specifically, we assume a collection of n users, each employing a mixed strategy, which is a probability distribution over m parallel links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user routes its traffic on links that minimize its expected latency cost.
Our structural results provide substantial evidence for the Fully Mixed Nash Equilibrium Conjecture, which states that the worst Nash equilibrium is the fully mixed Nash equilibrium, where each user chooses each link with positive probability. Specifically, we prove that the Fully Mixed Nash Equilibrium Conjecture is valid for pure Nash equilibria, and that under a certain condition, the social cost of any Nash equilibrium is within a factor of 6 + ε of that of the fully mixed Nash equilibrium, assuming that link capacities are identical.
Our complexity results include hardness, approximability and inapproximability results. Here we show that, for identical link capacities and under a certain condition, there is a randomized, polynomial-time algorithm to approximate the worst social cost within a factor arbitrarily close to 6 + ε. Furthermore, we prove that for any arbitrary integer k > 0, it is NP-hard to decide whether or not any given allocation of users to links can be transformed into a pure Nash equilibrium using at most k selfish steps. Assuming identical link capacities, we give a polynomial-time approximation scheme (PTAS) to approximate the best social cost over all pure Nash equilibria. Finally, we prove that it is NP-hard to approximate the worst social cost within a multiplicative factor equal to the tight upper bound on the ratio of the worst social cost to the optimal cost in the model of identical capacities.
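For intuition about pure Nash equilibria and selfish steps in this model, the following is a small sketch of ours (not the paper's code) for identical-capacity links: it checks whether any user can lower its latency by moving to a less loaded link, and computes the social cost (makespan) of a pure assignment.

```python
def is_pure_nash(assignment, weights, m):
    """True iff no user can lower its latency (the load of its link)
    by unilaterally moving to another of the m identical links."""
    load = [0.0] * m
    for user, link in enumerate(assignment):
        load[link] += weights[user]
    for user, link in enumerate(assignment):
        for other in range(m):
            if other != link and load[other] + weights[user] < load[link]:
                return False  # a selfish step exists
    return True

def social_cost(assignment, weights, m):
    """Social cost for pure strategies: the maximum link load (makespan)."""
    load = [0.0] * m
    for user, link in enumerate(assignment):
        load[link] += weights[user]
    return max(load)
```

For example, placing a weight-2 user alone on one of two links and two weight-1 users on the other is a pure Nash equilibrium with social cost 2, whereas stacking a weight-1 user onto the weight-2 user's link admits a selfish step.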
Abstract: We propose a fair scheduling algorithm for Computational Grids, called the Fair Execution Time Estimation (FETE) algorithm. FETE assigns a task to the computation resource that minimizes what we call its fair execution time estimation. The fair execution time of a task on a certain resource is an estimate of the time by which the task will be executed on that resource, assuming it gets a fair share of the resource's computational power. Though space-shared scheduling is used in practice, the estimates of the fair execution times are obtained assuming that a time-sharing discipline is used. We experimentally evaluate the proposed algorithm and observe that it outperforms other known scheduling algorithms. We also propose a version of FETE, called Simple FETE (SFETE), which requires no a priori knowledge of the tasks' workloads and in most cases has performance similar to that of FETE.
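The assignment rule can be sketched as follows; the function names and the simplification that a resource's capacity is split equally among the tasks already placed on it (plus the incoming one) are our own assumptions for illustration, not the paper's implementation:

```python
def fete_assign(task_workloads, capacities):
    """Greedy sketch of the FETE idea: place each task on the resource
    that minimizes its fair execution time estimate, i.e. workload divided
    by the fair (time-shared) share of the resource's capacity."""
    assignment = [[] for _ in capacities]  # task indices per resource
    placement = []
    for t, w in enumerate(task_workloads):
        best, best_time = None, None
        for r, cap in enumerate(capacities):
            # equal time-shared split among already-placed tasks plus this one
            fair_share = cap / (len(assignment[r]) + 1)
            est = w / fair_share
            if best is None or est < best_time:
                best, best_time = r, est
        assignment[best].append(t)
        placement.append(best)
    return placement
```

With two resources of capacities 2 and 1 and three unit tasks, the sketch places the first two tasks on the faster resource and the third on the slower one.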
Abstract: We study the problem of fair resource allocation in a simple cooperative multi-agent setting where we have k agents and a set of n objects to be allocated to those agents. Each object is associated with a weight represented by a positive integer or real number. We would like to allocate all objects to the agents so that each object is allocated to only one agent and the weight is distributed fairly. We adopt the fairness index popularized by the networking community as our measure of fairness, and study centralized algorithms for fair resource allocation. Based on the relationship between our problem and number partitioning, we devise a greedy algorithm for fair resource allocation that runs in polynomial time but is not guaranteed to find the optimal solution, and a complete anytime algorithm that finds the optimal solution but runs in exponential time. Then we study the phase transition behavior of the complete algorithm. Finally, we demonstrate that the greedy algorithm actually performs very well and returns almost perfectly fair allocations.
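A sketch of the greedy algorithm described above, assuming the fairness measure in question is Jain's index (the one popularized by the networking community) and using the classic LPT rule from number partitioning; the names are ours:

```python
def jain_index(loads):
    """Jain's fairness index: (sum x)^2 / (k * sum x^2); 1.0 is perfectly fair."""
    s, sq = sum(loads), sum(x * x for x in loads)
    return s * s / (len(loads) * sq) if sq else 1.0

def greedy_allocate(weights, k):
    """Greedy sketch: allocate each object, heaviest first, to the agent
    with the currently smallest total weight, which tends to equalize
    the k totals and hence maximize the fairness index."""
    totals = [0.0] * k
    alloc = [[] for _ in range(k)]
    for w in sorted(weights, reverse=True):
        i = totals.index(min(totals))
        totals[i] += w
        alloc[i].append(w)
    return totals, alloc
```

On the instance {4, 3, 3, 2} with two agents the rule happens to find a perfectly fair split (6 and 6), though in general it only approximates the optimum.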
Abstract: We investigate the impact of multiple, mobile sinks on efficient data collection in wireless sensor networks. To improve performance, our protocol design focuses on minimizing overlaps of sink trajectories and balancing the service load among the sinks. To cope with high network dynamics, placement irregularities and limited network knowledge, we propose three different protocols: a) a centralized one that explicitly equalizes spatial coverage; this protocol makes strong modeling assumptions and also serves as a kind of performance lower bound in uniform networks of low dynamics; b) a distributed protocol based on mutual avoidance of sinks; c) a clustering protocol that distributively groups service areas towards balancing the load per sink. Our simulation findings demonstrate significant gains in latency, while keeping the success rate and the energy dissipation at very satisfactory levels, even under high network dynamics and deployment heterogeneity.
Abstract: We study the problem of fast and energy-efficient collection of sensory data using a mobile sink, in wireless sensor networks in which both the sensors and the sink move. Motivated by relevant applications, we focus on dynamic sensory mobility and heterogeneous sensor placement. Our approach basically suggests exploiting the sensor motion to adaptively propagate information based on local conditions (such as high placement concentrations), so that the sink gradually "learns" the network and accordingly optimizes its motion. Compared to relevant solutions in the state of the art (such as the blind random walk, biased walks, and even optimized deterministic sink mobility), our method significantly reduces latency (the improvement ranges from 40% for uniform placements to 800% for heterogeneous ones), while also improving the success rate and keeping the energy dissipation at very satisfactory levels.
Abstract: We propose a new data dissemination protocol for wireless sensor networks, that basically pulls some additional knowledge about the network in order to subsequently improve data forwarding towards the sink. This extra information is still local, limited and obtained in a distributed manner. This extra knowledge is acquired by only a small fraction of sensors thus the extra energy cost only marginally affects the overall protocol efficiency. The new protocol has low latency and manages to propagate data successfully even in the case of low densities. Furthermore, we study in detail the effect of failures and show that our protocol is very robust. In particular, we implement and evaluate the protocol using large scale simulation, showing that it significantly outperforms well known relevant solutions in the state of the art.
Abstract: Many WSN algorithms and applications are based on knowledge regarding the position of nodes inside the network area. However, using GPS-based modules to perform localization in WSNs is rather expensive and, in the case of indoor applications such as smart buildings, also not applicable. Therefore, several techniques have been studied for performing relative localization in WSNs; that is, for computing the position of a node inside the network area relative to the positions of other nodes. Many such techniques are based on indicators like the Radio Signal Strength Indicator (RSSI) and the Link Quality Indicator (LQI). These techniques rest on the assumption that there is a strong correlation between the Euclidean distance of the communicating motes and these indicators. Therefore, high values of RSSI and LQI should indicate physical proximity of two communicating nodes. However, these indicators do not depend solely on distance: physical obstacles, ambient electromagnetic noise and interference from other wireless transmissions also affect the quality of wireless communication in a stochastic way. In this paper we propose, implement, experimentally fine-tune and evaluate a localization algorithm that exploits the stochastic nature of interference during wireless communications in order to perform localization in WSNs. Our algorithm is particularly designed for indoor localization of moving people in smart buildings. The localization achieved is fine-grained, i.e., the position of the target mote is successfully computed with approximately one meter accuracy. This fine-grained localization can be used by smart Building Management Systems in many applications, such as room adaptation to presence. In our scenario, the proposed algorithm is used by a smart room in order to localize the position of people inside the room and adapt room illumination accordingly.
Abstract: The Internet and the Web have arguably surpassed the von Neumann Computer as the most complex computational artifacts of our times. Out of all the formidable characteristics of the Internet/Web, it seems that the most novel is its socio-economic complexity. The Internet and the Web are built, operated and used by a multitude of very diverse economic interests, which compete or collaborate in various degrees. In fact, this suggests that some very important insights about the Web may come from a fusion of ideas from Algorithms with concepts and techniques from Mathematical Economics and Game Theory. Since 2000, a new field has emerged (Algorithmic Game Theory and Computational Internet Economics), which examines exactly such ideas. We feel that this field belongs to Web Science. Some of the main topics of that field examined so far are Internet equilibria, the so-called "Price of Anarchy" (a measure of the loss of optimality due to selfishness [Koutsoupias, Papadimitriou (1999)]), electronic markets and their equilibria, auctions, and algorithmic mechanism design (inverse game theory, or how to design a game in the net in such a clever way that individual players, motivated solely by their selfish interests, actually end up meeting the goals of the game designer!). Our paper here surveys the main concepts and results of this very promising subfield.
Abstract: Motivated by the problem of allocating optical bandwidth in tree-shaped WDM networks, we study the fractional path coloring problem in trees. We consider the class of locally-symmetric sets of paths on binary trees and prove that any such set of paths has a fractional coloring of cost at most 1.367L, where L denotes the load of the set of paths. Using this result, we obtain a randomized algorithm that colors any locally-symmetric set of paths of load L on a binary tree (with reasonable restrictions on its depth) using at most 1.367L+o(L) colors, with high probability.
Abstract: We consider the following distributed optimization problem: three agents i = 1, 2, 3 are each presented with a load drawn independently from the same known prior distribution. Then each agent decides on which of two available bins to put her load. Each bin has a given capacity, and the objective is to find a distributed protocol that minimizes the probability that an overflow occurs (or, equivalently, maximizes the winning probability).
In this work, we focus on the cases of full information and local information, depending on whether each agent knows the loads of both other agents or not. Furthermore, we distinguish between the cases where the agents are allowed to follow different decision rules (eponymous model) or not (anonymous model). We assume no communication among agents.
First, we present optimal protocols for the full information case, for both the anonymous and the eponymous model. For the local information, anonymous case, we show that the winning probability is upper bounded by 0.622 when the input loads are drawn from the uniform distribution. Motivated by [3], we present a general method for computing the optimal single-threshold protocol for any continuous distribution, and we apply this method to the case of the exponential distribution. Finally, we show how to compute, in exponential time, an optimal protocol for the local information, eponymous model for the case where the input loads are drawn from a discrete-valued, bounded distribution.
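A single-threshold protocol for the local-information, anonymous case can be illustrated with a Monte Carlo sketch; the unit capacity and the uniform [0,1] prior here are our own assumptions for illustration:

```python
import random

def win_probability(threshold, capacity=1.0, trials=100_000, seed=0):
    """Monte Carlo sketch: three agents draw i.i.d. uniform [0,1] loads;
    each (anonymously, with no communication) puts its load in bin 0 if
    the load is below the threshold, else in bin 1.  The protocol wins
    iff neither bin exceeds its capacity."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        bins = [0.0, 0.0]
        for _ in range(3):
            load = rng.random()
            bins[0 if load < threshold else 1] += load
        if bins[0] <= capacity and bins[1] <= capacity:
            wins += 1
    return wins / trials
```

Sweeping `threshold` over [0, 1] with such a sketch gives a quick empirical feel for how far simple threshold rules can get under the uniform prior.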
Abstract: Fun in Numbers (FinN) is a platform for developing and playing mobile, locative and collaborative distributed games using wireless sensors. Using FinN, a very large and diverse set of games can be enhanced, maximizing the in-game experience and collecting statistics for off-line, web-based viewing. At the same time, the essence of such games remains the same: fun in large numbers, in every place and at any time. FinN is implemented using a combination of the Java Standard and Mobile Editions, while on the hardware side we use wireless sensor devices called Sun SPOTs. In the future, mobile phones with embedded sensors, or other custom devices, can be used for the same purpose. We report a number of examples of games created with FinN and briefly present the architecture of our platform.
Abstract: We discuss two different ways of having fun with two different kinds of games: On the one hand, we present a framework for developing multiplayer pervasive games that rely on the use of mobile sensor networks. On the other hand, we show how to exploit game theoretic concepts in order to study the graph-theoretic problem of vertex coloring.
Abstract: In this paper we present the design of a simulator platform called FUSE (Fast Universal Simulator Engine). The term Universal means that the Engine can be adapted easily to different domains and be used for varying simulation needs, although our main target is simulation of distributed algorithms in distributed computing environments. The Engine is Fast in the sense that the simulation overhead is minimal and very large systems can be simulated. We discuss the architecture and the design decisions that form the basis of these features. We also describe the functionality that is provided to its users (e.g., monitoring, statistics, etc.).
Abstract: We discuss key findings and technological challenges related to SmartSantander, an EU project that is developing a city-scale experimental facility for Internet of Things and Future Internet experimentation. The main goal of the project is to design and construct a city scale lab for experimentation and provide an integrated framework for implementing Smart City services.
Abstract: Distributed algorithm designers often assume that system processes execute the same predefined software. Alternatively, when they do not assume that, designers turn to non-cooperative games and seek an outcome that corresponds to a rough consensus when no coordination is allowed. We argue that both assumptions are inapplicable in many real distributed systems, e.g., the Internet, and propose designing self-stabilizing and Byzantine fault-tolerant distributed game authorities. Once established, the game authority can secure the execution of any complete information game. As a result, we reduce costs that are due to the processes' freedom of choice. Namely, we reduce the price of malice.
Abstract: In this work we experimentally study the min order Radiocoloring problem (RCP) on Chordal, Split and Permutation graphs, which are three basic families of perfect graphs. This problem asks to find an assignment using the minimum number of colors to the vertices of a given graph G, so that each pair of vertices which are at distance at most two apart in G have different colors. RCP is an NP-Complete problem on chordal and split graphs [4]. For each of the three families, there are known upper bounds and/or approximation algorithms for the minimum number of colors needed to radiocolor such a graph [4,10].
We design and implement radiocoloring heuristics for graphs of the above families, based on the greedy heuristic. Also, for each of the above families, we investigate whether there exist graph instances requiring a number of colors close to the best known upper bound for the family in order to be radiocolored. Towards this goal, we present a number of generators that produce graphs of the above families requiring either (i) a large number of colors (compared to the best upper bound) in order to be radiocolored, called "extremal" graphs, or (ii) a small number of colors, called "non-extremal" instances. The experimental evaluation showed that randomly generated graph instances are in most cases "non-extremal" graphs, and that greedy-like heuristics perform very well in most cases, especially on "non-extremal" graphs.
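The greedy heuristic underlying these radiocoloring heuristics can be sketched as follows (our illustration, not the implemented code from the study):

```python
def greedy_radiocolor(adj):
    """Greedy sketch of the radiocoloring constraint: vertices at distance
    at most two must receive different colors.  Each vertex takes the
    smallest color not used in its distance-2 neighborhood; the number of
    colors used depends on the visit order."""
    color = {}
    for v in adj:
        forbidden = set()
        for u in adj[v]:                  # neighbors at distance 1
            if u in color:
                forbidden.add(color[u])
            for w in adj[u]:              # neighbors at distance 2
                if w != v and w in color:
                    forbidden.add(color[w])
        c = 0
        while c in forbidden:             # smallest color not forbidden
            c += 1
        color[v] = c
    return color
```

On a path of three vertices, all pairs are within distance two, so the sketch correctly uses three distinct colors.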
Abstract: We consider the generation of prime order elliptic curves (ECs) over a prime field F_p using the Complex Multiplication (CM) method. A crucial step of this method is to compute the roots of a special type of class field polynomials, the most commonly used being the Hilbert and Weber ones, uniquely determined by the CM discriminant D. In attempting to construct prime order ECs using Weber polynomials, two difficulties arise (in addition to the necessary transformations of the roots of such polynomials to those of their Hilbert counterparts). The first is that the requirement of prime order necessitates that D ≡ 3 (mod 8), which gives Weber polynomials with degree three times larger than the degree of their corresponding Hilbert polynomials (a fact that could affect efficiency). The second difficulty is that these Weber polynomials do not have roots in F_p. In this paper we show how to overcome the above difficulties and provide efficient methods for generating ECs of prime order, supported by a thorough experimental study. In particular, we show that such Weber polynomials have roots in F_{p^3} and present a set of transformations for mapping roots of Weber polynomials in F_{p^3} to roots of their corresponding Hilbert polynomials in F_p. We also show how a new class of polynomials, with degree equal to that of their corresponding Hilbert counterparts (and hence having roots in F_p), can be used in the CM method to generate prime order ECs. Finally, we compare experimentally the efficiency of using this new class against the use of the aforementioned Weber polynomials.
Keywords: Elliptic Curve Cryptosystems
Abstract: Geographic routing is becoming the protocol of choice for many sensor network applications. Some very efficient geographic routing algorithms exist, however they require a preliminary planarization of the communication graph. Planarization induces overhead which makes this approach not optimal when lightweight protocols are required. On the other hand, georouting algorithms which do not rely on planarization have fairly low success rates and either fail to route messages around all but the simplest obstacles or have a high topology control overhead (e.g. contour detection algorithms). In this entry we describe the GRIC algorithm which was designed to overcome some of those limitations. The GRIC algorithm was proposed in [PN07a]. It is the first lightweight and efficient on demand (i.e. all-to-all) geographic routing algorithm which does not require planarization, has almost 100% delivery rates (when no obstacles are added), and behaves well in the presence of large communication blocking obstacles.
Abstract: The technological as well as software advances in microelectronics and embedded component design have led to the development of low-cost, small-sized devices capable of forming wireless, ad-hoc networks and sensing a number of qualities of their environment, while performing computations that depend on the sensed qualities as well as on information received from their peers. These sensor networks rely on the collective power of the separate devices, as well as their computational and sensing capabilities, to understand "global" environmental states through locally sampled information and local sensor interactions. Due to the locality of sensor networks, which naturally arises from the locality of their communication capabilities, a number of interesting connections exist between these networks and geometrical concepts and problems. In this paper we study two simple problems that pertain to the formation of low-power and low-interference communication patterns in fixed topology sensor networks. We study the problem of using multihop communication links instead of direct ones, as well as the problem of forming a communication ring of sensor networks, so as to reduce power consumption as well as interference from other nodes. Our focus is on the connection between sensor networks and geometrical concepts, rather than on practicality, so as to highlight their interrelationship.
Abstract: We study geometric versions of the min-size k-clustering problem, a clustering problem which generalizes clustering to minimize the sum of cluster radii and has important applications. We prove that the problem can be solved in polynomial time when the points to be clustered are located on a line. For Euclidean spaces of higher dimensions, we show that the problem is NP-hard and present polynomial time approximation schemes. The latter result yields an improved approximation algorithm for the related problem of k-clustering to minimize the sum of cluster diameters.
Abstract: A fundamental approach to efficiently finding best routes or optimal itineraries in traffic information systems is to reduce the search space (the part of the graph visited) of the most commonly used shortest path routine (Dijkstra's algorithm) on a suitably defined graph. We investigate reduction of the search space while simultaneously retaining data structures, created during a preprocessing phase, of size linear (i.e., optimal) in the size of the graph. We show that the search space of Dijkstra's algorithm can be significantly reduced by extracting geometric information from a given layout of the graph and by encapsulating precomputed shortest-path information in the resulting geometric objects (containers). We present an extensive experimental study comparing the impact of different types of geometric containers using test data from real-world traffic networks. We also present new algorithms, as well as an empirical study, for the dynamic case of this problem, where edge weights are subject to change and the geometric containers have to be updated, and we show that our new methods are two to three times faster than recomputing everything from scratch. Finally, in an appendix, we discuss the software framework that we developed to realize the implementations of all of our variants of Dijkstra's algorithm. Such a framework is not trivial to achieve, as our goal was to maintain a common code base that is, at the same time, small, efficient, and flexible, since we wanted to enhance and combine several variants in any possible way.
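The query phase of the geometric-container idea can be sketched as follows; building the containers during preprocessing (e.g., as bounding boxes of the nodes whose shortest paths start with each edge) is assumed done elsewhere, and the data layout here is our own:

```python
import heapq

def dijkstra_with_containers(graph, pos, containers, source, target):
    """Dijkstra's algorithm with geometric pruning: relax an edge (u, v)
    only if the target's coordinates fall inside that edge's precomputed
    bounding box containers[(u, v)] = (xmin, ymin, xmax, ymax), which
    encloses all nodes whose shortest path from u starts with (u, v)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            xmin, ymin, xmax, ymax = containers[(u, v)]
            x, y = pos[target]
            if not (xmin <= x <= xmax and ymin <= y <= ymax):
                continue                  # container excludes the target: prune
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

On a path graph a-b-c laid out on a line, the container of the backward edge (b, a) excludes c, so that edge is never relaxed while answering the query from a to c.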
Abstract: Building efficient internet-scale data management services is the main focus of this chapter. In particular, we aim to show how to leverage DHT technology and extend it with novel algorithms and architectures in order to (i) improve efficiency and reliability for traditional DHT (exact-match) queries, particularly exploiting the abundance of altruism witnessed in real-life P2P networks, (ii) speedup range queries for data stored on DHTs, and (iii) support efficiently and scalably the publish/subscribe paradigm over DHTs, which crucially depends on algorithms for supporting rich queries on string-attribute data.
Abstract: Information retrieval (IR) in peer-to-peer (P2P) networks, where the corpus is spread across many loosely coupled peers, has recently gained importance. In contrast to IR systems on a centralized server or server farm, P2P IR faces the additional challenge of either being oblivious to global corpus statistics or having to compute the global measures from local statistics at the individual peers in an efficient, distributed manner. One specific measure of interest is the global document frequency of different terms, which would be very beneficial as term-specific weights in the scoring and ranking of merged search results obtained from different peers.
This paper presents an efficient solution to the problem of estimating global document frequencies in a large-scale P2P network with very high dynamics, where peers can join and leave the network on short notice. In particular, the developed method takes into account the fact that the local document collections of autonomous peers may arbitrarily overlap, so that global counting needs to be duplicate-insensitive. The method is based on hash sketches as a technique for compact data synopses. Experimental studies demonstrate the estimator's accuracy, scalability, and ability to cope with high dynamics. Moreover, the benefit for ranking P2P search results is shown by experiments with real-world Web data and queries.
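The duplicate-insensitive counting that hash sketches enable can be illustrated with a simplified Flajolet-Martin-style sketch (a single bitmap rather than the averaged variants used in practice):

```python
import hashlib

def fm_sketch(items, num_bits=32):
    """Flajolet-Martin-style sketch: for each item, set bit r, where r is
    the position of the lowest set bit of the item's hash.  Because the
    sketch is a bitwise OR over items, combining sketches from many peers
    is duplicate-insensitive: shared documents are counted once."""
    bitmap = 0
    for item in items:
        h = int(hashlib.md5(str(item).encode()).hexdigest(), 16)
        r = (h & -h).bit_length() - 1 if h else num_bits - 1
        bitmap |= 1 << min(r, num_bits - 1)
    return bitmap

def estimate(bitmap):
    """Estimate the distinct count as 2^R / 0.77351, where R is the
    position of the lowest zero bit in the combined sketch."""
    r = 0
    while bitmap & (1 << r):
        r += 1
    return (2 ** r) / 0.77351
```

The key property is exact: OR-ing the sketches of two overlapping collections yields precisely the sketch of their union, so each distinct document contributes only once regardless of how many peers hold it.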
Abstract: Green Awareness in Action (GAIA) is introducing game challenges to the school community, in which real-world sensor data produced inside school buildings are used, aiming to increase awareness and reduce energy consumption. An initial small-scale in-school evaluation trial of the games' deployment is reported here.
Abstract: Educational buildings constitute 17% of the non-residential building stock in the EU [1], while recent work shows that a focus on energy use in schools can potentially yield an array of rewards, in concert with educational excellence and a healthy learning environment [2]. With these in mind, GAIA, a Horizon 2020 EC-funded project, is developing an IoT platform that combines sensing, web-based and gamification elements in order to address the educational community. Its primary aim is to increase awareness about energy consumption and sustainability, based on real sensor data produced by the school buildings where students and teachers live and work, while also leading towards behavior change in terms of energy efficiency.
Abstract: The Internet of Things is shaping up to be the ideal vehicle for introducing pervasive computing into our everyday lives, especially in the form of smart home and building management systems. However, although such technologies are gradually becoming more mainstream, there is still a lot of ground to be covered with respect to public buildings, and specifically ones in the educational sector. We discuss here "Green Mindset", an action focusing on energy efficiency and sustainability in Greek public schools. A large-scale sensor infrastructure has been deployed to 12 public school buildings across diverse settings. We report on the overall design and implementation of the system, as well as on some first results coming from the data produced. Our system provides a flexible and efficient basis for realizing a unified approach to monitoring energy consumption and environmental parameters that can be used both for building administration and for educational purposes.
Abstract: The simplex method has been successfully used in solving linear programming problems for many years. Parallel approaches have also been studied extensively, due to the intensive computations required, especially for the solution of large linear problems (LPs). In this paper we present a highly scalable parallel implementation framework of the standard full tableau simplex method in a highly parallel (distributed memory) environment. Specifically, we have designed and implemented a suitable column distribution scheme as well as a row distribution scheme, and we have thoroughly tested our implementations on a considerably powerful distributed platform (a Linux cluster with a Myrinet interface). We then compare our approaches (a) to each other for a varying problem size (number of rows and columns) and (b) to other recent and valuable corresponding efforts in the literature. In most cases, the column distribution scheme performs considerably better than the row distribution scheme. Moreover, both schemes (even the row distribution scheme on large-scale problems) lead to particularly high speedup and efficiency values, which are considerably better in all cases than the ones achieved in other similar research efforts and implementations. Finally, we further evaluate our basic parallelization scheme on very large LPs in order to validate more reliably the high efficiency and scalability achieved.
Abstract: One of the major problems algorithm designers usually face is to know in advance whether a proposed optimization algorithm is going to behave as planned, and if not, what changes are to be made to the way new solutions are examined so that the algorithm performs nicely. In this work we develop a methodology for differentiating good neighborhoods from bad ones. As a case study we consider the structure of the space of assignments for random 3-SAT formulas and we compare two neighborhoods, a simple one and a more refined one for which we already know that the corresponding algorithm behaves extremely well. We give evidence that it is possible to tell in advance what neighborhood structure will give rise to a good search algorithm and we show how our methodology could have been used to discover some recent results on the structure of the SAT space of solutions. As a tool we use Go with the Winners, an optimization heuristic that uses many particles that independently search the space of all possible solutions. By gathering statistics, we compare the combinatorial characteristics of the different neighborhoods and we show that there are certain features that make a neighborhood better than another, thus giving rise to good search algorithms.
Abstract: Human mobility monitoring and respective traces are important for understanding human behavior, respective patterns and associated context. Such data can potentially be used in business intelligence-oriented systems, for providing added-value commercial services or insight into internal enterprise procedures. At the same time, smartphones are rapidly becoming an indispensable tool for our everyday life, while their advanced networking and computing capabilities are increasingly being used as enablers for new applications. We discuss here a system using a stable computing and networking infrastructure along with smartphone applications, based on commodity technologies, meant to be deployed rapidly and provide analytics for such aspects almost in real time. We also discuss a related scenario in order to provide insight as to where our system could be used. We briefly present the deployment of our system in two settings, an office building and a research exhibition event, along with our experiences. Our findings show that it is feasible and efficient to deploy and operate our system relatively easily, producing meaningful data.
Abstract: In this paper we investigate the construction of efficient secret sharing schemes by using a technique called hypergraph decomposition, extending in a non-trivial way the previously studied graph decomposition technique. A major advantage of hypergraph decomposition is that it applies to any access structure, rather than only to structures representable as graphs. As a consequence we obtain secret sharing schemes for several classes of access structures with improved efficiency over previous results. We also obtain an elementary characterization of the ideal access structures among the hyperstars, which is of independent interest.
Abstract: For a place that gathers millions of people the Web seems pretty lonely at times. This is mainly due to the current predominant browsing scenario; that of an individual participating in an autonomous surfing session. We believe that people should be seen as an integral part of the browsing and searching activity towards a concept known as social navigation. Based on this observation we present iClone (www.iclone.com), a social web browser that is able to raise awareness of other people surfing similar websites at the same time by utilizing temporal correlations of their web history logs and to facilitate online communication and collaboration.
Abstract: We consider the offline version of the routing and
wavelength assignment (RWA) problem in transparent all-optical networks. In such networks and in the absence of regenerators, the signal quality of transmission degrades due to physical layer
impairments. We initially present an algorithm for solving the static RWA problem based on an LP relaxation formulation that tends to yield integer solutions. To account for signal degradation due to physical impairments, we model the effects of the path length, the path hop count, and the interference among lightpaths by imposing additional (soft) constraints on RWA. The objective of the resulting optimization problem is not only to serve the
connection requests using the available wavelengths, but also to minimize the total accumulated signal degradation on the selected lightpaths. Our simulation studies indicate that the proposed RWA algorithms select the lightpaths for the requested connections so as to avoid impairment generating sources, thus dramatically reducing the overall physical-layer blocking when compared to RWA algorithms that do not account for impairments.
Abstract: Dynamic graph algorithms have been extensively studied in the last two decades due to their wide applicability in many contexts. Recently, several implementations and experimental studies have been conducted investigating the practical merits of fundamental techniques and algorithms. In most cases, these algorithms required sophisticated engineering and fine-tuning to be turned into efficient implementations. In this paper, we survey several implementations along with their experimental studies for dynamic problems on undirected and directed graphs. The former case includes dynamic connectivity, dynamic minimum spanning trees, and the sparsification technique. The latter case includes dynamic transitive closure and dynamic shortest paths. We also discuss the design and implementation of a software library for dynamic graph algorithms.
Abstract: In this work we study the implementation of multicost routing in a distributed way in wireless mobile ad hoc networks. In contrast to traditional single-cost routing, where each path is characterized by a scalar, in multicost routing a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated. These parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. Up until now the performance of multicost and multi-constrained routing in wireless ad hoc networks has been evaluated either at a theoretical level or by assuming that nodes are static and have full knowledge of the network topology and nodes' state. In the present paper we assess the performance of multicost routing based on energy-related parameters in mobile ad hoc networks by embedding its logic in the Dynamic Source Routing (DSR) algorithm, which is a well-known fully distributed routing algorithm. We use simulations to compare the performance of the multicost-DSR algorithm to that of the original DSR algorithm and examine their behavior under various node mobility scenarios. The results confirm that the multicost-DSR algorithm improves the performance of the network in comparison to the original DSR algorithm in terms of energy efficiency. The multicost-DSR algorithm enhances the performance of the network not only by reducing energy consumption overall in the network, but also by spreading energy consumption more uniformly across the network, prolonging the network lifetime and reducing the packet drop probability. Furthermore, the delay suffered by the packets reaching their destination for the case of the multicost-DSR algorithm is shown to be lower than in the case of the original DSR algorithm.
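To make the multicost idea concrete, the following toy sketch aggregates per-link cost vectors (hop count, transmission energy, residual node energy) into a path cost vector, then collapses the vector with one possible energy-aware optimization function to rank candidate paths. The specific parameters and the combining function are illustrative assumptions, not the exact ones used in the multicost-DSR algorithm.

```python
# Toy multicost path selection. Each link carries a dict of cost parameters;
# path vectors aggregate componentwise: count for hops, sum for transmission
# energy, min for the residual energy of nodes along the path.

def path_cost_vector(path_links):
    hops = len(path_links)
    energy = sum(link["tx_energy"] for link in path_links)
    min_residual = min(link["residual"] for link in path_links)
    return hops, energy, min_residual

def energy_aware_cost(vec):
    hops, energy, min_residual = vec
    # Illustrative optimization function: favor short, low-energy paths
    # that avoid nodes with nearly depleted batteries.
    return energy * hops / min_residual

def select_path(candidate_paths):
    """Pick the candidate whose cost vector minimizes the scalar function."""
    return min(candidate_paths,
               key=lambda p: energy_aware_cost(path_cost_vector(p)))
```

A single-cost algorithm would collapse each link to a scalar up front; keeping the vector until the end is what allows different optimization functions (and thus different routing behaviors) over the same candidate set.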
Abstract: In this work we discuss Fun in Numbers, a software platform for implementing multiplayer games and interactive installations, that are based on the use of ad hoc mobile sensing devices. We utilize a detailed log of a three-day long public showcase as a basis to discuss the implementation issues related to a set of games and installations, which are examples of this unique category of applications, utilizing a blend of technologies. We discuss their fundamental concepts and features, also arguing that they have many aspects and potential uses. The architecture of the platform and implementation details are highlighted in this work, along with detailed descriptions of the protocols used. Our experiments shed light on a number of key issues, such as network scaling and real-time performance, and we provide experiments regarding cross-layer software issues. We additionally provide data showing that such games and installations can be efficiently supported by our platform, with as many as 50 concurrent players in the same physical space. These results are backed up by a user evaluation study from a large sample of 136 visitors, which shows that such applications can be seriously fun.
Abstract: We describe algorithms for finding shortest paths and distances in outerplanar and planar digraphs
that exploit the particular topology of the input graph. An important feature of our algorithms is that they can
work in a dynamic environment, where the cost of any edge can be changed or the edge can be deleted. In the
case of outerplanar digraphs, our data structures can be updated after any such change in only logarithmic time.
A distance query is also answered in logarithmic time. In the case of planar digraphs, we give an interesting
tradeoff between preprocessing, query, and update times depending on the value of a certain topological
parameter of the graph. Our results can be extended to n-vertex digraphs of genus O(n^(1-ε)) for any ε > 0.
Abstract: We present improved methods for computing a set of alternative source-to-destination routes
in road networks in the form of an alternative graph. The resulting alternative graphs are
characterized by minimum path overlap, small stretch factor, as well as low size and complexity.
Our approach improves upon a previous one by introducing a new pruning stage preceding any
other heuristic method and by introducing a new filtering and fine-tuning of two existing methods.
Our accompanying experimental study shows that the entire alternative graph can be computed
pretty fast even in continental size networks.
Abstract: We present a new finger search tree with O(log log d) expected search time in the Random Access Machine (RAM) model of computation for a large class of input distributions. The parameter d represents the number of elements (distance) between the search element and an element pointed to by a finger, in a finger search tree that stores n elements. Our data structure improves upon a previous result by Andersson and Mattsson that exhibits expected O(log log n) search time by incorporating the distance d into the search time complexity, and thus removing the dependence on n. We are also able to show that the search time is O(log log d + φ(n)) with high probability, where φ(n) is any slowly growing function of n. For the need of the analysis we model the updates by a “balls and bins” combinatorial game that is interesting in its own right as it involves insertions and deletions of balls according to an unknown distribution.
Abstract: We present a new finger search tree with O(1) worst-case
update time and O(log log d) expected search time with high probability
in the Random Access Machine (RAM) model of computation for a large
class of input distributions. The parameter d represents the number of
elements (distance) between the search element and an element pointed
to by a finger, in a finger search tree that stores n elements. For the need
of the analysis we model the updates by a "balls and bins" combinatorial
game that is interesting in its own right as it involves insertions and
deletions of balls according to an unknown distribution.
Abstract: We present a new finger search tree with O(log log d) expected search time in the Random
Access Machine (RAM) model of computation for a large class of input distributions. The
parameter d represents the number of elements (distance) between the search element and an
element pointed to by a finger, in a finger search tree that stores n elements. Our data structure
improves upon a previous result by Andersson and Mattsson that exhibits expected O(log log n)
search time by incorporating the distance d into the search time complexity, and thus removing
the dependence on n. We are also able to show that the search time is O(log log d + φ(n)) with
high probability, where φ(n) is any slowly growing function of n. For the need of the analysis
we model the updates by a “balls and bins” combinatorial game that is interesting in its own
right as it involves insertions and deletions of balls according to an unknown distribution.
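The role of the distance parameter d can be illustrated on a plain sorted array: galloping (exponential) search outward from a finger position inspects O(log d) elements, the deterministic analogue of the distance-sensitive bounds above. This sketch only illustrates the distance parameter; it is not the randomized tree structure of these papers.

```python
from bisect import bisect_left

def finger_search(a, finger, key):
    """Search sorted array `a` for `key`, starting from index `finger`.
    Galloping doubles the step until the key is bracketed, then binary
    searches the final window: O(log d) comparisons, d = |target - finger|.
    Returns the index of `key`, or -1 if absent."""
    n = len(a)
    if key == a[finger]:
        return finger
    if key > a[finger]:
        lo, step = finger, 1
        while lo + step < n and a[lo + step] < key:   # gallop right
            step *= 2
        i = bisect_left(a, key, lo + step // 2, min(lo + step, n - 1) + 1)
    else:
        hi, step = finger, 1
        while hi - step > 0 and a[hi - step] > key:   # gallop left
            step *= 2
        i = bisect_left(a, key, max(hi - step, 0), hi - step // 2 + 1)
    return i if i < n and a[i] == key else -1
```

Searching near the finger costs far fewer probes than a full binary search, which is precisely the advantage the O(log log d) expected bound captures over O(log log n).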
Abstract: Collection selection has been a research issue for years. Typically,
in related work, precomputed statistics are employed
in order to estimate the expected result quality of each collection,
and subsequently the collections are ranked accordingly.
Our thesis is that this simple approach is insufficient
for several applications in which the collections typically
overlap. This is the case, for example, for the collections
built by autonomous peers crawling the web. We
argue for the extension of existing quality measures using
estimators of mutual overlap among collections and present
experiments in which this combination outperforms CORI,
a popular approach based on quality estimation. We outline
our prototype implementation of a P2P web search engine,
coined MINERVA1, that allows handling large amounts of
data in a distributed and self-organizing manner. We conduct
experiments which show that taking overlap into account
during collection selection can drastically decrease the
number of collections that have to be contacted in order to
reach a satisfactory level of recall, which is a great step toward
the feasibility of distributed web search.
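One standard way to estimate mutual overlap between collections without exchanging them is a min-wise hashing sketch: each peer publishes a small signature of its document identifiers, matching signature components estimate the Jaccard overlap, and that estimate can discount a collection's quality score during selection. This is a generic sketch-based estimator under our own simplifications, not necessarily the estimator used in MINERVA; the discounting rule at the end is likewise a hypothetical combination.

```python
import hashlib

def minhash_signature(doc_ids, k=64):
    """k independent min-hashes over a collection's document identifiers."""
    return [min(int(hashlib.md5(f"{seed}:{d}".encode()).hexdigest(), 16)
                for d in doc_ids)
            for seed in range(k)]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching components estimates |A ∩ B| / |A ∪ B|."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def overlap_discounted_score(quality, covered_sig, candidate_sig):
    # Hypothetical rule: discount a candidate collection's quality estimate
    # by its overlap with what the already-selected collections cover.
    return quality * (1 - estimated_jaccard(covered_sig, candidate_sig))
```

Selecting collections greedily by the discounted score skips peers whose content is already covered, which is how overlap awareness reduces the number of collections contacted for a given recall level.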
Abstract: In load balancing games, there is a set of available servers and a set of clients; each client wishes to run her job on some server. Clients are selfish and each of them selects a server that, given an assignment of the other clients to servers, minimizes the latency she experiences with no regard to the global optimum. In order to mitigate the effect of selfishness on the efficiency, we assign taxes to the servers. In this way, we obtain a new game where each client aims to minimize the sum of the latency she experiences and the tax she pays. Our objective is to find taxes so that the worst equilibrium of the new game is as efficient as possible. We present new results concerning the impact of taxes on the efficiency of equilibria, with respect to the total latency of all clients and the maximum latency (makespan).
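A minimal way to see taxes at work is best-response dynamics on identical servers with unit-weight clients, where a client's cost is its server's load plus the server's tax. This toy model and the particular tax values are our own illustration, not the game classes or taxation schemes analyzed in the paper.

```python
def best_response_dynamics(n_clients, taxes):
    """Unit-weight clients on identical servers; latency = server load.
    Each client repeatedly moves to the server minimizing (load after the
    move) + tax, until no client can improve, i.e. a pure Nash equilibrium
    of this toy game. Returns the equilibrium load vector."""
    assignment = [0] * n_clients          # start with everyone on server 0
    loads = [0] * len(taxes)
    loads[0] = n_clients
    changed = True
    while changed:
        changed = False
        for c in range(n_clients):
            cur = assignment[c]
            best, best_cost = cur, loads[cur] + taxes[cur]
            for s in range(len(taxes)):
                if s == cur:
                    continue
                cost = loads[s] + 1 + taxes[s]   # load if client c moves to s
                if cost < best_cost - 1e-12:
                    best, best_cost = s, cost
            if best != cur:
                loads[cur] -= 1
                loads[best] += 1
                assignment[c] = best
                changed = True
    return loads
```

With zero taxes the clients spread evenly; a tax on one server shifts the equilibrium away from it, which is the lever the paper uses to steer the worst equilibrium toward efficient assignments.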
Abstract: The problem of robust line planning requests a set of origin-destination paths (lines) along with their frequencies in an underlying railway network infrastructure, which are robust to fluctuations of real-time parameters of the solution. In this work, we investigate a variant of robust line planning stemming from recent regulations in the railway sector that introduce competition and free railway markets, and set up a new application scenario: there is a (potentially large) number of line operators that have their lines fixed and operate as competing entities struggling to exploit the underlying network infrastructure via frequency requests, while the management of the infrastructure itself remains the responsibility of a single (typically governmental) entity, the network operator. The line operators are typically unwilling to reveal their true incentives. Nevertheless, the network operator would like to ensure a fair (or, socially optimal) usage of the infrastructure, e.g., by maximizing the (unknown to him) aggregate incentives of the line operators. We show that this can be accomplished in certain situations via a (possibly anonymous) incentive-compatible pricing scheme for the usage of the shared resources, that is robust against the unknown incentives and the changes in the demands of the entities. This brings up a new notion of robustness, which we call incentive-compatible robustness, that considers as robustness of the system its tolerance to the entities' unknown incentives and elasticity of demands, aiming at an eventual stabilization to an equilibrium point that is as close as possible to the social optimum.
Abstract: We consider the online impairment-aware routing
and wavelength assignment (IA-RWA) problem in transparent
WDM networks. To serve a new connection, the online algorithm,
in addition to finding a route and a free wavelength (a lightpath),
has to guarantee its transmission quality, which is affected by
physical-layer impairments. Due to interference effects, the establishment
of the new lightpath affects and is affected by the other
lightpaths. We present two multicost algorithms that account
for the actual current interference among lightpaths, as well as
for other physical effects, performing a cross-layer optimization
between the network and physical layers. In multicost routing,
a vector of cost parameters is assigned to each link, from which
the cost vectors of the paths are calculated. The first algorithm
utilizes cost vectors consisting of impairment-generating source
parameters, so as to be generic and applicable to different physical
settings. These parameters are combined into a scalar cost
that indirectly evaluates the quality of candidate lightpaths. The
second algorithm uses specific physical-layer models to define noise variance-related cost parameters, so as to directly calculate the Q-factor of candidate lightpaths. The algorithms find a set of so-called non-dominated paths to serve the connection, in the sense that no path in this set is better than another with respect to all cost parameters. To select the lightpath, we propose various optimization functions that correspond to different IA-RWA algorithms. The proposed algorithms combine the strength of multicost optimization with low execution times, making them appropriate for serving online connections.
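The domination relation used above has a direct computational reading: a path's cost vector is kept only if no other candidate is at least as good in every component and strictly better in at least one. A minimal sketch follows, treating all components as scalars to be minimized, which is a simplification; in the actual algorithms some parameters (such as the set of available wavelengths) are not simple scalars.

```python
def dominates(u, v):
    """u dominates v: no worse in every cost component and strictly
    better in at least one (all components minimized in this sketch)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(cost_vectors):
    """Keep only the Pareto-optimal cost vectors. Any monotonic
    optimization function applied afterwards attains its optimum
    somewhere in this reduced set."""
    return [v for v in cost_vectors
            if not any(dominates(u, v) for u in cost_vectors if u is not v)]
```

Computing the non-dominated set once and then applying different optimization functions to it is what lets one multicost framework express several IA-RWA algorithms.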
Abstract: We present a unique location-based application of sensor networks. The "Magnetize Words" installation produces word clouds following visitors around, interacting with each other as the visitors meet in physical space. The installation can be based on either of two localization-providing platforms. The first achieves high localization accuracy while requiring visitors to carry identifying tags. The other provides low accuracy, but is fully nonintrusive.
Abstract: We propose efficient schemes for information-theoretically secure
key exchange in the Bounded Storage Model (BSM), where the adversary
is assumed to have limited storage. Our schemes generate a
secret One Time Pad (OTP) shared by the sender and the receiver,
from a large number of public random bits produced by the sender
or by an external source. Our schemes initially generate a small
number of shared secret bits, using known techniques. We introduce
a new method to expand a small number of shared bits to a
much longer, shared key.
Our schemes are tailored to the requirements of sensor nodes
and wireless networks. They are simple, efficient to implement and
take advantage of the fact that practical wireless protocols transmit
data in frames, unlike previous protocols, which assume access to
specific bits in a stream of data. Indeed, our main contribution is
twofold.
On the one hand, we construct schemes that are attractive in
terms of simplicity, computational complexity, number of bits read
from the shared random source and expansion factor of the initial
key to the final shared key.
On the other hand, we show how to transform any existing scheme
for key exchange in BSM into a more efficient scheme in the number
of bits it reads from the shared source, given that the source is
transmitted in frames.
Abstract: In this paper, we discuss the conceptual problems arising from security issues for small artefacts. We propose two frameworks for security of small artefacts and present some preliminary results for the two frameworks.
Abstract: In this work, we study the impact of the dynamic changing of the network link capacities on the stability properties of packet-switched networks. In particular, we consider the Adversarial, Quasi-Static Queuing Theory model, where each link capacity may take on only two possible (integer) values, namely 1 and C > 1, under a (w, ρ)-adversary. We obtain the following results:
• Allowing such dynamic changes to the link capacities of a network with just ten nodes that uses the LIS (Longest-in-System) protocol for contention resolution results in instability at certain injection rates, for large enough values of C.
• The combination of dynamically changing link capacities with compositions of contention-resolution protocols on network queues suffices for similar instability bounds: the composition of LIS with any of the SIS (Shortest-in-System), NTS (Nearest-to-Source), and FTG (Furthest-to-Go) protocols is unstable at certain injection rates, for large enough values of C.
• The instability bound of the network subgraphs that are forbidden for stability is affected by the dynamic changes to the link capacities: we present improved instability bounds for all the directed subgraphs that were known to be forbidden for stability on networks running a certain greedy protocol.
Abstract: A number of infrastructures are being deployed for Future Internet experimentation purposes, providing access to large-scale IoT resources to researchers and industry. SmartSantander is among the largest ones, deployed at the city center of Santander. We discuss SmartSantander's augmentation using smartphones provided by volunteers, in order to increase sensing resources and ubiquity. Our system allows developers to write code for Android and automatically deploy their experiments to Android devices, alongside the SmartSantander platform. Initial results produced by experiments with a small number of volunteers show that the system provides meaningful extensions to the existing platform.
Abstract: Wireless networks are nowadays widely used and constantly evolving; we are constantly surrounded by numerous such networks. A modern idea is to exploit this variety and aggregate all available connections together towards improving Internet connectivity. In this paper we present our design of a networking system that enables participating entities to connect with each other and share their broadband connections. Every connected entity gains broadband access from those that have and share one, independently of how many connections are available and how fast they are. Our goal is to improve the utilization of the broadband connections and to increase their fault tolerance. To this end, the system is built on top of a peer-to-peer network, where the participating entities share their connections according to various scheduling algorithms.
Abstract: With this work we aim to make a three-fold contribution. We first address the issue of supporting efficiently queries over string-attributes involving prefix, suffix, containment, and equality operators in large-scale data networks. Our first design decision is to employ distributed hash tables (DHTs) for the data network's topology, harnessing their desirable properties. Our next design decision is to derive DHT-independent solutions, treating the DHT as a black box. Second, we exploit this infrastructure to develop efficient content-based publish/subscribe systems. The main contributions here are algorithms for the efficient processing of queries (subscriptions) and events (publications). Specifically, we show that our subscription processing algorithms require O(log N) messages for an N-node network, and our event processing algorithms require O(l · log N) messages (with l being the average string length). Third, we develop algorithms for optimizing the processing of multi-dimensional events, involving several string attributes. Further to our analysis, we provide simulation-based experiments showing promising performance results in terms of number of messages, required bandwidth, load balancing, and response times.
Abstract: This special issue of Computer Science Review features seven papers on foundations of adaptive networked societies of tiny artefacts. The introduction describes the motivation for the special issue and briefly presents the featured papers.
Abstract: Information retrieval (IR) in peer-to-peer (P2P) networks, where the corpus is spread across many loosely coupled peers, has recently gained importance. In contrast to IR systems on a centralized server or server farm, P2P IR faces the additional challenge of either being oblivious to global corpus statistics or having to compute the global measures from local statistics at the individual peers in an efficient, distributed manner. One specific measure of interest is the global document frequency for different terms, which would be very beneficial as term-specific weights in the scoring and ranking of merged search results that have been obtained from different peers. This paper presents an efficient solution for the problem of estimating global document frequencies in a large-scale P2P network with very high dynamics where peers can join and leave the network on short notice. In particular, the developed method takes into account the fact that the local document collections of autonomous peers may arbitrarily overlap, so that global counting needs to be duplicate-insensitive. The method is based on hash sketches as a technique for compact data synopses. Experimental studies demonstrate the estimator's accuracy, scalability, and ability to cope with high dynamics. Moreover, the benefit for ranking P2P search results is shown by experiments with real-world Web data and queries.
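Duplicate-insensitive counting with hash sketches can be illustrated with a Flajolet-Martin style bitmap sketch: each document sets one bit per hash function, sketches from different peers merge by bitwise OR (so a document held by several peers is counted once), and the average position of the lowest unset bit yields a cardinality estimate. This is the textbook FM construction under our own parameter choices, not necessarily the exact estimator of the paper.

```python
import hashlib

def _hash(item, seed):
    return int(hashlib.sha1(f"{seed}:{item}".encode()).hexdigest(), 16)

def _rho(x):
    """Position of the least-significant set bit of x."""
    return (x & -x).bit_length() - 1 if x else 64

class FMSketch:
    PHI = 0.77351  # Flajolet-Martin correction constant

    def __init__(self, num_maps=64):
        self.bitmaps = [0] * num_maps

    def add(self, doc_id):
        for s in range(len(self.bitmaps)):
            self.bitmaps[s] |= 1 << _rho(_hash(doc_id, s))

    def union(self, other):
        """Merging is a bitwise OR, so overlapping documents count once."""
        merged = FMSketch(len(self.bitmaps))
        merged.bitmaps = [a | b for a, b in zip(self.bitmaps, other.bitmaps)]
        return merged

    def estimate(self):
        def lowest_zero(b):
            i = 0
            while b & (1 << i):
                i += 1
            return i
        # Average lowest-unset-bit position across the bitmaps.
        r = sum(lowest_zero(b) for b in self.bitmaps) / len(self.bitmaps)
        return (2 ** r) / self.PHI
```

Because the merge is an OR of fixed-size bitmaps, any peer can combine sketches received from others in any order, which suits a highly dynamic P2P setting.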
Abstract: We present the interpolation search B-tree (ISB-tree), a new cache-aware indexing scheme
that supports update operations (insertions and deletions) in O(1) worst-case block
transfers and search operations in O(logB log n) expected block transfers, where B
represents the disk block size and n denotes the number of stored elements. The expected
search bound holds with high probability for a large class of (unknown) input distributions.
The worst-case search bound of our indexing scheme is O(logB n) block transfers. Our
update and expected search bounds constitute a considerable improvement over the
O(logB n) worst-case block transfer bounds for search and update operations achieved
by the B-tree and its numerous variants. This is also verified by an accompanying
experimental study.
Abstract: We present the interpolation search B-tree (ISB-tree), a new cache-aware indexing scheme that supports update operations (insertions and deletions) in O(1) worst-case (w.c.) block transfers and search operations in O(logB log n) expected block transfers, where B represents the disk block size and n denotes the number of stored elements. The expected search bound holds with high probability for a large class of (unknown) input distributions. The w.c. search bound of our indexing scheme is O(logB n) block transfers. Our update and expected search bounds constitute a considerable improvement over the O(logB n) w.c. block transfer bounds for search and update operations achieved by the B-tree and its numerous variants. This is also suggested by a set of preliminary experiments we have carried out. Our indexing scheme is based on an externalization of a main memory data structure based on interpolation search.
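The expected O(logB log n) behavior stems from interpolation search: probing where the key would sit if the data were uniformly distributed shrinks the search range doubly exponentially. An in-memory version on a sorted integer array is shown below; the ISB-tree externalizes this idea to disk blocks, which the sketch ignores entirely.

```python
def interpolation_search(a, key):
    """Search a sorted array by interpolating the key's expected position.
    Expected O(log log n) probes for near-uniform keys; O(n) worst case.
    Returns the key's index, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[hi] == a[lo]:
            pos = lo                      # all remaining keys are equal
        else:
            # Linear interpolation of the key's position in [lo, hi].
            pos = lo + (key - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[pos] == key:
            return pos
        if a[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

Replacing each probe with a block read is the intuition behind the O(logB log n) expected block-transfer bound.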
Abstract: This work provides a jitter analysis of size-based
burst assembly algorithms and also discusses other burst assembly
algorithms that use the packet delay as the assembly threshold
to provide a bound on jitter.
Abstract: This paper presents results from the IST Phosphorus project that studies and implements an optical Grid test-bed. A significant part of this project addresses scheduling and routing algorithms and dimensioning problems of optical grids. Given the high costs involved in setting up actual hardware implementations, simulations are a viable alternative. In this paper we present an initial study which proposes models that reflect real-world grid application traffic characteristics, appropriate for simulation purposes. We detail several such models and the corresponding process to extract the model parameters from real grid log traces, and verify that synthetically generated jobs provide a realistic approximation of the real-world grid job submission process.
Abstract: In this paper we present a multicost algorithm for the joint
time scheduling of the communication and computation
resources that will be used by a task. The proposed
algorithm selects the computation resource to execute the
task, determines the path to route the input data, and finds
the starting times for the data transmission and the task
execution, performing advance reservations. We initially
present an optimal scheme of non-polynomial complexity
and by appropriately pruning the set of candidate paths we
also give a heuristic algorithm of polynomial complexity. We
evaluate the performance of our algorithm and compare it to
that of algorithms that handle only the computation or
communication part of the problem separately. We show that
in a Grid network where the tasks are CPU- and data-intensive,
important performance benefits can be obtained by
jointly optimizing the use of the communication and
computation resources.
Abstract: In this work we study the combination of multicost
routing and adjustable transmission power in wireless
ad hoc networks, so as to obtain dynamic energy- and
interference-efficient routes to optimize network performance.
In multi-cost routing, a vector of cost parameters is
assigned to each network link, from which the cost vectors
of candidate paths are calculated. Only at the end are these parameters combined in various optimization functions,
corresponding to different routing algorithms, for selecting
the optimal path. The multi-cost routing problem is a
generalization of the multi-constrained problem, where no
constraints exist, and is also significantly more powerful
than single-cost routing. Since energy is an important
limitation of wireless communications, the cost parameters
considered are the number of hops, the interference caused,
the residual energy and the transmission power of the
nodes on the path; other parameters could also be included,
as desired. We assume that nodes can use power control to
adjust their transmission power to the desired level. The
experiments conducted show that the combination of multicost
routing and adjustable transmission power can lead to
reduced interference and energy consumption, improving
network performance and lifetime.
Abstract: In translucent (or managed reach) WDM optical
networks, regenerators are employed at specific nodes. Some of
the connections in such networks are routed transparently, while
others have to go through a sequence of 3R regenerators that serve
as “refueling stations” to restore their quality of transmission
(QoT). We extend an online multicost algorithm for transparent
networks presented in our previous study [1], to obtain an IA-RWA
algorithm that works in translucent networks and makes use,
when required, of the regenerators present at certain locations
of the network. To characterize a path, the algorithm uses a
multicost formulation with several cost parameters, including the
set of available wavelengths, the length of the path, the number of
regenerators used, and noise variance parameters that account for
the physical layer impairments. Given a new connection request
and the current utilization state of the network, the algorithm calculates
a set of non-dominated candidate paths, meaning that no path in this set is inferior to any other path with respect to all cost parameters. This set consists of all the cost-effective (in
terms of the domination relation) and feasible (in terms of QoT)
lightpaths for the given source-destination pair, including all the
possible combinations for the utilization of available regenerators
of the network. An optimization function or policy is then applied
to this set in order to select the optimal lightpath. Different optimization
policies correspond to different IA-RWA algorithms.
We propose and evaluate several optimization policies, such as the
most used wavelength, the best quality of transmission, the least
regeneration usage, or a combination of these rules. Our results
indicate that in a translucent network the employed IA-RWA
algorithm has to consider all problem parameters, namely, the
QoT of the lightpaths, the utilization of wavelengths and the
availability of regenerators, to efficiently serve the online traffic.
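The non-dominated set described above is the Pareto-optimal subset of the candidate lightpaths. A minimal illustrative sketch (not the paper's multicost algorithm itself) of the domination filter, using hypothetical cost vectors where every component is to be minimized:

```python
# Illustrative sketch: filter candidate paths down to the non-dominated
# (Pareto-optimal) ones. Each path is a cost vector in which smaller is
# better for every component. The example vectors are hypothetical:
# (path length in km, regenerators used, noise variance).

def dominates(a, b):
    """True if vector a is at least as good as b in every component
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(paths):
    """Keep only paths that no other path dominates."""
    return [p for p in paths
            if not any(dominates(q, p) for q in paths if q is not p)]

candidates = [(100, 0, 0.30), (120, 1, 0.10), (150, 1, 0.40), (100, 0, 0.25)]
print(non_dominated(candidates))  # -> [(120, 1, 0.1), (100, 0, 0.25)]
```

An optimization policy (most-used wavelength, best QoT, least regeneration, or a combination) would then pick a single lightpath out of the surviving set.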
Abstract: Wireless sensor networks can be very useful in applications that require the detection of crucial events, in physical environments subjected to critical conditions, and the propagation of data reporting their realization to a control center. In this paper we propose jWebDust, a generic and modular application environment for developing and managing applications that are based on wireless sensor networks. Our software architecture provides a range of services that allow the creation of customized applications with minimum implementation effort that are easy to administer. We move beyond the "networking-centric" view of sensor network research and focus on how the end user (administrator, control center supervisor, etc.) will visualize and interact with the system.
We here present its open architecture, the most important design decisions, and discuss its distinct features and functionalities. jWebDust allows heterogeneous components to interoperate (real world sensor networks will rarely be homogeneous) and allows the integrated management and control of multiple such networks by also defining web-based mechanisms to visualize the network state, the results of queries, and a means to inject queries in the network. The architecture also illustrates how existing protocols for various services can interoperate in a bigger framework - such as the tree construction, query routing, etc.
Abstract: In this work we discuss Kafebook, a system combining popular social networking platforms
with activity input from the physical domain inside public spaces. The system is envisioned as
a means for users to augment and communicate their activities to other people in their physical
proximity through a public display, while catering for anonymity issues. We developed a
client for Android smartphones that is used as the user interface and the enabling platform,
with which users connect to the system infrastructure and interact with it. Apart from
providing access to input from social networking sites, the Android client allows for chat
between the users and music selection polls. Wireless networking is based on Bluetooth, a
widespread technology in smartphones, which enables a more pervasive mode of operation,
while utilizing it also as a proximity sensor. We deployed our system in a two-day public
event as part of an undergraduate theses showcase, receiving positive feedback from visitors.
Abstract: In this book chapter we will consider key establishment protocols for wireless sensor networks.
Several protocols have been proposed in the literature for the establishment of a shared group key for wired networks.
The choice of a protocol depends on whether the key is established by one of the participants (and then transported to the other(s)) or agreed among the participants, and on the underlying cryptographic mechanisms (symmetric or asymmetric). Clearly, the design of key establishment protocols for sensor networks must deal with different problems and challenges that do not exist in wired networks. To name a few: wireless links are particularly vulnerable to eavesdropping; sensor devices can be captured (and the secrets they contain compromised); and in many upcoming wireless sensor networks, nodes cannot rely on the presence of an online trusted server (whereas most standardized authentication and key establishment protocols do rely on such a server).
In particular, we will consider five distributed group key establishment protocols. Each of these protocols applies a different algorithmic technique that makes it more suitable for (i) static sensor networks, (ii) sensor networks where nodes enter sleep mode (i.e. dynamic, with low rate of updates on the connectivity graph) and (iii) fully dynamic networks where nodes may even be mobile. On the other hand, the common factor for all five protocols is that they can be applied in dynamic groups (where members can be excluded or added) and provide forward and backward secrecy. All these protocols are based on the Diffie-Hellman key exchange algorithm and constitute natural extensions of it in the multiparty case.
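The Diffie-Hellman exchange underlying all five protocols can be chained naively to more than two parties, as in the following toy sketch (illustrative only; the tiny modulus and generator are nowhere near cryptographically safe, and real group-key protocols add authentication and structured message flows):

```python
import random

# Toy Diffie-Hellman chained to three parties: each party exponentiates
# what it receives with its own secret, so all of them can derive the
# shared value g^(abc) mod p. Parameters are illustrative, not secure.
p, g = 23, 5                      # toy public prime modulus and generator

a, b, c = (random.randrange(2, p - 1) for _ in range(3))  # secret exponents

# "Up-flow": pass g^a, then g^(ab), then compute the group key g^(abc).
g_a = pow(g, a, p)
g_ab = pow(g_a, b, p)
shared = pow(g_ab, c, p)

# The other parties reach the same key from different intermediates:
assert shared == pow(pow(pow(g, b, p), c, p), a, p)   # party holding a
assert shared == pow(pow(pow(g, a, p), c, p), b, p)   # party holding b
print("shared group key:", shared)
```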
Abstract: This paper addresses the efficient processing of
top-k queries in wide-area distributed data
repositories where the index lists for the attribute
values (or text terms) of a query are distributed
across a number of data peers and the
computational costs include network latency,
bandwidth consumption, and local peer work.
We present KLEE, a novel algorithmic
framework for distributed top-k queries,
designed for high performance and flexibility.
KLEE makes a strong case for approximate top-k
algorithms over widely distributed data sources.
It shows how great gains in efficiency can be
enjoyed at low result-quality penalties. Further,
KLEE affords the query-initiating peer the
flexibility to trade-off result quality and expected
performance and to trade-off the number of
communication phases engaged during query
execution versus network bandwidth
performance. We have implemented KLEE and
related algorithms and conducted a
comprehensive performance evaluation. Our
evaluation employed real-world and synthetic
large, web-data collections, and query
benchmarks. Our experimental results show that
KLEE can achieve major performance gains in
terms of network bandwidth, query response
times, and much lighter peer loads, all with small
errors in result precision and other result-quality
measures.
Abstract: Numerous smart city testbeds and system deployments have surfaced around the world, aiming to provide services over unified large heterogeneous IoT infrastructures. Although we have achieved new scales in smart city installations and systems, so far the focus has been to provide diverse sources of data to smart city services consumers, while neglecting to provide ways to simplify making good use of them. We believe that knowledge creation in smart cities through data annotation, supported in both an automated and a crowdsourced manner, is an aspect that will bring additional value to smart cities. We present here our approach, aiming to utilize an existing smart city deployment and the OrganiCity software ecosystem. We discuss key challenges along with characteristic use cases, and report on our design and implementation, along with preliminary results.
Abstract: We consider Boolean formulas with three literals per clause,
or 3-SAT formulas. For random formulas with m clauses over n variables,
such that r = m/n is a constant, it has been experimentally observed
that, asymptotically, as r crosses the threshold value 4.2 (approximately),
the probability that the formula is satisfiable falls abruptly from nearly 1 to 0.
Moreover, as n increases towards larger and larger values, the transition
of the probability becomes sharper. The purpose of this paper is to simply
outline a connection between the problem of determining bounds to the
threshold value and the concept of Kolmogorov complexity.
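The experimentally observed transition can be reproduced in miniature with a hedged brute-force experiment (illustrative only, not taken from the paper; brute force restricts us to very small n, so the transition is far less sharp than asymptotically):

```python
import itertools
import random

# Estimate the probability that a random 3-SAT formula is satisfiable
# as the clause-to-variable ratio r = m/n crosses the threshold ~4.2.

def random_3sat(n, m):
    """m clauses, each over 3 distinct variables with random signs;
    a positive literal v means x_v, a negative one means NOT x_v."""
    return [tuple(v * random.choice((1, -1))
                  for v in random.sample(range(1, n + 1), 3))
            for _ in range(m)]

def satisfiable(clauses, n):
    """Brute-force satisfiability check; feasible only for small n."""
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses)
               for bits in itertools.product((False, True), repeat=n))

n, trials = 10, 30
for r in (3.0, 4.2, 5.5):
    sat = sum(satisfiable(random_3sat(n, int(r * n)), n)
              for _ in range(trials))
    print(f"r = {r}: ~{sat / trials:.2f} of formulas satisfiable")
```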
Abstract: We investigate the existence and efficient algorithmic
construction of close to optimal independent sets in random models
of intersection graphs. In particular, (a) we propose \emph{a new model} for random intersection graphs
($G_{n, m, \vec{p}}$) which includes the model of
\cite{RIG} (the ``uniform" random intersection graphs model) as an
important special case. We also define an interesting variation of
the model of random intersection graphs, similar in spirit to
random regular graphs. (b) For this model we derive \emph{exact formulae} for the mean
and variance of the number of independent sets of size $k$ (for
any $k$) in the graph. (c) We then propose and analyse \emph{three algorithms} for the
efficient construction of large independent sets in this model.
The first two are variations of the greedy technique while the
third is a totally new algorithm. Our algorithms are analysed for
the special case of uniform random intersection graphs.
Our analyses show that these algorithms succeed in finding
\emph{close to optimal} independent sets for an interesting range
of graph parameters.
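For concreteness, the uniform special case of the model and a greedy construction can be sketched as follows (a plain minimum-degree greedy heuristic, offered as an illustration of the setting rather than a reproduction of the paper's three analysed algorithms):

```python
import random

def random_intersection_graph(n, m, p):
    """Uniform random intersection graph: each of n vertices picks each
    of m labels independently with probability p; two vertices are
    adjacent iff their label sets intersect."""
    labels = [{j for j in range(m) if random.random() < p}
              for _ in range(n)]
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if labels[u] & labels[v]:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def greedy_independent_set(adj):
    """Repeatedly take a minimum-degree vertex, then discard it and
    all its neighbours from the graph."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    chosen = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        chosen.add(v)
        doomed = adj[v] | {v}
        for w in doomed:
            adj.pop(w, None)
        for ns in adj.values():
            ns -= doomed
    return chosen

random.seed(0)
g = random_intersection_graph(40, 20, 0.08)
indep = greedy_independent_set(g)
print("independent set of size", len(indep))
```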
Abstract: A new social form of play taking place in public spaces, such as city parks, squares and streets, that can scale up to involve a large number of participants and engage players located simultaneously in dispersed areas is discussed here. This paper presents a platform that allows for the creation of interactive installations in public spaces. The FinN platform provides functionality for detecting and localizing people, by requiring players to carry identifying devices that offer many possibilities for gesture interaction (i.e., gestures such as shaking, facilitated by the sensor-bouquet of the devices). One of the deployed example installations, where visitors can interact with word formations, is presented here as a case study.
Abstract: The possibilities offered by utilizing sensors and pervasive computing technologies for creating large-scale multiplayer games are discussed in this chapter. Such game installations constitute a new social form of play taking place in public spaces. A main characteristic is the need to scale to a large number of users and engage players located simultaneously in dispersed areas, thus connected both on a local and Internet level. Fun in Numbers is a platform for developing and playing mobile, locative and collaborative distributed games and interactive installations, based on the participation of large numbers of people and their movement in the physical space. Players interact with each other using a wide range of hardware devices that are either generic (smartphones) or specific (sensor devices). A set of related fundamental issues, drawn upon the experience from several public events where the FinN platform supported as many as 50 local users at the same time, is hereby presented.
Abstract: A number of Future Internet testbeds are being deployed around the world for research experimentation and development. SmartSantander
is an infrastructure of massive scale deployed inside a city centre. We argue that utilising the concept of participatory sensing can augment the functionality and potential use-cases of such a system and be beneficial in a number of scenarios. We discuss
the concept of extending SmartSantander with participatory sensing through the use of volunteers' smartphones. We report on our design and implementation, which allows developers to write
their code for Android devices and then deploy and execute it on the devices automatically through our system. We have tested our implementation in a number of scenarios in two cities with the help
of volunteers, with promising results; the data collected enhance those gathered by the fixed infrastructure both quantitatively and qualitatively across the cities, while also engaging citizens more directly.
Abstract: Although we have reached new levels in smart city installations and systems, efforts so far have focused on providing diverse sources of data to smart city services consumers while neglecting to provide ways to simplify making good use of them. In this context, one first step that will bring added value to smart cities is knowledge creation in smart cities through anomaly detection and data annotation, supported in both an automated and a crowdsourced manner. We present here LearningCity, our solution that has been validated over an existing smart city deployment in Santander, and the OrganiCity experimentation-as-a-service ecosystem. We discuss key challenges along with characteristic use cases, and report on our design and implementation, together with some preliminary results derived from combining large smart city datasets with machine learning.
Abstract: In this paper we study deliberate attacks on the infrastructure of large scale-free networks. To be successful, these attacks rely on the importance of individual vertices in the network, and the concept of centrality (originating from social science) has already been utilized in their study with success. Some measures of centrality, however, such as betweenness, have disadvantages that do not facilitate the research in this area. We show that with the aid of scale-free network characteristics, such as the clustering coefficient, we can get results that balance the current centrality measures, but also gain insight into the workings of these networks.
Abstract: Typical sensor nodes are resource-constrained devices containing user-level applications, operating system components, and device drivers in a single address space, with no form of memory protection. A malicious user could easily capture a node and tamper with the running applications in order to perform different types of attacks. In this paper, we propose a remote live forensics protection architecture that prevents the execution of tampered software while alerting the owners of the sensor network. Using sandboxing to restrict application memory accesses within the address space, and forensic techniques to validate the authenticity of the running applications, we prevent malicious code from being executed while identifying the intrusion.
Abstract: We address the issue of measuring storage, or query load distribution fairness in peer-to-peer data management systems. Existing metrics may look promising from the point of view of specific peers, while in reality being far from optimal from a global perspective. Thus, first we define the requirements and study the appropriateness of various statistical metrics for measuring load distribution fairness towards these requirements. The metric proposed as most appropriate is the Gini coefficient (G). Second, we develop novel distributed sampling algorithms to compute G on-line, with high precision, efficiently, and scalably. Third, we show how G can readily be utilized on-line by higher-level algorithms which can now know when to best intervene to correct load imbalances. Our analysis and experiments testify for the efficiency and accuracy of these algorithms, permitting the online use of a rich and reliable metric, conveying a global perspective of the distribution.
Abstract: We consider the problem of searching for a piece of
information in a fully interconnected computer network
(also called a complete network or clique) by exploiting
advice about its location from the network nodes. Each
node contains a database that "knows" what kind of
documents or information are stored in other nodes
(e.g., a node could be a Web server that answers queries
about documents stored on the Web). The databases in
each node, when queried, provide a pointer that leads to
the node that contains the information. However, this
information is up-to-date (or correct) with some
bounded probability. While, in principle, one may always
locate the information by simply visiting the network
nodes in some prescribed ordering, this requires a time
complexity in the order of the number of nodes of the
network. In this paper, we provide algorithms for locating
an information node in the complete communication
network, which take advantage of advice given from
network nodes. The nodes may either give correct advice,
by pointing directly to the information node, or give
wrong advice, by pointing elsewhere. On the lower-bounds
side, we show that no fixed-memory (i.e., with
memory independent of the network size) deterministic
algorithm may locate the information node in a constant
(independent of the network size) expected number of
steps. Moreover, if p (≥ 1/n) is the probability that a
node of an n-node clique gives correct advice, we show
that no algorithm may locate the information node in an
expected number of steps less than 1/p − o(1). To study
how the expected number of steps is affected by the
amount of memory allowed to the algorithms, we give a
memoryless randomized algorithm with expected number
of steps 4/p + o(1/p) + o(1) and a 1-bit randomized
algorithm requiring on the average at most 2/p + o(1)
steps. In addition, in the memoryless case, we also
prove a 4/p lower bound for the expected number of
steps in the case where the nodes giving faulty advice
may decide on the content of this advice in any possible
way and not merely at random (adversarial fault model).
Finally, for the case where faulty nodes behave randomly,
we give an optimal, unlimited memory deterministic
algorithm with expected number of steps bounded
from above by 1/p + o(1/p) + 1.
Abstract: We consider the problem of searching for a piece of information in a fully interconnected computer network, or clique, by exploiting advice about its location from the network nodes. Each node contains a database that knows what kind of documents or information are stored in other nodes (e.g., a node could be a Web server that answers queries about documents stored on the Web). The databases in each node, when queried, provide a pointer that leads to the node that contains the information. However, this information is up to date (or correct) with some bounded probability. While in principle one may always locate the information by simply visiting the network nodes in some prescribed ordering, this requires a time complexity in the order of the number of nodes of the network. In this paper we provide algorithms for locating an information node in the complete communication network that take advantage of advice given from network nodes. The nodes may either give correct advice, by pointing directly to the information node, or give wrong advice, by pointing elsewhere. We show that, on the average, if the probability p that a node provides correct advice is asymptotically larger than 1/n, where n is the number of the computer nodes, then the average time complexity for locating the information node is asymptotically of the order of 1/p, depending on the available memory. The probability p may in general be a function of the number of network nodes n. On the lower-bounds side, we prove that no fixed-memory deterministic algorithm can locate the information node in a finite expected number of steps. We also prove a lower bound of 1/p − o(1) for the expected number of steps of any algorithm that locates the information node in the complete network.
Abstract: In this paper we examine the problem of searching for some information item in the nodes of a fully
interconnected computer network, where each node contains information relevant to some topic
as well as links to other network nodes that also contain information, not necessarily related to
locally kept information. These links are used to facilitate the Internet users and mobile software
agents that try to locate specific pieces of information. However, the links do not necessarily point
to nodes containing information of interest to the user or relevant to the aims of the mobile agent.
Thus an element of uncertainty is introduced. For example, when an Internet user or some search
agent lands on a particular network node, they see a set of links that point to information that is,
supposedly, relevant to the current search. Therefore, we can assume that a link points to relevant
information with some unknown probability p that, in general, is related to the number of nodes
in the network (intuitively, as the network grows, this probability tends to zero since adding more
nodes to the network renders some extant links less accurate or obsolete). Consequently, since there
is uncertainty as to whether the links contained in a node's Web page are correct or not, a search
algorithm cannot rely on following the links systematically since it may end up spending too much
time visiting nodes that contain irrelevant information. In this work, we will describe and analyze
a search algorithm that is only allowed to transfer a fixed amount of memory along communication
links as it visits the network nodes. The algorithm is, however, allowed to use one bit of memory at
each node as an "already visited" flag. In this way the algorithm has its memory distributed to the
network nodes, avoiding overloading the network links as it moves from node to node searching for
the information. We work on fully interconnected networks for simplicity reasons and, moreover,
because according to some recent experimental evidence, such networks can be considered to be a
good approximation of the current structure of the World Wide Web.
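A naive, hedged sketch of the kind of search these abstracts analyse (not one of the papers' actual algorithms): probe a uniformly random node, and if it is not the information node, follow its advice pointer, which is correct with probability p and points to a random node otherwise. The expected number of steps comes out at roughly 2/p, illustrating why advice of correctness probability p yields bounds of order 1/p.

```python
import random

def locate(n, p, target=0, rng=random):
    """Memoryless random search on an n-node clique with faulty advice.
    Each node visit and each pointer-following counts as one step."""
    steps = 0
    while True:
        steps += 1                          # visit a random node
        if rng.randrange(n) == target:
            return steps
        steps += 1                          # follow its advice pointer
        if rng.random() < p or rng.randrange(n) == target:
            return steps                    # advice correct, or lucky

trials = 1000
avg = sum(locate(1000, 0.1) for _ in range(trials)) / trials
print(f"average steps with p = 0.1: {avg:.1f} (roughly 2/p = 20)")
```

A 1-bit "already visited" flag per node, as in the abstract above, would let the search skip nodes it has already probed, which matters once the number of probes approaches n.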
Abstract: In this paper we derive lower bounds for the degree of polynomials that approximate the square root of the discrete logarithm for Elliptic Curves with orders of various specific types. These bounds can serve as evidence for the difficulty of computing the square root of discrete logarithms for such elliptic curves, with properly chosen parameters that result in the curve having order of any of the types studied in this paper. The techniques are potentially applicable to elliptic curves of any specific order type, allowable by Hasse's bounds, that is of interest for the application at hand.
Abstract: Internet of Things technologies are considered the next big
step in Smart Building installations. Although such technologies have
been widely studied in simulation and experimental scenarios it is not so
obvious how problems of real world installations should be dealt with. In
this work we deploy IoT devices for sensing and control in a multi-office
space and employ technologies such as CoAP, RESTful interfaces and
Semantic Descriptions to integrate them with the Web. We report our
research goals, the challenges we faced, the decisions we made and the
experience gained from the design, deployment and operation of all the
hardware and software components that compose our system.
Abstract: The adoption of technologies like the IoT in urban environments, together with the intensive use of smartphones, is driving transformation towards smart cities. Under this perspective, Experimentation-as-a-Service within OrganiCity aims to create an experimental facility with technologies, services, and applications that simplify innovation within urban ecosystems. We discuss here tools that facilitate experimentation, implementing ways to organize, execute, and administer experimentation campaigns in a smart city context. We discuss the benefits of our framework, presenting some preliminary results. This is the first time such tools are paired with large-scale smart city infrastructures, enabling both city-scale experimentation and cross-site experimentation.
Abstract: We briefly present the design and architecture of a system that aims to simplify the process of organizing, executing and administering crowdsensing campaigns in a smart city context over smartphones volunteered by citizens. We built our system on top of an Android app substrate on the end-user level, which enables us to utilize smartphone resources. Our system allows researchers and other developers to manage and distribute their “mini” smart city applications, gather data and publish their results through the Organicity smart city platform. We believe this is the first time such a tool is paired with a large scale IoT
infrastructure, to enable truly city-scale IoT and smart city experimentation.
Abstract: Flow control is the main technique currently used to prevent some of the offered traffic from entering a communication network, and to avoid congestion. A challenging aspect of flow control is how to treat all sessions "fairly" when it is necessary to turn traffic away from the network. In this work, we show how to extend the theory of max-min fair flow control to the case where priorities are assigned to different varieties of traffic, which are sensitive to traffic levels. We examine priorities expressible in the general form of increasing functions of rates, also considering, in combination, the more elaborate case of inescapable upper and lower bounds on the rates of traffic sessions. We offer optimal, priority bottleneck algorithms, which iteratively adjust the session rates in order to meet a new condition of max-min fairness under priorities and rate bounds. In our setting, which is realistic for today's technology of guaranteed quality of service, traffic may be turned away not only to avoid congestion, but also to respect particular minimum requirements on bandwidth. Moreover, we establish lower bounds on the competitiveness of network-oblivious schemes compared to optimal schemes with complete knowledge of network structure. Our theory extends significantly the classical theory of max-min fair flow control [2]. Moreover, our results on rejected traffic are fundamentally different from those related to call control and bandwidth allocation, since not only do we wish to optimize the number and rates of accepted sessions, but we also require priority fairness.
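The classical (unweighted) max-min fairness that this work generalises can be computed by progressive filling, sketched below under hypothetical link capacities and session routes; the paper's priority bottleneck algorithms extend this baseline with priority functions and per-session rate bounds.

```python
# Progressive filling for classical max-min fair rate allocation:
# raise all active session rates equally until some link saturates;
# sessions crossing a saturated (bottleneck) link are then frozen.

def max_min_fair(capacity, sessions):
    """capacity: {link: cap}; sessions: {sid: [links crossed]}."""
    rate = {s: 0.0 for s in sessions}
    residual = dict(capacity)
    active = set(sessions)
    while active:
        counts = {l: sum(1 for s in active if l in sessions[s])
                  for l in residual}
        shares = {l: residual[l] / c for l, c in counts.items() if c}
        if not shares:
            break                              # no active session uses a link
        inc = min(shares.values())             # smallest link fair share
        saturated = {l for l, v in shares.items() if v == inc}
        for s in active:
            rate[s] += inc
        for l, c in counts.items():
            residual[l] -= inc * c
        active = {s for s in active
                  if not any(l in saturated for l in sessions[s])}
    return rate

cap = {'L1': 1.0, 'L2': 2.0}
sess = {'s1': ['L1'], 's2': ['L1', 'L2'], 's3': ['L2']}
print(max_min_fair(cap, sess))   # -> {'s1': 0.5, 's2': 0.5, 's3': 1.5}
```

In the example, link L1 is the bottleneck shared by s1 and s2 (0.5 each), after which s3 absorbs the remaining capacity of L2.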
Abstract: We extend here the Population Protocol model of Angluin et al. [2004,2006] in order to model more powerful networks of resource-limited agents that are possibly mobile. The main feature of our extended model, called the Mediated Population Protocol (MPP) model, is to allow edges of the communication graph to have states that belong to a constant size set. We then allow the protocol rules for pairwise interactions to modify the corresponding edge state. Protocol specifications preserve both uniformity and anonymity. We first focus on the computational power of the MPP model on complete communication graphs and initially identical edges. We provide the following exact characterization for the class MPS of stably computable predicates: a predicate is in MPS iff it is symmetric and is in NSPACE(n^2). We finally ignore the input to the agents and study MPP's ability to compute graph properties.
Abstract: We extend here the Population Protocol model of Angluin et al. [2004] in order to model more powerful networks of very small resource-limited artefacts (agents) that are possibly mobile. Communication can happen only between pairs of artefacts. A communication graph (or digraph) denotes the permissible pairwise interactions. The main feature of our extended model is to allow edges of the communication graph, G, to have states that belong to a constant size set. We also allow edges to have read-only costs, whose values also belong to a constant size set. We then allow the protocol rules for pairwise interactions to modify the corresponding edge state. Thus, our protocol specifications are still independent of the population size and do not use agent ids, i.e. they preserve scalability, uniformity and anonymity. Our Mediated Population Protocols (MPP) can stably compute graph properties of the communication graph. We show this for the properties of maximal matchings (in undirected communication graphs), also for finding the transitive closure of directed graphs and for finding all edges of small cost. We demonstrate that our mediated protocols are stronger than the classical population protocols, by presenting a mediated protocol that stably computes the product of two positive integers, when G is the complete graph. This is not a semilinear predicate. To show this fact, we state and prove a general Theorem about the Composition of two stably computing mediated population protocols. We also show that all predicates stably computable in our model are (non-uniformly) in the class NSPACE(m), where m is the number of edges of the communication graph. We also define Randomized MPP and show that any Peano predicate accepted by an MPP can be verified in deterministic Polynomial Time.
Abstract: The promises inherent in users coming together to form data
sharing network communities, bring to the foreground new problems formulated
over such dynamic, ever growing, computing, storage, and networking
infrastructures. A key open challenge is to harness these highly
distributed resources toward the development of an ultra scalable, efficient
search engine. From a technical viewpoint, any acceptable solution
must fully exploit all available resources dictating the removal of any
centralized points of control, which can also readily lead to performance
bottlenecks and reliability/availability problems. Equally importantly,
however, a highly distributed solution can also facilitate pluralism in informing
users about internet content, which is crucial in order to preclude
the formation of information-resource monopolies and the biased visibility
of content from economically-powerful sources. To meet these challenges,
the work described here puts forward MINERVA∞, a novel search
engine architecture, designed for scalability and efficiency. MINERVA∞
encompasses a suite of novel algorithms, including algorithms for creating
data networks of interest, placing data on network nodes, load balancing,
top-k algorithms for retrieving data at query time, and replication algorithms
for expediting top-k query processing. We have implemented the
proposed architecture and we report on our extensive experiments with
real-world, web-crawled, and synthetic data and queries, showcasing the
scalability and efficiency traits of MINERVA∞.
Abstract: We consider the problem of computing minimum congestion, fault-tolerant, redundant assignments of messages to faulty, parallel delivery channels. In particular, we are given a set K of faulty channels, each having an integer capacity ci and failing independently with probability fi. We are also given a set M of messages to be delivered over K, and a fault-tolerance constraint (1 − ε), and we seek a redundant assignment φ that minimizes congestion Cong(φ), i.e. the maximum channel load, subject to the constraint that, with probability no less than (1 − ε), all the messages have a copy on at least one active channel. We present a polynomial-time 4-approximation algorithm for identical capacity channels and arbitrary message sizes, and a 2⌈ln(|K|/ε)/ln(1/fmax)⌉-approximation algorithm for related capacity channels and unit size messages. Both algorithms are based on computing a collection {K1, …, Kν} of disjoint channel subsets such that, with probability no less than (1 − ε), at least one channel is active in each subset. The objective is to maximize the sum of the minimum subset capacities. Since the exact version of this problem is NP-complete, we provide a 2-approximation algorithm for identical capacities, and a polynomial-time (8+o(1))-approximation algorithm for arbitrary capacities.
Abstract: Recommender Systems (RSs) have been extensively utilized as a means of reducing the information overload and offering travel recommendations to tourists. The emerging mobile RSs are tailored to mobile device users and promise to substantially enrich tourist experiences, recommending rich multimedia content, context-aware services, views/ratings of peer users, etc. New developments in mobile computing, wireless networking, web technologies and social networking leverage massive opportunities to provide highly accurate and effective tourist recommendations that respect personal preferences and capture usage, personal, social and environmental contextual parameters. This article follows a systematic approach in reviewing the state-of-the-art in the field, proposing a classification of mobile tourism RSs and providing insights on their offered services. It also highlights challenges and promising research directions with respect to mobile RSs employed in tourism.
Abstract: We present Fun in Numbers (FinN), a framework for developing multiplayer pervasive games that rely on the use of ad hoc mobile sensor networks. The unique feature of such games is that players interact with each other and their surrounding environment by using movement and presence as a means of performing game-related actions, utilizing sensor devices. We present the fundamental issues and challenges related to this type of game, together with the scenarios associated with them. Our framework is developed in Java and is based on a multilayer architecture, which provides developers with a set of templates and services for building and operating new games. It handles a number of challenging fundamental and practical issues, such as synchronization, network congestion, delay-tolerant communication and neighbor discovery. We also present our platform and identify issues that arise in pervasive games which utilize sensor network nodes. The implemented games show how to use non-conventional user interface methods to breathe new life into familiar concepts, like multiplayer games played out in open space.
Abstract: In this work, we propose an obstacle model to be used while simulating wireless sensor networks. To the best of our knowledge, this is the first time such an integrated and systematic obstacle model appears. We define several types of obstacles that can be found inside the deployment area of a wireless sensor network and provide a categorization of these obstacles, based on their nature (physical and communication obstacles), their shape, as well as their ability to change over time. In light of this obstacle model we conduct extensive simulations in order to study the effects of obstacles on the performance of representative data propagation protocols for wireless sensor networks. Our findings show that obstacle presence has a significant impact on protocol performance. Also, we demonstrate the effect of each obstacle type on different protocols, thus providing the network designer with advice on which protocol is best to use.
Abstract: Recently, interest has arisen in network worms that propagate using Domain Name Servers (DNS) in order to discover victim hosts. These worms generate random strings as possible network domain names, and then query Domain Name Servers in order to discover the corresponding IP addresses. In this paper we present models for the dynamics of the co-evolution of worm agents in the presence of anti-worm agents that move in the network in order to stop worm propagation. The proposed models consider anti-worm agents who know the network and anti-worm agents that do not know it and need to issue queries in order to discover valid IP addresses. We further introduce "honeypot" domain name servers that attempt to lure worms, introducing only a delay and providing no answer. We show that simply delaying the responses to DNS queries issued by the worm has little positive effect in slowing the worm's propagation.
Abstract: Recent rapid developments in micro-electro-mechanical systems (MEMS), wireless communications and digital electronics have already led to the development of tiny, low-power, low-cost sensor devices. Such devices integrate sensing, limited data processing and restricted communication capabilities. Each sensor device individually might have small utility; however, the effective distributed co-ordination of large numbers of such devices can lead to the efficient accomplishment of large sensing tasks. Large numbers of sensors can be deployed in areas of interest (such as inaccessible terrains or disaster places) and use self-organization and collaborative methods to form an ad-hoc network. We note however that the efficient and robust realization of such large, highly-dynamic, complex, non-conventional networking environments is a challenging technological and algorithmic task, because of the unique characteristics and severe limitations of these devices. This talk will present and discuss several important aspects of the design, deployment and operation of sensor networks. In particular, we provide a brief description of the technical specifications of state-of-the-art sensors, a discussion of possible models used to abstract such networks, a discussion of some key algorithmic design techniques (like randomization, adaptation and hybrid schemes), a presentation of representative protocols for sensor networks for important problems including data propagation, collision avoidance and energy balance, and an evaluation of crucial performance properties (correctness, efficiency, fault-tolerance) of these protocols, by both analytic and simulation means.
Abstract: In this work, we explore context-aware application scenarios that become possible utilizing semantically-rich information derived from real-world mobility and presence traces. Traces produced by people carrying personal mobile devices, capturing social and contextual interactions, serve as enablers for Future Internet applications. We discuss the fundamental concepts, technical issues and related research challenges. We propose a reference architecture for setting up a system that collects such traces in a Smart City environment. We present the algorithms used to process the traces and infer interactions and interests for the observed populations. We conduct two 3-day trial deployments: one in an office environment and the other in the context of a Smart Conference application. We discuss our findings regarding the system's capability to track interactions and the overall efficacy of the application.
Abstract: In this work we consider temporal graphs, i.e. graphs, each edge of which is assigned a set of discrete time-labels drawn from a set of integers. The labels of an edge indicate the discrete moments in time at which the edge is available. We also consider temporal paths in a temporal graph, i.e. paths whose edges are assigned a strictly increasing sequence of labels. Furthermore, we assume the uniform case (UNI-CASE), in which every edge of a graph is assigned exactly one time label from a set of integers and the time labels assigned to the edges of the graph are chosen randomly and independently, with the selection following the uniform distribution. We call uniform random temporal graphs the graphs that satisfy the UNI-CASE. We begin by deriving the expected number of temporal paths of a given length in the uniform random temporal clique. We define the term temporal distance of two vertices, which is the arrival time, i.e. the time-label of the last edge, of the temporal path that connects those vertices, which has the smallest arrival time amongst all temporal paths that connect those vertices. We then propose and study two statistical properties of temporal graphs. One is the maximum expected temporal distance which is, as the term indicates, the maximum of all expected temporal distances in the graph. The other one is the temporal diameter which, loosely speaking, is the expectation of the maximum temporal distance in the graph. We derive the maximum expected temporal distance of a uniform random temporal star graph as well as an upper bound on both the maximum expected temporal distance and the temporal diameter of the normalized version of the uniform random temporal clique, in which the largest time-label available equals the number of vertices. Finally, we provide an algorithm that solves an optimization problem on a specific type of temporal (multi)graphs of two vertices.
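In the UNI-CASE, where every edge carries exactly one time label, the temporal distance from a source can be computed with a single sweep over the edges in label order. A minimal sketch (the representation and function name are ours, not the paper's):

```python
def temporal_distance(n, edges, src):
    """Earliest-arrival (temporal distance) from src to every vertex of a
    temporal graph in which each edge has exactly one time label.
    `edges` is a list of undirected (u, v, t) triples; a temporal path
    must use strictly increasing labels. One pass over the edges sorted
    by label suffices, since an edge is available at a single moment."""
    INF = float("inf")
    arrival = [INF] * n
    arrival[src] = 0          # the source is reached before any label
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        # The edge extends a temporal path from u only if u was reached
        # strictly before time t (and symmetrically for v).
        if arrival[u] < t and t < arrival[v]:
            arrival[v] = t
        if arrival[v] < t and t < arrival[u]:
            arrival[u] = t
    return arrival

print(temporal_distance(4, [(0, 1, 1), (2, 3, 1), (1, 2, 2)], 0))  # [0, 1, 2, inf]
```

Vertex 3 stays unreachable: its only edge has label 1, but vertex 2 is not reached before time 2, so no strictly increasing label sequence ends there.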
Abstract: In this paper we propose an energy-aware broadcast algorithm for wireless networks. Our algorithm is based on the multicost approach and selects the set of nodes that, by transmitting, implement broadcasting in an optimally energy-efficient way. The energy-related parameters taken into account are the node transmission power and the node residual energy. The algorithm's complexity, however, is non-polynomial, and therefore we propose a relaxation producing a near-optimal solution in polynomial time. We also consider a distributed information exchange scheme that can be coupled with the proposed algorithms and examine the overhead introduced by this integration. Using simulations we show that the proposed algorithms outperform other solutions in the literature in terms of energy efficiency. Moreover, it is shown that the near-optimal algorithm obtains most of the performance benefits of the optimal algorithm at a smaller computational overhead.
Abstract: A key problem in Grid networks is how to efficiently manage the available infrastructure, in order to satisfy user requirements and maximize resource utilization. This is in large part influenced by the algorithms responsible for the routing of data and the scheduling of tasks. In this paper, we present several multi-cost algorithms for the joint scheduling of the communication and computation resources that will be used by a Grid task. We propose a multi-cost scheme of polynomial complexity that performs immediate reservations and selects the computation resource to execute the task and determines the path to route the input data. Furthermore, we introduce multi-cost algorithms that perform advance reservations and thus also find the starting times for the data transmission and the task execution. We initially present an optimal scheme of non-polynomial complexity and, by appropriately pruning the set of candidate paths, we also give a heuristic algorithm of polynomial complexity. Our performance results indicate that in a Grid network in which tasks are either CPU- or data-intensive (or both), it is beneficial for the scheduling algorithm to jointly consider the computational and communication problems. A comparison between immediate and advance reservation schemes shows the trade-offs with respect to task blocking probability, end-to-end delay and the complexity of the algorithms.
Abstract: We propose a class of novel energy-efficient multi-cost routing algorithms for wireless mesh networks, and evaluate their performance. In multi-cost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated using appropriate operators. In the end these parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. We evaluate the performance of the proposed energy-aware multi-cost routing algorithms under two models. In the network evacuation model, the network starts with a number of packets that have to be transmitted and an amount of energy per node, and the objective is to serve the packets in the smallest number of steps, or serve as many packets as possible before the energy is depleted. In the dynamic one-to-one communication model, new data packets are generated continuously and nodes are capable of recharging their energy periodically, over an infinite time horizon, and we are interested in the maximum achievable steady-state throughput, the packet delay, and the energy consumption. Our results show that energy-aware multi-cost routing increases the lifetime of the network and achieves better overall network performance than other approaches.
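The multi-cost idea recurring in these abstracts — combine per-link cost parameters along a path with component-wise operators, then rank complete paths with an optimization function — can be sketched as follows (the parameters and ranking function shown are illustrative choices, not the exact ones evaluated in the papers):

```python
def path_cost(links):
    """Combine per-link cost vectors along a path. Each link carries
    (tx_power, residual_energy); hop count and transmission power are
    additive, while residual energy combines with the min operator."""
    hops = len(links)
    power = sum(p for p, _ in links)
    residual = min(e for _, e in links)
    return hops, power, residual

def best_path(paths, rank=lambda c: c[1] / c[2]):
    """Pick the candidate path minimizing an optimization function of
    the combined vector, e.g. total power over minimum residual energy.
    Different rank functions yield different routing algorithms."""
    return min(paths, key=lambda p: rank(path_cost(p)))

# Two candidate paths, given as lists of (tx_power, residual_energy) links:
path_a = [(1.0, 5.0), (1.0, 5.0)]   # combined cost (2, 2.0, 5.0), rank 0.40
path_b = [(3.0, 10.0)]              # combined cost (1, 3.0, 10.0), rank 0.30
print(best_path([path_a, path_b]) == path_b)  # True
```

Keeping the parameters separate until the final ranking step is what distinguishes multi-cost routing from collapsing everything into a single scalar link weight up front.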
Abstract: In this work we study the combination of multicost routing and adjustable transmission power in wireless ad-hoc networks, so as to obtain dynamic energy- and interference-efficient routes to optimize network performance. In multi-cost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated. Only at the end are these parameters combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. The multi-cost routing problem is a generalization of the multi-constrained problem, where no constraints exist, and is also significantly more powerful than single-cost routing. Since energy is an important limitation of wireless communications, the cost parameters considered are the number of hops, the interference caused, the residual energy and the transmission power of the nodes on the path; other parameters could also be included, as desired. We assume that nodes can use power control to adjust their transmission power to the desired level. The experiments conducted show that the combination of multi-cost routing and adjustable transmission power can lead to reduced interference and energy consumption, improving network performance and lifetime.
Abstract: In this work we study the dynamic one-to-one communication problem in energy- and capacity-constrained wireless ad-hoc networks. The performance of such networks is evaluated under random traffic generation and continuous energy recharging at the nodes over an infinite-time horizon. We are interested in the maximum throughput that can be sustained by the network with the node queues being finite and in the average packet delay for a given throughput. We propose a multicost energy-aware routing algorithm and compare its performance to that of minimum-hop routing. The results of our experiments show that generally the energy-aware algorithm achieves a higher maximum throughput than the minimum-hop algorithm. More specifically, when the network is mainly energy-constrained and for the 2-dimensional topology considered, the throughput of the proposed energy-aware routing algorithm is found to be almost twice that of the minimum-hop algorithm.
Abstract: We provide an improved FPTAS for multiobjective shortest paths, a fundamental (NP-hard) problem in multiobjective optimization, along with a new generic method for obtaining FPTAS to any multiobjective optimization problem with non-linear objectives. We show how these results can be used to obtain better approximate solutions to three related problems that have important applications in QoS routing and in traffic optimization.
Abstract: We provide an improved FPTAS for multiobjective shortest paths—a fundamental (NP-hard) problem in multiobjective optimization—along with a new generic method for obtaining FPTAS to any multiobjective optimization problem with non-linear objectives. We show how these results can be used to obtain better approximate solutions to three related problems, multiobjective constrained [optimal] path and non-additive shortest path, that have important applications in QoS routing and in traffic optimization. We also show how to obtain a FPTAS to a natural generalization of the weighted multicommodity flow problem with elastic demands and values that models several realistic scenarios in transportation and communication networks.
Abstract: In this work, we showcase a set of implemented multiplayer games and interactive installations based on Fun in Numbers (FinN). FinN allows the quick prototyping of applications that utilize input from multiple physical sources (sensors and other means of interfacing), by offering a set of programming templates and services, such as proximity, localization and synchronization, that hide the underlying complexity.
Abstract: We present an architecture for implementing optical buffers, based on the feed-forward-buffer concept, that can truly emulate input queuing and accommodate asynchronous packet and burst operation. The architecture uses wavelength converters and fixed-length delay lines that are combined to form either a multiple-input buffer or a shared buffer. Both architectures are modular, allowing the expansion of the buffer at a cost that grows logarithmically with the buffer depth, where the cost is measured in terms of the number of switching elements and wavelength converters employed. The architectural design also provides a tradeoff between the number of wavelength converters and their tunability. The buffer architectures proposed are complemented with scheduling algorithms that can guarantee lossless communication and are evaluated using physical-layer simulations to obtain their performance in terms of bit-error rate and achievable buffer size.
Abstract: We present and discuss challenges and solutions posed by the design of an adaptable network infrastructure of tiny artifacts. Such artifacts are characterized by severe limitations in computational power, communications capacity and energy; nevertheless they must realize a communication infrastructure able to deliver services to the end-users in a very dynamic and challenging environment. Namely, we present one unifying scenario for the activities of the FRONTS project (www.fronts.cti.gr). The aim of the unifying scenario is to show how the results achieved in the project can be exploited to build such a communication infrastructure.
Abstract: Let n atomic players be routing their unsplittable flow on m resources. When each player has the option to drop her current resource and select a better one, and this option is exercised sequentially and unilaterally, then a Nash Equilibrium (NE) will eventually be reached. Acting sequentially, however, is unrealistic in large systems. But allowing concurrency, with an arbitrary number of players updating their resources at each time point, leads to an oscillation away from NE, due to big groups of players moving simultaneously and due to non-smooth resource cost functions. In this work, we validate experimentally simple concurrent protocols that are realistic, distributed and myopic, yet are scalable, require only information local at each resource and, still, are experimentally shown to quickly reach a NE for a range of arbitrary cost functions.
Abstract: In this work, we study the fundamental naming and counting problems (and some variations) in networks that are anonymous, unknown, and possibly dynamic. In counting, nodes must determine the size of the network n, and in naming they must end up with unique identities. By anonymous we mean that all nodes begin from identical states, apart possibly from a unique leader node, and by unknown that nodes have no a priori knowledge of the network (apart from some minimal knowledge when necessary), including ignorance of n. Network dynamicity is modeled by the 1-interval connectivity model [KLO10], in which communication is synchronous and a (worst-case) adversary chooses the edges of every round subject to the condition that each instance is connected. We first focus on static networks with broadcast, where we prove that, without a leader, counting is impossible to solve and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. We also show that a unique leader suffices in order to solve counting in linear time. Then we focus on dynamic networks with broadcast. We conjecture that dynamicity renders nontrivial computation impossible. In view of this, we let the nodes know an upper bound on the maximum degree that will ever appear and show that in this case the nodes can obtain an upper bound on n. Finally, we replace broadcast with one-to-each, in which a node may send a different message to each of its neighbors. Interestingly, this natural variation is proved to be computationally equivalent to a full-knowledge model, in which unique names exist and the size of the network is known.
Abstract: We present the NanoPeers architecture paradigm, a peer-to-peer network of lightweight devices, lacking all or most of the capabilities of their computer-world counterparts. We identify the problems arising when we apply current routing and searching methods to this nano-world, and present some initial solutions, using a case study of a sensor network instance: Smart Dust. Furthermore, we propose the P2P Worlds framework as a hybrid P2P architecture paradigm, consisting of cooperating layers of P2P networks, populated by computing entities with escalating capabilities. Our position is that (i) experience gained through research and experimentation in the field of P2P computing can be indispensable when moving down the ladder of computing capabilities, and that (ii) the proposed framework can be the basis of numerous real-world applications, opening up several challenging research problems.
Abstract: Evolutionary dynamics have been traditionally studied in the context of homogeneous populations, mainly described by the Moran process [15]. Recently, this approach has been generalized in [13] by arranging individuals on the nodes of a network (in general, directed). In this setting, the existence of directed arcs enables the simulation of extreme phenomena, where the fixation probability of a randomly placed mutant (i.e. the probability that the offspring of the mutant eventually spread over the whole population) is arbitrarily small or large. On the other hand, undirected networks (i.e. undirected graphs) seem to have a smoother behavior, and thus it is more challenging to find suppressors/amplifiers of selection, that is, graphs with smaller/greater fixation probability than the complete graph (i.e. the homogeneous population). In this paper we focus on undirected graphs. We present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation probability of general undirected graphs. As our main contribution, we introduce a natural alternative to the model proposed in [13]. In our new evolutionary model, all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new model and in the model of [13] can be interpreted as an “aggregation” vs. an “all-or-nothing” strategy, respectively. We prove that our new model of mutual influences admits a potential function, which guarantees the convergence of the system for any graph topology and any initial fitness vector of the individuals. Furthermore, we prove fast convergence to the stable state for the case of the complete graph, and we provide almost tight bounds on the limit fitness of the individuals. Apart from being important on its own, this new evolutionary model appears to be useful also in the abstract modeling of control mechanisms over invading populations in networks. We demonstrate this by introducing and analyzing two alternative control approaches, for which we bound the time needed to stabilize to the “healthy” state of the system.
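For the homogeneous population (the complete graph), the fixation probability of a single mutant of relative fitness r has the well-known closed form (1 - 1/r)/(1 - 1/r^n), against which a direct simulation of the Moran process can be checked. A minimal Monte-Carlo sketch (our own illustration, not code from the paper):

```python
import random

def moran_fixation(n, r, trials=5000, seed=1):
    """Estimate the fixation probability of one mutant of relative
    fitness r in a complete-graph population of n individuals under
    the Moran process: a reproducer is chosen proportionally to
    fitness, and its offspring replaces a uniformly random neighbour."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        m = 1                                    # current number of mutants
        while 0 < m < n:
            if rng.random() < r * m / (r * m + (n - m)):
                # a mutant reproduces; it replaces a resident w.p. (n-m)/(n-1)
                if rng.random() < (n - m) / (n - 1):
                    m += 1
            else:
                # a resident reproduces; it replaces a mutant w.p. m/(n-1)
                if rng.random() < m / (n - 1):
                    m -= 1
        fixed += (m == n)
    return fixed / trials

# Closed form for the complete graph: (1 - 1/r) / (1 - 1/r**n)
est = moran_fixation(10, 2.0)
exact = (1 - 1 / 2.0) / (1 - 2.0 ** -10)
print(abs(est - exact) < 0.03)
```

Suppressors of selection are then exactly the graph structures whose simulated fixation probability falls measurably below this complete-graph baseline.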
Abstract: We propose local mechanisms for efficiently marking the broader network region around obstacles, so that data propagation can avoid them early enough and follow near-optimal routing paths. In particular, our methods perform an online identification of sensors lying near obstacle boundaries, which then appropriately emit beacon messages in the network towards establishing efficient obstacle avoidance paths. We provide a variety of beacon dissemination schemes that satisfy different trade-offs between protocol overhead and performance. Compared to greedy, face routing and trust-based methods in the state of the art, our methods achieve significantly shorter propagation paths, while introducing much lower overhead and converging faster to near-optimality.
Abstract: Geographic routing scales well in sensor networks, mainly due to its stateless nature. Still, most of the algorithms are concerned with finding some path, while the optimality of the path is difficult to achieve. In this paper we present a novel geographic routing algorithm with obstacle avoidance properties. It aims at finding the optimal path from a source to a destination when some areas of the network are unavailable for routing due to low local density or obstacle presence. It locally and gradually with time (but, as we show, quite fast) evaluates and updates the suitability of the previously used paths and ignores non-optimal paths for further routing. By means of extensive simulations, we compare its performance to existing state-of-the-art protocols, showing that it performs much better in terms of path length, thus minimizing latency, space, overall traffic and energy consumption.
Abstract: Consider an information network with harmful procedures called attackers (e.g., viruses); each attacker uses a probability distribution to choose a node of the network to damage. Opponent to the attackers is the system protector scanning and cleaning from attackers some part of the network (e.g., an edge or a path), which it chooses independently using another probability distribution. Each attacker wishes to maximize the probability of escaping its cleaning by the system protector; towards a conflicting objective, the system protector aims at maximizing the expected number of cleaned attackers.
We model this network scenario as a non-cooperative strategic game on graphs. We focus on the special case where the protector chooses a single edge. We are interested in the associated Nash equilibria, where no network entity can unilaterally improve its local objective. We obtain the following results:
- No instance of the game possesses a pure Nash equilibrium.
- Every mixed Nash equilibrium enjoys a graph-theoretic structure, which enables a (typically exponential) algorithm to compute it.
- We coin a natural subclass of mixed Nash equilibria, which we call matching Nash equilibria, for this game on graphs. Matching Nash equilibria are defined using structural parameters of graphs, such as independent sets and matchings.
- We derive a characterization of graphs possessing matching Nash equilibria. The characterization enables a linear time algorithm to compute a matching Nash equilibrium on any such graph with a given independent set and vertex cover.
- Bipartite graphs are shown to satisfy the characterization. So, using a polynomial-time algorithm to compute a perfect matching in a bipartite graph, we obtain, as our main result, an efficient graph-theoretic algorithm to compute a matching Nash equilibrium on any instance of the game with a bipartite graph.
Abstract: We study network load games, a class of routing games in networks which generalize selfish routing games on networks consisting of parallel links. In these games, each user aims to route some traffic from a source to a destination so that the maximum load she experiences in the links of the network she occupies is minimum, given the routing decisions of other users. We present results related to the existence, complexity, and price of anarchy of Pure Nash Equilibria for several network load games. As corollaries, we present interesting new statements related to the complexity of computing equilibria for selfish routing games in networks of restricted parallel links.
Abstract: We propose new burst assembly schemes and fast reservation (FR) protocols for Optical Burst Switched (OBS) networks that are based on traffic prediction. The burst assembly schemes aim at minimizing (for a given burst size) the average delay of the packets incurred during the burst assembly process, while the fast reservation protocols aim at further reducing the end-to-end delay of the data bursts. The burst assembly techniques use a linear prediction filter to estimate the number of packet arrivals at the ingress node in the following interval, and launch a new burst into the network when a certain criterion, different for each proposed scheme, is met. The fast reservation protocols use prediction filters to estimate the expected length of the burst and the time needed for the burst assembly process to complete. A Burst Header Packet (BHP) carrying these estimates is sent before the burst is completed, in order to reserve bandwidth at intermediate nodes for the time interval the burst is expected to pass from these nodes. Reducing the packet aggregation delay and the time required to perform the reservations reduces the total time needed for a packet to be transported over an OBS network and is especially important for real-time applications. We evaluate the performance of the proposed burst assembly schemes and show that a number of them outperform the previously proposed timer-based, length-based and average delay-based burst assembly schemes. We also look at the performance of the fast reservation (FR) protocols in terms of the probability of successfully establishing the reservations required to transport the burst.
Abstract: We propose new burst assembly techniques that aim at reducing the average delay experienced by the packets during the burstification process in optical burst switched (OBS) networks, for a given average size of the bursts produced. These techniques use a linear prediction filter to estimate the number of packet arrivals at the ingress node in the following interval, and launch a new burst into the network when a certain criterion, which is different for each proposed scheme, is met. Reducing the packet burstification delay, for a given average burst size, is essential for real-time applications; correspondingly, increasing the average burst size for a given packet burstification delay is important for reducing the number of bursts injected into the network and the associated overhead imposed on the core nodes. We evaluate the performance of the proposed schemes and show that two of them outperform the previously proposed timer-based, length-based and average delay-based burst aggregation schemes in terms of the average packet burstification delay for a given average burst size.
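The prediction step in these assembly schemes rests on a linear prediction filter over recent arrival counts. A minimal sketch using an LMS-adapted filter (the adaptation rule and parameters here are a common generic choice, not necessarily the exact filter design of the papers):

```python
def lms_predict(arrivals, order=3, mu=0.002):
    """One-step-ahead prediction of packet-arrival counts with a linear
    prediction filter whose taps are adapted by the LMS rule.
    Returns the sequence of predictions, one per predictable sample."""
    w = [0.0] * order                    # filter taps, initially zero
    preds = []
    for k in range(order, len(arrivals)):
        x = arrivals[k - order:k]        # the `order` most recent counts
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        e = arrivals[k] - y_hat          # prediction error drives adaptation
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return preds

# On a steady arrival rate the filter quickly locks on to it:
preds = lms_predict([10.0] * 200)
print(round(preds[-1], 3))  # converges to ~10.0
```

The assembly scheme would compare such a predicted count against its launch criterion to decide when to close the current burst; the step size mu must stay small relative to the input power for the adaptation to remain stable.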
Abstract: We address the call control problem in wireless cellular networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region (cell) can communicate simultaneously with other users of the network using distinct frequencies. The available frequency spectrum is limited; hence, its management should be done efficiently. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate in a cellular network, to maximize the number of users that communicate without signal interference. We study the online version of the problem in cellular networks using competitive analysis and present new upper and lower bounds.
Abstract: Wireless sensor networks are about to be part of everyday life. Homes and workplaces capable of self-controlling and adapting air-conditioning for different temperature and humidity levels, sleepless forests ready to detect and react in case of a fire, vehicles able to avoid sudden obstacles or possibly able to self-organize routes to avoid congestion, and so on, will probably be commonplace in the very near future. Mobility plays a central role in such systems and so does passive mobility, that is, mobility of the network stemming from the environment itself. The population protocol model was an intellectual invention aiming to describe such systems in a minimalistic and analysis-friendly way. Having as a starting-point the inherent limitations but also the fundamental establishments of the population protocol model, we try in this monograph to present some realistic and practical enhancements that give birth to some new and surprisingly powerful (for this kind of systems) computational models.
Abstract: Wireless Sensor Networks (WSNs) constitute a recent and promising new
technology that is widely applicable. Due to the applicability of this
technology and its obvious importance for the modern distributed
computational world, the formal scientific foundation of its inherent laws
becomes essential. As a result, many new computational models for WSNs
have been proposed. Population Protocols (PPs) are a special category of
such systems. These are mainly identified by three distinctive
characteristics: the sensor nodes (agents) move passively, that is, they
cannot control the underlying mobility pattern, the available memory to
each agent is restricted, and the agents interact in pairs. It has been
proven that a predicate is computable by the PP model iff it is
semilinear. The class of semilinear predicates is a fairly small class. In
this work, our basic goal is to enhance the PP model in order to improve
the computational power. We first make the assumption that not only the
nodes but also the edges of the communication graph can store restricted
states. In a complete graph of n nodes this is like having added O(n²)
additional memory cells which are only read and written by the endpoints
of the corresponding edge. We prove that the new model, called Mediated
Population Protocol model, can operate as a distributed nondeterministic
Turing machine (TM) that uses all the available memory. The only
difference from a usual TM is that this one computes only symmetric
languages. More formally, we establish that a predicate is computable by
the new model iff it is symmetric and belongs to NSPACE(n²). Moreover, we
study the ability of the new model to decide graph languages (for general
graphs). The next step is to ignore the states of the edges and provide
another enhancement straight away from the PP model. The assumption now is
that the agents are multitape TMs equipped with infinite memory, that can
perform internal computation and interact with other agents, and we define
space-bounded computations. We call this the Passively mobile Machines
model. We prove that if each agent uses at most f(n) memory for
f(n) = Ω(log n), then a predicate is computable iff it is symmetric and
belongs to NSPACE(n f(n)). We also show that this is not the case for f(n) = o(log n).
Based on these results, we show that for f(n) = Ω(log n) there exists a space
hierarchy like the one for classical symmetric TMs. We also show that the
latter is not the case for f(n) = o(log log n), since there the corresponding
class collapses to the class of semilinear predicates, and finally that for
f(n) = Ω(log log n) the class becomes a proper superset of the semilinear
predicates. We leave open the problem of characterizing the classes for
f(n) = Ω(log log n) and f(n) = o(log n).
Abstract: Motivated by the problem of efficiently collecting data from
wireless sensor networks via a mobile sink, we present an accelerated
random walk on Random Geometric Graphs. Random
walks in wireless sensor networks can serve as fully local,
very simple strategies for sink motion that significantly
reduce energy dissipation but introduce higher latency in the
data collection process. While in most cases random walks
are studied on graphs like Gn,p and Grid, we define and experimentally
evaluate our newly proposed random walk on
the Random Geometric Graphs model, that more accurately
abstracts spatial proximity in a wireless sensor network. We
call this new random walk the γ-stretched random walk, and
compare it to two known random walks; its basic idea is
to favour visiting distant neighbours of the current node
towards reducing node overlap. We also define a new performance
metric called Proximity Cover Time which, along
with other metrics such as visit overlap statistics and proximity
variation, we use to evaluate the performance properties
and features of the various walks.
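A minimal sketch of such a distance-biased step on a random geometric graph, assuming the walk weights each neighbour by its Euclidean distance raised to a stretch exponent; the function names and the exact weighting scheme are illustrative assumptions, not the paper's definition:

```python
import math
import random

def rgg(points, radius):
    """Adjacency lists of a random geometric graph on the given 2-D points:
    two nodes are connected iff they are within `radius` of each other."""
    n = len(points)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def stretched_walk_step(adj, points, u, gamma=2.0, rng=random):
    """Pick the next node from u, weighting each neighbour v by
    dist(u, v)**gamma, so distant neighbours are favoured and node
    overlap is reduced (gamma = 0 recovers the simple random walk)."""
    nbrs = adj[u]
    weights = [math.dist(points[u], points[v]) ** gamma for v in nbrs]
    return rng.choices(nbrs, weights=weights, k=1)[0]
```

In an actual sink-mobility simulation, the points would be sampled uniformly at random in the unit square and the walk iterated until the desired coverage metric (e.g. the Proximity Cover Time above) is met.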
Abstract: Web services are becoming an important enabler of the Semantic Web. Besides the need for a rich description mechanism, Web Service information should be made available in an accessible way for machine processing. In this paper, we propose a new P2P-based approach for Web Services discovery. Peers that store Web Services information, such as data item descriptions, are efficiently located using a scalable and robust data indexing structure for Peer-to-Peer data networks, NIPPERS. We present a theoretical analysis which shows that the communication cost of the query and update operations scales double-logarithmically with the number of NIPPERS nodes. Furthermore, we show that the network is robust with respect to failures, fulfilling the quality requirements of Web Services.
Abstract: The non-additive shortest path (NASP) problem asks for
finding an optimal path that minimizes a certain multi-attribute nonlinear
cost function. In this paper, we consider the case of a non-linear
convex and non-decreasing function on two attributes. We present an efficient
polynomial algorithm for solving a Lagrangian relaxation of NASP.
We also present an exact algorithm that is based on new heuristics we
introduce here, and conduct a comparative experimental study with synthetic
and real-world data that demonstrates the quality of our approach.
Abstract: We propose a novel, generic definition of probabilistic schedulers for population protocols. We then identify the consistent probabilistic schedulers, and prove that any consistent scheduler that assigns a non-zero probability to any transition i->j, where i and j are configurations satisfying that i is not equal to j, is fair with probability 1. This is a new theoretical framework that aims to simplify proving specific probabilistic schedulers fair. In this paper we propose two new schedulers, the State Scheduler and the Transition Function Scheduler. Both possess the significant capability of being protocol-aware, i.e. they can assign transition probabilities based on information concerning the underlying protocol. By using our framework we prove that the proposed schedulers, and also the Random Scheduler that was defined by Angluin et al., are all fair with probability 1. We also define and study equivalence between schedulers w.r.t. performance (time equivalent schedulers) and correctness (computationally equivalent schedulers). Surprisingly, we prove the following.
1. The protocol-oblivious (or agnostic) Random Scheduler is not time equivalent to the State and Transition Function Schedulers, although all three are fair probabilistic schedulers (with probability 1). To prove the statement we study the performance of the One-Way Epidemic Protocol (OR Protocol) under these schedulers. To illustrate the unexpected performance variations of protocols under different fair probabilistic schedulers, we additionally modify the State Scheduler to obtain a fair probabilistic scheduler, called the Modified Scheduler, that may be adjusted to lead the One-Way Epidemic Protocol to arbitrarily bad performance.
2. The Random Scheduler is not computationally equivalent to the Transition Function Scheduler. To prove the statement we study the Majority Protocol w.r.t. correctness under the Transition Function Scheduler. It turns out that the minority may win with constant probability under the same initial margin for which the majority w.h.p. wins under the Random Scheduler (as proven by Angluin et al.).
Abstract: In this paper we discuss different switch architectures, focusing mainly on optical buffering. We investigate an all-optical buffer architecture comprising cascaded stages of quantum-dot semiconductor optical amplifier-based tunable wavelength converters, at 160 Gb/s. We also propose an optical buffer with multi-wavelength converters based on quantum-dot semiconductor optical amplifiers. We present multistage switching fabrics with optical buffers, where the optical buffers are based on fibre delay lines and are located in the first stage. Finally, we describe a photonic asynchronous packet switch and show that the employment of a few optical buffer stages to complement the electronic ones significantly improves the switch performance. We also propose two asynchronous optical packet switching node architectures, where efficient contention resolution is based on controllable optical buffers and tunable wavelength converters (TWCs).
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → ℕ such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the minimum distance of u, v in G is two. The number of distinct frequencies and the range of frequencies used are called the order and the span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V| = n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case λ ≥ 4Δ + 50.
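The two distance constraints in the radiocoloring definition can be checked directly. A small sketch, assuming G is given as adjacency lists and Φ as a dict; the function name and representation are illustrative:

```python
from collections import deque

def is_radiocoloring(adj, phi):
    """Check the radiocoloring (L(2,1)-labeling) constraints:
    |phi(u) - phi(v)| >= 2 for adjacent u, v, and phi(u) != phi(v)
    for u, v at distance exactly two."""
    for u in adj:
        # BFS to depth 2 from u to find distance-1 and distance-2 vertices.
        dist = {u: 0}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            if dist[x] == 2:
                continue  # no need to expand beyond distance two
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        for v, d in dist.items():
            if d == 1 and abs(phi[u] - phi[v]) < 2:
                return False
            if d == 2 and phi[u] == phi[v]:
                return False
    return True
```

For example, on the path a–b–c, the labeling Φ(a)=0, Φ(b)=2, Φ(c)=4 is valid, while Φ(a)=0, Φ(b)=1 already violates the neighbor constraint.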
Abstract: We consider the offline version of the routing and
wavelength assignment (RWA) problem in transparent all-optical
networks. In such networks and in the absence of regenerators,
the signal quality of transmission degrades due to physical layer
impairments. Because of certain physical effects, routing choices
made for one lightpath affect and are affected by the choices made
for the other lightpaths. This interference among the lightpaths
is particularly difficult to formulate in an offline algorithm since,
in this version of the problem, we start without any established
connections, and the utilization of the lightpaths is among the variables of
the problem. We initially present an algorithm for solving the pure
(without impairments) RWA problem based on a LP-relaxation
formulation that tends to yield integer solutions. Then, we extend
this algorithm and present two impairment-aware (IA) RWA algorithms
that account for the interference among lightpaths in their
formulation. The first algorithm takes the physical layer indirectly
into account by limiting the impairment-generating sources. The
second algorithm uses noise variance-related parameters to directly
account for the most important physical impairments. The
objective of the resulting cross-layer optimization problem is not
only to serve the connections using a small number of wavelengths
(network layer objective), but also to select lightpaths that have
acceptable quality of transmission (physical layer objective).
Simulation experiments using realistic network, physical layer,
and traffic parameters indicate that the proposed algorithms can
solve real problems within acceptable time.
Abstract: We consider a fundamental problem, called QoS-aware Multicommodity Flow, for assessing robustness in transportation planning.
It constitutes a natural generalization of the weighted multicommodity
flow problem, where the demands and commodity values are elastic to
the Quality-of-Service (QoS) characteristics of the underlying network.
The problem is also fundamental in other domains beyond transportation
planning. In this work, we provide an extensive experimental study of
two FPTAS for the QoS-aware Multicommodity Flow Problem enhanced
with several heuristics, and show the superiority of a new heuristic we
introduce here.
Abstract: We study the fundamental problem 2NASH of computing a Nash equilibrium (NE) point in bimatrix games. We start by proposing a novel characterization of the NE set, via a bijective map to the solution set of a parameterized quadratic program (NEQP), whose feasible space is the highly structured set of correlated equilibria (CE). This is, to our knowledge, the first characterization of the subset of CE points that are in “1–1” correspondence with the NE set of the game, and contributes to the quite lively discussion on the relation between the spaces of CE and NE points in a bimatrix game (e.g., [15], [26] and [33]).
We proceed with studying a property of bimatrix games, which we call mutual concavity (MC), that assures polynomial-time tractability of 2NASH, due to the convexity of a proper parameterized quadratic program (either NEQP, or a parameterized variant of the Mangasarian & Stone formulation [23]) for a particular value of the parameter. We prove various characterizations of the MC-games, which eventually lead us to the conclusion that this class is equivalent to the class of strategically zero-sum (SZS) games of Moulin & Vial [25]. This gives an alternative explanation of the polynomial-time tractability of 2NASH for these games, not depending on the solvability of zero-sum games. Moreover, the recognition of the MC-property for an arbitrary game is much faster than the recognition of the SZS-property. This, along with the comparable time-complexity of linear programs and convex quadratic programs, leads us to a much faster algorithm for 2NASH in MC-games.
We conclude our discussion with a comparison of MC-games (or, SZS-games) to k-rank games, which are known to admit for 2NASH a FPTAS when k is fixed [18], and a polynomial-time algorithm for k = 1 [2]. We finally explore some closeness properties under well-known NE set preserving transformations of bimatrix games.
Abstract: A constraint network is arc consistent if any value of any of its variables is compatible with at
least one value of any other variable. The Arc Consistency Problem (ACP) consists in filtering out values of
the variables of a given network to obtain one that is arc consistent, without eliminating any solution. ACP is
known to be inherently sequential, or P-complete, so in this paper we examine some weaker versions of it and
their parallel complexity. We propose several natural approximation schemes for ACP and show that they are also
P-complete. In an attempt to overcome these negative results, we turn our attention to the problem of filtering
out values from the variables so that each value in the resulting network is compatible with at least one value of
not necessarily all, but a constant fraction of the other variables. We call such a network partially arc consistent.
We give a parallel algorithm that, for any constraint network, outputs a partially arc consistent subnetwork of it in
sublinear (O(√n log n)) parallel time using O(n²) processors. This is the first (to our knowledge) sublinear-time
parallel algorithm with polynomially many processors that guarantees that in the resulting network every value is
compatible with at least one value in at least a constant fraction of the remaining variables. Finally, we generalize
the notion of partiality to the k-consistency problem.
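The basic filtering operation underlying arc consistency can be sketched as follows. This simple sequential fixpoint is illustrative only (the P-completeness results above concern the problem, not any particular algorithm), and the simplifying assumption that every pair of variables shares one symmetric binary compatibility test is mine:

```python
from itertools import product

def revise(domains, constraint, x, y):
    """Remove values of x that have no compatible value in y's domain;
    return True iff something was pruned."""
    pruned = {a for a in domains[x]
              if not any(constraint(a, b) for b in domains[y])}
    domains[x] -= pruned
    return bool(pruned)

def arc_consistency(domains, variables, constraint):
    """AC-1-style fixpoint: re-revise every ordered pair of variables
    until no domain changes. `constraint(a, b)` is assumed to be one
    shared symmetric compatibility test between any two variables."""
    changed = True
    while changed:
        changed = False
        for x, y in product(variables, repeat=2):
            if x != y and revise(domains, constraint, x, y):
                changed = True
    return domains
```

The partial-arc-consistency relaxation studied above weakens the requirement so that each surviving value only needs support in a constant fraction of the other variables, which is what admits the sublinear parallel algorithm.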
Abstract: The voting rules proposed by Dodgson and Young are both
designed to find the alternative closest to being a Condorcet
winner, according to two different notions of proximity; the
score of a given alternative is known to be hard to compute
under either rule.
In this paper, we put forward two algorithms for
approximating the Dodgson score: an LP-based randomized
rounding algorithm and a deterministic greedy algorithm,
both of which yield an O(log m) approximation ratio, where
m is the number of alternatives; we observe that this result
is asymptotically optimal, and further prove that our greedy
algorithm is optimal up to a factor of 2, unless problems in
NP have quasi-polynomial time algorithms. Although the
greedy algorithm is computationally superior, we argue that
the randomized rounding algorithm has an advantage from
a social choice point of view.
Further, we demonstrate that computing any reasonable
approximation of the ranking produced by Dodgson's rule
is NP-hard. This result provides a complexity-theoretic
explanation of sharp discrepancies that have been observed
in the Social Choice Theory literature when comparing
Dodgson elections with simpler voting rules.
Finally, we show that the problem of calculating the
Young score is NP-hard to approximate by any factor. This
leads to an inapproximability result for the Young ranking.
Abstract: The concept of trust plays an important role in the operation and public acceptance of today's computing environment. Although it is a difficult concept to formalize and handle, many efforts have been made towards a clear definition of trust and the development of systematic ways for trust management. Our central viewpoint is that trust can no longer be defined as a static set of rules describing system properties that hold eternally, due to the highly dynamic nature of today's computing systems (e.g. wireless networks, ad-hoc networks, virtual communities, digital territories etc.). Our approach is an effort to define trust in terms of properties that hold with some limiting probability as the system grows, and to establish conditions that ensure that "good" properties hold almost certainly. Based on this viewpoint, in this paper we provide a new framework for defining trust through formally definable properties that hold, almost certainly, in the limit in randomly growing combinatorial structures that model "boundless" computing systems (e.g. ad-hoc networks), drawing on results that establish the threshold behavior of predicates written in first- and second-order logic. We will also see that, interestingly, some trust models have properties that do not have limiting probabilities. This fact can be used to demonstrate that as certain trust networks grow indefinitely, their trust properties are not certain to be present.
Abstract: An ad-hoc mobile network is a collection of mobile hosts, with
wireless communication capabilities, forming a temporary network
without the aid of any established fixed infrastructure.
In such networks, topological connectivity is subject to frequent,
unpredictable change. Our work focuses on networks with high
rate of such changes to connectivity. For such dynamic changing
networks we propose protocols which exploit the co-ordinated
(by the protocol) motion of a small part of the network.
We show that such protocols can be designed to work
correctly and efficiently even in the case of arbitrary (but not
malicious) movements of the hosts not affected by the protocol.
We also propose a methodology for the analysis of the expected
behaviour of protocols for such networks, based on the assumption that mobile hosts (whose motion is not guided by
the protocol) conduct concurrent random walks in their
motion space.
Our work examines some fundamental problems such as pair-wise
communication, election of a leader and counting, and proposes
distributed algorithms for each of them. We provide their
proofs of correctness, and also give rigorous analysis by
combinatorial tools and also via experiments.
Abstract: Understanding the graph structure of the Internet is a crucial step for building accurate
network models and designing efficient algorithms for Internet applications. Yet, obtaining this
graph structure can be a surprisingly difficult task, as edges cannot be explicitly queried. For
instance, empirical studies of the network of Internet Protocol (IP) addresses typically rely on
indirect methods like traceroute to build what are approximately single-source, all-destinations,
shortest-path trees. These trees only sample a fraction of the network's edges, and a paper by
Lakhina et al. [2003] found empirically that the resulting sample is intrinsically biased. Further,
in simulations, they observed that the degree distribution under traceroute sampling exhibits a
power law even when the underlying degree distribution is Poisson.
Abstract: It was only recently shown by Shi and Wormald, using the differential equation method to analyze an appropriate algorithm, that a random 5-regular graph asymptotically almost surely has chromatic number at most 4. Here, we show that the chromatic number of a random 5-regular graph is asymptotically almost surely equal to 3, provided a certain four-variable function has a unique maximum at a given point in
a bounded domain. We also describe extensive numerical evidence that strongly suggests that the latter condition holds. The proof applies the small subgraph conditioning method to the number of locally rainbow balanced 3-colorings, where a coloring is balanced if the number of vertices of each color is equal, and locally rainbow if every vertex is adjacent to at least one vertex of each of the other colors.
Abstract: We consider a variant of the Complex Multiplication (CM)
method for constructing elliptic curves (ECs) of prime order with additional
security properties. Our variant uses Weber polynomials whose
discriminant D is congruent to 3 (mod 8), and is based on a new transformation
for converting roots of Weber polynomials to their Hilbert
counterparts. We also present a new theoretical estimate of the bit precision
required for the construction of the Weber polynomials for these
values of D. We conduct a comparative experimental study investigating
the time and bit precision of using Weber polynomials against the (typical)
use of Hilbert polynomials. We further investigate the time efficiency
of the new CM variant under four different implementations of a crucial
step of the variant and demonstrate the superiority of two of them.
Abstract: In this paper we demonstrate the significant impact of (a) the mobility rate and (b) the user density on the performance of routing protocols in ad-hoc mobile networks. In particular, we study the effect of these parameters on two different approaches for designing routing protocols: (a) the route creation and maintenance approach and (b) the "support" approach, that forces few hosts to move acting as "helpers" for message delivery. We study one representative protocol for each approach, i.e., AODV for the first approach and RUNNERS for the second. We have implemented the two protocols and performed a large scale and detailed simulation study of their performance. For the first time, we study AODV (and RUNNERS) in the 3D case. The main findings are: the AODV protocol behaves well in networks of high user density and low mobility rate, while its performance drops for sparse networks of highly mobile users. On the other hand, the RUNNERS protocol seems to tolerate well (and in fact benefit from) high mobility rates and low densities. Thus, we are able to partially answer an important conjecture of [Chatzigiannakis, I et al. 2003].
Abstract: In sponsored search auctions, advertisers compete for a number
of available advertisement slots of different quality. The
auctioneer decides the allocation of advertisers to slots using
bids provided by them. Since the advertisers may act
strategically and submit their bids in order to maximize their
individual objectives, such an auction naturally defines a
strategic game among the advertisers. In order to quantify
the efficiency of outcomes in generalized second price auctions,
we study the corresponding games and present new
bounds on their price of anarchy, improving the recent results
of Paes Leme and Tardos [16] and Lucier and Paes
Leme [13]. For the full information setting, we prove a surprisingly
low upper bound of 1.282 on the price of anarchy
over pure Nash equilibria. Given the existing lower bounds,
this bound denotes that the number of advertisers has almost
no impact on the price of anarchy. The proof exploits
the equilibrium conditions developed in [16] and follows by
a detailed reasoning about the structure of equilibria and a
novel relation of the price of anarchy to the objective value
of a compact mathematical program. For more general equilibrium
classes (i.e., mixed Nash, correlated, and coarse correlated
equilibria), we present an upper bound of 2.310 on
the price of anarchy. We also consider the setting where
advertisers have incomplete information about their competitors
and prove a price of anarchy upper bound of 3.037
over Bayes-Nash equilibria. In order to obtain the last two
bounds, we adapt techniques of Lucier and Paes Leme [13]
and significantly extend them with new arguments.
Abstract: We present a variant of the complex multiplication method
that generates elliptic curves of cryptographically strong order. Our variant
is based on the computation of Weber polynomials that require significantly
less time and space resources than their Hilbert counterparts. We
investigate the time efficiency and precision requirements for generating
off-line Weber polynomials and its comparison to another variant based
on the off-line generation of Hilbert polynomials. We also investigate the
efficiency of our variant when the computation of Weber polynomials
should be made on-line due to limitations in resources (e.g., hardware
devices of limited space). We present trade-offs that could be useful to potential
implementors of elliptic curve cryptosystems on resource-limited
hardware devices.
Abstract: We consider the generation of prime-order elliptic curves (ECs) over a prime field $\mathbb{F}_{p}$ using the Complex Multiplication (CM) method. A crucial step of this method is to compute the roots of a special type of class field polynomials, the most commonly used being the Hilbert and Weber ones. These polynomials are uniquely determined by the CM discriminant D. In this paper, we consider a variant of the CM method for constructing elliptic curves (ECs) of prime order using Weber polynomials. In attempting to construct prime-order ECs using Weber polynomials, two difficulties arise (in addition to the necessary transformations of the roots of such polynomials to those of their Hilbert counterparts). The first one is that the requirement of prime order necessitates that D ≡ 3 (mod 8), which gives Weber polynomials with degree three times larger than the degree of their corresponding Hilbert polynomials (a fact that could affect efficiency). The second difficulty is that these Weber polynomials do not have roots in $\mathbb{F}_{p}$ .
In this work, we show how to overcome the above difficulties and provide efficient methods for generating ECs of prime order focusing on their support by a thorough experimental study. In particular, we show that such Weber polynomials have roots in the extension field $\mathbb{F}_{p^{3}}$ and present a set of transformations for mapping roots of Weber polynomials in $\mathbb{F}_{p^{3}}$ to roots of their corresponding Hilbert polynomials in $\mathbb{F}_{p}$ . We also show how an alternative class of polynomials, with degree equal to their corresponding Hilbert counterparts (and hence having roots in $\mathbb{F}_{p}$ ), can be used in the CM method to generate prime-order ECs. We conduct an extensive experimental study comparing the efficiency of using this alternative class against the use of the aforementioned Weber polynomials. Finally, we investigate the time efficiency of the CM variant under four different implementations of a crucial step of the variant and demonstrate the superiority of two of them.
Abstract: We investigate the problem of how to achieve energy balanced data propagation in distributed wireless sensor networks. The energy balance property guarantees that the average per sensor energy dissipation is the same for all sensors in the network, throughout the execution of the data propagation protocol. This property is crucial for prolonging the network lifetime, by avoiding early energy depletion of sensors.
We survey representative solutions from the state of the art. We first present a basic algorithm that in each step probabilistically decides whether to propagate data one hop towards the final destination (the sink), or to send it directly to the sink. This randomized choice trades off the (cheap, but slow) one-hop transmissions with the direct transmissions to the sink, which are more expensive but bypass the bottleneck region around the sink and propagate data fast. By a detailed analysis using properties of stochastic processes and recurrence relations we precisely estimate (even in closed form) the probability for each propagation option necessary for energy balance.
The fact (shown by our analysis) that direct (expensive) transmissions to the sink are needed only rarely, shows that our protocol, besides energy balanced, is also energy efficient. We then enhance this basic result by surveying some recent findings including a generalized algorithm and demonstrating the optimality of this two-way probabilistic data propagation, as well as providing formal proofs of the energy optimality of the energy balance property.
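The per-hop randomized choice can be sketched as follows. The ring abstraction (nodes grouped by hop distance from the sink), the function names, and the deferral of the actual energy-balancing probabilities to a caller-supplied `p_hop` are all illustrative assumptions; the surveyed analysis derives the specific `p_hop` that yields energy balance, which is not reproduced here:

```python
import random

def propagate(hop, n, p_hop, rng=random):
    """Propagate one data item starting at ring `hop` (hop distance from
    the sink, out of n rings). At each ring i, either take a cheap one-hop
    step towards the sink with probability p_hop(i), or pay for an
    expensive direct transmission to the sink otherwise. Returns the
    per-ring one-hop transmission counts and the direct-transmission count."""
    one_hop = [0] * (n + 1)
    direct = 0
    i = hop
    while i > 0:
        if rng.random() < p_hop(i):
            one_hop[i] += 1   # cheap one-hop step to ring i - 1
            i -= 1
        else:
            direct += 1       # expensive direct transmission to the sink
            i = 0
    return one_hop, direct
```

Running this over many data items and rings lets one compare the per-ring energy dissipation of different `p_hop` choices, which is exactly the quantity the energy balance property constrains.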
Abstract: Random Intersection Graphs is a new class of random graphs introduced in [5], in which each of n vertices randomly and independently chooses some elements from a universal set of cardinality m. Each element is chosen with probability p. Two vertices are joined by an edge iff their chosen element sets intersect. Given n, m so that m = n^α, for any real α different from one, we establish here, for the first time, tight lower bounds p0(n,m) on the value of p, as a function of n and m, above which the graph Gn,m,p is almost certainly Hamiltonian, i.e. it contains a Hamilton Cycle almost certainly. Our bounds are tight in the sense that when p is asymptotically smaller than p0(n,m) then Gn,m,p almost surely has a vertex of degree less than 2. Our proof involves new, nontrivial coupling techniques that allow us to circumvent the edge dependencies in the random intersection model. Interestingly, Hamiltonicity appears well below the general thresholds of [4] at which Gn,m,p looks like a usual random graph. Thus our bounds are much stronger than the trivial bounds implied by those thresholds.
Our results strongly support the existence of a threshold for Hamiltonicity in Gn,m,p.
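The Gn,m,p model itself is straightforward to sample; a minimal sketch, where the function name and the edge-list representation are assumptions:

```python
import random

def random_intersection_graph(n, m, p, rng=random):
    """Sample G_{n,m,p}: each of n vertices independently picks each of the
    m universe elements with probability p; two vertices are adjacent iff
    their chosen element sets intersect. Returns the list of edges."""
    chosen = [{e for e in range(m) if rng.random() < p} for _ in range(n)]
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if chosen[u] & chosen[v]]
```

Note how the edge dependencies arise: two edges sharing a vertex are correlated through that vertex's chosen set, which is precisely what the coupling techniques in the abstract are designed to circumvent.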
Abstract: The Moran process models the spread of genetic mutations through a population. A mutant with relative fitness r is introduced into a population and the system evolves, either reaching fixation (in which every individual is a mutant) or extinction (in which none is). In a widely cited paper (Nature, 2005), Lieberman, Hauert and Nowak generalize the model to populations on the vertices of graphs. They describe a class of graphs (called "superstars"), with a parameter k. Superstars are designed to have an increasing fixation probability as k increases. They state that the probability of fixation tends to 1 − r^{-k} as graphs get larger, but we show that this claim is untrue as stated. Specifically, for k = 5, we show that the true fixation probability (in the limit, as graphs get larger) is at most 1 − 1/j(r) where j(r) = Θ(r^4), contrary to the claimed result. We do believe that the qualitative claim of Lieberman et al., that the fixation probability of superstars tends to 1 as k increases, is correct, and that it can probably be proved along the lines of their sketch. We were able to run larger computer simulations than the ones presented in their paper. However, simulations on graphs of around 40,000 vertices do not support their claim. Perhaps these graphs are too small to exhibit the limiting behaviour.
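The generalized Moran process on a graph is simple to simulate, which is how fixation probabilities like those above are estimated empirically. A minimal sketch of the standard dynamics (fitness-proportional choice of the reproducer, uniform choice of the neighbour it replaces); the function names are assumptions:

```python
import random

def moran_step(adj, mutant, r, rng=random):
    """One step of the Moran process on a graph: pick a reproducing vertex
    with probability proportional to fitness (r for mutants, 1 for
    non-mutants), then copy its type onto a uniformly random neighbour."""
    nodes = list(adj)
    weights = [r if v in mutant else 1.0 for v in nodes]
    u = rng.choices(nodes, weights=weights, k=1)[0]
    v = rng.choice(adj[u])
    if u in mutant:
        mutant.add(v)
    else:
        mutant.discard(v)

def moran_fixation(adj, r, start, rng=random):
    """Run from a single mutant at `start` until fixation (all mutants) or
    extinction (none); return True on fixation."""
    mutant = {start}
    n = len(adj)
    while 0 < len(mutant) < n:
        moran_step(adj, mutant, r, rng)
    return len(mutant) == n
```

Averaging `moran_fixation` over many runs and random start vertices gives an empirical fixation probability, which can then be compared against a claimed limit such as 1 − r^{-k}.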
Abstract: In routing games, the network performance at equilibrium can be significantly improved if we remove some edges from the network. This counterintuitive fact, widely known as Braess's paradox, gives rise to the (selfish) network design problem, where we seek to recognize routing games suffering from the paradox, and to improve the equilibrium performance by edge removal. In this work, we investigate the computational complexity and the approximability of the network design problem for non-atomic bottleneck routing games, where the individual cost of each player is the bottleneck cost of her path, and the social cost is the bottleneck cost of the network. We first show that bottleneck routing games do not suffer from Braess's paradox either if the network is series-parallel, or if we consider only subpath-optimal Nash flows. On the negative side, we prove that even for games with strictly increasing linear latencies, it is NP-hard not only to recognize instances suffering from the paradox, but also to distinguish between instances for which the Price of Anarchy (PoA) can decrease to 1 and instances for which the PoA is as large as Ω(n^0.121) and cannot improve by edge removal. Thus, the network design problem for such games is NP-hard to approximate within a factor of O(n^(0.121-ε)), for any constant ε > 0. On the positive side, we show how to compute an almost optimal subnetwork w.r.t. the bottleneck cost of its worst Nash flow, when the worst Nash flow in the best subnetwork routes a non-negligible amount of flow on all used edges. The running time is determined by the total number of paths, and is quasipolynomial when the number of paths is quasipolynomial.
Abstract: We investigate the practical merits of a parallel priority queue
through its use in the development of a fast and work-efficient parallel
shortest path algorithm, originally designed for an EREW PRAM. Our
study reveals that an efficient implementation on a real supercomputer
requires considerable effort to reduce the communication overhead
(communication is assumed, in theory, to take constant time). It turns out that the
most crucial part of the implementation is the mapping of the logical
processors to the physical processing nodes of the supercomputer. We
achieve the required efficient mapping through a new graph-theoretic
result of independent interest: computing a Hamiltonian cycle on a directed
hyper-torus. No such algorithm was known before for the case of
directed hypertori. Our Hamiltonian cycle algorithm allows us to considerably
improve the communication cost and thus the overall performance
of our implementation.
Abstract: In the uniform random intersection graphs model, denoted by Gn,m,λ, to each vertex v
we assign exactly λ randomly chosen labels of some label set M of m labels, and we connect every
pair of vertices that has at least one label in common. In this model, we estimate the independence
number α(Gn,m,λ), for the wide, interesting range m = n^α, α < 1 and λ = O(m^(1/4)). We also prove
the Hamiltonicity of this model by an interesting combinatorial construction. Finally, we give a brief
note concerning the independence number of Gn,m,p random intersection graphs, in which each vertex
chooses labels with probability p.
Abstract: We study the performance of approximate Nash equilibria for congestion
games with polynomial latency functions. We consider how much the price of anarchy
worsens and how much the price of stability improves as a function of the
approximation factor ε. We give tight bounds for the price of anarchy of atomic and
non-atomic congestion games and for the price of stability of non-atomic congestion
games. For the price of stability of atomic congestion games we give non-tight
bounds for linear latencies. Our results not only encompass and generalize the existing
results on exact equilibria to ε-Nash equilibria, but they also provide a unified
approach which reveals the common threads of the atomic and non-atomic price of
anarchy results. By expanding the spectrum, we also cast the existing results in a new
light.
Abstract: In this paper we present a new upper bound on the satisfiability threshold of randomly chosen 3-CNF formulas. In particular, we show that any random formula over n variables with a clauses-to-variables ratio of at least 4.4898 is, as n grows large, asymptotically almost surely unsatisfiable. The previous best such bound, due to Dubois in 1999, was 4.506. The first such bound,
independently discovered by many groups of researchers since 1983, was 5.19. Several decreasing values between 5.19 and 4.506 were published in the intervening years. We believe that the probabilistic techniques we use for the proof are of independent interest.
Abstract: For various random constraint satisfaction problems there is a significant gap between the largest constraint density
for which solutions exist and the largest density for which any polynomial time algorithm is known to find
solutions. Examples of this phenomenon include random k-SAT, random graph coloring, and a number of other
random Constraint Satisfaction Problems. To understand this gap, we study the structure of the solution space of
random k-SAT (i.e., the set of all satisfying assignments viewed as a subgraph of the Hamming cube). We prove
that for densities well below the satisfiability threshold, the solution space decomposes into an exponential number
of connected components and give quantitative bounds for the diameter, volume and number.
Abstract: For various random constraint satisfaction problems there is a significant gap between
the largest constraint density for which solutions exist and the largest density for which any polynomial
time algorithm is known to find solutions. Examples of this phenomenon include random
k-SAT, random graph coloring, and a number of other random constraint satisfaction problems. To
understand this gap, we study the structure of the solution space of random k-SAT (i.e., the set of
all satisfying assignments viewed as a subgraph of the Hamming cube). We prove that for densities
well below the satisfiability threshold, the solution space decomposes into an exponential number of
connected components and give quantitative bounds for the diameter, volume, and number.
Abstract: We study network connection games where the nodes of a network perform edge swaps
in order to improve their communication costs. For the model proposed by [2], in which the selfish
cost of a node is the sum of all shortest path distances to the other nodes, we use the probabilistic
method to provide a new, structural characterization of equilibrium graphs. We show how to use this
characterization in order to prove upper bounds on the diameter of equilibrium graphs in terms of the
size of the largest k-vicinity (defined as the set of vertices within distance k from a vertex), for
any k ≥ 1, and in terms of the number of edges, thus settling positively a conjecture of [2] in the cases
of graphs of large k-vicinity size (including graphs of large maximum degree) and of graphs which are
dense enough.
Next, we present a new swap-based network creation game, in which selfish costs depend on the immediate
neighborhood of each node; in particular, the profit of a node is defined as the sum of the degrees
of its neighbors. We prove that, in contrast to the previous model, this network creation game admits
an exact potential, and also that any equilibrium graph contains an induced star. The existence of the
potential function is exploited in order to show that an equilibrium can be reached in expected polynomial
time even in the case where nodes can only acquire limited knowledge concerning non-neighboring
nodes.
Abstract: In this paper we study the support sizes of evolutionary stable strategies (ESS) in random
evolutionary games. We prove that, when the elements of the payoff matrix behave either as uniform
or as normally distributed random variables, almost all ESS have support sizes o(n), where n is the
number of possible types for a player. Our arguments are based exclusively on a stability property
that the payoff submatrix indicated by the support of an ESS must satisfy.
We then combine this result with a recent result of McLennan and Berg (2005), concerning the
expected number of Nash equilibria in normal-random bimatrix games, to show that the expected
number of ESS is significantly smaller than the expected number of symmetric Nash equilibria of the
underlying symmetric bimatrix game.
Abstract: In future transparent optical networks, it is
important to consider the impact of physical impairments in the
routing and wavelengths assignment process, to achieve efficient
connection provisioning. In this paper, we use classical multi-
objective optimization (MOO) strategies and particularly genetic
algorithms to jointly solve the impairment aware RWA (IA-
RWA) problem. Fiber impairments are indirectly considered
through the insertion of the path length and the number of
common hops in the optimization process. It is shown that
blocking is greatly improved, while the obtained solutions truly
converge towards the Pareto front that constitutes the set of
global optimum solutions. We have evaluated our findings using
a Q-factor estimator tool that calculates the signal quality of each path
analytically.
Index Terms: RWA, Genetic Algorithm, All-Optical
Networks, Multi-Objective Optimization.
Abstract: In many cryptographic applications it is necessary to generate
elliptic curves (ECs) with certain security properties. These curves
are commonly constructed using the Complex Multiplication method
which typically uses the roots of Hilbert or Weber polynomials. The former
generate the EC directly, but have high computational demands,
while the latter are faster to construct but they do not lead, directly, to
the desired EC. In this paper we present in a simple and unifying manner
a complete set of transformations of the roots of a Weber polynomial to
the roots of its corresponding Hilbert polynomial for all discriminant values
on which they are defined. Moreover, we prove a theoretical estimate
of the precision required for the computation of Weber polynomials. Finally,
we experimentally assess the computational efficiency of the Weber
polynomials along with their precision requirements for various discriminant
values and compare the results with the theoretical estimates. Our
experimental results may be used as a guide for the selection of the most
efficient curves in applications residing in resource limited devices such as
smart cards that support secure and efficient Public Key Infrastructure
(PKI) services.
Abstract: This article studies the transmission control
protocol (TCP) synchronization effect in optical burst
switched networks. Synchronization of TCP flows appears
when optical bursts containing segments from different flows
are dropped in the network, causing the congestion windows of the affected flows to decrease simultaneously. In this article, this
effect is studied with different assembly schemes and network scenarios. Different metrics are applied to quantitatively assess synchronization with classical assembly
schemes. A new burst assembly scheme is proposed that
statically or dynamically allocates flows to multiple assembly queues to control flow aggregation within the assembly
cycle. The effectiveness of the scheme has been evaluated,
showing a good improvement in optical link utilization.
Abstract: We study the on-line versions of two fundamental graph problems, maximum independent set and minimum coloring, for the case of disk graphs which are graphs resulting from intersections of disks on the plane. In particular, we investigate whether randomization can be used to break known lower bounds for deterministic on-line independent set algorithms and present new upper and lower bounds; we also present an improved upper bound for on-line coloring.
Abstract: We propose and evaluate an impairment-aware multi-parametric routing and wavelength assignment algorithm for online traffic in transparent optical networks. In such networks the signal quality of transmission degrades due to physical layer impairments. In the multiparametric approach, a vector of cost parameters is assigned to each link, from which the cost vectors of candidate lightpaths are calculated. In the proposed scheme the cost vector includes impairment generating source parameters, such as the path length, the number of hops, the number of crosstalk sources and other inter-lightpath interfering parameters, so as to indirectly account for the physical layer effects. For a requested connection the algorithm calculates a set of candidate lightpaths, whose quality of transmission is validated using a function that combines the impairment generating parameters. For selecting the lightpath we propose and evaluate various optimization functions that correspond to different IA-RWA algorithms. Our performance results indicate that the proposed algorithms utilize efficiently the available resources and minimize the total accumulated signal degradation on the selected lightpaths, while having low execution times.
Abstract: Managing corporate Information Technology (IT) environment becomes increasingly complex as server logic architecture becomes distributed and the number of manageable entities increases. At the same time, the open source community has not yet produced a reliable systems and network management solution, even though there are open source initiatives specializing in individual fields of remote management. This paper presents OpenRSM, an integrated remote management system created by integrating individual open source initiatives and augmenting them to support additional functionality so that a lightweight integrated systems and network management solution is produced.
Abstract: One oft-cited strategy towards sustainability is improving energy efficiency inside public buildings. In this context, the educational buildings sector presents a very interesting and important case for the monitoring and management of buildings, since it addresses both energy and educational issues. In this work, we present and discuss the hardware IoT infrastructure substrate that provides real-time monitoring in multiple school buildings. We believe that such a system needs to follow an open design approach: rely on hardware-agnostic components that communicate over well-defined open interfaces. We present in detail the design of our hardware components, while also providing insights to the overall system design and a first set of results on their operation. The presented hardware components are utilized as the core hardware devices for GAIA, an EU research project aimed at the educational community. As our system has been deployed and tested in several public school buildings in Greece, we also report on its validation.
Abstract: The authors demonstrate an optical buffer architecture which is implemented using quantum dot semiconductor optical amplifiers (QD-SOAs) in order to achieve wavelength conversion with regenerative capabilities, for all optical packet switched networks. The architecture consists of cascaded programmable delay stages that minimise the number of wavelength converters required to implement the buffer. Physical layer simulations have been performed in order to reveal the potential of this scheme as well as the operating and device parameters of QD-SOA-based wavelength converters. The results obtained have indicated that, up to three time-slot interchanger (TSI) cascaded stages show good performance at 160 Gb/s in the 1550 nm communication window.
Abstract: Switching in core optical networks is currently being
performed using high-speed electronic or all-optical
circuit switches. Switching with high-speed electronics
requires optical-to-electronic (O/E) conversion of the
data stream, making the switch a potential bottleneck
of the network: any effort (including parallelization) for
electronics to approach the optical speeds seems to be
already reaching its practical limits. Furthermore, the
store-and-forward approach of packet-switching does
not seem suitable for all-optical implementation due to
the lack of practical optical random-access-memories
to buffer and resolve contentions. Circuit switching on
the other hand, involves a pre-transmission delay for
call setup and requires the aggregation of microflows
into circuits, sacrificing the granularity and the control
over individual flows, and is inefficient for bursty
traffic. Optical burst switching (OBS) has been proposed
by Qiao and Yoo (1999) to combine the advantages of
both packet and circuit switching and is considered a
promising technology for the next generation optical
internet.
Abstract: The objective of this research is to propose two new optical procedures for packet routing and forwarding in the framework of transparent optical networks. The single-wavelength label-recognition and packet-forwarding unit, which represents the central physical constituent of the switching node, is fully described in both cases. The first architecture is a hybrid opto-electronic structure relying on an optical serial-to-parallel converter designed to slow down the label processing. The remaining switching operations are done electronically. The routing system remains transparent for the packet payloads. The second architecture is an all-optical architecture and is based on the implementation of all-optical decoding of the parallelized label. The packet-forwarding operations are done optically. The major subsystems required in both of the proposed architectures are described on the basis of nonlinear effects in semiconductor optical amplifiers. The experimental results are compatible with the integration of the whole architecture. Those subsystems are a 4-bit time-to-wavelength converter, a pulse extraction circuit, an optical wavelength generator, a 3 x 8 all-optical decoder and a packet envelope detector.
Abstract: We demonstrate an optical power limiter using a
semiconductor optical amplifier (SOA)-based interferometric gate
powered by a strong continuous-wave input signal. We present a
detailed theoretical and experimental investigation of the power
limiting characteristics of saturated SOA-based switches, showing
good agreement between theory and experiment.
Abstract: We report a polarization insensitive fiber-based NOLM scheme that incorporates non-PM fibers as the non-linear
medium. The circuit is configured to operate as a fiber compressor and produces narrow and pedestal-free optical
pulses. The FRM stabilization ensures the excellent performance regardless of any externally applied perturbations.
Abstract: We propose and demonstrate a wavelength converter-based Time-Slot-Interchanger architecture consisting of cascaded programmable delay stages. It uses an integrated quadruple switch array of HMZI switches and operated error-free with 10 Gb/s NRZ packets.
Abstract: In this paper we propose an energy-efficient broadcast algorithm for wireless networks for the case where the transmission powers of the nodes are fixed. Our algorithm is based on the multicost approach and selects an optimal energy-efficient set of nodes for broadcasting, taking into account: i) the node residual energies, ii) the transmission powers used by the nodes, and iii) the set of nodes that are covered by a specific schedule. Our algorithm is optimal, in the sense that it can optimize any desired function of the total power consumed by the broadcasting task and the minimum of the current residual energies of the nodes, provided that the optimization function is monotonic in each of these parameters. Our algorithm has non-polynomial complexity, thus, we propose a relaxation producing a near-optimal solution in polynomial time. Using simulations we show that the proposed algorithms outperform other established solutions for energy-aware broadcasting with respect to both energy consumption and network lifetime. Moreover, it is shown that the near-optimal multicost algorithm obtains most of the performance benefits of the optimal multicost algorithm at a smaller computational overhead.
Abstract: This paper studies the data gathering problem in wireless networks, where data generated at the nodes has to be collected at a single sink. We investigate the relationship between routing optimality and fair resource management. In particular, we prove that for energy balanced data propagation, Pareto optimal routing and flow maximization are equivalent, and also prove that flow maximization is equivalent to maximizing the network lifetime. We algebraically characterize the network structures in which energy balanced data flows are maximal. Moreover, we algebraically characterize communication links which are not used by an optimal flow. This leads to the characterization of minimal network structures supporting the maximal flows.
We note that energy balance, although implying global optimality, is a local property that can be computed efficiently and in a distributed manner. We suggest online distributed algorithms for energy balance in different optimal network structures and numerically show their stability in particular setting. We remark that although the results obtained in this paper have a direct consequence in energy saving for wireless networks they do not limit themselves to this type of networks neither to energy as a resource. As a matter of fact, the results are much more general and can be used for any type of network and different type of resources.
Abstract: This paper studies the data gathering problem in wireless networks, where data generated at the nodes has to be collected at a single sink. We investigate the relationship between routing optimality and fair resource management. In particular, we prove that for energy-balanced data propagation, Pareto optimal routing and flow maximization are equivalent, and also prove that flow maximization is equivalent to maximizing the network lifetime. We algebraically characterize the network structures in which energy-balanced data flows are maximal. Moreover, we algebraically characterize communication links which are not used by an optimal flow. This leads to the characterization of minimal network structures supporting the maximal flows.
We note that energy-balance, although implying global optimality, is a local property that can be computed efficiently and in a distributed manner. We suggest online distributed algorithms for energy-balance in different optimal network structures and numerically show their stability in particular setting. We remark that although the results obtained in this paper have a direct consequence in energy saving for wireless networks they do not limit themselves to this type of networks neither to energy as a resource. As a matter of fact, the results are much more general and can be used for any type of network and different types of resources.
Abstract: Partitioned Optimal Passive Stars network, POPS(d,g), is an optical interconnection network of N processors (N=dg) which uses g^2 optical passive star couplers. The processors of this network are partitioned into g groups of d processors each and the g^2 couplers are used for connecting each group with each of the groups, including itself. In this paper, we present an optimal embedding of the hypercube on this network for all combinations of values of d and g. Specifically, we show how to optimally simulate the most common hypercube communication pattern where each hypercube node sends a packet along the same dimension. Optimal simulation of this communication on the POPS(d,g) network has already been presented for d ≤ g in the literature, but for the case d > g, the optimality remained an open problem. Now, we show that an optimal simulation is feasible in this case too.
Abstract: Partitioned Optimal Passive Stars network, POPS(d,g), is an optical interconnection network of N processors (N=dg) with g^2 optical passive star couplers. In this network, there are g groups of d processors each and the g^2 couplers are used for connecting each group with each of the groups, including itself. In this paper, we present a technique for optimally simulating a frequently arising hypercube communication pattern on this network for all combinations of values of d and g. Specifically, we show that one-hop movements on the hypercube along the same dimension can be simulated on the POPS(d,g) network in ⌈d/g⌉ slots for d ≠ g and in 2 slots for d = g.
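The slot counts stated in the abstract reduce to a one-line rule; a tiny helper (the function name is ours):

```python
from math import ceil

def pops_dimension_slots(d, g):
    """Slots needed on POPS(d, g) to simulate one-hop hypercube moves
    along a single dimension: ceil(d/g) slots when d != g, 2 when d == g."""
    return 2 if d == g else ceil(d / g)

print(pops_dimension_slots(8, 2))  # → 4 slots in the d > g case
```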
Abstract: Network continuous-media applications are emerging at a great pace. Cache memories have long been recognized as a key resource (along with network bandwidth) whose intelligent exploitation can ensure high performance for such applications. Cache memories exist at the continuous-media servers and their proxy servers in the network. Within a server, cache memories exist in a hierarchy (at the host, the storage-devices, and at intermediate multi-device controllers). Our research is concerned with how to best exploit these resources in the context of continuous media servers and in particular, how to best exploit the available cache memories at the drive, the disk array controller, and the host levels. Our results determine under which circumstances and system configurations it is preferable to devote the available memory to traditional caching (a.k.a. "data sharing") techniques as opposed to prefetching techniques. In addition, we show how to configure the available memory for optimal performance and optimal cost. Our results show that prefetching techniques are preferable for small-size caches (such as those expected at the drive level). For very large caches (such as those employed at the host level) caching techniques are preferable. For intermediate cache sizes (such as those at multi-device controllers) a combination of both strategies should be employed.
Abstract: The proliferation of peer-to-peer
(P2P) systems has come with various
compelling applications, including file sharing based on distributed
hash tables (DHTs) or other kinds of overlay networks.
Searching the content of files (especially Web search) requires
multi-keyword querying with scoring and ranking. Existing approaches
have no way of taking into account the correlation between
the keywords in the query. This paper presents our solution
that incorporates the queries and behavior of the users in the P2P
network such that interesting correlations can be inferred.
Abstract: MINERVA is a novel approach towards P2P Web search
that connects an a-priori unlimited number of peers, each of which maintains
a personal local database and a local search facility. Each peer posts
a small amount of metadata to a physically distributed directory layered
on top of a DHT-based overlay network that is used to efficiently select
promising peers from across the peer population that can best locally execute
a query. This paper proposes a live demonstration of MINERVA,
showcasing the full information lifecycle: crawling web pages, disseminating
metadata to a distributed directory, and executing queries online. We
additionally invite all visitors to instantly join the network by executing
a small piece of software.
Abstract: In this paper, we demonstrate optical transparency
in packet formatting and network traffic offered by all-optical
switching devices. Exploiting the bitwise processing capabilities
of these “optical transistors,” simple optical circuits are designed
verifying independence from packet length, synchronization
and packet-to-packet power fluctuations. Devices with these attributes
are key elements for achieving network flexibility, fine
granularity and efficient bandwidth-on-demand use. To this end, a
header/payload separation circuit operating with IP-like packets,
a clock and data recovery circuit handling asynchronous packets
and a burst-mode receiver for bursty traffic are presented. These
network subsystems can find application in future high capacity
data-centric photonic packet switched networks.
Abstract: Two simple and work-efficient parallel algorithms for the minimum spanning tree problem are presented. Both algorithms perform O(m log n) work. The first algorithm
runs in O(log^2 n) time on an EREW PRAM, while the second algorithm runs in O(log n) time on a Common CRCW PRAM.
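The algorithms themselves are PRAM constructions, but the Borůvka-style contraction that work-efficient parallel MST algorithms typically build on can be sketched sequentially; the following is our illustration, not the paper's algorithm:

```python
def boruvka_mst(n, edges):
    """Minimum spanning tree by Boruvka phases: in each phase, every
    component adds the cheapest edge leaving it. `edges` is a list of
    (weight, u, v) triples; returns the total MST weight."""
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, components = 0, n
    while components > 1:
        cheapest = {}  # component root -> cheapest outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, ru, rv)
        if not cheapest:
            break  # graph is disconnected
        for w, ru, rv in cheapest.values():
            if find(ru) != find(rv):  # re-check to avoid cycles
                parent[find(ru)] = find(rv)
                total += w
                components -= 1
    return total

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(boruvka_mst(4, edges))  # → 7
```

In the parallel setting, each phase's cheapest-edge selection and component contraction are done concurrently, which is where the O(m log n) work bound comes from.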
Abstract: In this paper, we describe the implementation, deployment, and testing of
the "Lightweight Target Tracking using Passive Traces" algorithm [1] on a FIRE wireless sensor testbed
located in the Theoretical Computer Science/Sensors Lab in
Geneva, Switzerland. We provide information about the hardware
installation and configuration, the changes we made to the
algorithm to adapt it to a real testbed, as well as the tools we
implemented to operate the network and receive feedback from
the algorithm's operation. Finally, we discuss the performance
evaluation findings of our implementation.
Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks. We
call it the PALOMA model, standing for PAssively mobile LOgarithmic space MAchines. The main
modification w.r.t. the Population Protocol model [2] is that agents now, instead of being automata, are
Turing Machines whose memory is logarithmic in the population size n. Note that the new model is still
easily implementable with current technology. We focus on complete communication graphs. We define
the complexity class PLM, consisting of all symmetric predicates on input assignments that are stably
computable by the PALOMA model. We assume that the agents are initially identical. Surprisingly, it
turns out that the PALOMA model can assign unique consecutive ids to the agents and inform them
of the population size! This allows us to give a direct simulation of a Deterministic Turing Machine
of O(n log n) space, thus, establishing that any symmetric predicate in SPACE(n log n) also belongs
to PLM. We next prove that the PALOMA model can simulate the Community Protocol model [15],
thus, improving the previous lower bound to all symmetric predicates in NSPACE(n log n). Going
one step further, we generalize the simulation of the deterministic TM to prove that the PALOMA
model can simulate a Nondeterministic TM of O(n log n) space. Although providing the same lower
bound, the important remark here is that the bound is now obtained in a direct manner, in the sense
that it does not depend on the simulation of a TM by a Pointer Machine. Finally, by showing that a
Nondeterministic TM of O(n log n) space decides any language stably computable by the PALOMA
model, we end up with an exact characterization for PLM: it is precisely the class of all symmetric
predicates in NSPACE(n log n).
Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks, called PM, standing for Passively mobile Machines. The main modification w.r.t. the Population Protocol model [Angluin et al. 2006] is that agents now, instead of being automata, are Turing Machines. We provide general definitions for unbounded memories, but we are mainly interested in computations upper-bounded by plausible space limitations. However, we prove that our results hold for more general cases. We focus on complete interaction graphs and define the complexity classes PMSPACE(f(n)) parametrically, consisting of all predicates that are stably computable by some PM protocol that uses O(f(n)) memory in each agent. We provide a protocol that generates unique identifiers from scratch only by using O(log n) memory, and use it to provide an exact characterization of the classes PMSPACE(f(n)) when f(n) = Ω(log n): they are precisely the classes of all symmetric predicates in NSPACE(nf(n)). As a consequence, we obtain a space hierarchy of the PM model when the memory bounds are Ω(log n). We next explore the computability of the PM model when the protocols use o(log log n) space per machine and prove that SEM = PMSPACE(f(n)) when f(n) = o(log log n), where SEM denotes the class of the semilinear predicates. Finally, we establish that the minimal space requirement for the computation of non-semilinear predicates is O(log log n).
Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks, called PM, standing for Passively mobile Machines. The main modification w.r.t. the Population Protocol model [Angluin et al. 2006] is that the agents now, instead of being automata, are Turing Machines. We provide general definitions for unbounded memories, but we are mainly interested in computations upper-bounded by plausible space limitations. However, we prove that our results hold for more general cases. We focus on complete interaction graphs and define the complexity classes PMSPACE(f(n)) parametrically, consisting of all predicates that are stably computable by some PM protocol that uses O(f(n)) memory in each agent. We provide a protocol that generates unique identifiers from scratch only by using O(log n) memory, and use it to provide an exact characterization of the classes PMSPACE(f(n)) when f(n) = Ω(log n): they are precisely the classes of all symmetric predicates in NSPACE(nf(n)). As a consequence, we obtain a space hierarchy of the PM model when the memory bounds are Ω(log n). Finally, we establish that the minimal space requirement for the computation of non-semilinear predicates is O(log log n).
Abstract: In this work we propose and develop a comprehensive infrastructure, coined PastryStrings, for supporting rich queries on both numerical attributes (with range and comparison predicates) and string attributes (accommodating equality, prefix, suffix, and containment predicates) over DHT networks utilising prefix-based routing. As event-based publish/subscribe information systems are a champion application class, we formulate our solution in terms of this environment.
Abstract: We consider path protection in the routing and wavelength assignment (RWA) problem for impairment-constrained WDM optical networks. The proposed multicost RWA algorithms select the primary and the backup lightpaths by accounting for physical-layer impairments. The backup lightpath may either be activated (1+1 protection) or be reserved but not activated, with activation taking place when and if needed (1:1 protection). In the case of 1:1 protection, the period of time during which the backup lightpath's quality of transmission (QoT) remains valid, despite the possible establishment of future connections, should be preserved, so that the backup can be used if the primary lightpath fails. We show that, by using the multicost approach to solve the RWA with protection problem, great benefits can be achieved both in terms of the connection blocking rate and in terms of the validity period of the backup lightpath. Moreover, the multicost approach, by providing a set of candidate lightpaths for each source-destination pair instead of a single one, offers ease and flexibility in selecting the primary and the backup lightpaths.
Abstract: This work discusses PatrasSense, a system utilizing a range of technologies for monitoring environmental conditions, based on the concept of participatory monitoring. We discuss here the main issues in this category of applications, and its relation to the Real World Internet vision. We are interested in the use of sensors for collecting data related to the quality of life in urban areas, making it publicly available. We describe a possible architecture for such a system and a plan for deployment in the city of Patras, Greece. We have thus far implemented a set of features and conducted an initial set of small-scale experiments using sensor devices mounted on vehicles. We report on the respective results.
Abstract: In this work, we study the impact of dynamically changing link capacities on the delay bounds of LIS (Longest-In-System) and SIS (Shortest-In-System) protocols on specific networks (that can be modelled as Directed Acyclic Graphs, DAGs) and on the stability bounds of greedy contention-resolution protocols running on arbitrary networks under Adversarial Queueing Theory. In particular, we consider the model of dynamic capacities, where each link capacity may take on integer values from [1, C] with C > 1, under a (w, ρ)-adversary.
Abstract: In this work, we study the impact of dynamically changing link capacities on the delay bounds of LIS (Longest-In-System) and SIS (Shortest-In-System) protocols on specific networks (that can be modelled as Directed Acyclic Graphs, DAGs) and on the stability bounds of greedy contention-resolution protocols running on arbitrary networks under Adversarial Queueing Theory. In particular, we consider the model of dynamic capacities, where each link capacity may take on integer values from [1, C] with C > 1, under a (w, ρ)-adversary. We show that the packet delay on DAGs for LIS is upper bounded by O(iwρC) and lower bounded by Ω(iwρC), where i is the level of a node v in a DAG (the length of the longest path leading to v when nodes are ordered by the topological order induced by the graph). In a similar way, we show that the performance of SIS on DAGs is lower bounded by Ω(iwρC), while the existence of a polynomial upper bound on the packet delay on DAGs when SIS is used for contention resolution remains an open problem. We prove that every queueing network running a greedy contention-resolution protocol is stable for a rate not exceeding a particular stability threshold, depending on C and the length of the longest path in the network.
Abstract: We demonstrate the use of impairment constraint routing for performance engineering of transparent metropolitan area optical networks. Our results show the relationship between blocking probability and different network characteristics such as span length, amplifier noise figure, and hit rate, and provide information on the system specifications required to achieve acceptable network performance.
Abstract: In this paper we present an implementation and performance evaluation of a descent algorithm that was proposed in [tsaspi] for the computation of approximate Nash equilibria of non-cooperative bi-matrix games. This algorithm, which achieves the best polynomially computable ε-approximate equilibria to date, is applied here to several problem instances designed so as to avoid the existence of easy solutions. Its performance is analyzed in terms of quality of approximation and speed of convergence. The results demonstrate significantly better performance than the theoretical worst-case bounds, both for the quality of approximation and for the speed of convergence. This motivates further investigation into the intrinsic characteristics of descent algorithms applied to bi-matrix games. We discuss these issues and provide some insights about possible variations and extensions of the algorithmic concept that could lead to further understanding of the complexity of computing equilibria. We also prove a new, significantly better bound on the number of loops required for convergence of the descent algorithm.
Abstract: We present a network operation tool called Impairment Aware Lightpath Computation Engine (IALCE) that incorporates an impairment-aware routing and wavelength assignment (RWA) algorithm. We perform experiments illustrating the flexibility of the engine and the performance of the algorithm.
Abstract: In this paper a performance analysis is presented of a packet scheduling switch that uses a series of feed-forward delays interconnected with elementary optical switches. This series of programmable delay blocks constitutes an optical buffer of depth T, whose purpose is to delay/re-arrange incoming packets that are in contention. Performance results have been obtained for random Bernoulli traffic, Pareto traffic, as well as for smooth traffic with an upper bound on its inherent burstiness.
Abstract: A performance analysis of an optically interconnected packet-scheduling switch network is presented. The scheduling switch uses a branch of feed-forward delays for each input port, interconnected with elementary optical switches to resolve contention. The scheduling switch is guaranteed to be lossless under a certain smoothness property condition. We investigate the packet-loss performance of the switch when the smoothness property condition does not hold, as well as the packet delay impairments at the edge of a scheduling switch interconnected network when this property is enforced.
Abstract: We present a detailed performance evaluation of a hybrid optical switching architecture called Overspill Routing in Optical Networks (ORION). The ORION architecture combines wavelength and (electronic) packet switching, so as to obtain the advantages of both switching paradigms. We have developed an extensive network simulator where the basic features of the ORION architecture were modeled, including suitable load-varying sources and edge/core node architectures. Various aspects of the ORION architecture were studied, including the routing policies used (i.e. once ORION always ORION, and lightpath re-entry) and the various options available for the buffer architecture. The complete network study shows that ORION can absorb temporary traffic overloads, as intended, provided sufficient buffering is present.
Abstract: Pervasive games represent a radically new game form that extends
gaming experiences out into the physical world, weaving ICTs
into the fabric of players’ real environment. This emerging type of
games is rather challenging for developers who are engaged in
exploring new technologies and methods to achieve high quality
interactive experience for players. This paper follows a systematic
approach to explore the landscape of pervasive gaming. First, we
present ten representative pervasive game projects, classified in
five genres based on their playing environment and features.
Then, we present a comparative view of those projects with
respect to several design aspects. Last, we shed light on current
trends, design principles and future directions for pervasive games
development.
Abstract: This paper describes recent research activities and results in the area of photonic switching
carried out within the Virtual Department on Switching (VDS) of the European e-Photon/
ONe Network of Excellence. Contributions from outstanding European research groups in
this field are collected to offer a platform for future research in optical switching. The paper
contains the main topics related to network scenarios, switch architectures and experiments,
with an effort to investigate synergies and challenging opportunities for collaboration
and integration of research expertise in the field.
Abstract: We consider the problem of planning a mixed line rates (MLR) WDM transport optical network. In an MLR network, the interference between connections of different modulation formats/line rates affects the transmission reach of these connections. We present algorithms to plan an MLR network that take into account the variation of the transmission reach according to the use of the modulation formats/line rates in the network.
Abstract: We present a polarization insensitive NOLM, employing a FRM and non-PM fibers as the nonlinear
medium. The circuit is configured to operate as a compressor and its performance remains stable irrespective of
any birefringence perturbations.
Abstract: We focus on the problem of computing an ε-Nash equilibrium of a bimatrix game, when ε is an absolute constant. We present a simple algorithm for computing a 3/4-Nash equilibrium for any bimatrix game in strongly polynomial time and we next show how to extend this algorithm so as to obtain a (potentially stronger) parameterized approximation. Namely, we present an algorithm that computes a (2+λ)/4-Nash equilibrium, where λ is the minimum, among all Nash equilibria, expected payoff of either player. The suggested algorithm runs in time polynomial in the number of strategies available to the players.
Abstract: We focus on the problem of computing an ε-Nash equilibrium of a bimatrix game, when ε is an absolute constant. We present a simple algorithm for computing a 3/4-Nash equilibrium for any bimatrix game in strongly polynomial time and we next show how to extend this algorithm so as to obtain a (potentially stronger) parameterized approximation. Namely, we present an algorithm that computes a (2+λ)/4-Nash equilibrium, where λ is the minimum, among all Nash equilibria, expected payoff of either player. The suggested algorithm runs in time polynomial in the number of strategies available to the players.
Abstract: We focus on the problem of computing an ε-Nash equilibrium of a bimatrix game, when ε is an absolute constant. We present a simple algorithm for computing a 3/4-Nash equilibrium for any bimatrix game in strongly polynomial time and we next show how to extend this algorithm so as to obtain a (potentially stronger) parameterized approximation. Namely, we present an algorithm that computes a (2+λ)/4-Nash equilibrium, where λ is the minimum, among all Nash equilibria, expected payoff of either player. The suggested algorithm runs in time polynomial in the number of strategies available to the players.
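The simple strongly polynomial 3/4-approximation described in this abstract is consistent with the well-known uniform-mixing construction (each player mixes 1/2-1/2 on the row/column of the maximum entry of each payoff matrix). The sketch below assumes that construction and payoffs normalized in [0, 1]; function names are illustrative, not taken from the paper.

```python
import random

def three_quarters_nash(A, B):
    """Return mixed strategies (x, y) that form a 3/4-Nash equilibrium
    of the bimatrix game (A, B), payoffs in [0, 1].
    Each player mixes 1/2-1/2 on the coordinates of the maximum
    entries of A and of B."""
    m, n = len(A), len(A[0])
    i1, j1 = max(((i, j) for i in range(m) for j in range(n)),
                 key=lambda ij: A[ij[0]][ij[1]])
    i2, j2 = max(((i, j) for i in range(m) for j in range(n)),
                 key=lambda ij: B[ij[0]][ij[1]])
    x = [0.0] * m
    y = [0.0] * n
    x[i1] += 0.5
    x[i2] += 0.5
    y[j1] += 0.5
    y[j2] += 0.5
    return x, y

def regrets(A, B, x, y):
    """Maximum gain either player could obtain by deviating to a pure
    strategy; both values are at most 3/4 for the profile above."""
    m, n = len(A), len(A[0])
    row_pay = sum(x[i] * y[j] * A[i][j] for i in range(m) for j in range(n))
    col_pay = sum(x[i] * y[j] * B[i][j] for i in range(m) for j in range(n))
    row_best = max(sum(y[j] * A[i][j] for j in range(n)) for i in range(m))
    col_best = max(sum(x[i] * B[i][j] for i in range(m)) for j in range(n))
    return row_best - row_pay, col_best - col_pay
```

The bound follows because the row player's payoff is at least one quarter of the maximum entry of A, while any deviation earns at most that maximum; symmetrically for the column player.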
Abstract: In this work we extend the population protocol model of Angluin et al., in order to model more powerful networks of very small, resource-limited artefacts (agents) that may follow some unpredictable passive movement. These agents communicate in pairs according to the commands of an adversary scheduler. A directed (or undirected) communication graph encodes the following information: each edge (u, υ) denotes that during the computation it is possible for an interaction between u and υ to happen in which u is the initiator and υ the responder. The new characteristic of the proposed mediated population protocol model is the existence of a passive communication provider that we call the mediator. The mediator is a simple database with communication capabilities. Its main purpose is to maintain the permissible interactions in communication classes, whose number is constant and independent of the population size. For this reason we assume that each agent has a unique identifier, of whose existence the agent itself is not informed and which it thus cannot store in its working memory. When two agents are about to interact, they send their ids to the mediator. The mediator searches for that ordered pair in its database and, if it exists in some communication class, it sends back to the agents the state corresponding to that class. If this interaction is not permitted to the agents, or, in other words, if this specific pair does not exist in the database, the agents are informed to abort the interaction. Note that in this manner, for the first time, we obtain some control over the safety of the network, and moreover the mediator provides us at any time with the network topology. Equivalently, we can model the mediator by communication links that are capable of keeping states from an edge state set of constant cardinality. This alternative way of thinking about the new model has many advantages concerning the formal modeling and the design of protocols, since it enables us to abstract away the implementation details of the mediator. Moreover, we extend the new model further by allowing the edges to keep read-only costs, whose values also belong to a constant-size set. We then allow the protocol rules for pairwise interactions to modify the corresponding edge state by also taking the costs into account. Thus, our protocol descriptions are still independent of the population size and do not use agent ids, i.e. they preserve scalability, uniformity and anonymity. The proposed Mediated Population Protocols (MPP) can stably compute graph properties of the communication graph. We show this for the property of maximal matchings (in undirected communication graphs), for finding the transitive closure of directed graphs, and for finding all edges of small cost. We demonstrate that our mediated protocols are stronger than the classical population protocols. First of all we notice an obvious fact: the classical model is a special case of the new model, that is, the new model can compute at least the same things as the classical one. We then present a mediated protocol that stably computes the product of two nonnegative integers in the case where G is complete, directed and connected. Such predicates are not semilinear, and it has been proven that classical population protocols in complete graphs can compute precisely the semilinear predicates; in this manner we show that there is at least one predicate that our model computes and which the classical model cannot. To show this fact, we state and prove a general theorem about the composition of two mediated population protocols, where the first one has stabilizing inputs. We also show that all predicates stably computable in our model are (non-uniformly) in the class NSPACE(m), where m is the number of edges of the communication graph. Finally, we define Randomized MPP and show that any Peano predicate accepted by a Randomized MPP can be verified in deterministic polynomial time.
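The mediator's lookup step described in the abstract (agents submit their ids; the mediator returns the state of their communication class or tells them to abort) can be sketched as follows. This is an illustrative sketch only; the class and method names are assumptions, not taken from the paper.

```python
class Mediator:
    """Illustrative sketch of the passive communication provider:
    a database mapping permissible ordered pairs of agent ids to the
    state of their communication class (a constant-size state set)."""

    def __init__(self, permissible):
        # permissible: dict {(initiator_id, responder_id): edge_state}
        self._db = dict(permissible)

    def request(self, u, v):
        """Agents u and v submit their ids before interacting.
        Returns the edge state if (u, v) is a permissible ordered pair,
        or None, which tells the agents to abort the interaction."""
        return self._db.get((u, v))

    def update(self, u, v, new_state):
        """Protocol rules may rewrite the edge state after an interaction."""
        if (u, v) in self._db:
            self._db[(u, v)] = new_state
```

Because permissibility is keyed on ordered pairs, the mediator also fixes which agent acts as initiator, and the database as a whole encodes the communication graph at any time.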
Abstract: This is a joint work with Ioannis Chatzigiannakis and Othon Michail.
We discuss here the population protocol model and most of its well-known extensions. The population protocol model aims to represent sensor networks consisting of tiny computational devices with sensing capabilities that follow some unpredictable and uncontrollable mobility pattern. It adopts a minimalistic approach and, thus, naturally computes a quite restricted class of predicates and exhibits almost no fault-tolerance. Most recent approaches make extra realistic and implementable assumptions, in order to gain more computational power and/or speed-up the time to convergence and/or improve fault-tolerance. In particular, the mediated population protocol model, the community protocol model, and the PALOMA model, which are all extensions of the population protocol model, are thoroughly discussed. Finally, the inherent difficulty of verifying the correctness of population protocols that run on complete communication graphs is revealed, but a promising algorithmic solution is presented.
Abstract: With a rapidly aging population, the health-care community will soon face a severe medical personnel shortage. It is imperative that automated health monitoring technologies be developed to help meet this shortage. In this direction, we are developing Ayushman, a health monitoring infrastructure and testbed. The vision behind its development is two-fold: first, to develop a wireless sensor-based automated health monitoring system that can be used in diverse situations, from home-based care to disaster situations, without much customization; second, to provide a testbed for implementing and testing communication protocols and systems. Ayushman provides a collection of services which enables it to perform this dual role. It possesses a hierarchical cluster topology which provides a fault-tolerant and reliable system by ensuring that each tier in the hierarchy is self-contained and can survive on its own in case of network partition. Ayushman is being implemented using off-the-shelf and diverse hardware and software components, which presents many challenges in system integration and operational reliability. This is an ongoing project at the IMPACT lab at Arizona State University, and in this paper we present our system's architecture and some of our experiences in the development of its initial prototype.
Abstract: We consider in this paper the problem of scheduling a set of independent parallel tasks (jobs) with respect to two criteria, namely, the makespan (completion time of the last finishing job) and the minsum (average completion time). There exist several algorithms with a good performance guarantee for one of these criteria. We are interested here in studying the optimization of both criteria simultaneously. The numerical values are given for the moldable task model, where the execution time of a task depends on the number of processors allotted to it. The main result of this paper is to derive explicitly a family of algorithms with guarantees for both the minsum and the makespan. The performance guarantee of these algorithms is better than that of the best algorithms known so far. The guarantee curve of the family is the set of all points (x, y) such that there is an algorithm with guarantee x on the makespan and y on the minsum. When the ratio on the minsum increases, the curve tends to the best ratio known for the makespan for moldable tasks (3/2). One extremal point of the curve is a (3, 6)-approximation algorithm. Finally a randomized version is given, which improves these results to (3, 4.08).
Abstract: This Volume contains the 11 papers corresponding to poster and demo presentations accepted to the 7th ACM/IEEE International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM 04), held October 4-6, 2004, in Venice, Italy.
MSWiM 2004 (http://www.cs.unibo.it/mswim2004/) is intended to provide
an international forum for original ideas, recent results and achievements on
issues and challenges related to mobile and wireless systems.
A Call for Posters was announced and widely disseminated, soliciting posters
that report on recent original results or on-going research in the area of wireless
and mobile networks. Prospective authors were encouraged to submit interesting
results on all aspects of modeling, analysis and simulation of mobile and
wireless networks and systems. The scope and topics of the Posters Session
were the same as those included in the MSWiM Call for Papers (see above).
Poster presentations were meant to provide authors with early feedback on
their research work and enable them to present their research and exchange
ideas during the Symposium.
All submissions to the call for posters as well as selected papers submitted
to MSWiM 04 were considered and reviewed. The review process resulted in
accepting the set of 11 papers included in this Volume. Accepted posters will
also be on display during the Symposium.
The set of papers in this Proceedings covers a wide range of important topics in wireless and mobile computing, including channel allocation in wireless networks, quality of service provisioning in IEEE 802.11 wireless LANs, IP mobility support, energy conservation, routing in mobile ad hoc networks, resource sharing, wireless access to the WWW, sensor networks, etc. The performance evaluation techniques used include both analysis and simulation.
We hope that the poster papers included in this Volume will facilitate a fruitful
and lively discussion and exchange of interesting and creative ideas during
the Symposium.
We wish to thank the MSWiM Steering Committee Chair Azzedine Boukerche and the Program Co-Chairs of MSWiM 04, Carla-Fabiana Chiasserini and Lorenzo Donatiello, for their valuable help in the selection procedure. We also thank the MSWiM 04 Publicity Co-Chairs Luciano Bononi, Helen Karatza and Mirela Sechi Moretti Annoni Notare for disseminating the Call for Posters.
We wish to warmly thank the Poster Proceedings Chair Ioannis Chatzigiannakis
for carefully doing an excellent job in preparing the Volume you now
hold in your hands.
Abstract: The 8th ACM/IEEE International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems Symposium (MSWiM 2005) solicits posters that report on recent original results or on-going research in the area of wireless and mobile networks.
Abstract: The 9th ACM/IEEE International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems Symposium (MSWiM 2006) solicits posters that report on recent original results or on-going research in the area of wireless and mobile networks.
Abstract: Wireless sensor network research usually focuses on the reliable and efficient collection of data. Here, we address the next step in the lifetime of traces: we aim at investigating and evaluating, by qualitative and quantitative means, data repositories of already collected measurements. We propose the use of a set of new metrics, which enable reliable evaluation of algorithms using traces (both in average cases and "stressful" setups), removing the need for running algorithms in a real testbed, at least in the development stage.
Abstract: Wireless sensor network research usually focuses on the reliable and efficient collection of data. In this paper we focus on the next step in the lifetime of traces: we aim at investigating and evaluating, by qualitative and quantitative means, data repositories of already collected measurements. Concerning the collected datasets, several important topics arise, like the need of exchanging traces between researchers using a common representation of the traces, and the need for a common classification of the traces based on a commonly-agreed set of statistical characteristics for retrospective utilization. In order to qualitatively address these issues, we propose the use of a novel set of metrics focusing on the in-network data aggregation problem class. These metrics enable reliable evaluation of algorithms using the same benchmark traces (both in average cases and "stressful" setups), removing the need for running algorithms in a real testbed, at least in the initial development stage. We present the results of our research as a first approach to addressing this problem, and in order to confirm our method, we characterized several traces with the proposed metrics. We validate the metrics by predicting the performance of three data-aggregation schemes using the available traces and checking the results by actually running the algorithms.
Abstract: We propose, implement and evaluate new energy conservation schemes for efficient data propagation in wireless sensor networks. Our protocols are adaptive, i.e. locally monitor the network conditions and accordingly adjust towards optimal operation choices. This dynamic feature is particularly beneficial in heterogeneous settings and in cases of redeployment of sensor devices in the network area. We implement our protocols and evaluate their performance through a detailed simulation study using our extended version of ns-2. In particular we combine our schemes with known communication paradigms. The simulation findings demonstrate significant gains and good trade-offs in terms of delivery success, delay and energy dissipation.
Abstract: In this paper we study the problem of assigning transmission ranges to the nodes of a multihop packet radio network so as to minimize the total power consumed, under the constraint that adequate power is provided to the nodes to ensure that the network is strongly connected (i.e., each node can communicate along some path in the network to every other node). Such an assignment of transmission ranges is called complete. We also consider the problem of achieving strongly connected bounded-diameter networks.
For the case of n + 1 colinear points at unit distance apart (the unit chain) we give a tight asymptotic bound for the minimum cost of a range assignment of diameter h when h is a fixed constant and when h > (1 + ε) log n, for some constant ε > 0. When the distances between the colinear points are arbitrary, we give an O(n^4) time dynamic programming algorithm for finding a minimum cost complete range assignment.
For points in three dimensions we show that the problem of deciding whether a complete range assignment of a given cost exists is NP-hard. For the same problem we give an O(n^2) time approximation algorithm which provides a complete range assignment with cost within a factor of two of the minimum. The complexity of this problem in two dimensions remains open, while the approximation algorithm works in this case as well.
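The factor-two approximation for points in three dimensions is consistent with the classical MST-based scheme: assign each node a range equal to its longest incident minimum-spanning-tree edge. The sketch below assumes that scheme and a sum-of-ranges cost model; the paper's actual algorithm may differ in detail.

```python
import math
from collections import deque

def mst_range_assignment(points):
    """Assign each node a transmission range equal to the length of its
    longest incident MST edge; the resulting network is strongly
    connected. points: list of coordinate tuples (any dimension)."""
    n = len(points)
    # Prim's algorithm on the complete Euclidean graph.
    in_tree = [False] * n
    best = [math.inf] * n       # cheapest edge connecting v to the tree
    parent = [-1] * n
    best[0] = 0.0
    ranges = [0.0] * n
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] >= 0:      # record the MST edge {u, parent[u]}
            d = math.dist(points[u], points[parent[u]])
            ranges[u] = max(ranges[u], d)
            ranges[parent[u]] = max(ranges[parent[u]], d)
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
                    parent[v] = u
    return ranges

def strongly_connected(points, ranges):
    """Check that every node reaches every other node via directed links
    u -> v that exist whenever dist(u, v) <= ranges[u]."""
    n = len(points)
    for s in range(n):
        seen = {s}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if v not in seen and math.dist(points[u], points[v]) <= ranges[u] + 1e-9:
                    seen.add(v)
                    queue.append(v)
        if len(seen) < n:
            return False
    return True
```

Every MST edge is covered from both endpoints, so the induced digraph contains the MST in both directions and is strongly connected; charging each edge to at most two nodes is what yields the factor of two under the assumed cost model.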
Abstract: Wireless networks have received significant attention during the recent years. Especially, ad hoc wireless networks emerged due to their potential applications in battlefield, emergency disaster relief, etc. [13]. Unlike traditional wired networks or cellular wireless networks, no wired backbone infrastructure is installed for ad hoc wireless networks.
Abstract: Recent rapid technological developments have led to the emergence of tiny, low-power, low-cost sensors. Such devices integrate sensing, limited data processing and communication capabilities. The effective distributed collaboration of large numbers of such devices can lead to the efficient accomplishment of large sensing tasks.
This talk focuses on several aspects of energy efficiency. Two protocols for data propagation are studied: the first creates probabilistically optimized redundant data transmissions to combine energy efficiency with fault tolerance, while the second guarantees (in a probabilistic way) the same per-sensor energy dissipation, towards balancing the energy load and prolonging the lifetime of the network.
A third protocol (in fact a power saving scheme) is also presented, that directly and adaptively affects power dissipation at each sensor. This "lower level" scheme can be combined with data propagation protocols to further improve energy efficiency.
Abstract: Wireless sensor networks are composed of a vast number of ultra-small, fully autonomous computing, communication, and sensing devices, with very restricted energy and computing capabilities, that cooperate to accomplish a large sensing task. Such networks can be very useful in practice. The authors propose extended versions of two data propagation protocols: the Sleep-Awake Probabilistic Forwarding (SW-PFR) protocol and the Hierarchical Threshold-Sensitive Energy-Efficient Network (H-TEEN) protocol. These nontrivial extensions aim at improving the performance of the original protocols by introducing sleep-awake periods in the PFR case to save energy and introducing a hierarchy of clustering in the TEEN case to better cope with large network areas. The authors implemented the two protocols and performed an extensive comparison via simulation of various important measures of their performance with a focus on energy consumption. Data propagation under this approach exhibits high fault tolerance and increases network lifetime.
Abstract: Wireless sensor and actor networks are comprised of a large number of small, fully autonomous computing, communication, sensing and actuation devices, with very restricted energy and computing capabilities. Such devices co-operate to accomplish a large sensing and acting task. Sensors gather information for an event in the physical world and notify the actors that perform appropriate actions by making a decision on receipt of the sensed information. Such networks can be very useful in practice i.e.~in the local detection of remote crucial events and the propagation of relevant data to decision
centers that perform appropriate actions upon the environment, thus realizing sensing and acting from a distance.
In this work we present a communication protocol that enables scalable, energy efficient and fault tolerant coordination, while allowing sensing tasks to be prioritized, in situated wireless sensor and actor networks. The sensors react locally to environment and context changes and interact with each other in order to adjust the performance of the network in terms of energy, latency and success rate on a per-task basis. To deal with the increased complexity of such large-scale systems, our protocol pulls some additional knowledge about the network in order to subsequently improve data forwarding towards the actors.
We implement and evaluate the protocol using large scale simulation, showing its suitability in networks where sensor to actor and actor to actor coordination are important for accomplishing tasks of different priorities.
Abstract: In this work we focus on the energy efficiency challenge in wireless sensor networks, from both an on-line perspective (related to routing), as well as a network design perspective (related to tracking). We investigate a few representative, important aspects of energy efficiency: a) the robust and fast data propagation b) the problem of balancing the energy
dissipation among all sensors in the network and c) the problem of efficiently tracking moving
entities in sensor networks. Our work here is a methodological survey of selected results that
have already appeared in the related literature.
In particular, we investigate important issues of energy optimization, like minimizing the total
energy dissipation, minimizing the number of transmissions as well as balancing the energy
load to prolong the system's lifetime. We review characteristic protocols and techniques in the recent literature, including probabilistic forwarding and local optimization methods. We study the problem of localizing and tracking multiple moving targets from a network design perspective, i.e. towards estimating the least possible number of sensors, their positions and operation characteristics needed to efficiently perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a covering combinatorial problem and then design and analyze an efficient approximate method for sensor placement
and operation.
Abstract: In this work we present three new distributed, probabilistic data propagation protocols for Wireless Sensor Networks which aim at maximizing the network's operational life and improving its performance. The keystone of these protocols' design is fairness, which dictates that fair portions of the network's workload should be assigned to each node, depending on its role in the system. All three protocols, EFPFR, MPFR and TWIST, emerged from the study of the rigorously analyzed protocol PFR. Its design elements were identified, and improvements were suggested and incorporated into the introduced protocols. The experiments conducted show that our proposals manage to improve PFR's performance in terms of success rate, total amount of energy saved, number of alive sensors and standard deviation of the energy left. Indicatively, we note that while PFR's success rate is 69.5%, TWIST achieves 97.5%, and its standard deviation of energy is almost half of that of PFR.
Abstract: We consider applications of probabilistic techniques in the
framework of algorithmic game theory. We focus on three distinct case
studies: (i) The exploitation of the probabilistic method to demonstrate
the existence of approximate Nash equilibria of logarithmic support sizes
in bimatrix games; (ii) the analysis of the statistical conflict that mixed
strategies cause in network congestion games; (iii) the effect of coalitions
in the quality of congestion games on parallel links.
Abstract: The existence of good probabilistic models for the job
arrival process and job characteristics is important for
the improved understanding of grid systems and the
prediction of their performance. In this study, we
present a thorough analysis of the job inter-arrival
times, the waiting times at the queues, the execution
times, and the data sizes exchanged at the
kallisto.hellasgrid.gr cluster, which is part of the
EGEE Grid infrastructure. By computing the Hurst
parameter of the inter-arrival times we find that the
job arrival process exhibits self-similarity/long-range
dependence. We also propose simple and intuitive
models for the job arrival process and the job
execution times. The models proposed were validated
and were found to be in very good agreement with our
empirical measurements.
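For context, the Hurst parameter mentioned above can be estimated from a trace with classical rescaled-range (R/S) analysis. The sketch below is an illustrative estimator, not the paper's exact methodology: it averages the R/S statistic over chunks of several sizes and fits H as the slope of log(R/S) against log(chunk size).

```python
import math, random

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis:
    for several chunk sizes n, average the R/S statistic over chunks,
    then fit log(R/S) ~ H * log(n) by least squares."""
    N = len(x)
    pts = []
    n = min_chunk
    while n <= N // 2:
        rs = []
        for start in range(0, N - n + 1, n):
            chunk = x[start:start + n]
            mean = sum(chunk) / n
            z, r_min, r_max = 0.0, 0.0, 0.0
            for v in chunk:                       # cumulative deviate series
                z += v - mean
                r_min, r_max = min(r_min, z), max(r_max, z)
            s = math.sqrt(sum((v - mean) ** 2 for v in chunk) / n)
            if s > 0:
                rs.append((r_max - r_min) / s)    # rescaled range R/S
        if rs:
            pts.append((math.log(n), math.log(sum(rs) / len(rs))))
        n *= 2
    # least-squares slope of log(R/S) against log(n)
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return (sum((px - mx) * (py - my) for px, py in pts)
            / sum((px - mx) ** 2 for px, py in pts))

# Uncorrelated noise should give H near 0.5; long-range-dependent
# arrivals (as found for the kallisto trace) give H markedly above 0.5.
rng = random.Random(42)
print(round(hurst_rs([rng.gauss(0, 1) for _ in range(4096)]), 2))
```

A value of H significantly above 0.5 on the inter-arrival series is the self-similarity/long-range-dependence signal the abstract refers to.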
Abstract: The efficient delivery of web content has been identified
as a key issue of research for some time. Forward (or
reverse) proxies, which are positioned along the request
route from the users' browsers to the origin content servers,
maintain a cache with copies of content from their origin
servers. The strategic placement of proxies across the
backbone ISP or the Content Delivery Network can
drastically improve the performance of the system (in terms
of network bandwidth savings, origin server load, and
user-request latency).
The ultimate goal of this work is to develop a tool that
decides on the position and the number of proxies required
in order to achieve given performance improvements
(expressed in terms of network bandwidth, origin server
load, and user-latency). We believe such a tool will be very
helpful to ISPs/CDNs, content providers, and end-users
and is, thus, very much lacking.
Abstract: The efficient delivery of internet content has been identified as a key issue of research for some time. Forward (or reverse) proxies, which are positioned along the request route from the users' browsers to the origin content servers, maintain a cache with copies of content from their origin servers. The strategic placement of proxies across the backbone ISP network (or the Content Delivery Network) can drastically improve the performance of the system (in terms of network bandwidth savings, origin server load, and user-request latency).
The ultimate goal of this work is to develop a tool that decides on the position and the number of proxies required in order to achieve given performance improvements (expressed in terms of network bandwidth, origin server load, and user-latency). We believe such a tool will be very helpful to ISPs/CDNs, content providers, and end-users and is, thus, very much lacking.
Abstract: The content-based publish/subscribe (pub/sub) paradigm for system design is becoming increasingly popular, offering
unique benefits for a large number of data-intensive applications. Coupled with the peer-to-peer technology, it can serve as
a central building block for developing data-dissemination applications deployed over a large-scale network infrastructure.
A key open problem towards the creation of large-scale content-based pub/sub infrastructures relates to efficiently and accu-
rately matching subscriptions with substring predicates to incoming events. This work addresses this issue.
Abstract: The content-based publish/subscribe (pub/sub) paradigm for system design is becoming increasingly popular, offering unique benefits for a large number of data-intensive applications. Coupled with the peer-to-peer technology, it can serve as a central building block for such applications
deployed over a large-scale network infrastructure. A key problem toward the creation of large-scale content-based pub/sub infrastructures relates to dealing efficiently with continuous queries (subscriptions) with rich predicates on string attributes. In particular, efficiently and accurately
matching substring queries to incoming events is an open problem. In this work we study this problem. We provide and analyze novel algorithms for processing subscriptions with substring predicates and events in a variety of environments. We provide experimental data demonstrating the
relative performance behavior of the proposed algorithms using as key metrics the network bandwidth requirements, number of messages, load balancing, as well as requirements for extra routing state (and related maintenance) and design flexibility.
Abstract: In this work we leverage the advantages of the Chord DHT to build a content-based publish-subscribe system that is scalable, self-organizing, and well-performing. However, DHTs provide very good support only for exact-match, equality predicates, while range predicates are expected to be very popular when specifying subscriptions in pub/sub systems. We will thus also provide solutions for efficiently supporting subscriptions with range predicates in Chord-based pub/sub systems.
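The gap the abstract points at can be illustrated by contrasting Chord-style uniform hashing, which destroys value locality, with an order-preserving mapping that sends a bounded numeric attribute to a contiguous arc of the identifier ring, so a range predicate covers consecutive nodes. This is a hypothetical sketch, not the paper's actual scheme:

```python
import hashlib

RING_BITS = 16
RING = 2 ** RING_BITS

def chord_id(key: str) -> int:
    """Chord-style identifier: uniform hash of the key.
    Good for equality lookups, useless for range predicates."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % RING

def order_preserving_id(value: float, lo: float, hi: float) -> int:
    """Map a bounded numeric attribute onto the ring preserving order,
    so a range predicate [a, b] maps to one contiguous ID interval."""
    frac = (value - lo) / (hi - lo)
    return min(RING - 1, int(frac * RING))

# A subscription with predicate 20 <= temperature <= 30 (domain [0, 100])
# corresponds to the arc of nodes owning this contiguous ID interval:
arc = (order_preserving_id(20, 0, 100), order_preserving_id(30, 0, 100))
assert arc[0] < arc[1]  # contiguous arc, unlike uniform hashing
```

The trade-off, which motivates dedicated solutions, is that order-preserving placement sacrifices the load balancing that uniform hashing gives for free.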
Abstract: We present methods for obtaining high-repetition-
rate full duty-cycle RZ optical pulse trains from lower rate
laser sources. These methods exploit the memory properties of the
Fabry–Perot filter for rate multiplication, while amplitude equalization
in the output pulse train is achieved with a semiconductor
optical amplifier or with a second transit through the Fabry–Perot
filter. We apply these concepts to experimentally demonstrate rate
quadruplication from 10 to 40 GHz and discuss the possibility of
taking advantage of the proposed methods to achieve repetition
rates up to 160 GHz.
Abstract: We consider the QoS-aware Multicommodity Flow problem,
a natural generalization of the weighted multicommodity flow problem
where the demands and commodity values are elastic to the Quality-of-
Service characteristics of the underlying network. The problem is fundamental
in transportation planning and also has important applications
beyond the transportation domain. We provide an FPTAS for the QoS-aware
Multicommodity Flow problem by building upon a Lagrangian
relaxation method and a recent FPTAS for the non-additive shortest
path problem.
Abstract: Grids offer a transparent interface to geographically scattered computation, communication, storage and
other resources. In this chapter we propose and evaluate QoS-aware and fair scheduling algorithms for
Grid Networks, which are capable of optimally or near-optimally assigning tasks to resources, while taking
into consideration the task characteristics and QoS requirements. We categorize Grid tasks according to
whether or not they demand hard performance guarantees. Tasks with one or more hard requirements are
referred to as Guaranteed Service (GS) tasks, while tasks with no hard requirements are referred to as Best
Effort (BE) tasks. For GS tasks, we propose scheduling algorithms that provide deadline or computational
power guarantees, or offer fair degradation in the QoS such tasks receive in case of congestion. Regarding
BE tasks our objective is to allocate resources in a fair way, where fairness is interpreted in the max-min fair
share sense. Though we mainly address scheduling problems on computation resources, we also look at
the joint scheduling of communication and computation resources and propose routing and scheduling
algorithms aiming at co-allocating both resource types so as to satisfy their respective QoS requirements.
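Max-min fair share, the fairness notion adopted above for BE tasks, can be computed by the standard progressive-filling procedure. The sketch below is illustrative only (a single divisible resource, not the chapter's Grid scheduling algorithms): every unsatisfied demand gets an equal split of the remaining capacity, small demands are satisfied fully, and their surplus is redistributed.

```python
def max_min_fair(capacity: float, demands: list[float]) -> list[float]:
    """Progressive filling: repeatedly offer each unsatisfied demand an
    equal share of the remaining capacity; a demand smaller than its
    share is fully satisfied and the surplus is redistributed."""
    n = len(demands)
    alloc = [0.0] * n
    remaining = capacity
    active = sorted(range(n), key=lambda i: demands[i])  # smallest first
    while active and remaining > 1e-12:
        share = remaining / len(active)
        i = active[0]
        if demands[i] <= share:
            alloc[i] = demands[i]          # fully satisfied, return surplus
            remaining -= demands[i]
            active.pop(0)
        else:
            for j in active:               # everyone left gets the equal share
                alloc[j] = share
            remaining = 0.0
            active = []
    return alloc

# Capacity 10 shared by demands 2, 4 and 10:
# demand 2 is satisfied; demands 4 and 10 split the remaining 8 equally.
print(max_min_fair(10, [2, 4, 10]))  # [2.0, 4.0, 4.0]
```

No allocation can be raised without lowering a smaller one, which is exactly the max-min fairness criterion.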
Abstract: This research attempts a first step towards investigating the aspect of radiation awareness in environments with abundant heterogeneous wireless networking. We call radiation at a point of a 3D wireless network the total amount of electromagnetic quantity the point is exposed to; our definition incorporates the effect of topology as well as the time domain, data traffic and environment aspects. Even if the impact of radiation to human health remains largely unexplored and controversial, we believe it is worth trying to understand and control. We first analyze radiation in well-known topologies (random and grids); randomness is meant to capture not only node placement but also the uncertainty of the wireless propagation model. This initial understanding of how radiation adds up (over space and time) can be useful in network design, to reduce health risks. We then focus on the minimum radiation path problem of finding the lowest-radiation trajectory of a person moving from a source to a destination point of the network region. We propose three heuristics which provide low-radiation paths while keeping path length low; one heuristic in fact gets quite close to the offline solution we compute by a shortest path algorithm. Finally, we investigate the interesting impact of diverse node mobility on the heuristics' performance.
Abstract: This research further investigates the recently introduced (in [4]) paradigm of radiation awareness in ambient environments with abundant heterogeneous wireless networking from a distributed computing perspective. We call radiation at a point of a wireless network the total amount of electromagnetic quantity the point is exposed to; our definition incorporates the effect of topology as well as the time domain and environment aspects. Even if the impact of radiation to human health remains largely unexplored and controversial, we believe it is worth trying to understand and control, in a way that does not decrease much the quality of service offered to users of the wireless network.
In particular, we here focus on the fundamental problem of efficient data propagation in wireless sensor networks, trying to keep latency low while maintaining at low levels the radiation cumulated by wireless transmissions. We first propose greedy and oblivious routing heuristics that are radiation aware. We then combine them with temporal back-off schemes that use local properties of the network (e.g. number of neighbours, distance from sink) in order to "spread" radiation in a spatio-temporal way. Our proposed radiation aware routing heuristics succeed in keeping radiation levels low, while not increasing latency.
Abstract: We employ here the Probabilistic Method, a way of reasoning which shows the existence of combinatorial structures and properties, to prove or refute conjectures. The radiocoloring problem (RCP) is the problem of assigning frequencies to the transmitters of a network so that transmitters at distance one get frequencies that differ by at least two and any two transmitters at distance two get frequencies that differ by at least one. The objective of an assignment may be to minimize the number of frequencies used (order) or the range of them (span). Here, we study the optimization version of RCP where the objective is to minimize the order. In graph theory terms the problem is modelled by a variation of the vertex graph coloring problem. We investigate upper bounds for the minimum number of colors needed in a radiocoloring assignment of a graph G. We first provide an upper bound for the minimum number of colors needed to radiocolor a graph G of girth at most 7. Then, we study whether the minimum order of a radiocoloring assignment is determined by local conditions, i.e. by the minimum order radiocoloring assignment of some small subgraphs of it. We state a related conjecture which is analogous to a theorem of Molloy and Reed for the vertex coloring problem [12]. We then investigate whether the conjecture contradicts a theorem of Molloy and Reed for the vertex coloring when applied on the graph G².
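The two distance constraints defining a radiocoloring are simple to verify mechanically. A minimal checker (a hypothetical helper for illustration, not from the paper) for a graph given as an adjacency map:

```python
from itertools import combinations

def is_radiocoloring(adj: dict, phi: dict) -> bool:
    """Check the radiocoloring condition: frequencies of adjacent vertices
    differ by >= 2, and frequencies of vertices at distance two differ
    by >= 1 (i.e. they are simply distinct for integer frequencies)."""
    for u, v in combinations(adj, 2):
        if v in adj[u]:                                   # distance one
            if abs(phi[u] - phi[v]) < 2:
                return False
        elif any(w in adj[v] for w in adj[u]):            # distance two
            if abs(phi[u] - phi[v]) < 1:
                return False
    return True

# A path a-b-c: a and c are at distance two, so they need distinct values.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
assert is_radiocoloring(path, {"a": 0, "b": 2, "c": 4})
assert not is_radiocoloring(path, {"a": 0, "b": 1, "c": 3})  # |0-1| < 2
```

The order of an assignment is then just `len(set(phi.values()))`, the quantity this abstract seeks to bound.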
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters, by exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → N such that |Φ(u)−Φ(v)| ≥ 2 when u,v are neighbors in G, and |Φ(u)−Φ(v)| ≥ 1 when the distance of u,v in G is two. The number of discrete frequencies and the range of frequencies used are called order and span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the radiocoloring problem for general graphs is hard to approximate (unless NP=ZPP) within a factor of n^(1/2−ε) (for any ε > 0), where n is the number of vertices of the graph. However, when restricted to some special cases of graphs, the problem becomes easier. We prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V|=n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case where λ ≥ 4Δ + 50.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → N such that |Φ(u)−Φ(v)| ≥ 2 when u,v are neighbors in G, and |Φ(u)−Φ(v)| ≥ 1 when the distance of u,v in G is two. The discrete number of frequencies used is called order and the range of frequencies used, span. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span (min span RCP) or the order (min order RCP).
In this paper, we deal with an interesting, yet not examined until now, variation of the radiocoloring problem: that of satisfying frequency assignment requests which exhibit some periodic behavior. In this case, the interference graph (modelling interference between transmitters) is some (infinite) periodic graph. Infinite periodic graphs usually model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they can model very large networks produced by the repetition of a small graph.
A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph Gi(Vi,Ei). The edge set of G is derived by connecting the vertices of each iteration Gi to some of the vertices of the next iteration Gi+1, the same for all Gi. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest.
We give two basic results:
• We prove that the min span RCP is PSPACE-complete for periodic planar graphs.
• We provide an O(n(Δ(Gi)+σ)) time algorithm (where |Vi|=n, Δ(Gi) is the maximum degree of the graph Gi and σ is the number of edges connecting each Gi to Gi+1), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(Gi)+σ tends to infinity.
We remark that any approximation algorithm for the min span RCP of a finite planar graph G, that achieves a span of at most αΔ(G)+constant, for any α and where Δ(G) is the maximum degree of G, can be used as a subroutine in our algorithm to produce an approximation for min span RCP of asymptotic ratio α for periodic planar graphs.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. The Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → IN such that |Φ(u) − Φ(v)| ≥ 2, when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u, v in G is two. The range of frequencies used is called span. Here, we consider the optimization version of the Radiocoloring Problem (RCP) of finding a radiocoloring assignment of minimum span, called min span RCP. In this paper, we deal with a variation of RCP: that of satisfying frequency assignment requests with some periodic behavior. In this case, the interference graph is an (infinite) periodic graph. Infinite periodic graphs model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they may model very large networks produced by the repetition of a small graph. A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph Gi(Vi,Ei). The edge set of G is derived by connecting the vertices of each iteration Gi to some of the vertices of the next iteration Gi+1, the same for all Gi. The model of periodic graphs considered here is similar to that of periodic graphs in Orlin [13], Marathe et al. [10]. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest. We give two basic results:
- We prove that the min span RCP is PSPACE-complete for periodic planar graphs.
- We provide an O(n(Δ(Gi) + σ)) time algorithm (where |Vi| = n, Δ(Gi) is the maximum degree of the graph Gi and σ is the number of edges connecting each Gi to Gi+1), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(Gi) + σ tends to infinity.
Abstract: We focus on the problem of computing approximate Nash equilibria and well-supported approximate Nash equilibria in random bimatrix games, where each player's payoffs are bounded and independent random variables, not necessarily identically distributed, but with almost common expectations. We show that the completely mixed uniform strategy profile, i.e., the combination of mixed strategies (one per player) where each player plays with equal probability each one of her available pure strategies, is with high probability an ϵ-Nash equilibrium and an ϵ-well-supported Nash equilibrium, where n is the number of pure strategies available to each player. This asserts that the completely mixed, uniform strategy profile is an almost Nash equilibrium for random bimatrix games, since it is, with high probability, an ϵ-well-supported Nash equilibrium where ϵ tends to zero as n tends to infinity.
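The quantity at stake can be measured directly: since the uniform profile has full support, the smallest admissible ϵ for a well-supported equilibrium is the spread of the pure-strategy payoffs against the opponent's uniform mixture. A small empirical sketch (`wsne_eps` is a hypothetical helper, not code from the paper):

```python
import random, statistics

def wsne_eps(A, B):
    """Smallest eps for which the uniform profile is an eps-well-supported
    Nash equilibrium: every pure strategy is in the support, so each must
    come within eps of the best pure response to the uniform opponent."""
    n = len(A)
    row_payoffs = [statistics.fmean(A[i]) for i in range(n)]  # row i vs uniform column
    col_payoffs = [statistics.fmean(B[i][j] for i in range(n))
                   for j in range(n)]                         # column j vs uniform row
    return max(max(row_payoffs) - min(row_payoffs),
               max(col_payoffs) - min(col_payoffs))

random.seed(0)
n = 400
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]
# For bounded i.i.d. payoffs, each pure strategy's payoff against the
# uniform mixture concentrates around its mean, so eps shrinks with n.
print(wsne_eps(A, B))
```

Rerunning with larger n shows the spread, and hence ϵ, tending to zero, which is the concentration phenomenon the abstract formalizes.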
Abstract: In the last few years there has been a great amount of interest in Random Constraint Satisfaction
Problems, both from an experimental and a theoretical point of view. Quite intriguingly, experimental results
with various models for generating random CSP instances suggest that the probability of such problems having
a solution exhibits a "threshold-like" behavior. In this spirit, some preliminary theoretical work has been done
in analyzing these models asymptotically, i.e., as the number of variables grows. In this paper we prove that,
contrary to beliefs based on experimental evidence, the models commonly used for generating random CSP
instances do not have an asymptotic threshold. In particular, we prove that asymptotically almost all instances
they generate are overconstrained, suffering from trivial, local inconsistencies. To complement this result we
present an alternative, single-parameter model for generating random CSP instances and prove that, unlike
current models, it exhibits non-trivial asymptotic behavior. Moreover, for this new model we derive explicit
bounds for the narrow region within which the probability of having a solution changes dramatically.
Abstract: In this work we present an efficient algorithm which, with high probability, provides
an almost uniform sample from the set of proper $\chi$-colourings of an instance of sparse
random graphs $G_{n,d/n}$, where $\chi=\chi(d)$ is a sufficiently large constant.
This work improves, asymptotically, the result of Dyer, Flaxman Frieze and Vigoda
in \cite{previous-result} where the algorithm proposed there needs at least
$\Theta(\frac{\log \log n}{\log \log \log n})$ colours.
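For context, the textbook MCMC approach to sampling proper colourings is Glauber dynamics, sketched below on a toy graph. This baseline is illustrative only: the algorithm of the paper for sparse random graphs $G_{n,d/n}$ is substantially more involved.

```python
import random

def glauber_colouring(adj, q, steps, seed=0):
    """Glauber dynamics, the standard MCMC sampler for proper q-colourings:
    repeatedly pick a vertex and recolour it uniformly among the colours
    not used by its neighbours (valid whenever q > max degree)."""
    rng = random.Random(seed)
    vertices = list(adj)
    col = {}
    for v in vertices:                       # greedy initial proper colouring
        used = {col[u] for u in adj[v] if u in col}
        col[v] = min(c for c in range(q) if c not in used)
    for _ in range(steps):
        v = rng.choice(vertices)
        used = {col[u] for u in adj[v]}
        col[v] = rng.choice([c for c in range(q) if c not in used])
    return col

# 4-cycle with 3 colours: every state of the chain is a proper colouring.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
col = glauber_colouring(cycle, 3, 1000)
assert all(col[u] != col[v] for u in cycle for v in cycle[u])
```

The hard part, which the paper addresses, is proving how many colours and how many steps suffice for the chain's distribution to be almost uniform.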
Abstract: Sensor devices are battery powered, thus energy is the most precious resource of a wireless sensor network, since periodically replacing the battery of the nodes in large scale deployments is infeasible. The collected data is disseminated to a static control point (data sink) in the network, using node-to-node (multi-hop) data propagation [4, 6]. However, sensor devices consume significant amounts of energy, in addition to increased implementation complexity, since a routing protocol is executed. Also, a point of failure emerges in the area near the control center, where nodes relay the data from nodes that are farther away.
Abstract: We study the on-line version of the maximum independent set problem, for the case of disk graphs which are graphs resulting
from intersections of disks on the plane. In particular, we investigate whether randomization can be used to break known lower
bounds for deterministic on-line independent set algorithms and present new upper and lower bounds.
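The natural deterministic baseline here is the greedy on-line algorithm that accepts each arriving disk iff it is disjoint from all previously accepted ones. The sketch below (illustrative, not from the paper) also shows how the arrival order alone changes the outcome, which is exactly what adversarial lower bounds exploit:

```python
import math

def online_disk_mis(disks):
    """Greedy on-line independent set on a disk graph: accept a disk iff it
    intersects no previously accepted disk. Two disks intersect iff the
    distance between their centers is at most the sum of their radii."""
    accepted = []
    for (x, y, r) in disks:
        if all(math.hypot(x - ax, y - ay) > r + ar for (ax, ay, ar) in accepted):
            accepted.append((x, y, r))
    return accepted

# Three unit disks on a line. If the middle disk arrives first, it blocks
# both outer disks; if it arrives last, greedy keeps the two outer disks.
print(len(online_disk_mis([(0, 0, 1), (-1.5, 0, 1), (1.5, 0, 1)])))  # 1
print(len(online_disk_mis([(-1.5, 0, 1), (1.5, 0, 1), (0, 0, 1)])))  # 2
```

An adversary controlling the arrival order can force greedy far from optimal; the question the abstract studies is whether randomization can beat such deterministic lower bounds.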
Abstract: In this work we address the issue of efficient processing of range queries in DHT-based P2P data networks. The novelty of the proposed approach lies on architectures, algorithms, and mechanisms for identifying and appropriately exploiting powerful nodes in such networks. The existence of such nodes has been well documented in the literature and plays a key role in the architecture of most successful real-world P2P applications. However, till now, this heterogeneity has not been taken into account when architecting solutions for complex query processing, especially in DHT networks. With this work we attempt to fill this gap for optimizing the processing of range queries. Significant performance improvements are achieved due to (i) ensuring a much smaller hop count performance for range queries, and (ii) avoiding the dangers and inefficiencies of relying for range query processing on weak nodes, with respect to processing, storage, and communication capacities, and with intermittent connectivity. We present detailed experimental results validating our performance claims.
Abstract: We present a new technique for extending the decay time of the impulse response function of a Fabry-Perot filter while simultaneously maintaining a large bandwidth. It involves double passing through the filter and it can be used for the easy multiplication of the repetition rate of optical sources. We apply the concept to a 10-GHz pulse train to demonstrate experimentally the rate quadruplication to 40 GHz.
Abstract: We consider the problem of planning a mixed line rate
(MLR) wavelength division multiplexing (WDM) transport
optical network. In such networks, different modulation formats
are usually employed to support transmission at different line
rates. Previously proposed planning algorithms have used a
transmission reach bound for each modulation format/line rate,
mainly driven by single line rate systems. However, transmission
experiments in MLR networks have shown that physical layer
interference phenomena are more severe among transmissions
that utilize different modulation formats. Thus, the transmission
reach of a connection with a specific modulation format/line rate
depends also on the other connections that co-propagate with it
in the network. To plan a MLR WDM network, we present
routing and wavelength assignment (RWA) algorithms that
adapt the transmission reach of each connection according to the
use of the modulation formats/line rates in the network. The
proposed algorithms are able to plan the network so as to
alleviate cross-rate interference effects, enabling the
establishment of connections of acceptable quality over paths that
would otherwise be prohibited.
Abstract: A collection of pervasive street games is presented in this paper, which constitute a new social form of play taking place in public spaces such as city parks and streets. The main characteristic of these games is their ability to scale to a large number of players (in some cases involving more than 40 players) and to engage players located simultaneously in dispersed areas. Players interact with each other using a wide range of hardware devices that are either generic (such as smart phones) or specific (such as wireless sensor devices). We discuss a set of fundamental issues related to game design, emphasizing on the one hand the interaction of the players with the ubiquitous computing environment and on the other hand the embedding of the game rules within the environment. The games are developed using open source technologies and evaluated in a series of events such as the Athens Plaython 2012 festival. The feedback received from the players indicates that this new form of gaming is indeed very promising.
Abstract: The population protocol model (PP) proposed by Angluin et al. [2] describes sensor networks consisting of passively mobile finite-state agents. The agents sense their environment and communicate in pairs to carry out some computation on the sensed values. The mediated population protocol model (MPP) [13] extended the PP model by communication links equipped with a constant size buffer. The MPP model was proved in [13] to be stronger than the PP model. However, its most important contribution is that it provides us with the ability to devise optimizing protocols, approximation protocols and protocols that decide properties of the communication graph on which they run. The latter case, suggests a simplified model, the GDM model, that was formally defined and studied in [11]. GDM is a special case of MPP that captures MPP's ability to decide properties of the communication graph. Here we survey recent advances in the area initiated by the proposal of the PP model and at the same time we provide new protocols, novel ideas and results.
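The basic PP model described above is easy to simulate: a scheduler repeatedly picks an ordered pair of agents and applies a joint transition function to their states. A minimal sketch (illustrative; the simulator and the one-way epidemic computing OR are textbook examples, not the surveyed protocols):

```python
import random

def run_population_protocol(agents, delta, steps, seed=0):
    """Simulate a population protocol: at each step the scheduler picks an
    ordered pair (initiator, responder) of distinct agents, which update
    their states via the joint transition function delta."""
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.sample(range(len(agents)), 2)
        agents[i], agents[j] = delta(agents[i], agents[j])
    return agents

# One-way epidemic computing OR: "has at least one agent sensed a 1?"
# State 1 spreads on interaction and is never lost, so under a fair
# scheduler all agents eventually stabilize to output 1 iff some agent
# started with input 1.
delta = lambda p, q: (p, max(p, q))
out = run_population_protocol([1] + [0] * 19, delta, steps=50000)
assert all(s == 1 for s in out)
```

The MPP extension mentioned in the abstract would additionally thread a constant-size edge state through `delta`, one buffer per interacting pair.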
Abstract: This paper presents a theoretical and experimental
analysis of saturated semiconductor optical amplifier (SOA)-based
interferometric switching arrangements. For the first time, it is
shown that such devices can provide enhanced intensity modulation
reduction to return-to-zero (RZ) formatted input pulse trains,
when the SOA is saturated with a strong continuous-wave (CW)
input signal. A novel theoretical platform has been developed in
the frequency domain, which reveals that the intensity modulation
of the input pulse train can be suppressed by more than 10 dB at
the output. This stems from the presence of the strong CW signal
that transforms the sinusoidal transfer function of the interferometric
switch into an almost flat, strongly nonlinear curve. This
behavior has also been verified experimentally for both periodically
and randomly degraded, in terms of intensity modulation,
signals at 10 Gb/s using the ultrafast nonlinear interferometer as
the switching device. Performance analysis both in the time and
frequency domains is demonstrated, verifying the concept and its
theoretical analysis.
Abstract: In this paper we present a signaling protocol for
QoS differentiation suitable for optical burst switching networks.
The proposed protocol is a two-way reservation scheme that
employs delayed and in-advance reservation of resources. In this
scheme delayed reservations may be relaxed, introducing a
reservation duration parameter that is negotiated during call
setup phase. This feature allows bursts to reserve resources
beyond their actual size to increase their successful forwarding
probability and is used to provide QoS differentiation. The
proposed signaling protocol offers a low blocking probability for
bursts that can tolerate the round-trip delay required for the
reservations. We present the main features of the protocol and
describe in detail timing considerations regarding the call setup
and the reservation process. We also describe several methods
for choosing the protocol parameters so as to optimize
performance and present corresponding evaluation results.
Furthermore, we compare the performance of the proposed
protocol against that of two other typical reservation protocols, a
Tell-and-Wait and a Tell-and-Go protocol.
Abstract: A novel method for the multiplication of the repetition
rate of full duty-cycle return-to-zero optical sources is presented.
It employs the memory property of a Fabry–Pérot filter
for the multiplication task, combined with the gain saturation of a
semiconductor optical amplifier for amplitude equalization. This
concept has been applied to quadruplicate the rate of a distributed
feedback laser source operating at 10 GHz.
Abstract: We consider the conflicting problems of ensuring data-access load balancing and efficiently processing range queries on peer-to-peer data networks maintained over Distributed Hash Tables (DHTs). Placing consecutive data values in neighboring peers is frequently used in DHTs since it accelerates range query processing. However, such a placement is highly susceptible to load imbalances, which are preferably handled by replicating data (since replication also introduces fault tolerance benefits). In this paper, we present HotRoD, a DHT-based architecture that deals effectively with this combined problem through the use of a novel locality-preserving hash function, and a tunable data replication mechanism which allows trading off replication costs for fair load distribution. Our detailed experimentation study shows strong gains in both range query processing efficiency and data-access load balancing, with low replication overhead. To our knowledge, this is the first work that concurrently addresses the two conflicting problems using data replication.
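HotRoD's actual hash function and replication mechanism are detailed in the paper; the following is only a rough sketch of the locality-preserving idea (equal-width peer arcs and all names are assumptions for illustration): consecutive values map to neighboring ring positions, so a range query touches a contiguous run of peers.

```python
def locality_hash(value, lo, hi, ring_size):
    """Order-preserving hash: maps values in [lo, hi] to ring positions
    so that consecutive values land on neighboring arcs."""
    return int((value - lo) / (hi - lo) * (ring_size - 1))

def peers_for_range(a, b, lo, hi, ring_size, num_peers):
    """Peers (modeled as equal-width arcs of the ring) that a range
    query [a, b] must visit -- a contiguous sequence, by construction."""
    arc = ring_size / num_peers
    first = int(locality_hash(a, lo, hi, ring_size) // arc)
    last = int(locality_hash(b, lo, hi, ring_size) // arc)
    return list(range(first, last + 1))
```

The flip side shown in the abstract is exactly this contiguity: a hot value range concentrates load on few neighboring peers, which is what the tunable replication of hot arcs then spreads out.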
Abstract: This paper presents a summary of Optical Burst Switching (OBS) research within the VI framework program e-Photon/ONe network of excellence. The paper includes network aspects such as routing techniques, resilience and contention resolution, together with burst switch architectures. On the other hand, we also discuss traffic analysis issues, Quality of Service (QoS) schemes, TCP/IP over OBS and physical layer aspects for OBS.
Abstract: Braess's paradox states that removing a part of a network may improve the players' latency at equilibrium. In this work, we study the approximability of the best subnetwork problem for the class of random G_{n,p} instances proven prone to Braess's paradox by (Roughgarden and Valiant, RSA 2010) and (Chung and Young, WINE 2010). Our main contribution is a polynomial-time approximation-preserving reduction of the best subnetwork problem for such instances to the corresponding problem in a simplified network where all neighbors of s and t are directly connected by 0-latency edges. Building on this, we obtain an approximation scheme that, for any constant ε > 0 and with high probability, computes a subnetwork and an ε-Nash flow with maximum latency at most (1+ε)L + ε, where L is the equilibrium latency of the best subnetwork. Our approximation scheme runs in polynomial time if the random network has average degree O(poly(ln n)) and the traffic rate is O(poly(ln ln n)), and in quasipolynomial time for average degrees up to o(n) and traffic rates of O(poly(ln n)).
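The random instances studied above are involved, but the paradox itself can be checked by hand on the textbook four-node example (not from this paper): two parallel s-t routes, each with total latency x + 1 for flow x, plus a zero-latency shortcut between their midpoints. A quick sketch with the closed-form equilibria:

```python
def braess_latency(with_shortcut, rate=1.0):
    """Equilibrium latency in the classic Braess network: routes
    s->v->t and s->w->t with latencies x and 1 (resp. 1 and x) on
    their two edges, plus an optional 0-latency shortcut v->w.
    Closed-form equilibria, not a flow solver."""
    if with_shortcut:
        # every unit of traffic routes s->v->w->t: latency rate + 0 + rate
        return 2.0 * rate
    # traffic splits evenly over the two (x + 1)-latency routes
    return rate / 2.0 + 1.0
```

At rate 1 the shortcut raises the equilibrium latency from 1.5 to 2, so removing that edge (the "best subnetwork") improves everyone's latency, which is the effect the approximation scheme above quantifies on random instances.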
Abstract: We propose information aggregation as a method for summarizing the resource-related information, used by the task scheduler. Through this method the information of a set of resources can be uniformly represented, reducing at the same time the amount of information transferred in a Grid network. A number of techniques are described for aggregating the information of the resources belonging to a hierarchical Grid domain. This information includes the cpu and storage capacities at a site, the number of tasks queued, and other resource-related parameters. The quality of the aggregation scheme affects the efficiency of the scheduler's decisions. We use as a metric of aggregation efficiency the Stretch Factor (SF), defined as the ratio of the task delay when the task is scheduled using complete resource information over the task delay when an aggregation scheme is used. The simulation experiments performed show that the proposed aggregation schemes achieve large information reduction, while enabling good task scheduling decisions as indicated by the SF achieved.
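The operators and record fields below are illustrative assumptions (the paper's schemes are richer), but they show the basic single-point aggregation move: collapsing a hierarchical domain's per-resource records into one summary record that the scheduler sees instead.

```python
def aggregate_domain(resources):
    """Single-point aggregation of a Grid domain (illustrative
    operators, assumed here: capacities are summed across the domain,
    queue lengths are summarized by their min/max)."""
    return {
        "cpu_capacity": sum(r["cpu"] for r in resources),
        "storage": sum(r["storage"] for r in resources),
        "min_queued": min(r["queued"] for r in resources),
        "max_queued": max(r["queued"] for r in resources),
        "count": len(resources),
    }
```

One record per domain replaces one record per resource; the SF metric defined above then measures how much scheduling quality this compression costs.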
Abstract: Future Grid Networks should be able to provide Quality of Service (QoS) guarantees to their users. In this work we examine the way Grid resources should be configured so as to provide deterministic delay guarantees to Guaranteed Service (GS) users and fairness to Best Effort (BE) users. The resources are partitioned in groups that serve GS users only, or BE users only, or both types of users with different priorities. Furthermore, the GS users are registered to the resources either statically or dynamically, while both single and multi-CPU resources are examined. Finally, the proposed resource configurations for providing QoS are implemented in the GridSim environment and a number of simulations are executed. Our results indicate that the allocation of resources to both types of users, with different priorities, results in fewer deadlines missed and better resource utilization. Finally, benefits can be derived from the dynamic registration of GS users to the resources.
Abstract: In recent years there has been significant interest in the study of random k-SAT formulae. For a given set of n Boolean variables, let B_k denote the set of all possible disjunctions of k distinct, non-complementary literals from its variables (k-clauses). A random k-SAT formula F_k(n,m) is formed by selecting uniformly and independently m clauses from B_k and taking their conjunction. Motivated by insights from statistical mechanics that suggest a possible relationship between the "order" of phase transitions and computational complexity, Monasson and Zecchina (Phys. Rev. E 56(2) (1997) 1357) proposed the random (2+p)-SAT model: for a given p ∈ [0,1], a random (2+p)-SAT formula, F_{2+p}(n,m), has m randomly chosen clauses over n variables, where pm clauses are chosen from B_3 and (1−p)m from B_2. Using the heuristic "replica method" of statistical mechanics, Monasson and Zecchina gave a number of non-rigorous predictions on the behavior of random (2+p)-SAT formulae. In this paper we give the first rigorous results for random (2+p)-SAT, including the following surprising fact: for p ≤ 2/5, with probability 1 − o(1), a random (2+p)-SAT formula is satisfiable iff its 2-SAT subformula is satisfiable. That is, for p ≤ 2/5, random (2+p)-SAT behaves like random 2-SAT.
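The model above is easy to experiment with. The sketch below (an illustration, not the paper's machinery) generates a random (2+p)-SAT formula in the Monasson-Zecchina sense and checks satisfiability by brute force; for small p one can then compare the formula against its 2-SAT subformula, as in the dichotomy result.

```python
import itertools
import random

def random_2p_sat(n, m, p, seed=0):
    """Random (2+p)-SAT formula: m clauses over n variables, round(p*m)
    of length 3 and the rest of length 2, each over distinct,
    non-complementary literals. Literals are signed 1-based ints."""
    rng = random.Random(seed)
    def clause(k):
        vars_ = rng.sample(range(n), k)  # distinct variables => no complements
        return tuple(v + 1 if rng.random() < 0.5 else -(v + 1) for v in vars_)
    m3 = round(p * m)
    return [clause(3) for _ in range(m3)] + [clause(2) for _ in range(m - m3)]

def satisfiable(formula, n):
    """Brute-force SAT check, fine for small n (real 2-SAT is decidable
    in linear time via the implication graph)."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in formula):
            return True
    return False
```

With `f = random_2p_sat(n, m, p)`, the 2-SAT subformula of the theorem is just `[c for c in f if len(c) == 2]`.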
Abstract: Distributed algorithm designers often assume that system processes execute the same predefined software. Alternatively, when they do not assume that, designers turn to non-cooperative games and seek an outcome that corresponds to a rough consensus when no coordination is allowed. We argue that both assumptions are inapplicable in many real distributed systems, e.g., the Internet, and propose designing self-stabilizing and Byzantine fault-tolerant distributed game authorities. Once established, the game authority can secure the execution of any complete information game. As a result, we reduce costs that are due to the processes' freedom of choice. Namely, we reduce the price of malice.
Abstract: We consider the line planning problem in public transportation, under a robustness perspective. We present a mechanism for robust line planning in the case of multiple line pools, when the line operators have a different utility function per pool. We conduct an experimental study of our mechanism on both synthetic and real-world data that shows fast convergence to the optimum. We also explore a wide range of scenarios, varying from an arbitrary initial state (to be solved) to small disruptions in a previously optimal solution (to be recovered). Our experiments with the latter scenario show that our mechanism can be used as an online recovery scheme causing the system to re-converge to its optimum extremely fast.
Abstract: The problem of robust line planning asks for a set of origin-destination paths (lines) along with their traffic rates (frequencies) in an underlying railway network infrastructure, which are robust to fluctuations of real-time parameters of the solution. In this work, we investigate a variant of robust line planning stemming from recent regulations in the railway sector that introduce competition and free railway markets, and set up a new application scenario: there is a (potentially large) number of line operators that have their lines fixed and operate as competing entities struggling to exploit the underlying network infrastructure via frequency requests, while the management of the infrastructure itself remains the responsibility of a single (typically governmental) entity, the network operator. The line operators are typically unwilling to reveal their true incentives. Nevertheless, the network operator would like to ensure a fair (or, socially optimal) usage of the infrastructure, e.g., by maximizing the (unknown to him) aggregate incentives of the line operators. We show that this can be accomplished in certain situations via a (possibly anonymous) incentive-compatible pricing scheme for the usage of the shared resources, that is robust against the unknown incentives and the changes in the demands of the entities. This brings up a new notion of robustness, which we call incentive-compatible robustness, that considers as robustness of the system its tolerance to the entities' unknown incentives and elasticity of demands, aiming at an eventual stabilization to an equilibrium point that is as close as possible to the social optimum.
Abstract: In this paper we present an efficient general simulation strategy for computations designed for fully operational BSP machines of n ideal processors, on n-processor dynamic-fault-prone BSP machines. The fault occurrences are fail-stop and fully dynamic, i.e., they are allowed to happen on-line at any point of the computation, subject to the constraint that the total number of faulty processors may never exceed a known fraction. The computational paradigm can be exploited for robust computations over virtual parallel settings with a volatile underlying infrastructure, such as a NETWORK OF WORKSTATIONS (where workstations may be taken out of the virtual parallel machine by their owner). Our simulation strategy is Las Vegas (i.e., it may never fail, due to backtracking operations to robustly stored instances of the computation, in case of locally unrecoverable situations). It adopts an adaptive balancing scheme of the workload among the currently live processors of the BSP machine. Our strategy is efficient in the sense that, compared with an optimal off-line adversarial computation under the same sequence of fault occurrences, it achieves an O((log n · log log n)^2) multiplicative factor times the optimal work (namely, this measure is in the sense of the "competitive ratio" of on-line analysis). In addition, our scheme is modular, integrated, and considers many implementation points. We comment that, to our knowledge, no previous work on robust parallel computations has considered fully dynamic faults in the BSP model, or in general distributed memory systems. Furthermore, this is the first time an efficient Las Vegas simulation in this area is achieved.
Abstract: The ROTA project is aimed at offering high-speed network connections to ships traveling at archipelagos, i.e. seas with dense clusters of islands, such as the Aegean sea. The ROTA system is based on the use of directional antennas on-board the ships and stationary antennas on the islands. The stationary antennas on the islands are connected to high-speed commercial computer networks, which are available on practically all inhabited islands at the Aegean sea, thus implementing a Virtual Private Network (VPN). This approach is much more cost-effective when compared to current approaches that rely on the usage of satellite communications. This paper presents the concept behind ROTA platform.
Abstract: A key problem in networks that support advance reservations is the routing and time scheduling of connections with flexible starting time and known data transfer size. In this paper we present a multicost routing and scheduling algorithm for selecting the path to be followed by such a connection and the time the data should start and end transmission at each link so as to minimize the reception time at the destination, or optimize some other performance criterion. The utilization profiles of the network links, the link propagation delays, and the parameters of the connection to be scheduled form the inputs to the algorithm. We initially present a scheme of non-polynomial complexity to compute a set of so called non-dominated candidate paths, from which the optimal path can be found. We then propose two mechanisms to appropriately prune the set of candidate paths in order to find multicost routing and scheduling algorithms of polynomial complexity. We examine the performance of the algorithms in the special case of an Optical Burst Switched network. Our results indicate that the proposed polynomial-time algorithms have performance that is very close to that of the optimal algorithm. We also study the effects network propagation delays and link-state update policies have on performance.
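The non-dominated (Pareto) pruning step at the heart of the multicost scheme can be sketched generically; the cost vectors and the pure minimization setting here are simplifying assumptions (the paper's costs include utilization profiles and propagation delays, with richer domination relations).

```python
def dominates(u, v):
    """Cost vector u dominates v if it is no worse in every component
    and strictly better in at least one (all components minimized)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(paths):
    """Keep only Pareto-optimal cost vectors. The path optimizing any
    monotonic performance criterion is always among the survivors,
    which is why the candidate set can be pruned this way."""
    return [p for i, p in enumerate(paths)
            if not any(dominates(q, p) for j, q in enumerate(paths) if j != i)]
```

For example, among the cost vectors (3,1), (1,3), (2,2), (3,3), only (3,3) is pruned; the first three are mutually incomparable and all survive, which also hints at why the exact candidate set can grow non-polynomially.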
Abstract: We propose QoS-aware scheduling algorithms for Grid Networks that are capable of optimally or near-optimally
assigning computation and communication tasks to grid resources. The routing and scheduling algorithms to be
presented take as input the resource utilization profiles and the task characteristics and QoS requirements, and
co-allocate resources while accounting for the dependencies between communication and computation tasks.
Keywords: communication and computation utilization profiles, multicost routing and scheduling, grid
computing.
Abstract: A key problem in networks that support advance reservations is the routing and time scheduling of connections with flexible starting time. In this paper we present a multicost routing and scheduling algorithm for selecting the path to be followed by such a connection and the time the data should start, so as to minimize the reception time at the destination, or to meet some other QoS requirement. The utilization profiles of the network links, the link propagation delays, and the parameters of the connection to be scheduled form the inputs to the algorithm. We initially present a scheme of non-polynomial complexity to compute a set of so-called non-dominated candidate paths, from which the optimal path can be found. By appropriately pruning the set of candidate paths using path pseudo-domination relationships, we also find multicost routing and scheduling algorithms of polynomial complexity. We examine the performance of the algorithms in the special case of an Optical Burst Switched network. Our results indicate that the proposed polynomial-time algorithms have performance that is very close to that of the optimal algorithm.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) has been recently proposed as a modulation technique for optical networks, due to its good spectral efficiency and impairment tolerance. Optical OFDM is much more flexible compared to traditional WDM systems, enabling elastic bandwidth transmissions. We consider the planning problem of an OFDM-based optical network where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers. We introduce the Routing and Spectrum Allocation (RSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, and present various algorithms to solve the RSA. We start by presenting an optimal ILP RSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RSA into its two constituent subproblems, namely, (i) routing and (ii) spectrum allocation (R+SA), and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all traffic matrix connections. To feed the sequential algorithm, two ordering policies are proposed; a simulated annealing meta-heuristic is also proposed to obtain even better orderings. Our results indicate that the proposed sequential heuristic with appropriate ordering yields close to optimal solutions in low running times.
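The paper's algorithms are not reproduced here, but the contiguous-spectrum constraint that distinguishes RSA from RWA can be illustrated with an assumed, minimal first-fit allocator over per-link slot sets: a connection needs the same contiguous block of subcarrier slots free on every link of its path.

```python
def first_fit_spectrum(link_usage, path_links, subcarriers, total_slots):
    """First-fit spectrum allocation sketch: find the lowest block of
    `subcarriers` contiguous slots free on every link of the path
    (spectrum continuity + contiguity), and reserve it.
    `link_usage` maps link name -> set of occupied slot indices."""
    for start in range(total_slots - subcarriers + 1):
        block = range(start, start + subcarriers)
        if all(s not in link_usage[l] for l in path_links for s in block):
            for l in path_links:
                link_usage[l].update(block)  # reserve on the whole path
            return start
    return None  # blocked: no common contiguous block on the path
```

Serving connections one-by-one with such an allocator is what makes the ordering policies above matter: different orders fragment the spectrum differently.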
Abstract: In this paper we demonstrate the significant impact of the user mobility rates on the performance of two different approaches for designing routing protocols for ad-hoc mobile networks: (a) the route creation and maintenance approach, and (b) the "support" approach, which forces a few hosts to move, acting as "helpers" for message delivery. We study a set of representative protocols for each approach, i.e., DSR and ZRP for the first approach and RUNNERS for the second. We have implemented the three protocols and performed a large-scale and detailed simulation study of their performance. Our findings are: (i) DSR achieves low message delivery rates but manages to deliver messages very fast; (ii) ZRP behaves well in networks of low mobility rate, while its performance drops for networks of highly mobile users; (iii) RUNNERS seem to tolerate well (and in fact benefit from) high mobility rates. Based on our investigation, we design and implement two new protocols that result from the synthesis of the investigated routing approaches. We conducted an extensive, comparative simulation study of their performance. The new protocols behave well in networks of diverse mobility rates, and in some cases they even outperform the original ones by achieving lower message delivery delays.
Abstract: Data propagation in wireless sensor networks is usually performed as a multihop process. Thus, to deliver a single message, the resources of many sensor nodes are used and a lot of energy is spent. Recently, a novel approach has been gathering momentum because of important applications: that of having a mobile sink move inside the network area and collect the data with low energy cost. Here we extend this line of research by proposing and evaluating three new protocols. Our protocols are novel in (a) investigating the impact of having many mobile sinks, (b) proposing and evaluating a mix of static and mobile sinks in weak models with restricted mobility, and (c) proposing a distributed protocol that tends to equally spread the sinks in the network to further improve performance. Our protocols are simple, based on randomization, and assume locally obtainable information. We perform an extensive evaluation via simulation; our findings demonstrate that our solutions scale very well with respect to the number of sinks and significantly reduce energy consumption and delivery delay.
Abstract: Raising awareness among young people and changing their behavior and habits concerning energy usage and the environment is key to achieving a sustainable planet. Addressing the global climate problem requires informing the population of their role in mitigation actions and the adoption of sustainable behaviors. Meeting ambitious energy and climate targets requires a change in citizen behavior and consumption practices. This paper examines IoT sensing and related scenarios and practices that address school children via discovery, gamification, and educational activities. These educational scenarios include the use of seawater sensors in STEM education, which has not previously been addressed.
Abstract: We consider information aggregation as a method for reducing the information exchanged in a Grid network and used by the resource manager in order to make scheduling decisions. In this way, information is summarized across nodes and sensitive or detailed information can be kept private, while resources are still publicly available for use. We present a general framework for information aggregation, trying to identify issues that relate to aggregation in Grids. In this context, we describe a number of techniques, including single point and intra-domain aggregation, define appropriate grid-specific domination relations and operators for aggregating static and dynamic resource information, and discuss resource selection optimization functions. The quality of an aggregation scheme is measured both by its effects on the efficiency of the scheduler's decisions and also by the reduction it brings on the amount of resource information recorded, a tradeoff that we examine in detail. Simulation experiments demonstrate that the proposed schemes achieve significant information reduction, either in the amount of information exchanged, or in the frequency of the updates, while at the same time maintaining most of the value of the original information as expressed by a stretch factor metric we introduce.
Abstract: In this work we study the problem of scheduling tasks with dependencies in multiprocessor architectures where processors have different speeds. We examine the energy-efficiency and time efficiency of scheduling in an asymmetric system.
Abstract: We study a problem of scheduling client requests to servers. Each client has a particular latency requirement at each server and may choose either to be assigned to some server in order to get serviced provided that her latency requirement is met, or not to participate in the assignment at all. From a global perspective, in order to optimize the performance of such a system, one would aim to maximize the number of clients that participate in the assignment. However, clients may behave selfishly in the sense that, each of them simply aims to participate in an assignment and get serviced by some server where her latency requirement is met with no regard to overall system performance. We model this selfish behavior as a strategic game, show how to compute pure Nash equilibria efficiently, and assess the impact of selfishness on system performance. We also show that the problem of optimizing performance is computationally hard to solve, even in a coordinated way, and present efficient approximation and online algorithms.
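The game described above can be illustrated with a deliberately simplified toy model (an assumption, not the paper's exact one): a client's latency at a server is taken to be the server's load, clients included, and each client best-responds by joining any server whose resulting load meets her requirement, or opting out.

```python
def best_response_dynamics(requirements, num_servers, rounds=100):
    """Toy best-response dynamics for the client-server assignment game.
    Simplified latency model (an assumption): latency at server j equals
    its load. Iterates until no client wants to deviate."""
    assign = [None] * len(requirements)
    load = [0] * num_servers
    for _ in range(rounds):
        changed = False
        for i, req in enumerate(requirements):
            cur = assign[i]
            if cur is not None and load[cur] <= req:
                continue  # requirement met: staying is a best response
            if cur is not None:  # requirement violated: leave the server
                load[cur] -= 1
                assign[i] = None
                changed = True
            for j in range(num_servers):  # join any server that satisfies us
                if load[j] + 1 <= req:
                    assign[i] = j
                    load[j] += 1
                    changed = True
                    break
        if not changed:  # stable state: a pure Nash equilibrium
            break
    return assign
```

Even this toy version shows the cost of selfishness assessed in the paper: with requirements [1, 1, 2, 2] and two servers, the dynamics end with only two clients serviced, while a coordinated assignment (one tolerant pair sharing a server, one strict client alone on the other) services three.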
Abstract: We study a problem of scheduling client requests to servers.
Each client has a particular latency requirement at each server and may
choose either to be assigned to some server in order to get serviced provided
that her latency requirement is met or not to participate in the
assignment at all. From a global perspective, in order to optimize the
performance of such a system, one would aim to maximize the number
of clients that participate in the assignment. However, clients may behave
selfishly in the sense that each of them simply aims to participate
in an assignment and get serviced by some server where her latency requirement
is met with no regard to the overall system performance. We
model this selfish behavior as a strategic game, show how to compute
equilibria efficiently, and assess the impact of selfishness on system performance.
We also show that the problem of optimizing performance is
computationally hard to solve, even in a coordinated way, and present
efficient approximation and online algorithms.
Abstract: We present SeAl, a novel data/resource and data-access management infrastructure designed for the purpose of addressing a key problem in P2P data sharing networks, namely the problem of wide-scale selfish peer behavior. Selfish behavior has been manifested and well documented and it is widely accepted that unless this is dealt with, the scalability, efficiency, and the usefulness of P2P sharing networks will be diminished. SeAl essentially consists of a monitoring/accounting subsystem, an auditing/verification subsystem, and incentive mechanisms. The monitoring subsystem facilitates the classification of peers into selfish/altruistic. The auditing/verification layer provides a shield against perjurer/slandering and colluding peers that may try to cheat the monitoring subsystem. The incentives mechanisms effectively utilize these layers so to increase the computational/networking and data resources that are available to the community. Our extensive performance results show that SeAl performs its tasks swiftly, while the overhead introduced by our accounting and auditing mechanisms in terms of response time, network, and storage overheads are very small.
Abstract: During the last years web search engines have moved from the simple but inefficient syntactical analysis (first generation) to the more robust and usable web graph analysis (second generation). Much of the current research is focussed on the so-called third generation search engines that, in principle, inject human characteristics on how results are obtained and presented to the end user. Approaches exploited towards this direction include (among others): an alteration of PageRank [1] that takes into account user specific characteristics and bias the page ordering using the user preferences (an approach, though, that does not scale well with the number of users). The approach is further exploited in [3], where several PageRanks are computed for a given number of distinct search topics. A similar idea is used in [6], where the PageRank computation takes into account the content of the pages and the query terms the surfer is looking for. In [4], a decomposition of PageRank to basic components is suggested that may be able to scale the different PageRank computations to a bigger number of topics or even distinct users. Another approach to web search is presented in [2], where a rich extension of the web, called semantic web, and the application of searching over this new setting is described.
Abstract: Searchius is a collaborative search engine that produces search results based solely on user-provided web-related data. We discuss the architecture of this system and how it compares to current state-of-the-art search engines. We show that the global users' preference over pages can be efficiently used as a metric of page quality, and that the inherent organization of the collected data can be used to discover related URLs. We also conduct an extensive experimental study, based on the web-related data of 36483 users, to analyze the qualitative and quantitative characteristics of user-collected URL collections, to investigate how well the users' URL collections cover the web, and to discover the characteristics that affect the quality of the search results under the proposed setting.
Abstract: In this PhD thesis, we use formal logic and threshold phenomena that asymptotically emerge with certainty in order to build new trust models and to evaluate existing ones. The departure point of our work is that dynamic, global computing systems are not amenable to a static viewpoint of the trust concept, no matter how this concept is formalized. We believe that trust should be a statistical, asymptotic concept, studied in the limit as the system's components grow according to some growth rate. Thus, our main goal is to define trust as an emerging system property that "appears" or "disappears" when a set of properties holds, asymptotically with probability $0$ or $1$. Here we combine first- and second-order logic in order to analyze the trust measures of specific network models. Moreover, we use formal logic to determine whether generic reliability trust models provide a method for deriving trust between peers/entities as the network's components grow. Our approach can be used in a wide range of applications, such as monitoring the behavior of peers, providing a measure of trust between them, and assessing the level of reliability of peers in a network. Wireless sensor networks are comprised of a vast number of ultra-small autonomous computing, communication, and sensing devices, with restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Sensor networks can be very useful in practice. Such systems should at least guarantee the confidentiality and integrity of the information reported to the controlling authorities regarding the realization of environmental events. Therefore, key establishment is critical for the protection of wireless sensor networks and the prevention of adversaries from attacking the network. Finally, in this dissertation we also propose three distributed group key establishment protocols suitable for such energy-constrained networks.
This dissertation is composed of two parts. Part I develops the theory of the first- and second-order logic of graphs: their definition, and the analysis of their properties that are expressible in the first-order language of graphs. In Part II we introduce some new distributed group key establishment protocols suitable for sensor networks. Several key establishment schemes are derived and their performance is demonstrated.
Abstract: Elliptic Curve Cryptography (ECC) is one of the most promising alternatives to conventional public key cryptography, such as RSA and ElGamal, since it employs keys of smaller sizes for the same level of cryptographic strength. Smaller key sizes imply smaller hardware units for performing the arithmetic operations required by cryptographic protocols and, thus, ECC is an ideal candidate for implementation in embedded systems where the major computational resources (speed and storage) are limited.
In this paper we present a port, written in ANSI C for maximum portability, of an open-source ECC-based cryptographic library (ECC-LIB) to ATMEL's AT76C520 802.11 WLAN Access Point. One of the major features of this port, not found in similar ports, is that it supports Complex Multiplication (CM) for the construction of elliptic curves with good security properties. We present some experimental results that demonstrate that the port is efficient and can lead to generic embedded systems with robust ECC-based cryptographic protocols using cryptographically strong ECCs generated with CM. As an application of the ported library, an EC Diffie-Hellman key exchange protocol is developed as an alternative to the 4-way key handshake protocol of the 802.11 protocol.
Abstract: Embedded computing devices dominate our everyday activities, from cell phones to wireless sensors that collect and process data for various applications. Although desktop and high-end server security seems to be under control by the use of current security technology, securing the low-end embedded computing systems is a difficult long-term problem. This is mainly due to the fact that the embedded systems are constrained by their operational environment and the limited resources they are equipped with. Recent research activities focus on the deployment of lightweight cryptographic algorithms and security protocols that are well suited to the limited resources of low-end embedded systems. Elliptic Curve Cryptography (ECC) offers an interesting alternative to classical public key cryptography (e.g., RSA and ElGamal) for embedded systems, since it uses smaller key sizes for achieving the same security level, thus making ECC an attractive and efficient choice for deployment in embedded systems. In this chapter, the processing requirements and architectures for secure network access, communication functions, storage, and high availability of embedded devices are discussed. In addition, ECC-based state-of-the-art lightweight cryptographic primitives for the deployment of security protocols in embedded systems that fulfill the requirements are presented.
Abstract: The Greek School Network (GSN) is a closed educational network that offers advanced telematic and networking services to all primary/secondary education schools and educational administration offices in Greece. The primary objective of GSN is to provide a network infrastructure for the interconnection of school PC laboratories so that modern educational methods and pedagogical models can be used in the school community securely and effectively. GSN has scaled in size, reached maturity, and is currently delivering a wide range of network and telematic services to students and educators. Being the second largest communications network nationwide, GSN is exposed to all kinds of security threats and, due to its educational hypostasis, naive user behaviour. The current paper presents an evaluation of security management solutions for the enforcement of policies, practices, and user protection methodologies proven viable within the GSN environment, as indicated by statistics and metrics on the use of the related services. The paper reaches the conclusion that GSN security services constitute a sound framework that can successfully cover the needs of the school community.
Abstract: What is the price of anarchy when unsplittable demands are routed selfishly in general networks with load-dependent edge delays? Motivated by this question we generalize the model of [14] to the case of weighted congestion games. We show that varying demands of users crucially affect the nature of these games, which are no longer isomorphic to exact potential games, even for very simple instances. Indeed we construct examples where even a single-commodity (weighted) network congestion game may have no pure Nash equilibrium.
On the other hand, we study a special family of networks (which we call the ℓ-layered networks) and we prove that any weighted congestion game on such a network with resource delays equal to the congestions possesses a pure Nash equilibrium. We also show how to construct one in pseudo-polynomial time. Finally, we give a surprising answer to the question above for such games: the price of anarchy of any weighted ℓ-layered network congestion game with m edges and edge delays equal to the loads is Θ(log m / log log m).
Abstract: In this work we tackle the open problem of self-join size (SJS) estimation in a large-scale distributed data system, where tuples of a relation are distributed over data nodes which comprise an overlay network. Our contributions include adaptations of five well-known SJS estimation centralized techniques (coined sequential, cross-sampling, adaptive, bifocal, and sample-count) to the network environment and a novel technique which is based on the use of the Gini coefficient. We develop analyses showing how Gini estimations can lead to estimations of the underlying Zipfian or power-law value distributions. We further contribute distributed sampling algorithms that can estimate accurately and efficiently the Gini coefficient. Finally, we provide detailed experimental evidence testifying for the claimed increased accuracy, precision, and efficiency of the proposed SJS estimation method, compared to the other methods. The proposed approach is the only one to ensure high efficiency, precision, and accuracy regardless of the skew of the underlying data.
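The two quantities at the heart of the approach above are the self-join size (the sum of squared value frequencies) and the Gini coefficient of the value-frequency distribution, whose skew the paper links to the underlying Zipfian distribution. The sketch below computes both exactly and centrally; it is an illustration of the quantities involved, not the paper's distributed sampling estimator:

```python
from collections import Counter

def self_join_size(tuples):
    # Exact self-join size: sum of squared value frequencies.
    return sum(f * f for f in Counter(tuples).values())

def gini(freqs):
    # Gini coefficient of a frequency vector:
    # 0 for a uniform distribution, approaching 1 for heavy skew.
    xs = sorted(freqs)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * cum / (n * total) - (n + 1.0) / n
```

A skewed frequency vector yields a Gini close to 1, which is the signal the distributed estimator exploits to infer the value distribution and hence the self-join size.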
Abstract: Our position is that a key to research efforts on ensuring high performance in very large scale sharing networks is the idea of volunteering: recognizing that such networks are comprised of largely heterogeneous nodes in terms of their capacity and behaviour, and that, in many real-world manifestations, a few nodes carry the bulk of the request service load. In this paper we outline how we employ volunteering as the basic idea using which we develop altruism-endowed self-organizing sharing networks to help solve two open problems in large-scale peer-to-peer networks: (i) to develop an overlay topology structure that enjoys better performance than DHT-structured networks and, specifically, to offer O(log log N) routing performance in a network of N nodes, instead of O(log N), and (ii) to efficiently process complex queries and range queries, in particular.
Abstract: We investigate here the problem of establishing communication in an ad-hoc mobile network, that is, we assume the extreme case of a total absence of any fixed network infrastructure (for example, a case of rapid deployment of a set of mobile hosts in an unknown terrain). We propose, in such a case, that a small subset of the deployed hosts (which we call the support) should be used to manage network operations, while the vast majority of the hosts move arbitrarily according to application needs. We then provide a simple, correct, and efficient protocol for communication establishment that avoids message flooding. Our protocol manages to establish communication between any pair of mobile hosts in small, a priori guaranteed time bounds, even in the worst case of arbitrary motions of the hosts that do not belong to the support (provided that they do not deliberately try to avoid the support). These time bounds, interestingly, do not depend on the number of mobile hosts that do not belong to the support; they depend only on the size of the area of motions. Our protocol can be implemented in very efficient ways by exploiting knowledge of the space of motions or by adding more power to the hosts of the support. Our results exploit and further develop some fundamental properties of random walks in finite graphs.
Abstract: Random intersection graphs, G_{n,m,p}, are a class of random graphs introduced in Karoński (1999) [7], where each of the n vertices chooses independently a random subset of a universal set of m elements. Each element of the universal set is chosen independently by some vertex with probability p. Two vertices are joined by an edge iff their chosen element sets intersect. Given n, m so that m = ⌈n^α⌉, for any real α different from one, we establish here, for the first time, a sharp threshold for the graph property "contains a Hamilton cycle". Our proof involves new, nontrivial coupling techniques that allow us to circumvent the edge dependencies in the random intersection graph model.
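The G_{n,m,p} model described above is easy to sample directly from its definition; the following small generator is our own illustration of the model, with parameter names matching the abstract:

```python
import random

def random_intersection_graph(n, m, p, seed=0):
    # Each of the n vertices independently picks each of the
    # m universal elements with probability p.
    rng = random.Random(seed)
    sets = [{j for j in range(m) if rng.random() < p} for _ in range(n)]
    # Two vertices are adjacent iff their chosen element sets intersect.
    edges = {(u, v) for u in range(n) for v in range(u + 1, n)
             if sets[u] & sets[v]}
    return sets, edges
```

The edge dependencies the proof must circumvent are visible here: two edges sharing a vertex are correlated through that vertex's element set.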
Abstract: We consider the problem of preprocessing an n-vertex digraph with real edge weights so that subsequent queries for the shortest path or distance between any two vertices can be efficiently answered. We give algorithms that depend on the treewidth of the input graph. When the treewidth is a constant, our algorithms can answer distance queries in O(α(n)) time after O(n) preprocessing, where α(n) is the inverse of Ackermann's function. This improves upon previously known results for the same problem. We also give a dynamic algorithm which, after a change in an edge weight, updates the data structure in O(n^ε) time, for any constant 0 < ε < 1. Furthermore, an algorithm of independent interest is given: computing a shortest path tree, or finding a negative cycle, in linear time.
Abstract: We consider the problem of preprocessing an n-vertex digraph with real edge weights so that subsequent queries for the shortest path or distance between any two vertices can be efficiently answered. We give parallel algorithms for the EREW PRAM model of computation that depend on the treewidth of the input graph. When the treewidth is a constant, our algorithms can answer distance queries in O(α(n)) time using a single processor, after a preprocessing of O(log² n) time and O(n) work, where α(n) is the inverse of Ackermann's function. The class of constant treewidth graphs contains outerplanar graphs and series-parallel graphs, among others. To the best of our knowledge, these are the first parallel algorithms which achieve these bounds for any class of graphs except trees. We also give a dynamic algorithm which, after a change in an edge weight, updates our data structures in O(log n) time using O(n^ε) work, for any constant 0 < ε < 1. Moreover, we give an algorithm of independent interest: computing a shortest path tree, or finding a negative cycle, in O(log² n) time using O(n) work.
Abstract: Geographic routing is becoming the protocol of choice for many sensor network applications. The current state of the art is unsatisfactory: some algorithms are very efficient, however they require a preliminary planarization of the communication graph. Planarization induces overhead and is thus not realistic for some scenarios such as the case of highly dynamic network topologies. On the other hand, georouting algorithms which do not rely on planarization have fairly low success rates and fail to route messages around all but the simplest obstacles. To overcome these limitations, we propose the GRIC geographic routing algorithm. It has absolutely no topology maintenance overhead, almost 100% delivery rates (when no obstacles are added), bypasses large convex obstacles, finds short paths to the destination, resists link failure, and is fairly simple to implement. The case of hard concave obstacles is also studied; such obstacles are hard instances for which performance diminishes.
Abstract: In this work we consider the problem of finding Hamilton cycles in graphs derived from the uniform random intersection graphs model $G_{n, m, p}$. In particular, (a) for the case $m = n^{\alpha}, \alpha > 1$, we give a result that allows us to apply (with the same probability of success) any algorithm that finds a Hamilton cycle with high probability in a $G_{n, k}$ graph (i.e. a graph chosen equiprobably from the space of all graphs with $k$ edges); (b) we give an expected polynomial time algorithm for the case $p = \textrm{constant}$ and $m \leq \alpha \sqrt{\frac{n}{\log{n}}}$ for some constant $\alpha$; and (c) we show that the greedy approach still works well even in the case $m = o(\frac{n}{\log{n}})$ and $p$ just above the connectivity threshold of $G_{n, m, p}$ (found in \cite{Singerphd}) by giving a greedy algorithm that finds a Hamilton cycle in those ranges of $m, p$ with high probability.
Abstract: In this work, we study protocols so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction, we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol. Moreover, we assume pairwise interactions between the processes that are scheduled by a fair adversary. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network. We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. The expected time to convergence of our protocols is analyzed under a uniform random scheduler. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such k. This amount of local memory is sufficient for the supernodes to obtain unique names and exploit their names and their memory to realize nontrivial constructions.
Abstract: In this work, we study protocols (i.e. distributed algorithms) so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol (i.e. the system is homogeneous). Moreover, we assume pairwise interactions between the processes that are scheduled by an adversary. The only constraint on the adversary scheduler is that it must be fair, intuitively meaning that it must assign to every reachable configuration of the system a non-zero probability to occur. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. In particular, in every interaction, the protocol may activate an inactive connection, deactivate an active one, or leave the state of a connection unchanged. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network (i.e. one that does not change any more). We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. We provide proofs of correctness for all of our protocols and analyze the expected time to convergence of most of them under a uniform random scheduler that selects the next pair of interacting processes uniformly at random from all such pairs. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. 
Our universality protocols use a subset of the population (waste) in order to distributedly construct there a TM able to decide a graph class in some given space. Then, the protocols repeatedly construct in the rest of the population (useful space) a graph equiprobably drawn from all possible graphs. The TM works on this and accepts if the presented graph is in the class. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such k. This amount of local memory is sufficient for the supernodes to obtain unique names and exploit their names and their memory to realize nontrivial constructions. Delicate composition and reinitialization issues have to be solved for these general constructions to work.
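The interaction model above (pairwise meetings that update two node states and one connection state) is easy to simulate. The toy simulator below uses a hypothetical star-forming rule set of our own devising, not the paper's exact protocol: every node starts as a center `c`; when two centers meet, one is demoted to follower `f` and the connection is activated; center-follower edges are activated; follower-follower edges are deactivated. Under a uniform random scheduler this stabilizes to a spanning star:

```python
import random

def simulate_star(n, steps=20000, seed=1):
    # state[i]: 'c' (center) or 'f' (follower); all start as centers.
    # edge: frozenset({u, v}) -> connection active?  All start inactive.
    state = ['c'] * n
    edge = {}
    rng = random.Random(seed)
    for _ in range(steps):
        u, v = rng.sample(range(n), 2)   # fair pairwise interaction
        e = frozenset((u, v))
        if state[u] == 'c' and state[v] == 'c':
            state[v] = 'f'               # one center survives the meeting
            edge[e] = True
        elif 'c' in (state[u], state[v]):
            edge[e] = True               # center-follower edges turn on
        else:
            edge[e] = False              # follower-follower edges turn off
    return state, edge
```

After enough interactions a single center remains and exactly the edges incident to it are active, illustrating how local pairwise rules can stably construct a global spanning star.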
Abstract: We address an important communication issue in wireless cellular networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region (cell) can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of the available frequencies is limited; thus, efficient solutions to the call control problem are essential. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users that communicate without signal interference. We consider cellular networks of reuse distance k ≥ 2 and we study the on-line version of the problem using competitive analysis.
In cellular networks of reuse distance 2, the previously best known algorithm that beats the lower bound of 3 on the competitiveness of deterministic algorithms works on networks with one frequency, achieves a competitive ratio against oblivious adversaries which is between 2.469 and 2.651, and uses a number of random bits at least proportional to the size of the network. We significantly improve this result by presenting a series of simple randomized algorithms that have competitive ratios smaller than 3, work on networks with arbitrarily many frequencies, and use only a constant number of random bits or a comparable weak random source. The best competitiveness upper bound we obtain is 7/3.
In cellular networks of reuse distance k > 2, we present simple randomized on-line call control algorithms with competitive ratios which significantly beat the lower bounds on the competitiveness of deterministic ones and use only a few random bits. Furthermore, we show a new lower bound on the competitiveness of on-line call control algorithms in cellular networks of reuse distance k ≥ 5.
Abstract: We propose a technique for increasing the repetition rate of solid-state lasers, potentially solving the limitation of cavity length reduction. Based on spectral selection, we achieved high quality multiplication with a relatively broad detuning range.
Abstract: In wireless sensor networks data propagation is usually performed by sensors transmitting data towards a static control center (sink). Inspired by important applications (mostly related to ambient intelligence) and as a first step towards introducing mobility, we propose the idea of having a sink moving in the network area and collecting data from sensors. We propose four characteristic mobility patterns for the sink along with different data collection strategies. Through a detailed simulation study, we evaluate several important performance properties of each protocol. Our findings demonstrate that by taking advantage of the sink's mobility, we can significantly reduce the energy spent in relaying traffic and thus greatly extend the lifetime of the network.
Abstract: Smart Dust is a set of a vast number of ultra-small fully autonomous computing and communication devices, with very restricted energy and computing capabilities, that co-operate to quickly and efficiently accomplish a large sensing task. Smart Dust can be very useful in practice, e.g. in the local detection of a remote crucial event and the propagation of data reporting its realization. In this work we make an effort towards the research on smart dust from a basic algorithmic point of view. We first provide a simple but realistic model for smart dust and present an interesting problem, which is how to propagate efficiently information on an event detected locally. Then we present smart dust protocols for local detection and propagation that are simple enough to be implemented on real smart dust systems, and perform, under some simplifying assumptions, a rigorous average case analysis of their efficiency and energy consumption (and their interplay). This analysis leads to concrete results showing that our protocols are very efficient.
Abstract: Buildings are among the largest consumers of electricity, and a significant portion of this energy use is wasted in unoccupied areas or by improperly installed devices.
Identifying such power leaks is hard especially in large office and enterprise buildings.
In this paper we present the design and implementation of a system that uses an underlying sensor network to provide accurate real time information about various characteristics like occupancy, lighting, temperature and power consumption at different levels of granularity.
All sensor devices require minimal installation and maintenance.
Using an experimental installation we evaluate a number of applications and services that achieve energy savings by applying different power conservation policies.
Furthermore we provide energy measurements to users and occupants to show how various choices and behaviors affect their personal energy savings.
Abstract: We present experimental and theoretical analysis of an optical hard limiter that uses SOA-based interferometers. Its operation relies on the deep saturation of the SOA and it can suppress amplitude modulation in excess of 10 dB.
Abstract: We present recent advances in multi-wavelength, power-equalized laser sources that incorporate a semiconductor optical amplifier (SOA) and simple optical filters, such as Lyot-type and Fabry-Perot, for comb generation. Both linear and ring-cavity configurations are presented, and single-pass optical feedback technique is proposed to improve the performance in terms of the number of simultaneously oscillating lines and output channel power equalization. This technique resulted in a broadened oscillating spectrum of 52 lines spaced at 50 GHz, power-equalized within 0.3 dB. Finally, a simplified version that uses only an uncoated SOA for both gain and comb generation is demonstrated.
Abstract: In this paper we address the problem of capturing and processing certain spatiotemporal, social characteristics of human interactions with the use of Wireless Sensor Networks. Using TelosB motes, we basically monitor the binary proximity within a group of people. The collected data give an insight into how people interact with each other (how often, for how much time, in which room) and provide a novel tool (which can be further enhanced) to study (quantitatively, in an automated manner) human social networks.
Abstract: In 1876 Charles Lutwidge Dodgson suggested the intriguing voting rule that today bears his name. Although Dodgson's rule is one of the most well-studied voting rules, it suffers from serious deficiencies, both from the computational point of view (it is NP-hard even to approximate the Dodgson score within sublogarithmic factors) and from the social choice point of view (it fails basic social choice desiderata such as monotonicity and homogeneity).
In a previous paper [Caragiannis et al., SODA 2009] we asked whether there are approximation algorithms for Dodgson's rule that are monotonic or homogeneous. In this paper we give definitive answers to these questions. We design a monotonic exponential-time algorithm that yields a 2-approximation to the Dodgson score, while matching this result with a tight lower bound. We also present a monotonic polynomial-time O(log m)-approximation algorithm (where m is the number of alternatives); this result is tight as well due to a complexity-theoretic lower bound. Furthermore, we show that a slight variation of a known voting rule yields a monotonic, homogeneous, polynomial-time O(m log m)-approximation algorithm, and establish that it is impossible to achieve a better approximation ratio even if one just asks for homogeneity. We complete the picture by studying several additional social choice properties; for these properties, we prove that algorithms with an approximation ratio that depends only on m do not exist.
Abstract: We generalize Cuckoo Hashing [16] to d-ary Cuckoo Hashing and show how this yields a simple hash table data structure that stores n elements in (1 + ε)n memory cells, for any constant ε > 0. Assuming uniform hashing, accessing or deleting table entries takes at most d = O(ln(1/ε)) probes and the expected amortized insertion time is constant. This is the first dictionary that has worst-case constant access time and expected constant update time, works with (1 + ε)n space, and supports satellite information. Experiments indicate that d = 4 choices suffice for ε ≈ 0.03. We also describe a hash table data structure using explicit constant-time hash functions, using at most d = O(ln²(1/ε)) probes in the worst case.
A corollary is an expected linear time algorithm for finding maximum cardinality matchings in a rather natural model of sparse random bipartite graphs.
This work was partially supported by DFG grant SA 933/1-1 and the Future and Emerging Technologies programme of the EU under contract number IST-1999- 14186 (ALCOM-FT).
The present work was initiated while this author was at BRICS, Aarhus University, Denmark.
Part of this work was done while the author was at MPII. 1 In this paper “whp.” will mean “with probability 1 - O(1/n)”.
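The d-ary cuckoo scheme described above is easy to prototype. Below is a minimal sketch (not the paper's implementation) with random-walk insertion; Python's tuple hashing stands in for the uniform hash functions assumed in the analysis, and the class name and parameters are illustrative.

```python
import random

class DCuckoo:
    """Minimal d-ary cuckoo hash table with random-walk insertion (a sketch)."""
    def __init__(self, d=4, cells=64, max_kicks=500):
        self.d, self.cells, self.max_kicks = d, cells, max_kicks
        self.table = [None] * cells
        # One seed per hash choice; tuple hashing stands in for uniform hashing.
        self.seeds = [random.randrange(2**31) for _ in range(d)]

    def _probes(self, key):
        return [hash((s, key)) % self.cells for s in self.seeds]

    def lookup(self, key):
        return any(self.table[i] == key for i in self._probes(key))  # <= d probes

    def insert(self, key):
        for _ in range(self.max_kicks):
            probes = self._probes(key)
            for i in probes:
                if self.table[i] is None:
                    self.table[i] = key
                    return True
            # All d candidate cells are full: evict a random occupant and
            # continue by re-inserting the evicted key (random-walk insertion).
            i = random.choice(probes)
            self.table[i], key = key, self.table[i]
        return False  # would trigger a rehash in a full implementation
```

Lookup and deletion inspect at most d cells, matching the worst-case constant access time claimed in the abstract; the insertion loop is where the (1 + ε) space economy is paid for.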
Abstract: Efficient task scheduling is fundamental for the success of the Grids,
since it directly affects the Quality of Service (QoS) offered to the users. Efficient
scheduling policies should be evaluated based not only on performance
metrics that are of interest to the infrastructure side, such as the Grid resources
utilization efficiency, but also on user satisfaction metrics, such as the percentage
of tasks served by the Grid without violating their QoS requirements. In this
paper, we propose a scheduling algorithm for tasks with strict timing requirements,
given in the form of a desired start and finish time. Our algorithm aims
at minimizing the violations of the time constraints, while at the same time
minimizing the number of processors used. The proposed scheduling method
exploits concepts derived from spectral clustering and groups tasks together
for assignment to a computing resource so as to a) minimize the time
overlapping of the tasks assigned to a given processor and b) maximize the
degree of time overlapping among tasks assigned to different processors.
Experimental results show that our proposed strategy outperforms greedy
scheduling algorithms for different values of the submitted task load.
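To illustrate the grouping idea (this is a toy sketch, not the authors' algorithm), a spectral-style two-processor split can use the pairwise time overlap of tasks as an affinity: the sign pattern of the eigenvector for the most negative eigenvalue of the overlap matrix is a standard max-cut relaxation that tends to place heavily overlapping tasks on different processors. Power iteration on a shifted matrix keeps the sketch dependency-free; all names are illustrative.

```python
def overlap(a, b):
    """Length of the time overlap between intervals a = (start, finish) and b."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def split_two_processors(tasks, iters=300):
    """Split tasks into two groups so that heavily overlapping tasks tend to
    land in different groups (one group per processor). The eigenvector for
    the most negative eigenvalue of the overlap matrix W two-colors
    overlapping pairs; power iteration on c*I - W finds it."""
    n = len(tasks)
    W = [[overlap(tasks[i], tasks[j]) if i != j else 0.0 for j in range(n)]
         for i in range(n)]
    c = max(sum(abs(x) for x in row) for row in W) + 1.0
    M = [[(c if i == j else 0.0) - W[i][j] for j in range(n)] for i in range(n)]
    v = [1.0 + i for i in range(n)]           # deterministic starting vector
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        nrm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / nrm for x in w]
    return ([i for i in range(n) if v[i] >= 0.0],
            [i for i in range(n) if v[i] < 0.0])
```

For example, tasks (0, 2) and (1, 3) overlap and end up on different processors, while the disjoint task (5, 6) can join either group.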
Abstract: We consider the Routing and Spectrum Allocation (RSA) problem in an OFDM-based optical
network with elastic bandwidth allocation. We assess the spectrum utilization gains of this flexible
architecture compared to a traditional fixed-grid, rigid-bandwidth WDM network.
Abstract: One of the most eminent problems in sensor networks is the
routing of data to a central destination in a robust and efficient manner.
In this work we propose a new scalable protocol for propagating
information about a sensed event towards a receiving center. Using only
local information and a total absence of coordination between sensors, our
protocol achieves to propagate the sensed data to a receiving center by
activating only those nodes that lie very close to the optimal path between
the source of the event and the destination, resulting in low activation of
the network's sensors. Thus the protocol is very energy efficient.
Furthermore, our protocol is robust as it manages to propagate the
information even when sensors fail with certain probability.
Abstract: In this paper, we analyze the stability properties of the FIFO protocol in the Adversarial Queueing model for packet routing. We show a graph for which FIFO is stable for any adversary with injection rate r ≤ 0.1428. We generalize this result to show an upper bound for the stability of any network under the FIFO protocol, answering partially an open question raised by Andrews et al. in [2]. We also design a network and an adversary for which FIFO is non-stable for any r ≥ 0.8357, improving the previously known bounds of [2].
Abstract: We work on an extension of the Population Protocol model of Angluin et al. that allows edges of the communication graph, G, to have states that belong to a constant size set. In this extension, the so-called Mediated Population Protocol model (MPP), both uniformity and anonymity are preserved. We study here a simplified version of MPP in order to capture MPP's ability to stably compute graph properties. Understanding properties of the communication graph is an important step in almost any distributed system. We prove that no graph property is computable if we allow disconnected communication graphs. As a result, we focus on studying (at least) weakly connected communication graphs only and give several examples of computable properties in this case. To do so, we also prove that the class of computable properties is closed under complement, union and intersection operations. Node and edge parity, out-degree bounded by a constant, existence of a node with more incoming than outgoing neighbors, and existence of some directed path of length at least k=O(1) are some examples of properties whose computability is proven. Finally, we prove the existence of symmetry in two specific communication graphs and, by exploiting this, we prove that there exists no protocol, whose states eventually stabilize, to determine whether G contains some directed cycle of length 2.
Abstract: In wireless communication, the signal of a typical broadcast station is transmitted from a broadcast center p and reaches objects at a distance, say r, from it. In addition there is a radius r0, r0 < r, such that the signal originating from the center p should be avoided; in other words, points within distance r0 from the station comprise a hazardous zone. We consider the following station layout problem: cover a given planar region that includes a collection of buildings with a minimum number of stations so that every point in the region is within the reach of a station, while at the same time no interior point of any building is within the hazardous zone of a station. We give algorithms for computing such station layouts in both the one- and two-dimensional cases.
Abstract: The existence of good probabilistic models
for the job arrival process and the delay components
introduced at different stages of job processing in a
Grid environment is important for the improved
understanding of the Grid computing concept. In this
study, we present a thorough analysis of the job
arrival process in the EGEE infrastructure and of the
time durations a job spends at different states in the
EGEE environment. We define four delay compo-
nents of the total job delay and model each compo-
nent separately. We observe that the job inter-arrival
times at the Grid level can be adequately modelled by
a rounded exponential distribution, while the total job
delay (from the time it is generated until the time it
completes execution) is dominated by the computing
element’s register and queuing times and the worker
node’s execution times. Further, we evaluate the
efficiency of the EGEE environment by comparing
the job total delay performance with that of a hypothetical ideal super-cluster and conclude that we
would obtain similar performance if we submitted the
same workload to a super-cluster of size equal to 34%
of the total average number of CPUs participating in
the EGEE infrastructure. We also analyze the job
inter-arrival times, the CE’s queuing times, the WN’s
execution times, and the data sizes exchanged at the
kallisto.hellasgrid.gr cluster, which is a node in the
EGEE infrastructure. In contrast to the Grid level, we
find that at the cluster level the job arrival process
exhibits self-similarity/long-range dependence. Final-
ly, we propose simple and intuitive models for the job
arrival process and the execution times at the cluster
level.
Abstract: Efficient query processing in traditional database
management systems relies on statistics on base data. For centralized systems, there is a rich body of research results on such statistics, from simple aggregates to more elaborate synopses such as sketches and histograms. For Internet-scale distributed systems, on the other hand, statistics management still poses major challenges. With the work in this paper we aim to endow peer-to-peer data management over structured
overlays with the power associated with such statistical information, with emphasis on meeting the scalability challenge.
To this end, we first contribute efficient, accurate, and decentralized algorithms that can compute key aggregates such as Count, CountDistinct, Sum, and Average. We show how to construct several types of histograms, such as simple Equi-Width, Average Shifted Equi-Width, and Equi-Depth histograms. We present a full-fledged open-source implementation
of these tools for distributed statistical synopses,
and report on a comprehensive experimental performance evaluation, evaluating our contributions in terms of efficiency, accuracy, and scalability.
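Two of the synopsis types named above are simple to state precisely. The minimal centralized sketch below shows what Equi-Width and Equi-Depth histograms compute; the paper's contribution is the decentralized computation over a structured overlay, which is not reproduced here.

```python
def equi_width_hist(values, k):
    """k equal-width buckets over [min, max]; returns (edges, counts)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0           # guard against all-equal values
    counts = [0] * k
    for v in values:
        counts[min(int((v - lo) / width), k - 1)] += 1
    return [lo + i * width for i in range(k + 1)], counts

def equi_depth_hist(values, k):
    """Boundaries of k buckets that each hold (roughly) len(values)/k items."""
    s = sorted(values)
    n = len(s)
    return [s[i * n // k] for i in range(k)] + [s[-1]]
```

Equi-width fixes the bucket boundaries and lets the counts vary; equi-depth fixes the counts and lets the boundaries vary, which is why the latter requires (approximate) quantiles and is the harder synopsis to maintain in a distributed setting.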
Abstract: We consider the important problem of energy balanced data propagation in wireless sensor networks and we extend and generalize
previous works by allowing adaptive energy assignment. We consider the data gathering problem where data are generated by the sensors and
must be routed toward a unique sink. Sensors route data by either sending the data directly to the sink or in a multi-hop fashion by delivering
the data to a neighbouring sensor. Direct and neighbouring transmissions require different levels of energy consumption. Basically, the protocols balance the energy consumption among the sensors by computing the adequate ratios of direct and neighbouring transmissions. An abstract model of energy dissipation as a random walk is proposed, along with rigorous performance analysis techniques. Two efficient distributed algorithms are presented and analysed, by both rigorous means and simulation.
The first one is easy to implement and fast to execute. The protocol assumes that sensors know a-priori the rate of data they generate.
The sink collects and processes all this information in order to compute the relevant value of the protocol parameter. This value is transmitted
to the sensors which individually compute their optimal ratios of direct and neighbouring transmissions. The second protocol avoids the necessary a-priori knowledge of the data rate generated by sensors by inferring the relevant information from the observation of the data paths.
Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes.
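As a rough illustration of the direct-versus-hop trade-off (a toy simulation, not the paper's protocol or its analysis), the sketch below assumes sensors arranged in slices at increasing distance from the sink and a distance-squared energy cost for direct transmission; the parameter p_hop plays the role of the ratio the protocols compute. All names and cost constants are assumptions.

```python
import random

def simulate(n_slices=10, msgs_per_slice=1000, p_hop=0.7, seed=0):
    """Toy mixed-strategy propagation: a sensor in slice i (distance i from
    the sink) forwards a message to slice i - 1 (energy cost 1) with
    probability p_hop, or sends it straight to the sink (cost i**2, a common
    distance-squared radio model) otherwise. Returns energy spent per slice."""
    rng = random.Random(seed)
    energy = [0.0] * (n_slices + 1)
    for src in range(1, n_slices + 1):
        for _ in range(msgs_per_slice):
            i = src
            while i > 0:
                if i == 1 or rng.random() < p_hop:
                    energy[i] += 1.0        # one-hop transmission to slice i-1
                    i -= 1
                else:
                    energy[i] += i * i      # direct transmission to the sink
                    i = 0
    return energy[1:]
```

With p_hop = 1 the slices nearest the sink relay everything and drain first; with p_hop = 0 the far slices pay quadratic costs. Balancing the per-slice energies between these extremes is exactly what the adaptive ratio computation in the abstract is about.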
Abstract: In emerging pervasive scenarios, data is collected by sensing devices in streams that occur at several distributed points of observation. The size of the data typically far exceeds the storage and computational capabilities of the tiny devices that have to collect and process them. A general and challenging task is to allow (some of) the nodes of a pervasive network to collectively perform monitoring of a neighbourhood of interest by issuing continuous aggregate queries on the streams observed in its vicinity. This class of algorithms is fully decentralized and diffusive in nature: collecting all the data at a few central nodes of the network is unfeasible in networks of low capability devices or in the presence of massive data sets. Two main problems arise in this scenario: (i) the intrinsic complexity of maintaining statistics over a data stream whose size greatly exceeds the capabilities of the device that performs the computation; (ii) composing the partial outcomes computed at different points of observation into an accurate, global statistic over a neighbourhood of interest, which entails coping with several problems, last but not least the receipt of duplicate information along multiple paths of diffusion.
Streaming techniques have emerged as powerful tools to achieve the general goals described above, in the first place because they assume a computational model in which computational and storage resources are assumed to be far exceeded by the amount of data on which computation occurs. In this contribution, we review the main streaming techniques and provide a classification of the computational problems and the applications they effectively address, with an emphasis on decentralized scenarios, which are of particular interest in pervasive networks
Abstract: In this paper, we present and study solutions for the efficient processing of queries over string attributes in a large P2P data network implemented with DHTs. The proposed solutions support queries with equality, prefix, suffix, and containment predicates over string attributes. Currently, no known solution to this problem exists.
We propose and study algorithms for processing such queries and their optimizations. As event-based, Publish/Subscribe information systems are a champion application class where string attribute (continuous) queries are very common, we pay particular attention to this type of data networks, formulating our solution in terms of this environment. A major design decision behind the proposed solution is our intention to provide a solution that is general (DHT-independent), capable of being implemented on top of any particular DHT.
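One DHT-independent way to support such predicates, sketched below with a plain dictionary standing in for the DHT, is q-gram indexing with sentinel characters; this is an illustrative scheme in the spirit of the problem statement, not necessarily the paper's algorithm, and every name in it is made up for the example.

```python
Q = 3  # q-gram length; patterns shorter than Q - 1 would need special handling

class ToyDHT:
    """Dictionary standing in for a DHT: each q-gram key maps to string ids."""
    def __init__(self):
        self.store = {}
    def put(self, key, value):
        self.store.setdefault(key, set()).add(value)
    def get(self, key):
        return self.store.get(key, set())

def index_string(dht, s):
    # Sentinels ^ and $ turn prefix/suffix positions into ordinary q-grams.
    padded = "^" + s + "$"
    for i in range(len(padded) - Q + 1):
        dht.put(padded[i:i + Q], s)

def query(dht, pat, kind="contains"):
    """Supported kinds: 'prefix', 'suffix', 'equals', 'contains'."""
    probe = {"prefix": "^" + pat, "suffix": pat + "$",
             "equals": "^" + pat + "$", "contains": pat}[kind]
    check = {"prefix": lambda s: s.startswith(pat),
             "suffix": lambda s: s.endswith(pat),
             "equals": lambda s: s == pat,
             "contains": lambda s: pat in s}[kind]
    grams = [probe[i:i + Q] for i in range(len(probe) - Q + 1)]
    cands = set.intersection(*(dht.get(g) for g in grams)) if grams else set()
    return {s for s in cands if check(s)}  # local check removes false positives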
Abstract: This work extends what is known so far for a basic model of
evolutionary antagonis
m in undirected ne
tworks (graphs).
More specif-
ically, this work studies the generalized Moran process, as introduced
by Lieberman, Hauert, and Nowak [Nature, 433:312-316, 2005], where
the individuals of a population reside on the vertices of an undirected
connected graph. The initial population has a single
mutant
of a
fitness
value
r
(typically
r>
1), residing at some vertex
v
of the graph, while
every other vertex is initially occupied by an individual of fitness 1. At
every step of this process, an individual (i.e. vertex) is randomly chosen
for reproduction with probability proportional to its fitness, and then it
places a copy of itself on a random neighbor, thus replacing the individ-
ual that was residing there. The main quantity of interest is the
fixation
probability
, i.e. the probability that eventually the whole graph is occu-
pied by descendants of the mutant. In this work we concentrate on the
fixation probability when the mutant is initially on a specific vertex
v
,
thus refining the older notion of Lieberman et al. which studied the fix-
ation probability when the initial mutant is placed at a random vertex.
We then aim at finding graphs that have many “strong starts” (or many
“weak starts”) for the mutant. Thus we introduce a parameterized no-
tion of
selective amplifiers
(resp.
selective suppressors
)ofevolution.We
prove the existence of
strong
selective amplifiers (i.e. for
h
(
n
)=
Θ
(
n
)
vertices
v
the fixation probability of
v
is at least 1
−
c
(
r
)
n
for a func-
tion
c
(
r
) that depends only on
r
), and the existence of quite strong
selective suppressors. Regarding the traditional notion of fixation prob-
ability from a random start, we provi
de strong upper and lower bounds:
first we demonstrate the non-existence of “strong universal” amplifiers,
and second we prove the
Thermal Theorem
which states that for any
undirected graph, when the mutant starts at vertex
v
, the fixation prob-
ability at least (
r
−
1)
/
(
r
+
deg
v
deg
min
). This theorem (which extends the
“Isothermal Theorem” of Lieberman et al. for regular graphs) implies
an almost tight lower bound for the usual notion of fixation probability.
Our proof techniques are original and are based on new domination ar-
guments which may be of general interest in Markov Processes that are
of the general birth-death type.
Abstract: We study extreme Nash equilibria in the context of a selfish routing game. Specifically, we assume a collection of n users, each employing a mixed strategy, which is a probability distribution over m parallel identical links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic on those links that minimize its expected latency cost. The social cost of a Nash equilibrium is the expectation, over all random choices of the users, of the maximum, over all links, latency through a link.
We provide substantial evidence for the Fully Mixed Nash Equilibrium Conjecture, which states that the worst Nash equilibrium is the fully mixed Nash equilibrium, where each user chooses each link with positive probability. Specifically, we prove that the Fully Mixed Nash Equilibrium Conjecture is valid for pure Nash equilibria. Furthermore, we show, that under a certain condition, the social cost of any Nash equilibrium is within a factor of 2h(1+ɛ) of that of the fully mixed Nash equilibrium, where h is the factor by which the largest user traffic deviates from the average user traffic.
Considering pure Nash equilibria, we provide a PTAS to approximate the best social cost, we give an upper bound on the worst social cost and we show that it is View the MathML source-hard to approximate the worst social cost within a multiplicative factor better than 2-2/(m+1).
Abstract: We study extreme Nash equilibria in the context of a selfish routing game. Specifically, we assume a collection of n users, each employing a mixed strategy, which is a probability distribution over m parallel identical links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic on those links that minimize its expected latency cost. The social cost of a Nash equilibrium is the expectation, over all random choices of the users, of the maximum, over all links, latency through a link.We provide substantial evidence for the Fully Mixed Nash Equilibrium Conjecture, which states that the worst Nash equilibrium is the fully mixed Nash equilibrium, where each user chooses each link with positive probability. Specifically, we prove that the Fully Mixed Nash Equilibrium Conjecture is valid for pure Nash equilibria. Furthermore, we show, that under a certain condition, the social cost of any Nash equilibrium is within a factor of 2h(1 + {\aa}) of that of the fully mixed Nash equilibrium, where h is the factor by which the largest user traffic deviates from the average user traffic.Considering pure Nash equilibria, we provide a PTAS to approximate the best social cost, we give an upper bound on the worst social cost and we show that it is N P-hard to approximate the worst social cost within a multiplicative factor better than 2 - 2/(m + 1).
Abstract: A key issue when designing and implementing large-scale publish/subscribe systems is how to efficiently propagate subscriptions among the brokers of the system. Brokers require this information in order to forward incoming events only to interested users, filtering out unrelated events, which can save significant overheads (particularly network bandwidth and processing time at the brokers). In this paper we contribute the notion of subscription summaries, a mechanism appropriately compacting subscription information. We develop the associated data structures and matching algorithms. The proposed mechanism can handle event/subscription schemata that are rich in terms of their attribute types and powerful in terms of the allowed operations on them. Our major results are that the proposed mechanism (i) is scalable, with the bandwidth required to propagate subscriptions increasing only slightly, even at huge-scales, and (ii) is significantly more efficient, up to orders of magnitude, depending on the scale, with respect to the bandwidth requirements for propagating subscriptions.
Abstract: The content-based publish/subscribe (pub/sub)
paradigm for system design is becoming increasingly
popular, offering unique benefits for a large number of
data-intensive applications. Coupled with the peer-to-peer
technology, it can serve as a central building block for
such applications deployed over a large-scale network
infrastructure. A key problem toward the creation of
large-scale content-based pub/sub infrastructures relates to
dealing efficiently with continuous queries (subscriptions)
with rich predicates on string attributes; in this work we
study the problem of efficiently and accurately matching
substring queries to incoming events.
Abstract: Prompt availability of critical information to the right people is the main factor for the management of any
emergency situation. However, despite the numerous investments in the domain of public safety and
security, recent events demonstrate that, both in Europe and US, this is still an open issue.
Abstract: We study computational and coordination efficiency issues of
Nash equilibria in symmetric network congestion games. We first propose
a simple and natural greedy method that computes a pure Nash equilibrium
with respect to traffic congestion in a network. In this algorithm
each user plays only once and allocates her traffic to a path selected via
a shortest path computation. We then show that this algorithm works
for series-parallel networks when users are identical or when users are of
varying demands but have the same best response strategy for any initial
network traffic. We also give constructions where the algorithm fails if
either the above condition is violated (even for series-parallel networks)
or the network is not series-parallel (even for identical users). Thus, we
essentially indicate the limits of the applicability of this greedy approach.
We also study the price of anarchy for the objective of maximum
latency. We prove that for any network of m uniformly related links and
for identical users, the price of anarchy is {\`E}( logm
log logm).
Abstract: We investigate the impact of different mobility rates on the
performance of routing protocols in ad-hoc mobile networks. Based
on our investigation, we design a new protocol that results from
the synthesis of the well known protocols: ZRP and RUNNERS. We have implemented the new protocol as well as
the original two protocols and conducted an extensive, comparative
simulation study of their performance. The new protocol behaves
well both in networks of diverse mobility motion rates, and in
some cases even outperforms the original ones by achieving lower
message delivery delays.
Abstract: We study congestion games where players aim to access a set of resources. Each player has a set of possible strategies and each resource has a function associating the latency it incurs to the players using it. Players are non–cooperative and each wishes to follow strategies that minimize her own latency with no regard to the global optimum. Previous work has studied the impact of this selfish behavior to system performance. In this paper, we study the question of how much the performance can be improved if players are forced to pay taxes for using resources. Our objective is to extend the original game so that selfish behavior does not deteriorate performance. We consider atomic congestion games with linear latency functions and present both negative and positive results. Our negative results show that optimal system performance cannot be achieved even in very simple games. On the positive side, we show that there are ways to assign taxes that can improve the performance of linear congestion games by forcing players to follow strategies where the total latency suffered is within a factor of 2 of the minimum possible; this result is shown to be tight. Furthermore, even in cases where in the absence of taxes the system behavior may be very poor, we show that the total disutility of players (latency plus taxes) is not much larger than the optimal total latency. Besides existential results, we show how to compute taxes in time polynomial in the size of the game by solving convex quadratic programs. Similar questions have been extensively studied in the model of non-atomic congestion games. To the best of our knowledge, this is the first study of the efficiency of taxes in atomic congestion games.
Abstract: Today˘s students are the citizens of tomorrow, and they should have the skills and tools to understand and respond to climate change. Green Awareness in Action (GAIA) has built an IoT infrastructure within 25 schools in Europe, in order to enable lectures that target sustainability and energy efficiency, based on data produced inside school buildings. The school community has reacted very positively to this approach and has reduced energy consumption as a consequence.
Abstract: In this work we consider temporal networks, i.e. networks defined by a labeling $\lambda$ assigning to each edge of an underlying graph G a set of discrete time-labels. The labels of an edge, which are natural numbers, indicate the discrete time moments at which the edge is available. We focus on path problems of temporal networks. In particular, we consider time-respecting paths, i.e. paths whose edges are assigned by $\lambda$ a strictly increasing sequence of labels. We begin by giving two efficient algorithms for computing shortest time-respecting paths on a temporal network. We then prove that there is a natural analogue of Menger’s theorem holding for arbitrary temporal networks. Finally, we propose two cost minimization parameters for temporal network design. One is the temporality of G, in which the goal is to minimize the maximum number of labels of an edge, and the other is the temporal cost of G, in which the goal is to minimize the total number of labels used. Optimization of these parameters is performed subject to some connectivity constraint. We prove several lower and upper bounds for the temporality and the temporal cost of some very basic graph families such as rings, directed acyclic graphs, and trees.
Abstract: In this work, we consider a \emph{solution of automata} similar to \emph{Population Protocols} and \emph{Network Constructors}. The automata (also called \emph{nodes}) move passively in a well-mixed solution without being capable of controlling their movement. However, the nodes can \emph{cooperate} by interacting in pairs. Every such interaction may result in an update of the local states of the nodes. Additionally, the nodes may also choose to connect to each other in order to start forming some required structure. We may think of such nodes as the \emph{smallest possible programmable pieces of matter}, like tiny nanorobots or programmable molecules. The model that we introduce here is a more applied version of Network Constructors, imposing \emph{physical} (or \emph{geometrical}) \emph{constraints} on the connections that the nodes are allowed to form. Each node can connect to other nodes only via a very limited number of \emph{local ports}, which implies that at any given time it has only a \emph{bounded number of neighbors}. Connections are always made at \emph{unit distance} and are \emph{perpendicular to connections of neighboring ports}. Though such a model cannot form abstract networks like Network Constructors, it is still capable of forming very practical \emph{2D or 3D shapes}. We provide direct constructors for some basic shape construction problems, like \emph{spanning line}, \emph{spanning square}, and \emph{self-replication}. We then develop \emph{new techniques} for determining the computational and constructive capabilities of our model. One of the main novelties of our approach, concerns our attempt to overcome the inability of such systems to detect termination. In particular, we exploit the assumptions that the system is well-mixed and has a unique leader, in order to \emph{give terminating protocols that are correct with high probability}. 
This allows us to develop terminating subroutines that can be \emph{sequentially composed} to form larger \emph{modular protocols} (which has not been the case in the relevant literature). One of our main results is a \emph{terminating protocol counting the size $n$ of the system} with high probability. We then use this protocol as a subroutine in order to develop our \emph{universal constructors}, establishing that \emph{it is possible for the nodes to become self-organized with high probability into arbitrarily complex shapes while still detecting termination of the construction}.
Abstract: We extend the population protocol model with a cover-time service that informs a walking state every time it covers the whole network. This represents a known upper bound on the cover time of a random walk. The cover-time service allows us to introduce termination into population protocols, a capability that is crucial for any distributed system. By reduction to an oracle-model we arrive at a very satisfactory lower bound on the computational power of the model: we prove that it is at least as strong as a Turing Machine of space log n with input commutativity, where n is the number of nodes in the network. We also give a log n-space, but nondeterministic this time, upper bound. Finally, we prove interesting similarities of this model to linear bounded automata.
Abstract: We extend the population protocol model with a cover-time service that informs a walking state every time it covers the whole network. This is simply a known upper bound on the cover time of a random walk. This allows us to introduce termination into population protocols, a capability that is crucial for any distributed system. By reduction to an oracle-model we arrive at a very satisfactory lower bound on the computational power of the model: we prove that it is at least as strong as a Turing Machine of space logn with input commutativity, where n is the number of nodes in the network. We also give a logn-space, but nondeterministic this time, upper bound. Finally, we prove interesting similarities of this model to linear bounded automata.
Abstract: In this work, we discuss multiplayer pervasive
games that rely on the use of ad hoc mobile sensor networks.
The unique feature in such games is that players interact
with each other and their surrounding environment by using
movement and presence as a means of performing game-related
actions, utilizing sensor devices. We discuss the fundamental
issues and challenges related to this type of game and the
scenarios associated with it. We also present and evaluate
an example of such a game, called the “Hot Potato”, developed
using the Sun SPOT hardware platform. We provide a set of
experimental results, both to evaluate our implementation
and to identify issues that arise in pervasive games that
utilize sensor network nodes; the results show that there is great
potential in this type of game.
Abstract: In this paper we present the design of jWebDust, a generic and modular application environment for developing and managing applications based on wireless sensor networks that are accessible via the internet. Our software architecture provides a range of services that allow the creation, with minimal implementation effort, of customized web-based applications that are easy to administer. We here present its open architecture, the most important design decisions, and discuss its distinct features and functionalities. jWebDust allows heterogeneous components to interoperate and the integrated management and control of multiple such networks by defining web-based mechanisms to visualize the network state, the results of queries, and a means to inject queries into the network.
Abstract: A lot of activity is being devoted to studying issues related to energy consumption and efficiency in our buildings, especially public buildings. In this context, educational public buildings should be an important part of the equation. At the same time, there is an evident need for open datasets, which should be publicly available for researchers to use. We have implemented a real-world multi-site Internet of Things (IoT) deployment, comprising 25 school buildings across Europe, primarily designed as a foundation for enabling IoT-based energy awareness and sustainability lectures and promoting data-driven energy-saving behaviors. In this work, we present some of the basic aspects of producing datasets from this deployment and discuss its potential uses. We also provide a brief discussion of data derived from a preliminary analysis of thermal comfort-related data produced by this infrastructure.
Abstract: Random scaled sector graphs were introduced as a generalization of random geometric graphs to model networks of sensors using optical communication. In the random scaled sector graph model, vertices are placed uniformly at random into the [0, 1]^2 unit square. Each vertex i is assigned uniformly at random a sector S_i, of central angle α_i, in a circle of radius r_i (with vertex i as the origin). An arc is present from vertex i to any vertex j, if j falls in S_i. In this work, we study the value of the chromatic number χ(G_n), directed clique number ω(G_n), and undirected clique number ω_2(G_n) for random scaled sector graphs with n vertices, where each vertex spans a sector of α degrees with radius r_n = √(ln n / n). We prove that for values α < π, as n → ∞ w.h.p., χ(G_n) and ω_2(G_n) are Θ(ln n / ln ln n), while ω(G_n) is O(1), showing a clear difference with the random geometric graph model. For α > π, w.h.p., χ(G_n) and ω_2(G_n) are Θ(ln n), being the same for random scaled sector and random geometric graphs, while ω(G_n) is Θ(ln n / ln ln n).
Abstract: A dichotomy theorem for a class of decision problems is a result asserting that certain problems in the class are solvable in polynomial time, while the rest are NP-complete. The first remarkable such dichotomy theorem was proved by Schaefer in 1978. It concerns the class of generalized satisfiability problems Sat(S), whose input is a CNF(S)-formula, i.e., a formula constructed from elements of a fixed set S of generalized connectives using conjunctions and substitutions by variables. Here, we investigate the complexity of minimal satisfiability problems Min Sat(S), where S is a fixed set of generalized connectives. The input to such a problem is a CNF(S)-formula and a satisfying truth assignment; the question is to decide whether there is another satisfying truth assignment that is strictly smaller than the given truth assignment with respect to the coordinate-wise partial order on truth assignments. Minimal satisfiability problems were first studied by researchers in artificial intelligence while investigating the computational complexity of propositional circumscription. The question of whether dichotomy theorems can be proved for these problems was raised at that time, but was left open. We settle this question affirmatively by establishing a dichotomy theorem for the class of all Min Sat(S)-problems, where S is a finite set of generalized connectives. We also prove a dichotomy theorem for a variant of Min Sat(S) in which the minimization is restricted to a subset of the variables, whereas the remaining variables may vary arbitrarily (this variant is related to extensions of propositional circumscription and was first studied by Cadoli). Moreover, we show that similar dichotomy theorems hold also when some of the variables are assigned constant values. Finally, we give simple criteria that tell apart the polynomial-time solvable cases of these minimal satisfiability problems from the NP-complete ones.
Abstract: We explore the capability of a network of extremely limited computational entities to decide properties about any of its subnetworks. We consider that the underlying network of the interacting entities (devices, agents, processes etc.) is modeled by a complete interaction graph and we devise simple graph protocols that can decide properties of some input subgraph provided by some preprocessing on the network. The agents are modeled as finite-state automata and run the same global graph protocol. Each protocol is a fixed size grammar, that is, its description is independent of the size (number of agents) of the network. This size is not known by the agents. We propose a simple model, the Mediated Graph Protocol (MGP) model, similar to the Population Protocol model of Angluin et al., in which each network link is characterized by a state taken from a finite set. This state can be used and updated during each interaction between the corresponding agents. We provide some interesting properties of the MGP model, among which is the ability to decide properties on stabilizing (initially changing for a finite number of steps) input graphs, and we show that the MGP model has the ability to decide properties of disconnected input graphs. We show that the computational power within the connected components is fairly restricted. Finally, we give an exact characterization of the class GMGP of graph languages decidable by the MGP model: it is equal to the class of graph languages decidable by a nondeterministic Turing Machine of linear space that receives its input graph by its adjacency matrix representation.
Abstract: We explore the capability of a network of extremely limited computational entities to decide properties about itself or any of its subnetworks. We consider that the underlying network of the interacting entities (devices, agents, processes etc.) is modeled by an interaction graph that reflects the network’s connectivity. We examine the following two cases: First, we consider the case where the input graph is the whole interaction graph and second where it is some subgraph of the interaction graph given by some preprocessing on the network. In each case, we devise simple graph protocols that can decide properties of the input graph. The computational entities, that are called agents, are modeled as finite-state automata and run the same global graph protocol. Each protocol is a fixed size grammar, that is, its description is independent of the size (number of agents) of the network. This size is not known by the agents. We present two simple models (one for each case), the Graph Decision Mediated Population Protocol (GDMPP) and the Mediated Graph Protocol (MGP) models, similar to the Population Protocol model of Angluin et al., where each network link (edge of the interaction graph) is characterized by a state taken from a finite set. This state can be used and updated during each interaction between the corresponding agents. We provide some example protocols and some interesting properties for the two models concerning the computability of graph languages in various settings (disconnected input graphs, stabilizing input graphs). We show that the computational power within the family of all (at least) weakly-connected input graphs is fairly restricted. 
Finally, we give an exact characterization of the class of graph languages decidable by the MGP model in the case of complete interaction graphs: it is equal to the class of graph languages decidable by a nondeterministic Turing Machine of linear space that receives its input graph by its adjacency matrix representation.
Abstract: We address several recent developments in non-cooperative as well as
evolutionary game theory, that give a new viewpoint to Complex Systems understanding.
In particular, we discuss notions like the anarchy cost, equilibria formation,
social costs and evolutionary stability. We indicate how such notions help
in understanding Complex Systems behaviour when the system includes selfish,
antagonistic entities of varying degrees of rationality. Our main motivation is the
Internet, perhaps the most complex artifact to date, as well as large-scale systems
such as the high-level P2P systems, where the interaction is among
humans, programmes and machines and centralized approaches cannot apply.
Abstract: This work addresses the possibility or impossibility, and the corresponding costs, of devising concurrent, low-contention implementations of atomic Read&Modify&Write (or RMW) operations in a distributed system. A natural class of monotone RMW operations associated with monotone groups, a certain class of algebraic groups introduced here, is considered. The popular Fetch&Add and Fetch&Multiply operations are examples from the class.
A Monotone Linearizability Lemma is proved and employed as a chief combinatorial instrument in
this work; it establishes inherent ordering constraints of linearizability for a certain class of executions
of any distributed system implementing a monotone RMW operation.
Abstract: Recent advances in the all-optical signal processing domain report high-speed and nontrivial functionality directly implemented in the optical layer. These developments mean that the all-optical processing of packet headers has a future. In this article we address various important control plane issues that must be resolved when designing networks based on all-optical packet-switched nodes.
Abstract: In this work we present the design of jWebDust, a
software environment for monitoring and controlling sensor networks via a web interface. Our software architecture provides a range of services that allow the creation, with minimal implementation effort, of customized applications that are easy to administer. We present its open architecture, the most important design decisions, and discuss its distinct features and functionalities. jWebDust will allow heterogeneous components to operate in the same sensor network, and the integrated management and control of multiple such networks, by defining web-based mechanisms to visualize the network state, the results of queries, and a means to inject queries into the network.
Abstract: We present the conceptual basis and the initial planning for an open
source management architecture for wireless sensor networks (WSN). Although
there is an abundance of open source tools serving the administrative needs of
WSN deployments, there is a lack of tools or platforms for high level integrated
WSN management. The current work is, to our knowledge, the first effort to
conceptualize and design a remote, integrated management platform for the
support of WSN research laboratories. The platform is based on the integration
and extension of two innovative platforms: jWebDust, a WSN operation and
management platform, and OpenRSM, an open source integrated remote
systems and network management platform. The proposed system architecture
can support several levels of integration in order to cover multiple,
qualitatively differentiated use-cases.
Abstract: In this paper we attempt to contribute to the integration of
mathematics and technological developments, and to demonstrate
their interplay in realizing the concept of a Digital Territory.
We describe the main mathematical tools that can be exploited in
the study of properties that emerge as soon as a population size
reaches a certain threshold point. Our aim is to show that
nowadays we have reached a level of mathematical and technological
maturity sufficient to model and simulate any possible world
model.
Abstract: In the near future, it is reasonable to expect that new types of systems will appear, of massive scale, operating in a constantly changing networked environment. We expect that most such systems will have the form of a large society of tiny networked artefacts. Angluin et al. introduced the notion of "Probabilistic Population Protocols" (PPP) in order to model the behavior of such systems, where extremely limited agents are represented as finite state machines that interact in pairs under the control of an adversary scheduler. We propose to study the dynamics of Probabilistic Population Protocols via the differential equations approach. We provide a very general model that allows us to examine the continuous dynamics of population protocols and we show that it includes the model of Angluin et al., under certain conditions, with respect to the continuous dynamics of the two models. Our main proposal here is to exploit the powerful tools of continuous nonlinear dynamics in order to examine the behavior of such systems. We also provide a sufficient condition for stability.
Abstract: Angluin et al. [1] introduced the notion of ``Probabilistic Population Protocols'', where extremely limited agents are represented as finite state machines that interact in pairs under the control of an adversary scheduler. We provide a very general model that allows us to examine the continuous dynamics of population protocols and we show that it includes the model of [1], under certain conditions, with respect to the continuous dynamics of the two models.
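The pairwise-interaction model summarized in the two abstracts above can be illustrated with a short discrete simulation. This is only a sketch with hypothetical helper names, using a one-way epidemic protocol that computes the logical OR of the input bits, not the construction of the papers:

```python
import random

def run_population_protocol(delta, init_states, steps, seed=0):
    """Simulate a population protocol: at each step the scheduler picks an
    ordered pair of distinct agents, which jointly update their states
    according to the transition map delta. Pairs absent from delta leave
    both states unchanged."""
    rng = random.Random(seed)
    states = list(init_states)
    n = len(states)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)  # scheduler choice, here uniform random
        states[i], states[j] = delta.get((states[i], states[j]),
                                         (states[i], states[j]))
    return states

# One-way epidemic computing the OR of the input bits:
# an agent in state '1' converts any interaction partner in state '0'.
OR_DELTA = {('1', '0'): ('1', '1'), ('0', '1'): ('1', '1')}
final = run_population_protocol(OR_DELTA, ['1'] + ['0'] * 9, steps=5000)
```

After enough interactions the whole population stabilizes to '1' whenever at least one input bit is 1; the expected number of interactions for the epidemic to spread is only O(n log n).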
Abstract: Advancements in both on-board and wireless communication technologies provide the necessary backbone for the deployment of a network between vehicles. Most of the envisioned applications for these networks would be greatly favored by multimedia support provisioning. However, there are several issues to be handled in order to provide multimedia services with reasonable quality. In this work, we analyze a specific multimedia service that can be used by several interesting applications: video broadcasting. Particularly, we thoroughly discuss and evaluate the role of redundancy in improving this service. We have verified that although redundancy does increase the effectiveness in video broadcasting, coding techniques do not improve redundancy's efficiency.
Abstract: We study the impact of fairness on the efficiency of allocations. We consider three different notions of fairness, namely proportionality, envy-freeness, and equitability for allocations of divisible and indivisible goods and chores. We present a series of results on the price of fairness under the three different notions that quantify the efficiency loss in fair allocations compared to optimal ones. Most of our bounds are either exact or tight within constant factors. Our study is of an optimistic nature and aims to identify the potential of fairness in allocations.
Abstract: I. Caragiannis, C. Kaklamanis, P. Kanellopoulos, and M. Kyropoulou. In Proceedings of the 5th Workshop on Internet and Network Economics (WINE 09), LNCS, Springer, 2009, to appear.
Abstract: We investigate the existence and efficient algorithmic construction
of close to optimal independent sets in random models of intersection
graphs. In particular, (a) we propose a new model for random
intersection graphs (Gn,m,p) which includes the model of [10] (the “uniform”
random intersection graphs model) as an important special case.
We also define an interesting variation of the model of random intersection
graphs, similar in spirit to random regular graphs. (b) For this
model we derive exact formulae for the mean and variance of the number
of independent sets of size k (for any k) in the graph. (c) We then propose
and analyse three algorithms for the efficient construction of large
independent sets in this model. The first two are variations of the greedy
technique while the third is a totally new algorithm. Our algorithms are
analysed for the special case of uniform random intersection graphs.
Our analyses show that these algorithms succeed in finding close to optimal
independent sets for an interesting range of graph parameters.
Abstract: The Greek School Network (GSN) is a closed nationwide
educational network that offers advanced telematic and
networking services to all primary/secondary education schools
and administration offices in Greece. The primary objective of
GSN is the provisioning of a network infrastructure for the interconnection
of school PC laboratories so that modern educational
methods and pedagogical models can be applied to the school
community. GSN has scaled in size, has reached maturity, and
is currently delivering a wide range of network and telematic
services to its users. The emerging power of open-source software
provides a sound technological basis for building cutting-edge
services, capable of meeting internal administrative and monitoring
needs, and modern pedagogical requirements for tools and
services. The current paper presents an overview of GSN and an
evaluation of its services based on the opinions of its users, and on
service utilization and traffic measurement statistics. The paper
reaches the conclusion that open-source solutions provide a sound
technological platform that can cover, to a great extent, the needs
for advanced educational services of the school community.
Abstract: Content integration of web data sources is becoming increasingly
important for the next generation information
systems. However, all proposed solutions are faced with
the same performance bottleneck: the network overhead
incurred when contacting the integrated e-sites. With this
demo paper we shall demonstrate the functionality of HyperHotel.
HyperHotel is used for finding appropriate hotel
rooms when travelling. Its novelties are that it is designed
and implemented as an internet web-hotel content integration
application and that it is built on top of D.I.C.E. and
Co.In.S.; a novel content integration infrastructure consisting
of a domain-independent COntent INtegration System
and its Data Integration Cache Engine. We'll show how the
infrastructure of D.I.C.E. and Co.In.S. can be applied and
exploited in HyperHotel in order to improve the response
time of complex user queries. This exemplifies the significance
of this infrastructure since HyperHotel is representative
of a large class of e-commerce, content integration
applications.
Abstract: A packet-switching network is stable if the number of packets in the network remains bounded at all times. A very natural question that arises in the context of stability properties of such networks is how network structure precisely affects these properties. In this work we embark on a systematic study of this question in the context of Adversarial Queueing Theory, which assumes that packets are adversarially injected into the network. We consider size, diameter, maximum vertex degree, minimum number of disjoint paths that cover all edges of the network, and network subgraphs as crucial structural parameters of the network, and we present a comprehensive collection of structural results, in the form of stability and instability bounds on the injection rate of the adversary for various greedy protocols:
• Increasing the size of a network may result in dropping its instability bound. This is shown through a novel, yet simple and natural, combinatorial construction of a size-parameterized network on which certain compositions of greedy protocols are running. The convergence of the drop to 0.5 is found to be fast and proportional to the increase in size.
• Maintaining the size of a network small may already suffice to drop its instability bound to a substantially low value. This is shown through a construction of a FIFO network with size 22, which becomes unstable at rate 0.704. This represents the current state-of-the-art trade-off between network size and instability bound.
• The diameter, maximum vertex degree and minimum number of edge-disjoint paths that cover a network may be used as control parameters for the stability bound of the network. This is shown through an improved analysis of the stability bound of any arbitrary FIFO network, which takes these parameters into account.
• How much can network subgraphs that are forbidden for stability affect the instability bound? Through improved combinatorial constructions of networks and executions, we improve the state-of-the-art instability bound induced by certain known forbidden subgraphs on networks running a certain greedy protocol.
Our results shed more light and contribute significantly to a finer understanding of the impact of structural parameters on stability and instability properties of networks.
Abstract: In this work, we study the impact of the dynamic changing of the network link capacities on the stability properties of packet-switched networks. Specifically, we consider the Adversarial, Quasi-Static Queueing Theory model, where each link capacity may take on only two possible (integer) values, namely 1 and C > 1, under a (w, ρ)-adversary. We obtain the following results:
• Allowing such dynamic changes to the link capacities of a network with just ten nodes that uses the LIS (Longest-in-System) protocol for contention resolution results in instability at rates [rate formula omitted] for large enough values of C.
• The combination of dynamically changing link capacities with compositions of contention-resolution protocols on network queues suffices for similar instability bounds: the composition of LIS with any of SIS (Shortest-in-System), NTS (Nearest-to-Source), and FTG (Furthest-to-Go) protocols is unstable at rates [rate formula omitted] for large enough values of C.
• The instability bound of the network subgraphs that are forbidden for stability is affected by the dynamic changes to the link capacities: we present improved instability bounds for all the directed subgraphs that were known to be forbidden for stability on networks running a certain greedy protocol.
Abstract: Consider k particles, 1 red and k-1 white, chasing each other on the nodes of a graph G. If the red one catches one of the white, it “infects” it with its color. The newly red particles are now available to infect more white ones. When is it the case that all white will become red? It turns out that this simple question is an instance of information propagation between random walks and has important applications to mobile computing where a set of mobile hosts acts as an intermediary for the spread of information.
In this paper we model this problem by k concurrent random walks, one corresponding to the red particle and k-1 to the white ones. The infection time Tk of infecting all the white particles with red color is then a random variable that depends on k, the initial position of the particles, the number of nodes and edges of the graph, as well as on the structure of the graph.
In this work we develop a set of probabilistic tools that we use to obtain upper bounds on the (worst case w.r.t. initial positions of particles) expected value of Tk for general graphs and important special cases. We easily get that an upper bound on the expected value of Tk is the worst case (over all initial positions) expected meeting time m* of two random walks multiplied by [factor omitted]. We demonstrate that this is, indeed, a tight bound; i.e. there is a graph G (a special case of the “lollipop” graph), a range of values k, and an initial position of the particles achieving this bound.
When G is a clique or has nice expansion properties, we prove much smaller bounds for Tk. We have evaluated and validated all our results by large scale experiments which we also present and discuss here. In particular, the experiments demonstrate that our analytical results for these expander graphs are tight.
Abstract: We consider a security problem on a distributed network.
We assume a network whose nodes are vulnerable to infection
by threats (e.g. viruses), the attackers. A system security
software, the defender, is available in the system. However,
due to the network's size and for economic and performance reasons,
it is capable of providing safety, i.e. cleaning nodes of
the possible presence of attackers, only for a limited part of
it. The objective of the defender is to place itself in such a
way as to maximize the number of attackers caught, while
each attacker aims not to be caught.
In [7], a basic case of this problem was modeled as a
non-cooperative game, called the Edge model. There, the
defender could protect a single link of the network. Here,
we consider a more general case of the problem where the
defender is able to scan and protect a set of k links of the
network, which we call the Tuple model. It is natural to expect
that this increased power of the defender should result
in a better quality of protection for the network. Ideally,
this would be achieved at little expense on the existence and
complexity of Nash equilibria (profiles where no entity can
improve its local objective unilaterally by switching placements
on the network).
In this paper we study pure and mixed Nash equilibria
in the model. In particular, we propose algorithms for computing
such equilibria in polynomial time and we provide a
polynomial-time transformation of a special class of Nash
equilibria, called matching equilibria, between the Edge
model and the Tuple model, and vice versa. Finally, we
establish that the increased power of the defender results in
higher-quality protection of the network.
Abstract: We consider non-cooperative unsplittable congestion games where players share resources, and each player's strategy is pure and consists of a subset of the resources on which it applies a fixed weight. Such games represent unsplittable routing flow games and also job allocation games. The congestion of a resource is the sum of the weights of the players that use it and the player's cost function is the sum of the utilities of the resources on its strategy. The social cost is the total weighted sum of the player's costs. The quality of Nash equilibria is determined by the price of anarchy (PoA) which expresses how much worse is the social outcome in the worst equilibrium versus the optimal coordinated solution. In the literature the predominant work has only been on games with polynomial utility costs, where it has been proven that the price of anarchy is bounded by the degree of the polynomial. However, no results exist on general bounds for non-polynomial utility functions.
Here, we consider general versions of these games in which the utility of each resource is an arbitrary non-decreasing function of the congestion. In particular, we consider a large family of superpolynomial utility functions which are asymptotically larger than any polynomial. We demonstrate that for every such function there exist games for which the price of anarchy is unbounded and increasing with the number of players (even if they have infinitesimal weights) while network resources remain fixed. We give tight lower and upper bounds which show this dependence on the number of players. Furthermore we provide an exact characterization of the PoA of all congestion games whose utility costs are bounded above by a polynomial function. Heretofore such results existed only for games with polynomial cost functions.
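For intuition about the price of anarchy discussed above, the pure PoA of a tiny singleton congestion game can be computed by brute force. This is an illustrative sketch with hypothetical names, not the paper's construction: each player places its whole weight on one resource, resource r has cost function cost_fns[r], and the social cost is the weighted sum of player costs, as in the abstract:

```python
from itertools import product

def pure_price_of_anarchy(weights, cost_fns):
    """Enumerate all pure profiles, find the worst pure Nash equilibrium's
    social cost, and divide by the optimal social cost."""
    n, R = len(weights), len(cost_fns)

    def loads(profile):
        load = [0.0] * R
        for w, r in zip(weights, profile):
            load[r] += w
        return load

    def social(profile):
        load = loads(profile)
        return sum(w * cost_fns[r](load[r]) for w, r in zip(weights, profile))

    def is_nash(profile):
        load = loads(profile)
        for w, r in zip(weights, profile):
            for r2 in range(R):
                # a strictly improving unilateral deviation kills the equilibrium
                if r2 != r and cost_fns[r2](load[r2] + w) < cost_fns[r](load[r]):
                    return False
        return True

    profiles = list(product(range(R), repeat=n))
    opt = min(social(p) for p in profiles)
    worst_ne = max(social(p) for p in profiles if is_nash(p))
    return worst_ne / opt

# Two unit-weight players, resources with linear costs x and 2x.
poa = pure_price_of_anarchy([1.0, 1.0], [lambda x: x, lambda x: 2 * x])
```

Here the optimum splits the players (social cost 3), while the worst equilibrium puts both players on the cheap resource (social cost 4, since neither can strictly improve by moving), giving a pure PoA of 4/3.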
Abstract: We consider a strategic game with two classes of confronting randomized players on a graph G(V,E): ν attackers, each choosing vertices and wishing to minimize the probability of being caught, and a defender, who chooses edges and gains the expected number of attackers it catches. The Price of Defense is the worst-case ratio, over all Nash equilibria, of the optimal gain of the defender over its gain at a Nash equilibrium. We provide a comprehensive collection of trade-offs between the Price of Defense and the computational efficiency of Nash equilibria.
– Through reduction to a Two-Players, Constant-Sum Game, we prove that a Nash equilibrium can be computed in polynomial time. The reduction does not provide any apparent guarantees on the Price of Defense.
– To obtain such, we analyze several structured Nash equilibria:
• In a Matching Nash equilibrium, the support of the defender is an Edge Cover. We prove that they can be computed in polynomial time, and they incur a Price of Defense of α(G), the Independence Number of G.
• In a Perfect Matching Nash equilibrium, the support of the defender is a Perfect Matching. We prove that they can be computed in polynomial time, and they incur a Price of Defense of |V|/2.
• In a Defender Uniform Nash equilibrium, the defender chooses uniformly each edge in its support. We prove that they incur a Price of Defense falling between those for Matching and Perfect Matching Nash Equilibria; however, it is NP-complete to decide their existence.
• In an Attacker Symmetric and Uniform Nash equilibrium, all attackers have a common support on which each uses a uniform distribution. We prove that they can be computed in polynomial time and incur a Price of Defense of either |V|/2 or α(G).
Abstract: Consider a network vulnerable to security attacks and equipped with defense mechanisms. How much is the loss in the provided security guarantees due to the selfish nature of attacks and defenses? The Price of Defense was recently introduced in [7] as a worst-case measure, over all associated Nash equilibria, of this loss. In the particular strategic game considered in [7], there are two classes of confronting randomized players on a graph G(V,E): v attackers, each choosing vertices and wishing to minimize the probability of being caught, and a single defender, who chooses edges and gains the expected number of attackers it catches. In this work, we continue the study of the Price of Defense. We obtain the following results:
- The Price of Defense is at least |V|/2; this implies that the Perfect Matching Nash equilibria considered in [7] are optimal with respect to the Price of Defense, so that the lower bound is tight.
- We define Defense-Optimal graphs as those admitting a Nash equilibrium that attains the (tight) lower bound of |V|/2. We obtain:
• A graph is Defense-Optimal if and only if it has a Fractional Perfect Matching. Since graphs with a Fractional Perfect Matching are recognizable in polynomial time, the same holds for Defense-Optimal graphs.
• We identify a very simple graph that is Defense-Optimal but has no Perfect Matching Nash equilibrium.
- Inspired by the established connection between Nash equilibria and Fractional Perfect Matchings, we transfer a known bivaluedness result about Fractional Matchings to a certain class of Nash equilibria. So, the connection to Fractional Graph Theory may be the key to revealing the combinatorial structure of Nash equilibria for our network security game.
Abstract: Let M be a single s-t network of parallel links with load-dependent latency functions shared by an infinite number of selfish users. This may yield a Nash equilibrium with unbounded Coordination Ratio [E. Koutsoupias, C. Papadimitriou, Worst-case equilibria, in: 16th Annual Symposium on Theoretical Aspects of Computer Science, STACS, vol. 1563, 1999, pp. 404-413; T. Roughgarden, E. Tardos, How bad is selfish routing? in: 41st IEEE Annual Symposium on Foundations of Computer Science, FOCS, 2000, pp. 93-102]. A Leader can decrease the coordination ratio by assigning flow αr on M, and then all Followers assign selfishly the (1-α)r remaining flow. This is a Stackelberg Scheduling Instance (M, r, α), 0 ≤ α ≤ 1. It was shown [T. Roughgarden, Stackelberg scheduling strategies, in: 33rd Annual Symposium on Theory of Computing, STOC, 2001, pp. 104-113] that it is weakly NP-hard to compute the optimal Leader's strategy. For any such network M we efficiently compute the minimum portion β_M of flow r > 0 needed by a Leader to induce M's optimum cost, as well as her optimal strategy. This shows that the optimal Leader's strategy on instances (M, r, α ≥ β_M) is in P. Unfortunately, Stackelberg routing in more general nets can be arbitrarily hard. Roughgarden presented a modification of Braess's Paradox graph, such that no strategy controlling αr flow can induce ≤ 1/α times the optimum cost. However, we show that our main result also applies to any s-t net G. We take care of the Braess's graph explicitly, as a convincing example. Finally, we extend this result to k commodities. A conference version of this paper has appeared in [A. Kaporis, P. Spirakis, The price of optimum in Stackelberg games on arbitrary single commodity networks and latency functions, in: 18th Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA, 2006, pp. 19-28]. Some preliminary results have also appeared as a technical report in [A.C. Kaporis, E. Politopoulou, P.G. Spirakis, The price of optimum in Stackelberg games, in: Electronic Colloquium on Computational Complexity, ECCC, (056), 2005].
Abstract: We study the problem of routing traffic through a congested network. We focus on the simplest case of a network consisting of m parallel links. We assume a collection of n network users; each user employs a mixed strategy, which is a probability distribution over links, to control the shipping of its own assigned traffic. Given a capacity for each link specifying the rate at which the link processes traffic, the objective is to route traffic so that the maximum (over all links) latency is minimized. We consider both uniform and arbitrary link capacities. How much decrease in global performance is necessary due to the absence of some central authority to regulate network traffic and implement an optimal assignment of traffic to links? We investigate this fundamental question in the context of Nash equilibria for such a system, where each network user selfishly routes its traffic only on those links available to it that minimize its expected latency cost, given the network congestion caused by the other users. We use the Coordination Ratio, originally defined by Koutsoupias and Papadimitriou, as a measure of the cost of lack of coordination among the users; roughly speaking, the Coordination Ratio is the ratio of the expectation of the maximum (over all links) latency in the worst possible Nash equilibrium, over the least possible maximum latency had global regulation been available. Our chief instrument is a set of combinatorial Minimum Expected Latency Cost Equations, one per user, that characterize the Nash equilibria of this system. These are linear equations in the minimum expected latency costs, involving the user traffics, the link capacities, and the routing pattern determined by the mixed strategies.
In turn, we solve these equations in the case of fully mixed strategies, where each user assigns its traffic with a strictly positive probability to every link, to derive the first existence and uniqueness results for fully mixed Nash equilibria in this setting. Through a thorough analysis and characterization of fully mixed Nash equilibria, we obtain tight upper bounds of no worse than O(ln n/ln ln n) on the Coordination Ratio for (i) the case of uniform capacities and arbitrary traffics and (ii) the case of arbitrary capacities and identical traffics.
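To make the Coordination Ratio concrete, here is a small sketch (our own illustration, not the paper's machinery): for n identical users on m identical unit-capacity links, the fully mixed Nash equilibrium assigns each user to each link with probability 1/m, and the social cost is the expected maximum link load.

```python
from itertools import product

def fully_mixed_social_cost(n, m):
    """Expected maximum link load when each of n identical users picks
    each of m identical unit-capacity links uniformly at random."""
    total = 0.0
    for choice in product(range(m), repeat=n):  # all m^n joint choices
        loads = [choice.count(j) for j in range(m)]
        total += max(loads) / m ** n
    return total

# Two users, two links: the optimum is 1 (one user per link),
# so the fully mixed equilibrium costs 1.5 times the optimum.
print(fully_mixed_social_cost(2, 2))  # 1.5
```

This brute-force enumeration is exponential in n and is meant only for tiny instances.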
Abstract: On input a random 3-CNF formula of clauses-to-variables ratio r3, the algorithm repeatedly applies
the following simple heuristic: Set to True a literal that appears in the maximum number of clauses,
irrespective of their size and the number of occurrences of the negation of the literal (ties are broken
randomly; 1-clauses when they appear get priority). We prove that for r3 < 3.42 this heuristic
succeeds with probability asymptotically bounded away from zero. Previously, heuristics of increasing
sophistication were shown to succeed for r3 < 3.26. We improve up to r3 < 3.52 by further exploiting
the degree of the negation of the evaluated to True literal.
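A minimal sketch of a greedy heuristic of this flavour (the function name and data layout are ours, not the authors' code): clauses are lists of signed integers, unit clauses get priority, and otherwise a literal occurring in the maximum number of clauses is set to True, with ties broken randomly.

```python
import random

def greedy_heuristic(clauses):
    """Repeatedly satisfy a unit clause if one exists; otherwise set to True
    a literal occurring in the maximum number of clauses (ties broken randomly).
    Returns a partial assignment {var: bool} on success, None on failure."""
    clauses = [set(c) for c in clauses]
    assignment = {}
    while clauses:
        if any(not c for c in clauses):
            return None  # an empty clause was produced: the heuristic failed
        units = [c for c in clauses if len(c) == 1]
        if units:
            lit = next(iter(units[0]))  # 1-clauses get priority
        else:
            counts = {}
            for c in clauses:
                for l in c:
                    counts[l] = counts.get(l, 0) + 1
            best = max(counts.values())
            lit = random.choice([l for l, n in counts.items() if n == best])
        assignment[abs(lit)] = lit > 0
        # drop satisfied clauses, shrink clauses containing the negation
        clauses = [c - {-lit} for c in clauses if lit not in c]
    return assignment
```

Note that this sketch ignores the refinement mentioned above (exploiting the degree of the negated literal), which is what pushes the analysis to r3 < 3.52.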
Abstract: We consider the Railway Traveling Salesman Problem (RTSP) in which a salesman using the railway network wishes to visit a certain number of cities to carry out his/her business, starting and ending at the same city, and having as goal to minimize the overall time of the journey. RTSP is an $\mathcal{NP}$ -hard problem. Although it is related to the Generalized Asymmetric Traveling Salesman Problem, in this paper we follow a direct approach and present a modelling of RTSP as an integer linear program based on the directed graph resulted from the timetable information. Since this graph can be very large, we also show how to reduce its size without sacrificing correctness. Finally, we conduct an experimental study with real-world and synthetic data that demonstrates the superiority of the size reduction approach.
Abstract: One of the most challenging problems in probability and complexity theory is
to establish and determine the satisfiability threshold, or phase transition, for
random k-SAT instances: Boolean formulas consisting of clauses with exactly k
literals. As the previous part of the volume has explored, empirical observations
suggest that there exists a critical ratio of the number of clauses to the number
of variables, such that almost all randomly generated formulas with a higher
ratio are unsatisfiable while almost all randomly generated formulas with a lower
ratio are satisfiable. The statement that such a crossover point really exists is
called the satisfiability threshold conjecture. Experiments hint at such a direction,
but as far as theoretical work is concerned, progress has been difficult. In an
important advance, Friedgut [23] showed that the phase transition is a sharp one,
though without proving that it takes place at a “fixed” ratio for large formulas.
Otherwise, rigorous proofs have focused on providing successively better upper
and lower bounds for the value of the (conjectured) threshold. In this chapter, our
Abstract: In this paper we examine spectral properties of random intersection graphs when the number of vertices is equal to the number of labels. We call this class symmetric random intersection graphs. We examine symmetric random intersection graphs when the probability that a vertex selects a label is close to the connectivity threshold $\tau_c$. In particular, we examine the size of the second eigenvalue of the transition matrix corresponding to the Markov Chain that describes a random walk on an instance of the symmetric random intersection graph $G_{n, n, p}$. We show that with high probability the second eigenvalue is upper bounded by some constant $\zeta < 1$.
Abstract: We propose and evaluate the performance of a new MAC-layer protocol for mobile ad hoc networks, called the Slow Start Power Controlled (abbreviated SSPC) protocol. SSPC improves on IEEE 802.11 by using power control for the RTS/CTS and DATA frame transmissions, so as to reduce energy consumption and increase network throughput and lifetime. In our scheme the transmission power used for the RTS frames is not constant, but follows a slow start principle. The CTS frames, which are sent at maximum transmission power, prevent the neighbouring nodes from transmitting their DATA frames at power levels higher than a computed threshold, while allowing them to transmit at power levels less than that threshold. Reduced energy consumption is achieved by adjusting the node transmission power to the minimum required value for reliable reception at the receiving node, while increase in network throughput is achieved by allowing more transmissions to take place simultaneously. The slow start principle used for calculating the appropriate DATA frames transmission power and the possibility of more simultaneous collision-free transmissions differentiate the SSPC protocol from the other MAC solutions proposed for IEEE 802.11. Simulation results indicate that the SSPC protocol achieves a significant reduction in power consumption, average packet delay and frequency of RTS frame collisions, and a significant increase in network throughput and received-to-sent packets ratio compared to IEEE 802.11 protocol.
Abstract: We review some recent advances for solving core algorithmic problems encountered in public
transportation systems. We show that efficient algorithms can make a great difference both in
efficiency and in optimality, thus contributing significantly to improving the quality and
service-efficiency of public transportation systems.
Abstract: In this paper we solve a particular instance of a class of differential equations that appears not to be widely documented in the literature and results from the theoretical analysis of a probabilistic key agreement protocol parameterized by an integer k > 1. This class consists of differential equations of the form dy(t)/dt = \sum_{i=0}^{k+1} f_i(t) y(t)^i, with k > 1 and f_i(t), 0 ≤ i ≤ k+1, specific real functions of t. This form appears to generalize the class of Abel differential equations of the first kind in that it involves the sought function to powers greater than 3, for k > 2. For k > 2 the class of differential equations is not amenable to the general solution methodology of the Abel class of differential equations. To the best of our knowledge, there are no previous efforts to tackle, analytically, differential equations of this form. In this paper we focus on the case k = 3. We show that the solution to the resulting differential equation may be written through the use of a class of Generalized HyperLambert functions. These functions generalize the well known LambertW function, which frequently arises in the analysis of engineering applications. The solution to the differential equation is achieved, interestingly, through a reduction to a particular real root of the polynomial of degree four that appears in the right-hand side of the differential equation. This technique can be of independent interest and it can be, more generally, applicable to other differential equations of a similar form not amenable to known analytic techniques.
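Written out, the class of equations above takes the following form (instantiated here for the case k = 3 studied in the paper), alongside the defining identity of the LambertW function it generalizes:

```latex
\frac{dy(t)}{dt} = \sum_{i=0}^{k+1} f_i(t)\, y(t)^i,
\qquad\text{so for } k = 3:\qquad
\frac{dy}{dt} = f_0(t) + f_1(t)\,y + f_2(t)\,y^2 + f_3(t)\,y^3 + f_4(t)\,y^4,
```

```latex
W(x)\, e^{W(x)} = x .
```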
Abstract: We consider random systems of linear equations over GF(2) in which every equation binds k variables. We obtain a precise description of the clustering of solutions in such systems. In particular, we prove that with probability that tends to 1 as the number of variables, n, grows: for every pair of solutions \sigma, \tau, either there exists a sequence of solutions \sigma,...,\tau, in which successive elements differ by O(log n) variables, or every sequence of solutions \sigma,...,\tau, contains a step requiring the simultaneous change of \Omega(n) variables. Furthermore, we determine precisely which pairs of solutions are in each category. Our results are tight and highly quantitative in nature. Moreover, our proof highlights the role of unique extendability as the driving force behind the success of Low Density Parity Check codes and our techniques also apply to the problem of so-called pseudo-codewords in such codes.
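As a toy illustration of the objects studied here (our own sketch, not the authors' proof machinery), one can enumerate the solution set of a small XOR system over GF(2) by brute force and then inspect Hamming distances between solutions:

```python
from itertools import product

def solutions(equations, n):
    """All 0/1 assignments satisfying a linear system over GF(2).
    Each equation is (variables, parity): the XOR of the listed
    variables must equal the parity bit."""
    sols = []
    for x in product((0, 1), repeat=n):
        if all(sum(x[v] for v in vars_) % 2 == b for vars_, b in equations):
            sols.append(x)
    return sols

# x0 ^ x1 = 1 and x1 ^ x2 = 0 over three variables
eqs = [((0, 1), 1), ((1, 2), 0)]
print(solutions(eqs, 3))  # [(0, 1, 1), (1, 0, 0)]
```

In this tiny example the two solutions differ in all three variables, loosely illustrating the kind of "well-separated" solution pairs the result above quantifies at scale.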
Abstract: In this work, we study the combinatorial structure and the
computational complexity of Nash equilibria for a certain game that
models selfish routing over a network consisting of m parallel links. We
assume a collection of n users, each employing a mixed strategy, which
is a probability distribution over links, to control the routing of its own
assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic
on those links that minimize its expected latency cost, given the network
congestion caused by the other users. The social cost of a Nash equilibrium
is the expectation, over all random choices of the users, of the
maximum, over all links, latency through a link.
We embark on a systematic study of several algorithmic problems related
to the computation of Nash equilibria for the selfish routing game we consider.
In a nutshell, these problems relate to deciding the existence of a
Nash equilibrium, constructing a Nash equilibrium with given support
characteristics, constructing the worst Nash equilibrium (the one with
maximum social cost), constructing the best Nash equilibrium (the one
with minimum social cost), or computing the social cost of a (given) Nash
equilibrium. Our work provides a comprehensive collection of efficient algorithms,
hardness results (both as NP-hardness and #P-completeness
results), and structural results for these algorithmic problems. Our results
span and contrast a wide range of assumptions on the syntax of the
Nash equilibria and on the parameters of the system.
Abstract: We study here dynamic antagonism in a fixed network, represented as a graph $G$ of $n$ vertices. In particular, we consider the case of $k \leq n$ particles walking randomly independently around the network. Each particle belongs to exactly one of two antagonistic species, none of which can give birth to children. When two particles meet, they are engaged in a (sometimes mortal) local fight. The outcome of the fight depends on the species to which the particles belong. Our problem is \emph{to predict} (i.e. to compute) the eventual chances of species survival. We prove here that this can indeed be done in \emph{expected polynomial time on the size of the network}, provided that the network is \emph{undirected}.
Abstract: In this paper we study the threshold behavior of the fixed radius random graph model and its applications to the key management problem of sensor networks and, generally, for mobile ad-hoc networks. We show that this random graph model can realistically model the placement of nodes within a certain region and their interaction/sensing capabilities (i.e. transmission range, light sensing sensitivity etc.). We also show that this model can be used to define key sets for the network nodes that satisfy a number of good properties, allowing them to set up secure communication with each other depending on randomly created sets of keys related to their current location. Our work hopes to inaugurate a study of key management schemes whose properties are related to properties of an appropriate random graph model and, thus, use the rich theory developed in the random graph literature in order to transfer "good" properties of the graph model to the key sets of the nodes.
Partially supported by the IST Programme of the European Union under contract number IST-2005-15964 (AEOLUS) and the INTAS Programme under contract with Ref. No 04-77-7173 (Data Flow Systems: Algorithms and Complexity (DFS-AC)).
Abstract: The problem of determining the unsatisfiability threshold for random 3-SAT formulas consists in determining the clause to variable
ratio that marks the experimentally observed abrupt change from almost surely satisfiable formulas to almost surely unsatisfiable. Up
to now, there have been rigorously established increasingly better lower and upper bounds to the actual threshold value. In this paper,
we consider the problem of bounding the threshold value from above using methods that, we believe, are of interest on their own
right. More specifically, we show how the method of local maximum satisfying truth assignments can be combined with results for
the occupancy problem in schemes of random allocation of balls into bins in order to achieve an upper bound for the unsatisfiability
threshold less than 4.571. In order to obtain this value, we establish a bound on the q-binomial coefficients (a generalization of the
binomial coefficients). No such bound was previously known, despite the extensive literature on q-binomial coefficients. Finally,
to prove our result we had to establish certain relations among the conditional probabilities of an event in various probabilistic
models for random formulas. It turned out that these relations were considerably harder to prove than the corresponding ones for
unconditional probabilities, which were previously known.
Abstract: We consider the frugal coverage problem, an interesting variation of set cover defined as follows. Instances of the problem consist of
a universe of elements and a collection of sets over these elements; the
objective is to compute a subcollection of sets so that the number of
elements it covers plus the number of sets not chosen is maximized. The
problem was introduced and studied by Huang and Svitkina [7] due to
its connections to the donation center location problem. We prove that
the greedy algorithm has approximation ratio at least 0.782, improving
a previous bound of 0.731 in [7]. We also present a further improvement
that is obtained by adding a simple corrective phase at the end of the
execution of the greedy algorithm. The approximation ratio achieved in
this way is at least 0.806. Our analysis is based on the use of linear
programs which capture the behavior of the algorithms in worst-case
examples. The obtained bounds are proved to be tight.
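A sketch of one natural reading of the greedy algorithm for frugal coverage (our own illustration; the names and the exact stopping rule are assumptions, not taken from [7]): repeatedly pick the set covering the most uncovered elements, as long as the new coverage exceeds the unit cost of choosing one more set.

```python
def frugal_greedy(universe, sets):
    """Greedy for frugal coverage: the objective is the number of elements
    covered plus the number of sets left unchosen. Picking a set gains its
    fresh coverage but loses one unchosen set, so we stop when no set
    covers more than one new element."""
    uncovered = set(universe)
    chosen = []
    remaining = [set(s) for s in sets]
    while remaining:
        gains = [len(s & uncovered) for s in remaining]
        if max(gains) <= 1:
            break  # no pick strictly improves the objective
        i = gains.index(max(gains))
        chosen.append(remaining.pop(i))
        uncovered -= chosen[-1]
    covered = len(universe) - len(uncovered)
    return chosen, covered + (len(sets) - len(chosen))

chosen, value = frugal_greedy({1, 2, 3, 4}, [{1, 2, 3}, {3, 4}, {4}])
print(value)  # 5: covers {1,2,3} and leaves two sets unchosen
```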
Abstract: We present an improved upper bound on the competitiveness of the online colouring algorithm First-Fit in disk graphs, which are graphs representing overlaps of disks on the plane. We also show that this bound is best possible for deterministic online colouring algorithms that do not use the disk representation of the input graph. We also present a related new lower bound for unit disk graphs.
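For reference, First-Fit colours each vertex, in arrival order, with the smallest colour absent from its already-coloured neighbourhood. A generic sketch (not specific to disk graphs, and not using any disk representation):

```python
def first_fit(vertices, adjacency):
    """Colour vertices in arrival order with the smallest feasible colour.
    adjacency[v] is the set of neighbours of v; colours are 0, 1, 2, ..."""
    colour = {}
    for v in vertices:
        used = {colour[u] for u in adjacency[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# A path a-b-c arriving in order: First-Fit uses two colours.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(first_fit(['a', 'b', 'c'], adj))  # {'a': 0, 'b': 1, 'c': 0}
```

The competitiveness question studied above is how many colours this online rule can be forced to use relative to the (offline) chromatic number, over adversarial arrival orders of disk graphs.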
Abstract: We study the load balancing problem in the context of a set of clients each wishing to run a job on a server selected among a subset of permissible servers for the particular client. We consider two different scenarios. In selfish load balancing, each client is selfish in the sense that it selects to run its job to the server among its permissible servers having the smallest latency given the assignments of the jobs of other clients to servers. In online load balancing, clients appear online and, when a client appears, it has to make an irrevocable decision and assign its job to one of its permissible servers. Here, we assume that the clients aim to optimize some global criterion but in an online fashion. A natural local optimization criterion that can be used by each client when making its decision is to assign its job to that server that gives the minimum increase of the global objective. This gives rise to greedy online solutions. The aim of this paper is to determine how much the quality of load balancing is affected by selfishness and greediness.
We characterize almost completely the impact of selfishness and greediness in load balancing by presenting new and improved, tight or almost tight bounds on the price of anarchy and price of stability of selfish load balancing as well as on the competitiveness of the greedy algorithm for online load balancing when the objective is to minimize the total latency of all clients on servers with linear latency functions.
Abstract: We study the load balancing problem in the context of a set of clients each
wishing to run a job on a server selected among a subset of permissible servers for
the particular client. We consider two different scenarios. In selfish load balancing,
each client is selfish in the sense that it chooses, among its permissible servers, to
run its job on the server having the smallest latency given the assignments of the
jobs of other clients to servers. In online load balancing, clients appear online and,
when a client appears, it has to make an irrevocable decision and assign its job to
one of its permissible servers. Here, we assume that the clients aim to optimize some
global criterion but in an online fashion. A natural local optimization criterion that
can be used by each client when making its decision is to assign its job to that server that gives the minimum increase of the global objective. This gives rise to greedy
online solutions. The aim of this paper is to determine how much the quality of load
balancing is affected by selfishness and greediness.
We characterize almost completely the impact of selfishness and greediness in
load balancing by presenting new and improved, tight or almost tight bounds on the
price of anarchy of selfish load balancing as well as on the competitiveness of the
greedy algorithm for online load balancing when the objective is to minimize the
total latency of all clients on servers with linear latency functions. In addition, we
prove a tight upper bound on the price of stability of linear congestion games.
Abstract: In this paper, the impact of burstification delay on the TCP
traffic statistics is presented as well as a new assembly scheme that uses
flow window size as the threshold criterion. It is shown that short assembly
times are ideally suited to sources with small congestion windows,
allowing for a speed up in their transmission. In addition, large assembly
times do not yield any throughput gain, despite the large number of
segments per burst transmitted, but result in low throughput variation and
thus greater fairness among the individual flows. To this end, in
this paper, we propose a new burst assembly scheme that dynamically
assigns flows to different assembly queues with different assembly timers,
based on their instant window size. Results show that the proposed scheme
with different timers provides a higher average throughput together with a
smaller variance which is a good compromise for bandwidth dimensioning.
Abstract: The timetable information problem can be solved by computing shortest paths in special graphs built from timetable data. In general, two models exist: the time-dependent and time-expanded network. In a recent work, both models are compared with respect to advantages and disadvantages on a theoretical and a practical framework. In addition, an extensive experimental evaluation reveals further differences with respect to query performance. However, delays which occur very frequently in railway systems are not covered. In this work, we show how the time-dependent and the time-expanded models should be updated in order to capture delays. It turns out that delays can be incorporated in the time-dependent model without changing the topology of the network. This is not true for the time-expanded model, whose updating involves a (sometimes large) sequence of edge insertions, deletions, and cost modifications.
Abstract: We give an overview of models and efficient algorithms for
optimally solving timetable information problems like “given a departure
and an arrival station as well as a departure time, which is the
connection that arrives as early as possible at the arrival station?” Two
main approaches that transform the problems into shortest path problems
are reviewed, including issues like the modeling of realistic details
(e.g., train transfers) and further optimization criteria (e.g., the number
of transfers). An important topic is also multi-criteria optimization,
where in general all attractive connections with respect to several criteria
shall be determined. Finally, we discuss the performance of the described
algorithms, which is crucial for their application in a real system.
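In the time-dependent approach, an earliest-arrival query reduces to a Dijkstra run in which relaxing an edge evaluates that edge's travel-time function at the current arrival time. A minimal sketch, under the assumption (ours) that each edge carries a function mapping departure time at its tail to arrival time at its head:

```python
import heapq

def earliest_arrival(graph, source, target, dep_time):
    """Time-dependent Dijkstra. graph[u] is a list of (v, f) pairs, where
    f(t) is the arrival time at v when leaving u at time t (with f(t) >= t,
    the usual FIFO assumption)."""
    best = {source: dep_time}
    pq = [(dep_time, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float('inf')):
            continue  # stale queue entry
        for v, f in graph.get(u, []):
            arr = f(t)
            if arr < best.get(v, float('inf')):
                best[v] = arr
                heapq.heappush(pq, (arr, v))
    return float('inf')

def train(departures, duration):
    """Edge function for a line with fixed departures and travel time."""
    return lambda t: min((d for d in departures if d >= t),
                         default=float('inf')) + duration

g = {'A': [('B', train([10, 20], 5))]}
print(earliest_arrival(g, 'A', 'B', 8))  # 15: board the 10 o'clock train
```

The `train` helper is a hypothetical stand-in for a timetable lookup; in a real system the edge functions are built from the timetable data as described above.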
Abstract: In this work we study how to process complex queries in DHT-based
Peer-to-Peer (P2P) data networks. Queries are made over tuples and relations
and are expressed in a query language, such as SQL. We describe existing
research approaches for query processing in P2P systems, we suggest
improvements and enhancements, and propose a unifying framework that
consists of a modified DHT architecture, data placement and search algorithms,
and provides efficient support for processing a variety of query types, including
queries with one or more attributes, queries with selection operators (involving
equality and range queries), and queries with join operators. To our knowledge,
this is the first work that puts forth a framework providing support for all these
query types.
Abstract: Peer-to-peer sharing systems are becoming
increasingly popular and an exciting new class of
innovative, internet-based data management
systems. In these systems, users contribute their
own resources (processing units and storage
devices) and content (i.e., documents) to the P2P
community. We focus on the management of
content and resources in such systems. Our goal
is to harness all available resources in the P2P
network so that the users can access all available
content efficiently. Efficiency is taken both from
(i) the point of view of the system, in that we
strive to ensure fair load distribution among all
peer nodes, and (ii) from the point of view of the
users, in that we strive to ensure low user-request
response times.
We propose a novel architecture for this new
class of applications, which differs drastically
from what is either found currently in existing
products or proposed in academia. We contribute
and study novel solutions that achieve our goals,
while at the same time addressing the formidable
challenges due to the autonomy of peers, their
heterogeneous processing and storage capacities,
their different content contributions, the huge
system scale, and the highly dynamic system
environment.
Abstract: In this paper, we discuss the integration of Wireless
Sensor Networks (WSN) and smart objects with the Web. We present a set of research challenges which we believe are the most important ones rising from this integration and propose a prototype system, Uberdust, which addresses such challenges. Uberdust is a brokerage web service for connecting smart objects to the Internet of Things, providing storage, sharing and discovery of real-time and historical data from smart objects, devices & building installations around the world via the Web. Our system provides high-level language-independent APIs so IoT application developers may choose their favorite programming or scripting languages.
Abstract: We consider optimal itinerary problems in time-table information systems supporting a vast number of on-line queries. We exhibit two important extensions of the time-dependent approach to model realistic versions of the Earliest Arrival and Minimum Number of Transfer problems, as well as of a combination of them, that could not be modeled by the original version of the time-dependent approach. We also provide heuristics that speed up implementations and present preliminary experimental results with real-world data.
Abstract: The peer-to-peer computing paradigm is an intriguing alternative to Google-style search
engines for querying and ranking Web content. In a network with many thousands or
millions of peers the storage and access load requirements per peer are much lighter
than for a centralized Google-like server farm; thus more powerful techniques from information
retrieval, statistical learning, computational linguistics, and ontological reasoning
can be employed on each peer's local search engine for boosting the quality
of search results. In addition, peers can dynamically collaborate on advanced and particularly
difficult queries. Moreover, a peer-to-peer setting is ideally suited to capture
local user behavior, like query logs and click streams, and disseminate and aggregate
this information in the network, at the discretion of the corresponding user, in order to
incorporate richer cognitive models.
This paper gives an overview of ongoing work in the EU Integrated Project DELIS
that aims to develop foundations for a peer-to-peer search engine with Google-or-better
scale, functionality, and quality, which will operate in a completely decentralized and
self-organizing manner. The paper presents the architecture of such a system and the
Minerva prototype testbed, and it discusses various core pieces of the approach: efficient
execution of top-k ranking queries, strategies for query routing when a search request
needs to be forwarded to other peers, maintaining a self-organizing semantic overlay
network, and exploiting and coping with user and community behavior.
Abstract: In this work we present a new simulation toolkit that we call TRAILS (Toolkit for Realism and Adaptivity In Large-scale Simulations), which extends the ns-2 simulator by adding several important functionalities and optimizing certain critical simulator operations. The added features focus on providing the user with the necessary tools to better study wireless networks of high dynamics; in particular, to implement advanced mobility patterns, obstacle presence and disaster scenarios, and failure injection. These scenarios and patterns can dynamically change throughout the execution of the simulation based on network-related parameters. Moreover, we define a set of utilities that can facilitate the use of ns-2, providing advanced statistics and easy-to-use logging mechanisms. This functionality is implemented in a simple and flexible architecture that follows design patterns, object-oriented and generic programming principles, maintaining a proper balance between reusability, extensibility and ease of use. We evaluate the performance of TRAILS and show that it offers significant speed-ups (at least 4 times faster) in the execution time of ns-2 in certain important, common wireless settings. Our results also show that this is achieved with minimum overhead in terms of memory usage.
Abstract: We examine the problem of transmitting in minimum time a given amount of data between a
source and a destination in a network with finite channel capacities and nonzero propagation delays. In
the absence of delays, the problem has been shown to be solvable in polynomial time. In this paper, we
show that the general problem is NP-complete. In addition, we examine transmissions along a single path,
called the quickest path, and present algorithms for general and special classes of networks that improve
upon previous approaches. The first dynamic algorithm for the quickest path problem is also
given.
Abstract: In this work, we introduce the notion of time to some well-known combinatorial optimization problems. In particular, we study problems defined on temporal graphs. A temporal graph D=(V,A) may be viewed as a time-sequence G_1,G_2,...,G_l of static graphs over the same (static) set of nodes V. Each G_t = D(t) = (V,A(t)) is called the instance of D at time t and l is called the lifetime of D. Our main focus is on analogues of traveling salesman problems in temporal graphs. A sequence of time-labeled edges (e.g. a tour) is called temporal if its labels are strictly increasing. We begin by considering the problem of exploring the nodes of a temporal graph as soon as possible. In contrast to the positive results known for the static case, we prove that it cannot be approximated within cn, for some constant c > 0, in general temporal graphs and within (2 − \varepsilon), for every constant \varepsilon > 0, in the special case in which D(t) is connected for all 1 <= t <= l, both unless P = NP. We then study the temporal analogue of TSP(1,2), abbreviated TTSP(1,2), where, for all 1 <= t <= l, D(t) is a complete weighted graph with edge-costs from {1,2} and the cost of an edge may vary from instance to instance. The goal is to find a minimum cost temporal TSP tour. We give several polynomial-time approximation algorithms for TTSP(1,2). Our best approximation is (1.7 + \varepsilon) for the generic TTSP(1,2) and (13/8 + \varepsilon) for its interesting special case in which the lifetime of the temporal graph is restricted to n. Along the way, we also introduce temporal versions of Maximum Matching, Path Packing, Max-TSP, and Minimum Cycle Cover, for which we obtain polynomial-time approximation algorithms and hardness results.
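The defining constraint above, that a sequence of time-labeled edges is temporal only if its labels are strictly increasing, can be checked with a few lines. The following is an illustrative sketch; the function name and the (u, v, t) edge encoding are our own, not taken from the paper:

```python
def is_temporal_walk(edges):
    """Check that a sequence of time-labeled edges (u, v, t) forms a
    temporal walk: consecutive edges share an endpoint and their
    time labels are strictly increasing."""
    for (u1, v1, t1), (u2, v2, t2) in zip(edges, edges[1:]):
        if v1 != u2:   # edges must be consecutive in the graph
            return False
        if t2 <= t1:   # labels must strictly increase over time
            return False
    return True

# A walk a->b->c with labels 1 < 3 is temporal ...
print(is_temporal_walk([("a", "b", 1), ("b", "c", 3)]))  # True
# ... but reusing label 3 violates the strict increase.
print(is_temporal_walk([("a", "b", 3), ("b", "c", 3)]))  # False
```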
Abstract: For the Internet of Things to finally become a reality, obstacles on different levels need to be overcome. This is especially true for the upcoming challenge of leaving the domain of technical experts and scientists. Devices need to connect to the Internet and be able to offer services. They have to announce and describe these services in machine understandable ways so that user-facing systems are able to find and utilize them. They have to learn about their physical surroundings, so that they can serve sensing or acting purposes without explicit configuration or programming. Finally, it must be possible to include IoT devices in complex systems that combine local and remote data, from different sources, in novel and surprising ways.
We show how all of that is possible today. Our solution uses open standards and state-of-the-art protocols to achieve this. It is based on 6LoWPAN and CoAP for the communications part, semantic web technologies for meaningful data exchange, autonomous sensor correlation to learn about the environment, and software built around the Linked Data principles to be open for novel and unforeseen applications.
Abstract: Today we are experiencing a major reconsideration of the computing
paradigm, as witnessed by the abundance and increasing frequency
of use of terms such as {\em ambient intelligence}, {\em ubiquitous computing}, {\em disappearing computer}, {\em grid
computer}, {\em global computing} and {\em mobile ad-hoc
networks}. Systems that can be described with such terms are of a
dynamic nature, with no clear physical boundary, and it seems
impossible (or, at least, difficult) to sharply define a
number of important properties holding with certainty as well as
holding throughout the whole lifetime of the system.
%
One such system property, which is important for the viability of
a system, is {\em trust}. Our departure point is the assumption
that it seems very difficult to define static system properties
related to trust and expect that they hold eternally in the
rapidly changing systems falling under the new computing paradigm.
One should, rather, attempt to define trust in terms of properties
that hold with some limiting probability as the system grows
and try to establish conditions that ensure that ``good''
properties hold {\em almost certainly}. Based on this viewpoint,
in this paper we provide a new framework for defining trust
through formally definable properties that hold, almost certainly,
in the limit in randomly growing combinatorial structures that
model ``shapeless'' computing systems (e.g. ad-hoc networks),
drawing on results that establish the threshold behavior of
predicates written in the first and second order logic.
Abstract: Digital optical logic circuits capable of performing bit-wise signal processing are critical building blocks for the realization of future high-speed packet-switched networks. In this paper, we present recent advances in all-optical processing circuits and examine the potential of their integration into a system environment. On this concept, we demonstrate serial all-optical Boolean AND/XOR logic at 20 Gb/s and a novel all-optical packet clock recovery circuit, with low capturing time, suitable for burst-mode traffic. The circuits use the semiconductor-based ultrafast nonlinear interferometer (UNI) as the nonlinear switching element. We also present the integration of these circuits in a more complex unit that performs header and payload separation from short synchronous data packets at 10 Gb/s. Finally, we discuss a method to realize a novel packet scheduling switch architecture, which guarantees lossless communication for specific traffic burstiness constraints, using these logic units.
Abstract: In this paper, we review recent advances in ultrafast optical time-domain technology with emphasis on their use in optical packet switching. In this respect, several key building blocks, including high-rate laser sources applicable to any time-division-multiplexing (TDM) application, optical logic circuits for bitwise processing, and clock-recovery circuits for timing synchronization with both synchronous and asynchronous data traffic, are described in detail. The circuits take advantage of the ultrafast nonlinear transfer function of semiconductor-based devices to operate successfully at rates beyond 10 Gb/s. We also demonstrate two more complex circuits, a header extraction unit and an exchange-bypass switch, operating at 10 Gb/s. These two units are key blocks for any general-purpose packet routing/switching application. Finally, we discuss the system perspective of all these modules and propose their possible incorporation in a packet switch architecture to provide low-level but high-speed functionalities. The goal is to perform as many operations as possible in the optical domain to increase node throughput and to alleviate the network from unwanted and expensive optical-electrical-optical conversions.
Abstract: Unattended surveillance of public transportation infrastructures, which can generate alarms in the case of a destructive event and provide information on the current status of the infrastructure as well as on the short interval before the event, can empower Search and Rescue teams with useful information, increasing their overall effectiveness and potentially saving lives. This paper presents a novel system that aims at providing scalable, unattended surveillance with networked embedded systems, cameras and sensors for the public transportation sector.
Abstract: The interpolation method, originally developed in statistical physics, transforms distributions of random CSPs to distributions of much simpler problems while bounding the change in a number of associated statistical quantities along the transformation path. After a number of further mathematical developments, it is now known that, in principle, the method can yield rigorous unsatisfiability bounds if one “plugs in an appropriate functional distribution”. A drawback of the method is that identifying appropriate distributions and plugging them in leads to major analytical challenges as the distributions required are, in fact, infinite dimensional objects. We develop a variant of the interpolation method for random CSPs on arbitrary sparse degree distributions which trades accuracy for tractability. In particular, our bounds only require the solution of a 1-dimensional optimization problem (which typically turns out to be very easy) and as such can be used to compute explicit rigorous unsatisfiability bounds.
Abstract: In this work, we discuss various aspects of the application of pervasive technologies inside an urban setting. In the last decade we have seen the emergence of a multitude of closely-related pervasive technologies that have only recently started to materialize on a grand scale, such as wireless sensor networks, RFID and NFC. We discuss the arising research challenges associated with such converging fields and we also provide a survey of the state-of-the-art related application scenarios, which we believe set their near-future applied context. Finally, we provide a more analytic discussion on three discrete systems that belong to this category of applications and give insight into the current state-of-the-art work in this field.
Abstract: In this work we add a training phase to an Impairment Aware Routing and Wavelength Assignment (IA-RWA) algorithm so as to improve its performance. The initial IA-RWA algorithm is a multi-parametric algorithm where a vector of physical impairment parameters is assigned to each link, from which the impairment vectors of candidate lightpaths are calculated. The important issue here is how to combine these impairment parameters into a scalar that would reflect the true transmission quality of a path. The training phase of the proposed IA-RWA algorithm is based on an optimization approach, called Particle Swarm Optimization (PSO), inspired by animal social behavior. The training phase gives the ability to the algorithm to be aware of the physical impairments even though the optical layer is seen as a black box. Our simulation studies show that the performance of the proposed scheme is close to that of algorithms that have explicit knowledge of the optical layer and the physical impairments.
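As an illustration of the optimization approach mentioned above, here is a minimal, generic 1-D PSO sketch. The toy objective merely stands in for the paper's (unspecified) training metric, and all function and parameter names are our own illustrative choices:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal 1-D particle swarm optimization sketch.
    Each particle keeps a velocity, its personal best and the swarm's
    global best; positions are clamped to [lo, hi]."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_val = [f(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest, gbest_val

# Toy convex objective standing in for a quality-of-transmission metric:
x, fx = pso_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(round(x, 2))  # converges near 2.0
```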
Abstract: The use of maker community tools and IoT technologies inside classrooms is spreading in an increasing number of education and science fields. GAIA is a European research project focused on achieving behavior change for sustainability and energy awareness in schools. In this work, we report on how a large IoT deployment in a number of educational buildings, and real-world data from this infrastructure, are utilized to support a "maker" lab kit activity inside the classroom, together with a serious game. We also provide some insights into the integration of these activities in the school curriculum, along with a discussion of our feedback so far from a series of workshop activities in a number of schools. Our initial results show strong acceptance by the school community.
Abstract: Recent activity in the field of Internet-of-Things experimentation has focused on the federation of discrete testbeds, thus placing less effort on the integration of other related technologies, such as smartphones; also, while it is gradually moving to more application-oriented paths, such as urban settings, it has largely not dealt with applications having social networking features. We argue here that current IoT infrastructure, testbeds and related software technologies should be used in such a context, capturing real-world human mobility and social networking interactions, for use in evaluating and fine-tuning realistic mobility models and designing human-centric applications. We discuss a system for producing traces for a new generation of human-centric applications, utilizing technologies such as Bluetooth and focusing on human interactions. We describe the architecture for this system and the respective implementation details presenting two distinct deployments; one in an office environment and another in an exhibition/conference event (FET'11, The European Future Technologies Conference and Exhibition) with 103 active participants combined, thus covering two popular scenarios for human-centric applications. Our system provides online, almost real-time, feedback and statistics and its implementation allows for rapid and robust deployment, utilizing mainstream technologies and components.
Abstract: The Internet of Things (IoT) and smart cities are two of the most popular directions the research community is pursuing very actively. But although we have made great progress in many fields, we are still trying to figure out how we can utilize our smart city and IoT infrastructures, in order to produce reliable, economically sustainable solutions that create public value, and even more so in the field of education.
GAIA, a Horizon 2020 EC-funded project, has developed an IoT infrastructure across school buildings in Europe. Its primary aim has been to raise awareness about energy consumption and sustainability, based on real-world sensor data produced inside the school buildings where students and teachers live and work. Today's students are the citizens of tomorrow, and they should have the skills to understand and respond to challenges like climate change. Currently, 25 educational building sites participate in GAIA, located in Sweden, Italy, and Greece. An IoT infrastructure [1] is installed in these buildings, monitoring in real-time their power consumption, as well as several indoor and outdoor environmental parameters.
Abstract: The utilization of IoT in the educational domain so far has trailed other more commercial application domains. In this chapter, we study a number of aspects that are
based on big data produced by a large-scale infrastructure deployed inside a fleet of educational buildings in Europe. We discuss how this infrastructure essentially enables a set of different applications, complemented by a detailed discussion regarding both performance aspects of the implementation of this IoT platform, as well as results that provide insights to its actual application in real life, both from an educational and business standpoint.
Abstract: In many fields of application, shortest path finding problems
in very large graphs arise. Scenarios where large numbers of on-line
queries for shortest paths have to be processed in real-time appear for example
in traffic information systems. In such systems, the techniques considered
to speed up the shortest path computation are usually based on
precomputed information. One approach proposed often in this context
is a space reduction, where precomputed shortest paths are replaced by
single edges with weight equal to the length of the corresponding shortest
path. In this paper, we give a first systematic experimental study of
such a space reduction approach. We introduce the concept of multi-level
graph decomposition. For one specific application scenario from the field
of timetable information in public transport, we perform a detailed analysis
and experimental evaluation of shortest path computations based
on multi-level graph decomposition.
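The space-reduction idea described above, replacing a precomputed shortest path by a single edge whose weight equals the path's length, can be illustrated with plain Dijkstra on a toy graph. The graphs and helper below are our own illustration, not the paper's multi-level construction:

```python
import heapq

def dijkstra(adj, s):
    """Plain Dijkstra over an adjacency dict {u: [(v, w), ...]}."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Original level: a chain a-b-c-d with unit weights.
base = {"a": [("b", 1)], "b": [("c", 1)], "c": [("d", 1)], "d": []}
# Space-reduced level: the precomputed shortest path a..d is replaced
# by a single shortcut edge of weight 3 (the path's length).
overlay = {"a": [("d", 3)], "d": []}

print(dijkstra(base, "a")["d"])     # 3
print(dijkstra(overlay, "a")["d"])  # 3, over a much smaller graph
```

Distances are preserved while the query-time graph shrinks, which is the point of the precomputation-based speed-up.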
Abstract: In this work we present two mobile, locative and collaborative distributed games that are played using wireless sensor
devices. We briefly present the architecture of the two games
and demonstrate their capabilities. The key characteristic of
these games is that players interact with each other and their
surrounding environment by moving, running and gesturing as a means of performing game-related actions, using sensor devices. We demonstrate our system by implementing it using
a combination of JAVA Standard and Mobile editions.
Abstract: A methodology for the estimation of the value of flexible networking in comparison with standard WDM solutions is presented, and specifications for the cost of the required flexible transponders and WSSs are extracted.
Abstract: Experimentally driven research for wireless sensor networks is invaluable to provide benchmarking and comparison of new ideas. An increasingly common tool in support of this is a testbed composed of real hardware devices which increases the realism of evaluation. However, due to hardware costs, the size and heterogeneity of these testbeds is usually limited. In addition, a testbed typically has a relatively static configuration in terms of its network topology and its software support infrastructure, which limits the utility of that testbed to specific case studies. We propose a novel approach that can be used to (i) interconnect a large number of small testbeds to provide a federated testbed of very large size, (ii) support the interconnection of heterogeneous hardware into a single testbed, and (iii) virtualise the physical testbed topology and thus minimise the need to relocate devices. We present the most important design issues of our approach and evaluate its performance. Our results indicate that testbed virtualisation can be achieved with high efficiency and without hindering the realism of experiments.
Abstract: In cooperative multiagent systems an alternative that maximizes the social welfare (the sum of utilities) can only be selected if each agent reports its full utility function. This may be infeasible in environments where communication is restricted. Employing a voting rule to choose an alternative greatly reduces the communication burden, but leads to a possible gap between the social welfare of the optimal alternative and the social welfare of the one that is ultimately elected. Procaccia and Rosenschein (2006) have introduced the concept of distortion to quantify this gap. In this paper, we present the notion of embeddings into voting rules: functions that receive an agent's utility function and return the agent's vote. We establish that very low distortion can be obtained using randomized embeddings, especially when the number of agents is large compared to the number of alternatives. We investigate our ideas in the context of three prominent voting rules with low communication costs: Plurality, Approval, and Veto. Our results arguably provide a compelling reason for employing voting in cooperative multiagent systems.
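The distortion notion discussed above can be made concrete on a toy utility profile. The sketch below is our own illustration (not the paper's embeddings): it elects an alternative by Plurality and measures the resulting welfare gap:

```python
def plurality_winner(utilities):
    """Each agent votes for its top alternative; ties broken by index."""
    m = len(utilities[0])
    votes = [0] * m
    for u in utilities:
        votes[max(range(m), key=lambda a: u[a])] += 1
    return max(range(m), key=lambda a: votes[a])

def distortion(utilities):
    """Ratio between the optimal social welfare and the welfare of the
    alternative elected by plurality (>= 1; 1 means no gap)."""
    m = len(utilities[0])
    welfare = [sum(u[a] for u in utilities) for a in range(m)]
    return max(welfare) / welfare[plurality_winner(utilities)]

# Two agents mildly prefer alternative 0, one strongly prefers
# alternative 1: plurality elects 0 (two votes) although 1 maximizes
# the sum of utilities, so the gap is 1.8 / 1.2.
profile = [[0.6, 0.4], [0.6, 0.4], [0.0, 1.0]]
print(round(distortion(profile), 3))  # 1.5
```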
Abstract: We study computationally hard combinatorial problems arising from the important engineering question of how to maximize the number of connections that can be simultaneously served in a WDM optical network. In such networks, WDM technology can satisfy a set of connections by computing a route and assigning a wavelength to each connection so that no two connections routed through the same fiber are assigned the same wavelength. Each fiber supports a limited number w of wavelengths and, in order to fully exploit the parallelism provided by the technology, one should select a set of connections of maximum cardinality which can be satisfied using the available wavelengths. This is known as the maximum routing and path coloring problem (maxRPC).
Our main contribution is a general analysis method for a class of iterative algorithms for a more general coloring problem. A lower bound on the benefit of such an algorithm in terms of the optimal benefit and the number of available wavelengths is given by a benefit-revealing linear program. We apply this method to maxRPC in both undirected and bidirected rings to obtain bounds on the approximability of several algorithms. Our results also apply to the problem maxPC where paths instead of connections are given as part of the input. We also study the profit version of maxPC in rings where each path has a profit and the objective is to satisfy a set of paths of maximum total profit.
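A simple greedy baseline helps make the wavelength-assignment setting concrete. The first-fit sketch below is a generic illustration on paths given as edge sets, not one of the iterative algorithms analyzed in the paper:

```python
def first_fit_coloring(paths, w):
    """Greedy first-fit wavelength assignment: give each path the
    smallest wavelength unused on every fiber (edge) it crosses;
    reject the path if all w wavelengths are blocked somewhere along
    it. Returns a dict mapping accepted path index -> wavelength."""
    used = {}       # edge -> set of wavelengths already assigned on it
    coloring = {}
    for i, path in enumerate(paths):
        blocked = set()
        for e in path:
            blocked |= used.get(e, set())
        for color in range(w):
            if color not in blocked:
                coloring[i] = color
                for e in path:
                    used.setdefault(e, set()).add(color)
                break
    return coloring

# Three paths on a 4-node ring; paths 0 and 1 share edge (1, 2), so
# they need distinct wavelengths, while path 2 is edge-disjoint.
paths = [{(0, 1), (1, 2)}, {(1, 2), (2, 3)}, {(3, 0)}]
print(first_fit_coloring(paths, w=2))  # {0: 0, 1: 1, 2: 0}
```

With only one wavelength (w=1), path 1 is rejected, illustrating why maximizing the number of satisfied connections becomes a nontrivial selection problem.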
Abstract: We demonstrate a compact, all-optical, packet clock and data recovery circuit that uses integrated MZIs. Clock is acquired within 2 bits irrespective of packet length and phase alignment. Error-free operation is demonstrated at 10 Gb/s.
Abstract: Numerous research efforts have produced a large number of algorithms and mechanisms for web proxy caches. In order to build powerful web proxies and understand their performance, one must be able to appreciate the impact and significance of earlier contributions and how they can be integrated. To do this we employ a cache replacement algorithm, 'CSP', which integrates key knowledge from previous work. CSP utilizes the communication Cost to fetch web objects, the objects' Sizes, their Popularities, an auxiliary cache and a cache admission control algorithm. We study the impact of these components with respect to hit ratio, latency, and bandwidth requirements.
Our results show that there are clear performance gains when utilizing the communication cost, the popularity of objects, and the auxiliary cache. In contrast, the size of objects and the admission controller have a negligible performance impact. Our major conclusions, going against those in related work, are that (i) LRU is preferable to CSP for important parameter values, (ii) accounting for the objects' sizes does not improve latency and/or bandwidth requirements, and (iii) the collaboration of nearby proxies is not very beneficial. Based on these results, we chart the problem solution space, identifying which algorithm is preferable and under which conditions. Finally, we develop a dynamic replacement algorithm that continuously utilizes the best algorithm as the problem-parameter values (e.g., the access distributions) change with time.
Abstract: We present the basic concepts behind the design and implementation of WebDust, a peer-to-peer platform for organizing,
monitoring and controlling wireless sensor networks, along with a discussion of its application regarding an actual testbed.
Our software architecture provides a range of services that allow the creation of customized applications with relatively low
implementation overhead. WebDust aims to allow heterogeneous components to operate in the same sensor network, and
give the ability to manage and control large numbers of such networks, possibly on a global scale. We also give insight into
several applications that can be implemented using our platform, and a description of our current testbed.
Abstract: In this work, a new algorithm for drawing a weighted random sample of size m from a population of n weighted items, where
m ≤ n, is presented. The algorithm can generate a weighted random sample in one pass over unknown populations.
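The one-pass behavior can be sketched with the well-known key-based approach to weighted reservoir sampling: each item receives the random key u^(1/w) with u uniform in (0, 1), and the m items with the largest keys form the sample. The code below is an illustrative sketch along these lines, not necessarily the paper's exact algorithm; weights are assumed positive:

```python
import heapq
import random

def weighted_sample_one_pass(stream, m, seed=0):
    """One-pass weighted random sampling sketch: keep the m items with
    the largest keys u**(1/w), where u is uniform in (0, 1).  The
    population size need not be known in advance."""
    rng = random.Random(seed)
    heap = []  # min-heap of (key, value), size at most m
    for value, w in stream:
        key = rng.random() ** (1.0 / w)  # w assumed > 0
        if len(heap) < m:
            heapq.heappush(heap, (key, value))
        elif key > heap[0][0]:
            # Evict the current smallest key in favor of this item.
            heapq.heapreplace(heap, (key, value))
    return [value for _, value in heap]

# Draw 2 of 4 items; "d" (weight 10) is selected far more often than
# the unit-weight items across different seeds.
sample = weighted_sample_one_pass([("a", 1), ("b", 1), ("c", 1), ("d", 10)], m=2)
print(len(sample))  # 2
```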
Abstract: In view of the apparent intractability of constructing Nash Equilibria (NE
in short) in polynomial time, even for bimatrix games, understanding the limitations
of the approximability of the problem is an important challenge.
In this work we study the tractability of a notion of approximate equilibria in bimatrix
games, called well supported approximate Nash Equilibria (SuppNE in short).
Roughly speaking, while the typical notion of approximate NE demands that each
player gets a payoff at least an additive term less than the best possible payoff, in a
SuppNE each player is assumed to adopt with positive probability only approximate
pure best responses to the opponent's strategy.
As a first step, we demonstrate the existence of SuppNE with small supports and
at the same time good quality. This is a simple corollary of Alth{\"o}fer's Approximation
Lemma, and implies a subexponential time algorithm for constructing SuppNE of
arbitrary (constant) precision.
We then propose algorithms for constructing SuppNE in win lose and normalized
bimatrix games (i.e., whose payoff matrices take values from {0, 1} and [0, 1] respectively).
Our methodology for attacking the problem is based on the solvability of zero sum bimatrix games (via its connection to linear programming) and provides a
0.5-SuppNE for win lose games and a 0.667-SuppNE for normalized games.
To our knowledge, this paper provides the first polynomial time algorithms constructing
ε-SuppNE for normalized or win lose bimatrix games, for any nontrivial
constant 0 ≤ ε < 1 bounded away from 1.
Abstract: We study the existence and tractability of a notion of approximate
equilibria in bimatrix games, called well supported approximate
Nash Equilibria (SuppNE in short). We prove existence of ε-SuppNE for
any constant ε ∈ (0, 1), with only logarithmic support sizes for both players.
Also we propose a polynomial–time construction of SuppNE, both
for win lose and for arbitrary (normalized) bimatrix games. The quality
of these SuppNE depends on the girth of the Nash Dynamics graph in
the win lose game, or a (rounded–off) win lose image of the original normalized
game. Our constructions are very successful in sparse win lose
games (i.e., having a constant number of (0, 1)-elements in the bimatrix)
with large girth in the Nash Dynamics graph. The same holds also for
normalized games whose win lose image is sparse with large girth.
Finally we prove the simplicity of constructing SuppNE both in random
normalized games and in random win lose games. In the former case we
prove that the uniform full mix is an o(1)-SuppNE, while in the case
of win lose games, we show that (with high probability) there is either a
PNE or a 0.5-SuppNE with support sizes only 2.
Abstract: A Nash equilibrium of a routing network represents a stable state of the network where no user finds it beneficial to unilaterally deviate from its routing strategy. In this work, we investigate the structure of such equilibria within the context of a certain game that models selfish routing for a set of n users each shipping its traffic over a network consisting of m parallel links. In particular, we are interested in identifying the worst-case Nash equilibrium – the one that maximizes social cost. Worst-case Nash equilibria were first introduced and studied in the pioneering work of Koutsoupias and Papadimitriou [9].
More specifically, we continue the study of the Conjecture of the Fully Mixed Nash Equilibrium, henceforth abbreviated as FMNE Conjecture, which asserts that the fully mixed Nash equilibrium, when existing, is the worst-case Nash equilibrium. (In the fully mixed Nash equilibrium, the mixed strategy of each user assigns (strictly) positive probability to every link.) We report substantial progress towards identifying the validity, methodologies to establish, and limitations of, the FMNE Conjecture.
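The social cost maximized by a worst-case equilibrium can be evaluated by brute force for tiny instances. The sketch below is our own illustration of the model's social cost (expected maximum link load on identical parallel links) via exhaustive enumeration of pure outcomes, which is feasible only for very small n and m:

```python
from itertools import product

def social_cost(weights, probs):
    """Expected maximum link load of a mixed profile on identical
    parallel links, computed by enumerating all m**n pure outcomes.
    probs[i][j] = probability that user i assigns its traffic to link j."""
    m = len(probs[0])
    cost = 0.0
    for outcome in product(range(m), repeat=len(weights)):
        p = 1.0
        for i, j in enumerate(outcome):
            p *= probs[i][j]
        loads = [0.0] * m
        for i, j in enumerate(outcome):
            loads[j] += weights[i]
        cost += p * max(loads)
    return cost

# Two identical users on two identical links: in the fully mixed
# profile they collide with probability 1/2, so the expected maximum
# load is 0.5 * 2 + 0.5 * 1 = 1.5.
fully_mixed = [[0.5, 0.5], [0.5, 0.5]]
print(social_cost([1.0, 1.0], fully_mixed))  # 1.5
```

On this tiny instance the fully mixed profile indeed has a higher social cost than the pure assignment that separates the two users (cost 1.0), consistent with the intuition behind the FMNE Conjecture.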
Abstract: Data propagation in wireless sensor networks can be performed either by hop-by-hop single transmissions or by multi-path broadcast of data. Although several energy-aware MAC layer protocols exist that operate very well in the case of single point-to-point transmissions, none is especially designed and suitable for multiple broadcast transmissions. In this paper we propose a family of new protocols suitable for multi-path broadcast of data, and show, through a detailed and extended simulation evaluation, that our parameter-based protocols significantly reduce the number of collisions and thus increase the rate of successful message delivery (to above 90%) by trading off the average propagation delay. At the same time, our protocols are shown to be very energy efficient, in terms of the average energy dissipation per delivered message.
Abstract: In this paper we present an overview of WISEBED, a large-scale wireless sensor network testbed, which is currently being built for research purposes. This project is led by a number of European Universities
and Research Institutes, hoping to provide scientists, researchers and companies with an environment to conduct experiments with, in order to evaluate and validate their sensor network-related work. The initial planning of the project includes a large, heterogeneous testbed, consisting of at least 9 geographically disparate networks that include both sensor and actuator nodes, and scaling in the order of thousands (currently 550 nodes in total). We present here the overall architecture
of WISEBED, focusing on certain aspects of the software ecosystem surrounding the project, such as the Open Federation Alliance, which will enable a view of the whole testbed, or parts of it, as single entities, and the testbed's tight integration with the Shawn network simulator. We also present examples of the actual hardware used currently in the testbed and outline the architecture of two of the testbed's sites.
Abstract: There exists a great number of algorithms for wireless sensor networks (WSNs) that have never been tried in practice. This is due to the fact that programming sensor nodes still happens on a very technical level. We remedy the situation by introducing our algorithm library Wiselib, which allows for simple implementations of algorithms. It can adapt to a large variety of hardware and software. This is achieved by employing advanced C++ techniques such as templates and inline functions, which allow writing generic code that is resolved and bound at compile time, resulting in virtually no memory or computation overhead at run time. The Wiselib runs on different host operating systems such as Contiki, iSense OS, and ScatterWeb. Furthermore, it runs on virtual nodes simulated by Shawn. The Wiselib provides an algorithm with data structures that suit the specific properties of the target platform. Algorithm code does not contain any platform-specific specializations, allowing a single implementation to run natively on heterogeneous networks. In this paper, we describe the building blocks of the Wiselib, analyze the overhead, and show how cryptographically secured routing algorithms can be implemented. We also report on results from experiments with real sensor node hardware.
Abstract: Motivated by the problem of efficient sensor network data collection via a mobile sink, we present ongoing research in accelerated random walks on Random Geometric Graphs. We first propose a new type of random walk, called the α-stretched random walk, and compare it to three known random walks. We also define a novel performance metric called Proximity Cover Time which, along with other metrics such as visit overlap statistics and proximity variation, we use to evaluate the performance properties and features of the various walks. Finally, we present future plans on investigating a relevant combinatorial property of Random Geometric Graphs that may lead to new, faster random walks and metrics.
Abstract: In this paper we describe a new simulation platform for complex wireless sensor networks that operate a collection of distributed algorithms and network protocols. Simulating such systems is complicated because of the need to coordinate different network layers and debug protocol stacks, often with very different interfaces, options, and fidelities. Our platform (which we call WSNGE) is a flexible and extensible environment that provides a highly scalable simulator with unique characteristics. It focuses on user friendliness, providing every function in both scriptable and visual form, allowing the researcher to define simulations and view results in an easy-to-use graphical environment. Unlike other solutions, WSNGE does not distinguish between different scenario types, allowing multiple different protocols to run at the same time. It enables rich online interaction with running simulations, allowing parameters, topologies or the whole scenario to be altered at any point in time.