Abstract: Simultaneous oscillation of 23 wavelengths, spaced at 100 GHz, is demonstrated from a single
source using a semiconductor optical amplifier linear cavity. The wavelength comb is generated from an
intra-cavity, fiber-implemented Lyot filter. Each oscillating wavelength has a linewidth of 12.5 GHz,
and the maximum power variation between the 23 wavelengths is less than 3 dB.
Abstract: We demonstrate a 40 Gb/s self-synchronizing, all-optical packet
clock recovery circuit designed for efficient packet-mode traffic. The circuit
locks instantaneously and enables sub-nanosecond packet spacing due to the
low clock persistence time. A low-Q Fabry-Perot filter is used as a passive
resonator tuned to the line-rate that generates a retimed clock-resembling
signal. As a reshaping element, an optical power-limiting gate is
incorporated to perform bitwise pulse equalization. Using two preamble
bits, the clock is captured instantly and persists for the duration of the data
packet plus 16 bits. The performance of the circuit suggests its
suitability for future all-optical packet-switched networks with reduced
transmission overhead and fine network granularity.
Abstract: We demonstrate instantaneous 40 Gb/s clock extraction from 1 ns long data packets
separated by 750 ps. The circuit comprises a Fabry-Perot filter and an all-optical power limiting
gate and requires very short inter-packet guardbands.
Abstract: The inference problem for propositional circumscription is known to
be highly intractable and, in fact, harder than the inference problem for classical
propositional logic. More precisely, in its full generality this problem is
$\Pi_2^p$-complete, which means that it has the same inherent computational complexity
as the satisfiability problem for quantified Boolean formulas with two alternations
(universal-existential) of quantifiers. We use Schaefer's framework of generalized
satisfiability problems to study the family of all restricted cases of the inference
problem for propositional circumscription. Our main result yields a complete classification
of the ``truly hard'' ($\Pi_2^p$-complete) and the ``easier'' cases of this problem
(reducible to the inference problem for classical propositional logic). Specifically,
we establish a dichotomy theorem which asserts that each such restricted case either
is $\Pi_2^p$-complete or is in coNP. Moreover, we provide efficiently checkable criteria
that tell apart the ``truly hard'' cases from the ``easier'' ones. We show our results both
when the formulas involved are and are not allowed to contain constants. The present
work complements a recent paper by the same authors, where a complete classification
into hard and easy cases of the model-checking problem in circumscription
was established.
Abstract: We give an efficient local search
algorithm that computes a good vertex coloring of a graph $G$. In
order to better illustrate this local search method, we view local
moves as selfish moves in a suitably defined game. In particular,
given a graph $G=(V,E)$ of $n$ vertices and $m$ edges, we define
the \emph{graph coloring game} $\Gamma(G)$ as a strategic game
where the set of players is the set of vertices and the players
share the same action set, which is a set of $n$ colors. The
payoff that a vertex $v$ receives, given the actions chosen by all
vertices, equals the total number of vertices that have chosen the
same color as $v$, unless a neighbor of $v$ has also chosen the
same color, in which case the payoff of $v$ is 0. We show:
\begin{itemize}
\item The game $\Gamma(G)$ always has pure Nash equilibria. Each
pure equilibrium is a proper coloring of $G$. Furthermore, there
exists a pure equilibrium that corresponds to an optimum coloring.
\item We give a polynomial time algorithm $\mathcal{A}$ which
computes a pure Nash equilibrium of $\Gamma(G)$. \item The total
number, $k$, of colors used in \emph{any} pure Nash equilibrium
(and thus achieved by $\mathcal{A}$) is $k\leq\min\{\Delta_2+1,
\frac{n+\omega}{2}, \frac{1+\sqrt{1+8m}}{2}, n-\alpha+1\}$, where
$\omega, \alpha$ are the clique number and the independence number
of $G$ and $\Delta_2$ is the maximum degree that a vertex can have
subject to the condition that it is adjacent to at least one
vertex of equal or greater degree. ($\Delta_2$ is no more than the
maximum degree $\Delta$ of $G$.) \item Thus, in fact, we propose
here a \emph{new}, \emph{efficient} coloring method that achieves
a number of colors \emph{satisfying (together) the known general
upper bounds on the chromatic number $\chi$}. Our method is also
an alternative general way of \emph{proving},
\emph{constructively}, all these bounds. \item Finally, we show
how to strengthen our method (staying in polynomial time) so that
it avoids ``bad'' pure Nash equilibria (i.e. those admitting a
number of colors $k$ far away from $\chi$). In particular, we show
that our enhanced method colors \emph{optimally} dense random
$q$-partite graphs (of fixed $q$) with high probability.
\end{itemize}
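The selfish-improvement dynamics behind the abstract above can be sketched directly from the stated payoff rule. The following is a minimal illustrative implementation of repeated best-response moves (not the paper's algorithm $\mathcal{A}$; function names and the tie-breaking order are our own choices):

```python
import random

def payoff(v, coloring, adj):
    """Payoff of vertex v: 0 if some neighbor shares v's color,
    otherwise the number of vertices (including v) using that color."""
    c = coloring[v]
    if any(coloring[u] == c for u in adj[v]):
        return 0
    return sum(1 for u in coloring if coloring[u] == c)

def best_response_coloring(adj, seed=0):
    """Iterate selfish best-response moves until no vertex can improve,
    i.e. until a pure Nash equilibrium (a proper coloring) is reached."""
    rng = random.Random(seed)
    n = len(adj)
    coloring = {v: rng.randrange(n) for v in adj}  # action set: n colors
    improved = True
    while improved:
        improved = False
        for v in adj:
            best_c, best_p = coloring[v], payoff(v, coloring, adj)
            for c in range(n):
                old = coloring[v]
                coloring[v] = c
                p = payoff(v, coloring, adj)
                coloring[v] = old
                if p > best_p:
                    best_c, best_p = c, p
            if best_c != coloring[v]:
                coloring[v] = best_c
                improved = True
    return coloring
```

At any equilibrium every vertex has payoff at least 1 (it counts itself and no neighbor conflicts), which is exactly the "every pure equilibrium is a proper coloring" property claimed above.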
Abstract: Known general proofs of Nash's Theorem (about the existence of Nash Equilibria (NEa) in finite strategic games) rely on the use of a fixed point theorem (e.g., Brouwer's or Kakutani's). While it seems that there is no general way of proving the existence of Nash equilibria without the use of a fixed point theorem, there do exist some proofs (not so common in the CS literature) that seem to indicate alternative proof paths, for games of two players. This note discusses two such proofs.
Abstract: We demonstrate a simple method for upgrading the repetition rate of 10 GHz optical sources to 40 GHz. It employs a Fabry-Perot filter and the saturation properties of a Semiconductor Optical Amplifier.
Abstract: Smart Dust is a set of a vast number of ultra-small fully
autonomous computing and communication devices, with very
restricted energy and computing capabilities, that co-operate to
quickly and efficiently accomplish a large sensing task. Smart
Dust can be very useful in practice, e.g., in the local detection of
a remote crucial event and the propagation of data reporting its
realization. In this work we continue (see [POMC02]) our
effort towards the research on smart dust from a basic algorithmic
point of view. Under a simple but realistic model for smart dust
we present an interesting problem, which is how to propagate
efficiently information on an event detected locally. Then we
present a new smart dust protocol, which we call the
``Sleep-Awake'' protocol, for information propagation that explicitly uses the energy-saving features (i.e., the alternation of sleeping and awake time periods) of the smart dust particles. By using both a first probabilistic analysis and extensive
experiments, we provide some first concrete results for the
success probability and the time and energy efficiency of the
protocol, in terms of parameters of the smart dust network. We
note that the study of the interplay of these parameters allows us
to program the smart dust network characteristics accordingly.
Abstract: We investigate the problem of efficient wireless energy recharging in Wireless Rechargeable Sensor Networks (WRSNs). In
such networks a special mobile entity (called the Mobile Charger) traverses the network and wirelessly replenishes the energy
of sensor nodes. In contrast to most current approaches, we envision methods that are distributed, adaptive and use limited
network information. We propose three new, alternative protocols for efficient recharging, addressing key issues which we
identify, most notably (i) to what extent each sensor should be recharged, (ii) what is the best split of the total energy between
the charger and the sensors, and (iii) what are good trajectories the MC should follow. One of our protocols (LRP) performs
some distributed, limited sampling of the network status, while another one (RTP) reactively adapts to energy shortage alerts
judiciously spread in the network. As detailed simulations demonstrate, both protocols significantly outperform known
state-of-the-art methods, while their performance gets quite close to the performance of the global knowledge method (GKP) we
also provide, especially in heterogeneous network deployments.
Abstract: In this paper, we demonstrate clock extraction from
10 Gb/s asynchronous short data packets. Successful clock
acquisition is achieved from data packets arriving at time
intervals of only 1.5 ns, irrespective of their precise phase
relation. The clock recovery circuit used consists of a
Fabry-Perot filter (FPF) and a non-linear UNI gate and requires very
short time for synchronization.
Abstract: Packet clock generation from flag bits is demonstrated, using a Fabry-Perot filter followed by a semiconductor optical amplifier. Ten clock pulses are generated from a single pulse with less than 0.45 dB amplitude modulation.
Abstract: This article presents a novel crawling and
clustering method for extracting and processing cultural data from the web in a fully
automated fashion. Our architecture relies
upon a focused web crawler to download web documents relevant to culture. The
focused crawler is a web crawler that
searches and processes only those web pages
that are relevant to a particular topic. After downloading the pages, we extract from
each document a number of words for each thematic cultural area, filtering the documents
with non-cultural content; we then create multidimensional document vectors
comprising the most frequent cultural term occurrences. We calculate the dissimilarity
between the cultural-related document vectors and, for each cultural theme, we use
cluster analysis to partition the documents into a number of clusters. Our approach is
validated via a proof-of-concept application which analyzes hundreds of web pages
spanning different cultural thematic areas.
Abstract: We consider approval voting elections in which each voter votes for a (possibly empty) set of candidates and the outcome consists of a set of k candidates for some parameter k, e.g., committee elections. We are interested in the minimax approval voting rule in which the outcome represents a compromise among the voters, in the sense that the maximum distance between the preference of any voter and the outcome is as small as possible. This voting rule has two main drawbacks. First, computing an outcome that minimizes the maximum distance is computationally hard. Furthermore, any algorithm that always returns such an outcome provides
incentives to voters to misreport their true preferences.
In order to circumvent these drawbacks, we consider approximation algorithms, i.e., algorithms that produce an outcome that approximates the minimax distance for any given instance. Such algorithms can be considered as alternative voting rules. We present a polynomial-time 2-approximation algorithm that uses a natural linear programming relaxation for the underlying optimization problem and deterministically
rounds the fractional solution in order to compute the outcome; this result improves upon the previously best known algorithm that has an approximation ratio of 3. We are furthermore interested in approximation algorithms that are resistant to manipulation by (coalitions of) voters, i.e., algorithms that do not motivate voters to misreport their true preferences in order to improve their distance from the outcome. We complement previous results in the literature with new upper and lower bounds on strategyproof and group-strategyproof algorithms.
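The optimization problem underlying the minimax rule above can be made concrete with a tiny brute-force solver (an illustrative sketch of the exact problem only, not the LP-rounding 2-approximation of the abstract; names are ours):

```python
from itertools import combinations

def hamming(ballot, committee, candidates):
    """Hamming distance between a ballot (set of approved candidates)
    and a committee, taken over the full candidate set."""
    return sum(1 for c in candidates if (c in ballot) != (c in committee))

def minimax_committee(ballots, candidates, k):
    """Exact minimax approval outcome by exhaustive search over all
    size-k committees (exponential in general; fine for tiny instances)."""
    best, best_score = None, None
    for committee in combinations(sorted(candidates), k):
        s = set(committee)
        score = max(hamming(b, s, candidates) for b in ballots)
        if best_score is None or score < best_score:
            best, best_score = s, score
    return best, best_score
```

For example, with candidates {a, b, c}, k = 1 and ballots {a}, {a, b}, {a, c}, the committee {a} has maximum distance 1, which is optimal; the computational hardness noted in the abstract is precisely why such exhaustive search does not scale and approximation algorithms are needed.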
Abstract: Purpose – To examine broadband competition and broadband penetration in a set of countries that employ the same regulation framework. To define the policy and strategy required to promote broadband in weak markets that do not employ alternative infrastructures.
Design/methodology/approach – Study penetration and competition level statistics from 2002 to 2005 in a set of countries with different infrastructures deployed, services provided as well as in their social-economic structures but employing the same regulation framework. Measure the level of inter-platform and intra-platform competition as well as the availability of bitstream access versus the incumbents' shares.
Findings – The paper concludes that a mature broadband market is one that exhibits a high penetration ratio in combination with a high competition level. Bitstream access can counterbalance the absence of alternative broadband infrastructures, especially in weak markets. In particular, the availability of numerous bitstream access types in combination with proper price differentiation can fuel broadband adoption in relatively weak broadband markets.
Originality/value – The paper challenges the general rule that only platform-based (also known as facility-based) competition guarantees long-term growth of the broadband market. Bitstream and resale access do not lag behind local loop unbundling and can be used in weak markets that do not employ alternative infrastructures to fuel competition in the relevant markets. Different policies and strategies must be followed, in that case, on behalf of the local NRA.
Abstract: We propose and evaluate fast reservation (FR)
protocols for Optical Burst Switched (OBS) networks. The
proposed reservation schemes aim at reducing the end-to-end
delay of a data burst, by sending the Burst Header Packet (BHP)
in the core network before the burst assembly is completed at the
ingress node. We use linear prediction filters to estimate the
expected length of the burst and the time needed for the
burstification process to complete. A BHP packet carrying these
estimates is sent before burst completion, in order to reserve
bandwidth at each intermediate node for the time interval the
burst is expected to pass from that node. Reducing the total time
needed for a packet to be transported over an OBS network is
important, especially for real-time applications. Reserving
bandwidth only for the time interval it is actually going to be used
by a burst is important for network utilization efficiency. In the
simulations conducted we evaluate the proposed extensions and
prove their usefulness.
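A linear prediction filter of the kind mentioned above can be sketched, for illustration, as a normalized LMS predictor that estimates the next sample of a traffic series from its recent past (a minimal sketch under our own parameter choices; the actual filter design in the paper may differ):

```python
def nlms_predictor(series, order=3, mu=0.5, eps=1e-6):
    """Normalized LMS linear prediction: at each step, predict the next
    sample as a weighted sum of the last `order` samples, then adapt the
    weights in proportion to the (normalized) prediction error."""
    w = [0.0] * order
    preds = []
    for t in range(order, len(series)):
        x = series[t - order:t]                     # most recent inputs
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        e = series[t] - y_hat                       # prediction error
        norm = sum(xi * xi for xi in x) + eps       # input energy
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
    return preds, w
```

On a stationary arrival process the weights settle so that the one-step-ahead estimate tracks the expected burst length or assembly time, which is the quantity the BHP would carry ahead of burst completion.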
Abstract: We demonstrate an all-optical clock and data recovery
circuit for short asynchronous data packets at 10-Gb/s line
rate. The technique employs a Fabry–P{\'e}rot filter and an ultrafast
nonlinear interferometer (UNI) to generate the local packet
clock, followed by a second UNI gate to act as decision element,
performing a logical AND operation between the extracted clocks
and the incoming data packets. The circuit can handle short
packets arriving at time intervals as short as 1.5 ns and arbitrary
phase alignment.
Abstract: In recent years, the evolution of urban environments, jointly with the progress of the Information and Communication sector, has enabled the rapid adoption of new solutions that contribute to the growth in popularity of Smart Cities. Currently, the majority of the world population
lives in cities, encouraging different stakeholders within these innovative ecosystems to seek new solutions guaranteeing the sustainability and efficiency of such complex environments. In this work, we discuss how experimentation with IoT technologies and other data sources from the cities can be utilized to co-create in the OrganiCity project, where key actors like citizens, researchers and other stakeholders shape smart city services and applications in a collaborative fashion. Furthermore,
a novel architecture is proposed that enables this organic growth of the future cities, facilitating the experimentation that tailors the adoption of new technologies and services for a better quality of life, as well as agile and dynamic mechanisms for managing cities. In this work, the different components and enablers of the OrganiCity platform are presented and discussed in detail and include, among others, a portal to manage the experiment life cycle, an Urban Data Observatory to explore data assets,
and an annotations component to indicate quality of data, with a particular focus on the city-scale opportunistic data collection service operating as an alternative to traditional communications.
Abstract: All-optical gate control signal generation is demonstrated
from flag pulses, using a Fabry–P{\'e}rot filter followed by
a semiconductor optical amplifier. Ten control pulses are generated
from a single flag pulse having less than 0.45-dB amplitude
modulation. By doubling or tripling the number of flag pulses, the
number of control pulses increases approximately by a factor of
two or three. The circuit can control the switching state of all-optical
switches, on a packet-by-packet basis, and can be used for
nontrivial network functionalities such as self-routing.
Abstract: In this paper, we present a Programmable Packet Processing Engine suitable for deep header processing in high-speed networking systems.
The engine, which has been fabricated as part of a complete network processor, consists of a typical RISC CPU, whose register
file has been modified in order to support efficient context switching, and two simple special-purpose processing units. The engine can be
used in a number of network processing units (NPUs), as an alternative to the typical design practice of employing a large number of simple
general purpose processors, or in any other embedded system designed to process mainly network protocols. To assess the performance
of the engine, we have profiled typical networking applications and carried out a series of experiments. Further, we have
compared the performance of our processing engine to that of two widely used NPUs and show that our proposed packet-processing
engine can run specific applications up to three times faster. Moreover, the engine is simpler to fabricate and less complex in terms of
hardware, while it can still be very easily programmed.
Abstract: Through recent technology advances in the field of wireless energy transmission, Wireless Rechargeable Sensor Networks
(WRSN) have emerged. In this new paradigm for
WSNs a mobile entity called Mobile Charger (MC) traverses
the network and replenishes the dissipated energy of sensors.
In this work we first provide a formal definition of the charging
dispatch decision problem and prove its computational
hardness. We then investigate how to optimize the trade-offs
of several critical aspects of the charging process such
as a) the trajectory of the charger, b) the different charging
policies and c) the impact of the ratio of the energy
the MC may deliver to the sensors over the total available
energy in the network. In the light of these optimizations,
we then study the impact of the charging process on the
network lifetime for three characteristic underlying routing
protocols: a greedy protocol, a clustering protocol and an
energy balancing protocol. Finally, we propose a Mobile
Charging Protocol that locally adapts the circular trajectory
of the MC to the energy dissipation rate of each sub-region
of the network. We compare this protocol against several
MC trajectories for all three routing families by a detailed
experimental evaluation. The derived findings demonstrate
significant performance gains, both with respect to the
no-charger case as well as the different charging alternatives; in
particular, the performance improvements include the network
lifetime, as well as connectivity, coverage and energy
balance properties.
Abstract: We study the problem of localizing and tracking multiple moving targets in wireless sensor
networks, from a network design perspective, i.e. towards estimating the least possible number
of sensors to be deployed, their positions and operation characteristics needed to perform the
tracking task. To avoid an expensive massive deployment, we try to take advantage of
possible coverage overlaps over space and time, by introducing a novel combinatorial model
that captures such overlaps.
Under this model, we abstract the tracking network design problem by a combinatorial
problem of covering a universe of elements by at least three sets (to ensure that each point in
the network area is covered at any time by at least three sensors, and thus being localized). We
then design and analyze an efficient approximate method for sensor placement and operation,
that with high probability and in polynomial expected time achieves an $O(\log n)$ approximation
ratio to the optimal solution. Our network design solution can be combined with alternative
collaborative processing methods, to suitably fit different tracking scenarios.
Abstract: We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps.
Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves an $O(\log n)$ approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios.
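The "cover every element by at least three sets" abstraction used above has a standard greedy counterpart for set multi-cover, which achieves a logarithmic approximation ratio. The sketch below is illustrative only (the papers' method is randomized and differs in detail; names are ours):

```python
def greedy_multicover(universe, sets, demand=3):
    """Greedy set multi-cover: repeatedly pick the unused set covering the
    most still-deficient elements, until every element of the universe is
    covered at least `demand` times. Each set (sensor position) is used
    at most once. The classic greedy analysis gives an O(log n) ratio."""
    need = {e: demand for e in universe}
    chosen, remaining = [], list(range(len(sets)))
    while any(need[e] > 0 for e in universe):
        best_i = max(remaining,
                     key=lambda i: sum(1 for e in sets[i] if need[e] > 0))
        gain = sum(1 for e in sets[best_i] if need[e] > 0)
        if gain == 0:
            raise ValueError("demand unsatisfiable with the given sets")
        chosen.append(best_i)
        remaining.remove(best_i)
        for e in sets[best_i]:
            if need[e] > 0:
                need[e] -= 1
    return chosen
```

Here `universe` plays the role of the discretized points of the network area and each set is the coverage region of one candidate sensor placement; demand 3 encodes the triangulation requirement for localization.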
Abstract: We describe the design and implementation of a secure and robust protocol and system for a national electronic lottery. Electronic lotteries at a national level are a viable, cost-effective alternative to mechanical ones when there is a business need to support many types of ``games of chance'' and to allow increased drawing frequency. Electronic lotteries are, in fact, an extremely high-risk financial application: if one discovers a way to predict or otherwise claim the winning numbers (even once), the result is huge financial damage. Moreover, the e-lottery process is complex, which increases the possibility of fraud or costly accidental failures. In addition, a national lottery must adhere to auditability and (regulatory) fairness requirements regarding its drawings. Our mechanism, which we believe is the first one of its kind to be described in the literature, builds upon a number of cryptographic primitives that ensure the unpredictability of the winning numbers, the prevention of their premature leakage, and the prevention of fraud. We also provide measures for auditability, fairness, and trustworthiness of the process. Besides cryptography, we incorporate security mechanisms that eliminate various risks along the entire process. Our system, which was commissioned by a national organization, was implemented in the field and has been operational and active for a while now.
Abstract: Modern Wireless Sensor Networks offer an easy, low-cost
and reliable alternative to the back-end for monitoring
and controlling large geographical areas like buildings
and industries. We present the design and implementation details of an open and efficient prototype system as a solution for a low-cost BMS that comprises heterogeneous, small-factor wireless devices. Placing that in the context of the Internet of Things, we come up with a solution that can cooperate with other systems installed on the same site to lower power consumption and costs, as well as benefit the humans that use its services in a transparent way. We evaluate and assess key aspects of the performance of our prototype. Our findings indicate specific approaches
to reduce the operation costs and allow the development of open applications.
Abstract: The present work considers the following computational problem:
Given any finite game in normal form G and the corresponding
infinitely repeated game G∞, determine in polynomial time (wrt the representation
of G) a profile of strategies for the players in G∞ that is an equilibrium
point wrt the limit-of-means payoff. The problem has been solved
for two players [10], based mainly on the implementability of the threats
for this case. Nevertheless, [4] demonstrated that the traditional notion of
threats is a computationally hard problem for games with at least 3 players
(see also [8]). Our results are the following: (i) We propose an alternative
notion of correlated threats, which is polynomial time computable
(and therefore credible). Our correlated threats are also more severe than
the traditional notion of threats, but not overwhelming for any individual
player. (ii) When for the underlying game G there is a correlated strategy
with payoff vector strictly larger than the correlated threats vector,
we efficiently compute a polynomial–size (wrt the description of G) equilibrium
point for G∞, for any constant number of players. (iii) Otherwise,
we demonstrate the construction of an equilibrium point for an arbitrary
number of players and up to 2 concurrently positive payoff coordinates in
any payoff vector of G. This completely resolves the cases of 3 players, and
provides a direction towards handling the cases of more than 3 players. It
is mentioned that our construction is not a Nash equilibrium point, because
the correlated threats we use are implemented via, not only full synchrony
(as in [10]), but also coordination of the other players' actions. But
this seems to be a fair trade-off between efficiency of the construction and
players' coordination, in particular because it only affects the punishments
(which are anticipated never to be used).
Abstract: Distributed algorithm designers often assume that system processes execute the same predefined software. Alternatively, when they do not assume that, designers turn to non-cooperative games and seek an outcome that corresponds to a rough consensus when no coordination is allowed. We argue that both assumptions are inapplicable in many real distributed systems, e.g., the Internet, and propose designing self-stabilizing and Byzantine fault-tolerant distributed game authorities. Once established, the game authority can secure the execution of any complete information game. As a result, we reduce costs that are due to the processes' freedom of choice. Namely, we reduce the price of malice.
Abstract: We present improved methods for computing a set of alternative source-to-destination routes
in road networks in the form of an alternative graph. The resulting alternative graphs are
characterized by minimum path overlap, small stretch factor, as well as low size and complexity.
Our approach improves upon a previous one by introducing a new pruning stage preceding any
other heuristic method and by introducing a new filtering and fine-tuning of two existing methods.
Our accompanying experimental study shows that the entire alternative graph can be computed
pretty fast even in continental size networks.
Abstract: This paper presents results from the IST Phosphorus project that studies and implements an optical Grid test-bed. A significant part of this project addresses scheduling and routing algorithms and dimensioning problems of optical grids. Given the high costs involved in setting up actual hardware implementations, simulations are a viable alternative. In this paper we present an initial study which proposes models that reflect real-world grid application traffic characteristics, appropriate for simulation purposes. We detail several such models and the corresponding process to extract the model parameters from real grid log traces, and verify that synthetically generated jobs provide a realistic approximation of the real-world grid job submission process.
Abstract: Evolutionary dynamics have been traditionally studied in the context of homogeneous populations, mainly described by the Moran process [15]. Recently, this approach has been generalized in [13] by arranging individuals on the nodes of a network (in general, directed). In this setting, the existence of directed arcs enables the simulation of extreme phenomena, where the fixation probability of a randomly placed mutant (i.e. the probability that the offsprings of the mutant eventually spread over the whole population) is arbitrarily small or large. On the other hand, undirected networks (i.e. undirected graphs) seem to have a smoother behavior, and thus it is more challenging to find suppressors/amplifiers of selection, that is, graphs with smaller/greater fixation probability than the complete graph (i.e. the homogeneous population). In this paper we focus on undirected graphs. We present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation
probability of general undirected graphs. As our main contribution, we introduce the natural alternative of the model proposed in [13]. In our new evolutionary model, all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new model and in the model of [13] can be interpreted as an “aggregation” vs. an “all-or-nothing” strategy, respectively. We prove that our new model of mutual influences admits a potential function, which guarantees the convergence of the system for any graph topology and any initial fitness vector of the individuals. Furthermore, we prove fast convergence to the stable state for the case of the complete graph, as well as we provide almost tight bounds on the limit fitness of the individuals. Apart from being important on its own, this new evolutionary model appears to be useful also in the abstract modeling of control mechanisms over invading populations in networks. We demonstrate this by introducing and analyzing two alternative control approaches, for which we bound the time needed to stabilize to the “healthy” state of the system.
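The fixation probability discussed above can be estimated by direct Monte Carlo simulation of the Moran birth-death process on a graph (a minimal illustrative sketch, not the analysis technique of the paper; parameter names and the update rule shown here, fitness-proportional reproduction followed by replacement of a uniformly random neighbor, follow the standard model of [13]):

```python
import random

def fixation_probability(adj, trials=3000, r=2.0, seed=1):
    """Monte Carlo estimate of the fixation probability of a single,
    randomly placed mutant with relative fitness r, under the Moran
    birth-death process on an undirected graph `adj`."""
    rng = random.Random(seed)
    nodes = list(adj)
    fixed = 0
    for _ in range(trials):
        mutants = {rng.choice(nodes)}              # one random mutant
        while 0 < len(mutants) < len(nodes):
            # reproducer chosen proportionally to fitness
            weights = [r if v in mutants else 1.0 for v in nodes]
            v = rng.choices(nodes, weights=weights)[0]
            u = rng.choice(adj[v])                 # offspring replaces a neighbor
            if v in mutants:
                mutants.add(u)
            else:
                mutants.discard(u)
        if mutants:                                # mutants took over
            fixed += 1
    return fixed / trials
```

As a sanity check, for a neutral mutant (r = 1) on the complete graph the fixation probability is exactly 1/N, which the simulation reproduces; suppressors of selection, as defined above, push the advantageous-mutant probability below that of the complete graph.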
Abstract: We propose new burst assembly schemes and fast reservation (FR) protocols for Optical Burst Switched (OBS) networks that are based on traffic prediction. The burst assembly schemes aim at minimizing (for a given burst size) the average delay of the packets incurred during the burst assembly process, while the fast reservation protocols aim at further reducing the end-to-end delay of the data bursts. The burst assembly techniques use a linear prediction filter to estimate the number of packet arrivals at the ingress node in the following interval, and launch a new burst into the network when a certain criterion, different for each proposed scheme, is met. The fast reservation protocols use prediction filters to estimate the expected length of the burst and the time needed for the burst assembly process to complete. A Burst Header Packet (BHP) packet carrying these estimates is sent before the burst is completed, in order to reserve bandwidth at intermediate nodes for the time interval the burst is expected to pass from these nodes. Reducing the packet aggregation delay and the time required to perform the reservations, reduces the total time needed for a packet to be transported over an OBS network and is especially important for real-time applications. We evaluate the performance of the proposed burst assembly schemes and show that a number of them outperform the previously proposed timer-based, length-based and average delay-based burst assembly schemes. We also look at the performance of the fast reservation (FR) protocols in terms of the probability of successfully establishing the reservations required to transport the burst.
Abstract: We propose new burst assembly techniques that aim at reducing the average delay experienced by the packets during the burstification process in optical burst switched (OBS) networks, for a given average size of the bursts produced. These techniques use a linear prediction filter to estimate the number of packet arrivals at the ingress node in the following interval, and launch a new burst into the network when a certain criterion, which is different for each proposed scheme, is met. Reducing the packet burstification delay, for a given average burst size, is essential for real-time applications; correspondingly, increasing the average burst size for a given packet burstification delay is important for reducing the number of bursts injected into the network and the associated overhead imposed on the core nodes. We evaluate the performance of the proposed schemes and show that two of them outperform the previously proposed timer-based, length-based and average delay-based burst aggregation schemes in terms of the average packet burstification delay for a given average burst size.
Abstract: We study the fundamental problem 2NASH of computing a Nash equilibrium (NE) point in bimatrix games. We start by proposing a novel characterization of the NE set, via a bijective map to the solution set of a parameterized quadratic program (NEQP), whose feasible space is the highly structured set of correlated equilibria (CE). This is, to our knowledge, the first characterization of the subset of CE points that are in “1–1” correspondence with the NE set of the game, and contributes to the quite lively discussion on the relation between the spaces of CE and NE points in a bimatrix game (e.g., [15], [26] and [33]).
We proceed with studying a property of bimatrix games, which we call mutual concavity (MC), that assures polynomial-time tractability of 2NASH, due to the convexity of a proper parameterized quadratic program (either NEQP, or a parameterized variant of the Mangasarian & Stone formulation [23]) for a particular value of the parameter. We prove various characterizations of the MC-games, which eventually lead us to the conclusion that this class is equivalent to the class of strategically zero-sum (SZS) games of Moulin & Vial [25]. This gives an alternative explanation of the polynomial-time tractability of 2NASH for these games, not depending on the solvability of zero-sum games. Moreover, the recognition of the MC-property for an arbitrary game is much faster than the recognition of the SZS-property. This, along with the comparable time-complexity of linear programs and convex quadratic programs, leads us to a much faster algorithm for 2NASH in MC-games.
We conclude our discussion with a comparison of MC-games (or, SZS-games) to k-rank games, which are known to admit an FPTAS for 2NASH when k is fixed [18], and a polynomial-time algorithm for k = 1 [2]. We finally explore some closedness properties under well-known NE set preserving transformations of bimatrix games.
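As background for the equilibrium computations discussed above, the best-response condition that defines a Nash equilibrium in a bimatrix game can be sketched as a simple verification routine. This is an illustration of the NE definition only, not the paper's NEQP characterization or MC/SZS algorithms:

```python
# Illustrative sketch: verify that a mixed-strategy pair (x, y) is a Nash
# equilibrium of a bimatrix game (A, B) via the best-response condition:
# every pure strategy played with positive probability must achieve the
# maximum expected payoff against the opponent's strategy.

def is_nash(A, B, x, y, tol=1e-9):
    m, n = len(A), len(A[0])
    # Row player's payoff for each pure row against y: (A y)_i
    row_pay = [sum(A[i][j] * y[j] for j in range(n)) for i in range(m)]
    # Column player's payoff for each pure column against x: (x^T B)_j
    col_pay = [sum(x[i] * B[i][j] for i in range(m)) for j in range(n)]
    best_row, best_col = max(row_pay), max(col_pay)
    ok_row = all(x[i] < tol or row_pay[i] > best_row - tol for i in range(m))
    ok_col = all(y[j] < tol or col_pay[j] > best_col - tol for j in range(n))
    return ok_row and ok_col

# Matching pennies: the unique NE is uniform mixing for both players.
A = [[1, -1], [-1, 1]]          # row player's payoffs
B = [[-1, 1], [1, -1]]          # column player's payoffs (zero-sum here)
uniform = [0.5, 0.5]
```

Verification is easy; the hardness discussed in the abstract lies in *finding* such a pair, which is where the quadratic-program characterization comes in.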
Abstract: A constraint network is arc consistent if any value of any of its variables is compatible with at
least one value of any other variable. The Arc Consistency Problem (ACP) consists in filtering out values of
the variables of a given network to obtain one that is arc consistent, without eliminating any solution. ACP is
known to be inherently sequential, or P-complete, so in this paper we examine some weaker versions of it and
their parallel complexity. We propose several natural approximation schemes for ACP and show that they are also
P-complete. In an attempt to overcome these negative results, we turn our attention to the problem of filtering
out values from the variables so that each value in the resulting network is compatible with at least one value of
not necessarily all, but a constant fraction of the other variables. We call such a network partially arc consistent.
We give a parallel algorithm that, for any constraint network, outputs a partially arc consistent subnetwork of it in
sublinear (O(√n log n)) parallel time using O(n²) processors. This is the first (to our knowledge) sublinear-time
parallel algorithm with polynomially many processors that guarantees that in the resulting network every value is
compatible with at least one value in at least a constant fraction of the remaining variables. Finally, we generalize
the notion of partiality to the k-consistency problem.
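The sequential arc-consistency filtering whose parallel complexity the abstract studies can be sketched with the standard AC-3 procedure; this is the classical algorithm for illustration, not the paper's parallel or partial variants:

```python
# Illustrative sketch of arc-consistency filtering (AC-3 style): repeatedly
# delete values that have no support in a neighboring variable's domain,
# without eliminating any solution. `constraints[(xi, xj)]` is a predicate
# telling whether a pair of values is compatible.

from collections import deque

def revise(domains, constraints, xi, xj):
    """Drop values of xi with no supporting value in xj's domain."""
    rel = constraints[(xi, xj)]
    removed = False
    for a in list(domains[xi]):
        if not any(rel(a, b) for b in domains[xj]):
            domains[xi].remove(a)
            removed = True
    return removed

def ac3(domains, constraints):
    queue = deque(constraints.keys())
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False            # a domain was wiped out: no solution
            # Re-examine arcs pointing into xi.
            for (xk, xl) in constraints:
                if xl == xi and xk != xj:
                    queue.append((xk, xl))
    return True

# X < Y with domains {1,2,3}: AC filtering prunes X=3 and Y=1.
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
constraints = {("X", "Y"): lambda a, b: a < b,
               ("Y", "X"): lambda a, b: a > b}
ok = ac3(domains, constraints)
```

The inherently sequential character of this value-by-value propagation is exactly what the P-completeness results in the abstract formalize.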
Abstract: The voting rules proposed by Dodgson and Young are both
designed to find the alternative closest to being a Condorcet
winner, according to two different notions of proximity; the
score of a given alternative is known to be hard to compute
under either rule.
In this paper, we put forward two algorithms for ap-
proximating the Dodgson score: an LP-based randomized
rounding algorithm and a deterministic greedy algorithm,
both of which yield an O(log m) approximation ratio, where
m is the number of alternatives; we observe that this result
is asymptotically optimal, and further prove that our greedy
algorithm is optimal up to a factor of 2, unless problems in
NP have quasi-polynomial time algorithms. Although the
greedy algorithm is computationally superior, we argue that
the randomized rounding algorithm has an advantage from
a social choice point of view.
Further, we demonstrate that computing any reasonable
approximation of the ranking produced by Dodgson's rule
is NP-hard. This result provides a complexity-theoretic
explanation of sharp discrepancies that have been observed
in the Social Choice Theory literature when comparing
Dodgson elections with simpler voting rules.
Finally, we show that the problem of calculating the
Young score is NP-hard to approximate by any factor. This
leads to an inapproximability result for the Young ranking.
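Both rules measure how far an alternative is from the Condorcet-winner property, so the underlying pairwise-majority test is worth making concrete. The sketch below shows only that test on a toy profile; it is not the paper's approximation algorithms:

```python
# Illustrative sketch: the pairwise-majority comparisons underlying
# Dodgson's and Young's rules. An alternative is a Condorcet winner if it
# beats every other alternative in a head-to-head majority contest; both
# rules score alternatives by how far they are from this property.

def beats(profile, a, b):
    """True if a strict majority of voters rank a above b."""
    wins = sum(1 for ranking in profile if ranking.index(a) < ranking.index(b))
    return 2 * wins > len(profile)

def condorcet_winner(profile, alternatives):
    for a in alternatives:
        if all(beats(profile, a, b) for b in alternatives if b != a):
            return a
    return None                     # no Condorcet winner exists

# Three voters over alternatives a/b/c; 'a' beats both rivals head-to-head.
profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
winner = condorcet_winner(profile, ["a", "b", "c"])

# A Condorcet cycle: no alternative beats both others.
cyclic = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
no_winner = condorcet_winner(cyclic, ["a", "b", "c"])
```

It is on profiles like the cyclic one, where no Condorcet winner exists, that the Dodgson and Young scores become nontrivial to compute.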
Abstract: We consider the generation of prime-order elliptic curves (ECs) over a prime field $\mathbb{F}_{p}$ using the Complex Multiplication (CM) method. A crucial step of this method is to compute the roots of a special type of class field polynomials, the most commonly used being the Hilbert and Weber ones. These polynomials are uniquely determined by the CM discriminant D. In this paper, we consider a variant of the CM method for constructing elliptic curves of prime order using Weber polynomials. In attempting to construct prime-order ECs using Weber polynomials, two difficulties arise (in addition to the necessary transformations of the roots of such polynomials to those of their Hilbert counterparts). The first one is that the requirement of prime order necessitates that D ≡ 3 (mod 8), which gives Weber polynomials with degree three times larger than the degree of their corresponding Hilbert polynomials (a fact that could affect efficiency). The second difficulty is that these Weber polynomials do not have roots in $\mathbb{F}_{p}$ .
In this work, we show how to overcome the above difficulties and provide efficient methods for generating ECs of prime order focusing on their support by a thorough experimental study. In particular, we show that such Weber polynomials have roots in the extension field $\mathbb{F}_{p^{3}}$ and present a set of transformations for mapping roots of Weber polynomials in $\mathbb{F}_{p^{3}}$ to roots of their corresponding Hilbert polynomials in $\mathbb{F}_{p}$ . We also show how an alternative class of polynomials, with degree equal to their corresponding Hilbert counterparts (and hence having roots in $\mathbb{F}_{p}$ ), can be used in the CM method to generate prime-order ECs. We conduct an extensive experimental study comparing the efficiency of using this alternative class against the use of the aforementioned Weber polynomials. Finally, we investigate the time efficiency of the CM variant under four different implementations of a crucial step of the variant and demonstrate the superiority of two of them.
Abstract: In this work we extend the population protocol model of Angluin et al. in
order to model more powerful networks of very small, resource-limited
artefacts (agents) that may follow some unpredictable passive
movement. These agents communicate in pairs according to the commands of
an adversary scheduler. A directed (or undirected) communication graph
encodes the following information: each edge (u,v) denotes that during the
computation it is possible for an interaction between u and v to happen in
which u is the initiator and v the responder. The new characteristic of
the proposed mediated population protocol model is the existence of a
passive communication provider that we call the mediator. The mediator is a
simple database with communication capabilities. Its main purpose is to
maintain the permissible interactions in communication classes, whose
number is constant and independent of the population size. For this reason
we assume that each agent has a unique identifier, of whose existence the
agent itself is not aware and which it thus cannot store in its working
memory. When two agents are about to interact they send their ids to the
mediator. The mediator searches for that ordered pair in its database and
if it exists in some communication class it sends back to the agents the
state corresponding to that class. If this interaction is not permitted to
the agents, or, in other words, if this specific pair does not exist in
the database, the agents are informed to abort the interaction. Note that
in this manner for the first time we obtain some control on the safety of
the network and moreover the mediator provides us at any time with the
network topology. Equivalently, we can model the mediator by communication
links that are capable of keeping states from an edge state set of constant
cardinality. This alternative way of thinking of the new model has many
advantages concerning the formal modeling and the design of protocols,
since it enables us to abstract away the implementation details of the
mediator. Moreover, we further extend the new model by allowing the edges
to keep read-only costs, whose values also belong to a constant-size
set. We then allow the protocol rules for pairwise interactions to modify
the corresponding edge state by also taking into account the costs. Thus,
our protocol descriptions are still independent of the population size and
do not use agent ids, i.e. they preserve scalability, uniformity and
anonymity. The proposed Mediated Population Protocols (MPP) can stably
compute graph properties of the communication graph. We show this for
maximal matchings (in undirected communication graphs), for
finding the transitive closure of directed graphs, and for finding all
edges of small cost. We demonstrate that our mediated protocols are
stronger than classical population protocols. First, we note the
obvious fact that the classical model is a special case of the new model;
that is, the new model can compute at least whatever the
classical one can. We then present a mediated protocol that stably computes
the product of two nonnegative integers in the case where G is complete
directed and connected. Such predicates are not semilinear, and it
has been proven that classical population protocols on complete graphs
compute precisely the semilinear predicates; thus we show
that there is at least one predicate that our model computes and the
classical model cannot. To establish this fact, we state and prove a
general Theorem about the composition of two mediated population
protocols, where the first one has stabilizing inputs. We also show that
all predicates stably computable in our model are (non-uniformly) in the
class NSPACE(m), where m is the number of edges of the communication
graph. Finally, we define Randomized MPP and show that any Peano
predicate accepted by a Randomized MPP can be verified in deterministic
polynomial time.
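The maximal-matching computation mentioned above can be illustrated with a toy simulation in which each edge keeps a state at the mediator. The interaction rule below is an assumption chosen for illustration, not the paper's exact protocol:

```python
# Illustrative toy simulation (an assumed rule set, not the paper's exact
# protocol): a mediated population protocol for maximal matching. Agents
# are F(ree) or M(atched); each edge keeps a state held by the mediator.
# When the scheduler picks a Free-Free pair over an unmatched edge, both
# agents become Matched and the edge is marked matched. At stabilization
# the matched edges form a maximal matching of the communication graph.

import random

def mpp_maximal_matching(nodes, edges, steps=10_000, seed=0):
    rng = random.Random(seed)
    agent = {u: "F" for u in nodes}          # agent states
    edge = {e: "unmatched" for e in edges}   # edge states at the mediator
    for _ in range(steps):
        u, v = rng.choice(edges)             # random (fair) scheduler
        if agent[u] == agent[v] == "F" and edge[(u, v)] == "unmatched":
            agent[u] = agent[v] = "M"
            edge[(u, v)] = "matched"
    return [e for e in edges if edge[e] == "matched"]

nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]   # a 5-cycle
matching = mpp_maximal_matching(nodes, edges)
```

Once no rule can fire, every edge has at least one matched endpoint and no two matched edges share an endpoint, which is exactly the maximal-matching property the protocol stabilizes to.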
Abstract: We present methods for obtaining high-repetition-
rate full duty-cycle RZ optical pulse trains from lower rate
laser sources. These methods exploit the memory properties of the
Fabry–Perot filter for rate multiplication, while amplitude equalization
in the output pulse train is achieved with a semiconductor
optical amplifier or with a second transit through the Fabry–Perot
filter. We apply these concepts to experimentally demonstrate rate
quadruplication from 10 to 40 GHz and discuss the possibility of
taking advantage of the proposed methods to achieve repetition
rates up to 160 GHz.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → IN such that |Φ(u) − Φ(v)| ≥ 2 when u,v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u,v in G is two. The number of distinct frequencies used is called the order and the range of frequencies used, the span. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span (min span RCP) or the order (min order RCP).
In this paper, we deal with an interesting, yet not examined until now, variation of the radiocoloring problem: that of satisfying frequency assignment requests which exhibit some periodic behavior. In this case, the interference graph (modelling interference between transmitters) is some (infinite) periodic graph. Infinite periodic graphs usually model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they can model very large networks produced by the repetition of a small graph.
A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph Gi(Vi,Ei). The edge set of G is derived by connecting the vertices of each iteration Gi to some of the vertices of the next iteration Gi+1, the same for all Gi. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest.
We give two basic results:
• We prove that the min span RCP is PSPACE-complete for periodic planar graphs.
• We provide an O(n(Δ(Gi)+σ)) time algorithm (where |Vi|=n, Δ(Gi) is the maximum degree of the graph Gi and σ is the number of edges connecting each Gi to Gi+1), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(Gi)+σ tends to infinity.
We remark that any approximation algorithm for the min span RCP of a finite planar graph G that achieves a span of at most αΔ(G)+constant, for any α and where Δ(G) is the maximum degree of G, can be used as a subroutine in our algorithm to produce an approximation for min span RCP of asymptotic ratio α for periodic planar graphs.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. The Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → IN such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u, v in G is two. The range of frequencies used is called span. Here, we consider the optimization version of the Radiocoloring Problem (RCP) of finding a radiocoloring assignment of minimum span, called min span RCP. In this paper, we deal with a variation of RCP: that of satisfying frequency assignment requests with some periodic behavior. In this case, the interference graph is an (infinite) periodic graph. Infinite periodic graphs model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they may model very large networks produced by the repetition of a small graph. A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph Gi(Vi,Ei). The edge set of G is derived by connecting the vertices of each iteration Gi to some of the vertices of the next iteration Gi+1, the same for all Gi. The model of periodic graphs considered here is similar to that of periodic graphs in Orlin [13] and Marathe et al. [10]. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest. We give two basic results: - We prove that the min span RCP is PSPACE-complete for periodic planar graphs.
- We provide an O(n(Δ(Gi) + σ)) time algorithm (where |Vi| = n, Δ(Gi) is the maximum degree of the graph Gi and σ is the number of edges connecting each Gi to Gi+1), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(Gi) + σ tends to infinity.
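The radiocoloring constraints (a separation of at least 2 between neighbors and at least 1 between vertices at distance two) can be made concrete with a greedy baseline for a finite graph. This is an illustrative heuristic only, not the approximation algorithm of the abstracts, and it makes no span guarantee:

```python
# Illustrative sketch (not the papers' algorithm): a greedy radiocoloring
# of a finite graph. Each vertex v gets the smallest frequency phi[v] with
# |phi(u) - phi(v)| >= 2 for every neighbor u, and >= 1 for every vertex u
# at distance two; the span is the range of frequencies used.

def radiocolor(adj):
    phi = {}
    for v in adj:                                # fixed vertex order
        dist1 = {u for u in adj[v] if u in phi}
        dist2 = {w for u in adj[v] for w in adj[u]
                 if w in phi and w != v and w not in dist1}
        f = 0
        while any(abs(f - phi[u]) < 2 for u in dist1) or \
              any(abs(f - phi[u]) < 1 for u in dist2):
            f += 1
        phi[v] = f
    return phi

# A 4-cycle: 0-1-2-3-0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
phi = radiocolor(adj)
span = max(phi.values()) - min(phi.values())
```

Greedy choices can waste span (which is why the abstracts' degree-based approximation guarantees are nontrivial), but the output always satisfies both distance constraints.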
Abstract: In the last few years there has been a great amount of interest in Random Constraint Satisfaction
Problems, both from an experimental and a theoretical point of view. Quite intriguingly, experimental results
with various models for generating random CSP instances suggest that the probability of such problems having
a solution exhibits a "threshold-like" behavior. In this spirit, some preliminary theoretical work has been done
in analyzing these models asymptotically, i.e., as the number of variables grows. In this paper we prove that,
contrary to beliefs based on experimental evidence, the models commonly used for generating random CSP
instances do not have an asymptotic threshold. In particular, we prove that asymptotically almost all instances
they generate are overconstrained, suffering from trivial, local inconsistencies. To complement this result we
present an alternative, single-parameter model for generating random CSP instances and prove that, unlike
current models, it exhibits non-trivial asymptotic behavior. Moreover, for this new model we derive explicit
bounds for the narrow region within which the probability of having a solution changes dramatically.
Abstract: We present a new technique for extending the decay time of the impulse response function of a Fabry-Perot filter while simultaneously maintaining a large bandwidth. It involves double passing through the filter and it can be used for the easy multiplication of the repetition rate of optical sources. We apply the concept to a 10-GHz pulse train to demonstrate experimentally the rate quadruplication to 40 GHz.
Abstract: A novel method for the multiplication of the repetition
rate of full duty-cycle return-to-zero optical sources is presented.
It employs the memory property of a Fabry–Pérot filter
for the multiplication task, combined with the gain saturation of a
semiconductor optical amplifier for amplitude equalization. This
concept has been applied to quadruplicate the rate of a distributed
feedback laser source operating at 10 GHz.
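The Fabry-Perot "memory" that the rate-multiplication abstracts exploit can be illustrated with a simple discrete-time model. The recursion, delay and feedback values below are assumptions chosen for illustration, not the experimental parameters:

```python
# Idealized discrete-time sketch (an assumed model, not the experimental
# setup): the Fabry-Perot filter's transmitted signal as the recursion
#   y[n] = (1 - r) * x[n] + r * y[n - T],
# where T is the cavity round-trip time in samples and r the feedback
# coefficient. Feeding a pulse train of period 4*T produces output pulses
# every T samples: the cavity "remembers" each input pulse and re-emits
# decaying copies, multiplying the repetition rate by 4. The decaying
# amplitudes between input pulses are what the SOA-based equalization
# stage corrects.

def fabry_perot(x, T, r):
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = (1 - r) * x[n] + (r * y[n - T] if n >= T else 0.0)
    return y

T = 25                       # round-trip delay in samples (FSR = 4x input rate)
r = 0.7                      # feedback coefficient
x = [0.0] * 1000
for n in range(0, 1000, 4 * T):
    x[n] = 1.0               # input pulses at period 4*T (the 10-GHz train)
y = fabry_perot(x, T, r)
# After the initial transient, pulses appear at every multiple of T.
pulse_amps = [y[n] for n in range(500, 1000, T)]
```

In this toy model the four pulses between consecutive inputs decay geometrically by the factor r, so their power spread stays within a fixed ratio; equalizing that spread is the role of the saturated SOA (or of the second filter transit) in the abstracts.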
Abstract: Distributed algorithm designers often assume that system processes execute the same predefined software. Alternatively, when they do not assume that, designers turn to non-cooperative games and seek an outcome that corresponds to a rough consensus when no coordination is allowed. We argue that both assumptions are inapplicable in many real distributed systems, e.g., the Internet, and propose designing self-stabilizing and Byzantine fault-tolerant distributed game authorities. Once established, the game authority can secure the execution of any complete information game. As a result, we reduce costs that are due to the processes' freedom of choice. Namely, we reduce the price of malice.
Abstract: During the last years web search engines have moved from the simple but inefficient syntactical analysis (first generation) to the more robust and usable web graph analysis (second generation). Much of the current research is focussed on the so-called third generation search engines that, in principle, inject human characteristics on how results are obtained and presented to the end user. Approaches exploited towards this direction include (among others): an alteration of PageRank [1] that takes into account user specific characteristics and bias the page ordering using the user preferences (an approach, though, that does not scale well with the number of users). The approach is further exploited in [3], where several PageRanks are computed for a given number of distinct search topics. A similar idea is used in [6], where the PageRank computation takes into account the content of the pages and the query terms the surfer is looking for. In [4], a decomposition of PageRank to basic components is suggested that may be able to scale the different PageRank computations to a bigger number of topics or even distinct users. Another approach to web search is presented in [2], where a rich extension of the web, called semantic web, and the application of searching over this new setting is described.
Abstract: Elliptic Curve Cryptography (ECC) is one of the
most promising alternatives to conventional public
key cryptography, such as RSA and ElGamal, since
it employs keys of smaller sizes for the same level
of cryptographic strength. Smaller key sizes imply
smaller hardware units for performing the arithmetic
operations required by cryptographic protocols and,
thus, ECC is an ideal candidate for implementation
in embedded systems where the major computational
resources (speed and storage) are limited.
In this paper we present a port, written in ANSI C
for maximum portability, of an open source ECC-based
cryptographic library (ECC-LIB) to ATMEL's
AT76C520 802.11 WLAN Access Point. One of the
major features of this port, not found in similar ports,
is that it supports Complex Multiplication (CM) for
the construction of Elliptic Curves with good security
properties. We present some experimental results that
demonstrate that the port is efficient and can lead to generic embedded systems with robust ECC-based
cryptographic protocols using cryptographically strong
ECCs generated with CM. As an application of the
ported library, an EC Diffie-Hellman key exchange
protocol is developed as an alternative to the 4-way
key handshake protocol of 802.11.
Abstract: Embedded computing devices dominate our everyday activities, from cell phones to wireless sensors that collect and process data for various applications. Although desktop and high-end server security seems to be under control by the use of current security technology, securing the low-end embedded computing systems is a difficult long-term problem. This is mainly due to the fact that the embedded systems are constrained by their operational environment and the limited resources they are equipped with. Recent research activities focus on the deployment of lightweight cryptographic algorithms and security protocols that are well suited to the limited resources of low-end embedded systems. Elliptic Curve Cryptography (ECC) offers an interesting alternative to the classical public key cryptography for embedded systems (e.g., RSA and ElGamal), since it uses smaller key sizes for achieving the same security level, thus making ECC an attractive and efficient alternative for deployment in embedded systems. In this chapter, the processing requirements and architectures for secure network access, communication functions, storage, and high availability of embedded devices are discussed. In addition, ECC-based state-of-the-art lightweight cryptographic primitives for the deployment of security protocols in embedded systems that fulfill the requirements are presented.
Abstract: We present recent advances in multi-wavelength, power-equalized laser sources that incorporate a semiconductor optical amplifier (SOA) and simple optical filters, such as Lyot-type and Fabry-Perot, for comb generation. Both linear and ring-cavity configurations are presented, and single-pass optical feedback technique is proposed to improve the performance in terms of the number of simultaneously oscillating lines and output channel power equalization. This technique resulted in a broadened oscillating spectrum of 52 lines spaced at 50 GHz, power-equalized within 0.3 dB. Finally, a simplified version that uses only an uncoated SOA for both gain and comb generation is demonstrated.
Abstract: In 1876 Charles Lutwidge Dodgson suggested the intriguing voting rule that today bears his name. Although Dodgson's rule is one of the most well-studied voting rules, it suffers from serious deficiencies, both from the computational point of view (it is NP-hard even to approximate the Dodgson score within sublogarithmic factors) and from the social choice point of view (it fails basic social choice desiderata such as monotonicity and homogeneity).
In a previous paper [Caragiannis et al., SODA 2009] we asked whether there are approximation algorithms for Dodgson's rule that are monotonic or homogeneous. In this paper we give definitive answers to these questions. We design a monotonic exponential-time algorithm that yields a 2-approximation to the Dodgson score, while matching this result with a tight lower bound. We also present a monotonic polynomial-time O(log m)-approximation algorithm (where m is the number of alternatives); this result is tight as well due to a complexity-theoretic lower bound. Furthermore, we show that a slight variation of a known voting rule yields a monotonic, homogeneous, polynomial-time O(m log m)-approximation algorithm, and establish that it is impossible to achieve a better approximation ratio even if one just asks for homogeneity. We complete the picture by studying several additional social choice properties; for these properties, we prove that algorithms with an approximation ratio that depends only on m do not exist.
Abstract: A key issue when designing and implementing large-scale publish/subscribe systems is how to efficiently propagate subscriptions among the brokers of the system. Brokers require this information in order to forward incoming events only to interested users, filtering out unrelated events, which can save significant overheads (particularly network bandwidth and processing time at the brokers). In this paper we contribute the notion of subscription summaries, a mechanism appropriately compacting subscription information. We develop the associated data structures and matching algorithms. The proposed mechanism can handle event/subscription schemata that are rich in terms of their attribute types and powerful in terms of the allowed operations on them. Our major results are that the proposed mechanism (i) is scalable, with the bandwidth required to propagate subscriptions increasing only slightly, even at huge-scales, and (ii) is significantly more efficient, up to orders of magnitude, depending on the scale, with respect to the bandwidth requirements for propagating subscriptions.
Abstract: The peer-to-peer computing paradigm is an intriguing alternative to Google-style search
engines for querying and ranking Web content. In a network with many thousands or
millions of peers the storage and access load requirements per peer are much lighter
than for a centralized Google-like server farm; thus more powerful techniques from information
retrieval, statistical learning, computational linguistics, and ontological reasoning
can be employed on each peer's local search engine for boosting the quality
of search results. In addition, peers can dynamically collaborate on advanced and particularly
difficult queries. Moreover, a peer-to-peer setting is ideally suited to capture
local user behavior, like query logs and click streams, and disseminate and aggregate
this information in the network, at the discretion of the corresponding user, in order to
incorporate richer cognitive models.
This paper gives an overview of ongoing work in the EU Integrated Project DELIS
that aims to develop foundations for a peer-to-peer search engine with Google-or-better
scale, functionality, and quality, which will operate in a completely decentralized and
self-organizing manner. The paper presents the architecture of such a system and the
Minerva prototype testbed, and it discusses various core pieces of the approach: efficient
execution of top-k ranking queries, strategies for query routing when a search request
needs to be forwarded to other peers, maintaining a self-organizing semantic overlay
network, and exploiting and coping with user and community behavior.
Abstract: In cooperative multiagent systems an alternative that maximizes the social welfare—the sum of utilities—can only be selected if each agent reports its full utility function. This may be infeasible in environments where communication is restricted. Employing a voting rule to choose an alternative greatly reduces the communication burden, but leads to a possible gap between the social welfare of the optimal alternative and the social welfare of the one that is ultimately elected. Procaccia and Rosenschein (2006) have introduced the concept of distortion to quantify this gap. In this paper, we present the notion of embeddings into voting rules: functions that receive an agent's utility function and return the agent's vote. We establish that very low distortion can be obtained using randomized embeddings, especially when the number of agents is large compared to the number of alternatives. We investigate our ideas in the context of three prominent voting rules with low communication costs: Plurality, Approval, and Veto. Our results arguably provide a compelling reason for employing voting in cooperative multiagent systems.
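The distortion concept from the abstract can be made concrete on a tiny example: the deterministic plurality embedding votes for each agent's top alternative, and distortion is the ratio between the optimal social welfare and the welfare of the elected alternative. The utility profile below is a made-up illustration:

```python
# Minimal sketch of distortion (Procaccia and Rosenschein, 2006): the
# ratio between the optimal social welfare and the social welfare of the
# alternative elected by a voting rule. Here each agent's plurality
# "embedding" simply votes for its top alternative.

def distortion_of_plurality(utilities):
    n_alts = len(utilities[0])
    # Social welfare of each alternative.
    welfare = [sum(u[a] for u in utilities) for a in range(n_alts)]
    # Plurality: each agent votes for its favorite; ties broken by index.
    votes = [0] * n_alts
    for u in utilities:
        votes[max(range(n_alts), key=lambda a: u[a])] += 1
    elected = max(range(n_alts), key=lambda a: votes[a])
    return max(welfare) / welfare[elected]

# Two alternatives: a slim majority barely prefers alternative 0, while
# the minority strongly prefers alternative 1, so plurality loses welfare.
utilities = [[0.6, 0.4], [0.6, 0.4], [0.0, 1.0]]
d = distortion_of_plurality(utilities)
```

Here the welfare-optimal alternative is 1 (welfare 1.8) but plurality elects 0 (welfare 1.2), giving distortion 1.5; the abstract's randomized embeddings are designed to shrink exactly this kind of gap.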
Abstract: In this paper we describe a new simulation platform for complex wireless sensor networks that run a collection of distributed algorithms and network protocols. Simulating such systems is complicated because of the need to coordinate different network layers and debug protocol stacks, often with very different interfaces, options, and fidelities. Our platform (which we call WSNGE) is a flexible and extensible environment that provides a highly scalable simulator with unique characteristics. It focuses on user friendliness, providing every function in both a scriptable and a visual way, allowing the researcher to define simulations and view results in an easy-to-use graphical environment. Unlike other solutions, WSNGE does not distinguish between different scenario types, allowing multiple different protocols to run at the same time. It enables rich online interaction with running simulations, allowing parameters, topologies or the whole scenario to be altered at any point in time.