Abstract: We demonstrate a 40 Gb/s self-synchronizing, all-optical packet
clock recovery circuit designed for efficient packet-mode traffic. The circuit
locks instantaneously and enables sub-nanosecond packet spacing due to the
low clock persistence time. A low-Q Fabry-Perot filter is used as a passive
resonator tuned to the line rate, which generates a retimed, clock-resembling
signal. As a reshaping element, an optical power-limiting gate is
incorporated to perform bitwise pulse equalization. Using two preamble
bits, the clock is captured instantly and persists for the duration of the data
packet plus 16 bits. The performance of the circuit suggests its
suitability for future all-optical packet-switched networks with reduced
transmission overhead and fine network granularity.
Abstract: In this paper, we survey the current state-of-the-art in middleware and systems for Wireless Sensor Networks (WSN). We provide a discussion on the definition of WSN middleware, design issues associated with it, and the taxonomies commonly used to categorize it. We also present a categorization of a number of such middleware platforms, using middleware functionalities and challenges which we think will play a crucial role in developing software for WSN in the near future. Finally, we provide a short discussion on WSN middleware trends.
Abstract: Based on our experience in designing, building and maintaining an information system for supporting a large scale electronic lottery, we present in this paper a unified approach to the design and implementation of electronic lotteries with the focus on pragmatic trust establishment. This approach follows closely the methodologies commonly employed in the development of general information systems. However, central to the proposed approach is the decomposition of a security critical system into layers containing basic trust components so as to facilitate the management of trust, first along the layers, and then as we move from layer to layer. We believe that such a structured approach, based on layers and trust components, can help designers of security critical applications produce demonstrably robust and verifiable systems that people will not hesitate to use.
Abstract: Implementing a commercial application on a
grid infrastructure introduces new challenges in managing the
quality-of-service (QoS) requirements, most of which stem from the fact
that the QoS terms negotiated between the user and the service provider
must be strictly satisfied. An interesting commercial application
with a wide impact on a variety of fields, which can benefit from
the computational grid technologies, is three–dimensional (3-D)
rendering. However, in order to implement 3-D rendering on a
grid infrastructure, we should develop appropriate scheduling
and resource allocation mechanisms so that the negotiated QoS
requirements are met. Efficient scheduling schemes require
modeling and prediction of the rendering workload. In this paper,
workload prediction is addressed based on a combined fuzzy
classification and neural network model. Initially, appropriate
descriptors are extracted to represent the synthetic world. The
descriptors are obtained by parsing RIB-formatted files, a format that
provides a general structure for describing computer-generated
images. Fuzzy classification is used for organizing rendering
descriptor so that a reliable representation is accomplished which
increases the prediction accuracy. Neural network performs
workload prediction by modeling the nonlinear input-output
relationship between rendering descriptors and the respective
computational complexity. To increase prediction accuracy, a
constructive algorithm is adopted in this paper to train the neural
network so that network weights and size are simultaneously
estimated. Then, a grid scheduling scheme is proposed to estimate
the queuing order in which the tasks should be executed and the
most appropriate processor assignment so that the demanded
QoS requirements are satisfied as far as possible. A fair scheduling policy is
considered the most appropriate. Experimental results on a real
grid infrastructure are presented to illustrate the efficiency of the
proposed workload prediction and scheduling algorithm compared
to other approaches presented in the literature.
Abstract: We propose a simple and intuitive cost mechanism which assigns
costs for the competitive usage of m resources by n selfish agents.
Each agent has an individual demand; demands are drawn according to
some probability distribution. The cost paid by an agent for a resource
she chooses is the total demand put on the resource divided by the number
of agents who chose that same resource. So, resources charge costs
in an equitable, fair way, while each resource makes no profit out of the
agents. We call our model the Fair Pricing model. Its fair cost mechanism
induces a non-cooperative game among the agents. To evaluate the Nash
equilibria of this game, we introduce the Diffuse Price of Anarchy, as an
extension of the Price of Anarchy that takes into account the probability
distribution on the demands. We prove:
– Pure Nash equilibria may not exist, unless all chosen demands are
identical; in contrast, a fully mixed Nash equilibrium exists for all
possible choices of the demands. Further on, the fully mixed Nash
equilibrium is the unique Nash equilibrium in case there are only two
agents.
– In the worst-case choice of demands, the Price of Anarchy is Θ(n);
for the special case of two agents, the Price of Anarchy is less than
2 − 1/m.
– Assume now that demands are drawn from a bounded, independent
probability distribution, where all demands are identically distributed
and each is at most a (universal for the class) constant times its expectation.
Then, the Diffuse Price of Anarchy is at most that same
constant, which is just 2 when each demand is distributed symmetrically
around its expectation.
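The Fair Pricing cost rule itself is simple enough to state in a few lines. Below is a minimal sketch in Python, assuming agents have already been assigned to resources and each agent's demand is known; all names are illustrative.

```python
from collections import defaultdict

def fair_costs(choices, demands):
    """Fair Pricing model: the cost an agent pays for the resource she
    chooses is the total demand put on that resource divided by the
    number of agents who chose the same resource."""
    users = defaultdict(list)           # resource -> agents that chose it
    for agent, resource in choices.items():
        users[resource].append(agent)
    costs = {}
    for resource, agents in users.items():
        share = sum(demands[a] for a in agents) / len(agents)
        for a in agents:
            costs[a] = share
    return costs

# Illustrative instance: three agents, two resources.
demands = {"a1": 4.0, "a2": 2.0, "a3": 1.0}
choices = {"a1": 0, "a2": 0, "a3": 1}
print(fair_costs(choices, demands))     # a1 and a2 pay 3.0 each, a3 pays 1.0
```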
Abstract: eVoting is considered to be one of the most challenging domains of modern eGovernment and one of the main vehicles for increasing eParticipation among citizens. One of the main obstacles to its wide adoption is the reluctance of citizens to participate in electronic voting procedures. This reluctance can be partially attributed to the low penetration of technology among citizens. However, the main reason behind this reluctance is the lack of trust which stems from the belief of citizens that systems implementing an eVoting process will violate their privacy. The departure point of this approach is that the emergence of trust can be considerably facilitated by designing and building systems in a way that evidence about the system’s properties is produced during the design process. In this way, the designers can demonstrate the respect of privacy using this evidence, which can be understood and checked by the specialist and the informed layman. These tools and models should provide sufficient evidence that the target system handles privacy concerns and requirements, so as to remove enough of the fears towards eVoting. This paper presents the efforts of the authors' organization, the Computer Technology Institute and Press “Diophantus” (CTI), towards the design and implementation of an eVoting system, called PNYKA, with demonstrable security properties. This system was based on a trust-centered engineering approach for building general security critical systems. The authors' approach is pragmatic rather than theoretical in that it sidesteps the controversy that besets the nature of trust in information systems and starts with a working definition of trust as people’s positive attitude towards a system that transparently and demonstrably performs its operations, respecting their privacy. The authors also discuss the social side of eVoting, i.e. how one can help boost its acceptance by large social groups, targeting the whole population of the country. The authors view eVoting as an innovation that must be diffused to a population and then employ a theoretical model that studies diffusion of innovation in social networks, delineating structural properties of the network that help diffuse the innovation fast. Furthermore, the authors explain how CTI’s current situation empowers CTI to realize its vision to implement a privacy-preserving discussion and public consultation forum in Greece. This forum will link together all Greek educational institutes in order to provide a privacy-preserving discussion and opinion gathering tool useful in decision making within the Greek educational system.
Abstract: Paul Spirakis is an eminent, talented, and influential researcher that contributed significantly to computer science. This article is a modest attempt of a biographical sketch of Paul, which we drafted with extreme love and honor.
Abstract: This work addresses networked embedded systems enabling the seam-
less interconnection of smart building automations to the Internet and
their abstractions as web services. In our approach, such abstractions are
used to primarily create a flexible, holistic and scalable system and allow
external end-users to compose and run their own smart/green building
automation application services on top of this system.
Towards this direction, in this paper we present a smart building test-
bed consisting of several sensor motes and spanning across seven rooms.
Our test-bed's design and implementation simultaneously address sev-
eral corresponding system layers; from hardware interfaces, embedded
IPv6 networking and energy balancing routing algorithms to a RESTful
architecture and over-the-Web development of sophisticated, smart, green
scenarios. In fact, we showcase how IPv6 embedded networking combined
with RESTful architectures make the creation of building automation ap-
plications as easy as creating any other Internet Web Service.
Abstract: In mobile ad-hoc networks (MANETs), the mobility of the nodes is a complicating factor that significantly affects the effectiveness and performance of the routing protocols. Our work builds upon recent results on the effect of node mobility on the performance of available routing strategies (i.e.~path based, using support) and proposes a protocol framework that exploits the usually different mobility rates of the nodes by adapting the routing strategy during execution. We introduce a metric for the relative mobility of the nodes, according to which the nodes are classified into mobility classes. These mobility classes determine, for any pair of an origin and destination, the routing technique that best corresponds to their mobility properties. Moreover, special care is taken for nodes remaining almost stationary or moving with high (relative) speeds. Our key design goal is to limit the necessary implementation changes required to incorporate existing routing protocols into our framework. We provide extensive evaluation of the proposed framework, using a well-known simulator (NS2). Our first findings demonstrate that the proposed framework improves, in certain cases, the performance of the existing routing protocols.
Abstract: In ad-hoc mobile networks (MANET), the mobility of the nodes is a complicating factor that significantly affects the effectiveness and performance of the routing protocols. Our work builds upon the recent results on the effect of node mobility on the performance of available routing strategies (i.e.~path based, using support) and proposes a protocol framework that exploits the usually different mobility rates of the nodes by adapting the routing strategy during execution. We introduce a metric for the relative mobility of the nodes, according to which the nodes are classified into mobility classes. These mobility classes determine, for any pair of origin and destination, the routing technique that best corresponds to their mobility properties. Moreover, special care is taken for nodes remaining almost stationary or moving with high (relative) speeds. Our key design goal is to limit the necessary implementation changes required to incorporate existing routing protocols into our framework. We provide extensive evaluation of the proposed framework, using a well-known simulator (NS2). Our first findings demonstrate that the proposed framework improves, in certain cases, the performance of the existing routing protocols.
Abstract: We propose a simple obstacle model to be used while simulating wireless sensor networks. To the best of our knowledge, this is the first time such an integrated and systematic obstacle model for these networks has been proposed. We define several types of obstacles that can be found inside the deployment area of a wireless sensor network and provide a categorization of these obstacles based on their nature (physical and communication obstacles, i.e. obstacles that are formed out of node distribution patterns or have physical presence, respectively), their shape and their change of nature over time. We extend a custom-made sensor network simulator (simDust) and conduct a number of simulations in order to study the effect of obstacles on the performance of some representative (in terms of their logic) data propagation protocols for wireless sensor networks. Our findings confirm that obstacle presence has a significant impact on protocol performance, and also that different obstacle shapes and sizes may affect each protocol in different ways. This provides an insight into how a routing protocol will perform in the presence of obstacles and highlights possible protocol shortcomings. Moreover, our results show that the effect of obstacles is not directly related to the density of a sensor network, and cannot be emulated only by changing the network density.
Abstract: We design and implement a multicost impairment-aware routing and wavelength assignment algorithm for online traffic. In transparent optical networks the quality of a transmission degrades due to physical layer impairments. To serve a connection, the proposed algorithm finds a path and a free wavelength (a lightpath) that has acceptable signal quality performance by estimating a quality of transmission measure, called the Q factor. We take into account channel utilization in the network, which changes as new connections are established or released, in order to calculate the noise variances that correspond to physical impairments on the links. These, along with the time invariant eye impairment penalties of all candidate network paths, form the inputs to the algorithm. The multicost algorithm finds a set of so-called non-dominated Q paths from the given source to the given destination. Various objective functions are then evaluated in order to choose the optimal lightpath to serve the connection. The proposed algorithm combines the strength of multicost optimization with low execution time, making it appropriate for serving online connections.
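The core of the multicost step is keeping only the non-dominated cost vectors among the candidate lightpaths. Below is a minimal Pareto filter in Python, offered as a sketch; the example cost components, and the assumption that all components are to be minimized, are illustrative rather than the paper's exact metrics.

```python
def non_dominated(paths):
    """Keep the Pareto-optimal paths: a path is dropped if some other
    path is no worse in every cost component and strictly better in
    at least one (all components are minimized here)."""
    def dominates(u, v):
        return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))
    return {pid: cost for pid, cost in paths.items()
            if not any(dominates(other, cost) for other in paths.values())}

# Illustrative cost vectors: (delay, noise variance, hop count).
candidates = {"p1": (3.0, 0.2, 4), "p2": (5.0, 0.1, 3), "p3": (5.0, 0.3, 5)}
print(non_dominated(candidates))  # p3 is dominated by p2 and removed
```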
Abstract: We propose and evaluate a new burst assembly algorithm based on the average delay of the packets comprising a burst. This
method fixes the average delay of the packets belonging to an assembled burst to a desired value TAVE that may be different for
each forwarding equivalence class (FEC). We show that the proposed method significantly improves the delay jitter experienced
by the packets during the burst assembly process, when compared to that of timer-based and burst length-based assembly policies.
Minimizing packet delay jitter is important in a number of applications, such as real-audio and streaming-video applications. We
also find that the improvement in the packet delay jitter yields a corresponding significant improvement in the performance of TCP,
whose operation depends critically on the ability to obtain accurate estimates of the round-trip times (RTT).
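A minimal sketch of one plausible reading of the average-delay rule, in Python: a burst is released at the earliest instant at which the mean waiting time of its queued packets equals TAVE. Sorted arrival times and a single FEC are assumed; the actual algorithm's details may differ.

```python
def assemble_bursts(arrivals, t_ave):
    """Release each burst at the time t where the mean delay of its
    packets, t - mean(arrival times), equals t_ave. `arrivals` is a
    sorted list of packet arrival times for one FEC."""
    bursts, queue, i = [], [], 0
    while i < len(arrivals):
        queue.append(arrivals[i]); i += 1
        t_release = sum(queue) / len(queue) + t_ave
        # Every packet arriving before the release instant joins the
        # burst and pushes the release instant later.
        while i < len(arrivals) and arrivals[i] <= t_release:
            queue.append(arrivals[i]); i += 1
            t_release = sum(queue) / len(queue) + t_ave
        bursts.append((t_release, list(queue)))
        queue.clear()
    return bursts

# At t_ave = 1.0 the first burst leaves at t = 1.2 with mean delay 1.0.
print(assemble_bursts([0.0, 0.1, 0.5, 3.0], t_ave=1.0))
```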
Abstract: Smart Dust is a special case of wireless sensor networks, comprised of a vast number of ultra-small fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, that co-operate to accomplish a large sensing task. Smart Dust can be very useful in practice, e.g. in the local detection of remote crucial events and the propagation of data reporting their realization to a control center.
In this paper, we propose a new energy efficient and fault tolerant protocol for data propagation in smart dust networks, the Variable Transmission Range Protocol (VTRP). The basic idea of data propagation in VTRP is the varying range of data transmissions, i.e. we allow the transmission range to increase in various ways. Thus, data propagation in our protocol exhibits high fault-tolerance (by bypassing obstacles or faulty sensors) and increases network lifetime (since critical sensors, i.e. those close to the control center, are not overused). As far as we know, it is the first time a varying transmission range is used.
We implement the protocol and perform an extensive experimental evaluation and comparison to a representative protocol (LTP) of several important performance measures with a focus on energy consumption. Our findings indeed demonstrate that our protocol achieves significant improvements in energy efficiency and network lifetime.
Abstract: In this work we propose a new energy efficient and fault tolerant protocol for data propagation in wireless sensor networks, the Variable Transmission Range Protocol (VTRP). The basic idea of data propagation in VTRP is the varying range of data transmissions, i.e. we allow the transmission range to increase in various ways. Thus, data propagation in our protocol exhibits high fault-tolerance (by bypassing obstacles or faulty sensors) and increases network lifetime (since critical sensors, i.e. those close to the control center, are not overused). As far as we know, it is the first time a varying transmission range is used.
We implement the protocol and perform an extensive experimental evaluation and comparison to a representative protocol (LTP) of several important performance measures with a focus on energy consumption. Our findings indeed demonstrate that our protocol achieves significant improvements in energy efficiency and network lifetime.
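A sketch of the core VTRP idea in Python: when a transmission at the current range fails (an obstacle, a faulty sensor, an empty neighbourhood), the node retries at a larger range, capped at the radio maximum. The abstract mentions several ways of increasing the range; the two growth modes and the step size below are illustrative assumptions.

```python
def vtrp_next_range(r, r_max, mode="linear", step=1.0):
    """Return the transmission range to try after a failure at range r.
    Growing the range lets data bypass obstacles or dead sensors at the
    price of higher transmission energy."""
    if mode == "linear":
        r += step            # gentle growth, cheaper retries
    elif mode == "multiplicative":
        r *= 2               # aggressive growth, fewer retries
    return min(r, r_max)     # never exceed the radio's capability
```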
Abstract: In this paper, we present a new hybrid optical burst switch architecture (HOBS) that takes advantage of the pre-transmission idle
time during lightpath establishment. In dynamic circuit switching (wavelength routing) networks, capacity is immediately hard-reserved
upon the arrival of a setup message at a node, but it is used at least a round-trip time delay later. This waste of resources
is significant in optical multi-gigabit networks and can be used to transmit traffic of a lower class of service in a non-competing
way. The proposed hybrid OBS architecture takes advantage of this idle time to transmit one-way optical bursts of a lower class of
service, while high priority data explicitly requests and establishes end-to-end lightpaths. In the proposed scheme, the two control
planes (two-way and one-way OBS reservation) are merged, in the sense that each SETUP message, used for the two-way lightpath
establishment, is associated with one-way burst transmission and therefore it is modified to carry routing and overhead information
for the one-way traffic as well. In this paper, we present the main architectural features of the proposed hybrid scheme and further
we assess its performance by conducting simulation experiments on the NSFNET backbone topology. The extensive network study
revealed that the proposed hybrid architecture can achieve and sustain an adequate burst transmission rate with a finite worst case
delay.
Abstract: In this work we present the architecture and implementation of WebDust, a software platform for managing multiple, heterogeneous (both in terms of software and hardware), geographically disparate sensor networks. We describe in detail the main concepts behind its design, and basic aspects of its implementation, including the services provided to end-users and developers. WebDust uses a peer-to-peer substrate, based on JXTA, in order to unify multiple sensor networks installed in various geographic areas. We aim at providing a software framework that will permit developers to deal with the new and critical aspects that networks of sensors and tiny devices bring into global computing, and to provide a coherent set of high level services, design rules and technical recommendations, in order to be able to develop the envisioned applications of global sensor networks. Furthermore, we give an overview of a deployed distributed testbed, consisting of a total of 56 nodes, and describe in more detail two specific testbed sites and the integration of the related software and hardware technologies used for its operation with our platform. Finally, we describe the design and implementation of an interface option provided to end-users, based on the popular Google Earth application.
Abstract: Raising awareness among young people on the
relevance of behaviour change for achieving energy savings is widely considered as a key approach towards long-term and cost-effective energy efficiency policies. The GAIA Project aims to deliver a comprehensive solution for both increasing awareness on energy efficiency and achieving energy savings in school buildings. In this framework, we present a novel rule engine that, leveraging a resource-based graph model encoding relevant application domain knowledge, accesses IoT data for producing energy savings recommendations. The engine supports configurability, extensibility and ease-of-use requirements, to be easily applied and customized to different buildings. The paper introduces the main design and implementation details and presents a set of preliminary performance results.
Abstract: We describe the design and implementation
of a secure and robust architectural data management
model suitable for cultural environments. Usage and exploitation
of the World Wide Web is a critical requirement
for a series of administrative tasks such as collecting, managing
and distributing valuable cultural and artistic information.
This requirement introduces a great number of
Internet threats for cultural organizations that may cause
huge data and/or financial losses, harm their reputation
and public acceptance, as well as people’s trust in them.
Our model addresses a list of fundamental operational
and security requirements. It utilizes a number of cryptographic
primitives and techniques that provide data safety
and secure user interaction on especially demanding online
collaboration environments. We provide a reference
implementation of our architectural model and discuss
the technical issues. It is designed as a standalone solution
but it can be flexibly adapted in broader management
infrastructures.
Abstract: eVoting is a challenging approach for increasing eParticipation. However, lack of citizens' trust seems to be a main obstacle that hinders its successful realization. In this paper we propose a trust-centered engineering approach for building eVoting systems that people can trust, based on transparent design and implementation phases. The approach is based on three components: the decomposition of eVoting systems into “layers of trust” for reducing the complexity of managing trust issues in smaller manageable layers, the application of a risk analysis methodology able to identify and document security critical aspects of the eVoting system, and a cryptographically secure eVoting protocol. Our approach is pragmatic rather than theoretical in the sense that it sidesteps the controversy that besets the nature of trust in information systems and starts with a working definition of trust as people's positive attitude towards a system that performs its operations transparently.
Abstract: Designing wireless sensor networks is inherently complex; many aspects such as energy efficiency, limited resources, decentralized collaboration and fault tolerance have to be tackled. To be effective and to produce applicable results, fundamental research has to be tested, at least as a proof-of-concept, in large scale environments, so as to assess the feasibility of the new concepts, verify their large scale effects (not only at technological level, but also as for their foreseeable implications on users, society and economy) and derive further requirements, orientations and inputs for the research. In this paper we focus on the problems of interconnecting existing testbed environments via the Internet and providing a virtual unifying laboratory that will support academia, research centers and industry in their research on networks and services. In such a facility, important issues of trust, security, confidentiality and integrity of data may arise, especially for commercial (or not) organizations. In this paper we investigate such issues and present the design of a secure and robust architectural model for interconnecting testbeds of wireless sensor networks.
Abstract: We study the problem of fast and energy-efficient data collection of sensory data using a mobile sink, in wireless sensor networks in which both the sensors and the sink move. Motivated by relevant applications, we focus on dynamic sensory mobility and heterogeneous sensor placement. Our approach basically suggests exploiting the sensor motion to adaptively propagate information based on local conditions (such as high placement concentrations), so that the sink gradually “learns” the network and accordingly optimizes its motion. Compared to relevant solutions in the state of the art (such as the blind random walk, biased walks, and even optimized deterministic sink mobility), our method significantly reduces latency (the improvement ranges from 40% for uniform placements, to 800% for heterogeneous ones), while also improving the success rate and keeping the energy dissipation at very satisfactory levels.
Abstract: We investigate the problem of efficient data collection in wireless sensor networks where both the sensors and the sink move. We especially study the important, realistic case where the spatial distribution of sensors is non-uniform and their mobility is diverse and dynamic. The basic idea of our protocol is for the sink to benefit from the local information that sensors spread in the network as they move, in order
to extract current local conditions and accordingly adjust its trajectory. Thus, sensory motion anyway present in the network serves as a low cost replacement of network information propagation. In particular, we investigate two variations of our method: a) the greedy motion of the sink towards the region of highest density each time and b) taking into account the aggregate density in wider network regions. An extensive comparative evaluation to relevant data collection methods (both randomized and optimized deterministic) demonstrates that our approach achieves significant performance gains, especially in non-uniform placements (but also in uniform ones). In fact, the greedy version of our approach is more suitable in networks where the concentration regions appear in a spatially balanced manner, while the aggregate scheme is more appropriate in networks where the concentration areas are geographically correlated.
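Variation (a) can be sketched in a few lines of Python: the sink moves one step towards the centre of the region currently reported as densest. The representation of the density reports gathered from encountered sensors is an assumption for illustration.

```python
import math

def greedy_sink_step(sink, density_reports, step_len):
    """Move the sink one step of length step_len towards the region of
    highest reported density. `density_reports` maps a region centre
    (x, y) to the density estimate learned from encountered sensors."""
    target = max(density_reports, key=density_reports.get)
    dx, dy = target[0] - sink[0], target[1] - sink[1]
    dist = math.hypot(dx, dy)
    if dist <= step_len:                  # close enough: jump to target
        return target
    return (sink[0] + step_len * dx / dist,
            sink[1] + step_len * dy / dist)
```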
Abstract: We consider the problem of planning a mixed line
rates (MLR) wavelength division multiplexing (WDM) transport
optical network. In such networks, different modulation formats
are usually employed to support the transmission at different line
rates. Previously proposed planning algorithms have used a
transmission reach limit for each modulation format/line rate,
mainly driven by single line rate systems. However, transmission
experiments in MLR networks have shown that physical layer
interference phenomena are more significant between
transmissions that utilize different modulation formats. Thus, the
transmission reach of a connection with a specific modulation
format/line rate depends also on the other connections that co-propagate
with it in the network. To plan a MLR WDM network,
we present routing and wavelength assignment (RWA)
algorithms that take into account the adaptation of the
transmission reach of each connection according to the use of the
modulation formats/line rates in the network. The proposed
algorithms are able to plan the network so as to alleviate
interference effects, enabling the establishment of connections of
acceptable quality over paths that would otherwise be prohibited.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves (not just the sinks) are mobile. Furthermore, we focus on mobility scenarios characterized by heterogeneous, highly changing mobility roles in the network. To capture these high dynamics of diverse sensory motion we propose a novel network parameter,
the mobility level, which, although simple and local, quite accurately takes into account both the spatial and speed characteristics of motion. We then propose adaptive data dissemination protocols that use the mobility level estimation to optimize performance, by basically exploiting high mobility (redundant message ferrying) as a cost-effective replacement of flooding, e.g. the sensors tend to dynamically propagate less data in the presence
of high mobility, while nodes of high mobility are favored for moving data around. These dissemination schemes are enhanced by a distance-sensitive probabilistic message flooding inhibition mechanism that further reduces communication cost, especially for fast nodes of high mobility level, and as distance to data destination decreases. Our simulation findings
demonstrate significant performance gains of our protocols compared to non-adaptive protocols, i.e. adaptation increases the success rate and reduces latency (even by 15%) while at the same time significantly reducing energy dissipation (in most cases by even 40%). Also, our adaptive schemes achieve significantly higher message delivery ratio and
satisfactory energy-latency trade-offs when compared to flooding when sensor nodes have
limited message queues.
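The distance-sensitive inhibition mechanism can be sketched as a rebroadcast probability that decays both with the local mobility level and as the remaining distance to the destination shrinks; the abstract fixes only this monotonic behaviour, so the particular weighting below is an assumption. In Python:

```python
def forward_probability(mobility_level, dist_to_sink, max_dist,
                        p_min=0.05, p_max=1.0):
    """Probability that a node rebroadcasts a message: high mobility
    (ferrying replaces flooding) and proximity to the destination both
    lower the probability; distant, slow nodes flood almost surely."""
    m = min(max(mobility_level, 0.0), 1.0)           # normalised mobility
    d = min(max(dist_to_sink / max_dist, 0.0), 1.0)  # 1 = far, 0 = at sink
    return max(p_min, p_max * (1.0 - m) * d)
```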
Abstract: We introduce a new modelling assumption for wireless sensor networks, that of node redeployment (addition of sensor devices during protocol evolution) and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks and adaptation becomes a powerful technique for protocol design. Under these modelling assumptions, we design, implement and evaluate a new power conservation scheme for efficient data propagation. Our scheme is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards improved operation choices. The scheme is simple, distributed and does not require exchange of control messages between nodes.
Implementing our protocol in software we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of the network simulator ns-2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-offs between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation protocol) and good trade-offs achieved. Furthermore, the redeployment of additional sensors during network evolution and/or the heterogeneous deployment of sensors drastically improve (when compared to ``equal total power'' simultaneous deployment of identical sensors at the start) the protocol performance (i.e. the success rate increases up to four times while reducing energy dissipation and, interestingly, keeping latency low).
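A minimal sketch of the local adaptation rule, in Python, assuming each node maps its observed neighbourhood density and residual energy to a duty cycle without exchanging control messages; the particular mapping and constants are assumptions.

```python
import random

def awake_fraction(local_density, residual_energy, d_ref, e_full):
    """Fraction of the next epoch the node stays awake: sleep more when
    the neighbourhood is dense (others can relay) or the battery is low.
    d_ref is the density at which a full duty cycle is still affordable."""
    density_factor = min(1.0, d_ref / max(local_density, 1))
    energy_factor = residual_energy / e_full
    return max(0.05, min(1.0, density_factor * (0.5 + 0.5 * energy_factor)))

def is_awake(fraction):
    """Randomised sleep-awake decision for the next scheduling epoch."""
    return random.random() < fraction
```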
Abstract: Clustering is a crucial network design approach to enable large-scale wireless sensor network (WSN) deployments. A large variety of clustering approaches has been presented focusing on different performance metrics. Such protocols usually aim at minimizing communication overhead, evenly distributing roles among the participating nodes, as well as controlling the network topology. Simulations of such protocols are performed using theoretical models that are based on unrealistic assumptions like the unit disk graph communication model, ideal wireless communication channels and perfect energy consumption estimations. With these assumptions taken for granted, theoretical models claim various performance milestones that cannot be achieved in realistic conditions. In this paper, we design a new clustering protocol that adapts to the changes in the environment and the needs and goals of the user applications. We address the issues that hinder its performance due to the real environment conditions and provide a deployable protocol. The implementation, integration and experimentation of this new protocol and its optimizations were performed using the WISEBED framework. We apply our protocol in multiple indoor wireless sensor testbeds with multiple experimental scenarios to showcase scalability and trade-offs between network properties and configurable protocol parameters. By analyzing the real-world experimental output, we present results that depict a more realistic view of the clustering problem, regarding adaptation to environmental conditions and the quality of topology control. Our study clearly demonstrates the applicability of our approach and the benefits it offers to both research & development communities.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves
(not just the sinks) are mobile. Furthermore, we focus on mobility
scenarios characterized by heterogeneous, highly changing mobility
roles in the network.
To capture these high dynamics of diverse sensory motion
we propose a novel network parameter, the mobility level, which, although
simple and local, quite accurately takes into account both the
spatial and speed characteristics of motion. We then propose
adaptive data dissemination protocols that use the
mobility level estimation to optimize performance, by basically
exploiting high mobility (redundant message ferrying) as a cost-effective
replacement of flooding, e.g., the sensors tend to dynamically propagate
less data in the presence of high mobility, while nodes of high mobility
are favored for moving data around.
These dissemination schemes are enhanced by a distance-sensitive
probabilistic message flooding inhibition mechanism that
further reduces communication cost, especially for fast nodes
of high mobility level, and as distance to data destination
decreases. Our simulation findings demonstrate significant
performance gains of our protocols compared to non-adaptive
protocols, i.e., adaptation increases the success rate and reduces
latency (even by 15\%) while at the same time significantly
reducing energy dissipation (in most cases by even 40\%).
Also, our adaptive schemes achieve significantly
higher message delivery ratio and satisfactory energy-latency
trade-offs when compared to flooding when sensor nodes have limited message queues.
Abstract: Motivated by emerging applications, we consider sensor networks where the sensors themselves
(not just the sinks) are mobile. We focus on mobility
scenarios characterized by heterogeneous, highly changing mobility
roles in the network.
To capture these high dynamics
we propose a novel network parameter, the mobility level, which, although
simple and local, quite accurately takes into account both the
spatial and speed characteristics of motion. We then propose
adaptive data dissemination protocols that use the
mobility level estimation to improve performance. By basically
exploiting high mobility (redundant message ferrying) as a cost-effective
replacement of flooding, e.g., the sensors tend to dynamically propagate
less data in the presence of high mobility, while nodes of high mobility
are favored for moving data around.
These dissemination schemes are enhanced by a distance-sensitive
probabilistic message flooding inhibition mechanism that
further reduces communication cost, especially for fast nodes
of high mobility level, and as distance to data destination
decreases. Our simulation findings demonstrate significant
performance gains of our protocols compared to non-adaptive
protocols, i.e., adaptation increases the success rate and reduces
latency (even by 15\%) while at the same time significantly
reducing energy dissipation (in most cases by even 40\%).
Also, our adaptive schemes achieve significantly
higher message delivery ratio and satisfactory energy-latency
trade-offs when compared to flooding when sensor nodes have limited message queues.
Abstract: Data propagation in wireless sensor networks can be performed either by hop-by-hop single transmissions or by multi-path broadcast of data. Although several energy-aware MAC layer protocols exist that operate very well in the case of single point-to-point transmissions, none is especially designed and suitable for multiple broadcast transmissions. The key idea of our protocols is the passive monitoring of local network conditions and the adaptation of the protocol operation accordingly. The main contribution of our adaptive method is to proactively avoid collisions by implicitly and early enough sensing the need for collision avoidance. Using the above ideas, we design, implement and evaluate three different, new strategies for proactive adaptation. We show, through a detailed and extended simulation evaluation, that our parameter-based family of protocols for multi-path data propagation significantly reduces the number of collisions and thus increases the rate of successful message delivery (to above 90%), while achieving satisfactory trade-offs with the average propagation delay. At the same time, our protocols are shown to be very energy efficient, in terms of the average energy dissipation per delivered message.
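The proactive idea can be sketched as an early contention-window adaptation driven by passive monitoring, as in the Python snippet below; the threshold and the doubling/halving policy are illustrative assumptions, not the three strategies evaluated in the paper.

```python
def adapt_backoff(cw, overheard, threshold, cw_min=8, cw_max=1024):
    """Adapt the contention window *before* collisions happen: a node
    passively counts transmissions overheard in the last monitoring
    window and backs off early when traffic builds up."""
    if overheard > threshold:          # congestion forming: widen early
        cw = min(cw * 2, cw_max)
    elif overheard < threshold // 2:   # channel quiet: recover throughput
        cw = max(cw // 2, cw_min)
    return cw
```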
Abstract: We consider sensor networks where the sensor nodes are attached on entities that move in a highly dynamic, heterogeneous manner. To capture this mobility diversity we introduce a new network parameter, the direction-aware mobility
level, which measures how fast and close each mobile node is expected to get to the data destination (the sink). We then provide local, distributed data dissemination protocols
that adaptively exploit the node mobility to improve performance. In particular, "high" mobility is used as a low cost replacement for data dissemination (due to the ferrying of data), while in the case of "low" mobility either a) data propagation redundancy is increased (when highly mobile neighbors exist) or b) long-distance data transmissions are used (when the entire neighborhood is of low mobility) to accelerate data dissemination towards the sink. An extensive performance comparison to relevant methods from
the state of the art demonstrates significant improvements, i.e. latency is reduced by up to 4 times while keeping energy dissipation and delivery success at very satisfactory levels.
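One plausible formalization of a direction-aware mobility level, sketched in Python: score a node by how fast its current velocity closes the distance to the sink, so that fast nodes heading for the sink rank highest. The paper's exact definition may differ.

```python
import math

def direction_aware_mobility_level(pos, velocity, sink):
    """Rate at which the node closes its distance to the sink, scaled by
    that distance: how fast and how close it is expected to get."""
    dx, dy = sink[0] - pos[0], sink[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return float("inf")                    # already at the sink
    closing_speed = (velocity[0] * dx + velocity[1] * dy) / dist
    return closing_speed / dist                # faster and closer => higher
```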
Abstract: We investigate the problem of efficient wireless energy recharging in Wireless Rechargeable Sensor Networks (WRSNs). In
such networks a special mobile entity (called the Mobile Charger) traverses the network and wirelessly replenishes the energy
of sensor nodes. In contrast to most current approaches, we envision methods that are distributed, adaptive and use limited
network information. We propose three new, alternative protocols for efficient recharging, addressing key issues which we
identify, most notably (i) to what extent each sensor should be recharged, (ii) what is the best split of the total energy between
the charger and the sensors and (iii) what are good trajectories the MC should follow. One of our protocols (LRP) performs
some distributed, limited sampling of the network status, while another one (RTP) reactively adapts to energy shortage alerts
judiciously spread in the network. As detailed simulations demonstrate, both protocols significantly outperform known state
of the art methods, while their performance gets quite close to the performance of the global knowledge method (GKP) we
also provide, especially in heterogeneous network deployments.
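A sketch of an RTP-flavoured reaction rule in Python: the Mobile Charger picks its next target among the nodes whose energy-shortage alerts it has received. The urgency score that trades residual energy against travel distance is an illustrative assumption.

```python
import math

def next_charging_target(charger_pos, alerts):
    """Choose the node to recharge next. `alerts` maps a node id to
    ((x, y), residual_energy); lower residual energy and shorter travel
    distance both make a node more urgent."""
    def urgency(item):
        (x, y), residual = item[1]
        dist = math.hypot(x - charger_pos[0], y - charger_pos[1])
        return residual + 0.1 * dist      # weight is an assumed trade-off
    node, _ = min(alerts.items(), key=urgency)
    return node
```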
Abstract: In this paper, we propose an efficient non-linear task workload prediction mechanism combined with a fair scheduling algorithm
for task allocation and resource management in Grid computing. Workload prediction is accomplished in a Grid middleware approach
using a non-linear model expressed as a series of finite known functional components, using concepts of functional analysis. The coefficients
of the functional components are obtained using a training set of appropriate samples, the pairs of which are estimated based on
a runtime estimation model that relies on a least squares approximation scheme. The advantages of the proposed non-linear task workload
prediction scheme are that (i) it is not constrained by analysis of source code (analytical methods), which is practically impossible
in complicated real-life applications, and (ii) it does not exploit the variations of the workload statistics as the statistical
approaches do. The predicted task workload is then exploited by a novel scheduling algorithm, enabling a fair Quality of Service oriented
resource management so that some tasks are not favored over others. The algorithm is based on estimating the adjusted fair
completion times of the tasks for task order selection and on an earliest completion time strategy for the grid resource assignment. Experimental
results and comparisons with traditional scheduling approaches, as implemented in the framework of the European Union funded
research projects GRIA and GRIDLAB grid infrastructures, reveal that the proposed method outperforms them.
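Once the functional components are fixed, fitting the model reduces to ordinary least squares. Below is a sketch in Python with NumPy; the one-dimensional task descriptor and the basis functions are illustrative assumptions.

```python
import numpy as np

def fit_workload_model(X, y, components):
    """Fit workload ~ sum_k c_k * phi_k(x) by least squares, where the
    phi_k are fixed, known functional components."""
    Phi = np.column_stack([phi(X) for phi in components])
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coeffs

def predict_workload(x, coeffs, components):
    return sum(c * phi(x) for c, phi in zip(coeffs, components))

# Illustrative basis over a scalar descriptor (e.g. scene complexity).
basis = [np.ones_like, lambda x: x, lambda x: x ** 2, np.log1p]
X = np.linspace(1.0, 10.0, 20)
y = 3.0 + 0.5 * X ** 2                      # synthetic training workloads
coeffs = fit_workload_model(X, y, basis)
print(predict_workload(np.array([5.0]), coeffs, basis))  # ~ 15.5
```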
Abstract: Smart cities are becoming a vibrant application domain for a number of science fields. As such, service providers and stakeholders are beginning to integrate co-creation aspects into current implementations to shape the future smart city solutions. In this context, holistic solutions are required to test such aspects in real city-scale IoT deployments, considering the complex city ecosystems. In this work, we discuss OrganiCity's implementation of an Experimentation-as-a-Service framework, presenting a toolset that allows developing, deploying and evaluating smart city solutions in a one-stop shop manner. This is the first time such an integrated toolset is offered in the context of a large-scale IoT infrastructure, which spans across multiple European cities. We discuss the design and implementation of the toolset, presenting our view on what Experimentation-as-a-Service should provide, and how it is implemented. We present initial feedback from 25 experimenter teams that have utilized this toolset in the OrganiCity project, along with a discussion on two detailed actual use cases to validate our approach. Learnings from all experiments are discussed as well as architectural considerations for platform scaling. Our feedback from experimenters indicates that Experimentation-as-a-Service is a viable and useful approach.
Abstract: We argue the case for a new paradigm for architecting structured P2P overlay networks, coined AESOP. AESOP consists of 3 layers: (i) an architecture, PLANES, that ensures significant performance speedups, assuming knowledge of altruistic peers; (ii) an accounting/auditing layer, AltSeAl, that identifies and validates altruistic peers; and (iii) SeAledPLANES, a layer that facilitates the coordination/collaboration of the previous two components. We briefly present these components along with experimental and analytical data of the promised significant performance gains and the related overhead. In light of these very encouraging results, we put this three-layer architecture paradigm forth as the way to structure the P2P overlay networks of the future.
Abstract: We investigate the problem of efficient data collection in wireless sensor networks where both the sensors and the sink move. We especially study the important, realistic case where the spatial distribution of sensors is non-uniform and their mobility is diverse and dynamic. The basic idea of our protocol is for the sink to benefit from the local information that sensors spread in the network as they move, in order to extract current local conditions and accordingly adjust its trajectory. Thus, sensory motion anyway present in the network serves as a low cost replacement of network information propagation. In particular, we investigate two variations of our method: a) the greedy motion of the sink towards the region of highest density each time and b) taking into account the aggregate density in wider network regions. An extensive comparative evaluation to relevant data collection methods (both randomized and optimized deterministic) demonstrates that our approach achieves significant performance gains, especially in non-uniform placements (but also in uniform ones). In fact, the greedy version of our approach is more suitable in networks where the concentration regions appear in a spatially balanced manner, while the aggregate scheme is more appropriate in networks where the concentration areas are geographically correlated. We also investigate the case of multiple sinks by suggesting appropriate distributed coordination methods.
Abstract: Optical network design problems fall in the broad
category of network optimization problems. We give a short
introduction on network optimization and general algorithmic
techniques that can be used to solve complex and difficult
network design problems. We apply these techniques to address
the static Routing and Wavelength Assignment problem that is
related to the planning phase of a WDM optical network. We present
simulation results to evaluate the performance of the proposed
algorithmic solution.
Abstract: As a result of recent significant technological advances, a new computing and communication environment, Mobile Ad Hoc Networks (MANET), is about to enter the mainstream. A multitude of critical aspects, including mobility, severe limitations and limited reliability, create a new set of crucial issues and trade-offs that must be carefully taken into account in the design of robust and efficient algorithms for these environments. The communication among mobile hosts is one among the many issues that need to be resolved efficiently before MANET becomes a commodity.
In this paper, we propose to discuss the communication problem in MANET as well as present some characteristic techniques for the design, the analysis and the performance evaluation of distributed communication protocols for mobile ad hoc networks. More specifically, we propose to review two different design techniques. While the first type of protocols tries to create and maintain routing paths among the hosts, the second set of protocols uses a randomly moving subset of the hosts that acts as an intermediate pool for receiving and delivering messages. We discuss the main design choices for each approach, along with performance analysis of selected protocols.
Abstract: In this work we study the important problem of colouring squares of planar graphs (SQPG). We design and implement two new algorithms that colour SQPG in different ways. We call these algorithms MDsatur and RC. We have also implemented and experimentally evaluated the performance of most of the known approximation colouring algorithms for SQPG [14, 6, 4, 10]. We compare the quality of the colourings achieved by these algorithms with the colourings obtained by our algorithms and with the results obtained from two well-known greedy colouring heuristics. The heuristics are mainly used for comparison reasons and unexpectedly give very good results. Our algorithm MDsatur outperforms the known algorithms, as shown by the extensive experiments we have carried out.
The planar graph instances whose squares are used in our experiments are “non-extremal” graphs obtained by LEDA and hard colourable graph instances that we construct.
The most interesting conclusions of our experimental study are:
1) all colouring algorithms considered here have almost optimal performance on the squares of “non-extremal” planar graphs. 2) all known colouring algorithms especially designed for colouring SQPG give significantly better results, even on hard to colour graphs, when the vertices of the input graph are randomly named. On the other hand, the performance of our algorithm, MDsatur, becomes worse in this case; however, it still has the best performance compared to the others. MDsatur colours the tested graphs with 1.1 OPT colours in most of the cases, even on hard instances, where OPT denotes the number of colours in an optimal colouring. 3) we construct worst-case instances for the algorithm of Fotakis et al. [6], which show that its theoretical analysis is tight.
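MDsatur builds on the classic DSatur heuristic, which the sketch below implements in Python: repeatedly colour the uncoloured vertex that sees the most distinct neighbour colours, breaking ties by degree. To colour SQPG one would run it on the square of the input graph; MDsatur's own modifications are not reproduced here.

```python
def dsatur(adj):
    """Greedy DSatur colouring. `adj` maps each vertex to the set of its
    neighbours; returns a dict vertex -> colour (0, 1, 2, ...)."""
    colour = {}
    while len(colour) < len(adj):
        def priority(v):
            # Saturation: distinct colours among already-coloured neighbours.
            sat = len({colour[u] for u in adj[v] if u in colour})
            return (sat, len(adj[v]))
        v = max((u for u in adj if u not in colour), key=priority)
        taken = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in taken)
    return colour

# A 4-cycle needs 2 colours (its square, K4, would need 4).
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(max(dsatur(c4).values()) + 1)   # 2
```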
Abstract: This paper reviews the work performed under the
European ESPRIT project DO_ALL (Digital OpticAL Logic
modules) spanning from advanced devices (semiconductor optical
amplifiers) to all-optical modules (laser sources and gates) and
from optical signal processing subsystems (packet clock recovery,
optical write/store memory, and linear feedback shift register) to
their integration in the application level for the demonstration of
nontrivial logic functionality (all-optical bit-error-rate tester and
a 2 × 2 exchange-bypass switch). The successful accomplishment
of the project's goals has opened the road for the implementation
of more complex ultra-high-speed all-optical signal processing
circuits that are key elements for the realization of all-optical
packet switching networks.
Abstract: In this paper, we consider the problem of energy balanced data propagation in wireless sensor networks and we generalise previous works by allowing realistic energy assignment. A new modelling of the process of energy consumption as a random walk, along with a new analysis, is proposed. Two new algorithms are presented and analysed. The first one is easy to implement and fast to execute. However, it needs a priori assumptions on the process generating data to be propagated. The second algorithm overcomes this need by inferring information from the observation of the process. Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes. This represents an important contribution for propagating energy balanced data in wireless sensor networks due to their highly dynamic nature.
Abstract: In this paper we study the problem of basic communication
in ad-hoc mobile networks where the deployment area changes in a
highly dynamic way and is unknown. We call such networks
highly changing ad-hoc mobile networks.
For such networks we investigate an efficient communication protocol which extends
the idea (introduced in [WAE01,POMC01]) of exploiting the co-ordinated
motion of a small part of an ad-hoc mobile
network (the ``runners support") to achieve
very fast communication between any two mobile users of the network.
The basic idea of the new protocol presented here is, instead
of using a fixed sized support for the whole duration of the protocol,
to employ a support of some initial (small) size which
adapts (given some time which can be made fast enough) to the
actual levels of traffic and the
(unknown and possibly rapidly changing) network area by
changing its size in order to converge to an optimal size,
thus satisfying certain Quality of Service criteria.
We provide here some proofs of correctness and fault tolerance
of this adaptive approach and we also provide analytical results
using Markov Chains and random walk techniques to show that such
an adaptive approach is, for this class of ad-hoc mobile networks, significantly more efficient than a simple non-adaptive
implementation of the basic ``runners support" idea.
Abstract: We introduce a new modelling assumption in wireless sensor networks, that of node redeployment (addition of sensor devices during the protocol evolution) and we extend the modelling assumption of heterogeneity (having sensor devices of various types). These two features further increase the highly dynamic nature of such networks and adaptation becomes a powerful technique for protocol design. Under this model, we design, implement and evaluate a power conservation scheme for efficient data propagation. Our protocol is adaptive: it locally monitors the network conditions (density, energy) and accordingly adjusts the sleep-awake schedules of the nodes towards best operation choices. Our protocol does not require exchange of control messages between nodes to coordinate. Implementing our protocol we combine it with two well-known data propagation protocols and evaluate the achieved performance through a detailed simulation study using our extended version of Ns2. We focus on highly dynamic scenarios with respect to network density, traffic conditions and sensor node resources. We propose a new general and parameterized metric capturing the trade-off between delivery rate, energy efficiency and latency. The simulation findings demonstrate significant gains (such as more than doubling the success rate of the well-known Directed Diffusion propagation paradigm) and good trade-offs. Furthermore, redeployment of sensors during network evolution and/or heterogeneous deployment of sensors drastically improve (when compared to "equal total power" simultaneous deployment of identical sensors at the start) the protocol performance (the success rate increases up to four times while reducing energy dissipation and, interestingly, keeping latency low).
Abstract: The use of Augmented Reality (AR) technologies is currently being investigated in numerous and diverse application domains. In this work, we discuss the ways in which we are integrating AR into educational in-class activities for the GAIA project, aiming to enhance existing tools that target behavioral changes towards energy efficiency in schools. We combine real-time IoT data from a sensing infrastructure inside a fleet of school buildings with AR software running on tablets and smartphones, as companions to a set of educational lab activities aimed at promoting energy awareness in a STEM context. We also utilize this software as a means to ease access to IoT data and simplify device maintenance. We report on the design and current status of our implementation, describing functionality in the context of our target applications, while also relaying our experiences from the use of such technologies in this application domain.
Abstract: We introduce a new model of
ad-hoc mobile networks, which we call hierarchical,
that are comprised of dense subnetworks of mobile
users (corresponding to highly populated
geographical areas, such as cities),
interconnected across access ports
by sparse but frequently used connections
(such as highways).
For such networks, we present
an efficient routing protocol which extends
the idea (introduced in WAE00) of exploiting the co-ordinated
motion of a small part of an ad-hoc mobile
network (the ``support'') to achieve
very fast communication between any two mobile users of the network.
The basic idea of the new protocol presented here is, instead
of using a unique (large) support for the whole network,
to employ a hierarchy of (small) supports (one for each city)
and also take advantage of the regular traffic
of mobile users across the interconnection highways to communicate
between cities.
We combine here theoretical analysis (average case estimations based on random walk properties) and experimental implementations (carried out using the LEDA platform) to claim and validate results showing that such a hierarchical routing approach is,
for this class of ad-hoc mobile networks, significantly more efficient than a simple extension of the
basic ``support'' idea presented in WAE00.
Abstract: The “small world” phenomenon, i.e., the fact that the
global social network is strongly connected in the sense
that every two persons are inter-related through a small
chain of friends, has attracted research attention and has
been strongly related to the results of the experiments of
social psychologist Stanley Milgram; properties
of social networks and relevant problems also emerge in
peer-to-peer systems and their study can shed light on
important modern network design properties.
In this paper, we have experimentally studied greedy
routing algorithms, i.e., algorithms that route information
using “long-range” connections that function as
shortcuts connecting “distant” network nodes. In
particular, we have implemented greedy routing
algorithms, and techniques from the recent literature in
networks of line and grid topology using parallelization
for increasing efficiency. To the best of our knowledge, no
similar attempt has been made so far
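As a concrete illustration of the routing rule studied above, the following minimal Python sketch forwards greedily towards the target. It is our illustration, not the paper's implementation: `neighbours` (returning local links plus long-range shortcuts) and `distance` (e.g. |i - j| on the line topology) are assumed callables.

    def greedy_route(source, target, neighbours, distance):
        # Repeatedly forward to the neighbour (local or shortcut)
        # closest to the target; stop if no neighbour makes progress.
        path, current = [source], source
        while current != target:
            nbrs = neighbours(current)
            if not nbrs:
                break                     # isolated node: routing fails
            nxt = min(nbrs, key=lambda v: distance(v, target))
            if distance(nxt, target) >= distance(current, target):
                break                     # greedy dead end
            current = nxt
            path.append(current)
        return path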
Abstract: We present the conceptual basis and the initial planning for an open source management architecture for wireless sensor networks (WSN). Although there is an abundance of open source tools serving the administrative needs of WSN deployments, there is a lack of tools or platforms for high level integrated WSN management. This is because of a variety of factors, including the lack of open source management tools, the immaturity of tools that offer manageability for WSNs, the limited high level management capabilities of sensor devices and architectures, and the lack of standardization. The current work is, to our knowledge, the first effort to conceptualize, formalize and design a remote, integrated management platform for the support of WSN research laboratories. The platform is based on the integration and extension of two innovative platforms: jWebDust, a WSN operation and management platform, and OpenRSM, an open source integrated remote systems and network management platform. The proposed system architecture can support several levels of integration (infrastructure management, functionality integration, firmware management), corresponding to different use-cases and application settings.
Abstract: Urban road networks are represented as directed graphs, accompanied by a metric which assigns cost functions (rather than scalars) to the arcs, e.g. representing time-dependent arc-traversal times. In this work, we present oracles for providing time-dependent min-cost route plans, and conduct their experimental evaluation on a real-world data set (city of Berlin). Our oracles are based on precomputing all landmark-to-vertex shortest travel-time functions, for properly selected landmark sets. The core of this preprocessing phase is a novel, quite efficient and simple one-to-all approximation method for creating approximations of shortest travel-time functions. We then propose three query algorithms, including a PTAS, to efficiently provide min-cost route plan responses to arbitrary queries. Apart from the purely algorithmic challenges, we deal also with several implementation details concerning the digestion of raw traffic data, and we provide heuristic improvements of both the preprocessing phase and the query algorithms. We conduct an extensive, comparative experimental study with all query algorithms and six landmark sets. Our results are quite encouraging, achieving remarkable speedups (at least two orders of magnitude) and quite small approximation guarantees, over the time-dependent variant of Dijkstra's algorithm.
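The query side of such a landmark-based oracle can be pictured with the following hedged Python sketch. The names `nearby_landmarks`, `to_landmark` and `landmark_func` are hypothetical stand-ins for the preprocessed landmark data; the paper's PTAS machinery is not reproduced here.

    def oracle_query(origin, dest, t_dep, nearby_landmarks, to_landmark, landmark_func):
        # Reach a nearby landmark l, then continue with the precomputed
        # (approximate) travel-time function from l to the destination.
        best = float("inf")
        for l in nearby_landmarks(origin):
            t_at_l = t_dep + to_landmark(origin, l, t_dep)       # local part
            total = (t_at_l - t_dep) + landmark_func[l](dest, t_at_l)
            best = min(best, total)
        return best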
Abstract: We introduce a new model of ad-hoc mobile networks,
which we call hierarchical, that are comprised of
dense subnetworks of mobile users (corresponding to highly
populated geographical areas), interconnected across access
ports by sparse but frequently used connections.
To implement communication in such a case, a possible
solution would be to install a very fast (yet limited) backbone
interconnecting such highly populated mobile user areas, while
employing a hierarchy of (small) supports (one for each lower level
site). This fast backbone provides a limited number of access
ports within these dense areas of mobile users.
We provide here a theoretical analysis (average-case estimations based on
random walk properties) to claim and validate
results showing that such a hierarchical routing approach is,
for this class of ad-hoc mobile networks, significantly
more efficient than a simple extension of the
basic ``support'' idea presented in [WAE00,DISC01].
Abstract: Motivated by the wavelength assignment problem in WDM optical networks, we study path coloring problems in graphs. Given a set of paths P on a graph G, the path coloring problem is to color the paths of P so that no two paths traversing the same edge of G are assigned the same color and the total number of colors used is minimized. The problem has been proved to be NP-hard even for trees and rings.
Using optimal solutions to fractional path coloring, a natural relaxation of path coloring, to which we apply a randomized rounding technique combined with existing coloring algorithms, we obtain new upper bounds on the minimum number of colors sufficient to color any set of paths on any graph. The upper bounds are either existential or constructive.
The existential upper bounds significantly improve existing ones provided that the cost of the optimal fractional path coloring is sufficiently large and the dilation of the set of paths is small. Our algorithmic results include improved approximation algorithms for path coloring in rings and in bidirected trees. Our results extend to variations of the original path coloring problem arising in multifiber WDM optical networks.
Abstract: We consider the Moran process, as generalized by Lieberman, Hauert and Nowak (Nature, 433:312--316, 2005). A population resides on the vertices of a finite, connected, undirected graph and, at each time step, an individual is chosen at random with probability proportional to its assigned 'fitness' value. It reproduces, placing a copy of itself on a neighbouring vertex chosen uniformly at random, replacing the individual that was there. The initial population consists of a single mutant of fitness r>0 placed uniformly at random, with every other vertex occupied by an individual of fitness 1. The main quantities of interest are the probabilities that the descendants of the initial mutant come to occupy the whole graph (fixation) and that they die out (extinction); almost surely, these are the only possibilities. In general, exact computation of these quantities by standard Markov chain techniques requires solving a system of linear equations of size exponential in the order of the graph so is not feasible. We show that, with high probability, the number of steps needed to reach fixation or extinction is bounded by a polynomial in the number of vertices in the graph. This bound allows us to construct fully polynomial randomized approximation schemes (FPRAS) for the probability of fixation (when r≥1) and of extinction (for all r>0).
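Since the process itself is fully specified above, a naive Monte Carlo simulation is straightforward. The Python sketch below is our illustration, not the paper's FPRAS (which additionally controls the number of steps and trials needed for a guaranteed approximation).

    import random

    def simulate_moran(adj, r, max_steps=10**6):
        # adj: dict mapping each vertex to a list of its neighbours.
        # r:   fitness of the mutant type (residents have fitness 1).
        vertices = list(adj)
        mutants = {random.choice(vertices)}      # initial mutant placed u.a.r.
        for _ in range(max_steps):
            if not mutants or len(mutants) == len(vertices):
                return bool(mutants)             # extinction or fixation
            # reproducer chosen with probability proportional to fitness
            weights = [r if v in mutants else 1.0 for v in vertices]
            parent = random.choices(vertices, weights=weights)[0]
            # offspring replaces a uniformly random neighbour
            child = random.choice(adj[parent])
            if parent in mutants:
                mutants.add(child)
            else:
                mutants.discard(child)
        raise RuntimeError("no absorption within max_steps")

    def estimate_fixation(adj, r, trials=1000):
        # Monte Carlo estimate of the fixation probability.
        return sum(simulate_moran(adj, r) for _ in range(trials)) / trials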
Abstract: The study of the path coloring problem is motivated by the allocation of optical bandwidth to communication requests in all-optical networks that utilize Wavelength Division Multiplexing (WDM). WDM technology establishes communication between pairs of network nodes by establishing transmitter-receiver paths and assigning wavelengths to each path so that no two paths going through the same fiber link use the same wavelength. Optical bandwidth is the number of distinct wavelengths. Since state-of-the-art technology allows for a limited number of wavelengths, the engineering problem to be solved is to establish communication minimizing the total number of wavelengths used. This is known as the wavelength routing problem. In the case where the underlying network is a tree, it is equivalent to the path coloring problem.
We survey recent advances on the path coloring problem in both undirected and bidirected trees. We present hardness results and lower bounds for the general problem covering also the special case of sets of symmetric paths (corresponding to the important case of symmetric communication). We give an overview of the main ideas of deterministic greedy algorithms and point out their limitations. For bidirected trees, we present recent results about the use of randomization for path coloring and outline approximation algorithms that find path colorings by exploiting fractional path colorings. Also, we discuss upper and lower bounds on the performance of on-line algorithms.
Abstract: We examine the problem of assigning n independent jobs to m unrelated parallel machines, so that each job is processed without interruption on one of the machines, and at any time, every machine processes at most one job. We focus on the case where m is a fixed constant, and present a new rounding approach that yields approximation schemes for multi-objective minimum makespan scheduling with a fixed number of linear cost constraints. The same approach gives approximation schemes for covering problems like maximizing the minimum load on any machine, and for assigning specific or equal loads to the machines.
Abstract: We present a new architecture for a bufferless, asynchronous, all-optical self-routing network, combining an efficient physical-layer structure with a conflict-preventing signaling protocol to provide lossless communication with optimum resource utilization and QoS differentiation.
Abstract: We present a 40 Gb/s asynchronous self-routing network and node architecture that exploits bit- and packet-level optical signal processing to perform synchronization, forwarding and switching. Optical packets are self-routed on a hop-by-hop basis through the network by using stacked optical tags, each representing a specific optical node. Each tag contains control signals for configuring the switching matrix and forwarding each packet to the appropriate outgoing link and onto the next hop. Physical-layer simulations are performed, modeling each optical subsystem of the node and showing acceptable signal quality and bit error rates. Resource-reservation-based signaling algorithms for the control plane are theoretically modeled and shown capable of providing high performance in terms of blocking probability and holding time.
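A hedged software analogue of the stacked-tag forwarding step may help picture the mechanism; `control_to_port`, mapping a tag's control signals to an outgoing link, is a hypothetical helper and not part of the optical implementation.

    def forward_hop(tags, control_to_port):
        # Consume the top-of-stack tag addressed to this node; its control
        # signals select the outgoing link, while the remaining tags travel
        # on with the packet to configure the downstream hops.
        top, *rest = tags
        out_link = control_to_port(top)   # hypothetical control-bits-to-port map
        return out_link, rest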
Abstract: We design and implement an algorithm for solving the static RWA problem based on an LP relaxation formulation. This formulation is capable of providing integer optimal solutions, despite the absence of integrality constraints, for a large subset of RWA input instances. In static RWA there is no a priori knowledge of channel usage, and the interference among channels cannot be avoided once the solution has been found. To take adjacent-channel interference into consideration, we extend our formulation and model the interference by a set of analytical formulas serving as additional constraints on RWA.
Abstract: In this paper, we present BAD, an application-level multicast infrastructure. BAD is designed to improve the performance of multicast dissemination trees, under both a static and a dynamic environment, where the effective bandwidth of the network links changes with time. Its main goal is to improve the data rate that end users perceive during a multicast operation. BAD can be used for the creation and management of multicast groups. It can be deployed over any DHT, retaining its fundamental advantages of bandwidth improvement. BAD consists of a suite of algorithms for node joins/leaves, bandwidth distribution to heterogeneous nodes, tree rearrangement and reduction of overhead. We have implemented BAD within the FreePastry system. We report on the results of a detailed performance evaluation which testifies to BAD's efficiency and low overhead. Specifically, our experiments show that the improvement on the minimum bandwidth ranges from 40% to 1400% and the improvement on the average bandwidth ranges from 60% to 250%.
Abstract: Collecting sensory data using a mobile data sink has been shown to drastically reduce energy consumption at the cost of increased delivery delay. Towards improved energy-latency trade-offs, we propose a biased, adaptive sink mobility scheme that adjusts to local network conditions, such as the surrounding density, remaining energy and the number of past visits in each network region. The sink moves probabilistically, favoring less visited areas in order to cover the network area faster, while adaptively pausing longer in network regions that tend to produce more data. We implement and evaluate our mobility scheme via simulation in diverse network settings. Compared to known blind random, non-adaptive schemes, our method achieves significantly reduced latency, especially in networks with non-uniform sensor distribution, without compromising energy efficiency and delivery success.
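A minimal sketch of the biased next-step choice described above, assuming a region graph with per-region visit counters. The exact weighting (here 1/(1+visits)^bias) and the density-proportional pause time are our assumptions; the abstract only fixes the direction of the bias.

    import random

    def next_region(current, neighbours, visits, bias=1.0):
        # Favour less-visited neighbouring regions: weight each candidate
        # by 1 / (1 + visit count)^bias, then step probabilistically.
        candidates = neighbours[current]
        weights = [1.0 / (1.0 + visits[c]) ** bias for c in candidates]
        chosen = random.choices(candidates, weights=weights)[0]
        visits[chosen] += 1
        return chosen

    def pause_time(local_density, base=1.0):
        # Stay longer in regions that tend to produce more data,
        # e.g. in proportion to the locally observed density (assumption).
        return base * local_density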
Abstract: We focus on range query processing on large-scale, typically distributed infrastructures. In this work we present the ART (Autonomous Range Tree) structure, which outperforms the most popular decentralized structures, including Chord (and some of its successors), BATON (and its successor) and Skip-Graphs. ART supports the join/leave and range query operations in O(log log N) and O(log²_b log N + |A|) hops, respectively (expected, w.h.p.), where the base b is a double-exponential power of two, N is the total number of peers and |A| is the answer size.
Abstract: Purpose – To examine broadband competition and broadband penetration in a set of countries that employ the same regulation framework. To define the policy and strategy required to promote broadband in weak markets that do not employ alternative infrastructures.
Design/methodology/approach – Study penetration and competition level statistics from 2002 to 2005 in a set of countries that differ in deployed infrastructures, provided services, and socio-economic structures, but employ the same regulation framework. Measure the level of inter-platform and intra-platform competition as well as the availability of bitstream access versus the incumbents' shares.
Findings – The paper concludes that a mature broadband market is one that exhibits a high penetration ratio in combination with a high competition level. Bitstream access can counterbalance the absence of alternative broadband infrastructures, especially in weak markets. In particular, the availability of numerous bitstream access types in combination with proper price differentiation can fuel broadband adoption in relatively weak broadband markets.
Originality/value – The paper challenges the general rule that only platform-based (also known as facility-based) competition guarantees long-term growth of the broadband market. Bitstream and resale access do not lag behind local loop unbundling, and can be used to fuel competition in weak markets that do not employ alternative infrastructures. In that case, different policies and strategies must be followed by the local NRA.
Abstract: In this work we present a platform-agnostic framework for integrating heterogeneous Smart Objects into the Web of Things. Our framework covers 4 different hardware platforms, Arduino, SunSPOT, TelosB and iSense; these hardware platforms are the most representative ones used by the relevant research community. A first contribution of our work is a careful description of the necessary steps to make such a heterogeneous network interoperate, together with the implementation of a network stack, in the form of a software library named mkSense, which enables their intercommunication. Moreover, we describe the design and implementation of a software library which can be used for building “intelligent software” for the Web of Things.
Abstract: Ever since the creation of the first human society, people have understood that the only way of sustaining and improving their societies is to rely on each other for exchanging services. This reliance has traditionally been built on developing, among them, trust: a vague, to a large extent intuitive, and hard-to-define concept that brought together people who worked towards the progress we all witness around us today. Today's society is, however, becoming increasingly massive, collective and complex, and includes not only people but huge numbers of machines as well. It is no overstatement to say that machines, interconnected into a complex communication fabric and communicating directly with people, will very soon outnumber people by several orders of magnitude. Thus trust, already a difficult concept to define and measure when applied to a few people that form a cooperating group or a set of acquaintances, is far more difficult to pinpoint when applied to large communities whose members may hardly know each other in person, or to interconnected machines employed by these communities. In this paper we attempt to take a pragmatic position with regard to trust definition and measurement. We employ several formalisms, in each of which we define a reasonable notion of trust, and show that inherent weaknesses of these formalisms result in an inability to have a concrete and fully measurable trust concept. We then argue that trust in the modern intertwined WWW society must, necessarily, incorporate to some degree non-formalizable elements, such as common sense and intuition. Although this may sound pessimistic, our view is that it is not, since understanding these limitations of formalism with regard to trust increases self-awareness and caution on people's part and avoids the problems that result if one relies only on automation in order to deduce trust.
Abstract: We demonstrate an all-optical clock and data recovery
circuit for short asynchronous data packets at 10-Gb/s line
rate. The technique employs a Fabry–Pérot filter and an ultrafast
nonlinear interferometer (UNI) to generate the local packet
clock, followed by a second UNI gate to act as decision element,
performing a logical AND operation between the extracted clocks
and the incoming data packets. The circuit can handle short
packets arriving at time intervals as short as 1.5 ns and arbitrary
phase alignment.
Abstract: We study the problem of energy-balanced data propagation in wireless sensor networks. The energy balance property is crucial for maximizing the time the network is functional, by avoiding early energy depletion of a large portion of sensors. We propose a distributed, adaptive data propagation algorithm that exploits limited, local network density information for achieving energy-balance while at the same time
minimizing energy dissipation.
We investigate both uniform and heterogeneous sensor placement distributions. By a detailed experimental evaluation and comparison with well-known energy-balanced protocols, we show that our density-based protocol improves energy efficiency significantly while also having better energy balance properties.
Furthermore, we compare the performance of our protocol with a centralized, off-line optimum solution derived by a linear program which maximizes the network lifetime, and show that it achieves near-optimal performance for uniform sensor deployments.
Abstract: The Team Orienteering Problem with Time Windows (TOPTW) deals with deriving a number of tours comprising a subset of candidate nodes (each associated with a "profit" value and a visiting time window) so as to maximize the overall "profit", while respecting a specified time span. TOPTW has been used as a reference model for the Tourist Trip Design Problem (TTDP) in order to derive near-optimal multiple-day tours for tourists visiting a destination featuring several points of interest (POIs), taking into account a multitude of POI attributes. TOPTW is an NP-hard problem and the most efficient known heuristic is based on Iterated Local Search (ILS). However, ILS treats each POI separately; hence it tends to overlook highly profitable areas of POIs situated far from the current location, considering them too time-expensive to visit. We propose two cluster-based extensions to ILS addressing the aforementioned weakness by grouping POIs into disjoint clusters (based on geographical criteria), thereby making visits to such POIs more attractive. Our approaches improve on ILS with respect to solution quality, while executing in comparable time and reducing the frequency of overly long transfers among POIs.
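The cluster-based idea can be sketched as a scoring rule: classic ILS for TOPTW typically ranks candidate insertions by profit²/time-shift, and the extension boosts candidates from profitable geographical clusters. The boost form and `alpha` below are our assumptions, not the paper's exact formula.

    from dataclasses import dataclass

    @dataclass
    class POI:
        profit: float
        cluster: int

    def cluster_aware_score(poi, time_shift, cluster_profit, alpha=0.5):
        # Classic ILS insertion ratio (profit^2 / induced time shift),
        # boosted by the total profit of the candidate's cluster so that
        # rich, far-away areas are not dismissed as too time-expensive.
        base = poi.profit ** 2 / max(time_shift, 1e-9)
        return base * (1.0 + alpha * cluster_profit[poi.cluster])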
Abstract: This work proposes a methodology for source code quality and static behaviour evaluation of a software
system, based on the standard ISO/IEC-9126. It uses elements automatically derived from source code
enhanced with expert knowledge, in the form of quality characteristic rankings, allowing software
engineers to assign weights to source code attributes. It is flexible in terms of the set of metrics and
source code attributes employed, even in terms of the ISO/IEC-9126 characteristics to be assessed. We
applied the methodology to two case studies, involving five open source and one proprietary system.
Results demonstrated that the methodology can capture software quality trends and express expert
perceptions concerning system quality in a quantitative and systematic manner.
Abstract: An intersection graph of n vertices assumes that each vertex is equipped with a subset of a global label set. Two vertices share an edge when their label sets intersect. Random Intersection Graphs (RIGs) (as defined in [18, 31]) consider label sets formed by the following experiment: each vertex, independently and uniformly, examines all the labels (m in total) one by one, and each examination independently succeeds to put the label in the vertex's set with probability p. Such graphs nicely capture interactions in networks due to sharing of resources among nodes.
We study here the problem of efficiently coloring (and of finding upper bounds to the chromatic number of) RIGs. We concentrate on a range of parameters not examined in the literature, namely: (a) m = n^α for α less than 1 (in this range, RIGs differ substantially from Erdős–Rényi random graphs), and (b) the selection probability p is quite high (e.g. at least ln²n/m in our algorithm), which disallows direct greedy colouring methods.
We manage to get the following results:
- For the case mp ≤ β ln n, for any constant β < 1 − α, we prove that np colours are enough to colour most of the vertices of the graph with high probability (whp). This means that even for quite dense graphs, using the same number of colours as those needed to properly colour the clique induced by any label suffices to colour almost all of the vertices of the graph. Note also that this range of values of m, p is quite wider than the one studied in [4].
- We propose and analyze an algorithm CliqueColour for finding a proper colouring of a random instance of G_{n,m,p}, for any mp ≥ ln²n. The algorithm uses information of the label sets assigned to the vertices of G_{n,m,p} and runs in O(n²mp²/ln n) time, which is polynomial in n and m. We also show, by a reduction to the uniform random intersection graphs model, that the number of colours required by the algorithm is of the correct order of magnitude with respect to the actual chromatic number of G_{n,m,p}.
- We finally compare the problem of finding a proper colouring for G_{n,m,p} to that of colouring hypergraphs so that no edge is monochromatic. We show how one can find in polynomial time a k-colouring of the vertices of G_{n,m,p}, for any integer k, such that no clique induced by only one label in G_{n,m,p} is monochromatic. Our techniques are novel and try to exploit as much as possible the hidden structure of random intersection graphs in this interesting range.
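For reference, the sampling experiment defining G_{n,m,p} above translates directly into code; this short Python sketch generates a random instance (illustrative only).

    import random
    from itertools import combinations

    def random_intersection_graph(n, m, p, seed=None):
        # Each vertex keeps each of the m labels independently with
        # probability p; two vertices are adjacent iff their label
        # sets intersect -- exactly the experiment defining G_{n,m,p}.
        rng = random.Random(seed)
        labels = [{l for l in range(m) if rng.random() < p} for _ in range(n)]
        edges = [(u, v) for u, v in combinations(range(n), 2)
                 if labels[u] & labels[v]]
        return labels, edges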
Abstract: We investigate random intersection graphs, a combinatorial model that quite accurately abstracts distributed networks with local interactions between nodes blindly sharing critical resources from a limited globally available domain. We study important combinatorial properties (independence and hamiltonicity) of such graphs. These properties relate crucially to algorithmic design for important problems (like secure communication and frequency assignment) in distributed networks characterized by dense, local interactions and resource limitations, such as sensor networks. In particular, we prove that, interestingly, a small constant number of random resource selections suffices to make the graph hamiltonian, and we provide tight evaluations of the independence number of these graphs.
Abstract: This chapter aims at presenting certain important aspects of the design of lightweight, event-driven algorithmic solutions for data dissemination in wireless sensor networks that provide support for reliable, efficient and concurrency-intensive operation. We wish to emphasize that efficient solutions at several levels are needed, e.g. higher-level energy-efficient routing protocols and lower-level power management schemes. Furthermore, it is important to combine such different-level methods into integrated protocols and approaches. Such solutions must be simple, distributed and local. Two useful algorithmic design principles are randomization (to trade off efficiency and fault-tolerance) and adaptation (to adjust to high network dynamics towards improved operation). In particular, we provide a) a brief description of the technical specifications of state-of-the-art sensor devices, b) a discussion of possible models used to abstract such networks, emphasizing heterogeneity, c) some representative power management schemes, and d) a presentation of some characteristic protocols for data propagation. Crucial efficiency properties of these schemes and protocols (and their combinations, in some cases) are investigated by both rigorous analysis and performance evaluations through large-scale simulations.
Abstract: In this work, we overview some results concerning communication-related combinatorial properties of random intersection graphs and uniform random intersection graphs. These properties relate crucially to algorithmic design for important problems (like secure communication and frequency assignment) in distributed networks characterized by dense, local interactions and resource limitations, such as sensor networks. In particular, we present and discuss results concerning the existence of large independent sets of vertices whp in random instances of each of these models. As the main contribution of our paper, we introduce a new, general model, which we denote G(V, χ, f). In this model, V is a set of vertices and χ is a set of m vectors in ℝ^m. Furthermore, f is a probability distribution over the powerset 2^χ of subsets of χ. Every vertex selects a random subset of vectors according to the distribution f, and two vertices are connected according to a general intersection rule depending on their assigned sets of vectors. This new general model seems to be able to simulate other known random graph models, by carefully choosing its intersection rule.
Abstract: We present and evaluate a compact, all-optical Clock and Data
Recovery (CDR) circuit based on integrated Mach Zehnder interferometric
switches. Successful operation for short packet-mode traffic of variable
length and phase alignment is demonstrated. The acquired clock signal rises
within 2 bits and decays within 15 bits, irrespective of packet length and phase. Error-free operation is demonstrated at 10 Gb/s.
Abstract: We design and implement various algorithms for
solving the static RWA problem with the objective of minimizing
the maximum number of requested wavelengths based on LP
relaxation formulations. We present a link formulation, a path
formulation and a heuristic that breaks the problem in the two
constituent subproblems and solves them individually and
sequentially. The flow cost functions that are used in these
formulations result in providing integer optimal solutions despite
the absence of integrality constraints for a large subset of RWA
input instances, while also minimizing the total number of used
wavelengths. We present a random perturbation technique that is
shown to increase the number of instances for which we find
integer solutions, and we also present appropriate iterative fixing
and rounding methods to be used when the algorithms do not yield
integer solutions. We comment on the number of variables and
constraints these formulations require and perform extensive
simulations to compare their performance to that of a typical min-max
congestion formulation.
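The iterative fixing step mentioned above can be sketched as follows; `solve_lp` is a hypothetical hook that solves the LP relaxation with the given variables pinned, and the tolerance is an assumption.

    def iterative_fixing(solve_lp, variables, tol=1e-6):
        # Repeatedly solve the LP relaxation, permanently fix every
        # variable that already takes an (almost) integer value, and
        # re-solve with those variables pinned; when no new variable
        # becomes integral, the remaining fractional ones are handed
        # to a rounding step.
        fixed = {}
        while True:
            solution = solve_lp(fixed)          # assumed LP-solver hook
            newly = {v: round(solution[v]) for v in variables
                     if v not in fixed
                     and abs(solution[v] - round(solution[v])) < tol}
            if not newly:
                return fixed, solution          # round the rest externally
            fixed.update(newly)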
Abstract: We address an important communication issue arising in wireless cellular networks that utilize frequency division multiplexing (FDM) technology. In such networks, many users within the same geographical region (cell) can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of the available frequencies is limited; thus, efficient solutions to the call control problem are essential. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the benefit, i.e., the number of users that communicate without signal interference. We consider cellular networks of reuse distance k ≥ 2 and we study the online version of the problem using competitive analysis. In cellular networks of reuse distance 2, the previously best known algorithm that beats the lower bound of 3 on the competitiveness of deterministic algorithms works on networks with one frequency, achieves a competitive ratio against oblivious adversaries which is between 2.469 and 2.651, and uses a number of random bits at least proportional to the size of the network. We significantly improve this result by presenting a series of simple randomized algorithms that have competitive ratios significantly smaller than 3, work on networks with arbitrarily many frequencies, and use only a constant number of random bits or a comparable weak random source. The best competitiveness upper bound we obtain is 16/7, using only four random bits. In cellular networks of reuse distance k > 2, we present simple randomized online call control algorithms with competitive ratios which significantly beat the lower bounds on the competitiveness of deterministic ones and use only O(log k) random bits. Also, we show new lower bounds on the competitiveness of online call control algorithms in cellular networks of any reuse distance. In particular, we show that no online algorithm can achieve competitive ratio better than 2, 25/12, and 2.5, in cellular networks with reuse distance k ∈ {2, 3, 4}, k = 5, and k ≥ 6, respectively.
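One hedged way to read "only four random bits": draw the bits once, select one member of a small family of deterministic online algorithms, and run it on the whole request sequence. The sketch below shows this generic scheme only; the paper's concrete 16/7-competitive algorithm is more refined.

    import random

    def barely_random_online(strategies, requests, num_bits=4):
        # Use num_bits random bits once to pick one deterministic online
        # algorithm from a small family, then serve the entire sequence
        # with that single choice.
        pick = strategies[random.getrandbits(num_bits) % len(strategies)]
        accepted = []
        for req in requests:
            if pick(req, accepted):   # deterministic accept/reject rule
                accepted.append(req)
        return accepted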
Abstract: All-optical gate control signal generation is demonstrated
from flag pulses, using a Fabry–Pérot filter followed by
a semiconductor optical amplifier. Ten control pulses are generated
from a single flag pulse having less than 0.45-dB amplitude
modulation. By doubling or tripling the number of flag pulses, the
number of control pulses increases approximately by a factor of
two or three. The circuit can control the switching state of all-optical
switches, on a packet-by-packet basis, and can be used for
nontrivial network functionalities such as self-routing.
Abstract: In this work, we expanded the Arduino's
capabilities by adding an 802.15.4 wireless module, in order to
expose its functionality as a Web of Things node. The second
contribution of our work is a careful description of the necessary
steps to make a heterogeneous network interoperate and the
implementation of a network stack for the 4 most representative
hardware platforms, as used by the relevant research community
(Arduino, SunSPOT, TelosB, iSense), in the form of a software
library, named mkSense, which enables their
intercommunication. Moreover, we describe the design and
implementation of a software library which can be used for
building “intelligent software” for the Web of Things.
Abstract: Ever-increasing bandwidth demands and higher flexibility are the main challenges for next-generation optical core networks. A new trend for addressing these challenges is to consider the impairments of the lightpaths during the design of optical networks. In our work, we focus on translucent optical networks, where some lightpaths are routed transparently, whereas others go through a number of regenerators. We present a cost analysis of design strategies which are based either on an exact Quality of Transmission (QoT) validation or on a relaxed one, and attempt to reduce the number of regenerators used. In the exact design strategy, regenerators are required if the QoT of a candidate lightpath is below a predefined threshold, assuming empty network conditions. In the relaxed strategy, this predefined threshold is lower, while it is assumed that the network is fully loaded. We evaluate the suggested design solutions techno-economically and also show that adding more flexibility to the optical nodes has a large impact on the total infrastructure cost.
Abstract: We propose a simple and intuitive cost mechanism which assigns costs for the competitive usage of m resources by n selfish agents. Each agent has an individual demand; demands are drawn according to some probability distribution. The cost paid by an agent for a resource it chooses is the total demand put on the resource divided by the number of agents who chose that same resource. So, resources charge costs in an equitable, fair way, while each resource makes no profit out of the agents.
We call our model the Fair Pricing model. Its fair cost mechanism induces a non-cooperative game among the agents. To evaluate the Nash equilibria of this game, we introduce the Diffuse Price of Anarchy, as an extension of the Price of Anarchy that takes into account the probability distribution on the demands. We prove:
- Pure Nash equilibria may not exist, unless all chosen demands are identical.
- A fully mixed Nash equilibrium exists for all possible choices of the demands. Furthermore, the fully mixed Nash equilibrium is the unique Nash equilibrium in case there are only two agents.
- In the worst-case choice of demands, the Price of Anarchy is Θ(n); for the special case of two agents, the Price of Anarchy is less than 2 − 1/m.
- Assume now that demands are drawn from a bounded, independent probability distribution, where all demands are identically distributed and each demand may not exceed some (universal for the class) constant times its expectation. It happens that this constant is just 2 when each demand is distributed symmetrically around its expectation. We prove that, for asymptotically large games where the number of agents tends to infinity, the Diffuse Price of Anarchy is at most that universal constant. This implies the first separation between the Price of Anarchy and the Diffuse Price of Anarchy.
Towards the end, we consider two closely related cost sharing models, namely the Average Cost Pricing and the Serial Cost Sharing models, inspired by Economic Theory. In contrast to the Fair Pricing model, we prove that pure Nash equilibria do exist for both these models.
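The cost mechanism itself is easy to state in code; the following sketch computes each agent's payment under the Fair Pricing rule (total demand on the chosen resource divided by the number of agents choosing it).

    from collections import defaultdict

    def fair_costs(choices, demands):
        # choices[i]: resource chosen by agent i; demands[i]: its demand.
        # Each agent on resource j pays load(j) / count(j), so resources
        # charge equitably and make no profit.
        load, count = defaultdict(float), defaultdict(int)
        for agent, res in enumerate(choices):
            load[res] += demands[agent]
            count[res] += 1
        return [load[res] / count[res] for res in choices]

    # Two agents share resource 0 (demands 4 and 2), one is alone on 1:
    print(fair_costs([0, 0, 1], [4.0, 2.0, 5.0]))   # -> [3.0, 3.0, 5.0]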
Abstract: Counting in general, and estimating the cardinality of (multi-)sets in particular, is highly desirable for a large variety of applications, representing a foundational block for the efficient deployment and access of emerging internet-scale information systems. Examples of such applications range from optimizing query access plans in internet-scale databases, to evaluating the significance (rank/score) of various data items in information retrieval applications. The key constraints that any acceptable solution must satisfy are: (i) efficiency: the number of nodes that need to be contacted for counting purposes must be small in order to enjoy small latency and bandwidth requirements; (ii) scalability, seemingly contradicting the efficiency goal: arbitrarily large numbers of nodes may need to add elements to a (multi-)set, which dictates the need for a highly distributed solution, avoiding server-based scalability, bottleneck, and availability problems; (iii) access and storage load balancing: counting and related overhead chores should be distributed fairly to the nodes of the network; (iv) accuracy: tunable, robust (in the presence of dynamics and failures) and highly accurate cardinality estimation; (v) simplicity and ease of integration: special, solution-specific indexing structures should be avoided. In this paper, first we contribute a highly distributed, scalable, efficient, and accurate (multi-)set cardinality estimator. Subsequently, we show how to use our solution to build and maintain histograms, which have been a basic building block for query optimization in centralized databases, facilitating their porting into the realm of internet-scale data networks.
Abstract: DAP (Distributed Algorithms Platform) is a generic and homogeneous simulation environment aiming at the implementation, simulation, and testing of distributed algorithms for wired and wireless networks. In this work, we present its architecture, the most important design decisions, and discuss its distinct features and functionalities. DAP allows the algorithm designer to implement a distributed protocol by creating his own customized environment, and programming in a standard programming language in a style very similar to that of a real-world application. DAP provides a graphical user interface that allows the designer to monitor and control the execution of simulations, visualize algorithms, as well as gather statistics and other information for their experimental analysis and testing.
Abstract: The Greek School Network (GSN) is the nationwide network that connects all units of primary and secondary education in Greece. GSN offers a significant set of diverse services to more than 15.000 schools and administrative units, and more than 60.000 teachers, placing GSN second in infrastructure size nationwide. GSN has relied on the emerging power of open source software to build cutting-edge services capable of covering internal administrative and monitoring needs, end user demands, and, foremost, modern pedagogical requirements for tools and services. GSN provides a wide set of advanced services, varying from web mail to virtual classrooms and synchronous/asynchronous tele-education. This paper presents an evaluation of GSN open source services based on the opinions of users who use GSN for educational purposes, and on usage and traffic measurement statistics. The paper reaches the conclusion that open source software provides a sound technological platform that meets the needs for cutting edge educational services deployment, and innovative, competitive software production for educational networks.
Abstract: In this work we introduce two practical and interesting models of ad-hoc mobile networks: (a) hierarchical ad-hoc networks, comprised of dense subnetworks of mobile users interconnected by a very fast yet limited backbone infrastructure, and (b) highly changing ad-hoc networks, where the deployment area changes in a highly dynamic way and is unknown to the protocol. In such networks, we study the problem of basic communication, i.e., sending messages from a sender node to a receiver node. For highly changing networks, we investigate an efficient communication protocol exploiting the coordinated motion of a small part of an ad-hoc mobile network (the ``runners support'') to achieve fast communication. This protocol, instead of using a fixed-size support for the whole duration of the protocol, employs a support of some initial (small) size which adapts (given some time which can be made fast enough) to the actual levels of traffic and the (unknown and possibly rapidly changing) network area, by changing its size in order to converge to an optimal size, thus satisfying certain Quality of Service criteria. Using random walks theory, we show that such an adaptive approach is, for this class of ad-hoc mobile networks, significantly more efficient than a simple non-adaptive implementation of the basic ``runners support'' idea, introduced in [9,10]. For hierarchical ad-hoc networks, we establish communication by using a ``runners'' support in each lower level of the hierarchy (i.e., in each dense subnetwork), while the fast backbone provides interconnections at the upper level (i.e., between the various subnetworks). We analyze the time efficiency of this hierarchical approach. This analysis indicates that the hierarchical implementation of the support approach significantly outperforms a simple implementation of it in hierarchical ad-hoc networks. Finally, we discuss a possible combination of the two approaches (the hierarchical and the adaptive one) that can be useful in ad-hoc networks that are both hierarchical and highly changing. Indeed, in such cases the hierarchical nature of these networks further supports the possibility of adaptation.
Abstract: In this work we investigate the problem of communication among mobile hosts, one of the most fundamental problems in ad-hoc mobile networks that is at the core of many algorithms. Our work investigates the extreme case of total absence of any fixed network backbone or centralized administration, instantly forming networks based only on mobile hosts with wireless communication capabilities, where topological connectivity is subject to frequent, unpredictable change.
For such dynamically changing networks we propose a set of protocols which exploit the coordinated (by the protocol) motion of a small part of the network in order to manage network operations. We show that such protocols can be designed to work correctly and efficiently for communication by avoiding message flooding. Our protocols manage to establish communication between any pair of mobile hosts within small, a priori guaranteed expected time bounds. Our results exploit and further develop some fundamental properties of random walks in finite graphs.
Apart from studying the general case, we identify two practical and interesting cases of ad-hoc mobile networks: a) hierarchical ad-hoc networks, b) highly changing ad-hoc networks, for which we propose protocols that efficiently deal with the problem of basic communication.
We have conducted a set of extensive experiments, comprising thousands of mobile hosts, in order to validate the theoretical results and show that our protocols achieve very efficient communication under different scenarios.
Abstract: Evaluating target tracking protocols for wireless sensor networks that can localize multiple mobile devices can be a very challenging task. Such protocols usually aim at minimizing communication overhead and data processing for the participating nodes, as well as delivering adequate tracking information of the mobile targets in a timely manner. Simulations of such protocols are performed using theoretical models that are based on unrealistic assumptions like the unit disk graph communication model, ideal network localization and perfect distance estimations. With these assumptions taken for granted, theoretical models claim various performance milestones that cannot be achieved in realistic conditions. In this paper we design a new localization protocol, where mobile assets can be tracked passively via software agents. We address the issues that hinder its performance under real environment conditions and provide a deployable protocol. The implementation, integration and experimentation of this new protocol and its optimizations were performed using the WISEBED framework. We apply our protocol in multiple indoor wireless sensor testbeds with multiple experimental scenarios to showcase scalability and trade-offs between network properties and configurable protocol parameters. By analyzing the real-world experimental output, we present results that depict a more realistic view of the target tracking problem, regarding power consumption and the quality of tracking information. Finally, we also conduct some very focused simulations to assess the scalability of our protocol in very large networks with multiple mobile assets.
Abstract: We have designed and implemented a platform that enables monitoring and actuation in multiple buildings, which has been utilised in the context of a research project in Greece focusing on public school buildings. The Green Mindset project has installed IoT devices in 12 Greek public schools to monitor energy consumption, along with indoor and outdoor environmental parameters. We present the architecture and actual deployment of our system, along with a first set of findings.
Abstract: In this paper, we present a Programmable Packet Processing Engine suitable for deep header processing in high-speed networking systems. The engine, which has been fabricated as part of a complete network processor, consists of a typical RISC CPU, whose register file has been modified in order to support efficient context switching, and two simple special-purpose processing units. The engine can be used in a number of network processing units (NPUs), as an alternative to the typical design practice of employing a large number of simple general-purpose processors, or in any other embedded system designed to process mainly network protocols. To assess the performance of the engine, we have profiled typical networking applications and a series of experiments were carried out. Further, we have compared the performance of our processing engine to that of two widely used NPUs and show that our proposed packet-processing engine can run specific applications up to three times faster. Moreover, the engine is simpler to fabricate and less complex in terms of hardware, while it can still be very easily programmed.
Abstract: Pervasive games are a new type of digital games that combine game and physical reality within the gameplay. This novel game type raises unprecedented research and design challenges for developers and urges the exploration of new technologies and methods to create high quality game experiences and design novel and compelling forms of content for the players. This chapter follows a systematic approach to explore the landscape of pervasive gaming. First, the authors approach pervasive games from a theoretical point of view, defining the four axes of pervasive games design, introducing the concept of game world persistency, and describing aspects of spatially/temporally/socially expanded games. Then, they present ten pervasive game projects, classified in five genres based on their playing environment and features. Following that, the authors present a comparative view of those projects with respect to several design aspects: communication and localization, context and personal awareness aspects, information model, player equipment, and game space visualization. Last, the authors highlight current trends, design principles, and future directions for pervasive games development.
Abstract: In recent years we have seen an increased popularity of game development for Smartphones, which have provided an increasingly ubiquitous platform for designing games. In this paper we investigate the use of a modern Smartphone's capabilities in game development by implementing and evaluating a classic game on the iPhone platform. We identify the limitations and possibilities that this field offers to the different aspects of game design.
Abstract: Wireless Sensor Networks consist of a large number of small, autonomous devices that are able to interact with their environment by sensing, and that collaborate to fulfill their tasks since, usually, a single node is incapable of doing so; they use wireless communication to enable this collaboration. Each device has limited computational and energy resources; thus a basic issue in applications of wireless sensor networks is low energy consumption and, hence, the maximization of the network lifetime.
The collected data are disseminated to a static control point (data sink) in the network using node-to-node, multi-hop data propagation. However, this consumes significant amounts of energy and adds implementation complexity, since a routing protocol must be executed. Also, a point of failure emerges in the area near the control center, where nodes relay the data from nodes that are farther away. Recently, a new approach has been developed that shifts the burden from the sensor nodes to the sink. The main idea is that the sink has significant and easily replenishable energy reserves and can move inside the area where the sensor network is deployed, in order to acquire the data collected by the sensor nodes at very low energy cost. However, the need to visit all the regions of the network may result in large delivery delays.
In this work we have developed protocols that control the movement of the sink in wireless sensor networks with non-uniform deployment of the sensor nodes, in order to achieve efficient (with respect to both energy and latency) data collection. More specifically, a graph formation phase is executed by the sink during initialization: the network area is partitioned into equal square regions, in which the sink pauses for a certain amount of time during the network traversal in order to collect data.
We propose two network traversal methods, a deterministic and a random one. When the sink moves in a random manner, the selection of the next area to visit is done in a biased random manner depending on the frequency of visits of its neighbor areas; thus, less frequently visited areas are favored. Moreover, our method locally determines the stop time needed to serve each region with respect to some global network resources, such as the initial energy reserves of the nodes and the density of the region, stopping for a greater time interval at regions with higher density, and hence more traffic load. In this way, we achieve accelerated coverage of the network as well as fairness in the service time of each region. Besides randomized mobility, we also propose an optimized deterministic trajectory without visit overlaps, including direct (one-hop) sensor-to-sink data transmissions only.
We evaluate our methods via simulation in diverse network settings, comparing them to related state-of-the-art solutions. Our findings demonstrate significant latency and energy consumption improvements compared to previous research.
Abstract: Wireless sensor networks are a recently introduced category of ad hoc computer networks, comprised of nodes of small size and limited computing and energy resources. Such nodes are capable of measuring physical properties such as temperature and humidity, of communicating wirelessly with each other, and in some cases of interacting with their surrounding environment (through the use of electromechanical parts).
As these networks have begun to be widely available (in terms of cost and commercial hardware availability), their field of application and philosophy of use is constantly evolving. We have numerous examples of their applications, ranging from monitoring the biodiversity of a specific outdoor area to structural health monitoring of bridges, and also networks ranging from a few tens of nodes to even thousands of nodes.
In this PhD thesis we investigated the following basic research lines related to wireless sensor networks:
a) their simulation,
b) the development of data propagation protocols suited to such networks and their evaluation through simulation,
c) the modelling of ``hostile'' circumstances (obstacles) during their operation and evaluation of their impact through simulation,
d) the development of a sensor network management application.
Regarding simulation, we initially placed an emphasis on issues such as the effective simulation of networks of several thousands of nodes, and in that respect we developed a network simulator (simDust), which is extendable through the addition of new data propagation protocols and visualization capabilities. This simulator was used to evaluate the performance of a number of characteristic data propagation protocols for wireless sensor networks. Furthermore, we developed a new protocol (VRTP) and evaluated its performance against other similar protocols. Our studies show that the new protocol, which uses dynamic changes of the transmission range of the network nodes, performs better in certain cases than other related protocols, especially in networks containing obstacles and in the case of non-homogeneous placement of nodes.
Moreover, we emphasized the addition of ``realistic'' conditions, which have an adversarial effect on protocol operation, to the simulation of such protocols. Our goal was to introduce a model for obstacles that adds little computational overhead to a simulator, and also to study the effect of the inclusion of such a model on data propagation protocols that use geographic information (absolute or relative). Such protocols are relatively sensitive to dynamic topology changes and network conditions. Through our experiments, we show that the inclusion of obstacles during simulation can have a significant effect on these protocols.
Finally, regarding applications, we initially proposed an architecture (WebDust/ShareSense) for the management of such networks, providing basic capabilities for managing them and developing applications on top of it. Features that set it apart are the capability of managing multiple heterogeneous sensor networks, openness, and the use of a peer-to-peer architecture for the interconnection of multiple sensor networks. A large part of the proposed architecture was implemented, while the overall architecture was extended to also include additional visualization capabilities.
Abstract: Wireless sensor networks are comprised of a vast number of devices, situated in an area of interest, that self-organize in a structureless network in order to monitor/record/measure an environmental variable or phenomenon and subsequently to disseminate the data to the control center.
Here we present research focused on the development, simulation and evaluation of energy-efficient algorithms; our basic goal is to minimize energy consumption. Despite technology advances, the problem of energy use optimization remains valid, since current and emerging hardware solutions fail to solve it.
We aim to reduce communication cost by introducing novel techniques that facilitate the development of new algorithms. We investigated techniques of distributed adaptation of the operation of a protocol using information available locally on every node; thus, through local choices, we improve overall performance. We propose techniques for collecting and exploiting limited local knowledge of the network conditions. In an energy-efficient manner, we collect additional information which is used to achieve improvements such as forming energy-efficient, low-latency and fault-tolerant paths to route data. We investigate techniques for managing mobility in networks where movement is a characteristic of the control center as well as the sensors. We examine methods for traversing and covering the network field based on probabilistic movement that uses local criteria to favor certain areas.
The algorithms we develop based on these techniques operate a) at a low level, managing devices, b) at the routing layer and c) network-wide, achieving macroscopic behavior through local interactions. The algorithms are applied in network cases that differ in density, node distribution and available energy, and also in fundamentally different models, such as under faults, with incremental node deployment, and with mobile nodes. In all these settings our techniques achieve significant gains, thus distinguishing their value as tools of algorithmic design.
Abstract: The domain of smart cities is currently burgeoning, with a lot of potential for scientific and socio-economic innovation gradually being revealed. It is also becoming apparent that cross-discipline research will be instrumental in designing and building smarter cities, where IoT technology is becoming omnipresent. SmartSantander is an FP7 project that built a massive city-scale IoT testbed, aiming to provide both a tool for the research community and a functional system for the local government to implement operational city services. In this work, we present key smart cities projects, main application domains and representative smart city frameworks that reflect the latest advances in the smart cities domain and our own experience through SmartSantander. The project has deployed 51,910 IoT endpoints, offering a massive infrastructure to the community, as well as functional system services and a number of end-user applications. Based on these aspects, we identify and discuss a number of key scientific and technological challenges. We also present an overview of the developed system components and applications, and discuss the ways that current smart city challenges were handled in the project.
Abstract: Peer-to-Peer (P2P) search requires intelligent decisions for query routing: selecting the best peers to which a given query, initiated at some peer, should be forwarded for retrieving additional search results. These decisions are based on statistical summaries for each peer, which are usually organized on a per-keyword basis and managed in a distributed directory of routing indices. Such architectures disregard possible correlations among keywords. Together with the coarse granularity of per-peer summaries, which is mandated for scalability, this limitation may lead to poor search result quality.
This paper develops and evaluates two solutions to this problem, sk-STAT based on single-key statistics only, and mk-STAT based on additional multi-key statistics. For both cases, hash sketch synopses are used to compactly represent a peer's data items and are efficiently disseminated in the P2P network to form a decentralized directory. Experimental studies with Gnutella and Web data demonstrate the viability and the trade-offs of the approaches.
Abstract: Wireless sensor networks are comprised of a vast number of ultra-small, fully autonomous computing, communication and sensing devices, with very restricted energy and computing capabilities, which co-operate to accomplish a large sensing task. Such networks can be very useful in practice in applications that require fine-grain monitoring of physical environments subjected to critical conditions (such as inaccessible terrains or disaster places). Very large numbers of sensor devices can be deployed in areas of interest and use self-organization and collaborative methods to form deeply networked environments. Features including the huge number of sensor devices involved, the severe power, computational and memory limitations, their dense deployment and frequent failures pose new design and implementation challenges. The efficient and robust realization of such large, highly-dynamic, complex, non-conventional environments is a challenging algorithmic and technological task. In this work we consider certain important aspects of the design, deployment and operation of distributed algorithms for data propagation in wireless sensor networks and discuss some characteristic protocols, along with an evaluation of their performance.
Abstract: An ad hoc mobile network is a collection of mobile hosts, with wireless communication capabilities, forming a temporary network without the aid of any established fixed infrastructure. In such networks, topological connectivity is subject to frequent, unpredictable change. Our work focuses on networks with a high rate of such connectivity changes. For such dynamically changing networks we propose protocols which exploit the co-ordinated (by the protocol) motion of a small part of the network. We show that such protocols can be designed to work correctly and efficiently even in the case of arbitrary (but not malicious) movements of the hosts not affected by the protocol. We also propose a methodology for the analysis of the expected behavior of protocols for such networks, based on the assumption that mobile hosts (those whose motion is not guided by the protocol) conduct concurrent random walks in their motion space. In particular, our work examines the fundamental problem of communication and proposes distributed algorithms for it. We provide rigorous proofs of their correctness, and also give performance analyses by combinatorial tools. Finally, we have evaluated these protocols by experimental means.
Abstract: Counting items in a distributed system, and estimating the cardinality of multisets in particular, is important for a large variety of applications and a fundamental building block for emerging Internet-scale information systems. Examples of such applications range from optimizing query access plans in peer-to-peer data sharing, to computing the significance (rank/score) of data items in distributed information retrieval. The general formal problem addressed in this article is computing the network-wide distinct number of items with some property (e.g., distinct files with file name containing “spiderman”) where each node in the network holds an arbitrary subset, possibly overlapping the subsets of other nodes. The key requirements that a viable approach must satisfy are: (1) scalability towards very large network size, (2) efficiency regarding messaging overhead, (3) load balance of storage and access, (4) accuracy of the cardinality estimation, and (5) simplicity and easy integration in applications. This article contributes the DHS (Distributed Hash Sketches) method for this problem setting: a distributed, scalable, efficient, and accurate multiset cardinality estimator. DHS is based on hash sketches for probabilistic counting, but distributes the bits of each counter across network nodes in a judicious manner based on principles of Distributed Hash Tables, paying careful attention to fast access and aggregation as well as update costs. The article discusses various design choices, exhibiting tunable trade-offs between estimation accuracy, hop-count efficiency, and load distribution fairness. We further contribute a full-fledged, publicly available, open-source implementation of all our methods, and a comprehensive experimental evaluation for various settings.
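The probabilistic-counting primitive behind DHS can be illustrated in a few lines. Below is a minimal, centralized Python sketch of Flajolet-Martin hash sketches with stochastic averaging; the constants and layout are the textbook ones, not the DHS design, whose contribution is precisely how the bits of each bitmap are spread over DHT nodes. The bitwise-OR merge is the duplicate-insensitivity property that makes such distribution possible.

import hashlib

def rho(x: int) -> int:
    """0-based position of the least-significant 1-bit of x."""
    return (x & -x).bit_length() - 1

class FMSketch:
    """Flajolet-Martin hash sketch (PCSA flavour) for distinct counting."""
    PHI = 0.77351  # standard Flajolet-Martin correction constant

    def __init__(self, num_bitmaps: int = 64):
        self.bitmaps = [0] * num_bitmaps

    def add(self, item: str) -> None:
        d = hashlib.sha1(item.encode()).digest()
        bucket = int.from_bytes(d[:4], "big") % len(self.bitmaps)
        w = int.from_bytes(d[4:12], "big")
        if w:  # w == 0 is astronomically unlikely
            self.bitmaps[bucket] |= 1 << rho(w)

    def merge(self, other: "FMSketch") -> None:
        # Bitwise OR is duplicate-insensitive, so partial sketches can be
        # combined across network nodes in any order.
        self.bitmaps = [a | b for a, b in zip(self.bitmaps, other.bitmaps)]

    def estimate(self) -> float:
        m = len(self.bitmaps)
        mean_r = sum(self._lowest_zero(b) for b in self.bitmaps) / m
        return m * (2 ** mean_r) / self.PHI

    @staticmethod
    def _lowest_zero(bitmap: int) -> int:
        r = 0
        while bitmap & (1 << r):
            r += 1
        return r

sketch = FMSketch()
for k in range(100_000):
    sketch.add(f"item-{k}")
print(round(sketch.estimate()))  # close to 100000 (roughly 10% typical error)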
Abstract: In this paper we describe a new simulation platform for heterogeneous distributed systems comprised of small programmable objects (e.g., wireless sensor networks) and traditional networked processors. Simulating such systems is complicated because of the need to coordinate compilers and simulators, often with very different interfaces, options, and fidelities.
Our platform (which we call ADAPT) is a flexible and extensible environment that provides a highly scalable simulator with unique characteristics. While the platform provides advanced functionality such as real-time simulation monitoring, custom topologies and scenarios, mixing real and simulated nodes, etc., the effort required by the user and the impact on her code is minimal. We present here its architecture and the most important design decisions, and discuss its distinct features and functionalities. We integrate our simulator with the Sun SPOT platform to enable simulation of sensing applications that employ both low-end and high-end devices programmed with different languages and internetworked with heterogeneous technologies. We believe that ADAPT will make the development of applications that use small programmable objects more widely accessible and will enable researchers to conduct a joint research approach that combines both theory and practice.
Abstract: Service Oriented Computing and its most famous implementation technology, Web Services (WS), are becoming an important enabler of networked business models. Discovery mechanisms are a critical factor to the overall utility of Web Services. So far, discovery mechanisms based on the UDDI standard rely on many centralized and area-specific directories, which poses problems such as performance bottlenecks and limited fault tolerance. In this context, decentralized approaches based on Peer-to-Peer overlay networks have been proposed by many researchers as a solution. In this paper, we propose a new structured P2P overlay network infrastructure designed for Web Services Discovery. We present theoretical analysis backed up by experimental results, showing that the proposed solution outperforms popular decentralized infrastructures for web discovery: Chord (and some of its successors), BATON (and its successor) and Skip-Graphs.
Abstract: This paper deals with early obstacle recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast convergent routing algorithm with proactive dead-end detection, together with a formal definition and description of dead-ends. Secondly, we present a generalization of this algorithm which improves performance in all-to-many and all-to-all traffic patterns. In a third part we prove that this algorithm produces paths that are optimal up to a constant factor of 2π+1. In a fourth part we consider the reactive version of the algorithm, which is an extension of a previously known early obstacle detection algorithm. Finally, we give experimental results to illustrate the efficiency of our algorithms in different scenarios.
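To make the dead-end notion concrete, here is a small illustrative Python sketch (our own formulation, not the paper's algorithm): for a fixed sink it marks, by fixpoint iteration, every node from which greedy geographic forwarding must eventually get stuck, and then routes greedily while refusing to enter the marked region. If the source is unmarked, the route always reaches the sink, since the distance to the sink strictly decreases at each step.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mark_dead_ends(nodes, nbrs, sink):
    """Fixpoint marking: a node is a dead-end for `sink` if every neighbor
    strictly closer to the sink is itself a dead-end (in particular, if it
    has no closer neighbor at all)."""
    dead, changed = set(), True
    while changed:
        changed = False
        for u in nodes:
            if u == sink or u in dead:
                continue
            closer = [v for v in nbrs[u] if dist(v, sink) < dist(u, sink)]
            if all(v in dead for v in closer):  # vacuously true at local minima
                dead.add(u)
                changed = True
    return dead

def greedy_route(src, sink, nbrs, dead):
    """Greedy forwarding that avoids marked dead-ends; if src is not marked,
    this always reaches the sink."""
    path, u = [src], src
    while u != sink:
        cand = [v for v in nbrs[u]
                if v not in dead and dist(v, sink) < dist(u, sink)]
        if not cand:
            return None  # src itself was a dead-end
        u = min(cand, key=lambda v: dist(v, sink))
        path.append(u)
    return path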
Abstract: In this paper we present the efficient burst reservation protocol (EBRP), suitable for bufferless optical burst switching (OBS) networks. The EBRP protocol is a two-way reservation scheme that employs timed and in-advance reservation of resources. In the EBRP protocol timed reservations are relaxed, introducing a reservation time duration parameter that is negotiated during the call setup phase. This feature allows bursts to reserve resources beyond their actual size to increase their successful forwarding probability and can be used to provide quality-of-service (QoS) differentiation. The EBRP protocol is suitable for OBS networks and can guarantee a low blocking probability for bursts that can tolerate the round-trip delay associated with the two-way reservation. We present the main features of the proposed protocol and describe in detail the timing considerations regarding the call setup phase and the actual reservation process. Furthermore, we show evaluation results and compare the EBRP performance against two other typical reservation schemes, a tell-and-wait and a tell-and-go (just-enough-time) like protocol. EBRP has been developed for the control plane of the IST-LASAGNE project.
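The heart of the scheme, a timed reservation that may be held longer than the burst itself, can be sketched as an interval-booking check on a single link. Everything below (class and parameter names, time units) is illustrative; slack stands in for the negotiated reservation-duration parameter.

class LinkSchedule:
    """Per-link timetable of in-advance reservations (illustrative sketch)."""

    def __init__(self):
        self.booked = []  # list of (start, end) intervals

    def try_reserve(self, arrival, burst_len, slack):
        """Attempt a timed reservation for a burst arriving at `arrival`.
        EBRP-style relaxation: hold the resource for burst_len + slack,
        where slack models the reservation-duration parameter negotiated
        at call setup; larger slack raises this burst's own success odds
        at the cost of admitting fewer competing bursts."""
        start, end = arrival, arrival + burst_len + slack
        if any(start < e and s < end for s, e in self.booked):
            return False  # overlaps an existing booking
        self.booked.append((start, end))
        return True

link = LinkSchedule()
print(link.try_reserve(10.0, 2.0, slack=1.0))  # True: holds [10.0, 13.0)
print(link.try_reserve(12.5, 2.0, slack=0.0))  # False: collides with the slack
print(link.try_reserve(13.0, 2.0, slack=0.0))  # True: starts after the hold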
Abstract: We propose a new data dissemination protocol for wireless sensor networks that basically pulls some additional knowledge about the network in order to subsequently improve data forwarding towards the sink. This extra information is still local, limited and obtained in a distributed manner. This extra knowledge is acquired by only a small fraction of sensors, thus the extra energy cost only marginally affects the overall protocol efficiency. The new protocol has low latency and manages to propagate data successfully even in the case of low densities. Furthermore, we study in detail the effect of failures and show that our protocol is very robust. In particular, we implement and evaluate the protocol using large scale simulation, showing that it significantly outperforms well known relevant solutions in the state of the art.
Abstract: Andrews et al. [Automatic method for hiding latency in high bandwidth networks, in: Proceedings of the ACM Symposium on Theory of Computing, 1996, pp. 257–265; Improved methods for hiding latency in high bandwidth networks, in: Proceedings of the Eighth Annual ACM Symposium on Parallel Algorithms and Architectures, 1996, pp. 52–61] introduced a number of techniques for automatically hiding latency when performing simulations of networks with unit delay links on networks with arbitrary unequal delay links. In their work, they assume that processors of the host network are identical in computational power to those of the guest network being simulated. They further assume that the links of the host are able to pipeline messages, i.e., they are able to deliver P packets in time O(P + d), where d is the delay on the link.
In this paper we examine the effect of eliminating one or both of these assumptions. In particular, we provide an efficient simulation of a linear array of homogeneous processors connected by unit-delay links on a linear array of heterogeneous processors connected by links with arbitrary delay. We show that the slowdown achieved by our simulation is optimal. We then consider the case of simulating cliques by cliques; i.e., a clique of heterogeneous processors with arbitrary delay links is used to simulate a clique of homogeneous processors with unit delay links. We reduce the slowdown from the obvious bound of the maximum delay link to the average of the link delays. In the case of the linear array we consider both links with and without pipelining. For the clique simulation the links are not assumed to support pipelining.
The main motivation of our results (as was the case with Andrews et al.) is to mitigate the degradation of performance when executing parallel programs designed for different architectures on a network of workstations (NOW). In such a setting it is unlikely that the links provided by the NOW will support pipelining, and it is quite probable that the processors will be heterogeneous. Combining our result on clique simulation with well-known techniques for simulating shared memory PRAMs on distributed memory machines provides an effective automatic compilation of a PRAM algorithm on a NOW.
Abstract: We present three new coordination mechanisms for scheduling n selfish jobs on m unrelated machines. A coordination mechanism aims to mitigate the impact of selfishness of jobs on the efficiency of schedules by defining a local scheduling policy on each machine. The scheduling policies induce a game among the jobs, and each job prefers to be scheduled on a machine so that its completion time is minimum given the assignments of the other jobs. We consider the maximum completion time among all jobs as the measure of the efficiency of schedules. The approximation ratio of a coordination mechanism quantifies the efficiency of pure Nash equilibria (price of anarchy) of the induced game. Our mechanisms are deterministic, local, and preemptive in the sense that the scheduling policy does not necessarily process the jobs in an uninterrupted way and may introduce some idle time. Our first coordination mechanism has approximation ratio O(log m) and always guarantees that the induced game has pure Nash equilibria to which the system converges in at most n rounds. This result improves a recent bound of O(log^2 m) due to Azar, Jain, and Mirrokni and, similarly to their mechanism, our mechanism uses a global ordering of the jobs according to their distinct IDs. Next we study the intriguing scenario where jobs are anonymous, i.e., they have no IDs. In this case, coordination mechanisms can only distinguish between jobs that have different load characteristics. Our second mechanism handles anonymous jobs and has approximation ratio O(log m / log log m), although the game induced is not a potential game and, hence, the existence of pure Nash equilibria is not guaranteed by potential function arguments. However, it provides evidence that the known lower bounds for non-preemptive coordination mechanisms could be beaten using preemptive scheduling policies. Our third coordination mechanism also handles anonymous jobs and has a nice "cost-revealing" potential function. Besides proving the existence of equilibria, we use this potential function in order to upper-bound the price of stability of the induced game by O(log m), the price of anarchy by O(log^2 m), and the convergence time to O(log^2 m)-approximate assignments by a polynomial number of best-response moves. Our third coordination mechanism is the first that handles anonymous jobs and simultaneously guarantees that the induced game is a potential game and has bounded price of anarchy.
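To see why a global ID ordering helps, consider the game induced by a policy that serves each machine's jobs in increasing ID order: job j's completion time depends only on smaller-ID jobs, so one best-response pass in ID order already yields a pure Nash equilibrium. The Python sketch below illustrates this argument only; it is not one of the paper's three mechanisms, and the price of anarchy of this naive policy can be poor on unrelated machines, which is exactly what the paper's policies are designed to bound.

def id_ordered_equilibrium(p):
    """p[j][i] = processing time of job j on machine i (unrelated machines).
    Under an ID-ordered local policy, job j's completion time on machine i
    is (load of smaller-ID jobs on i) + p[j][i], so a single best-response
    pass in ID order yields a pure Nash equilibrium of the induced game."""
    n, m = len(p), len(p[0])
    load = [0.0] * m  # load contributed by smaller-ID jobs
    assign = []
    for j in range(n):
        i = min(range(m), key=lambda i: load[i] + p[j][i])
        assign.append(i)
        load[i] += p[j][i]
    return assign, max(load)  # equilibrium assignment and its makespan

# Two jobs, two machines: job 0 is fast on machine 0, job 1 on machine 1.
print(id_ordered_equilibrium([[1.0, 4.0], [3.0, 2.0]]))  # ([0, 1], 2.0)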
Abstract: We examine a task scheduling and data migration problem for grid networks, which we refer to as the Data Consolidation (DC) problem. DC arises when a task concurrently requests multiple pieces of data, possibly scattered throughout the grid network, that have to be present at a selected site before the task's execution starts. In such a case, the scheduler and the data manager must select (i) the data replicas to be used, (ii) the site where these data will be gathered for the task to be executed, and (iii) the routing paths to be followed; this is assuming that the selected datasets are transferred concurrently to the execution site. The algorithms or policies for selecting the data replicas, the data consolidation site and the corresponding paths comprise a Data Consolidation scheme. We propose and experimentally evaluate several DC schemes with a polynomial number of operations that attempt to estimate the cost of the concurrent data transfers, to avoid congestion that may appear due to these transfers, and to provide fault tolerance. Our simulation results strengthen our belief that DC is an important problem that needs to be addressed in the design of data grids, and can lead, if performed efficiently, to significant benefits in terms of task delay, network load and other performance parameters.
Abstract: Data collection is usually performed in wireless sensor networks by the sensors relaying data towards a static control center (sink). Motivated by important applications (mostly related to ambient intelligence and remote monitoring) and as a first step towards introducing mobility, we propose the basic idea of having a sink moving in the network area and collecting data from sensors. We propose four characteristic mobility patterns for the sink, along with different data collection strategies. Through a detailed simulation study, we evaluate several important performance properties of each approach. Our findings demonstrate that by taking advantage of the sink's mobility and shifting work from sensors to the powerful sink, we can significantly reduce the energy spent in relaying traffic and thus greatly extend the lifetime of the network.
Abstract: Through recent technology advances in the field of wireless energy transmission, Wireless Rechargeable Sensor Networks (WRSN) have emerged. In this new paradigm for WSNs a mobile entity called Mobile Charger (MC) traverses the network and replenishes the dissipated energy of sensors. In this work we first provide a formal definition of the charging dispatch decision problem and prove its computational hardness. We then investigate how to optimize the trade-offs of several critical aspects of the charging process, such as a) the trajectory of the charger, b) the different charging policies and c) the impact of the ratio of the energy the MC may deliver to the sensors over the total available energy in the network. In the light of these optimizations, we then study the impact of the charging process on the network lifetime for three characteristic underlying routing protocols: a greedy protocol, a clustering protocol and an energy balancing protocol. Finally, we propose a Mobile Charging Protocol that locally adapts the circular trajectory of the MC to the energy dissipation rate of each sub-region of the network. We compare this protocol against several MC trajectories for all three routing families by a detailed experimental evaluation. The derived findings demonstrate significant performance gains, both with respect to the no-charger case as well as the different charging alternatives; in particular, the performance improvements include the network lifetime, as well as connectivity, coverage and energy balance properties.
Abstract: We call radiation at a point of a wireless network the total amount of electromagnetic quantity (energy or power density) the point is exposed to. The impact of radiation can be high, and we believe it is worth studying and controlling; towards radiation-aware wireless networking we take (for the first time in the study of this aspect) a distributed computing, algorithmic approach. We exemplify this line of research by focusing on sensor networks, studying the minimum radiation path problem of finding the lowest radiation trajectory of a person moving from a source to a destination point in the network region. For this problem, we sketch the main ideas behind a linear program that can provide a tight approximation of the optimal solution, and then we discuss three heuristics that can lead to low radiation paths. We also plan to investigate the impact of diverse node mobility on the heuristics' performance.
Abstract: Intuitively, Braess’s paradox states that destroying a part of a network may improve the common latency of selfish flows at Nash equilibrium. Such a paradox is a pervasive phenomenon in real-world networks. Any administrator who wants to improve equilibrium delays in selfish networks faces some basic questions:
– Is the network paradox-ridden?
– How can we delete some edges to optimize equilibrium flow delays?
– How can we modify edge latencies to optimize equilibrium flow delays?
Unfortunately, such questions lead to NP-hard problems in general. In this work, we impose some natural restrictions on our networks, e.g. we assume strictly increasing linear latencies. Our target is to formulate efficient algorithms for the three questions above. We manage to provide:
– A polynomial-time algorithm that decides if a network is paradox-ridden, when latencies are linear and strictly increasing.
– A reduction of the problem of deciding if a network with (arbitrary) linear latencies is paradox-ridden to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the edges with constant latency.
– An algorithm for finding a subnetwork that is almost optimal with respect to equilibrium latency. Our algorithm is subexponential when the number of paths is polynomial and each path is of polylogarithmic length.
– A polynomial-time algorithm for the problem of finding the best subnetwork, which outperforms any known approximation algorithm for the case of strictly increasing linear latencies.
– A polynomial-time method that turns the optimal flow into a Nash flow by deleting the edges not used by the optimal flow, and performing minimal modifications on the latencies of the remaining ones.
Our results provide a deeper understanding of the computational complexity of recognizing the most severe manifestations of Braess’s paradox, and our techniques show novel ways of using the probabilistic method and of exploiting convex separable quadratic programs.
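The phenomenon itself fits in a few lines of Python. Below is the textbook four-node Braess instance (one unit of selfish flow; flow-dependent latency x on two edges, constant latency 1 on two others, plus a zero-latency shortcut), where deleting the shortcut lowers the common equilibrium latency from 2 to 1.5. This is the classic illustration, not an instance from the paper.

# Textbook Braess instance: one unit of selfish flow from s to t over
#   s->v with latency x,  v->t with latency 1,
#   s->w with latency 1,  w->t with latency x,
# plus (optionally) a zero-latency shortcut v->w.

def equilibrium_latency(with_shortcut: bool) -> float:
    if with_shortcut:
        # s->v->w->t weakly dominates both two-edge paths (x_sv <= x_sv and
        # x_wt <= 1), so at equilibrium all flow takes it: latency 1 + 0 + 1.
        return 1.0 + 0.0 + 1.0
    # Without the shortcut the two paths are symmetric: half a unit on each,
    # so each path has latency 0.5 + 1.
    return 0.5 + 1.0

print(equilibrium_latency(True))   # 2.0
print(equilibrium_latency(False))  # 1.5 -> deleting an edge improved delays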
Abstract: In this paper we consider communication issues arising in mobile networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of available frequencies is limited; thus, efficient solutions to the frequency allocation and the call control problem are essential. In the frequency allocation problem, given users that wish to communicate, the objective is to minimize the required spectrum of frequencies so that communication can be established without signal interference. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users served. We consider cellular, planar, and arbitrary network topologies. In particular, we study the on-line version of both problems using competitive analysis. For frequency allocation in cellular networks, we improve the best known competitive ratio upper bound of 3 achieved by the folklore Fixed Allocation algorithm, by presenting an almost tight competitive analysis for the greedy algorithm; we prove that its competitive ratio is between 2.429 and 2.5. For the call control problem, we present the first randomized algorithm that beats the deterministic lower bound of 3 achieving a competitive ratio of 2.934 in cellular networks. Our analysis has interesting extensions to arbitrary networks. Also, using Yao's Minimax Principle, we prove two lower bounds of 1.857 and 2.086 on the competitive ratio of randomized call control algorithms for cellular and arbitrary planar networks, respectively.
Abstract: In this paper we consider communication issues arising in cellular (mobile) networks that utilize frequency division multiplexing (FDM) technology. In such networks, many users within the same geographical region can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of available frequencies is limited; thus, efficient solutions to the frequency-allocation and the call-control problems are essential. In the frequency-allocation problem, given users that wish to communicate, the objective is to minimize the required spectrum of frequencies so that communication can be established without signal interference. The objective of the call-control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users served. We consider cellular, planar, and arbitrary network topologies.
In particular, we study the on-line version of both problems using competitive analysis. For frequency allocation in cellular networks, we improve the best known competitive ratio upper bound of 3 achieved by the folklore Fixed Allocation algorithm, by presenting an almost tight competitive analysis for the greedy algorithm; we prove that its competitive ratio is between 2.429 and 2.5. For the call-control problem, we present the first randomized algorithm that beats the deterministic lower bound of 3, achieving a competitive ratio between 2.469 and 2.651 for cellular networks. Our analysis has interesting extensions to arbitrary networks. Also, using Yao's Minimax Principle, we prove two lower bounds of 1.857 and 2.086 on the competitive ratio of randomized call-control algorithms for cellular and arbitrary planar networks, respectively.
Abstract: We consider the problem of computing minimum congestion, fault-tolerant, redundant assignments of messages to faulty parallel delivery channels. In particular, we are given a set M of faulty channels, each having an integer capacity c_i and failing independently with probability f_i. We are also given a set of messages to be delivered over M, and a fault-tolerance constraint (1 − ε), and we seek a redundant assignment σ that minimizes congestion Cong(σ), i.e. the maximum channel load, subject to the constraint that, with probability no less than (1 − ε), all the messages have a copy on at least one active channel. We present a 4-approximation algorithm for identical capacity channels and arbitrary message sizes, and a ⌈2 ln(|M|/ε) / ln(1/f_max)⌉-approximation algorithm for related capacity channels and unit size messages. Both algorithms are based on computing a collection of disjoint channel subsets such that, with probability no less than (1 − ε), at least one channel is active in each subset. The objective is to maximize the sum of the minimum subset capacities. Since the exact version of this problem is NP-complete, we present a 2-approximation algorithm for identical capacities, and an (8 + o(1))-approximation algorithm for arbitrary capacities.
Abstract: We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective, i.e. towards estimating the least possible number of sensors to be deployed, their positions and the operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps.
Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and can thus be localized). We then design and analyze an efficient approximate method for sensor placement and operation that, with high probability and in polynomial expected time, achieves a Θ(log n) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios.
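The combinatorial abstraction (cover every element by at least three sets) admits a simple greedy baseline, sketched below in Python: the classic set-multicover greedy, which also carries a logarithmic approximation guarantee. It is shown only to make the abstraction concrete; the paper's placement method is a different, randomized construction.

def greedy_multicover(universe, sets, k=3):
    """Pick sets until every element is covered at least k times (each set
    usable once). Greedy by residual coverage: a classic O(log n)-factor
    heuristic for set multicover."""
    need = {e: k for e in universe}
    chosen, remaining = [], set(range(len(sets)))
    while any(need[e] > 0 for e in universe):
        best = max(remaining, key=lambda i: sum(need[e] > 0 for e in sets[i]))
        if sum(need[e] > 0 for e in sets[best]) == 0:
            raise ValueError("instance cannot be covered k times")
        for e in sets[best]:
            if need[e] > 0:
                need[e] -= 1
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Two points a, b covered by overlapping sensor ranges; each needs 3-coverage.
ranges = [{"a", "b"}, {"a", "b"}, {"a"}, {"b"}, {"a", "b"}]
print(greedy_multicover({"a", "b"}, ranges))  # e.g. [0, 1, 4]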
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) has recently been proposed as a modulation technique for optical networks, because of its good spectral efficiency, flexibility, and tolerance to impairments. We consider the planning problem of an OFDM optical network, where we are given a traffic matrix that includes the requested transmission rates of the connections to be served. Connections are provisioned for their requested rate by elastically allocating spectrum using a variable number of OFDM subcarriers and choosing an appropriate modulation level, taking into account the transmission distance. We introduce the Routing, Modulation Level and Spectrum Allocation (RMLSA) problem, as opposed to the typical Routing and Wavelength Assignment (RWA) problem of traditional WDM networks, prove that it is also NP-complete, and present various algorithms to solve it. We start by presenting an optimal ILP RMLSA algorithm that minimizes the spectrum used to serve the traffic matrix, and also present a decomposition method that breaks RMLSA into its two constituent subproblems, namely (i) routing and modulation level, and (ii) spectrum allocation (RML+SA), and solves them sequentially. We also propose a heuristic algorithm that serves connections one-by-one and use it to solve the planning problem by sequentially serving all the connections in the traffic matrix. In the sequential algorithm, we investigate two policies for defining the order in which connections are considered. We also use a simulated annealing meta-heuristic to obtain even better orderings. We examine the performance of the proposed algorithms through simulation experiments and evaluate the spectrum utilization benefits that can be obtained by utilizing OFDM elastic bandwidth allocation, when compared to a traditional WDM network.
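A minimal sequential RML+SA heuristic, in the spirit of the decomposition described above, can be sketched as follows in Python. The transparent-reach and per-subcarrier rate numbers are placeholder values we assume for illustration, not values from the paper.

import math

# Assumed (illustrative) OFDM parameters: a subcarrier carries m * 12.5 Gb/s
# at modulation level m (1 = BPSK .. 4 = 16QAM), and level m is usable up to
# REACH[m] km.
BASE_RATE = 12.5
REACH = {4: 500, 3: 1000, 2: 2000, 1: 4000}

def modulation_for(distance_km):
    """Highest modulation level whose transparent reach covers the path."""
    for m in (4, 3, 2, 1):
        if distance_km <= REACH[m]:
            return m
    raise ValueError("path exceeds maximum reach")

def serve(path_links, distance_km, rate_gbps, used, n_slots):
    """Serve one connection: pick modulation by distance, size the subcarrier
    block, then first-fit a contiguous block that is free on *all* links of
    the path (spectrum contiguity + continuity). Returns (start, width,
    level) or None if the connection is blocked."""
    m = modulation_for(distance_km)
    width = math.ceil(rate_gbps / (m * BASE_RATE))
    for start in range(n_slots - width + 1):
        if all(not used[l][s] for l in path_links
               for s in range(start, start + width)):
            for l in path_links:
                for s in range(start, start + width):
                    used[l][s] = True
            return start, width, m
    return None

used = {"AB": [False] * 16, "BC": [False] * 16}
print(serve(["AB", "BC"], 800, 100, used, 16))  # level 3 -> 3 slots at 0
print(serve(["AB"], 400, 200, used, 16))        # level 4 -> 4 slots at 3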
Abstract: We describe the design and implementation of a secure and robust protocol and system for a national electronic lottery. Electronic lotteries at a national level are a viable, cost-effective alternative to mechanical ones when there is a business need to support many types of “games of chance” and to allow increased drawing frequency. Electronic lotteries are, in fact, an extremely high-risk financial application: if one discovers a way to predict or otherwise claim the winning numbers (even once), the result is huge financial damage. Moreover, the e-lottery process is complex, which increases the possibility of fraud or costly accidental failures. In addition, a national lottery must adhere to auditability and (regulatory) fairness requirements regarding its drawings. Our mechanism, which we believe is the first one of its kind to be described in the literature, builds upon a number of cryptographic primitives that ensure the unpredictability of the winning numbers, the prevention of their premature leakage, and the prevention of fraud. We also provide measures for auditability, fairness, and trustworthiness of the process. Besides cryptography, we incorporate security mechanisms that eliminate various risks along the entire process. Our system, which was commissioned by a national organization, was implemented in the field and has been operational and active for a while now.
Abstract: Modern Wireless Sensor Networks offer an easy, low-cost and reliable alternative for monitoring and controlling large geographical areas such as buildings and industrial sites. We present the design and implementation details of an open and efficient prototype system, a solution for a low-cost Building Management System (BMS) that comprises heterogeneous, small-factor wireless devices. Placing it in the context of the Internet of Things, we come up with a solution that can cooperate with other systems installed on the same site to lower power consumption and costs, as well as benefit the humans that use its services in a transparent way. We evaluate and assess key aspects of the performance of our prototype. Our findings indicate specific approaches to reduce the operation costs and allow the development of open applications.
Abstract: This work presents and describes the design and implementation of a system for the cooperative multiplayer control of gaming and entertainment-related software, based on the use of mobile devices with wireless networking capabilities. We are currently using wireless sensor networking devices as the enabling platform, and our prototype application is based on Google Earth's integrated flight simulator.
Abstract: The energy balance property (i.e., all nodes having the same energy throughout the network evolution) contributes significantly (along with energy efficiency) to the maximization of the network lifespan and network connectivity. The problem of achieving energy balanced propagation is well studied in static networks, as it has attracted a lot of research attention.
Recent technological advances have enabled sensor devices to be attached to mobile entities of our everyday life (e.g. smartphones, cars, PDAs, etc.), thus introducing the formation of highly mobile sensor networks.
Inspired by the aforementioned applications, this work is (to the best of our knowledge) the first to study the energy balance property in wireless networks where the nodes are highly and dynamically mobile. In particular, in this paper we propose a new diverse mobility model which is easily parameterized, and we also present a new protocol which tries to adaptively exploit the inherent node mobility in order to achieve energy balance in the network in an efficient way.
Abstract: In this work we study energy efficient routing strategies for wireless ad-hoc networks. In such networks, energy is a scarce resource and its conservation and efficient use is a major issue. Our strategy follows the multi-cost routing approach, according to which a cost vector of various parameters is assigned to each link. The parameters of interest are the number of hops on a path, and the residual energy and the transmission power of the nodes on the path. These parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. We evaluate the proposed routing algorithms in a number of scenarios, with respect to energy consumption, throughput and other performance parameters of interest. From the experiments conducted we conclude that routing algorithms that take into account energy related parameters increase the lifetime of the network, while achieving better performance than other approaches, such as minimum hop routing.
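A small Python sketch of the multi-cost idea: each path gets a cost vector (hop count, total transmission power, minimum residual energy), and different optimization functions over that vector yield different routing algorithms. The exhaustive path enumeration below is only for illustration on small graphs, and the example function f is one plausible choice, not necessarily one evaluated in the paper.

def path_cost_vector(path, power, energy):
    # Cost vector: (hop count, total transmission power, min residual energy).
    hops = len(path) - 1
    total_power = sum(power[(path[i], path[i + 1])] for i in range(hops))
    min_energy = min(energy[v] for v in path)
    return hops, total_power, min_energy

def select_path(graph, src, dst, power, energy, f):
    """Return the simple path minimizing optimization function f over its
    cost vector. Exhaustive enumeration is fine only for small illustrative
    graphs; practical multi-cost algorithms prune dominated paths instead."""
    best, best_val = None, float("inf")
    stack = [(src, [src])]
    while stack:
        u, path = stack.pop()
        if u == dst:
            val = f(*path_cost_vector(path, power, energy))
            if val < best_val:
                best, best_val = path, val
            continue
        for v in graph[u]:
            if v not in path:  # keep paths simple
                stack.append((v, path + [v]))
    return best

graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
power = {("s", "a"): 1.0, ("a", "t"): 1.0, ("s", "b"): 0.5, ("b", "t"): 0.5}
energy = {"s": 10, "a": 1, "b": 8, "t": 10}
f = lambda hops, pw, e_min: pw / e_min  # penalize power use and drained nodes
print(select_path(graph, "s", "t", power, energy, f))  # ['s', 'b', 't']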
Abstract: A crucial issue in wireless networks is to efficiently support communication patterns that are typical in traditional (wired) networks. These include broadcasting, multicasting, and gossiping (all-to-all communication). In this work we study such problems in static ad hoc networks. Since energy is a scarce resource in ad hoc networks, the important engineering question is how to guarantee a desired communication pattern while minimizing the total energy consumption. Motivated by this question, we study a series of wireless network design problems and present new approximation algorithms and inapproximability results.
Abstract: In this paper, we demonstrate the significant impact of (a) the mobility rate and (b) the user density on the performance of routing protocols in ad-hoc mobile networks. In particular, we study the effect of these parameters on two different approaches for designing routing protocols: (a) the route creation and maintenance approach, and (b) the support approach, which forces a few hosts to move, acting as helpers for message delivery. We study one representative protocol for each approach, i.e. AODV for the first approach and RUNNERS for the second. We have implemented the two protocols and performed a large scale, detailed simulation study of their performance. The main findings are: the AODV protocol behaves well in networks of high user density and low mobility rate, while its performance drops for sparse networks of highly mobile users. On the other hand, the RUNNERS protocol seems to tolerate well (and in fact benefit from) high mobility rates and low densities.
Abstract: Raising awareness among young people, and especially students, on the relevance of behavior change for achieving energy savings is increasingly being considered as a key enabler towards long-term and cost-effective energy efficiency policies. However, the way to successfully apply educational interventions focused on such targets inside schools is still an open question. In this paper, we present our approach for enabling IoT-based energy savings and sustainability awareness lectures and promoting data-driven energy-saving behaviors for a high school audience. We present our experiences toward the successful application of sets of educational tools and software over a real-world Internet of Things (IoT) deployment. We discuss the use of gamification and competition as a very effective end-user engagement mechanism for school audiences. We also present the design of an IoT-based hands-on lab activity, integrated within a high school computer science curriculum, utilizing IoT devices and data produced inside the school building, along with the Node-RED platform. We describe the tools used, the organization of the educational activities and the related goals. We report on the experience carried out in both directions in a high school in Italy and conclude by discussing the results in terms of achieved energy savings within an observation period.
Abstract: Core optical networks using reconfigurable optical switches and tunable lasers appear to be on the road towards widespread deployment and could evolve to all-optical mesh networks in the near future. Considering the impact of physical layer impairments in the planning and operation of all-optical (and translucent) networks is the main focus of the DICONET project. The impairment-aware network planning and operation tool (NPOT) is the main outcome of the DICONET project and is explained in detail in this paper. The key building blocks of the NPOT, consisting of the network description repositories, the physical layer performance evaluator, the impairment-aware routing and wavelength assignment engines, the component placement modules, failure handling, and the integration of NPOT in the control plane, are the main contributions of this work. In addition, the experimental results of the DICONET proposal for centralized and distributed control plane integration schemes, and the performance of failure handling in terms of restoration time, are presented in this work.
Abstract: This paper presents experimental measurements of bulk data transfer in a wireless multi-hop sensor network environment. We investigate the effect of the number of hops and of the conditions of the surrounding environment on the performance of the network in terms of achieved transfer rates. Our findings validate the theoretically established results on the relation between throughput and network diameter, i.e., throughput is inversely proportional to the network diameter and in particular to the number of hops needed for data to reach its destination. Furthermore, we indicate how throughput is (significantly) affected by the type of the physical environment, i.e., it drops as the harshness of the ambient conditions increases.
Abstract: In large scale networks users often behave selfishly, trying to minimize their routing cost. Modelling this as a noncooperative game may yield a Nash equilibrium with unboundedly poor network performance. To measure this inefficiency, the Coordination Ratio or Price of Anarchy (PoA) was introduced. It equals the ratio of the cost induced by the worst Nash equilibrium to the cost of the overall optimum assignment of the jobs to the network. Aiming to improve the PoA of a given network, a series of papers model this selfish behavior as a Stackelberg or Leader-Followers game.
We consider random tuples of machines, with either linear or M/M/1 latency functions, and PoA at least a tuning parameter c. We validate a variant (NLS) of the Largest Latency First (LLF) Leader's strategy on tuples with PoA ≥ c. NLS experimentally improves on LLF for systems with inherently high PoA, where the Leader is constrained to control a low portion α of jobs. This suggests even better performance for systems with arbitrary PoA. Also, we experimentally bounded the least Leader's portion α_0 needed to induce optimum cost. Unexpectedly, as parameter c increases, the corresponding α_0 decreases, for M/M/1 latency functions. All these are implemented in an extensive Matlab toolbox.
Abstract: In the near future, it is reasonable to expect that new types of systems will appear, of massive scale, expansive and permeating their environment, of very heterogeneous nature, and operating in a constantly changing networked environment. We expect that most such systems will have the form of a very large society of unimpressive networked artefacts. Yet by cooperation, they will be organized in large societies to accomplish tasks that are difficult or beyond the capabilities of today's conventional centralized systems.
The Population Protocol model of Angluin et al. introduced a novel approach towards the study of such systems by assuming that each artefact is an agent so limited that it can be represented as a finite-state sensor of constant (O(1)) total storage capacity. Such agents are passively mobile and communicate in pairs using a low-power wireless signal. It has been proven that, although such systems consist of extremely limited, cheap and bulk-produced hardware devices, they are still capable of carrying out very useful nontrivial computations. Based on this approach we investigate many new intriguing directions.
Abstract: We consider the performance of a number of DPLL algorithms on random 3-CNF formulas with n variables and m = rn clauses. A long series of papers analyzing so-called “myopic” DPLL algorithms has provided a sequence of lower bounds for their satisfiability threshold. Indeed, for each myopic algorithm A it is known that there exists an algorithm-specific clause-density, r_A, such that if r < r_A, the algorithm finds a satisfying assignment in linear time. For example, r_A equals 8/3 = 2.66... for ordered-dll and 3.003... for generalized unit clause. We prove that for densities well within the provably satisfiable regime, every backtracking extension of either of these algorithms takes exponential time. Specifically, all extensions of ordered-dll take exponential time for r > 2.78, and the same is true for generalized unit clause for all r > 3.1. Our results imply exponential lower bounds for many other myopic algorithms for densities similarly close to the corresponding r_A.
Abstract: We study the combinatorial structure and computational complexity of extreme Nash equilibria, ones that maximize or minimize a certain objective function, in the context of a selfish routing game. Specifically, we assume a collection of n users, each employing a mixed strategy, which is a probability distribution over m parallel links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user routes its traffic on links that minimize its expected latency cost.
Our structural results provide substantial evidence for the Fully Mixed Nash Equilibrium Conjecture, which states that the worst Nash equilibrium is the fully mixed Nash equilibrium, where each user chooses each link with positive probability. Specifically, we prove that the Fully Mixed Nash Equilibrium Conjecture is valid for pure Nash equilibria and that, under a certain condition, the social cost of any Nash equilibrium is within a factor of 6 + ε of that of the fully mixed Nash equilibrium, assuming that link capacities are identical.
Our complexity results include hardness, approximability and inapproximability ones. Here we show that, for identical link capacities and under a certain condition, there is a randomized, polynomial-time algorithm to approximate the worst social cost within a factor arbitrarily close to 6 + ε. Furthermore, we prove that for any arbitrary integer k > 0, it is NP-hard to decide whether or not any given allocation of users to links can be transformed into a pure Nash equilibrium using at most k selfish steps. Assuming identical link capacities, we give a polynomial-time approximation scheme (PTAS) to approximate the best social cost over all pure Nash equilibria. Finally, we prove that it is NP-hard to approximate the worst social cost within a multiplicative factor equal to the tight upper bound on the ratio of the worst social cost and the optimal cost in the model of identical capacities.
Abstract: We propose a fair scheduling algorithm for Computational Grids, called the Fair Execution Time Estimation (FETE) algorithm. FETE assigns a task to the computation resource that minimizes what we call its fair execution time estimation. The fair execution time of a task on a certain resource is an estimate of the time by which the task will be executed on the resource, assuming it gets a fair share of the resource's computational power. Though space-shared scheduling is used in practice, the estimates of the fair execution times are obtained assuming that a time-sharing discipline is used. We experimentally evaluate the proposed algorithm and observe that it outperforms other known scheduling algorithms. We also propose a version of FETE, called Simple FETE (SFETE), which requires no a priori knowledge of the tasks' workload and in most cases has similar performance to that of FETE.
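A minimal Python sketch of the fair-share estimate, under simplifying assumptions of ours (a single-number workload per task and a queue-length-based time-sharing slice): each candidate resource is scored by the time the new task would need if it received an equal share of the resource's speed, and the task goes to the resource minimizing that score. The actual FETE estimate tracks the evolving fair shares of all queued tasks.

def fete_assign(task_work, resources):
    """Assign a task to the resource minimizing its fair execution time
    estimate: the time the task would need if it received an equal
    time-shared slice of the resource's speed."""
    def fair_time(r):
        share = r["speed"] / (len(r["queue"]) + 1)  # equal time-sharing slice
        return task_work / share
    best = min(resources, key=fair_time)
    best["queue"].append(task_work)
    return best["name"]

resources = [
    {"name": "A", "speed": 10.0, "queue": []},
    {"name": "B", "speed": 4.0, "queue": []},
]
for w in (30, 30, 30, 8):
    print(fete_assign(w, resources))  # prints A, A, B, A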
Abstract: We investigate the impact of multiple, mobile sinks on efficient data collection in wireless sensor networks. To improve performance, our protocol design focuses on minimizing overlaps of sink trajectories and balancing the service load among the sinks. To cope with high network dynamics, placement irregularities and limited network knowledge, we propose three different protocols: a) a centralized one that explicitly equalizes spatial coverage; this protocol makes strong modeling assumptions and also serves as a kind of performance lower bound in uniform networks of low dynamics; b) a distributed protocol based on mutual avoidance of sinks; and c) a clustering protocol that distributively groups service areas towards balancing the load per sink. Our simulation findings demonstrate significant gains in latency, while keeping the success rate and the energy dissipation at very satisfactory levels, even under high network dynamics and deployment heterogeneity.
Abstract: We study the problem of fast and energy-efficient data collection of sensory data using a mobile sink, in wireless sensor networks in which both the sensors and the sink move. Motivated by relevant applications, we focus on dynamic sensory mobility and heterogeneous sensor placement. Our approach basically suggests exploiting the sensor motion to adaptively propagate information based on local conditions (such as high placement concentrations), so that the sink gradually “learns” the network and accordingly optimizes its motion. Compared to relevant solutions in the state of the art (such as the blind random walk, biased walks, and even optimized deterministic sink mobility), our method significantly reduces latency (the improvement ranges from 40% for uniform placements to 800% for heterogeneous ones), while also improving the success rate and keeping the energy dissipation at very satisfactory levels.
Abstract: Many WSN algorithms and applications are based on knowledge regarding the position of nodes inside the network area. However, the solution of using GPS-based modules in order to perform localization in WSNs is a rather expensive one, and in the case of indoor applications, such as smart buildings, it is also not applicable. Therefore, several techniques have been studied in order to perform relative localization in WSNs; that is, to compute the position of a node inside the network area relative to the position of other nodes. Many such techniques are based on indicators like the Received Signal Strength Indicator (RSSI) and the Link Quality Indicator (LQI). These techniques rely on the assumption that there is a strong correlation between the Euclidean distance of the communicating motes and these indicators. Therefore, high values of RSSI and LQI should indicate physical proximity of two communicating nodes. However, these indicators do not depend solely on distance. Physical obstacles, ambient electromagnetic noise and interference from other wireless transmissions also affect the quality of wireless communication in a stochastic way. In this paper we propose, implement, experimentally fine-tune and evaluate a localization algorithm that exploits the stochastic nature of interference during wireless communications in order to perform localization in WSNs. Our algorithm is particularly designed for indoor localization of moving people in smart buildings. The localization achieved is fine-grained, i.e. the position of the target mote is successfully computed with approximately one meter accuracy. This fine-grained localization can be used by smart Building Management Systems in many applications, such as room adaptation to presence. In our scenario, the proposed algorithm is used by a smart room in order to localize the position of people inside the room and adapt room illumination accordingly.
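As background for the signal-strength side of such algorithms, here is a minimal Python sketch of RSSI-based ranging with the log-distance path-loss model, plus linearized least-squares trilateration. The model parameters are assumed values and the readings are noiseless; handling the stochastic interference effects is precisely what this naive sketch omits and what the paper addresses.

import math

# Log-distance path-loss model (assumed parameters):
#   rssi = A - 10 * N * log10(d), with A the RSSI at 1 m and N the exponent.
A, N = -45.0, 2.5

def rssi_to_distance(rssi):
    return 10 ** ((A - rssi) / (10 * N))

def trilaterate(anchors, dists):
    """Least-squares position from >= 3 anchors, by linearizing the circle
    equations against the first anchor and solving the 2x2 normal equations."""
    (x0, y0), d0 = anchors[0], dists[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        ax, ay = 2 * (xi - x0), 2 * (yi - y0)
        b = d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * b;  b2 += ay * b
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

anchors = [(0, 0), (10, 0), (0, 10)]
true_pos = (3.0, 4.0)
readings = [A - 10 * N * math.log10(math.hypot(true_pos[0] - x, true_pos[1] - y))
            for (x, y) in anchors]
print(trilaterate(anchors, [rssi_to_distance(r) for r in readings]))
# ~ (3.0, 4.0) with noiseless readings; real deployments need filtering to
# approach one-meter accuracy.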
Abstract: The Internet and the Web have arguably surpassed the von Neumann Computer as the most complex computational artifacts of our times. Out of all the formidable characteristics of the Internet/Web, it seems that the most novel is its socio-economic complexity. The Internet and the Web are built, operated and used by a multitude of very diverse economic interests, which compete or collaborate in various degrees. In fact, this suggests that some very important insights about the Web may come from a fusion of ideas from Algorithms with concepts and techniques from Mathematical Economics and Game Theory. Since 2000, a new field has emerged (Algorithmic Game Theory and Computational Internet Economics), which examines exactly such ideas. We feel that this field belongs to Web Science. Some of the main topics of that field examined so far are Internet equilibria, the so-called “Price of Anarchy” (a measure of the loss of optimality due to selfishness [Koutsoupias, Papadimitriou (1999)]), electronic markets and their equilibria, auctions, and algorithmic mechanism design (inverse game theory, or how to design a game in the net in such a clever way that individual players, motivated solely by their selfish interests, actually end up meeting the goals of the game designer!). Our paper here surveys the main concepts and results of this very promising subfield.
Abstract: In this paper we present the design of a simulator platform called FUSE (Fast Universal Simulator Engine). The term Universal means that the Engine can be adapted easily to different domains and be used for varying simulation needs, although our main target is simulation of distributed algorithms in distributed computing environments. The Engine is Fast in the sense that the simulation overhead is minimal and very large systems can be simulated. We discuss the architecture and the design decisions that form the basis of these features. We also describe the functionality that is provided to its users (e.g., monitoring, statistics, etc.).
Abstract: We discuss key findings and technological challenges related to SmartSantander, an EU project that is developing a city-scale experimental facility for Internet of Things and Future Internet experimentation. The main goal of the project is to design and construct a city-scale lab for experimentation and provide an integrated framework for implementing Smart City services.
Abstract: Distributed algorithm designers often assume that system processes execute the same predefined software. Alternatively, when they do not assume that, designers turn to non-cooperative games and seek an outcome that corresponds to a rough consensus when no coordination is allowed. We argue that both assumptions are inapplicable in many real distributed systems, e.g., the Internet, and propose designing self-stabilizing and Byzantine fault-tolerant distributed game authorities. Once established, the game authority can secure the execution of any complete information game. As a result, we reduce costs that are due to the processes' freedom of choice. Namely, we reduce the price of malice.
Abstract: In this work we experimentally study the min order Radiocoloring problem (RCP) on Chordal, Split and Permutation graphs, which are three basic families of perfect graphs. This problem asks to find an assignment using the minimum number of colors to the vertices of a given graph G, so that each pair of vertices which are at distance at most two apart in G have different colors. RCP is an NP-complete problem on chordal and split graphs [4]. For each of the three families, there are known upper bounds and/or approximation algorithms for the minimum number of colors needed to radiocolor such a graph [4,10].
We design and implement radiocoloring heuristics for graphs of the above families, which are based on the greedy heuristic. Also, for each of the above families, we investigate whether there exist graph instances that require a number of colors close to the best known upper bound for the family in order to be radiocolored. Towards this goal, we present a number of generators that produce graphs of the above families requiring either (i) a large number of colors (compared to the best upper bound) in order to be radiocolored, called "extremal" graphs, or (ii) a small number of colors, called "non-extremal" instances. The experimental evaluation showed that randomly generated graph instances are in most cases "non-extremal" graphs, and that greedy-like heuristics perform very well in most cases, especially on "non-extremal" graphs.
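As an illustration of the greedy heuristic family the experiments above build on, the following sketch greedily colors a graph under the distance-two constraint stated in the abstract; vertex ordering and tie-breaking, which the actual heuristics refine, are kept naive here.

    # Minimal greedy distance-2 coloring sketch (min order radiocoloring):
    # each vertex gets the smallest color unused within distance two.
    def greedy_radiocoloring(adj):
        """adj: dict mapping each vertex to a set of neighbors."""
        color = {}
        for v in adj:                      # ordering heuristics vary
            forbidden = set()
            for u in adj[v]:               # distance-1 vertices
                if u in color:
                    forbidden.add(color[u])
                for w in adj[u]:           # distance-2 vertices
                    if w != v and w in color:
                        forbidden.add(color[w])
            c = 0
            while c in forbidden:
                c += 1
            color[v] = c
        return color

    # On a 4-cycle every pair of vertices is within distance two,
    # so a valid coloring needs 4 distinct colors.
    cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
    print(greedy_radiocoloring(cycle))     # {0: 0, 1: 1, 2: 2, 3: 3}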
Abstract: Geographic routing is becoming the protocol of choice for many sensor network applications. Some very efficient geographic routing algorithms exist; however, they require a preliminary planarization of the communication graph. Planarization induces overhead, which makes this approach suboptimal when lightweight protocols are required. On the other hand, georouting algorithms which do not rely on planarization have fairly low success rates and either fail to route messages around all but the simplest obstacles or have a high topology control overhead (e.g. contour detection algorithms). In this entry we describe the GRIC algorithm, which was designed to overcome some of those limitations. The GRIC algorithm was proposed in [PN07a]. It is the first lightweight and efficient on-demand (i.e. all-to-all) geographic routing algorithm which does not require planarization, has almost 100% delivery rates (when no obstacles are added), and behaves well in the presence of large communication-blocking obstacles.
Abstract: The technological as well as software advances in microelectronics and embedded component design have led to the development of low-cost, small-sized devices capable of forming wireless, ad-hoc networks and sensing a number of qualities of their environment, while performing computations that depend on the sensed qualities as well as on information received from their peers. These sensor networks rely on the collective power of the separate devices, as well as on their computational and sensing capabilities, to understand "global" environmental states through locally sampled information and local sensor interactions. Due to the locality of sensor networks, which naturally arises from the locality of their communication capabilities, a number of interesting connections exist between these networks and geometrical concepts and problems. In this paper we study two simple problems that pertain to the formation of low-power and low-interference communication patterns in fixed-topology sensor networks: the problem of using multihop communication links instead of direct ones, and the problem of forming a communication ring of sensor nodes so as to reduce power consumption as well as interference from other nodes. Our focus is on the connection between sensor networks and geometrical concepts, rather than on practicality, so as to highlight their interrelationship.
Abstract: A fundamental approach to efficiently finding best routes or optimal itineraries in traffic information systems is to reduce the search space (the part of the graph visited) of the most commonly used shortest-path routine (Dijkstra's algorithm) on a suitably defined graph. We investigate reduction of the search space while simultaneously retaining data structures, created during a preprocessing phase, of size linear (i.e., optimal) in the size of the graph. We show that the search space of Dijkstra's algorithm can be significantly reduced by extracting geometric information from a given layout of the graph and by encapsulating precomputed shortest-path information in the resulting geometric objects (containers). We present an extensive experimental study comparing the impact of different types of geometric containers using test data from real-world traffic networks. We also present new algorithms, as well as an empirical study, for the dynamic case of this problem, where edge weights are subject to change and the geometric containers have to be updated, and we show that our new methods are two to three times faster than recomputing everything from scratch. Finally, in an appendix, we discuss the software framework that we developed to realize the implementations of all of our variants of Dijkstra's algorithm. Such a framework is not trivial to achieve, as our goal was to maintain a common code base that is, at the same time, small, efficient, and flexible, as we wanted to enhance and combine several variants in any possible way.
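The container idea lends itself to a compact sketch. The following hedged fragment (an illustration, not the paper's code) assumes preprocessing has already stored, for each edge, a bounding box of all targets whose shortest path starts with that edge; Dijkstra then prunes edges whose box excludes the query target.

    # Dijkstra with geometric-container pruning (illustrative sketch).
    import heapq

    def dijkstra_with_containers(graph, pos, containers, s, t):
        """graph: {u: [(v, w), ...]}; pos: {v: (x, y)} layout;
        containers: {(u, v): (xmin, ymin, xmax, ymax)} from preprocessing."""
        dist = {s: 0.0}
        pq = [(0.0, s)]
        tx, ty = pos[t]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u]:
                xmin, ymin, xmax, ymax = containers[(u, v)]
                # prune: edge (u, v) cannot start a shortest path to t
                if not (xmin <= tx <= xmax and ymin <= ty <= ymax):
                    continue
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float("inf")

In the dynamic case the containers only need to remain supersets of the true target sets, so they can be grown conservatively when edge weights change, which is what makes updating cheaper than recomputation.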
Abstract: The Internet of Things is shaping up to be the ideal vehicle for introducing pervasive computing in our everyday lives, especially in the form of smart home and building management systems. However, although such technologies are gradually becoming more mainstream, there is still a lot of ground to be covered with respect to public buildings, and specifically ones in the educational sector. We discuss here "Green Mindset", an action focusing on energy efficiency and sustainability in Greek public schools. A large-scale sensor infrastructure has been deployed to 12 public school buildings across diverse settings. We report on the overall design and implementation of the system, as well as on some first results coming from the data produced. Our system provides a flexible and efficient basis for realizing a unified approach to monitoring energy consumption and environmental parameters that can be used both for building administration and educational purposes.
Abstract: The simplex method has been successfully used in solving linear programming problems for many years. Parallel approaches have also been studied extensively, due to the intensive computations required, especially for the solution of large linear problems (LPs). In this paper we present a highly scalable parallel implementation framework of the standard full-tableau simplex method on a highly parallel (distributed memory) environment. Specifically, we have designed and implemented a suitable column distribution scheme as well as a row distribution scheme, and we have thoroughly tested our implementations over a considerably powerful distributed platform (a Linux cluster with a Myrinet interface). We then compare our approaches (a) against each other for varying problem sizes (numbers of rows and columns) and (b) against other recent and valuable corresponding efforts in the literature. In most cases, the column distribution scheme performs considerably better than the row distribution scheme. Moreover, both schemes (even the row distribution scheme over large-scale problems) lead to particularly high speedup and efficiency values, which are in all cases considerably better than the ones achieved in other similar research efforts and implementations. Finally, we further evaluate our basic parallelization scheme over very large LPs in order to validate more reliably the high efficiency and scalability achieved.
Abstract: One of the major problems algorithm designers usually face is to know in advance whether a proposed optimization algorithm is going to behave as planned and, if not, what changes should be made to the way new solutions are examined so that the algorithm performs well. In this work we develop a methodology for differentiating good neighborhoods from bad ones. As a case study we consider the structure of the space of assignments for random 3-SAT formulas, and we compare two neighborhoods: a simple one, and a more refined one for which we already know that the corresponding algorithm behaves extremely well. We give evidence that it is possible to tell in advance what neighborhood structure will give rise to a good search algorithm, and we show how our methodology could have been used to discover some recent results on the structure of the SAT space of solutions. As a tool we use "Go with the winners", an optimization heuristic that uses many particles that independently search the space of all possible solutions. By gathering statistics, we compare the combinatorial characteristics of the different neighborhoods and we show that there are certain features that make a neighborhood better than another, thus giving rise to good search algorithms.
Abstract: We consider the offline version of the routing and
wavelength assignment (RWA) problem in transparent all-optical networks. In such networks and in the absence of regenerators, the signal quality of transmission degrades due to physical layer
impairments. We initially present an algorithm for solving the static RWA problem based on an LP relaxation formulation that tends to yield integer solutions. To account for signal degradation due to physical impairments, we model the effects of the path length, the path hop count, and the interference among lightpaths by imposing additional (soft) constraints on RWA. The objective of the resulting optimization problem is not only to serve the
connection requests using the available wavelengths, but also to minimize the total accumulated signal degradation on the selected lightpaths. Our simulation studies indicate that the proposed RWA algorithms select the lightpaths for the requested connections so as to avoid impairment generating sources, thus dramatically reducing the overall physical-layer blocking when compared to RWA algorithms that do not account for impairments.
Abstract: Dynamic graph algorithms have been extensively studied in the last two decades due to their wide applicability in many contexts. Recently, several implementations and experimental studies have been conducted investigating the practical merits of fundamental techniques and algorithms. In most cases, these algorithms required sophisticated engineering and fine-tuning to be turned into efficient implementations. In this paper, we survey several implementations along with their experimental studies for dynamic problems on undirected and directed graphs. The former case includes dynamic connectivity, dynamic minimum spanning trees, and the sparsification technique. The latter case includes dynamic transitive closure and dynamic shortest paths. We also discuss the design and implementation of a software library for dynamic graph algorithms.
Abstract: In this work we study the implementation of multicost routing in a distributed way in wireless mobile ad hoc networks. In contrast to traditional single-cost routing, where each path is characterized by a scalar, in multicost routing a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated. These parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. Up until now the performance of multicost and multi-constrained routing in wireless ad hoc networks has been evaluated either at a theoretical level or by assuming that nodes are static and have full knowledge of the network topology and nodes' state. In the present paper we assess the performance of multicost routing based on energy-related parameters in mobile ad hoc networks by embedding its logic in the Dynamic Source Routing (DSR) algorithm, which is a well-known fully distributed routing algorithm. We use simulations to compare the performance of the multicost-DSR algorithm to that of the original DSR algorithm and examine their behavior under various node mobility scenarios. The results confirm that the multicost-DSR algorithm improves the performance of the network in comparison to the original DSR algorithm in terms of energy efficiency. The multicost-DSR algorithm enhances the performance of the network not only by reducing overall energy consumption in the network, but also by spreading energy consumption more uniformly across the network, prolonging the network lifetime and reducing the packet drop probability. Furthermore, the delay suffered by packets reaching their destination is shown to be lower for the multicost-DSR algorithm than for the original DSR algorithm.
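The core multicost mechanic described above, per-link cost vectors combined component-wise and only at the end collapsed by an optimization function, can be sketched as follows; the particular components and the scalarization are assumed for illustration, not taken from the paper.

    # Illustrative multicost path evaluation (assumed components and objective).
    def combine(path_vec, link_vec):
        """Component-wise operators: hops add, interference adds,
        residual energy takes the minimum along the path."""
        return (path_vec[0] + link_vec[0],
                path_vec[1] + link_vec[1],
                min(path_vec[2], link_vec[2]))

    def path_cost_vector(links):
        vec = (0, 0.0, float("inf"))      # identity element
        for lv in links:
            vec = combine(vec, lv)
        return vec

    def energy_aware_objective(vec):
        """One possible scalarization: penalize hops and interference,
        reward paths whose weakest node still has energy left."""
        hops, interference, min_energy = vec
        return (hops + interference) / max(min_energy, 1e-9)

    # Pick the best of two candidate paths (lists of link cost vectors).
    p1 = [(1, 0.2, 0.9), (1, 0.5, 0.4)]
    p2 = [(1, 0.1, 0.8), (1, 0.1, 0.7), (1, 0.2, 0.6)]
    best = min([p1, p2],
               key=lambda p: energy_aware_objective(path_cost_vector(p)))
    print(best is p2)                     # True: p2's weakest node is stronger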
Abstract: In load balancing games, there is a set of available servers and a set of clients; each client wishes to run her job on some server. Clients are selfish and each of them selects a server that, given an assignment of the other clients to servers, minimizes the latency she experiences, with no regard to the global optimum. In order to mitigate the effect of selfishness on the efficiency, we assign taxes to the servers. In this way, we obtain a new game where each client aims to minimize the sum of the latency she experiences and the tax she pays. Our objective is to find taxes so that the worst equilibrium of the new game is as efficient as possible. We present new results concerning the impact of taxes on the efficiency of equilibria, with respect to the total latency of all clients and the maximum latency (makespan).
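The effect of taxes can be illustrated with a tiny best-response simulation (an assumed setup, not the paper's construction): each client repeatedly switches to the server minimizing her latency plus tax, until a pure Nash equilibrium is reached.

    # Best-response dynamics in a load balancing game with server taxes.
    def best_response_dynamics(num_clients, latency_fns, taxes, rounds=100):
        """latency_fns[s](load) gives server s's latency under 'load' clients."""
        assign = [0] * num_clients                 # everyone starts on server 0
        for _ in range(rounds):
            moved = False
            for i in range(num_clients):
                loads = [0] * len(taxes)
                for j, s in enumerate(assign):
                    if j != i:
                        loads[s] += 1
                def cost(s):
                    return latency_fns[s](loads[s] + 1) + taxes[s]
                best = min(range(len(taxes)), key=cost)
                if best != assign[i]:
                    assign[i], moved = best, True
            if not moved:                          # pure Nash equilibrium
                return assign
        return assign

    # Two identical linear-latency servers; taxing server 0 shifts the load.
    fns = [lambda x: x, lambda x: x]
    print(best_response_dynamics(4, fns, taxes=[0.0, 0.0]))   # [1, 1, 0, 0]
    print(best_response_dynamics(4, fns, taxes=[2.5, 0.0]))   # [1, 1, 1, 0]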
Abstract: We consider the online impairment-aware routing and wavelength assignment (IA-RWA) problem in transparent WDM networks. To serve a new connection, the online algorithm, in addition to finding a route and a free wavelength (a lightpath), has to guarantee its transmission quality, which is affected by physical-layer impairments. Due to interference effects, the establishment of the new lightpath affects and is affected by the other lightpaths. We present two multicost algorithms that account for the actual current interference among lightpaths, as well as for other physical effects, performing a cross-layer optimization between the network and physical layers. In multicost routing, a vector of cost parameters is assigned to each link, from which the cost vectors of the paths are calculated. The first algorithm utilizes cost vectors consisting of impairment-generating source parameters, so as to be generic and applicable to different physical settings. These parameters are combined into a scalar cost that indirectly evaluates the quality of candidate lightpaths. The second algorithm uses specific physical-layer models to define noise variance-related cost parameters, so as to directly calculate the Q-factor of candidate lightpaths. The algorithms find a set of so-called nondominated paths to serve the connection, in the sense that no path in the set is better than another with respect to all cost parameters. To select the lightpath, we propose various optimization functions that correspond to different IA-RWA algorithms. The proposed algorithms combine the strength of multicost optimization with low execution times, making them appropriate for serving online connections.
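The nondominated-path step is standard Pareto filtering and can be sketched directly (illustrative code, with made-up cost vectors):

    # Pareto (nondominated) filtering of candidate paths.
    def dominates(a, b):
        """Cost vectors where smaller is better in every component."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def nondominated(paths):
        """paths: list of (path_id, cost_vector); returns the Pareto set."""
        return [(pid, vec) for pid, vec in paths
                if not any(dominates(ovec, vec) for _, ovec in paths)]

    candidates = [("p1", (3, 0.10)), ("p2", (2, 0.30)), ("p3", (4, 0.20))]
    print(nondominated(candidates))   # keeps p1 and p2; p3 is dominated by p1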
Abstract: Wireless networks are nowadays widely used and constantly evolving; we are surrounded by numerous such networks at all times. A modern idea is to exploit this variety and aggregate all available connections together towards improving Internet connectivity. In this paper we present our design of a networking system that enables participating entities to connect with each other and share their broadband connections. Every connected entity gains broadband access from the entities that have and share a connection, independently of how many connections are available and how fast they are. Our goal is to improve the utilization of the broadband connections and to provide fault tolerance over them. To this end, the system is built on top of a peer-to-peer network, where the participating entities share their connections according to various scheduling algorithms.
Abstract: With this work we aim to make a three-fold contribution. We first address the issue of efficiently supporting queries over string attributes involving prefix, suffix, containment, and equality operators in large-scale data networks. Our first design decision is to employ distributed hash tables (DHTs) for the data network's topology, harnessing their desirable properties. Our next design decision is to derive DHT-independent solutions, treating the DHT as a black box. Second, we exploit this infrastructure to develop efficient content-based publish/subscribe systems. The main contributions here are algorithms for the efficient processing of queries (subscriptions) and events (publications). Specifically, we show that our subscription processing algorithms require O(log N) messages for an N-node network, and our event processing algorithms require O(l · log N) messages (with l being the average string length). Third, we develop algorithms for optimizing the processing of multi-dimensional events, involving several string attributes. Further to our analysis, we provide simulation-based experiments showing promising performance results in terms of number of messages, required bandwidth, load balancing, and response times.
Abstract: We present the interpolation search B-tree (ISB-tree), a new cache-aware indexing scheme that supports update operations (insertions and deletions) in O(1) worst-case block transfers and search operations in O(log_B log n) expected block transfers, where B represents the disk block size and n denotes the number of stored elements. The expected search bound holds with high probability for a large class of (unknown) input distributions. The worst-case search bound of our indexing scheme is O(log_B n) block transfers. Our update and expected search bounds constitute a considerable improvement over the O(log_B n) worst-case block transfer bounds for search and update operations achieved by the B-tree and its numerous variants. This is also verified by an accompanying experimental study.
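The interpolation-search idea underlying the ISB-tree can be shown in its simplest in-memory form (the ISB-tree itself applies it across disk blocks; this sketch is only the classical array version):

    # Classical interpolation search over a sorted array: probe where the key
    # "should" be under a near-uniform key distribution.
    def interpolation_search(arr, key):
        lo, hi = 0, len(arr) - 1
        while lo <= hi and arr[lo] <= key <= arr[hi]:
            if arr[hi] == arr[lo]:                 # avoid division by zero
                break
            # linear interpolation of the probe position
            pos = lo + (hi - lo) * (key - arr[lo]) // (arr[hi] - arr[lo])
            if arr[pos] == key:
                return pos
            if arr[pos] < key:
                lo = pos + 1
            else:
                hi = pos - 1
        return lo if lo < len(arr) and arr[lo] == key else -1

    data = list(range(0, 1000, 7))                 # near-uniform keys
    print(interpolation_search(data, 511))         # 511 = 7*73, so index 73

On near-uniform data the probe lands close to the key immediately, which is the source of the doubly logarithmic expected behavior.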
Abstract: This paper presents results from the IST Phosphorus project that studies and implements an optical Grid test-bed. A significant part of this project addresses scheduling and routing algorithms and dimensioning problems of optical grids. Given the high costs involved in setting up actual hardware implementations, simulations are a viable alternative. In this paper we present an initial study which proposes models that reflect real-world grid application traffic characteristics, appropriate for simulation purposes. We detail several such models and the corresponding process to extract the model parameters from real grid log traces, and verify that synthetically generated jobs provide a realistic approximation of the real-world grid job submission process.
Abstract: In this work we study the combination of multicost
routing and adjustable transmission power in wireless
ad hoc networks, so as to obtain dynamic energy- and
interference-efficient routes to optimize network performance.
In multi-cost routing, a vector of cost parameters is
assigned to each network link, from which the cost vectors
of candidate paths are calculated. Only at the end are these parameters combined in various optimization functions,
corresponding to different routing algorithms, for selecting
the optimal path. The multi-cost routing problem is a
generalization of the multi-constrained problem, where no
constraints exist, and is also significantly more powerful
than single-cost routing. Since energy is an important
limitation of wireless communications, the cost parameters
considered are the number of hops, the interference caused,
the residual energy and the transmission power of the
nodes on the path; other parameters could also be included,
as desired. We assume that nodes can use power control to
adjust their transmission power to the desired level. The
experiments conducted show that the combination of multicost
routing and adjustable transmission power can lead to
reduced interference and energy consumption, improving
network performance and lifetime.
Abstract: Wireless sensor networks can be very useful in applications that require the detection of crucial events in physical environments subjected to critical conditions, and the propagation of data reporting their occurrence to a control center. In this paper we propose jWebDust, a generic and modular application environment for developing and managing applications that are based on wireless sensor networks. Our software architecture provides a range of services that allow the creation of customized, easy-to-administer applications with minimum implementation effort. We move beyond the "networking-centric" view of sensor network research and focus on how the end user (administrator, control center supervisor, etc.) will visualize and interact with the system.
We here present its open architecture and the most important design decisions, and discuss its distinct features and functionalities. jWebDust allows heterogeneous components to interoperate (real-world sensor networks will rarely be homogeneous) and allows the integrated management and control of multiple such networks, by also defining web-based mechanisms to visualize the network state and the results of queries, as well as a means to inject queries into the network. The architecture also illustrates how existing protocols for various services, such as tree construction, query routing, etc., can interoperate within a larger framework.
Abstract: In this book chapter we will consider key establishment protocols for wireless sensor networks.
Several protocols have been proposed in the literature for the establishment of a shared group key for wired networks.
The choice of a protocol depends on whether the key is established by one of the participants (and then transported to the other(s)) or agreed upon among the participants, and on the underlying cryptographic mechanisms (symmetric or asymmetric). Clearly, the design of key establishment protocols for sensor networks must deal with problems and challenges that do not exist in wired networks. To name a few: wireless links are particularly vulnerable to eavesdropping; sensor devices can be captured (and the secrets they contain can be compromised); and in many upcoming wireless sensor networks, nodes cannot rely on the presence of an online trusted server (whereas most standardized authentication and key establishment protocols do rely on such a server).
In particular, we consider five distributed group key establishment protocols. Each of these protocols applies a different algorithmic technique that makes it more suitable for (i) static sensor networks, (ii) sensor networks where nodes enter sleep mode (i.e. dynamic, with a low rate of updates on the connectivity graph) and (iii) fully dynamic networks where nodes may even be mobile. On the other hand, the common factor among all five protocols is that they can be applied in dynamic groups (where members can be excluded or added) and provide forward and backward secrecy. All these protocols are based on the Diffie-Hellman key exchange algorithm and constitute natural extensions of it to the multiparty case.
Abstract: This paper addresses the efficient processing of top-k queries in wide-area distributed data repositories where the index lists for the attribute values (or text terms) of a query are distributed across a number of data peers, and where the computational costs include network latency, bandwidth consumption, and local peer work. We present KLEE, a novel algorithmic framework for distributed top-k queries, designed for high performance and flexibility. KLEE makes a strong case for approximate top-k algorithms over widely distributed data sources. It shows how great gains in efficiency can be enjoyed at low result-quality penalties. Further, KLEE affords the query-initiating peer the flexibility to trade off result quality against expected performance, and to trade off the number of communication phases engaged during query execution against network bandwidth performance. We have implemented KLEE and related algorithms and conducted a comprehensive performance evaluation. Our evaluation employed real-world and synthetic large web-data collections and query benchmarks. Our experimental results show that KLEE can achieve major performance gains in terms of network bandwidth, query response times, and much lighter peer loads, all with small errors in result precision and other result-quality measures.
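For intuition about the setting, here is a hedged, simplified sketch of approximate distributed top-k aggregation over per-peer sorted score lists; it is a plain threshold-style scheme with batched communication phases, not KLEE's actual algorithm.

    # Approximate distributed top-k over per-peer score lists (illustrative).
    import heapq

    def topk(peer_lists, k, batch=2):
        """peer_lists: per-peer lists of (doc_id, score), sorted by score
        descending; 'batch' entries are fetched per peer per phase."""
        seen = {}                           # doc_id -> partial aggregated score
        cursors = [0] * len(peer_lists)
        while True:
            progressed = False
            for p, lst in enumerate(peer_lists):
                for _ in range(batch):      # one communication phase
                    if cursors[p] < len(lst):
                        doc, s = lst[cursors[p]]
                        seen[doc] = seen.get(doc, 0.0) + s
                        cursors[p] += 1
                        progressed = True
            # upper bound on the score of any entirely unseen document
            threshold = sum(lst[c][1] if c < len(lst) else 0.0
                            for lst, c in zip(peer_lists, cursors))
            top = heapq.nlargest(k, seen.items(), key=lambda kv: kv[1])
            # approximate stop: the k-th partial score beats all unseen docs
            if (len(top) >= k and top[-1][1] >= threshold) or not progressed:
                return top

    lists = [[("d1", 0.9), ("d2", 0.8), ("d3", 0.1)],
             [("d2", 0.7), ("d3", 0.6), ("d1", 0.2)]]
    print(topk(lists, k=1))                 # [('d2', 1.5)]

The stopping rule is approximate because already-seen documents may still grow their partial scores; trading this residual error for fewer phases and less bandwidth is exactly the kind of knob the abstract describes.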
Abstract: Numerous smart city testbeds and system deployments have surfaced around the world, aiming to provide services over unified large heterogeneous IoT infrastructures. Although we have achieved new scales in smart city installations and systems, so far the focus has been on providing diverse sources of data to smart city services consumers, while neglecting to provide ways to simplify making good use of them. We believe that knowledge creation in smart cities through data annotation, supported in both an automated and a crowdsourced manner, is an aspect that will bring additional value to smart cities. We present here our approach, aiming to utilize an existing smart city deployment and the OrganiCity software ecosystem. We discuss key challenges along with characteristic use cases, and report on our design and implementation, along with preliminary results.
Abstract: A number of Future Internet testbeds are being deployed around the world for research experimentation and development. SmartSantander is an infrastructure of massive scale deployed inside a city centre. We argue that utilising the concept of participatory sensing can augment the functionality and potential use-cases of such a system and be beneficial in a number of scenarios. We discuss the concept of extending SmartSantander with participatory sensing through the use of volunteers' smartphones. We report on our design and implementation, which allows developers to write their code for Android devices and then deploy and execute it on the devices automatically through our system. We have tested our implementation in a number of scenarios in two cities with the help of volunteers, with promising results; the data collected enhance those gathered by the fixed infrastructure both quantitatively and qualitatively across the cities, while also engaging citizens more directly.
Abstract: Although we have reached new levels in smart city installations and systems, efforts so far have focused on providing diverse sources of data to smart city services consumers while neglecting to provide ways to simplify making good use of them. In this context, one first step that will bring added value to smart cities is knowledge creation through anomaly detection and data annotation, supported in both an automated and a crowdsourced manner. We present here LearningCity, our solution that has been validated over an existing smart city deployment in Santander, and the OrganiCity experimentation-as-a-service ecosystem. We discuss key challenges along with characteristic use cases, and report on our design and implementation, together with some preliminary results derived from combining large smart city datasets with machine learning.
Abstract: Internet of Things technologies are considered the next big
step in Smart Building installations. Although such technologies have
been widely studied in simulation and experimental scenarios, it is not so
obvious how the problems of real-world installations should be dealt with. In
this work we deploy IoT devices for sensing and control in a multi-office
space and employ technologies such as CoAP, RESTful interfaces and
Semantic Descriptions to integrate them with the Web. We report our
research goals, the challenges we faced, the decisions we made and the
experience gained from the design, deployment and operation of all the
hardware and software components that compose our system.
Abstract: The adoption of technologies like the IoT in urban environments, together with the intensive use of smartphones, is driving transformation towards smart cities. Under this perspective, Experimentation-as-a-Service within OrganiCity aims to create an experimental facility with technologies, services, and applications that simplify innovation within urban ecosystems. We discuss here tools that facilitate experimentation, implementing ways to organize, execute, and administer experimentation campaigns in a smart city context. We discuss the benefits of our framework, presenting some preliminary results. This is the first time such tools are paired with large-scale smart city infrastructures, enabling both city-scale experimentation and cross-site experimentation.
Abstract: We briefly present the design and architecture of a system that aims to simplify the process of organizing, executing and administering crowdsensing campaigns in a smart city context, over smartphones volunteered by citizens. We built our system on top of an Android app substrate on the end-user level, which enables us to utilize smartphone resources. Our system allows researchers and other developers to manage and distribute their "mini" smart city applications, gather data and publish their results through the OrganiCity smart city platform. We believe this is the first time such a tool is paired with a large-scale IoT infrastructure, to enable truly city-scale IoT and smart city experimentation.
Abstract: Flow control is the main technique currently used to prevent some of the offered traffic from entering a communication network, in order to avoid congestion. A challenging aspect of flow control is how to treat all sessions "fairly" when it is necessary to turn traffic away from the network. In this work, we show how to extend the theory of max-min fair flow control to the case where priorities, which are sensitive to traffic levels, are assigned to different varieties of traffic. We examine priorities expressible in the general form of increasing functions of rates, and consider in combination the more elaborate case with inescapable upper and lower bounds on the rates of traffic sessions. We offer optimal, priority bottleneck algorithms, which iteratively adjust the session rates in order to meet a new condition of max-min fairness under priorities and rate bounds. In our setting, which is realistic for today's technology of guaranteed quality of service, traffic may be turned away not only to avoid congestion, but also to respect particular minimum requirements on bandwidth. Moreover, we establish lower bounds on the competitiveness of network-oblivious schemes compared to optimal schemes with complete knowledge of the network structure. Our theory significantly extends the classical theory of max-min fair flow control [2]. Moreover, our results on rejected traffic are fundamentally different from those related to call control and bandwidth allocation, since not only do we wish to optimize the number and rates of accepted sessions, but we also require priority fairness.
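For reference, the classical baseline that this work generalizes, max-min fairness via progressive filling, can be sketched as follows (priorities and rate bounds, the paper's contribution, are omitted):

    # Progressive filling for classical max-min fairness (illustrative).
    def max_min_fair(links, sessions, step=1e-3):
        """links: {link: capacity}; sessions: {sid: [links it crosses]}.
        All unfrozen rates grow together; sessions crossing a saturated
        link are frozen at that bottleneck."""
        rate = {sid: 0.0 for sid in sessions}
        frozen = set()
        while len(frozen) < len(sessions):
            for sid in sessions:
                if sid not in frozen:
                    rate[sid] += step
            for link, cap in links.items():
                load = sum(rate[sid] for sid, ls in sessions.items()
                           if link in ls)
                if load >= cap - 1e-9:          # link saturated
                    for sid, ls in sessions.items():
                        if link in ls:
                            frozen.add(sid)
        return rate

    links = {"a": 1.0, "b": 2.0}
    sessions = {"s1": ["a"], "s2": ["a", "b"], "s3": ["b"]}
    # s1 = s2 ~ 0.5 (bottleneck a); s3 ~ 1.5 (bottleneck b)
    print(max_min_fair(links, sessions))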
Abstract: We extend here the Population Protocol model of Angluin et al. [2004, 2006] in order to model more powerful networks of resource-limited agents that are possibly mobile. The main feature of our extended model, called the Mediated Population Protocol (MPP) model, is to allow the edges of the communication graph to have states that belong to a constant-size set. We then allow the protocol rules for pairwise interactions to modify the corresponding edge state. Protocol specifications preserve both uniformity and anonymity. We first focus on the computational power of the MPP model on complete communication graphs with initially identical edges. We provide the following exact characterization for the class MPS of stably computable predicates: a predicate is in MPS iff it is symmetric and is in NSPACE(n^2). We finally ignore the input to the agents and study the MPP's ability to compute graph properties.
Abstract: The promises inherent in users coming together to form data sharing network communities bring to the foreground new problems formulated over such dynamic, ever-growing computing, storage, and networking infrastructures. A key open challenge is to harness these highly distributed resources toward the development of an ultra-scalable, efficient search engine. From a technical viewpoint, any acceptable solution must fully exploit all available resources, dictating the removal of any centralized points of control, which can also readily lead to performance bottlenecks and reliability/availability problems. Equally importantly, however, a highly distributed solution can also facilitate pluralism in informing users about internet content, which is crucial in order to preclude the formation of information-resource monopolies and the biased visibility of content from economically powerful sources. To meet these challenges, the work described here puts forward MINERVA∞, a novel search engine architecture designed for scalability and efficiency. MINERVA∞ encompasses a suite of novel algorithms, including algorithms for creating data networks of interest, placing data on network nodes, load balancing, top-k algorithms for retrieving data at query time, and replication algorithms for expediting top-k query processing. We have implemented the proposed architecture and we report on our extensive experiments with real-world, web-crawled, and synthetic data and queries, showcasing the scalability and efficiency traits of MINERVA∞.
Abstract: We consider the problem of computing minimum congestion, fault-tolerant, redundant assignments of messages to faulty, parallel delivery channels. In particular, we are given a set K of faulty channels, each having an integer capacity c_i and failing independently with probability f_i. We are also given a set M of messages to be delivered over K and a fault-tolerance constraint (1 - ε), and we seek a redundant assignment φ that minimizes congestion Cong(φ), i.e. the maximum channel load, subject to the constraint that, with probability no less than (1 - ε), all the messages have a copy on at least one active channel. We present a polynomial-time 4-approximation algorithm for identical capacity channels and arbitrary message sizes, and a 2[ln(|K|/ε)/ln(1/f_max)]-approximation algorithm for related capacity channels and unit size messages. Both algorithms are based on computing a collection {K_1, ..., K_ν} of disjoint channel subsets such that, with probability no less than (1 - ε), at least one channel is active in each subset. The objective is to maximize the sum of the minimum subset capacities. Since the exact version of this problem is NP-complete, we provide a 2-approximation algorithm for identical capacities, and a polynomial-time (8+o(1))-approximation algorithm for arbitrary capacities.
Abstract: In this work, we propose an obstacle model to be used while simulating wireless sensor networks. To the best of our knowledge, this is the first time such an integrated and systematic obstacle model appears. We define several types of obstacles that can be found inside the deployment area of a wireless sensor network and provide a categorization of these obstacles based on their nature (physical and communication obstacles), their shape, and their tendency to change over time. In light of this obstacle model we conduct extensive simulations in order to study the effects of obstacles on the performance of representative data propagation protocols for wireless sensor networks. Our findings show that obstacle presence has a significant impact on protocol performance. Also, we demonstrate the effect of each obstacle type on the different protocols, thus providing the network designer with advice on which protocol is best to use.
Abstract: Recent rapid developments in micro-electro-mechanical systems (MEMS), wireless communications and digital electronics have already led to the development of tiny, low-power, low-cost sensor devices. Such devices integrate sensing, limited data processing and restricted communication capabilities. Each sensor device individually might have small utility; however, the effective distributed co-ordination of large numbers of such devices can lead to the efficient accomplishment of large sensing tasks. Large numbers of sensors can be deployed in areas of interest (such as inaccessible terrains or disaster places) and use self-organization and collaborative methods to form an ad-hoc network. We note, however, that the efficient and robust realization of such large, highly dynamic, complex, non-conventional networking environments is a challenging technological and algorithmic task, because of the unique characteristics and severe limitations of these devices. This talk will present and discuss several important aspects of the design, deployment and operation of sensor networks. In particular, we provide a brief description of the technical specifications of state-of-the-art sensor devices, a discussion of possible models used to abstract such networks, a discussion of some key algorithmic design techniques (like randomization, adaptation and hybrid schemes), a presentation of representative protocols for sensor networks for important problems including data propagation, collision avoidance and energy balance, and an evaluation of crucial performance properties (correctness, efficiency, fault-tolerance) of these protocols, by both analytic and simulation means.
Abstract: In this work we consider temporal graphs, i.e. graphs each edge of which is assigned a set of discrete time-labels drawn from a set of integers. The labels of an edge indicate the discrete moments in time at which the edge is available. We also consider temporal paths in a temporal graph, i.e. paths whose edges are assigned a strictly increasing sequence of labels. Furthermore, we assume the uniform case (UNI-CASE), in which every edge of a graph is assigned exactly one time label from a set of integers and the time labels assigned to the edges of the graph are chosen randomly and independently, with the selection following the uniform distribution. We call graphs that satisfy the UNI-CASE uniform random temporal graphs. We begin by deriving the expected number of temporal paths of a given length in the uniform random temporal clique. We define the term temporal distance of two vertices, which is the arrival time, i.e. the time-label of the last edge, of the temporal path that connects those vertices and has the smallest arrival time amongst all temporal paths that connect those vertices. We then propose and study two statistical properties of temporal graphs. One is the maximum expected temporal distance, which is, as the term indicates, the maximum of all expected temporal distances in the graph. The other is the temporal diameter which, loosely speaking, is the expectation of the maximum temporal distance in the graph. We derive the maximum expected temporal distance of a uniform random temporal star graph, as well as an upper bound on both the maximum expected temporal distance and the temporal diameter of the normalized version of the uniform random temporal clique, in which the largest available time-label equals the number of vertices. Finally, we provide an algorithm that solves an optimization problem on a specific type of temporal (multi)graphs on two vertices.
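The temporal distance defined above (earliest arrival over strictly label-increasing paths) admits a simple one-pass computation in the one-label-per-edge UNI-CASE, sketched here for illustration:

    # Earliest-arrival temporal distances from a source (illustrative).
    def temporal_distance(n, temporal_edges, source):
        """temporal_edges: list of (u, v, t) undirected edges available at
        time t (one label per edge, as in UNI-CASE). Processing edges in
        increasing label order suffices because path labels must strictly
        increase; arrival[v] stores the label of the last edge used."""
        INF = float("inf")
        arrival = [INF] * n
        arrival[source] = 0            # the source is reached before any edge
        for u, v, t in sorted(temporal_edges, key=lambda e: e[2]):
            if arrival[u] < t and t < arrival[v]:
                arrival[v] = t
            if arrival[v] < t and t < arrival[u]:
                arrival[u] = t
        return arrival

    edges = [(0, 1, 2), (1, 2, 3), (0, 2, 1)]
    print(temporal_distance(3, edges, 0))   # [0, 2, 1]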
Abstract: We propose a class of novel energy-efficient multi-cost routing algorithms for wireless mesh networks, and evaluate their performance. In multi-cost routing, a vector of cost parameters is assigned to each network link, from which the cost vectors of candidate paths are calculated using appropriate operators. In the end these parameters are combined in various optimization functions, corresponding to different routing algorithms, for selecting the optimal path. We evaluate the performance of the proposed energy-aware multi-cost routing algorithms under two models. In the network evacuation model, the network starts with a number of packets that have to be transmitted and an amount of energy per node, and the objective is to serve the packets in the smallest number of steps, or serve as many packets as possible before the energy is depleted. In the dynamic one-to-one communication model, new data packets are generated continuously and nodes are capable of recharging their energy periodically, over an infinite time horizon, and we are interested in the maximum achievable steady-state throughput, the packet delay, and the energy consumption. Our results show that energy-aware multi-cost routing increases the lifetime of the network and achieves better overall network performance than other approaches.
Abstract: We present an architecture for implementing optical
buffers, based on the feed-forward-buffer concept, that can truly
emulate input queuing and accommodate asynchronous packet
and burst operation. The architecture uses wavelength converters
and fixed-length delay lines that are combined to form either a
multiple-input buffer or a shared buffer. Both architectures are
modular, allowing the expansion of the buffer at a cost that grows
logarithmically with the buffer depth, where the cost is measured in terms of the number of switching elements and wavelength converters employed. The architectural design also provides
a tradeoff between the number of wavelength converters and their
tunability. The buffer architectures proposed are complemented
with scheduling algorithms that can guarantee lossless communication
and are evaluated using physical-layer simulations to
obtain their performance in terms of bit-error rate and achievable
buffer size.
Abstract: We present and discuss challenges and solutions posed by the design of an
adaptable network infrastructure of tiny artifacts. Such artifacts are characterized by
severe limitations in computational power, communications capacity and energy;
nevertheless they must realize a communication infrastructure able to deliver
services to the end-users in a very dynamic and challenging environment. Namely, we present a unifying scenario for the activities of the FRONTS project
(www.fronts.cti.gr). The aim of the unifying scenario is to show how the results
achieved in the project can be exploited to build such a communication
infrastructure.
Abstract: In this work, we study the fundamental naming and counting problems (and some variations) in networks that are anonymous, unknown, and possibly dynamic. In counting, nodes must determine the size n of the network, and in naming they must end up with unique identities. By anonymous we mean that all nodes begin from identical states
apart possibly from a unique leader node and by unknown that nodes
have no a priori knowledge of the network (apart from some minimal
knowledge when necessary) including ignorance of n. Network dynamicity is modeled by the 1-interval connectivity model [KLO10], in which communication is synchronous and a (worst-case) adversary chooses the edges of every round subject to the condition that each instance is connected. We first focus on static networks with broadcast where we prove that, without a leader, counting is impossible to solve and that naming is impossible to solve even with a leader and even if nodes know n. These impossibilities carry over to dynamic networks as well. We also show that a unique leader suffices in order to solve counting in linear time.
Then we focus on dynamic networks with broadcast. We conjecture that
dynamicity renders nontrivial computation impossible. In view of this,
we let the nodes know an upper bound on the maximum degree that will
ever appear and show that in this case the nodes can obtain an upper
bound on n. Finally, we replace broadcast with one-to-each, in which a
node may send a different message to each of its neighbors. Interestingly,
this natural variation is proved to be computationally equivalent to a
full-knowledge model, in which unique names exist and the size of the
network is known.
Abstract: We propose local mechanisms for efficiently marking the broader network region around obstacles, so that data propagation can avoid them early enough, towards near-optimal routing paths. In particular, our methods perform an online identification of sensors lying near obstacle boundaries, which then appropriately emit beacon messages in the network towards establishing efficient obstacle avoidance paths. We provide a variety of beacon dissemination schemes that satisfy different trade-offs between protocol overhead and performance. Compared to greedy, face routing and trust-based methods in the state of the art, our methods achieve significantly shorter propagation paths, while introducing much lower overhead and converging faster to near-optimality.
Abstract: Geographic routing scales well in sensor networks, mainly due to its stateless nature. Still, most algorithms are concerned with finding some path, while optimality of the path is difficult to achieve. In this paper we present a novel geographic routing algorithm with obstacle avoidance properties. It aims at finding the optimal path from a source to a destination when some areas of the network are unavailable for routing due to low local density or obstacle presence. It locally and gradually over time (but, as we show, quite fast) evaluates and updates the suitability of the previously used paths and ignores non-optimal paths for further routing. By means of extensive simulations, we compare its performance to existing state-of-the-art protocols, showing that it performs much better in terms of path length, thus minimizing latency, space, overall traffic and energy consumption.
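For context, the greedy forwarding step that geographic routing protocols, including obstacle-avoiding ones, start from can be sketched in a few lines; the recovery and path-suitability logic that constitutes the papers' contribution is not reproduced here.

    # Greedy geographic forwarding step (bare-bones illustration).
    import math

    def greedy_next_hop(node, neighbors, dest, pos):
        """Forward to the neighbor closest to the destination, if it improves
        on the current node; returns None at a local minimum (void/obstacle)."""
        def d(a, b):
            return math.dist(pos[a], pos[b])
        best = min(neighbors, key=lambda v: d(v, dest), default=None)
        if best is not None and d(best, dest) < d(node, dest):
            return best
        return None    # greedy fails; recovery/obstacle handling must kick in

    pos = {"s": (0, 0), "a": (1, 1), "b": (2, 0), "t": (4, 0)}
    print(greedy_next_hop("s", ["a", "b"], "t", pos))   # "b"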
Abstract: We address the call control problem in wireless cellular networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region (cell) can communicate simultaneously with other users of the network using distinct frequencies. The available frequency spectrum is limited; hence, its management should be done efficiently. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate in a cellular network, to maximize the number of users that communicate without signal interference. We study the online version of the problem in cellular networks using competitive analysis and present new upper and lower bounds.
Abstract: Wireless Sensor Networks (WSNs) constitute a recent and promising technology that is widely applicable. Due to the applicability of this technology and its obvious importance for the modern distributed computational world, the formal scientific foundation of its inherent laws becomes essential. As a result, many new computational models for WSNs have been proposed. Population Protocols (PPs) are a special category of such systems, mainly identified by three distinctive characteristics: the sensor nodes (agents) move passively, that is, they cannot control the underlying mobility pattern; the memory available to each agent is restricted; and the agents interact in pairs. It has been proven that a predicate is computable by the PP model iff it is semilinear, and the class of semilinear predicates is a fairly small class. In this work, our basic goal is to enhance the PP model in order to improve its computational power. We first make the assumption that not only the nodes but also the edges of the communication graph can store restricted states. In a complete graph of n nodes this is like having added O(n^2) additional memory cells which are only read and written by the endpoints of the corresponding edge. We prove that the new model, called the Mediated Population Protocol model, can operate as a distributed nondeterministic Turing machine (TM) that uses all the available memory; the only difference from a usual TM is that this one computes only symmetric languages. More formally, we establish that a predicate is computable by the new model iff it is symmetric and belongs to NSPACE(n^2). Moreover, we study the ability of the new model to decide graph languages (for general graphs). The next step is to ignore the states of the edges and provide another enhancement directly to the PP model. The assumption now is that the agents are multitape TMs equipped with infinite memory, which can perform internal computation and interact with other agents, and we define space-bounded computations. We call this the Passively mobile Machines model. We prove that if each agent uses at most f(n) memory, for f(n) = Ω(log n), then a predicate is computable iff it is symmetric and belongs to NSPACE(n f(n)). We also show that this is not the case for f(n) = o(log n). Based on these, we show that for f(n) = Ω(log n) there exists a space hierarchy like the one for classical symmetric TMs. We also show that the latter is not the case for f(n) = o(log log n), since there the corresponding class collapses to the class of semilinear predicates, and finally that for f(n) = Ω(log log n) the class becomes a proper superset of the semilinear predicates. We leave open the problem of characterizing the classes for f(n) = Ω(log log n) and f(n) = o(log n).
Abstract: Motivated by the problem of efficiently collecting data from wireless sensor networks via a mobile sink, we present an accelerated random walk on Random Geometric Graphs. Random walks in wireless sensor networks can serve as fully local, very simple strategies for sink motion that significantly reduce energy dissipation, but they introduce higher latency in the data collection process. While in most cases random walks are studied on graphs like G_{n,p} and the Grid, we define and experimentally evaluate our newly proposed random walk on the Random Geometric Graphs model, which more accurately abstracts spatial proximity in a wireless sensor network. We call this new random walk the γ-stretched random walk, and compare it to two known random walks; its basic idea is to favour visiting distant neighbours of the current node towards reducing node overlap. We also define a new performance metric called Proximity Cover Time which, along with other metrics such as visit overlap statistics and proximity variation, we use to evaluate the performance properties and features of the various walks.
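The distance-biased idea can be illustrated with a single walk step (the exact weighting of the γ-stretched walk is the paper's; the bias exponent gamma below is an assumed illustrative choice):

    # One step of a distance-biased random walk (illustrative sketch).
    import math, random

    def biased_step(current, neighbors, pos, gamma=2.0):
        """Pick the next node with probability proportional to
        d(current, v)**gamma, favouring distant neighbours to reduce
        node overlap."""
        weights = [math.dist(pos[current], pos[v]) ** gamma for v in neighbors]
        return random.choices(neighbors, weights=weights, k=1)[0]

    pos = {0: (0.0, 0.0), 1: (0.1, 0.0), 2: (0.9, 0.1)}
    random.seed(1)
    # with these weights the distant neighbor 2 is picked ~99% of the time
    print(biased_step(0, [1, 2], pos))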
Abstract: We propose a novel, generic definition of probabilistic schedulers for population protocols. We then identify the consistent probabilistic schedulers and prove that any consistent scheduler that assigns a non-zero probability to any transition i -> j, where i and j are distinct configurations, is fair with probability 1. This is a new theoretical framework that aims to simplify proving specific probabilistic schedulers fair. In this paper we propose two new schedulers, the State Scheduler and the Transition Function Scheduler. Both possess the significant capability of being protocol-aware, i.e. they can assign transition probabilities based on information concerning the underlying protocol. Using our framework we prove that the proposed schedulers, as well as the Random Scheduler defined by Angluin et al., are all fair with probability 1. We also define and study equivalence between schedulers w.r.t. performance (time equivalent schedulers) and correctness (computationally equivalent schedulers). Surprisingly, we prove the following.
1. The protocol-oblivious (or agnostic) Random Scheduler is not time equivalent to the State and Transition Function Schedulers, although all three are fair probabilistic schedulers (with probability 1). To prove the statement we study the performance of the One-Way Epidemic Protocol (OR Protocol) under these schedulers. To illustrate the unexpected performance variations of protocols under different fair probabilistic schedulers, we additionally modify the State Scheduler to obtain a fair probabilistic scheduler, called the Modified Scheduler, that may be adjusted to lead the One-Way Epidemic Protocol to arbitrarily bad performance.
2. The Random Scheduler is not computationally equivalent to the Transition Function Scheduler. To prove the statement we study the Majority Protocol w.r.t. correctness under the Transition Function Scheduler. It turns out that the minority may win with constant probability under the same initial margin for which the majority w.h.p. wins under the Random Scheduler (as proven by Angluin et al.).
Abstract: In this paper we discuss different switch architectures, focusing mainly on optical buffering. We investigate an all-optical buffer architecture comprising cascaded stages of quantum-dot semiconductor optical amplifier-based tunable wavelength converters, at 160 Gb/s. We also propose an optical buffer with multi-wavelength converters based on quantum-dot semiconductor optical amplifiers. We present multistage switching fabrics with optical buffers, where the optical buffers are based on fibre delay lines and are located in the first stage. Finally, we describe a photonic asynchronous packet switch and show that the employment of a few optical buffer stages to complement the electronic ones significantly improves the switch performance. We also propose two asynchronous optical packet switching node architectures, where efficient contention resolution is based on controllable optical buffers and tunable wavelength converters (TWCs).
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. The Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → N such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the minimum distance of u, v in G is two. The number of discrete frequencies and the range of frequencies used are called order and span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V| = n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case λ ≥ 4Δ + 50.
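The two distance constraints in this definition are easy to sanity-check directly; a minimal validity test (helper names ours, adjacency given as a dict of neighbour sets):

```python
from itertools import combinations

# Check the radiocoloring constraints from the definition above:
# adjacent vertices get frequencies differing by >= 2, and vertices at
# distance exactly two get frequencies differing by >= 1.

def is_radiocoloring(adj, phi):
    dist2 = {u: set() for u in adj}
    for u in adj:
        for v in adj[u]:
            for w in adj[v]:
                if w != u and w not in adj[u]:
                    dist2[u].add(w)
    for u, v in combinations(adj, 2):
        if v in adj[u] and abs(phi[u] - phi[v]) < 2:
            return False
        if v in dist2[u] and abs(phi[u] - phi[v]) < 1:
            return False
    return True

# Path a-b-c: its endpoints are at distance two.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(is_radiocoloring(adj, {'a': 0, 'b': 2, 'c': 0}))  # False (|0-0| < 1)
print(is_radiocoloring(adj, {'a': 0, 'b': 2, 'c': 4}))  # True, span 4
```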
Abstract: We consider the offline version of the routing and wavelength assignment (RWA) problem in transparent all-optical networks. In such networks, and in the absence of regenerators, the signal quality of transmission degrades due to physical layer impairments. Because of certain physical effects, routing choices made for one lightpath affect and are affected by the choices made for the other lightpaths. This interference among the lightpaths is particularly difficult to formulate in an offline algorithm since, in this version of the problem, we start without any established connections and the lightpaths to be established are themselves the variables of the problem. We initially present an algorithm for solving the pure (without impairments) RWA problem based on an LP-relaxation formulation that tends to yield integer solutions. Then, we extend this algorithm and present two impairment-aware (IA) RWA algorithms that account for the interference among lightpaths in their formulation. The first algorithm takes the physical layer indirectly into account by limiting the impairment-generating sources. The second algorithm uses noise variance-related parameters to directly account for the most important physical impairments. The objective of the resulting cross-layer optimization problem is not only to serve the connections using a small number of wavelengths (network layer objective), but also to select lightpaths that have acceptable quality of transmission (physical layer objective). Simulation experiments using realistic network, physical layer, and traffic parameters indicate that the proposed algorithms can solve real problems within acceptable time.
Abstract: We study the fundamental problem 2NASH of computing a Nash equilibrium (NE) point in bimatrix games. We start by proposing a novel characterization of the NE set, via a bijective map to the solution set of a parameterized quadratic program (NEQP), whose feasible space is the highly structured set of correlated equilibria (CE). This is, to our knowledge, the first characterization of the subset of CE points that are in “1–1” correspondence with the NE set of the game, and contributes to the quite lively discussion on the relation between the spaces of CE and NE points in a bimatrix game (e.g., [15], [26] and [33]).
We proceed by studying a property of bimatrix games, which we call mutual concavity (MC), that assures polynomial-time tractability of 2NASH, due to the convexity of a proper parameterized quadratic program (either NEQP, or a parameterized variant of the Mangasarian & Stone formulation [23]) for a particular value of the parameter. We prove various characterizations of the MC-games, which eventually lead us to the conclusion that this class is equivalent to the class of strategically zero-sum (SZS) games of Moulin & Vial [25]. This gives an alternative explanation of the polynomial-time tractability of 2NASH for these games, not depending on the solvability of zero-sum games. Moreover, the recognition of the MC-property for an arbitrary game is much faster than the recognition of the SZS-property. This, along with the comparable time-complexity of linear programs and convex quadratic programs, leads us to a much faster algorithm for 2NASH in MC-games.
We conclude our discussion with a comparison of MC-games (or, SZS-games) to k-rank games, which are known to admit an FPTAS for 2NASH when k is fixed [18], and a polynomial-time algorithm for k = 1 [2]. We finally explore some closedness properties under well-known NE set preserving transformations of bimatrix games.
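For reference, the Mangasarian & Stone program mentioned above can be stated as follows for an m x n bimatrix game (A, B), with Δ_k denoting the probability simplex; its optimal value is 0 and its optima are exactly the Nash equilibria of the game:

```latex
\begin{aligned}
\max_{x,\,y,\,\alpha,\,\beta}\quad & x^{\top}(A+B)\,y - \alpha - \beta \\
\text{s.t.}\quad & A\,y \le \alpha\,\mathbf{1}_m, \qquad
                   B^{\top}x \le \beta\,\mathbf{1}_n, \qquad
                   x \in \Delta_m,\; y \in \Delta_n .
\end{aligned}
```

Mutual concavity, as used above, guarantees that such a parameterized program becomes convex for a particular parameter value, which is what brings 2NASH into polynomial time for MC-games.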
Abstract: The voting rules proposed by Dodgson and Young are both designed to find the alternative closest to being a Condorcet winner, according to two different notions of proximity; the score of a given alternative is known to be hard to compute under either rule.
In this paper, we put forward two algorithms for approximating the Dodgson score: an LP-based randomized rounding algorithm and a deterministic greedy algorithm, both of which yield an O(log m) approximation ratio, where m is the number of alternatives; we observe that this result is asymptotically optimal, and further prove that our greedy algorithm is optimal up to a factor of 2, unless problems in NP have quasi-polynomial time algorithms. Although the greedy algorithm is computationally superior, we argue that the randomized rounding algorithm has an advantage from a social choice point of view.
Further, we demonstrate that computing any reasonable approximation of the ranking produced by Dodgson's rule is NP-hard. This result provides a complexity-theoretic explanation of sharp discrepancies that have been observed in the Social Choice Theory literature when comparing Dodgson elections with simpler voting rules.
Finally, we show that the problem of calculating the Young score is NP-hard to approximate by any factor. This leads to an inapproximability result for the Young ranking.
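By its standard definition, the Dodgson score of an alternative is the minimum number of swaps between adjacent alternatives in voters' rankings needed to make it a Condorcet winner; the brute-force scorer below follows that definition directly (exponential, tiny profiles only; all names ours, not the paper's approximation algorithms):

```python
from collections import deque

# Exact Dodgson score by brute force: BFS over profiles, where one move
# swaps two adjacent alternatives in one voter's ranking, until `cand`
# becomes a Condorcet winner.  Exponential -- illustration only.

def is_condorcet_winner(profile, cand):
    for other in set(profile[0]) - {cand}:
        wins = sum(r.index(cand) < r.index(other) for r in profile)
        if 2 * wins <= len(profile):   # must beat `other` strictly
            return False
    return True

def dodgson_score(profile, cand):
    start = tuple(tuple(r) for r in profile)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        prof, d = queue.popleft()
        if is_condorcet_winner(prof, cand):
            return d
        for i, r in enumerate(prof):            # each voter
            for j in range(len(r) - 1):         # each adjacent swap
                nr = list(r); nr[j], nr[j + 1] = nr[j + 1], nr[j]
                nxt = prof[:i] + (tuple(nr),) + prof[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))

# A Condorcet cycle among {a, b, c}: one swap suffices for `a`.
profile = [('a', 'b', 'c'), ('b', 'c', 'a'), ('c', 'a', 'b')]
print(dodgson_score(profile, 'a'))  # 1
```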
Abstract: An ad-hoc mobile network is a collection of mobile hosts, with
wireless communication capabilities, forming a temporary network
without the aid of any established fixed infrastructure.
In such networks, topological connectivity is subject to frequent,
unpredictable change. Our work focuses on networks with high
rate of such changes to connectivity. For such dynamic changing
networks we propose protocols which exploit the co-ordinated
(by the protocol) motion of a small part of the network.
We show that such protocols can be designed to work
correctly and efficiently even in the case of arbitrary (but not
malicious) movements of the hosts not affected by the protocol.
We also propose a methodology for the analysis of the expected
behaviour of protocols for such networks, based on the assumption that mobile hosts (whose motion is not guided by
the protocol) conduct concurrent random walks in their
motion space.
Our work examines some fundamental problems such as pairwise communication, election of a leader, and counting, and proposes distributed algorithms for each of them. We provide their proofs of correctness, and also give rigorous analysis via combinatorial tools as well as experiments.
Abstract: Understanding the graph structure of the Internet is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining this graph structure can be a surprisingly difficult task, as edges cannot be explicitly queried. For instance, empirical studies of the network of Internet Protocol (IP) addresses typically rely on indirect methods like traceroute to build what are approximately single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a paper by Lakhina et al. [2003] found empirically that the resulting sample is intrinsically biased. Further, in simulations, they observed that the degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson.
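The observation is easy to reproduce qualitatively in a toy setting (a sketch assuming the networkx package; BFS trees stand in for traceroute's shortest-path trees):

```python
import collections
import networkx as nx

# Sample a sparse Poisson-degree random graph through BFS trees
# (shortest-path trees in an unweighted graph) from a few vantage
# points, an idealization of traceroute measurement.

n = 20000
G = nx.fast_gnp_random_graph(n, 8 / n, seed=1)   # mean degree ~8 (Poisson)
sampled = nx.Graph()
for src in range(5):                             # five vantage points
    sampled.add_edges_from(nx.bfs_tree(G, src).edges())

def degree_counts(H):
    return collections.Counter(d for _, d in H.degree())

print("true:", degree_counts(G).most_common(3))
print("sampled:", degree_counts(sampled).most_common(3))
# The sampled graph is dominated by degree-1 nodes with a long tail of
# apparent hubs: a much more skewed shape than the Poisson truth.
```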
Abstract: In this paper we demonstrate the significant impact of (a) the mobility rate and (b) the user density on the performance of routing protocols in ad-hoc mobile networks. In particular, we study the effect of these parameters on two different approaches for designing routing protocols: (a) the route creation and maintenance approach and (b) the "support" approach, which forces a few hosts to move, acting as "helpers" for message delivery. We study one representative protocol for each approach, i.e., AODV for the first approach and RUNNERS for the second. We have implemented the two protocols and performed a large scale and detailed simulation study of their performance. For the first time, we study AODV (and RUNNERS) in the 3D case. The main findings are: the AODV protocol behaves well in networks of high user density and low mobility rate, while its performance drops for sparse networks of highly mobile users. On the other hand, the RUNNERS protocol seems to tolerate well (and in fact benefit from) high mobility rates and low densities. Thus, we are able to partially answer an important conjecture of [Chatzigiannakis et al. 2003].
Abstract: In sponsored search auctions, advertisers compete for a number
of available advertisement slots of different quality. The
auctioneer decides the allocation of advertisers to slots using
bids provided by them. Since the advertisers may act
strategically and submit their bids in order to maximize their
individual objectives, such an auction naturally defines a
strategic game among the advertisers. In order to quantify
the efficiency of outcomes in generalized second price auctions,
we study the corresponding games and present new bounds on their price of anarchy, improving the recent results of Paes Leme and Tardos [16] and Lucier and Paes Leme [13]. For the full information setting, we prove a surprisingly
low upper bound of 1.282 on the price of anarchy
over pure Nash equilibria. Given the existing lower bounds,
this bound indicates that the number of advertisers has almost
no impact on the price of anarchy. The proof exploits
the equilibrium conditions developed in [16] and follows by
a detailed reasoning about the structure of equilibria and a
novel relation of the price of anarchy to the objective value
of a compact mathematical program. For more general equilibrium
classes (i.e., mixed Nash, correlated, and coarse correlated
equilibria), we present an upper bound of 2.310 on
the price of anarchy. We also consider the setting where advertisers have incomplete information about their competitors and prove a price of anarchy upper bound of 3.037 over Bayes-Nash equilibria. In order to obtain the last two bounds, we adapt techniques of Lucier and Paes Leme [13] and significantly extend them with new arguments.
Abstract: We present a variant of the complex multiplication method that generates elliptic curves of cryptographically strong order. Our variant is based on the computation of Weber polynomials, which require significantly less time and space resources than their Hilbert counterparts. We investigate the time efficiency and precision requirements for generating off-line Weber polynomials, and compare our variant to another based on the off-line generation of Hilbert polynomials. We also investigate the efficiency of our variant when the computation of Weber polynomials must be done on-line due to limitations in resources (e.g., hardware devices of limited space). We present trade-offs that could be useful to potential implementors of elliptic curve cryptosystems on resource-limited hardware devices.
Abstract: Random Intersection Graphs is a new class of random graphs introduced in [5], in which each of n vertices randomly and independently chooses some elements from a universal set of cardinality m. Each element is chosen with probability p. Two vertices are joined by an edge iff their chosen element sets intersect. Given n, m so that m = n^α, for any real α different from one, we establish here, for the first time, tight lower bounds p_0(n,m) on the value of p, as a function of n and m, above which the graph G_{n,m,p} is almost certainly Hamiltonian, i.e. it contains a Hamilton cycle almost certainly. Our bounds are tight in the sense that when p is asymptotically smaller than p_0(n,m), then G_{n,m,p} almost surely has a vertex of degree less than 2. Our proof involves new, nontrivial coupling techniques that allow us to circumvent the edge dependencies in the random intersection model. Interestingly, Hamiltonicity appears well below the general thresholds of [4] at which G_{n,m,p} looks like a usual random graph. Thus our bounds are much stronger than the trivial bounds implied by those thresholds.
Our results strongly support the existence of a threshold for Hamiltonicity in G_{n,m,p}.
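For concreteness, a direct sampler for the G_{n,m,p} model as defined above, together with the minimum-degree check that the tightness argument refers to (helper names ours):

```python
import random
from itertools import combinations

# G_{n,m,p}: each of n vertices keeps each of m universal elements
# independently with probability p; vertices are adjacent iff their
# chosen element sets intersect.

def random_intersection_graph(n, m, p, seed=0):
    rng = random.Random(seed)
    chosen = [{e for e in range(m) if rng.random() < p} for _ in range(n)]
    return {(u, v) for u, v in combinations(range(n), 2)
            if chosen[u] & chosen[v]}

n = 100
edges = random_intersection_graph(n, m=50, p=0.05)
degree = [0] * n
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
# A vertex of degree < 2 rules out a Hamilton cycle, which is exactly
# the obstruction below the p_0(n,m) bound.
print("min degree:", min(degree))
```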
Abstract: The Moran process models the spread of genetic mutations through a population. A mutant with relative fitness r is introduced into a population and the system evolves, either reaching fixation (in which every individual is a mutant) or extinction (in which none is). In a widely cited paper (Nature, 2005), Lieberman, Hauert and Nowak generalize the model to populations on the vertices of graphs. They describe a class of graphs (called "superstars"), with a parameter k; superstars are designed to have an increasing fixation probability as k increases. They state that the probability of fixation tends to 1 − r^{−k} as graphs get larger, but we show that this claim is untrue as stated. Specifically, for k = 5, we show that the true fixation probability (in the limit, as graphs get larger) is at most 1 − 1/j(r), where j(r) = Θ(r^4), contrary to the claimed result. We do believe that the qualitative claim of Lieberman et al., that the fixation probability of superstars tends to 1 as k increases, is correct, and that it can probably be proved along the lines of their sketch. We were able to run larger computer simulations than the ones presented in their paper. However, simulations on graphs of around 40,000 vertices do not support their claim. Perhaps these graphs are too small to exhibit the limiting behaviour.
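A Monte Carlo estimate of fixation probabilities, of the kind such simulations rely on, can be sketched as follows (our own minimal implementation of standard Moran dynamics on a directed graph, not the authors' code):

```python
import random

# One Moran step: pick a reproducer with probability proportional to
# fitness (mutants r, residents 1); its offspring replaces a uniformly
# random out-neighbour.  Repeat until fixation or extinction.

def moran_run(adj, r, start, rng, max_steps=10**6):
    mutants, n = {start}, len(adj)
    for _ in range(max_steps):
        if not mutants:
            return 0                          # extinction
        if len(mutants) == n:
            return 1                          # fixation
        total = sum(r if v in mutants else 1.0 for v in adj)
        x = rng.random() * total
        for v in adj:                         # fitness-proportional pick
            x -= r if v in mutants else 1.0
            if x <= 0:
                break
        w = rng.choice(adj[v])
        (mutants.add if v in mutants else mutants.discard)(w)
    return None                               # undecided within budget

rng = random.Random(42)
cycle = {i: [(i + 1) % 30] for i in range(30)}    # directed cycle, n = 30
runs = [moran_run(cycle, r=1.5, start=0, rng=rng) for _ in range(200)]
print("estimated fixation probability:", sum(runs) / len(runs))
```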
Abstract: In routing games, the network performance at equilibrium can be significantly improved if we remove some edges from the network. This counterintuitive fact, widely known as Braess's paradox, gives rise to the (selfish) network design problem, where we seek to recognize routing games suffering from the paradox, and to improve the equilibrium performance by edge removal. In this work, we investigate the computational complexity and the approximability of the network design problem for non-atomic bottleneck routing games, where the individual cost of each player is the bottleneck cost of her path, and the social cost is the bottleneck cost of the network. We first show that bottleneck routing games do not suffer from Braess's paradox either if the network is series-parallel, or if we consider only subpath-optimal Nash flows. On the negative side, we prove that even for games with strictly increasing linear latencies, it is NP-hard not only to recognize instances suffering from the paradox, but also to distinguish between instances for which the Price of Anarchy (PoA) can decrease to 1 and instances for which the PoA is as large as Ω(n^{0.121}) and cannot improve by edge removal. Thus, the network design problem for such games is NP-hard to approximate within a factor of O(n^{0.121−ε}), for any constant ε > 0. On the positive side, we show how to compute an almost optimal subnetwork w.r.t. the bottleneck cost of its worst Nash flow, when the worst Nash flow in the best subnetwork routes a non-negligible amount of flow on all used edges. The running time is determined by the total number of paths, and is quasipolynomial when the number of paths is quasipolynomial.
Abstract: We investigate the practical merits of a parallel priority queue
through its use in the development of a fast and work-efficient parallel
shortest path algorithm, originally designed for an EREW PRAM. Our
study reveals that an efficient implementation on a real supercomputer
requires considerable effort to reduce the cost of communication (which in theory is assumed to take constant time). It turns out that the
most crucial part of the implementation is the mapping of the logical
processors to the physical processing nodes of the supercomputer. We
achieve the required efficient mapping through a new graph-theoretic
result of independent interest: computing a Hamiltonian cycle on a directed
hyper-torus. No such algorithm was known before for the case of
directed hypertori. Our Hamiltonian cycle algorithm allows us to considerably
improve the communication cost and thus the overall performance
of our implementation.
Abstract: In the uniform random intersection graphs model, denoted by G_{n,m,λ}, to each vertex v we assign exactly λ randomly chosen labels from some label set M of m labels, and we connect every pair of vertices that has at least one label in common. In this model, we estimate the independence number α(G_{n,m,λ}) for the wide, interesting range m = n^α, α < 1, and λ = O(m^{1/4}). We also prove the Hamiltonicity of this model via an interesting combinatorial construction. Finally, we give a brief note concerning the independence number of G_{n,m,p} random intersection graphs, in which each vertex chooses labels with probability p.
Abstract: For various random constraint satisfaction problems there is a significant gap between the largest constraint density for which solutions exist and the largest density for which any polynomial time algorithm is known to find solutions. Examples of this phenomenon include random k-SAT, random graph coloring, and a number of other random constraint satisfaction problems. To understand this gap, we study the structure of the solution space of random k-SAT (i.e., the set of all satisfying assignments viewed as a subgraph of the Hamming cube). We prove that for densities well below the satisfiability threshold, the solution space decomposes into an exponential number of connected components, and we give quantitative bounds for their diameter, volume and number.
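On instances small enough to enumerate, this decomposition can be inspected directly; a brute-force sketch of the solution-space components (all names ours):

```python
from itertools import product

# Enumerate all satisfying assignments of a tiny CNF formula and group
# them into connected components of the Hamming cube (edges between
# assignments at Hamming distance 1).

def solution_components(n_vars, clauses):
    def satisfies(a):
        return all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
    remaining = {a for a in product((False, True), repeat=n_vars)
                 if satisfies(a)}
    comps = []
    while remaining:
        frontier, comp = [remaining.pop()], set()
        while frontier:
            a = frontier.pop()
            comp.add(a)
            for i in range(n_vars):
                b = a[:i] + (not a[i],) + a[i + 1:]
                if b in remaining:
                    remaining.discard(b)
                    frontier.append(b)
        comps.append(comp)
    return comps

# (x1 or x2) and (not x1 or x3), clauses as signed variable indices.
comps = solution_components(3, [(1, 2), (-1, 3)])
print(len(comps), "component(s) of sizes", sorted(len(c) for c in comps))
```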
Abstract: In this paper we study the support sizes of evolutionary stable strategies (ESS) in random evolutionary games. We prove that, when the elements of the payoff matrix behave either as uniform or normally distributed random variables, almost all ESS have support sizes o(n), where n is the number of possible types for a player. Our arguments are based exclusively on a stability property that the payoff submatrix indicated by the support of an ESS must satisfy. We then combine this result with a recent result of McLennan and Berg (2005), concerning the expected number of Nash equilibria in normal random bimatrix games, to show that the expected number of ESS is significantly smaller than the expected number of symmetric Nash equilibria of the underlying symmetric bimatrix game.
Abstract: In future transparent optical networks, it is important to consider the impact of physical impairments in the routing and wavelength assignment process, to achieve efficient connection provisioning. In this paper, we use classical multi-objective optimization (MOO) strategies, and particularly genetic algorithms, to jointly solve the impairment-aware RWA (IA-RWA) problem. Fiber impairments are indirectly considered through the insertion of the path length and the number of common hops in the optimization process. It is shown that blocking is greatly improved, while the obtained solutions truly converge towards the Pareto front that constitutes the set of globally optimal solutions. We have evaluated our findings using a Q estimator tool that calculates the signal quality of each path analytically.
Index Terms: RWA, Genetic Algorithm, All-Optical Networks, Multi-Objective Optimization.
Abstract: We propose and evaluate an impairment-aware multi-parametric routing and wavelength assignment algorithm for online traffic in transparent optical networks. In such networks the signal quality of transmission degrades due to physical layer impairments. In the multiparametric approach, a vector of cost parameters is assigned to each link, from which the cost vectors of candidate lightpaths are calculated. In the proposed scheme the cost vector includes impairment generating source parameters, such as the path length, the number of hops, the number of crosstalk sources and other inter-lightpath interfering parameters, so as to indirectly account for the physical layer effects. For a requested connection the algorithm calculates a set of candidate lightpaths, whose quality of transmission is validated using a function that combines the impairment generating parameters. For selecting the lightpath we propose and evaluate various optimization functions that correspond to different IA-RWA algorithms. Our performance results indicate that the proposed algorithms utilize efficiently the available resources and minimize the total accumulated signal degradation on the selected lightpaths, while having low execution times.
Abstract: One oft-cited strategy towards sustainability is improving energy efficiency inside public buildings. In this context, the educational buildings sector presents a very interesting and important case for the monitoring and management of buildings, since it addresses both energy and educational issues. In this work, we present and discuss the hardware IoT infrastructure substrate that provides real-time monitoring in multiple school buildings. We believe that such a system needs to follow an open design approach: rely on hardware-agnostic components that communicate over well-defined open interfaces. We present in detail the design of our hardware components, while also providing insights into the overall system design and a first set of results on their operation. The presented hardware components are utilized as the core hardware devices for GAIA, an EU research project aimed at the educational community. As our system has been deployed and tested in several public school buildings in Greece, we also report on its validation.
Abstract: The objective of this research is to propose two new optical procedures for packet routing and forwarding in the framework of transparent optical networks. The single-wavelength label-recognition and packet-forwarding unit, which represents the central physical constituent of the switching node, is fully described in both cases. The first architecture is a hybrid opto-electronic structure relying on an optical serial-to-parallel converter designed to slow down the label processing. The remaining switching operations are done electronically. The routing system remains transparent for the packet payloads. The second architecture is an all-optical architecture and is based on the implementation of all-optical decoding of the parallelized label. The packet-forwarding operations are done optically. The major subsystems required in both of the proposed architectures are described on the basis of nonlinear effects in semiconductor optical amplifiers. The experimental results are compatible with the integration of the whole architecture. Those subsystems are a 4-bit time-to-wavelength converter, a pulse extraction circuit, an optical wavelength generator, a 3 x 8 all-optical decoder and a packet envelope detector.
Abstract: We demonstrate an optical power limiter using a
semiconductor optical amplifier (SOA)-based interferometric gate
powered by a strong continuous-wave input signal. We present a
detailed theoretical and experimental investigation of the power
limiting characteristics of saturated SOA-based switches, showing
good agreement between theory and experiment.
Abstract: Networked continuous-media applications are emerging at a great pace. Cache memories have long been recognized as a key resource (along with network bandwidth) whose intelligent exploitation can ensure high performance for such applications. Cache memories exist at the continuous-media servers and their proxy servers in the network. Within a server, cache memories exist in a hierarchy (at the host, the storage devices, and at intermediate multi-device controllers). Our research is concerned with how to best exploit these resources in the context of continuous-media servers and, in particular, how to best exploit the available cache memories at the drive, the disk-array controller, and the host levels. Our results determine under which circumstances and system configurations it is preferable to devote the available memory to traditional caching (a.k.a. "data sharing") techniques as opposed to prefetching techniques. In addition, we show how to configure the available memory for optimal performance and optimal cost. Our results show that prefetching techniques are preferable for small caches (such as those expected at the drive level). For very large caches (such as those employed at the host level) caching techniques are preferable. For intermediate cache sizes (such as those at multi-device controllers) a combination of both strategies should be employed.
Abstract: In this paper, we demonstrate optical transparency
in packet formatting and network traffic offered by all-optical
switching devices. Exploiting the bitwise processing capabilities
of these “optical transistors,” simple optical circuits are designed
verifying their independence of packet length, synchronization and packet-to-packet power fluctuations. Devices with these attributes
are key elements for achieving network flexibility, fine
granularity and efficient bandwidth-on-demand use. To this end, a
header/payload separation circuit operating with IP-like packets,
a clock and data recovery circuit handling asynchronous packets
and a burst-mode receiver for bursty traffic are presented. These
network subsystems can find application in future high capacity
data-centric photonic packet switched networks.
Abstract: We propose a new theoretical model for passively mobile Wireless Sensor Networks. We
call it the PALOMA model, standing for PAssively mobile LOgarithmic space MAchines. The main
modification w.r.t. the Population Protocol model [2] is that agents now, instead of being automata, are
Turing Machines whose memory is logarithmic in the population size n. Note that the new model is still
easily implementable with current technology. We focus on complete communication graphs. We define
the complexity class PLM, consisting of all symmetric predicates on input assignments that are stably
computable by the PALOMA model. We assume that the agents are initially identical. Surprisingly, it
turns out that the PALOMA model can assign unique consecutive ids to the agents and inform them
of the population size! This allows us to give a direct simulation of a Deterministic Turing Machine
of O(n log n) space, thus establishing that any symmetric predicate in SPACE(n log n) also belongs to PLM. We next prove that the PALOMA model can simulate the Community Protocol model [15], thus improving the previous lower bound to all symmetric predicates in NSPACE(n log n). Going
one step further, we generalize the simulation of the deterministic TM to prove that the PALOMA
model can simulate a Nondeterministic TM of O(n log n) space. Although providing the same lower
bound, the important remark here is that the bound is now obtained in a direct manner, in the sense
that it does not depend on the simulation of a TM by a Pointer Machine. Finally, by showing that a
Nondeterministic TM of O(n log n) space decides any language stably computable by the PALOMA
model, we end up with an exact characterization for PLM: it is precisely the class of all symmetric
predicates in NSPACE(n log n).
Abstract: We consider path protection in the routing and
wavelength assignment (RWA) problem for impairment
constrained WDM optical networks. The proposed multicost
RWA algorithms select the primary and the backup lightpaths by
accounting for physical layer impairments. The backup lightpath
may either be activated (1+1 protection) or it may be reserved and
not activated, with activation taking place when/if needed (1:1
protection). In the case of 1:1 protection, the period of time during which the backup lightpath's quality of transmission (QoT) remains valid, despite the possible establishment of future connections, should be preserved, so that the backup can be used in case the primary lightpath fails. We show that, by using
the multicost approach for solving the RWA with protection
problem, great benefits can be achieved both in terms of the
connection blocking rate and in terms of the validity period of the
backup lightpath. Moreover, the multicost approach, by providing a set of candidate lightpaths for each source-destination pair,
instead of a single one, offers ease and flexibility in selecting the
primary and the backup lightpaths.
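The heart of any multicost scheme is keeping, for each source-destination pair, only the non-dominated cost vectors of candidate lightpaths; a minimal sketch of that pruning step (the two cost components are illustrative and assumed to be minimized):

```python
# Pareto pruning of candidate lightpath cost vectors: a vector survives
# iff no other candidate is at least as good in every component and
# strictly better in one.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_prune(vectors):
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u is not v)]

# (hop count, accumulated noise variance) -- illustrative components.
candidates = [(3, 0.10), (4, 0.05), (5, 0.20), (3, 0.12)]
print(pareto_prune(candidates))  # [(3, 0.1), (4, 0.05)]
```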
Abstract: In this paper we present an implementation and performance evaluation of a descent algorithm that was proposed in \cite{tsaspi} for the computation of approximate Nash equilibria of non-cooperative bi-matrix games. This algorithm, which achieves the best polynomially computable ε-approximate equilibria to date, is applied here to several problem instances designed so as to avoid the existence of easy solutions. Its performance is analyzed in terms of quality of approximation and speed of convergence. The results demonstrate significantly better performance than the theoretical worst-case bounds, both for the quality of approximation and for the speed of convergence. This motivates further investigation into the intrinsic characteristics of descent algorithms applied to bi-matrix games. We discuss these issues and provide some insights about possible variations and extensions of the algorithmic concept that could lead to further understanding of the complexity of computing equilibria. We also prove a new, significantly better bound on the number of loops required for convergence of the descent algorithm.
Abstract: We present a network operation tool called Impairment Aware Lightpath Computation Engine
(IALCE) that incorporates an impairment-aware routing and wavelength assignment (RWA) algorithm.
We perform experiments illustrating the flexibility of the engine and the performance of the algorithm.
Abstract: Pervasive games represent a radically new game form that extends
gaming experiences out into the physical world, weaving ICTs
into the fabric of players’ real environment. This emerging type of
games is rather challenging for developers who are engaged in
exploring new technologies and methods to achieve high quality
interactive experience for players. This paper follows a systematic
approach to explore the landscape of pervasive gaming. First, we
present ten representative pervasive game projects, classified in
five genres based on their playing environment and features.
Then, we present a comparative view of those projects with
respect to several design aspects. Last, we shed light on current trends, design principles and future directions for pervasive games development.
Abstract: In this work we extend the population protocol model of Angluin et al., in order to model more powerful networks of very small, resource-limited artefacts (agents) that may follow some unpredictable passive movement. These agents communicate in pairs according to the commands of an adversary scheduler. A directed (or undirected) communication graph encodes the following information: each edge (u, v) denotes that during the computation an interaction between u and v may happen in which u is the initiator and v the responder. The new characteristic of the proposed mediated population protocol model is the existence of a
passive communication provider that we call mediator. The mediator is a
simple database with communication capabilities. Its main purpose is to
maintain the permissible interactions in communication classes, whose
number is constant and independent of the population size. For this reason
we assume that each agent has a unique identifier, of whose existence the agent itself is not aware and which it thus cannot store in its working memory. When two agents are about to interact they send their ids to the mediator. The mediator searches for that ordered pair in its database and, if it exists in some communication class, sends back to the agents the state corresponding to that class. If the interaction is not permitted, in other words if this specific pair does not exist in the database, the agents are informed to abort the interaction. Note that
in this manner for the first time we obtain some control on the safety of
the network and moreover the mediator provides us at any time with the
network topology. Equivalently, we can model the mediator by communication
links that are capable of keeping states from an edge state set of constant
cardinality. This alternative way of thinking of the new model has many
advantages concerning the formal modeling and the design of protocols,
since it enables us to abstract away the implementation details of the
mediator. Moreover, we extend further the new model by allowing the edges
to keep readable only costs, whose values also belong to a constant size
set. We then allow the protocol rules for pairwise interactions to modify
the corresponding edge state by also taking into account the costs. Thus,
our protocol descriptions are still independent of the population size and
do not use agent ids, i.e. they preserve scalability, uniformity and
anonymity. The proposed Mediated Population Protocols (MPP) can stably
compute graph properties of the communication graph. We show this for the
properties of maximal matchings (in undirected communication graphs), also
for finding the transitive closure of directed graphs and for finding all
edges of small cost. We demonstrate that our mediated protocols are
stronger than the classical population protocols. First of all we note an obvious fact: the classical model is a special case of the new model, that is, the new model can compute at least whatever the classical one can. We then present a mediated protocol that stably computes the product of two nonnegative integers in the case where G is complete, directed and connected. Such predicates are not semilinear, and it has been proven that classical population protocols on complete graphs compute precisely the semilinear predicates; in this manner we show that there is at least one predicate that our model computes and the classical model cannot. To show this fact, we state and prove a
general Theorem about the composition of two mediated population
protocols, where the first one has stabilizing inputs. We also show that
all predicates stably computable in our model are (non-uniformly) in the
class NSPACE(m), where m is the number of edges of the communication
graph. Finally, we define Randomized MPP and show that any Peano predicate accepted by a Randomized MPP can be verified in deterministic polynomial time.
Abstract: We propose, implement and evaluate new energy conservation schemes for efficient data propagation in wireless sensor networks. Our protocols are adaptive, i.e. locally monitor the network conditions and accordingly adjust towards optimal operation choices. This dynamic feature is particularly beneficial in heterogeneous settings and in cases of redeployment of sensor devices in the network area. We implement our protocols and evaluate their performance through a detailed simulation study using our extended version of ns-2. In particular we combine our schemes with known communication paradigms. The simulation findings demonstrate significant gains and good trade-offs in terms of delivery success, delay and energy dissipation.
Abstract: In this paper we study the problem of assigning transmission ranges to the nodes of a multihop packet radio network so as to minimize the total power consumed, under the constraint that adequate power is provided to the nodes to ensure that the network is strongly connected (i.e., each node can communicate along some path in the network to every other node). Such an assignment of transmission ranges is called complete. We also consider the problem of achieving strongly connected networks of bounded diameter.
For the case of n + 1 collinear points at unit distance apart (the unit chain) we give a tight asymptotic bound for the minimum cost of a range assignment of diameter h, when h is a fixed constant and when h > (1 + ε) log n, for some constant ε > 0. When the distances between the collinear points are arbitrary, we give an O(n^4) time dynamic programming algorithm for finding a minimum cost complete range assignment.
For points in three dimensions we show that the problem of deciding whether a complete range assignment of a given cost exists is NP-hard. For the same problem we give an O(n^2) time approximation algorithm which provides a complete range assignment with cost within a factor of two of the minimum. The complexity of this problem in two dimensions remains open, while the approximation algorithm works in this case as well.
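A small sketch of the objects involved: testing that a range assignment is complete (the induced directed graph is strongly connected) and computing its total power cost; the cost exponent beta is our assumption, standing in for the usual distance-power gradient.

```python
# Points on a line, one transmission range per node.  Node i reaches j
# if |p_i - p_j| <= r_i; the assignment is complete iff the resulting
# directed graph is strongly connected.

def is_complete(points, ranges):
    n = len(points)
    adj = {i: [j for j in range(n) if j != i
               and abs(points[i] - points[j]) <= ranges[i]]
           for i in range(n)}
    radj = {i: [j for j in range(n) if i in adj[j]] for i in range(n)}

    def all_reachable(graph):
        seen, stack = {0}, [0]
        while stack:
            for v in graph[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    return all_reachable(adj) and all_reachable(radj)

def power_cost(ranges, beta=2):   # beta: assumed path-loss exponent
    return sum(r ** beta for r in ranges)

points = [0, 1, 2, 3]             # the unit chain with n + 1 = 4 points
ranges = [1, 1, 1, 1]
print(is_complete(points, ranges), power_cost(ranges))  # True 4
```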
Abstract: In this work we focus on the energy efficiency challenge in wireless sensor networks, from both an on-line perspective (related to routing) and a network design perspective (related to tracking). We investigate a few representative, important aspects of energy efficiency: a) robust and fast data propagation, b) the problem of balancing the energy dissipation among all sensors in the network, and c) the problem of efficiently tracking moving entities in sensor networks. Our work here is a methodological survey of selected results that have already appeared in the related literature.
In particular, we investigate important issues of energy optimization, like minimizing the total energy dissipation, minimizing the number of transmissions, and balancing the energy load to prolong the system's lifetime. We review characteristic protocols and techniques in the recent literature, including probabilistic forwarding and local optimization methods. We study the problem of localizing and tracking multiple moving targets from a network design perspective, i.e. towards estimating the least possible number of sensors, their positions and operation characteristics needed to efficiently perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a covering combinatorial problem and then design and analyze an efficient approximate method for sensor placement and operation.
Abstract: In this work we present three new distributed, probabilistic data propagation protocols for Wireless Sensor Networks which aim at maximizing the network's operational life and improving its performance. The keystone of these protocols' design is fairness, which dictates that a fair portion of the network's workload should be assigned to each node, depending on its role in the system. All three protocols, EFPFR, MPFR and TWIST, emerged from the study of the rigorously analyzed protocol PFR. Its design elements were identified, and improvements were suggested and incorporated into the introduced protocols. The experiments conducted show that our proposals manage to improve PFR's performance in terms of success rate, total amount of energy saved, number of alive sensors and standard deviation of the energy left. Indicatively, we note that while PFR's success rate is 69.5%, TWIST achieves 97.5%, and its standard deviation of energy is almost half that of PFR.
Abstract: The content-based publish/subscribe (pub/sub) paradigm for system design is becoming increasingly popular, offering
unique benefits for a large number of data-intensive applications. Coupled with the peer-to-peer technology, it can serve as
a central building block for developing data-dissemination applications deployed over a large-scale network infrastructure.
A key open problem towards the creation of large-scale content-based pub/sub infrastructures relates to efficiently and accurately matching subscriptions with substring predicates to incoming events. This work addresses this issue.
Abstract: The content-based publish/subscribe (pub/sub) paradigm for system design is becoming increasingly popular, offering unique benefits for a large number of data-intensive applications. Coupled with peer-to-peer technology, it can serve as a central building block for such applications deployed over a large-scale network infrastructure. A key problem toward the creation of large-scale content-based pub/sub infrastructures relates to dealing efficiently with continuous queries (subscriptions) with rich predicates on string attributes; in particular, efficiently and accurately matching substring queries to incoming events is an open problem. In this work we study this problem. We provide and analyze novel algorithms for processing subscriptions with substring predicates and events in a variety of environments. We provide experimental data demonstrating the relative performance behavior of the proposed algorithms, using as key metrics the network bandwidth requirements, number of messages, load balancing, as well as requirements for extra routing state (and related maintenance) and design flexibility.
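The papers' algorithms are distributed over a peer-to-peer substrate; purely as a centralized illustration of the matching subproblem itself, an n-gram index over substring predicates might look as follows (all names ours; patterns shorter than the gram size would need special-casing):

```python
from collections import defaultdict

NGRAM = 3

def grams(s):
    return {s[i:i + NGRAM] for i in range(len(s) - NGRAM + 1)}

class SubstringIndex:
    """Index substring subscriptions by their 3-grams; an event probes
    the index with its own 3-grams, then candidates are verified."""

    def __init__(self):
        self.by_gram = defaultdict(set)
        self.pattern = {}

    def subscribe(self, sub_id, pattern):
        self.pattern[sub_id] = pattern
        for g in grams(pattern):
            self.by_gram[g].add(sub_id)

    def match(self, event):
        cands = set().union(*(self.by_gram.get(g, set())
                              for g in grams(event)))
        return sorted(s for s in cands if self.pattern[s] in event)

idx = SubstringIndex()
idx.subscribe(1, "market")
idx.subscribe(2, "stock")
print(idx.match("stock market report"))  # [1, 2]
```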
Abstract: Grids offer a transparent interface to geographically scattered computation, communication, storage and
other resources. In this chapter we propose and evaluate QoS-aware and fair scheduling algorithms for
Grid Networks, which are capable of optimally or near-optimally assigning tasks to resources, while taking
into consideration the task characteristics and QoS requirements. We categorize Grid tasks according to
whether or not they demand hard performance guarantees. Tasks with one or more hard requirements are
referred to as Guaranteed Service (GS) tasks, while tasks with no hard requirements are referred to as Best
Effort (BE) tasks. For GS tasks, we propose scheduling algorithms that provide deadline or computational
power guarantees, or offer fair degradation in the QoS such tasks receive in case of congestion. Regarding
BE tasks our objective is to allocate resources in a fair way, where fairness is interpreted in the max-min fair
share sense. Though we mainly address scheduling problems on computation resources, we also look at the joint scheduling of communication and computation resources, and propose routing and scheduling algorithms aiming at co-allocating both resource types so as to satisfy their respective QoS requirements.
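For the BE tasks, the max-min fair share mentioned above has a standard progressive-filling computation; a minimal sketch for a single shared capacity (the chapter's setting involves multiple resources, so this is only the one-dimensional core):

```python
# Progressive filling: raise all unsatisfied allocations equally, freeze
# a task once its demand is met, and redistribute what is left.

def max_min_fair(capacity, demands):
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
        active = {i for i in active if alloc[i] < demands[i]}
    return alloc

print(max_min_fair(10, [2, 2.6, 4, 5]))  # ~[2, 2.6, 2.7, 2.7]
```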
Abstract: This research attempts a first step towards investigating the aspect of radiation awareness in environments with abundant heterogeneous wireless networking. We call radiation at a point of a 3D wireless network the total amount of electromagnetic quantity the point is exposed to; our definition incorporates the effect of topology as well as the time domain, data traffic and environment aspects. Even if the impact of radiation on human health remains largely unexplored and controversial, we believe it is worth trying to understand and control. We first analyze radiation in well-known topologies (random and grid); randomness is meant to capture not only node placement but also uncertainty of the wireless propagation model. This initial understanding of how radiation adds up (over space and time) can be useful in network design, to reduce health risks. We then focus on the minimum radiation path problem of finding the lowest-radiation trajectory of a person moving from a source to a destination point of the network region. We propose three heuristics which provide low-radiation paths while keeping path length low; one heuristic in fact gets quite close to the offline solution we compute by a shortest path algorithm. Finally, we investigate the interesting impact of diverse node mobility on the heuristics' performance.
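The abstract leaves the exact aggregation implicit; one concrete instantiation consistent with the definition (with the path-loss exponent α, reference distance d_0, and transmit powers P_i all being our assumptions) would be:

```latex
R(x) \;=\; \int_{0}^{T} \sum_{i \in \mathcal{T}(t)}
      \frac{P_i(t)}{\max\{\,d(x, x_i),\, d_0\,\}^{\alpha}} \; dt ,
```

where T(t) is the set of transmitters active at time t; topology enters through the positions x_i, and data traffic through which transmitters are active when.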
Abstract: This research further investigates the recently introduced (in [4]) paradigm of radiation awareness in ambient environments with abundant heterogeneous wireless networking, from a distributed computing perspective. We call radiation at a point of a wireless network the total amount of electromagnetic quantity the point is exposed to; our definition incorporates the effect of topology as well as the time domain and environment aspects. Even if the impact of radiation on human health remains largely unexplored and controversial, we believe it is worth trying to understand and control, in a way that does not decrease much the quality of service offered to users of the wireless network.
In particular, we here focus on the fundamental problem of efficient data propagation in wireless sensor networks, trying to keep latency low while maintaining at low levels the radiation accumulated by wireless transmissions. We first propose greedy and oblivious routing heuristics that are radiation aware. We then combine them with temporal back-off schemes that use local properties of the network (e.g. number of neighbours, distance from the sink) in order to "spread" radiation in a spatio-temporal way. Our proposed radiation-aware routing heuristics succeed in keeping radiation levels low, while not increasing latency.
Abstract: We employ here the Probabilistic Method, a way of reasoning which shows the existence of combinatorial structures and properties, to prove or refute conjectures. The radiocoloring problem (RCP) is the problem of assigning frequencies to the transmitters of a network so that transmitters at distance one get frequencies that differ by at least two, and any two transmitters at distance two get frequencies that differ by at least one. The objective of an assignment may be to minimize the number of frequencies used (order) or the range of them (span). Here, we study the optimization version of RCP where the objective is to minimize the order. In graph-theoretic terms the problem is modelled by a variation of the vertex graph coloring problem. We investigate upper bounds for the minimum number of colors needed in a radiocoloring assignment of a graph G. We first provide an upper bound for the minimum number of colors needed to radiocolor a graph G of girth at most 7. Then, we study whether the minimum order of a radiocoloring assignment is determined by local conditions, i.e. by the minimum order radiocoloring assignment of some small subgraphs of it. We state a related conjecture which is analogous to a theorem of Molloy and Reed for the vertex coloring problem [12]. We then investigate whether the conjecture contradicts a theorem of Molloy and Reed for vertex coloring when applied to the graph G^2.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters, by exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → N such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u, v in G is two. The number of discrete frequencies and the range of frequencies used are called order and span, respectively. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span or the order. In this paper we prove that the radiocoloring problem for general graphs is hard to approximate (unless NP = ZPP) within a factor of n^{1/2−ε} (for any ε > 0), where n is the number of vertices of the graph. However, when restricted to some special cases of graphs, the problem becomes easier. We prove that the min span RCP is NP-complete for planar graphs. Next, we provide an O(nΔ) time algorithm (|V| = n) which obtains a radiocoloring of a planar graph G that approximates the minimum order within a ratio which tends to 2 (where Δ is the maximum degree of G). Finally, we provide a fully polynomial randomized approximation scheme (fpras) for the number of valid radiocolorings of a planar graph G with λ colors, in the case where λ ≥ 4Δ + 50.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. A Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → N such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u, v in G is two. The number of discrete frequencies used is called the order, and the range of frequencies used the span. The optimization versions of the Radiocoloring Problem (RCP) are to minimize the span (min span RCP) or the order (min order RCP).
In this paper, we deal with an interesting, yet not examined until now, variation of the radiocoloring problem: that of satisfying frequency assignment requests which exhibit some periodic behavior. In this case, the interference graph (modelling interference between transmitters) is some (infinite) periodic graph. Infinite periodic graphs usually model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they can model very large networks produced by the repetition of a small graph.
A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph G_i(V_i,E_i). The edge set of G is derived by connecting the vertices of each iteration G_i to some of the vertices of the next iteration G_{i+1}, in the same way for all G_i. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest.
We give two basic results:
• We prove that the min span RCP is PSPACE-complete for periodic planar graphs.
• We provide an O(n(Δ(G_i) + σ)) time algorithm (where |V_i| = n, Δ(G_i) is the maximum degree of the graph G_i and σ is the number of edges connecting each G_i to G_{i+1}), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(G_i) + σ tends to infinity.
We remark that any approximation algorithm for the min span RCP of a finite planar graph G that achieves a span of at most αΔ(G) + constant, for any α, where Δ(G) is the maximum degree of G, can be used as a subroutine in our algorithm to produce an approximation for the min span RCP of periodic planar graphs with asymptotic ratio α.
Abstract: The Frequency Assignment Problem (FAP) in radio networks is the problem of assigning frequencies to transmitters exploiting frequency reuse while keeping signal interference to acceptable levels. The FAP is usually modelled by variations of the graph coloring problem. The Radiocoloring (RC) of a graph G(V,E) is an assignment function Φ: V → N such that |Φ(u) − Φ(v)| ≥ 2 when u, v are neighbors in G, and |Φ(u) − Φ(v)| ≥ 1 when the distance of u, v in G is two. The range of frequencies used is called the span. Here, we consider the optimization version of the Radiocoloring Problem (RCP) of finding a radiocoloring assignment of minimum span, called min span RCP. In this paper, we deal with a variation of RCP: that of satisfying frequency assignment requests with some periodic behavior. In this case, the interference graph is an (infinite) periodic graph. Infinite periodic graphs model finite networks that accept periodic (in time, e.g. daily) requests for frequency assignment. Alternatively, they may model very large networks produced by the repetition of a small graph. A periodic graph G is defined by an infinite two-way sequence of repetitions of the same finite graph G_i(V_i,E_i). The edge set of G is derived by connecting the vertices of each iteration G_i to some of the vertices of the next iteration G_{i+1}, in the same way for all G_i. The model of periodic graphs considered here is similar to that of periodic graphs in Orlin [13] and Marathe et al. [10]. We focus on planar periodic graphs, because in many cases real networks are planar and also because of their independent mathematical interest. We give two basic results: (i) we prove that the min span RCP is PSPACE-complete for periodic planar graphs; (ii) we provide an O(n(Δ(G_i) + σ)) time algorithm (where |V_i| = n, Δ(G_i) is the maximum degree of the graph G_i and σ is the number of edges connecting each G_i to G_{i+1}), which obtains a radiocoloring of a periodic planar graph G that approximates the minimum span within a ratio which tends to 2 as Δ(G_i) + σ tends to infinity.
Abstract: In this work we address the issue of efficient processing of range queries in DHT-based P2P data networks. The novelty of the proposed approach lies on architectures, algorithms, and mechanisms for identifying and appropriately exploiting powerful nodes in such networks. The existence of such nodes has been well documented in the literature and plays a key role in the architecture of most successful real-world P2P applications. However, till now, this heterogeneity has not been taken into account when architecting solutions for complex query processing, especially in DHT networks. With this work we attempt to fill this gap for optimizing the processing of range queries. Significant performance improvements are achieved due to (i) ensuring a much smaller hop count performance for range queries, and (ii) avoiding the dangers and inefficiencies of relying for range query processing on weak nodes, with respect to processing, storage, and communication capacities, and with intermittent connectivity. We present detailed experimental results validating our performance claims.
Abstract: We consider the problem of planning a mixed line rate
(MLR) wavelength division multiplexing (WDM) transport
optical network. In such networks, different modulation formats
are usually employed to support transmission at different line
rates. Previously proposed planning algorithms have used a
transmission reach bound for each modulation format/line rate,
mainly driven by single line rate systems. However, transmission
experiments in MLR networks have shown that physical layer
interference phenomena are more severe among transmissions
that utilize different modulation formats. Thus, the transmission
reach of a connection with a specific modulation format/line rate
depends also on the other connections that co-propagate with it
in the network. To plan an MLR WDM network, we present
routing and wavelength assignment (RWA) algorithms that
adapt the transmission reach of each connection according to the
use of the modulation formats/line rates in the network. The
proposed algorithms are able to plan the network so as to
alleviate cross-rate interference effects, enabling the
establishment of connections of acceptable quality over paths that
would otherwise be prohibited.
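The reach adaptation described above can be abstracted as a feasibility test. The Python sketch below is our simplified rendering, not the paper's model: the permissible reach of a candidate lightpath is looked up from a table keyed by its own modulation format/line rate and the set of formats already co-propagating on its links (the table entries are placeholders, not measured values).

    # Toy cross-rate-aware admission test (illustrative placeholders only).
    REACH_KM = {
        ("10G", frozenset()):          1800,   # alone on its links
        ("10G", frozenset({"40G"})):   1200,   # penalized by co-propagating 40G
        ("40G", frozenset()):          1400,
        ("40G", frozenset({"10G"})):    900,
    }

    def feasible(path_links, link_formats, fmt, length_km):
        # link_formats: link -> set of formats already co-propagating on it
        others = frozenset().union(*(link_formats[l] for l in path_links)) - {fmt}
        reach = REACH_KM.get((fmt, others))
        return reach is not None and length_km <= reach

An RWA heuristic in this spirit would try candidate paths and admit the first for which feasible(...) holds, re-checking the connections whose reach margin the new arrival erodes.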
Abstract: This paper presents a collection of pervasive street games that constitute a new social form of play taking place in public spaces such as city parks and streets. The main characteristics of these games are their ability to scale to a large number of players (in some cases involving more than 40 players) and to engage players located simultaneously in dispersed areas. Players interact with each other using a wide range of hardware devices that are either generic (such as smart phones) or specific (such as wireless sensor devices). We discuss a set of fundamental issues related to game design, emphasizing on the one hand the interaction of the players with the ubiquitous computing environment and on the other hand the embedding of the game rules within the environment. The games are developed using open source technologies and were evaluated in a series of events, such as the Athens Plaython 2012 festival. The feedback received from the players indicates that this new form of gaming is indeed very promising.
Abstract: This paper presents a theoretical and experimental
analysis of saturated semiconductor optical amplifier (SOA)-based
interferometric switching arrangements. For the first time, it is
shown that such devices can provide enhanced intensity modulation
reduction to return-to-zero (RZ) formatted input pulse trains,
when the SOA is saturated with a strong continuous-wave (CW)
input signal. A novel theoretical platform has been developed in
the frequency domain, which reveals that the intensity modulation
of the input pulse train can be suppressed by more than 10 dB at
the output. This stems from the presence of the strong CW signal
that transforms the sinusoidal transfer function of the interferometric
switch into an almost flat, strongly nonlinear curve. This
behavior has also been verified experimentally for both periodically
and randomly degraded, in terms of intensity modulation,
signals at 10 Gb/s using the ultrafast nonlinear interferometer as
the switching device. Performance analysis both in the time and
frequency domains is demonstrated, verifying the concept and its
theoretical analysis.
Abstract: In this paper we present a signaling protocol for
QoS differentiation suitable for optical burst switching networks.
The proposed protocol is a two-way reservation scheme that
employs delayed and in-advance reservation of resources. In this
scheme delayed reservations may be relaxed, introducing a
reservation duration parameter that is negotiated during call
setup phase. This feature allows bursts to reserve resources
beyond their actual size to increase their successful forwarding
probability and is used to provide QoS differentiation. The
proposed signaling protocol offers a low blocking probability for
bursts that can tolerate the round-trip delay required for the
reservations. We present the main features of the protocol and
describe in detail timing considerations regarding the call setup and the reservation process. We also describe several methods
for choosing the protocol parameters so as to optimize
performance and present corresponding evaluation results.
Furthermore, we compare the performance of the proposed
protocol against that of two other typical reservation protocols, a
Tell-and-Wait and a Tell-and-Go protocol.
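The in-advance, relaxed reservation step can be pictured as simple interval bookkeeping. The toy Python sketch below is ours, not the protocol specification: a burst arriving at time t with transmission time b asks a channel for the interval [t, t + d], where the negotiated reservation duration d >= b provides the slack used for QoS differentiation.

    # Toy in-advance channel reservation (illustrative only).
    # schedule: list of (start, end) intervals already reserved on the channel.
    def try_reserve(schedule, t, d):
        if any(not (t + d <= s or e <= t) for s, e in schedule):
            return False                  # [t, t + d] overlaps a reservation
        schedule.append((t, t + d))       # book the (possibly relaxed) interval
        return True

A larger d raises the burst's chance of successful forwarding at the cost of holding the channel longer, which is exactly the trade-off the protocol parameters tune.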
Abstract: In recent years there has been significant interest in the study of random k-SAT formulae. For a given set of n Boolean variables, let B_k denote the set of all possible disjunctions of k distinct, non-complementary literals from its variables (k-clauses). A random k-SAT formula F_k(n, m) is formed by selecting uniformly and independently m clauses from B_k and taking their conjunction. Motivated by insights from statistical mechanics that suggest a possible relationship between the 'order' of phase transitions and computational complexity, Monasson and Zecchina (Phys. Rev. E 56(2) (1997) 1357) proposed the random (2+p)-SAT model: for a given p ∈ [0, 1], a random (2+p)-SAT formula, F_{2+p}(n, m), has m randomly chosen clauses over n variables, where pm clauses are chosen from B_3 and (1 − p)m from B_2. Using the heuristic 'replica method' of statistical mechanics, Monasson and Zecchina gave a number of non-rigorous predictions on the behavior of random (2+p)-SAT formulae. In this paper we give the first rigorous results for random (2+p)-SAT, including the following surprising fact: for p ≤ 2/5, with probability 1 − o(1), a random (2+p)-SAT formula is satisfiable iff its 2-SAT subformula is satisfiable. That is, for p ≤ 2/5, random (2+p)-SAT behaves like random 2-SAT.
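For concreteness, the model can be sampled with a few lines of Python (our sketch, following the definitions above):

    # Sample a random (2+p)-SAT formula: pm clauses from B_3, (1-p)m from B_2.
    import random

    def random_clause(n, k):
        # k distinct variables with independent random signs: uniform over B_k
        variables = random.sample(range(1, n + 1), k)
        return tuple(v if random.random() < 0.5 else -v for v in variables)

    def random_2_plus_p_sat(n, m, p):
        num3 = round(p * m)
        clauses = [random_clause(n, 3) for _ in range(num3)]
        clauses += [random_clause(n, 2) for _ in range(m - num3)]
        random.shuffle(clauses)
        return clauses

The result above says that, for p ≤ 2/5, discarding the 3-clauses of such a formula almost surely does not change its satisfiability.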
Abstract: Distributed algorithm designers often assume that system processes execute the same predefined software. Alternatively, when they do not assume that, designers turn to non-cooperative games and seek an outcome that corresponds to a rough consensus when no coordination is allowed. We argue that both assumptions are inapplicable in many real distributed systems, e.g., the Internet, and propose designing self-stabilizing and Byzantine fault-tolerant distributed game authorities. Once established, the game authority can secure the execution of any complete information game. As a result, we reduce costs that are due to the processes' freedom of choice. Namely, we reduce the price of malice.
Abstract: In this paper we present an efficient general simulation strategy for
computations designed for fully operational BSP machines of n ideal processors,
on n-processor dynamic-fault-prone BSP machines. The fault occurrences are fail-stop and fully dynamic, i.e., they are allowed to happen on-line at any point of the
computation, subject to the constraint that the total number of faulty processors
may never exceed a known fraction. The computational paradigm can be exploited
for robust computations over virtual parallel settings with a volatile underlying
infrastructure, such as a NETWORK OF WORKSTATIONS (where workstations may be
taken out of the virtual parallel machine by their owner).
Our simulation strategy is Las Vegas (i.e., it may never fail, due to backtracking
operations to robustly stored instances of the computation, in case of locally
unrecoverable situations). It adopts an adaptive balancing scheme of the workload
among the currently live processors of the BSP machine.
Our strategy is efficient in the sense that, compared with an optimal off-line
adversarial computation under the same sequence of fault occurrences, it achieves
an O((log n · log log n)^2) multiplicative factor times the optimal work (namely, this
measure is in the sense of the “competitive ratio” of on-line analysis). In addition,
our scheme is modular, integrated, and considers many implementation points.
We comment that, to our knowledge, no previous work on robust parallel computations
has considered fully dynamic faults in the BSP model, or in general distributed
memory systems. Furthermore, this is the first time an efficient Las Vegas
simulation in this area is achieved.
Abstract: We propose QoS-aware scheduling algorithms for Grid Networks that are capable of optimally or near-optimally
assigning computation and communication tasks to grid resources. The routing and scheduling algorithms to be
presented take as input the resource utilization profiles and the task characteristics and QoS requirements, and
co-allocate resources while accounting for the dependencies between communication and computation tasks.
Keywords: communication and computation utilization profiles, multicost routing and scheduling, grid
computing.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM)
has been recently proposed as a modulation technique for optical
networks, due to its good spectral efficiency and impairment
tolerance. Optical OFDM is much more flexible compared to
traditional WDM systems, enabling elastic bandwidth
transmissions. We consider the planning problem of an OFDM-based optical network where we are given a traffic matrix that
includes the requested transmission rates of the connections to be
served. Connections are provisioned for their requested rate by
elastically allocating spectrum using a variable number of OFDM
subcarriers. We introduce the Routing and Spectrum Allocation
(RSA) problem, as opposed to the typical Routing and
Wavelength Assignment (RWA) problem of traditional WDM
networks, and present various algorithms to solve the RSA. We
start by presenting an optimal ILP RSA algorithm that minimizes
the spectrum used to serve the traffic matrix, and also present a
decomposition method that breaks RSA into its two constituent
subproblems, namely, (i) routing and (ii) spectrum allocation
(R+SA) and solves them sequentially. We also propose a heuristic
algorithm that serves connections one-by-one and use it to solve
the planning problem by sequentially serving all traffic matrix
connections. To feed the sequential algorithm, two ordering
policies are proposed; a simulated annealing meta-heuristic is also
proposed to obtain even better orderings. Our results indicate
that the proposed sequential heuristic with appropriate ordering
yields close to optimal solutions in low running times.
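The sequential heuristic's flavor can be conveyed by a toy spectrum-allocation step in Python (ours, not the paper's algorithm): each connection, taken in the chosen order, is assigned the lowest-indexed block of contiguous subcarriers that is free on every link of its precomputed path.

    # Toy first-fit allocation of contiguous OFDM subcarriers (illustrative).
    # spectrum: link -> set of occupied subcarrier indices.
    def first_fit(spectrum, path_links, demand, total_subcarriers):
        for start in range(total_subcarriers - demand + 1):
            block = range(start, start + demand)
            if all(s not in spectrum[l] for l in path_links for s in block):
                for l in path_links:
                    spectrum[l].update(block)     # same block on all links
                return start                      # lowest feasible start index
        return None                               # connection is blocked

The ordering policies and the simulated-annealing meta-heuristic mentioned above then amount to searching over the order in which connections are fed to such a step.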
Abstract: In this paper we demonstrate the significant impact of the user mobility rates on the performance on two different approaches for designing routing protocols for ad-hoc mobile networks: (a) the route creation and maintenance approach and (b) the "support" approach, that forces few hosts to move acting as
"helpers" for message delivery. We study a set of representative protocols for each approach, i.e.~DSR and ZRP for the first approach and RUNNERS for the second. We have implemented the three protocols andperformed a large scale and detailed simulation study of their performance. Our findings are: (i) DSR achieves low message delivery rates but manages to deliver messages very fast; (ii) ZRP behaves well in networks of low mobility rate, while its performance drops for networks of highly mobile users; (iii) RUNNERS seem to tolerate well (and in fact benefit from) high mobility rates.
Based on our investigation, we design and implement two new protocols that result from the synthesis of the investigated routing approaches. We conducted an extensive, comparative simulation study of their performance. The new protocols behave well in networks of diverse mobility rates, and in some cases they even outperform the original ones by achieving lower message delivery delays.
Abstract: Data propagation in wireless sensor networks is usually performed as a multi-hop process. Thus, to deliver a single message, the resources of many sensor nodes are used and a lot of energy is spent. Recently, a novel approach is gaining momentum because of important applications: that of having a mobile sink move inside the network area and collect the data at low energy cost. Here we extend this line of research by proposing and evaluating three new protocols. Our protocols are novel in (a) investigating the impact of having many mobile sinks, (b) proposing and evaluating a mix of static and mobile sinks in weak models with restricted mobility, and (c) proposing a distributed protocol that tends to equally spread the sinks in the network to further improve performance.
Our protocols are simple, based on randomization and assume locally
obtainable information. We perform an extensive evaluation via simulation; our
findings demonstrate that our solutions scale very well with respect to the number of sinks
and significantly reduce energy consumption and delivery delay.
Abstract: We consider information aggregation as a method for reducing the information exchanged in a Grid network and used by the resource manager in order to make scheduling decisions. In this way, information is summarized across nodes and sensitive or detailed information can be kept private, while resources are still publicly available for use. We present a general framework for information aggregation, trying to identify issues that relate to aggregation in Grids. In this context, we describe a number of techniques, including single point and intra-domain aggregation, define appropriate grid-specific domination relations and operators for aggregating static and dynamic resource information, and discuss resource selection optimization functions. The quality of an aggregation scheme is measured both by its effects on the efficiency of the scheduler's decisions and also by the reduction it brings on the amount of resource information recorded, a tradeoff that we examine in detail. Simulation experiments demonstrate that the proposed schemes achieve significant information reduction, either in the amount of information exchanged, or in the frequency of the updates, while at the same time maintaining most of the value of the original information as expressed by a stretch factor metric we introduce.
Abstract: We study a problem of scheduling client requests to servers. Each client has a particular latency requirement at each server and may choose either to be assigned to some server in order to get serviced provided that her latency requirement is met, or not to participate in the assignment at all. From a global perspective, in order to optimize the performance of such a system, one would aim to maximize the number of clients that participate in the assignment. However, clients may behave selfishly in the sense that, each of them simply aims to participate in an assignment and get serviced by some server where her latency requirement is met with no regard to overall system performance. We model this selfish behavior as a strategic game, show how to compute pure Nash equilibria efficiently, and assess the impact of selfishness on system performance. We also show that the problem of optimizing performance is computationally hard to solve, even in a coordinated way, and present efficient approximation and online algorithms.
Abstract: We present SeAl, a novel data/resource and data-access management infrastructure designed for the purpose of addressing a key problem in P2P data sharing networks, namely the problem of wide-scale selfish peer behavior. Selfish behavior has been manifested and well documented and it is widely accepted that unless this is dealt with, the scalability, efficiency, and the usefulness of P2P sharing networks will be diminished. SeAl essentially consists of a monitoring/accounting subsystem, an auditing/verification subsystem, and incentive mechanisms. The monitoring subsystem facilitates the classification of peers into selfish/altruistic. The auditing/verification layer provides a shield against perjurer/slandering and colluding peers that may try to cheat the monitoring subsystem. The incentives mechanisms effectively utilize these layers so to increase the computational/networking and data resources that are available to the community. Our extensive performance results show that SeAl performs its tasks swiftly, while the overhead introduced by our accounting and auditing mechanisms in terms of response time, network, and storage overheads are very small.
Abstract: Our position is that a key to research efforts on ensuring high
performance in very large scale sharing networks is the idea of
volunteering; recognizing that such networks are comprised of
largely heterogeneous nodes in terms of their capacity and
behaviour, and that, in many real-world manifestations, a few
nodes carry the bulk of the request service load. In this paper we
outline how we employ volunteering as the basic idea using
which we develop altruism-endowed self-organizing sharing
networks to help solve two open problems in large-scale peer-to-peer
networks: (i) to develop an overlay topology structure that
enjoys better performance than DHT-structured networks and,
specifically, to offer O(log log N) routing performance in a
network of N nodes, instead of O(log N), and (ii) to efficiently
process complex queries and range queries, in particular.
Abstract: Random Intersection Graphs, G_{n,m,p}, is a class of random graphs introduced in Karoński (1999) [7] where each of the n vertices chooses independently a random subset of a universal set of m elements. Each element of the universal set is chosen independently by some vertex with probability p. Two vertices are joined by an edge iff their chosen element sets intersect. Given n, m so that m = ⌈n^α⌉, for any real α different from one, we establish here, for the first time, a sharp threshold for the graph property “Contains a Hamilton cycle”. Our proof involves new, nontrivial, coupling techniques that allow us to circumvent the edge dependencies in the random intersection graph model.
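For concreteness, the model is easy to sample (our Python sketch of the definition just given):

    # Sample G_{n,m,p}: each vertex keeps each of the m universal elements
    # independently with probability p; u ~ v iff their element sets intersect.
    import random
    from itertools import combinations

    def random_intersection_graph(n, m, p):
        subsets = [{e for e in range(m) if random.random() < p} for _ in range(n)]
        edges = {(u, v) for u, v in combinations(range(n), 2)
                 if subsets[u] & subsets[v]}
        return subsets, edges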
Abstract: In this work, we study protocols (i.e. distributed algorithms) so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol (i.e. the system is homogeneous). Moreover, we assume pairwise interactions between the processes that are scheduled by an adversary. The only constraint on the adversary scheduler is that it must be fair, intuitively meaning that it must assign to every reachable configuration of the system a non-zero probability to occur. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of their connection and updates all of them. In particular, in every interaction, the protocol may activate an inactive connection, deactivate an active one, or leave the state of a connection unchanged. Initially all connections are inactive and the goal is for the processes, after interacting and activating/deactivating connections for a while, to end up with a desired stable network (i.e. one that does not change any more). We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. We provide proofs of correctness for all of our protocols and analyze the expected time to convergence of most of them under a uniform random scheduler that selects the next pair of interacting processes uniformly at random from all such pairs. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks. Our universality protocols use a subset of the population (waste) in order to distributedly construct there a TM able to decide a graph class in some given space. Then, the protocols repeatedly construct in the rest of the population (useful space) a graph equiprobably drawn from all possible graphs. The TM works on this and accepts if the presented graph is in the class. We additionally show how to partition the population into k supernodes, each being a line of log k nodes, for the largest such k. This amount of local memory is sufficient for the supernodes to obtain unique names and exploit their names and their memory to realize nontrivial constructions. Delicate composition and reinitialization issues have to be solved for these general constructions to work.
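A minimal simulation harness for this model, under the uniform random scheduler mentioned above, might look as follows (our sketch; the transition rules shown are a toy rule set in the spirit of a spanning-star construction, not quoted from the paper):

    # Population protocol harness: states plus activatable pairwise connections.
    import random

    def simulate(n, delta, init_state, steps):
        states = [init_state] * n          # homogeneous initial states
        active = set()                     # all connections start inactive
        for _ in range(steps):
            u, v = random.sample(range(n), 2)      # uniform random pair
            key = frozenset((u, v))
            su, sv, on = delta(states[u], states[v], key in active)
            states[u], states[v] = su, sv
            (active.add if on else active.discard)(key)
        return states, active

    # Toy rules: centers "c" compete until one survives; center-peripheral
    # edges are activated, peripheral-peripheral edges are dropped.
    def star_delta(a, b, on):
        if a == "c" and b == "c":
            return "c", "p", True
        if {a, b} == {"c", "p"}:
            return a, b, True
        if a == "p" and b == "p":
            return "p", "p", False
        return a, b, on

    states, star_edges = simulate(20, star_delta, "c", steps=200_000)

Run long enough under a fair scheduler, the active edges stabilize to a star centered at the unique surviving "c".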
Abstract: We address an important communication issue in wireless cellular networks that utilize Frequency Division Multiplexing (FDM) technology. In such networks, many users within the same geographical region (cell) can communicate simultaneously with other users of the network using distinct frequencies. The spectrum of the available frequencies is limited; thus, efficient solutions to the call control problem are essential. The objective of the call control problem is, given a spectrum of available frequencies and users that wish to communicate, to maximize the number of users that communicate without signal interference. We consider cellular networks of reuse distance k ≥ 2 and we study the on-line version of the problem using competitive analysis.
In cellular networks of reuse distance 2, the previously best known algorithm that beats the lower bound of 3 on the competitiveness of deterministic algorithms works on networks with one frequency, achieves a competitive ratio against oblivious adversaries which is between 2.469 and 2.651, and uses a number of random bits at least proportional to the size of the network. We significantly improve this result by presenting a series of simple randomized algorithms that have competitive ratios smaller than 3, work on networks with arbitrarily many frequencies, and use only a constant number of random bits or a comparable weak random source. The best competitiveness upper bound we obtain is 7/3.
In cellular networks of reuse distance k > 2, we present simple randomized on-line call control algorithms with competitive ratios which significantly beat the lower bounds on the competitiveness of deterministic ones and use only random bits. Furthermore, we show a new lower bound on the competitiveness of on-line call control algorithms in cellular networks of reuse distance k ≥ 5.
Abstract: In wireless sensor networks data propagation is usually
performed by sensors transmitting data towards a static control center (sink). Inspired by important applications (mostly related to ambient intelligence) and as a first step towards introducing mobility, we propose the idea of having a sink moving in the network area and collecting data from sensors. We propose four characteristic mobility patterns for the sink along with different data collection strategies. Through a detailed simulation study, we evaluate several important performance properties of each protocol. Our findings demonstrate that by taking advantage of the sink's mobility, we can significantly reduce the energy spent in relaying traffic and thus greatly extend the lifetime of the network.
Abstract: Buildings are among the largest consumers of electricity, with a significant portion of this energy wasted in unoccupied areas or by improperly installed devices.
Identifying such power leaks is hard especially in large office and enterprise buildings.
In this paper we present the design and implementation of a system that uses an underlying sensor network to provide accurate real-time information about various characteristics like occupancy, lighting, temperature and power consumption at different levels of granularity.
All sensor devices require minimal installation and maintenance.
Using an experimental installation we evaluate a number of applications and services that achieve energy savings by applying different power conservation policies.
Furthermore we provide energy measurements to users and occupants to show how various choices and behaviors affect their personal energy savings.
Abstract: In 1876 Charles Lutwidge Dodgson suggested the intriguing voting rule that today bears his name. Although Dodgson's rule is one of the most well-studied voting rules, it suffers from serious deficiencies, both from the computational point of view (it is NP-hard even to approximate the Dodgson score within sublogarithmic factors) and from the social choice point of view (it fails basic social choice desiderata such as monotonicity and homogeneity).
In a previous paper [Caragiannis et al., SODA 2009] we have asked whether there are approximation algorithms for Dodgson's rule that are monotonic or homogeneous. In this paper we give definitive answers to these questions. We design a monotonic exponential-time algorithm that yields a 2-approximation to the Dodgson score, while matching this result with a tight lower bound. We also present a monotonic polynomial-time O(log m)-approximation algorithm (where m is the number of alternatives); this result is tight as well due to a complexity-theoretic lower bound. Furthermore, we show that a slight variation of a known voting rule yields a monotonic, homogeneous, polynomial-time O(m log m)-approximation algorithm, and establish that it is impossible to achieve a better approximation ratio even if one just asks for homogeneity. We complete the picture by studying several additional social choice properties; for these properties, we prove that algorithms with an approximation ratio that depends only on m do not exist.
Abstract: Efficient task scheduling is fundamental for the success of the Grids,
since it directly affects the Quality of Service (QoS) offered to the users. Efficient
scheduling policies should be evaluated based not only on performance
metrics that are of interest to the infrastructure side, such as the Grid resources
utilization efficiency, but also on user satisfaction metrics, such as the percentage
of tasks served by the Grid without violating their QoS requirements. In this
paper, we propose a scheduling algorithm for tasks with strict timing requirements,
given in the form of a desired start and finish time. Our algorithm aims
at minimizing the violations of the time constraints, while at the same time
minimizing the number of processors used. The proposed scheduling method
exploits concepts derived from spectral clustering and groups the tasks for assignment
to a computing resource so as to (a) minimize the time overlap
of the tasks assigned to a given processor and (b) maximize the degree of
time overlap among tasks assigned to different processors. Experimental
results show that our proposed strategy outperforms greedy scheduling algorithms
for different values of the task load submitted.
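To illustrate the spectral step, the sketch below (ours; a simplified stand-in for the paper's algorithm) builds an affinity matrix in which two tasks are similar when their requested time windows do not overlap, embeds the tasks via the Laplacian eigenvectors, and splits the embedding into k processor groups (a cheap 1-D split stands in for the k-means step normally used):

    # Simplified spectral grouping of tasks [start, finish) onto k processors.
    import numpy as np

    def overlap(t1, t2):
        return max(0.0, min(t1[1], t2[1]) - max(t1[0], t2[0]))

    def spectral_groups(tasks, k):
        n = len(tasks)
        A = np.zeros((n, n))               # affinity: high iff little overlap
        for i in range(n):
            for j in range(n):
                if i != j:
                    A[i, j] = 1.0 / (1.0 + overlap(tasks[i], tasks[j]))
        L = np.diag(A.sum(axis=1)) - A     # unnormalized graph Laplacian
        _, vecs = np.linalg.eigh(L)
        if k == 1:
            return [np.arange(n)]
        order = np.argsort(vecs[:, 1])     # sort along the Fiedler vector
        return np.array_split(order, k)    # each chunk -> one processor

Grouping tasks by low mutual overlap is what lets each processor serve its assigned tasks without violating their start/finish windows.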
Abstract: In this paper, we analyze the stability properties of the FIFO protocol in the Adversarial Queueing model for packet routing. We show a graph for which FIFO is stable for any adversary with injection rate r ≤ 0.1428. We generalize this result to show an upper bound for stability of any network under the FIFO protocol, answering partially an open question raised by Andrews et al. in [2]. We also design a network and an adversary for which FIFO is non-stable for any r ≥ 0.8357, improving the previously known bounds of [2].
Abstract: In wireless communication, the signal of a typical broadcast station is transmitted from a broadcast center p and reaches objects at a distance, say r, from it. In addition, there is a radius r0, r0 < r, such that the signal originating from the center p should be avoided; in other words, points within distance r0 from the station comprise a hazardous zone. We consider the following station layout problem: cover a given planar region that includes a collection of buildings with a minimum number of stations, so that every point in the region is within the reach of a station, while at the same time no interior point of any building is within the hazardous zone of a station. We give algorithms for computing such station layouts in both the one- and two-dimensional cases.
Abstract: We consider the important problem of energy balanced data propagation in wireless sensor networks and we extend and generalize
previous works by allowing adaptive energy assignment. We consider the data gathering problem where data are generated by the sensors and
must be routed toward a unique sink. Sensors route data by either sending the data directly to the sink or in a multi-hop fashion by delivering
the data to a neighbouring sensor. Direct and neighbouring transmissions require different levels of energy consumption. Basically, the protocols balance the energy consumption among the sensors by computing the adequate ratios of direct and neighbouring transmissions. An abstract model of energy dissipation as a random walk is proposed, along with rigorous performance analysis techniques. Two efficient distributed algorithms are presented and analysed by both rigorous means and simulation.
The first one is easy to implement and fast to execute. The protocol assumes that sensors know a-priori the rate of data they generate.
The sink collects and processes all this information in order to compute the relevant value of the protocol parameter. This value is transmitted
to the sensors which individually compute their optimal ratios of direct and neighbouring transmissions. The second protocol avoids the necessary a-priori knowledge of the data rate generated by sensors by inferring the relevant information from the observation of the data paths.
Furthermore, this algorithm is based on stochastic estimation methods and is adaptive to environmental changes.
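A toy rendering of the direct-versus-hop trade-off that these ratios control (ours; it assumes the usual quadratic energy cost for a transmission of range d, an assumption rather than the paper's exact model):

    # Sensors in concentric slices 1..N; a message either hops one slice closer
    # to the sink (cost ~ 1) with probability p, or goes directly to the sink
    # (cost ~ distance^2) with probability 1 - p.
    import random

    def per_slice_energy(num_slices, p, messages_per_slice=100):
        energy = [0.0] * (num_slices + 1)       # energy[i]: spent by slice i
        for src in range(1, num_slices + 1):
            for _ in range(messages_per_slice):
                pos = src
                while pos > 0:
                    if random.random() < p:     # multi-hop step
                        energy[pos] += 1.0
                        pos -= 1
                    else:                       # direct transmission
                        energy[pos] += float(pos * pos)
                        pos = 0
        return energy[1:]

Energy balance amounts to choosing the ratios (a single p here; per-slice values in the protocols above) so that the entries of the returned vector are as equal as possible.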
Abstract: In this paper, we present and study solutions for the efficient processing of queries over string attributes in a large P2P data network implemented with DHTs. The proposed solutions support queries with equality, prefix, suffix, and containment predicates over string attributes. Currently, no known solution to this problem exists.
We propose and study algorithms for processing such queries and their optimizations. As event-based Publish/Subscribe information systems are a champion application class where string-attribute (continuous) queries are very common, we pay particular attention to this type of data network, formulating our solution in terms of this environment. A major design decision behind the proposed solution is our intention to provide a solution that is general (DHT-independent), capable of being implemented on top of any particular DHT.
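One standard way to support containment predicates on top of a DHT, sketched here in Python (our illustration; not necessarily the design adopted in the paper), is to index every published string under its n-grams, so that a query costs one lookup plus local filtering:

    # n-gram indexing for substring (containment) queries; the dict 'index'
    # stands in for DHT put/get keyed by the hash of an n-gram.
    N = 3

    def ngrams(s):
        return {s[i:i + N] for i in range(len(s) - N + 1)}

    def publish(index, key):
        for g in ngrams(key):
            index.setdefault(g, set()).add(key)

    def containment_query(index, pattern):
        if len(pattern) < N:
            raise ValueError("pattern shorter than the n-gram length")
        candidates = index.get(pattern[:N], set())   # one DHT lookup
        return {k for k in candidates if pattern in k}

Prefix and suffix predicates can be handled the same way by indexing anchored n-grams (e.g., with sentinel characters marking the string ends).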
Abstract: We study extreme Nash equilibria in the context of a selfish routing game. Specifically, we assume a collection of n users, each employing a mixed strategy, which is a probability distribution over m parallel identical links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic on those links that minimize its expected latency cost. The social cost of a Nash equilibrium is the expectation, over all random choices of the users, of the maximum, over all links, latency through a link.
We provide substantial evidence for the Fully Mixed Nash Equilibrium Conjecture, which states that the worst Nash equilibrium is the fully mixed Nash equilibrium, where each user chooses each link with positive probability. Specifically, we prove that the Fully Mixed Nash Equilibrium Conjecture is valid for pure Nash equilibria. Furthermore, we show, that under a certain condition, the social cost of any Nash equilibrium is within a factor of 2h(1+ɛ) of that of the fully mixed Nash equilibrium, where h is the factor by which the largest user traffic deviates from the average user traffic.
Considering pure Nash equilibria, we provide a PTAS to approximate the best social cost, we give an upper bound on the worst social cost and we show that it is NP-hard to approximate the worst social cost within a multiplicative factor better than 2 − 2/(m+1).
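In symbols (our transcription of the definition above, for m identical links where the latency through a link is the total traffic routed on it): if user i has traffic w_i and draws link L_i independently with Pr[L_i = ℓ] = p_i^ℓ, the social cost is

    SC(\mathbf{w}, \mathbf{P}) \;=\; \mathbb{E}\Big[\, \max_{1 \le \ell \le m} \; \sum_{i \,:\, L_i = \ell} w_i \,\Big],

and the fully mixed Nash equilibrium is the profile in which every p_i^ℓ is strictly positive (uniform, p_i^ℓ = 1/m, in the identical-links setting).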
Abstract: A key issue when designing and implementing large-scale publish/subscribe systems is how to efficiently propagate subscriptions among the brokers of the system. Brokers require this information in order to forward incoming events only to interested users, filtering out unrelated events, which can save significant overheads (particularly network bandwidth and processing time at the brokers). In this paper we contribute the notion of subscription summaries, a mechanism appropriately compacting subscription information. We develop the associated data structures and matching algorithms. The proposed mechanism can handle event/subscription schemata that are rich in terms of their attribute types and powerful in terms of the allowed operations on them. Our major results are that the proposed mechanism (i) is scalable, with the bandwidth required to propagate subscriptions increasing only slightly, even at huge-scales, and (ii) is significantly more efficient, up to orders of magnitude, depending on the scale, with respect to the bandwidth requirements for propagating subscriptions.
Abstract: The content-based publish/subscribe (pub/sub)
paradigm for system design is becoming increasingly
popular, offering unique benefits for a large number of
data-intensive applications. Coupled with the peer-to-peer
technology, it can serve as a central building block for
such applications deployed over a large-scale network
infrastructure. A key problem toward the creation of
large-scale content-based pub/sub infrastructures relates to
dealing efficiently with continuous queries (subscriptions)
with rich predicates on string attributes; in this work we
study the problem of efficiently and accurately matching
substring queries to incoming events.
Abstract: We investigate the impact of different mobility rates on the
performance of routing protocols in ad-hoc mobile networks. Based
on our investigation, we design a new protocol that results from
the synthesis of the well known protocols ZRP and RUNNERS. We have implemented the new protocol as well as
the original two protocols and conducted an extensive, comparative
simulation study of their performance. The new protocol behaves
well in networks of diverse mobility rates, and in
some cases even outperforms the original ones by achieving lower
message delivery delays.
Abstract: We study congestion games where players aim to access a set of resources. Each player has a set of possible strategies and each resource has a function associating the latency it incurs to the players using it. Players are non–cooperative and each wishes to follow strategies that minimize her own latency with no regard to the global optimum. Previous work has studied the impact of this selfish behavior to system performance. In this paper, we study the question of how much the performance can be improved if players are forced to pay taxes for using resources. Our objective is to extend the original game so that selfish behavior does not deteriorate performance. We consider atomic congestion games with linear latency functions and present both negative and positive results. Our negative results show that optimal system performance cannot be achieved even in very simple games. On the positive side, we show that there are ways to assign taxes that can improve the performance of linear congestion games by forcing players to follow strategies where the total latency suffered is within a factor of 2 of the minimum possible; this result is shown to be tight. Furthermore, even in cases where in the absence of taxes the system behavior may be very poor, we show that the total disutility of players (latency plus taxes) is not much larger than the optimal total latency. Besides existential results, we show how to compute taxes in time polynomial in the size of the game by solving convex quadratic programs. Similar questions have been extensively studied in the model of non-atomic congestion games. To the best of our knowledge, this is the first study of the efficiency of taxes in atomic congestion games.
Abstract: In this work we consider temporal networks, i.e. networks defined by a labeling $\lambda$ assigning to each edge of an underlying graph G a set of discrete time-labels. The labels of an edge, which are natural numbers, indicate the discrete time moments at which the edge is available. We focus on path problems of temporal networks. In particular, we consider time-respecting paths, i.e. paths whose edges are assigned by $\lambda$ a strictly increasing sequence of labels. We begin by giving two efficient algorithms for computing shortest time-respecting paths on a temporal network. We then prove that there is a natural analogue of Menger’s theorem holding for arbitrary temporal networks. Finally, we propose two cost minimization parameters for temporal network design. One is the temporality of G, in which the goal is to minimize the maximum number of labels of an edge, and the other is the temporal cost of G, in which the goal is to minimize the total number of labels used. Optimization of these parameters is performed subject to some connectivity constraint. We prove several lower and upper bounds for the temporality and the temporal cost of some very basic graph families such as rings, directed acyclic graphs, and trees.
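The first of the two computations mentioned can be realized by a simple label-scanning routine. The sketch below (ours; one natural way to do it) computes earliest-arrival time-respecting paths from a source by processing the time-edges (u, v, t) in increasing label order:

    # Earliest-arrival time-respecting paths from source s.
    # time_edges: iterable of (u, v, t): edge {u, v} is available at label t.
    import math

    def earliest_arrival(time_edges, s):
        arrival = {s: 0}                  # labels are natural numbers >= 1
        for u, v, t in sorted(time_edges, key=lambda e: e[2]):
            for a, b in ((u, v), (v, u)): # undirected edge, both directions
                if arrival.get(a, math.inf) < t and t < arrival.get(b, math.inf):
                    arrival[b] = t        # reached b along strictly increasing labels
        return arrival

Because edges are scanned by increasing label and a relaxation requires arrival[a] < t, every recorded value corresponds to a path whose labels strictly increase, matching the definition of a time-respecting path.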
Abstract: We extend the population protocol model with a cover-time service that informs a walking state every time it covers the whole network. This is simply a known upper bound on the cover time of a random walk. This allows us to introduce termination into population protocols, a capability that is crucial for any distributed system. By reduction to an oracle-model we arrive at a very satisfactory lower bound on the computational power of the model: we prove that it is at least as strong as a Turing Machine of space log n with input commutativity, where n is the number of nodes in the network. We also give a log n-space, but nondeterministic this time, upper bound. Finally, we prove interesting similarities of this model to linear bounded automata.
Abstract: In this paper we present the design of jWebDust, a generic and modular application environment for developing and managing applications based on wireless sensor networks that are accessible via the internet. Our software architecture provides a range of services that allow the creation of customized web-based applications that are easy to administer, with minimum implementation effort. We here present its open architecture, the most important design decisions, and discuss its distinct features and functionalities. jWebDust allows heterogeneous components to interoperate, and supports the integrated management and control of multiple such networks by defining web-based mechanisms to visualize the network state, the results of queries, and a means to inject queries into the network.
Abstract: A lot of activity is being devoted to studying issues related to energy consumption and efficiency in our buildings, and especially in public buildings. In this context, educational public buildings should be an important part of the equation. At the same time, there is an evident need for open datasets, which should be publicly available for researchers to use. We have implemented a real-world multi-site Internet of Things (IoT) deployment, comprising 25 school buildings across Europe, primarily designed as a foundation for enabling IoT-based energy awareness and sustainability lectures and promoting data-driven energy-saving behaviors. In this work, we present some of the basic aspects of producing datasets from this deployment and discuss their potential uses. We also provide a brief discussion of data derived from a preliminary analysis of thermal comfort-related data produced from this infrastructure.
Abstract: Random scaled sector graphs were introduced as a generalization of random geometric graphs to model networks of sensors using optical communication. In the random scaled sector graph model, vertices are placed uniformly at random into the [0, 1]^2 unit square. Each vertex i is assigned, uniformly at random, a sector S_i of central angle α_i in a circle of radius r_i (with vertex i as the origin). An arc is present from vertex i to any vertex j if j falls in S_i. In this work, we study the value of the chromatic number χ(G_n), directed clique number ω(G_n), and undirected clique number ω_2(G_n) for random scaled sector graphs with n vertices, where each vertex spans a sector of α degrees with radius r_n = √(ln n/n). We prove that for values α < π, as n → ∞ w.h.p., χ(G_n) and ω_2(G_n) are Θ(ln n/ln ln n), while ω(G_n) is O(1), showing a clear difference with the random geometric graph model. For α > π, w.h.p., χ(G_n) and ω_2(G_n) are Θ(ln n), being the same for random scaled sector and random geometric graphs, while ω(G_n) is Θ(ln n/ln ln n).
Abstract: A dichotomy theorem for a class of decision problems is a result asserting that certain problems in the class are solvable in polynomial time, while the rest are NP-complete. The first remarkable such dichotomy theorem was proved by Schaefer in 1978. It concerns the class of generalized satisfiability problems Sat(S), whose input is a CNF(S)-formula, i.e., a formula constructed from elements of a fixed set S of generalized connectives using conjunctions and substitutions by variables. Here, we investigate the complexity of minimal satisfiability problems Min Sat(S), where S is a fixed set of generalized connectives. The input to such a problem is a CNF(S)-formula and a satisfying truth assignment; the question is to decide whether there is another satisfying truth assignment that is strictly smaller than the given truth assignment with respect to the coordinate-wise partial order on truth assignments. Minimal satisfiability problems were first studied by researchers in artificial intelligence while investigating the computational complexity of propositional circumscription. The question of whether dichotomy theorems can be proved for these problems was raised at that time, but was left open. We settle this question affirmatively by establishing a dichotomy theorem for the class of all Min Sat(S)-problems, where S is a finite set of generalized connectives. We also prove a dichotomy theorem for a variant of Min Sat(S) in which the minimization is restricted to a subset of the variables, whereas the remaining variables may vary arbitrarily (this variant is related to extensions of propositional circumscription and was first studied by Cadoli). Moreover, we show that similar dichotomy theorems hold also when some of the variables are assigned constant values. Finally, we give simple criteria that tell apart the polynomial-time solvable cases of these minimal satisfiability problems from the NP-complete ones.
Abstract: Recent advances in the all-optical signal processing domain report high-speed and nontrivial functionality directly implemented in the optical layer. These developments mean that the all-optical processing of packet headers has a future. In this article we address various important control plane issues that must be resolved when designing networks based on all-optical packet-switched nodes.
Abstract: In this work we present the design of jWebDust, a
software environment for monitoring and controlling sensor networks via a web interface. Our software architecture provides a range of services that allow the creation of customized applications that are easy to administer, with minimum implementation effort. We present its open architecture, the most important design decisions, and discuss its distinct features and functionalities. jWebDust will allow heterogeneous components to operate in the same sensor network, and the integrated management and control of multiple such networks, by defining web-based mechanisms to visualize the network state, the results of queries, and a means to inject queries into the network.
Abstract: We present the conceptual basis and the initial planning for an open
source management architecture for wireless sensor networks (WSN). Although
there is an abundance of open source tools serving the administrative needs of
WSN deployments, there is a lack of tools or platforms for high level integrated
WSN management. The current work is, to our knowledge, the first effort to
conceptualize and design a remote, integrated management platform for the
support of WSN research laboratories. The platform is based on the integration
and extension of two innovative platforms: jWebDust, a WSN operation and
management platform, and OpenRSM, an open source integrated remote
systems and network management platform. The proposed system architecture
can support several levels of integration in order to cover multiple, qualitatively differentiated use cases.
Abstract: We investigate the existence and efficient algorithmic construction
of close to optimal independent sets in random models of intersection
graphs. In particular, (a) we propose a new model for random
intersection graphs (G_{n,m,p}) which includes the model of [10] (the “uniform”
random intersection graphs model) as an important special case.
We also define an interesting variation of the model of random intersection
graphs, similar in spirit to random regular graphs. (b) For this
model we derive exact formulae for the mean and variance of the number
of independent sets of size k (for any k) in the graph. (c) We then propose
and analyse three algorithms for the efficient construction of large
independent sets in this model. The first two are variations of the greedy
technique while the third is a totally new algorithm. Our algorithms are
analysed for the special case of uniform random intersection graphs.
Our analyses show that these algorithms succeed in finding close to optimal
independent sets for an interesting range of graph parameters.
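The conservative core shared by the greedy variants is easy to state in code (our rendering; the paper's algorithms and their analysis are more refined): vertices whose label sets are pairwise disjoint can never be adjacent in an intersection graph, so they form an independent set.

    # Greedy independent set in an intersection graph: accept a vertex iff its
    # label set avoids every label already covered by accepted vertices.
    def greedy_independent_set(subsets):
        covered, independent = set(), []
        for v, s in enumerate(subsets):
            if not (s & covered):
                independent.append(v)
                covered |= s
        return independent

Fed with subsets sampled as in the G_{n,m,p} sketch given earlier, this yields independent sets whose size can be compared against the exact first- and second-moment formulae derived in the paper.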
Abstract: Content integration of web data sources is becoming increasingly
important for next-generation information
systems. However, all proposed solutions are faced with
the same performance bottleneck: the network overhead
incurred when contacting the integrated e-sites. With this
demo paper we shall demonstrate the functionality of HyperHotel.
HyperHotel is used for finding appropriate hotel
rooms when travelling. Its novelties are that it is designed
and implemented as an internet web-hotel content integration
application and that it is built on top of D.I.C.E. and
Co.In.S., a novel content integration infrastructure consisting
of a domain-independent COntent INtegration System
and its Data Integration Cache Engine. We'll show how the
infrastructure of D.I.C.E. and Co.In.S. can be applied and
exploited in HyperHotel in order to improve the response
time of complex user queries. This exemplifies the significance
of this infrastructure since HyperHotel is representative
of a large class of e-commerce, content integration
applications.
Abstract: A packet-switching network is stable if the number of packets in the network remains bounded at all times. A very natural question that arises in the context of stability properties of such networks is how network structure precisely affects these properties. In this work we embark on a systematic study of this question in the context of Adversarial Queueing Theory, which assumes that packets are adversarially injected into the network. We consider size, diameter, maximum vertex degree, minimum number of disjoint paths that cover all edges of the network, and network subgraphs as crucial structural parameters of the network, and we present a comprehensive collection of structural results, in the form of stability and instability bounds on the injection rate of the adversary for various greedy protocols:
- Increasing the size of a network may result in dropping its instability bound. This is shown through a novel, yet simple and natural, combinatorial construction of a size-parameterized network on which certain compositions of greedy protocols are running. The convergence of the drop to 0.5 is found to be fast with, and proportional to, the increase in size.
- Maintaining the size of a network small may already suffice to drop its instability bound to a substantially low value. This is shown through a construction of a FIFO network with size 22, which becomes unstable at rate 0.704. This represents the current state-of-the-art trade-off between network size and instability bound.
- The diameter, maximum vertex degree and minimum number of edge-disjoint paths that cover a network may be used as control parameters for the stability bound of the network. This is shown through an improved analysis of the stability bound of any arbitrary FIFO network, which takes these parameters into account.
- How much can network subgraphs that are forbidden for stability affect the instability bound? Through improved combinatorial constructions of networks and executions, we improve the state-of-the-art instability bound induced by certain known forbidden subgraphs on networks running a certain greedy protocol.
Our results shed more light and contribute significantly to a finer understanding of the impact of structural parameters on the stability and instability properties of networks.
Abstract: Consider a network vulnerable to security attacks and equipped with defense mechanisms. How much is the loss in the provided security guarantees due to the selfish nature of attacks and defenses? The Price of Defense was recently introduced in [7] as a worst-case measure, over all associated Nash equilibria, of this loss. In the particular strategic game considered in [7], there are two classes of confronting randomized players on a graph G(V,E): v attackers, each choosing vertices and wishing to minimize the probability of being caught, and a single defender, who chooses edges and gains the expected number of attackers it catches. In this work, we continue the study of the Price of Defense. We obtain the following results:
- The Price of Defense is at least |V|/2; this implies that the Perfect Matching Nash equilibria considered in [7] are optimal with respect to the Price of Defense, so that the lower bound is tight.
- We define Defense-Optimal graphs as those admitting a Nash equilibrium that attains the (tight) lower bound of |V|/2. We obtain:
  - A graph is Defense-Optimal if and only if it has a Fractional Perfect Matching. Since graphs with a Fractional Perfect Matching are recognizable in polynomial time, the same holds for Defense-Optimal graphs.
  - We identify a very simple graph that is Defense-Optimal but has no Perfect Matching Nash equilibrium.
- Inspired by the established connection between Nash equilibria and Fractional Perfect Matchings, we transfer a known bivaluedness result about Fractional Matchings to a certain class of Nash equilibria. So, the connection to Fractional Graph Theory may be the key to revealing the combinatorial structure of Nash equilibria for our network security game.
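Since Defense-Optimality reduces to detecting a Fractional Perfect Matching, the polynomial-time recognition mentioned above can be illustrated by an LP feasibility check. The sketch below is ours, not the paper's procedure; the graph encoding and function name are illustrative, and it relies on scipy.

    import numpy as np
    from scipy.optimize import linprog

    def has_fractional_perfect_matching(n_vertices, edges):
        # Feasibility LP: find edge weights f_e in [0, 1] such that, at
        # every vertex, the weights of incident edges sum to exactly 1.
        A = np.zeros((n_vertices, len(edges)))
        for j, (u, v) in enumerate(edges):
            A[u, j] = 1.0
            A[v, j] = 1.0
        res = linprog(c=np.zeros(len(edges)), A_eq=A, b_eq=np.ones(n_vertices),
                      bounds=[(0.0, 1.0)] * len(edges), method="highs")
        return res.status == 0  # 0 = solved, i.e. a feasible f exists

    # A triangle admits the Fractional Perfect Matching f = (1/2, 1/2, 1/2)
    # although it has no integral Perfect Matching:
    print(has_fractional_perfect_matching(3, [(0, 1), (1, 2), (0, 2)]))  # True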
Abstract: Let M be a single s-t network of parallel links with load-dependent latency functions shared by an infinite number of selfish users. This may yield a Nash equilibrium with unbounded Coordination Ratio [E. Koutsoupias, C. Papadimitriou, Worst-case equilibria, in: 16th Annual Symposium on Theoretical Aspects of Computer Science, STACS, vol. 1563, 1999, pp. 404-413; T. Roughgarden, E. Tardos, How bad is selfish routing? in: 41st IEEE Annual Symposium on Foundations of Computer Science, FOCS, 2000, pp. 93-102]. A Leader can decrease the coordination ratio by assigning flow αr on M, and then all Followers assign selfishly the (1-α)r remaining flow. This is a Stackelberg Scheduling Instance (M, r, α), 0 ≤ α ≤ 1. It was shown [T. Roughgarden, Stackelberg scheduling strategies, in: 33rd Annual Symposium on Theory of Computing, STOC, 2001, pp. 104-113] that it is weakly NP-hard to compute the optimal Leader's strategy. For any such network M we efficiently compute the minimum portion β_M of flow r > 0 needed by a Leader to induce M's optimum cost, as well as her optimal strategy. This shows that the optimal Leader's strategy on instances (M, r, α ≥ β_M) is in P. Unfortunately, Stackelberg routing in more general nets can be arbitrarily hard. Roughgarden presented a modification of Braess's Paradox graph, such that no strategy controlling αr flow can induce ≤ 1/α times the optimum cost. However, we show that our main result also applies to any s-t net G. We treat Braess's graph explicitly, as a convincing example. Finally, we extend this result to k commodities. A conference version of this paper has appeared in [A. Kaporis, P. Spirakis, The price of optimum in Stackelberg games on arbitrary single commodity networks and latency functions, in: 18th Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA, 2006, pp. 19-28]. Some preliminary results have also appeared as a technical report in [A.C. Kaporis, E. Politopoulou, P.G. Spirakis, The price of optimum in Stackelberg games, in: Electronic Colloquium on Computational Complexity, ECCC, (056), 2005].
Abstract: We study the problem of routing traffic through a congested network. We focus on the simplest case of a network consisting of m parallel links. We assume a collection of n network users; each user employs a mixed strategy, which is a probability distribution over links, to control the shipping of its own assigned traffic. Given a capacity for each link specifying the rate at which the link processes traffic, the objective is to route traffic so that the maximum (over all links) latency is minimized. We consider both uniform and arbitrary link capacities. How much decrease in global performance is necessary due to the absence of some central authority to regulate network traffic and implement an optimal assignment of traffic to links? We investigate this fundamental question in the context of Nash equilibria for such a system, where each network user selfishly routes its traffic only on those links available to it that minimize its expected latency cost, given the network congestion caused by the other users. We use the Coordination Ratio, originally defined by Koutsoupias and Papadimitriou, as a measure of the cost of lack of coordination among the users; roughly speaking, the Coordination Ratio is the ratio of the expectation of the maximum (over all links) latency in the worst possible Nash equilibrium, over the least possible maximum latency had global regulation been available. Our chief instrument is a set of combinatorial Minimum Expected Latency Cost Equations, one per user, that characterize the Nash equilibria of this system. These are linear equations in the minimum expected latency costs, involving the user traffics, the link capacities, and the routing pattern determined by the mixed strategies. In turn, we solve these equations in the case of fully mixed strategies, where each user assigns its traffic with a strictly positive probability to every link, to derive the first existence and uniqueness results for fully mixed Nash equilibria in this setting. Through a thorough analysis and characterization of fully mixed Nash equilibria, we obtain tight upper bounds of no worse than O(ln n/ln ln n) on the Coordination Ratio for (i) the case of uniform capacities and arbitrary traffics and (ii) the case of arbitrary capacities and identical traffics.
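In symbols, matching the rough description above (the notation is ours):

\[
\mathrm{CR} \;=\; \frac{\displaystyle \max_{\mathbf{P}\ \text{Nash}} \ \mathbb{E}\Big[\max_{1 \le \ell \le m} \Lambda_\ell(\mathbf{P})\Big]}{\displaystyle \min_{\text{assignments}} \ \max_{1 \le \ell \le m} \Lambda_\ell},
\]

where Λ_ℓ denotes the latency through link ℓ, the numerator ranges over the Nash equilibria P of the instance, and the denominator is the best maximum latency achievable under global regulation.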
Abstract: In this paper we examine spectral properties of random intersection graphs when the number
of vertices is equal to the number of labels. We call this class symmetric random intersection graphs.
We examine symmetric random intersection graphs when the probability that a vertex selects a label
is close to the connectivity threshold. In particular, we examine the size of the second eigenvalue of
the transition matrix corresponding to the Markov Chain that describes a random walk on an instance
of the symmetric random intersection graph Gn,n,p. We show that with high probability the second
eigenvalue is upper bounded by some constant strictly smaller than 1.
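To make the bounded quantity concrete, the second eigenvalue of the walk's transition matrix P = D^{-1}A can be inspected numerically for a sampled instance. This is our illustrative check, not the paper's proof technique, and it assumes a graph with no isolated vertices:

    import numpy as np

    def second_largest_eigenvalue(adj):
        # adj: symmetric 0/1 adjacency matrix with no isolated vertices.
        deg = adj.sum(axis=1)
        P = adj / deg[:, None]          # random-walk transition matrix
        lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
        return lam[1]                   # the quantity bounded away from 1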
Abstract: We propose and evaluate the performance of a new MAC-layer protocol for mobile ad hoc networks, called the Slow Start Power Controlled (abbreviated SSPC) protocol. SSPC improves on IEEE 802.11 by using power control for the RTS/CTS and DATA frame transmissions, so as to reduce energy consumption and increase network throughput and lifetime. In our scheme the transmission power used for the RTS frames is not constant, but follows a slow start principle. The CTS frames, which are sent at maximum transmission power, prevent the neighbouring nodes from transmitting their DATA frames at power levels higher than a computed threshold, while allowing them to transmit at power levels less than that threshold. Reduced energy consumption is achieved by adjusting the node transmission power to the minimum required value for reliable reception at the receiving node, while increase in network throughput is achieved by allowing more transmissions to take place simultaneously. The slow start principle used for calculating the appropriate DATA frames transmission power and the possibility of more simultaneous collision-free transmissions differentiate the SSPC protocol from the other MAC solutions proposed for IEEE 802.11. Simulation results indicate that the SSPC protocol achieves a significant reduction in power consumption, average packet delay and frequency of RTS frame collisions, and a significant increase in network throughput and received-to-sent packets ratio compared to IEEE 802.11 protocol.
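The slow-start power ramp for RTS frames can be sketched as follows, assuming a multiplicative schedule; the exact schedule, names and factor are our illustrative assumptions, not the protocol's specified values:

    def rts_power_levels(p_min, p_max, factor=2.0):
        # Slow-start principle: begin the RTS at low power and escalate
        # multiplicatively until a CTS arrives or p_max is reached.
        p = p_min
        while p <= p_max:
            yield p
            p *= factor

    # The sender tries each level in turn and stops at the first CTS; the
    # handshake then fixes the minimum DATA power for reliable reception.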
Abstract: We review some recent advances for solving core algorithmic problems encountered in public
transportation systems. We show that efficient algorithms can make a great difference both in
efficiency and in optimality, thus contributing significantly to improving the quality and
service-efficiency of public transportation systems.
Abstract: In this work, we study the combinatorial structure and the
computational complexity of Nash equilibria for a certain game that
models selfish routing over a network consisting of m parallel links. We
assume a collection of n users, each employing a mixed strategy, which
is a probability distribution over links, to control the routing of its own
assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic
on those links that minimize its expected latency cost, given the network
congestion caused by the other users. The social cost of a Nash equilibrium
is the expectation, over all random choices of the users, of the
maximum, over all links, latency through a link.
We embark on a systematic study of several algorithmic problems related
to the computation of Nash equilibria for the selfish routing game we consider.
In a nutshell, these problems relate to deciding the existence of a
Nash equilibrium, constructing a Nash equilibrium with given support
characteristics, constructing the worst Nash equilibrium (the one with
maximum social cost), constructing the best Nash equilibrium (the one
with minimum social cost), or computing the social cost of a (given) Nash
equilibrium. Our work provides a comprehensive collection of efficient algorithms,
hardness results (both as NP-hardness and #P-completeness
results), and structural results for these algorithmic problems. Our results
span and contrast a wide range of assumptions on the syntax of the
Nash equilibria and on the parameters of the system.
Abstract: The problem of determining the unsatisfiability threshold for random 3-SAT formulas consists in finding the clause-to-variable
ratio that marks the experimentally observed abrupt change from almost surely satisfiable formulas to almost surely unsatisfiable. Up
to now, increasingly better lower and upper bounds on the actual threshold value have been rigorously established. In this paper,
we consider the problem of bounding the threshold value from above using methods that, we believe, are of interest in their own
right. More specifically, we show how the method of local maximum satisfying truth assignments can be combined with results for
the occupancy problem in schemes of random allocation of balls into bins in order to achieve an upper bound for the unsatisfiability
threshold less than 4.571. In order to obtain this value, we establish a bound on the q-binomial coefficients (a generalization of the
binomial coefficients). No such bound was previously known, despite the extensive literature on q-binomial coefficients. Finally,
to prove our result we had to establish certain relations among the conditional probabilities of an event in various probabilistic
models for random formulas. It turned out that these relations were considerably harder to prove than the corresponding ones for
unconditional probabilities, which were previously known.
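For reference, the q-binomial (Gaussian) coefficients in question are defined, for 0 ≤ k ≤ n, by

\[
\binom{n}{k}_{q} \;=\; \prod_{i=1}^{k} \frac{1 - q^{\,n-k+i}}{1 - q^{\,i}},
\]

and they reduce to the ordinary binomial coefficients in the limit q → 1.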
Abstract: We study the load balancing problem in the context of a set of clients, each wishing to run a job on a server selected among a subset of permissible servers for the particular client. We consider two different scenarios. In selfish load balancing, each client is selfish in the sense that it chooses to run its job on the server, among its permissible servers, having the smallest latency given the assignments of the jobs of other clients to servers. In online load balancing, clients appear online and, when a client appears, it has to make an irrevocable decision and assign its job to one of its permissible servers. Here, we assume that the clients aim to optimize some global criterion but in an online fashion. A natural local optimization criterion that can be used by each client when making its decision is to assign its job to the server that gives the minimum increase of the global objective. This gives rise to greedy online solutions. The aim of this paper is to determine how much the quality of load balancing is affected by selfishness and greediness.
We characterize almost completely the impact of selfishness and greediness in load balancing by presenting new and improved, tight or almost tight bounds on the price of anarchy and price of stability of selfish load balancing as well as on the competitiveness of the greedy algorithm for online load balancing when the objective is to minimize the total latency of all clients on servers with linear latency functions.
Abstract: We study the load balancing problem in the context of a set of clients each
wishing to run a job on a server selected among a subset of permissible servers for
the particular client. We consider two different scenarios. In selfish load balancing,
each client is selfish in the sense that it chooses, among its permissible servers, to
run its job on the server having the smallest latency given the assignments of the
jobs of other clients to servers. In online load balancing, clients appear online and,
when a client appears, it has to make an irrevocable decision and assign its job to
one of its permissible servers. Here, we assume that the clients aim to optimize some
global criterion but in an online fashion. A natural local optimization criterion that
can be used by each client when making its decision is to assign its job to that server that gives the minimum increase of the global objective. This gives rise to greedy
online solutions. The aim of this paper is to determine how much the quality of load
balancing is affected by selfishness and greediness.
We characterize almost completely the impact of selfishness and greediness in
load balancing by presenting new and improved, tight or almost tight bounds on the
price of anarchy of selfish load balancing as well as on the competitiveness of the
greedy algorithm for online load balancing when the objective is to minimize the
total latency of all clients on servers with linear latency functions. In addition, we
prove a tight upper bound on the price of stability of linear congestion games.
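Both abstracts above revolve around the same greedy rule, so a minimal sketch may help. It assumes unit-weight jobs and linear latencies ℓ_s(x) = a_s · x; the encodings and names are ours:

    def greedy_online(jobs, coeff, permissible):
        # coeff[s]: slope a_s of server s's linear latency a_s * x.
        # permissible[j]: the servers allowed for job j (unit weights).
        load = {s: 0 for s in coeff}
        assignment = {}
        for j in jobs:
            # A unit job joining server s raises the total latency by
            # a_s * ((n_s + 1)^2 - n_s^2) = a_s * (2 * n_s + 1).
            s = min(permissible[j], key=lambda t: coeff[t] * (2 * load[t] + 1))
            load[s] += 1
            assignment[j] = s
        return assignment

The price-of-anarchy and competitiveness bounds in these papers quantify how far such locally optimal choices can drift from the global optimum.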
Abstract: In this paper, the impact of burstification delay on TCP
traffic statistics is presented, as well as a new assembly scheme that uses
the flow window size as the threshold criterion. It is shown that short assembly
times are ideally suited to sources with small congestion windows,
allowing for a speed-up in their transmission.
times do not yield any throughput gain, despite the large number of
segments per burst transmitted, but result in a low throughput variation, and
thus greater fairness among the individual flows. To this end, in
this paper, we propose a new burst assembly scheme that dynamically
assigns flows to different assembly queues with different assembly timers,
based on their instantaneous window size. Results show that the proposed scheme
with different timers provides a higher average throughput together with a
smaller variance, which is a good compromise for bandwidth dimensioning.
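A minimal sketch of the dynamic queue assignment, with illustrative thresholds and timers (the paper's concrete values are not reproduced here):

    def assembly_timer_for(cwnd, thresholds=(4, 16), timers_ms=(0.1, 1.0, 5.0)):
        # Flows with small congestion windows get queues with short
        # assembly timers (quick burst release); larger windows get
        # progressively longer timers. All values are illustrative.
        for threshold, timer in zip(thresholds, timers_ms):
            if cwnd <= threshold:
                return timer
        return timers_ms[-1]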
Abstract: The peer-to-peer computing paradigm is an intriguing alternative to Google-style search
engines for querying and ranking Web content. In a network with many thousands or
millions of peers the storage and access load requirements per peer are much lighter
than for a centralized Google-like server farm; thus more powerful techniques from information
retrieval, statistical learning, computational linguistics, and ontological reasoning
can be employed on each peer's local search engine for boosting the quality
of search results. In addition, peers can dynamically collaborate on advanced and particularly
difficult queries. Moreover, a peer-to-peer setting is ideally suited to capture
local user behavior, like query logs and click streams, and disseminate and aggregate
this information in the network, at the discretion of the corresponding user, in order to
incorporate richer cognitive models.
This paper gives an overview of ongoing work in the EU Integrated Project DELIS
that aims to develop foundations for a peer-to-peer search engine with Google-or-better
scale, functionality, and quality, which will operate in a completely decentralized and
self-organizing manner. The paper presents the architecture of such a system and the
Minerva prototype testbed, and it discusses various core pieces of the approach: efficient
execution of top-k ranking queries, strategies for query routing when a search request
needs to be forwarded to other peers, maintaining a self-organizing semantic overlay
network, and exploiting and coping with user and community behavior.
Abstract: In this work we present a new simulation toolkit that we call TRAILS (Toolkit for Realism and Adaptivity In Large-scale Simulations), which extends the ns-2 simulator by adding several important functionalities and optimizing certain
critical simulator operations. The added features focus on providing the user with the necessary tools to better study wireless networks of high dynamics; in particular, to implement advanced mobility patterns, obstacle presence and disaster scenarios, and failure injection. These scenarios and patterns can dynamically change throughout the execution of the simulation based on network-related parameters. Moreover, we define a set of utilities that facilitate the use of ns-2 by providing advanced statistics and easy-to-use logging mechanisms. This functionality is implemented in a simple and flexible architecture that follows design patterns and object-oriented and generic programming principles, maintaining a proper balance between reusability, extensibility and ease of use. We evaluate the performance of TRAILS and show that it offers significant speed-ups (at least 4 times faster) in the execution time of ns-2 in certain important, common wireless settings. Our results also show that this is achieved with minimum overhead in terms of memory usage.
Abstract: Digital optical logic circuits capable of performing bit-wise signal processing are critical building blocks for the realization of future high-speed packet-switched networks. In this paper, we present recent advances in all-optical processing circuits and examine the potential of their integration into a system environment. Building on this concept, we demonstrate serial all-optical Boolean AND/XOR logic at 20 Gb/s and a novel all-optical packet clock recovery circuit, with low capturing time, suitable for burst-mode traffic. The circuits use the semiconductor-based ultrafast nonlinear interferometer (UNI) as the nonlinear switching element. We also present the integration of these circuits in a more complex unit that performs header and payload separation from short synchronous data packets at 10 Gb/s. Finally, we discuss a method to realize a novel packet scheduling switch architecture, which guarantees lossless communication for specific traffic burstiness constraints, using these logic units.
Abstract: In this work we add a training phase to an Impairment Aware Routing and Wavelength Assignment (IA-RWA) algorithm so as to improve its performance. The initial IA-RWA algorithm is a multi-parametric algorithm where a vector of physical impairment parameters is assigned to each link, from which the impairment vectors of candidate lightpaths are calculated. The important issue here is how to combine these impairment parameters into a scalar that reflects the true transmission quality of a path. The training phase of the proposed IA-RWA algorithm is based on an optimization approach, called Particle Swarm Optimization (PSO), inspired by animal social behavior. The training phase gives the algorithm the ability to account for the physical impairments even though the optical layer is seen as a black box. Our simulation studies show that the performance of the proposed scheme is close to that of algorithms that have explicit knowledge of the optical layer and the physical impairments.
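A generic PSO loop of the kind such a training phase could use is sketched below; it would tune the weight vector that collapses a lightpath's impairment vector into a scalar quality estimate. The cost function, hyperparameters and names are illustrative assumptions, not the paper's settings:

    import numpy as np

    def pso_minimize(cost, dim, n_particles=20, iters=100,
                     inertia=0.7, c1=1.5, c2=1.5, seed=0):
        # Generic PSO: each particle keeps its personal best; the swarm
        # shares a global best; velocities blend inertia, a pull toward
        # the personal best and a pull toward the global best.
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pcost = np.array([cost(p) for p in x])
        gbest = pbest[pcost.argmin()].copy()
        for _ in range(iters):
            r1 = rng.random(x.shape)
            r2 = rng.random(x.shape)
            v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            c = np.array([cost(p) for p in x])
            better = c < pcost
            pbest[better] = x[better]
            pcost[better] = c[better]
            gbest = pbest[pcost.argmin()].copy()
        return gbest

    # Here cost(w) would, for example, penalize mismatch between the
    # scalarized impairment estimate of training lightpaths and their
    # measured transmission quality.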
Abstract: Recent activity in the field of Internet-of-Things experimentation has focused on the federation of discrete testbeds, thus placing less effort on the integration of other related technologies, such as smartphones; also, while it is gradually moving to more application-oriented paths, such as urban settings, it has largely not dealt with applications having social networking features. We argue here that current IoT infrastructure, testbeds and related software technologies should be used in such a context, capturing real-world human mobility and social networking interactions, for use in evaluating and fine-tuning realistic mobility models and designing human-centric applications. We discuss a system for producing traces for a new generation of human-centric applications, utilizing technologies such as Bluetooth and focusing on human interactions. We describe the architecture for this system and the respective implementation details, presenting two distinct deployments: one in an office environment and another in an exhibition/conference event (FET'11, The European Future Technologies Conference and Exhibition) with 103 active participants combined, thus covering two popular scenarios for human-centric applications. Our system provides online, almost real-time, feedback and statistics, and its implementation allows for rapid and robust deployment, utilizing mainstream technologies and components.
Abstract: Experimentally driven research for wireless sensor networks is invaluable to provide benchmarking and comparison of new ideas. An increasingly common tool in support of this is a testbed composed of real hardware devices, which increases the realism of evaluation. However, due to hardware costs the size and heterogeneity of these testbeds are usually limited. In addition, a testbed typically has a relatively static configuration in terms of its network topology and its software support infrastructure, which limits the utility of that testbed to specific case-studies. We propose a novel approach that can be used to (i) interconnect a large number of small testbeds to provide a federated testbed of very large size, (ii) support the interconnection of heterogeneous hardware into a single testbed, and (iii) virtualise the physical testbed topology and thus minimise the need to relocate devices. We present the most important design issues of our approach and evaluate its performance. Our results indicate that testbed virtualisation can be achieved with high efficiency and without hindering the realism of experiments.
Abstract: We study computationally hard combinatorial problems arising from the important engineering question of how to maximize the number of connections that can be simultaneously served in a WDM optical network. In such networks, WDM technology can satisfy a set of connections by computing a route and assigning a wavelength to each connection so that no two connections routed through the same fiber are assigned the same wavelength. Each fiber supports a limited number w of wavelengths, and in order to fully exploit the parallelism provided by the technology, one should select a set of connections of maximum cardinality which can be satisfied using the available wavelengths. This is known as the maximum routing and path coloring problem (maxRPC).
Our main contribution is a general analysis method for a class of iterative algorithms for a more general coloring problem. A lower bound on the benefit of such an algorithm in terms of the optimal benefit and the number of available wavelengths is given by a benefit-revealing linear program. We apply this method to maxRPC in both undirected and bidirected rings to obtain bounds on the approximability of several algorithms. Our results also apply to the problem maxPC where paths instead of connections are given as part of the input. We also study the profit version of maxPC in rings where each path has a profit and the objective is to satisfy a set of paths of maximum total profit.
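For intuition about the coloring constraint only (this is not the paper's iterative algorithms or their benefit-revealing LP analysis), a first-fit heuristic for pre-routed paths might look as follows; the encodings are ours:

    def first_fit_path_coloring(paths, w):
        # paths: list of (path_id, edge_set); w wavelengths numbered 0..w-1.
        # Accept each path on the first wavelength that is free on all of
        # its edges; paths with no such wavelength are rejected.
        used = set()            # pairs (edge, wavelength) already taken
        accepted = {}
        for pid, edges in paths:
            for lam in range(w):
                if all((e, lam) not in used for e in edges):
                    used.update((e, lam) for e in edges)
                    accepted[pid] = lam
                    break
        return accepted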
Abstract: We demonstrate a compact, all-optical, packet clock and data recovery circuit that uses integrated MZIs. The clock is acquired within 2 bits irrespective of packet length and phase alignment. Error-free operation is demonstrated at 10 Gb/s.
Abstract: Numerous research efforts have produced a large number of algorithms and mechanisms for web proxy caches. In order to build powerful web proxies and understand their performance, one must be able to appreciate the impact and significance of earlier contributions and how they can be integrated. To do this we employ a cache replacement algorithm, 'CSP', which integrates key knowledge from previous work. CSP utilizes the communication Cost to fetch web objects, the objects' Sizes, their Popularities, an auxiliary cache and a cache admission control algorithm. We study the impact of these components with respect to hit ratio, latency, and bandwidth requirements. Our results show that there are clear performance gains when utilizing the communication cost, the popularity of objects, and the auxiliary cache. In contrast, the size of objects and the admission controller have a negligible performance impact. Our major conclusions, going against those in related work, are that (i) LRU is preferable to CSP for important parameter values, (ii) accounting for the objects' sizes does not improve latency and/or bandwidth requirements, and (iii) the collaboration of nearby proxies is not very beneficial. Based on these results, we chart the problem solution space, identifying which algorithm is preferable and under which conditions. Finally, we develop a dynamic replacement algorithm that continuously utilizes the best algorithm as the problem-parameter values (e.g., the access distributions) change with time.
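One plausible way to combine Cost, Size and Popularity into a single replacement score is value density, i.e. refetch cost times popularity per byte. This is our illustrative reading, not the paper's exact CSP formula:

    def csp_eviction_order(objects):
        # objects: dicts with 'cost', 'popularity' and 'size' keys
        # (illustrative). Objects that are cheap to refetch, unpopular,
        # or large for the value they carry are evicted first.
        return sorted(objects, key=lambda o: o['cost'] * o['popularity'] / o['size'])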
Abstract: We present the basic concepts behind the design and implementation of WebDust, a peer-to-peer platform for organizing,
monitoring and controlling wireless sensor networks, along with a discussion of its application to an actual testbed.
Our software architecture provides a range of services that allow the creation of customized applications with relatively low
implementation overhead. WebDust aims to allow heterogeneous components to operate in the same sensor network, and
to give the ability to manage and control large numbers of such networks, possibly on a global scale. We also give insight into
several applications that can be implemented using our platform, and a description of our current testbed.
Abstract: A Nash equilibrium of a routing network represents a stable state of the network where no user finds it beneficial to unilaterally deviate from its routing strategy. In this work, we investigate the structure of such equilibria within the context of a certain game that models selfish routing for a set of n users each shipping its traffic over a network consisting of m parallel links. In particular, we are interested in identifying the worst-case Nash equilibrium – the one that maximizes social cost. Worst-case Nash equilibria were first introduced and studied in the pioneering work of Koutsoupias and Papadimitriou [9].
More specifically, we continue the study of the Conjecture of the Fully Mixed Nash Equilibrium, henceforth abbreviated as FMNE Conjecture, which asserts that the fully mixed Nash equilibrium, when existing, is the worst-case Nash equilibrium. (In the fully mixed Nash equilibrium, the mixed strategy of each user assigns (strictly) positive probability to every link.) We report substantial progress towards identifying the validity, methodologies to establish, and limitations of, the FMNE Conjecture.
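With SC denoting social cost, w the vector of user traffics, F the fully mixed Nash equilibrium and P ranging over the Nash equilibria of the instance, the conjecture reads:

\[
\mathsf{SC}(\mathbf{w}, \mathbf{F}) \;\ge\; \mathsf{SC}(\mathbf{w}, \mathbf{P})
\quad \text{for every Nash equilibrium } \mathbf{P}.
\]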
Abstract: Data propagation in wireless sensor networks can be performed either by hop-by-hop single transmissions or by multi-path broadcast of data. Although several energy-aware MAC layer protocols exist that operate very well in the case of single point-to-point transmissions, none is especially designed and suitable for multiple broadcast transmissions. In this paper we propose a family of new protocols suitable for multi-path broadcast of data, and show, through a detailed and extended simulation evaluation, that our parameter-based protocols significantly reduce the number of collisions and thus increase the rate of successful message delivery (to above 90%) by trading off the average propagation delay. At the same time, our protocols are shown to be very energy efficient, in terms of the average energy dissipation per delivered message.